Nothing is more important in IT than the timely delivery of working software safely into production.
Historically this has been harder than it should have been because the Dev team and the Ops team have been focused on different and competing objectives. The last decade has seen astonishing growth in the complexity of releases and the consequences of failure. As each market, technology and methodology shift has occurred, it has become more critical for Dev and Ops to execute software changes flawlessly.
Business demands both more change (volume and velocity) and risk elimination (secure and compliant). Dev responds to the time-to-market pressure with more iterative approaches to software delivery and this overwhelms the Change and Release team who, in turn, are forced to relax controls or face significant increases in workloads. Ops mitigates the risk inherent in continual change with more constraints and deeper scrutiny and the pendulum swings back obliging Change and Release to be more governing and restrictive.
The only way out of this pendulum ping-pong is to change the paradigm altogether. Dev, often portrayed as agents of change relentlessly introducing instability, and Ops, caricatured as guardians of the inviolate constancy of systems, are in reality teams of highly trained and experienced professionals with the same goal in mind: delivering the software systems the business needs, safely.
That realization is where DevOps begins: making Dev and Ops jointly responsible for the changes the business needs and for the system stability the business demands. This means greater focus on quality, reliability and external impact for Dev. It means optimizing controls, automating activities and streamlining processes for Ops. Dev is now invested in system stability, and Ops makes deployment velocity possible.
For DevOps purists this can mean even greater, and more radical, changes to the fabric of how applications are designed, a complete rethink of the infrastructure that supports development, delivery and execution, and wholesale organizational restructuring from Biz to Dev to Ops and DevOps. Microservices, long cherished at Amazon.com, free app-functionality from limited use and enable constant re-purposing and re-invention; they change the very definition of an application. Lightweight, throwaway tools, frequently open source software (OSS), loosely coupled together, allow creativity and experimentation to thrive and velocity to increase. Project-teams morph into product-teams that embed themselves with the business and own the outcomes from ideation to implementation and even the operational running of the product.
In the new economy, this kind of digital transformation is key to remaining competitive and it is easy to see why organizations like Facebook, Salesforce.com and Google have reinvented their development and delivery archetype. Ways of working in these organizations are also shifting to MicroSourcing and contingent employees and this is driving the need for more automation and collaboration tools to manage this highly dynamic, loosely federated workforce.
But what about the most highly regulated, large enterprises (HRLEs) in the world? How do they pivot to a DevOps culture?
Enterprise DevOps is different
Forrester analysts Kurt Bittner and Amy DeMartine wrote in SD Times recently, “[We] found that DevOps adoption varies greatly by industry and application type due to varying customer sophistication, regulatory constraints, and competitor-savvy factors.”
Their research aligns exactly with my experience. I often see DevOps adoption led by those enterprises that are both time-to-market driven and heavily regulated. Others aren't slouching either and are ramping up fast behind them. Even in the public sector, and other traditionally slow-to-evolve industries, DevOps adoption is happening and is on every CIO's agenda.
| Adoption Rate | Fastest adoption | Faster adoption | Fast adoption | Adoption | Slower adoption |
|---|---|---|---|---|---|
| Systems of Innovation | Manufacturing, Pharma, Services | High-Tech Manufacturing, Financial Services | Media, Entertainment, Healthcare | Energy, Mining, Utilities, Telcos | |
| Systems of Differentiation | Manufacturing, Pharma, Services, Financial Services | High-Tech Manufacturing, Retail, Wholesale | Utilities, Telcos, Government | Energy, Mining, Healthcare | |
| Systems of Record | Manufacturing, Pharma, Services | High-Tech Manufacturing, Financial Services | Retail, Wholesale, Utilities, Telcos, Government | Energy, Mining, Healthcare | |
Highly regulated large enterprises (HRLEs) in bold.
Highly regulated large enterprises (HRLEs) are faced with unique constraints that make adoption of pure DevOps practices difficult. HRLEs face exceptional challenges of scale, application inter-dependencies, multi-modal IT, disparate and dispersed teams, resource pressures, modern versus legacy application architectures and compliance concerns.
Separation of Duties
Separation of Duties (SoD, also called "Segregation of Duties") is a time-honored principle requiring that individuals with a particular responsibility in business not be the ones who review and approve the actions associated with that responsibility. As some have said, the poachers can't be the gamekeepers. For many in IT, the first time they encountered this requirement was in the wake of the Sarbanes-Oxley legislation, which followed the financial scandals of 2000 to 2003.
Increasingly, internal auditors are demanding SoD in IT processes. Risk and Compliance conferences frequently have IT Security Risk Assessment as a leading topic, and they are advising CISOs and Auditors to look closely at IT processes and practices. Hard and fast rules are implemented because of the fear of failure (such as the Knight Capital disaster). Mandates are common: developers are forbidden from deploying code to production; changes must be approved by an independent board before they are implemented; release engineers have no access to source code. And rightly so.
But SoD makes it hard to create multi-disciplinary DevOps teams of Developers and Operations personnel with shared responsibility and shared ownership of code deployment as desired by pure DevOps. Instead what HRLEs seek is DevOps infrastructure that requires Dev and Ops to collaborate and provides them with common, shared data about release activities. They need tools with full traceability and auditability so that, in the event of an outage, root cause analysis procedures can determine the point of failure and changes can be made to the processes and automation to prevent further occurrences.
As organizations move towards "generative cultures", where very high standards are demanded, we see collaboration as a key corporate capability. These organizations look to pre-emptively prevent 'bad situations' from occurring by making security, penetration and load testing part of core development activities. Traceability and audit compliance must also be enforced from the start, not treated as a later 'bolt-on' solution to address a compliance problem.
Specialization and Segregation
Similarly, HRLEs frequently optimize their resources to an extreme degree and compartmentalize skillsets. The Database Administrator is the specialist who makes database changes as needed by an application release. Generalists in the Dev, Ops or DevOps team do not have this permission. Indeed, there is often a physical (and technical) segregation of the production environment from the development environment. Moving code from one to the other requires a special, secure, sometimes secret, approved transfer of artifacts. Frequently this arcane activity is conducted in the so-called "million-dollar meeting", where stakeholders from Biz, Dev and Ops gather, two or three times a week, to vet and approve the deployment of code. All of this makes joint ownership and collaborative problem solving harder.
These are real issues for DevOps purists and they do affect how DevOps is implemented in HRLEs, but they do not prevent highly successful DevOps initiatives from occurring.
The idea of the product-team owning every stage of an application lifecycle is admirable, but not optimal for HRLEs. The mass of compliance requirements that have nothing to do with the application, yet still need to be demonstrably delivered, requires exceptional specialization. The underlying architecture of n-tier applications is complex because of built-in security layers, historic compatibility issues and the needs of foreign governments, and it too needs expert enterprise application architects to manage and evolve it.
Finding a single resource with a skillset that can address all of these challenges is very difficult and very expensive. The DevOps goal of a single cross-functional team is difficult to put into practice because cross-skilled, trained and security-cleared resources simply don't exist in vast numbers in the job market. Replacing the existing team and their system knowledge, or updating your team's complete skillset overnight, are not realistic choices.
Even the simple need to be able to move team members around to meet project resource needs requires that developers, quality engineers and build experts do not become too tied to one technology and stack of tools.
Security and stability
As the pressure to release apps at greater velocity grows, so does the freedom and empowerment experienced by development teams. At the same time, there is a risk of opening the floodgates (or, more accurately, a backdoor) that allows the criminal fraternity access to internal systems, customer-facing apps and, in some senses worst of all, sensitive customer data. APIs are speeding up integration between suppliers, partners and various platforms, enabling rapid application development. Adoption of APIs can also bring risks when there is no true API management that caters for volume, puts in place the right access controls, and enables an API to be sunsetted when no longer needed, rather than leaving an inadvisable entry point. On a similar note, the rush to explore the value of containers has to be balanced by security considerations and safeguards.
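The sunsetting idea above can be sketched in code. This is a minimal, hypothetical illustration, not a real API-management product: the registry, endpoint paths and field names are all assumptions, and a production gateway would enforce this at the network edge.

```python
from datetime import date

# Hypothetical API registry: each managed endpoint carries an owner and an
# optional sunset date after which calls should be rejected.
API_REGISTRY = {
    "/v1/accounts": {"owner": "payments-team", "sunset": None},
    "/v1/legacy-report": {"owner": "reporting-team", "sunset": date(2016, 6, 30)},
}

def is_callable(endpoint: str, today: date) -> bool:
    """Reject unknown endpoints and any endpoint past its sunset date."""
    entry = API_REGISTRY.get(endpoint)
    if entry is None:
        # An unmanaged endpoint is exactly the "inadvisable entry point"
        # described above, so refuse it outright.
        return False
    sunset = entry["sunset"]
    return sunset is None or today <= sunset
```

With a registry like this, retiring an API is a data change rather than a code change, which keeps the access-control decision auditable.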
There is no shortage of software companies eager to persuade enterprises that their technology is essential for DevOps success. Many of these address the "Automation" pillar of the DevOps credo, and some can save vast time and effort versus manual approaches. However, the fact that many tools are open source and offer free trials can lead to a proliferation in use without any coordination or wider consideration.
Legacy systems are legendary
Though the technology is five decades old, it still processes more transactions every day than the Internet does. The mainframe, for almost every HRLE, is the transaction-processing workhorse of the business. The systems that run on it still contain code written decades before the first Wi-Fi signal was ever broadcast. Billions of lines of code executing and keeping the business running cannot be replaced by Microservices overnight. Indeed, the value of doing so rarely exceeds the cost, and the risk introduced by such a massive overhaul is too great. Many organizations are surrounding their legendary systems with newer technology in the hope that, one day, they will have replaced the old system.
DevOps has been a part of the mainframe landscape since its inception. Not called DevOps, but endowed with all the DevOps principles, the approaches and culture have long been about shared ownership of release success. Today, as the resources who support the mainframe get younger, the willingness to adopt newer approaches and methodologies is increasing and platform-silo thinking is being eroded.
So the move to Microservices is reasonable and practical for a business a few years old, but for a business a few centuries old it is not so practical. There are many mainframe applications designed with a SOA, but none that have fully embraced Microservices. HRLEs are transforming their mainframe solutions to meet the needs of the non-mainframe teams who have already transformed, and we are seeing the encirclement of legendary systems with modern applications that enhance and replace some of the original functionality. Encirclement to replace the older code is not an end in itself; adding Microservices around the older apps is a way to reach the business objective sooner.
You can and should do DevOps with legacy systems. Enterprises are not "pure" and will never be "pure", but they can always go faster, deliver better quality and be more predictable and reliable. You can take any mainframe application, apply DevOps principles, and still improve its delivery rate dramatically.
Enterprise DevOps succeeds

DevOps is often summarized by the five CALMS pillars:
- Culture – Own the change to drive collaboration and communication
- Automation – Take manual steps out of your value chain
- Lean – Use lean principles to enable higher cycle frequency
- Metrics – Measure everything and use data to refine cycles
- Sharing – Share experiences, successful or not, to enable others to learn
Our view is that CALMS should rather be ALMSC, in that it is very difficult to change a culture organically without first automating processes so that the human desire to do things the "old way" is circumvented.
Automation brings processes tangibly and visibly to life, enabling thoughtful re-evaluation and optimization of the processes, thus creating leaner, higher-velocity processes. Automation brings much-needed telemetry (records of every action: by whom, for whom, where, what, when, how and why), enabling detailed measurement of activities and the ability to visualize the data in dashboards shared by Dev, Ops and other stakeholders. As resources, priorities and methods evolve, everyone can see in their dashboards, in real time, the positive (or not) effect of those changes, enabling continuous improvement and allowing all participants to share in the outcomes. This, ultimately, leads to the lasting cultural change that is the hallmark of DevOps.
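The telemetry described above can be sketched as a structured record. This is an illustrative assumption, not a prescribed schema: the field names, the actor, and the change-ticket reference are all hypothetical.

```python
import json
from datetime import datetime, timezone

def audit_record(actor, action, target, on_behalf_of, environment, reason):
    """Build one telemetry record answering who, what, where, when,
    for whom and why for a single release activity."""
    return {
        "who": actor,                 # by whom
        "what": action,               # the action taken
        "where": environment,         # the environment acted on
        "when": datetime.now(timezone.utc).isoformat(),
        "for_whom": on_behalf_of,     # the requesting stakeholder
        "why": reason,                # e.g. a change-ticket reference
        "target": target,             # the artifact or system changed
    }

# Hypothetical example: a deployment action logged as structured JSON,
# ready for a shared Dev/Ops dashboard or audit query.
record = audit_record("jsmith", "deploy", "billing-service-2.4.1",
                      "release-mgmt", "production", "CHG-1234")
print(json.dumps(record))
```

Because every record carries the same fields, root cause analysis after an outage becomes a query over the data rather than a reconstruction from memory.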
It cannot be overstated how vital executive sponsorship is to effecting this kind of cultural transformation. Automation is also key to buttressing executives' continuing investment and commitment. As the automation spins off data on the improvements in quality, timeliness, completeness and efficiency being delivered, it reinforces the reasons the initiative was important and provides proof points on the ROI that initially justified the decision.
Note that you certainly don’t want to automate a bad process. Common sense review of process is required as a starting point. One should also be wary of falling into the “analysis paralysis” trap and over-thinking existing processes and trying to optimize them completely before automating them. It is far easier to modify and improve an automated process than a paper one.
In order to implement successful automation, the three types of processes (documented, undocumented and working practices) must be consolidated. This provides a great opportunity to apply common sense, simplifying and making lean where practicable.
For DevOps to succeed in HRLEs, automation is key. To deliver the collaboration and cooperation necessary to be effective, technology must provide access to the required data. With automation it is possible to achieve a level of openness and understanding of the current state far more easily than through interminable status meetings and conference calls.
As a minimum investment in technology to support your Enterprise DevOps initiative, consider these four technologies as your baseline starting point. Each addresses a key issue in DevOps; they can be implemented in any order and connected in any way as your DevOps practice matures. Start with the one that addresses your most important concerns as you begin your DevOps program.
- DevOps collaboration platform enabling clear visibility for Biz, Dev and Ops into what is happening. Sometimes called the "DevOps Pipeline", this capability connects the tool-chain flow of technologies that support release and deploy activities. Shared, common data must be readily accessible, showing the state of all project (and product) team activity from the onset of development to safe delivery to production. Dashboards, alerts, notifications and approvals must be integrated into a common DevOps infrastructure too.
- DevOps deployment automation enabling repeatable, predictable and reliable delivery of applications through the deployment pipeline and on into production. Automated support for Continuous Deployment initiatives is essential as it allows for reuse of common deployment techniques (back up the database, reset the server configuration, start the web server, etc.). Automated backout and recovery processes minimize downtime and guarantee services remain on-air. Build and release engineers spend more time on optimizing and tuning deployments than on writing throwaway scripts, resulting in ever-increasing performance, compliance and quality.
- DevOps secure artifact repository that represents the "single version of the truth." Ops has long been concerned with innumerable changes entering the production environment from too many different paths. Quality, reliability and transparency need to be the same irrespective of the size, complexity or origin of the changes. A single secure artifact repository is the essential first step. Dev teams deliver into the common repository, where a battery of standard tests is applied. This ensures a common minimum standard of quality is always maintained. From there the code is automatically moved through the lifecycle towards production (see #2). In ITIL terms this is sometimes called the Definitive Media Library (DML).
- DevOps infrastructure for Agile teams, in support of development teams using iterative methods of software creation and delivery. Continuous Integration is a key capability for Agile development teams. Version management (branching and merging) is essential, and it is critical to understand what code is in use in each part of the delivery pipelines. Work-item management, managing the backlog (epics and stories), has to be flawlessly performed to ensure Biz, Dev and Ops alignment. An enterprise-scaled solution that meets the needs of the Agile development and delivery teams is vital, but it must also meet the needs of the Enterprise stakeholders responsible for oversight.
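The deployment-automation and secure-repository ideas above can be sketched together in a few lines. This is a minimal illustration under stated assumptions: the checksum gate, the smoke test and the backout step are hypothetical stand-ins for a real pipeline's verified, audited equivalents.

```python
import hashlib

def checksum(data: bytes) -> str:
    """Fingerprint an artifact so it can be verified against the
    secure repository's record (the 'single version of the truth')."""
    return hashlib.sha256(data).hexdigest()

def smoke_test(target: dict) -> bool:
    # Placeholder health check; a real pipeline would probe the service.
    return target["running"] is not None

def deploy(artifact: bytes, expected_sha256: str, target: dict) -> bool:
    """Deploy only artifacts whose checksum matches the repository record;
    on any failure after that, restore the previously running version."""
    if checksum(artifact) != expected_sha256:
        raise ValueError("artifact does not match the repository record")
    previous = target.get("running")
    try:
        target["running"] = artifact      # stand-in for the real deploy steps
        if not smoke_test(target):        # post-deploy verification gate
            raise RuntimeError("smoke test failed")
        return True
    except Exception:
        target["running"] = previous      # automated backout
        return False
```

The design choice worth noting is that backout is part of the deploy function itself, not a separate runbook: the pipeline can never leave the environment in an unverified state.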
With technology as a starting point the DevOps migration can begin.
The goal of DevOps is the same irrespective of the environment in which it exists: lowering the risk of change through tools and culture. There are important differences, however, and it is important to understand them.
| Pure Agile teams | Variable speed IT |
|---|---|
| Multidisciplinary team members | Team maintains Separation of Duties (SoD) |
| Drawn primarily from Dev and Ops teams | Drawn primarily from Change and Release teams |
| Limited variability in platforms, technology, toolset | Wide variances in platforms, technology and toolsets |
| Generally collocated small teams | Generally geographically dispersed large teams |
| Frequent microsourcing and contingent workforce | Frequent offshore outsourcing |
| Light compliance culture | Tight compliance culture |
| Limited cross-project dependencies, Microservices | Complex cross-project dependencies |
| Experimental, A/B testing, fail-fast culture | Move-fast-without-breaking-things culture |
| Distributed ownership | Centralized, specialist culture |
| Team developing the app runs the app | Team developing the app kept separate from team executing the app |
| Make do with toolchain of loosely integrated open source solutions | Needs vendor-supported, secure, scalable infrastructure |
As Gartner shows in its Pace-Layered Architecture and Bimodal IT taxonomies, Enterprise DevOps has to be equipped to handle not just the high-speed express release trains from the Agile teams but also the relentless heavy-freight releases from the more traditionally organized teams. Enterprise DevOps needs its own infrastructure to support both low-touch incremental releases on demand and high-scrutiny releases planned months out in the calendar.
 Analyst group Altimeter defines Digital Transformation as “the realignment of, or new investment in, technology and business models to more effectively engage digital customers at every touch point in the customer experience lifecycle.”
 A shift from employees to on-demand independent providers and individuals with specialist skills and services
 Contingent employees are a provisional group of workers who work for an organization on a non-permanent basis.
 SD Times, Where’s the heat in DevOps, 2016-02-01
 Sources: Forrester and Gartner
 CISO, Chief Information Security Officer
 Forbes, 2012-08-12 – Knight Capital Trading Disaster Carries $440 Million Price Tag
 Generative organizations set high standards and constantly strive to exceed them. Failures are seen as a reason to improve, not to cast blame. Leaders know what is happening because employees tell them. Knowledge about current status is a shared goal because it enables teams to preempt and prevent errors. This is no Utopian goal though; everyone knows errors will occur, and the culture says that is OK as long as they are mitigated quickly, do not escalate and lessons are subsequently learned.
 Service Oriented Architecture
 Gene Kim, Damon Edwards, Jez Humble, John Willis et al.
 Originally 4 principles and known then as CAMS
 CD (Continuous Deployment) requires that application artifacts flow automatically from one environment to another on successful completion of required testing (automated or not)
 Pace-Layered Architecture classifies applications as Systems of Innovation, Systems of Differentiation and Systems of Record. Each differs in its base technologies, rate of change, and criticality to the business; this affects the investment they receive.
 BiModal IT suggests that IT has two speeds of development. For Systems of Innovation, use Mode 2, high-speed, iterative development. For Systems of Record, use Mode 1, slow speed, low risk development. HRLEs have Variable-Speed IT with many different development approaches as needed by the time-to-market, compliance, risk, technology, methodology and topology requirements.