Full Unemployment

Lamplighters: whither they?

Recently, on TechCrunch, an author posited that no child born from this point on will ever have a driver’s license. By 2032, 16 years from now, self-driving will be shunned (as smoking is today) and parents will insist that their kids go out with their friends in the auto-driving car. Self-driving might still be around, but insurance rates will be so high that cars you drive yourself will only be in the hands of extreme hobbyists and collectors.

And that is another change we will see: “self-driving” will be the term we use when we go rogue and try to navigate the highways ourselves, and “auto-driving” will be the safe, regular and routine way we move around the planet. Auto-buses, auto-trucks, auto-trains, auto-cargo ships and even auto-planes will become the preferred ways of getting around.

Vanity Fair just reported on the recent fatality that occurred in a Tesla Model S. It pointed out that, amid the hysteria about this tragic event, the data got lost. In the USA, Vanity Fair suggests, one road fatality occurs for every 100 million miles driven. Teslas in Autopilot mode had driven 130 million miles without incident until last week. In other words, one is safer in Autopilot mode. [Thanks to Jon Ziegler for the correction.]

This is the start of a series of posts on the future of our society that might affect you, probably will affect your children and will definitely impact your grandchildren. I want you to think, to extrapolate our current technology capabilities a little way out into the future.

A day is coming when we switch from full employment to a world where full unemployment becomes possible

Autonomous cars mean autonomous trucks and buses will be next. Autonomous trucks roll 24×7 without need of truck-stops (except for gas or electric charge) and without the need for truck drivers. There are 3.5 million truck drivers in the USA today. No truck-stops means no truck-stop chefs, waiters or clerks. Autonomous buses mean no bus drivers (all 665,000 of them). Autonomous taxis and auto-Ubers mean no drivers and no dispatchers. There are about 250,000 taxi drivers in the US and around 170,000 Uber drivers. That’s more than 4.5 million jobs that could disappear in the next decade. Add in train drivers, postal delivery drivers, ferry captains, harbor pilots, tram drivers, and fixed-wing and rotary-wing pilots.
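A quick back-of-the-envelope tally of the driver figures cited above, using only the numbers in the text (train, postal, ferry, tram and flight crews would push the total higher):

```python
# US driver jobs as cited in the text (approximate figures)
jobs = {
    "truck drivers": 3_500_000,
    "bus drivers": 665_000,
    "taxi drivers": 250_000,
    "Uber drivers": 170_000,
}

total = sum(jobs.values())
print(f"{total:,} jobs at risk")  # prints "4,585,000 jobs at risk"
```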

In the future transportation will be autonomous. It will be safer, faster and cheaper. Which means better for the transported but not for the transporter.

Autonomy will move into vehicles that operate in hostile environments. Forest fires fought with autonomous all-terrain fire-trucks, minerals mined with autonomous drilling and boring machines and hazardous waste cleared by monster machines with impregnable defences against the most noxious and toxic substances.

Not forgetting your Amazon package delivered by drone.

All of these jobs, vital and essential, are keeping hard-working men and women employed so they can feed, clothe and house their families. But they will all disappear. Gone the way of the hostler, the footman and the lamplighter.

As technology advances it frees us to go from humans doing things to become humans being things

So what will these millions of people do instead? If we plan now, we can prepare for the era of full unemployment (#FullUnemployment). We can devote our energies to contributing to society in other ways. And we have to find a way for contributions to be rewarded even when they increase societal rather than material value.

And that’s the topic of the next post.

Posted in Business and Technology, Personal growth | Tagged | 2 Comments

IT Peers Overspend

In March, Gartner released research showing that just over a third of development was Agile and almost half remained waterfall. The same survey noted that more than half of all organizations are doing some Agile development for some of their applications. In large enterprises this means development communities with disparate methodologies and, frequently, a diverse set of tools to support them. While some would consider standardized tooling a hobble on creativity, others would point to the advantages and flexibility a common infrastructure brings. So, is it time to take the choice of tools away from the development and delivery teams and put it in the hands of the corporate infrastructure team?

Depending on your place in the lifecycle, you need different capabilities from the tools that make up your development and delivery toolchain. Teams are always looking for better, cooler, technologies that enhance the way they develop and deliver and always want to be on the leading edge with resume-enhancing product names. What if your role in IT is more concerned with security and stability or cost-control and access rights? Your job responsibilities will guide what and how you procure your infrastructure solutions for development and delivery.

Letting each team procure its own infrastructure for software development and delivery is not the right choice for highly regulated, large enterprises. Here’s why:

  • It is more expensive in the long run
  • If it is vended software, there are multiple contracts to manage and few economies from bulk buying
  • Each instance of the software will need separate administration and support – who is going to keep track of all the instances, versions and patch levels?
  • The underlying processes, policies and procedures will vary from instance to instance, creating an expensive maintenance problem
  • Who will make sure the solution integrates with other tools in the toolchain and with tools used by different teams doing the same functions? Who will keep these integrations current?
  • If it is open source software (OSS), who will vouch for it being free from malicious code, backdoors and security vulnerabilities? Who will support it?
  • Developers will get stuck on projects because they only know one infrastructure toolset, making resource-leveling difficult
  • Project tracking and reporting will be complex, prone to inaccuracy and difficult to make universally visible, as tools will not follow common standards for measuring progress and defining milestones
  • Cross-project collaboration and code-sharing will be virtually impossible without constant meetings and vigilant team members tracking parallel development activity
  • Development velocity will be impaired because little automation will be possible, and what is possible will vary widely from project to project

I see, time and again, development teams that have “gone rogue” and developed homegrown scripts to hold together a collection of tools obtained from various sources to solve a point issue. No thought is given to automating the end-to-end SDLC, to the telemetry that Project Leaders, the PMO and other stakeholders need to do their jobs, to the provenance of the solution and the vulnerabilities it may carry, and certainly none to the long-term impact and cost.

As the Gartner report implies, organizations must look for the tooling they need to manage the disruption caused by the rapid rise in development and delivery volumes and velocity. We now realize that DevOps initiatives in highly regulated large enterprises succeed when automation is at the heart of the transformation. While the adoption of Agile is trending towards a plateau, DevOps adoption rates continue to accelerate. Much of that growth comes from traditional development and delivery teams looking to match their cadence with the ever-growing and demanding needs of the business.

This means that, as you build your toolchain, you must acquire solutions that meet all your corporate stakeholders’ needs and that support your diverse methodologies, technologies, topologies and geographies. Compliance, risk, access controls, data integrity, scalability and security all matter and must be factored in. Cross-tool, cross-platform and cross-function integration should be easy and obvious.

This does not mean finding one vendor to supply all your needs. No one vendor has the best-in-class solution for every part of the lifecycle. Nor is there any one vendor with deep domain experience across the entire lifecycle. Instead you should acquire the best solutions you can afford that support all your needs, and partner with the vendors to create a common, automated, end-to-end experience for everyone concerned with fast innovation at minimal risk.

Posted in Business and Technology | Leave a comment

Enterprise DevOps is different: here’s why

Nothing is more important in IT than the timely delivery of working software safely into production.

Historically this has been harder than it should have been because the Dev team and the Ops team have been focused on different and competing objectives. The last decade has seen astonishing growth in the complexity of releases and the consequences of failure. As each market, technology and methodology shift has occurred, it has become more critical for Dev and Ops to execute software changes flawlessly.

Business demands both more change (volume and velocity) and risk elimination (secure and compliant). Dev responds to the time-to-market pressure with more iterative approaches to software delivery and this overwhelms the Change and Release team who, in turn, are forced to relax controls or face significant increases in workloads. Ops mitigates the risk inherent in continual change with more constraints and deeper scrutiny and the pendulum swings back obliging Change and Release to be more governing and restrictive.

The only way out of this pendulum ping-pong is to change the paradigms altogether. Dev, often portrayed as agents of change relentlessly introducing instability, and Ops, caricatured as guardians of the inviolate constancy of systems, are, in reality, teams of highly trained and experienced professionals with the same goal in mind, delivering software systems, that business needs, safely.

That realization is where DevOps begins, making Dev and Ops jointly responsible for the changes the business needs and for the system stability the business demands. This means great focus on quality, reliability and external impact for Dev. It means optimizing controls, automating activities and streamlining processes for Ops. Dev is now invested in system stability and Ops makes deployment velocity possible.

For DevOps purists this can mean even greater, and more radical, changes to the fabric of how applications are designed, a complete rethink of the infrastructure that supports development, delivery and execution, and wholesale organizational restructuring from Biz to Dev to Ops and DevOps. Microservices, long cherished at Amazon.com, free app functionality from limited use and enable constant re-purposing and re-invention; they change the very definition of an application. Lightweight, throwaway tools, frequently open source software (OSS), loosely coupled together, allow creativity and experimentation to thrive and velocity to increase. Project-teams morph into product-teams that embed themselves with the business and own the outcomes from ideation to implementation and even the operational running of the product.

In the new economy, this kind of digital transformation[1] is key to remaining competitive and it is easy to see why organizations like Facebook, Salesforce.com and Google have reinvented their development and delivery archetype. Ways of working in these organizations are also shifting to MicroSourcing[2] and contingent employees[3] and this is driving the need for more automation and collaboration tools to manage this highly dynamic, loosely federated workforce.

But what about the most highly regulated, large enterprises (HRLEs) in the world? How do they pivot to a DevOps culture?

Enterprise DevOps is different

Forrester analysts Kurt Bittner and Amy DeMartine wrote in SD Times recently[4], “[We] found that DevOps adoption varies greatly by industry and application type due to varying customer sophistication, regulatory constraints, and competitor-savvy factors.”

Their research correlates exactly with my experience. I often see DevOps adoption led by those enterprises that are both time-to-market driven and heavily regulated. Others aren’t slouching either and are ramping up fast behind them. Even in the public sector, and in other traditionally slow-to-evolve industries, DevOps adoption is happening and is on every CIO’s agenda.

Adoption Rate[5] by quintile, from fastest adoption (top) to slower adoption (bottom):

Systems of Innovation
  • Fastest (top quintile): Manufacturing, Pharma, Services
  • Faster (2nd): High-Tech Manufacturing, Financial Services
  • Fast (3rd): Retail, Wholesale
  • Adoption (4th): Media, Entertainment, Healthcare
  • Slower (bottom): Energy, Mining, Utilities, Telcos

Systems of Differentiation
  • Fastest (top quintile): Manufacturing, Pharma, Services, Financial Services
  • Faster (2nd): High-Tech Manufacturing, Retail, Wholesale
  • Fast (3rd): Utilities, Telcos, Government
  • Adoption (4th): Energy, Mining, Healthcare
  • Slower (bottom): Media, Entertainment

Systems of Record
  • Fastest (top quintile): Manufacturing, Pharma, Services
  • Faster (2nd): High-Tech Manufacturing, Financial Services
  • Fast (3rd): Retail, Wholesale, Utilities, Telcos, Government
  • Adoption (4th): Energy, Mining, Healthcare
  • Slower (bottom): Media, Entertainment

Highly regulated large enterprises (HRLEs) in bold.

Highly regulated large enterprises (HRLEs) are faced with unique constraints that make adoption of pure DevOps practices difficult. HRLEs face exceptional challenges of scale, application inter-dependencies, multi-modal IT, disparate and dispersed teams, resource pressures, modern versus legacy application architectures and compliance concerns.

Separation of Duties

Separation of Duties (SoD – also called “Segregation of Duties”) is a time-honored principle requiring that individuals with a particular responsibility in business not be the ones who review and approve the actions associated with that responsibility. As some have said, the poachers can’t be the gamekeepers. For many in IT, the first time they experienced this requirement was in the wake of the Sarbanes-Oxley legislation, which followed the financial scandals of 2000 to 2003.

Increasingly, internal auditors are demanding SoD in IT processes. Risk and compliance conferences frequently have IT security risk assessment as a leading topic, and they are advising CISOs[6] and auditors to look closely at IT processes and practices. Hard-and-fast rules are implemented because of the fear of failure (such as the Knight Capital disaster[7]). Mandates such as “developers are forbidden from deploying code to production,” “changes must be approved by an independent board before they are implemented” and “release engineers have no access to source code” are common. And rightly so.

But SoD makes it hard to create multi-disciplinary DevOps teams of Developers and Operations personnel with shared responsibility and shared ownership of code deployment as desired by pure DevOps. Instead what HRLEs seek is DevOps infrastructure that requires Dev and Ops to collaborate and provides them with common, shared data about release activities. They need tools with full traceability and auditability so that, in the event of an outage, root cause analysis procedures can determine the point of failure and changes can be made to the processes and automation to prevent further occurrences.
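The kind of control described here can be enforced directly in tooling rather than in meetings. Below is a minimal, hypothetical sketch (the function, policy and names are invented for illustration, not drawn from any product) of an SoD gate: the approver of a production change may not be one of its authors, and every decision is recorded so root-cause analysis has a full trail:

```python
audit_trail = []  # traceability: every approval decision is recorded

def approve_change(change, approver):
    """Separation of Duties check: authors may not approve their own change.
    (A hypothetical policy; real tools enforce richer rule sets.)"""
    allowed = approver not in change["authors"]
    audit_trail.append({"change": change["id"], "approver": approver,
                        "allowed": allowed})
    if not allowed:
        raise PermissionError(
            f"SoD violation on {change['id']}: {approver} is an author")
    change["approved_by"] = approver

change = {"id": "CHG-1042", "authors": {"dev_alice"}, "approved_by": None}
approve_change(change, "release_bob")  # allowed: independent approver
```

Had "dev_alice" tried to approve her own change, the call would raise `PermissionError`, and both outcomes, permitted and refused, would appear in the audit trail.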

As organizations move towards “generative cultures”[8], where very high standards are demanded, collaboration becomes a key corporate capability. These organizations look to pre-emptively prevent ‘bad situations’ by making security, penetration and load testing part of core development activities. Traceability and audit compliance must also be enforced from the start, not treated as a later ‘bolt-on’ to address a compliance problem.

Specialization and Segregation

Similarly, HRLEs frequently optimize their resources to an extreme degree and compartmentalize skillsets. The Database Administrator is the specialist who makes the database changes an application release needs; generalists in the Dev, Ops or DevOps team do not have this permission. Indeed, there is often a physical (and technical) segregation of the production environment from the development environment. Moving code from one to the other requires a special, secure, sometimes secret, approved transfer of artifacts. Frequently this arcane activity is conducted in the so-called “million-dollar meeting,” where stakeholders from Biz, Dev and Ops gather, two or three times a week, to vet and approve the deployment of code. All of this makes joint ownership and collaborative problem-solving harder.

These are real issues for DevOps purists and they do affect how DevOps is implemented in HRLEs, but they do not prevent highly successful DevOps initiatives from occurring.

The idea of the product-team owning every stage of an application lifecycle is admirable, but not optimal for HRLEs. The mass of compliance requirements that have nothing to do with the application, yet still need to be demonstrably delivered, requires exceptional specialization. The underlying architecture of n-tier applications is complex because of built-in security layers, historic compatibility issues and the needs of foreign governments, and it too needs expert enterprise application architects to manage and evolve it.

Finding a single resource with a skillset that can address all of these challenges is very difficult and very expensive. The DevOps goal of a single cross-functional team is difficult to put into practice because cross-skilled, trained and security-cleared resources simply don’t exist in vast numbers in the job market. Replacing the existing team and their system knowledge, or updating your team’s complete skillset overnight, are not realistic choices.

Even the simple need to be able to move team members around to meet project resource needs requires that developers, quality engineers and build experts do not become too tied to one technology and stack of tools.

Security and stability

As the pressure to release apps at greater velocity grows, so has the freedom and empowerment experienced by development teams. At the same time, there is a risk of opening the floodgates (or, more accurately, a backdoor) that allows the criminal fraternity to access internal systems, customer-facing apps and, in some senses worst of all, sensitive customer data. APIs are speeding up integration between suppliers, partners and various platforms, enabling rapid application development. But adoption of APIs also brings risks when there is no true API management that caters for volume, puts the right access controls in place, and enables an API to be sunsetted when no longer needed rather than left as an inadvisable entry point. On a similar note, the rush to explore the value of containers has to be balanced by security considerations and safeguards.


There is no shortage of software companies eager to persuade enterprises that their technology is essential for DevOps success. Many of these address the “Automation” pillar of the DevOps credo, and some can save vast time and effort versus manual approaches. However, the fact that many tools are open source and offer free trials can lead to a proliferation in use without any coordination or wider consideration.

Legacy systems are legendary

Though the technology is five decades old, it still processes more transactions every day than the Internet does. The mainframe, for almost every HRLE, is the transaction-processing workhorse of the business. The systems that run on it still contain code written decades before the first Wi-Fi signal was broadcast. Billions of lines of code executing and keeping the business running cannot be replaced by Microservices overnight. Indeed, the value of doing so rarely exceeds the cost, and the risk introduced by such a massive overhaul is too great. Many organizations are surrounding their legendary systems with newer technology in the hope that, one day, they will have replaced the old system.

DevOps has been a part of the mainframe landscape since its inception. Not called DevOps, but endowed with all the DevOps principles, the approaches and culture have long been about shared ownership of release success. Today, as the resources who support the mainframe get younger, the willingness to adopt newer approaches and methodologies is increasing and platform-silo thinking is being eroded.

So the move to Microservices is reasonable and practical for a business a few years old, but not so practical for a business a few centuries old. There are many mainframe applications designed with a SOA[9], but none that have fully embraced Microservices. HRLEs are transforming their mainframe solutions to meet the needs of the non-mainframe teams that have already transformed, and we are seeing the encirclement of legendary systems with modern applications that enhance and replace some of the original functionality. The encirclement of the older code is not an end in itself; adding Microservices around the older apps is a way to reach the business objective sooner.

You can and should do DevOps with legacy systems. Enterprises are not “pure” and will never be “pure,” but they can always go faster, deliver better quality and be more predictable and reliable. Any mainframe application can adopt DevOps principles and improve its delivery rate dramatically.

Enterprise DevOps succeeds

The thought leaders of the DevOps movement[10] have determined that the path to a successful implementation depends on five principles, known by the acronym CALMS[11]:

  • Culture – Own the change to drive collaboration and communication
  • Automation – Take manual steps out of your value chain
  • Lean – Use lean principles to enable higher cycle frequency
  • Metrics – Measure everything and use data to refine cycles
  • Sharing – Share experiences, successful or not, to enable others to learn

Our view is that CALMS should really be ALMSC, because it is very difficult to change a culture organically without first automating processes so that the human desire to do things the “old way” is circumvented.

Automation brings processes tangibly and visibly to life, enabling thoughtful re-evaluation and optimization and thus creating leaner, higher-velocity processes. It also brings much-needed telemetry (records of every action: by whom, for whom, where, what, when, how and why), enabling detailed measurement of activities and visualization of the data in dashboards shared by Dev, Ops and other stakeholders. As resources, priorities and methods evolve, everyone can see in real time, in their dashboards, the positive (or not) effect of those changes, enabling continuous improvement and letting all participants share in the outcomes. This, ultimately, leads to the lasting cultural change that is the hallmark of DevOps.

It cannot be overstated how vital executive sponsorship is to effecting this kind of cultural transformation. Automation is also key to buttressing executives’ continuing investment and commitment: as the automation spins off data on the improvements in quality, timeliness, completeness and efficiency being delivered, it reinforces the reasons the initiative was important and provides proof points for the ROI that initially justified the decision.

Note that you certainly don’t want to automate a bad process. Common sense review of process is required as a starting point. One should also be wary of falling into the “analysis paralysis” trap and over-thinking existing processes and trying to optimize them completely before automating them. It is far easier to modify and improve an automated process than a paper one.

In order to automate processes successfully, the three types of process (documented, undocumented and working practices) must be consolidated. This provides a great opportunity to apply common sense and to simplify and make lean where practicable.

For DevOps to succeed in HRLEs, automation is key. To deliver the necessary collaboration and cooperation, technology must provide access to the data everyone needs. With automation it is possible to achieve a level of openness and shared understanding of the current state far more easily than through interminable status meetings and conference calls.

As a minimum investment in technology to support your Enterprise DevOps initiative, consider these four technologies as your baseline starting point. Each addresses a key issue in DevOps, and they can be implemented in any order and connected in any way as your DevOps matures. Start with the one that addresses your most important concerns as you start your DevOps program.

  1. DevOps collaboration platform, enabling clear visibility for Biz, Dev and Ops into what is happening. Sometimes called the “DevOps Pipeline,” this is the capability that connects the toolchain flow of technologies supporting release and deploy activities. Shared, common data must be readily accessible, showing the state of all project (and product) team activity from the onset of development to safe delivery into production. Dashboards, alerts, notifications and approvals must be integrated into a common DevOps infrastructure too.
  2. DevOps deployment automation, enabling repeatable, predictable and reliable delivery of applications through the deployment pipeline and on into production. Automated support for Continuous Deployment[12] initiatives is essential, as it allows reuse of common deployment techniques (back up the database, reset the server configuration, start the web server, etc.). Automated backout and recovery processes minimize downtime and help guarantee services remain on-air. Build and release engineers spend more time optimizing and tuning deployments than writing throwaway scripts, resulting in ever-increasing performance, compliance and quality.
  3. DevOps secure artifact repository, representing the “single version of the truth.” Ops has long been concerned with innumerable changes entering the production environment by too many different paths. Quality, reliability and transparency need to be the same irrespective of the size, complexity or origin of the changes. A single secure artifact repository is the essential first step: Dev teams deliver into the common repository, where a battery of standard tests is applied, ensuring a common minimum standard of quality is always maintained. From there the code is automatically moved through the lifecycle towards production (see #2). In ITIL terms this is sometimes called the Definitive Media Library (DML).
  4. DevOps infrastructure for Agile teams, supporting development efforts that use iterative methods of software creation and delivery. Continuous Integration is a key capability for Agile development teams. Version management (branching and merging) is essential, and it is critical to understand what code is in use in each part of the delivery pipelines. Work-item management, managing the backlog (epics and stories), has to be flawlessly performed to ensure Biz, Dev and Ops alignment. An enterprise-scale solution that meets the needs of the Agile development and delivery team is vital, but it must also meet the needs of the enterprise stakeholders responsible for oversight.
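To make items 2 and 3 above concrete, here is a minimal sketch (all names, environments and checks are invented for illustration, not drawn from any particular product) of a single secure artifact repository feeding an automated promotion pipeline, with telemetry recorded at every step:

```python
import hashlib
import time

PIPELINE = ["dev", "qa", "staging", "production"]  # environments, in order

class ArtifactRepository:
    """Single version of the truth: artifacts enter once, keyed by content hash."""
    def __init__(self):
        self.store = {}
        self.telemetry = []  # who, what, when: data for dashboards and audits

    def record(self, actor, action, detail):
        self.telemetry.append({"ts": time.time(), "actor": actor,
                               "action": action, "detail": detail})

    def publish(self, name, content, actor):
        digest = hashlib.sha256(content).hexdigest()
        self.store[digest] = (name, content)
        self.record(actor, "publish", f"{name}@{digest[:12]}")
        return digest

def not_empty(content):
    """Stand-in for the battery of standard tests every artifact must pass."""
    return len(content) > 0

def promote(repo, digest, env, checks, actor):
    """Advance an artifact to the next environment only if all gate checks pass."""
    name, content = repo.store[digest]
    for check in checks:
        ok = check(content)
        repo.record(actor, f"check:{check.__name__}", "pass" if ok else "fail")
        if not ok:
            return False  # failed gate: the artifact goes no further
    repo.record(actor, "promote", f"{name} -> {env}")
    return True

# Usage: publish once, then promote through every environment in order.
repo = ArtifactRepository()
digest = repo.publish("payments-1.4.2", b"<build output>", actor="ci-bot")
for env in PIPELINE:
    promoted = promote(repo, digest, env, checks=[not_empty], actor="release-eng")
    assert promoted
```

Because every publish, check and promotion lands in `repo.telemetry`, the same data can drive the shared dashboards described in item 1.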

With technology as a starting point the DevOps migration can begin.


The goal of DevOps is the same irrespective of the environment in which it exists: lowering the risk of change through tools and culture. But there are important differences, and it is important to understand them.

DevOps → Enterprise DevOps
  • Pure Agile teams → Variable-speed IT
  • Multidisciplinary team members → Team maintains Separation of Duties (SoD)
  • Drawn primarily from Dev and Ops teams → Drawn primarily from Change and Release teams
  • Limited variability in platforms, technology and toolsets → Wide variances in platforms, technology and toolsets
  • Generally collocated small teams → Generally geographically dispersed large teams
  • Frequent microsourcing and contingent workforce → Frequent offshore outsourcing
  • Light compliance culture → Tight compliance culture
  • Limited cross-project dependencies, microservices → Complex cross-project dependencies
  • Experimental, A/B-testing, fail-fast culture → Move-fast-without-breaking-things culture
  • Distributed ownership → Centralized, specialist culture
  • Team developing the app runs the app → Team developing the app kept separate from team running the app
  • Makes do with a toolchain of loosely integrated open source solutions → Needs vendor-supported, secure, scalable infrastructure

As Gartner shows in its Pace-Layered Architecture[13] and Bimodal IT[14] taxonomies, Enterprise DevOps has to be equipped to handle not just the high-speed express release trains from the Agile teams but also the relentless heavy-freight releases from the more traditionally organized teams. Enterprise DevOps needs its own infrastructure to support both low-touch incremental releases on demand and high-scrutiny releases months out in the calendar.



[1] Analyst group Altimeter defines Digital Transformation as “the realignment of, or new investment in, technology and business models to more effectively engage digital customers at every touch point in the customer experience lifecycle.”

[2] A shift from employees to on-demand independent providers and individuals with specialist skills and services

[3] Contingent employees are a provisional group of workers who work for an organization on a non-permanent basis.

[4] SD Times, Where’s the heat in DevOps, 2016-02-01

[5] Sources: Forrester and Gartner

[6] CISO, Chief Information Security Officer

[7] Forbes, 2012-08-12 – Knight Capital Trading Disaster Carries $440 Million Price Tag

[8] Generative organizations set high standards and constantly strive to exceed them. Failures are seen as a reason to improve, not to cast blame. Leaders know what is happening because employees tell them. Knowledge about current status is a shared goal because it enables teams to preempt and prevent errors. This is no Utopian goal though: everyone knows errors will occur, and the culture says that is OK as long as they are mitigated quickly, do not escalate, and lessons are subsequently learned.

[9] Service Oriented Architecture

[10] Gene Kim, Damon Edwards, Jez Humble, John Willis et al.

[11] Originally 4 principles and known then as CAMS

[12] CD (Continuous Deployment) requires that application artifacts flow automatically from one environment to another on successful completion of required testing (automated or not)

[13] Pace-Layered Architecture classifies applications as Systems of Innovation, Systems of Differentiation and Systems of Record. Each differs in its base technologies, rate of change and criticality to the business, and this affects the investment it receives.

[14] Bimodal IT suggests that IT has two speeds of development. For Systems of Innovation, use Mode 2: high-speed, iterative development. For Systems of Record, use Mode 1: slow-speed, low-risk development. HRLEs have Variable-Speed IT, with many different development approaches as needed by time-to-market, compliance, risk, technology, methodology and topology requirements.


Posted in Business and Technology | Leave a comment


Ending my last post with “just because you can doesn’t mean you should” only tells half of the story. Sometimes an industry is ripe for disruption.

Healthcare is a mess. Secret systems, arcane rules, grotesque profits, arbitrary pricing, ridiculous paperwork and terrifying customer service. It is time to separate the poachers from the gamekeepers, those who insure must not be allowed to collude with those who provide. Pricing should be open, public and competitive and maximums should be regulated. 

Insurers should refund the healthy some of their premiums and the government should enter the market offering an alternative comprehensive program paid through payroll for those who can and free for those who can’t. 

If the doctor says it’s required, it’s required, and the insurer pays without ever questioning the patient. If the doctor is found to be over-prescribing, they get suspended at least and lose their license at worst. Doctors need to be audited, but they have to be assumed to be doing the best for their patients, not subjected to suspicion by the insurers.

There needs to be the same 5-star system for doctors that Uber has for drivers and the same 5-star system for patients that Uber has for passengers. Surge pricing can work for good doctors at unpleasant hours just as it does for Uber rides. The price should be known in advance and the patient should be able to shop around. The insurer should only hold the money and distribute it and not get between the patient and the doctor. 

Ok valley … let’s get to it … let’s make healthcare workable. I don’t mind if you start in the valley first. 

Posted in Business and Technology | 1 Comment

Change you can believe in: why change isn’t to be feared

How much does a single change cost you personally? Do you resist change because you’re inherently fearful of new things? Or do you embrace change and live for the excitement of the novel experience?

The one thing we all take for granted is that change is inevitable. If we look around our desks for just one minute we will see the agents of change in every corner.

Our day is filled with making changes, telling people about changes, hearing about changes, challenging changes others are trying to make, responding to changes that got out of hand, resetting changes and completely missing changes we should have noticed.

We don’t stop and ponder a change in isolation. We see the entirety that exists now and we imagine the new entirety that will exist soon. We frequently worry about change in terms of the coming whole rather than the incremental difference.

We make laws with sweeping provisions, but rarely do they affect many people, and even those affected are hardly inconvenienced. Take texting while driving: most people don’t do it, and those who do are few and, yes, reckless, and deserve to be punished. But some look on this as an insidious infringement of their First Amendment rights. No room for common sense, only extreme reactions.

To delay or defer a change some people look for what will go wrong and how the worst case will impact them. They seek the extreme corner cases and extrapolate the most sinister outcomes.

Technology life is like that too. “This new release will change everything we do and will result in lost orders, customer dissatisfaction and employees will quit!” we might hear even before the requirements are written and a single line of code has been changed.

Some changes are disastrous but no one starts out with that as their objective. When changes go wrong, and they do of course, it is not the change that was bad but the process that allowed a bad change to be delivered.

We have to find a way to embrace change and see it for what it is: incremental improvement. That means focusing on what is different: asking what the impact of each element is and quantifying it. Wouldn’t you rather deal with exceptions than with mundane, repetitive tasks?

Embrace change or be left behind.

Posted in Business and Technology, Personal experiences | Tagged , | Leave a comment