An interesting email dropped on my digital doormat this morning asking for my input on the “key KPIs for a Fortune 2000 IT department.” What made this more interesting is that I have just left Singapore, where I had been working with a client on this very topic.
So, in the way that atoms randomly smash together sometimes into what looks like a pattern, it seems like a good time to explore this subject in the blog.
Let me start by saying that my view of KPIs (Key Performance Indicators) is that they serve the purpose of helping managers see the effect of their decisions and giving them early warning of when they need to intervene. I fully support the trend towards compensating managers based on their effect on KPIs. I also see a real distinction between KPIs and SLAs (Service Level Agreements) that often gets blurred these days: KPIs help you manage what you are responsible for delivering; SLAs help you manage your relationship with other parts of the business and what they deliver to you or you deliver to them.
In this morning’s email the author posited that these 5 KPIs were a good starting point:
- IT Cost/Revenue
- Availability
- Rate of Application Change
- Risk
- Employee Satisfaction
I like the simplicity of these. They have the same emotional charm as the “flat tax”: I can work them out in my head, I know what they mean and I know what good and bad are.
But after a little thought it occurred to me that, if we dig deeper, each of them is flawed:
- IT Cost/Revenue – It is simple to calculate, but what does it actually tell you? If the KPI is trending up or down, is that a good or a bad thing? How this number compares to industry norms is interesting for investors, but the ratio depends a lot on the effectiveness of parts of the business that are outside the control of IT. One might even suggest that when revenue is down, increased investment in IT should be the strategy. My counter-suggestion would be (Cost/Revenue) : industry norm, so that the business can see it is investing at, say, 85% of the rate of similar insurance companies, or 110% of the rate of the world as a whole. This would reflect trends in the IT industry as well as the vertical sector the business is in.
- Availability – Also straightforward to calculate, as the time systems are up divided by total time, with planned and unplanned downtime factored in or reported as part of the KPI. But for this to be a meaningful measure, I would want availability to also measure the reach of the systems. Today much of the delivery of a system is through hand-held devices. A company that deploys only to the iPhone because it is easy is missing out on the fast-growing ‘Droid market and may be missing the BlackBerry market entirely. So reach has to be built into availability too. This makes it a key performance indicator of the appeal of the application and, by extension, the business as a whole.
- Rate of Application Change – is, for me as an ALM guru, the most interesting, as it rolls up many aspects of change. It acknowledges the rate at which the business is placing demand on IT, IT’s ability to respond to that demand, and the agility of the application development system to deliver quality change into production on budget and on time. On Monday this week I completed a study on release management for a large telco, and the problem came down to “less haste, more speed”: the more they rushed to change things, the more the business processes clogged up, error rates soared, customer satisfaction declined and changes missed market windows. So rate of change is inextricably tied to quality of change, timeliness of change and customer satisfaction (internal and external). The recommendation I would make about rate of change is to measure the satisfaction of the business with IT’s performance across a broad range of vectors rolled up into one rating. Those vectors would include value for money, collaboration, engagement, communication, timeliness, quality, fitness for purpose, strategy alignment, business alignment, innovation, and delighting customers (internal and external), and would be measured across all departmental executives on a quarterly basis.
- Risk – this is a fascinating measurement. There is of course the quantification of any risk associated with security, disaster or compliance, but let’s not forget the risk associated with software changes. Too much lip service is paid to this kind of risk and its evil twin, “complexity”. Serious tracking of the risk and complexity of releases is essential, and their trends need to be carefully monitored. Incremental increases in risk and complexity are significant multipliers on development, testing and deployment time frames; failure to acknowledge this leads to overruns of time and budget, and the compromises made to address those overruns hurt quality. Software change risk is a clear indicator of poor early-stage planning and a lack of communication and collaboration between IT and the business.
- Employee Satisfaction – Don’t underestimate the stress on employees. Where there is stress there is staff turnover, loss of institutional knowledge and a resulting drop in productivity and quality.
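To make the first two critiques above concrete, here is a minimal sketch of how the normalized Cost/Revenue ratio and a reach-aware availability figure might be computed. All function names and figures are hypothetical illustrations, not a prescribed implementation.

```python
def cost_revenue_vs_norm(it_cost, revenue, industry_norm_ratio):
    """Return IT cost/revenue expressed relative to an industry norm.

    A result of 0.85 means the business is investing at 85% of the rate
    of comparable companies; 1.10 means 110%.
    """
    return (it_cost / revenue) / industry_norm_ratio


def availability(uptime_hours, total_hours, platforms_served, platforms_target):
    """Classic uptime ratio, scaled by 'reach': the share of target
    platforms (iPhone, Android, BlackBerry, ...) actually served."""
    return (uptime_hours / total_hours) * (platforms_served / platforms_target)


# A hypothetical insurer spending 4% of revenue on IT against a 5% norm:
print(round(cost_revenue_vs_norm(4_000_000, 100_000_000, 0.05), 2))  # 0.8

# 99% uptime over a year, but only 2 of 3 target mobile platforms covered:
print(round(availability(8672.4, 8760, 2, 3), 3))  # 0.66
```

The point of the second function is exactly the one made above: a system that is “up” but unreachable from a third of its intended devices is not fully available.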
And if I could have only two, they would be numbers 3 and 5. Customer satisfaction and employee satisfaction are the best litmus tests of what we do. Ultimately they are all that matters.
One question about this line:
“I fully support the trend towards compensating managers based on their effect on KPIs.”
How does one ensure that KPIs are well enough defined that managers don’t simply game their systems and results to show whatever they need to show to get the maximum bonus? I see plenty of evidence of this kind of thing all the time. To take an extreme example, banks set KPIs for managers based on the number of sub-prime mortgages sold. The assumption underlying all of this was that no one would sell foolishly to achieve targets, because that would be bad for business, and everyone had an interest in keeping the business, and their compensation, intact.
But when managers and salespeople found they could make big enough bonuses by hitting or exceeding their KPIs, they drove sales through the roof, and we all know what happened next. By the time the collapse came, many of the ground-level culprits had made off with their millions.
Compensating people based on KPIs strikes me as less a way of ensuring business success than a way for senior management to ensure maximum compliance with its (likely KPI-driven) goals. The best ideas I have seen to date at my present site have come from people on the development team who have no KPI-driven compensation whatsoever, but who are possessed of something far more important: a sense of integrity and professionalism.
What you said makes a lot of sense. However, I would like to say that KPIs should be positioned to measure the success of the business. The key lies in designing the KPIs. Senior management should balance out the KPIs so that one KPI checks the overwhelming results of another (or the KPI itself should have factors or components that give a balanced value).
For example, in the subprime mortgage case, I would design the KPI to include (1) the number of subprime mortgages sold and (2) the default rate. The KPI could then be a ratio – number sold / default rate – and a senior manager can check the performance of the business by improving this ratio.
Therefore, the KPI itself is not flawed; the way KPIs are designed and handled makes the difference.
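The balanced-ratio idea above can be sketched in a few lines. The function and figures are hypothetical, purely to illustrate how pairing a volume metric with a quality metric makes the KPI harder to game:

```python
def balanced_mortgage_kpi(mortgages_sold, default_rate_pct):
    """Number of mortgages sold divided by the default rate (in percent).

    Selling recklessly raises volume but also the default rate, so the
    ratio only improves when growth comes with sound underwriting.
    """
    if default_rate_pct <= 0:
        raise ValueError("default rate must be positive")
    return mortgages_sold / default_rate_pct


# Careful growth: 1,000 sold at a 2% default rate
print(balanced_mortgage_kpi(1000, 2.0))  # 500.0

# Reckless growth: triple the volume, but 8% defaults -> a worse score
print(balanced_mortgage_kpi(3000, 8.0))  # 375.0
```

Tripling sales while quadrupling defaults lowers the score, which is exactly the check on “overwhelming results” described above.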
I’m flattered my email prompted a blog post. Thank you for the thoughtful analysis. But please let me offer a little context for the question and some thoughts on your response. Sorry for the lengthy reply.
The reason this is an interesting question to us is the somewhat selfish reason that we’re building a SaaS application to help vendors selling complex (expensive) solutions into the enterprise. We believe (along with lots of other people) that one of the biggest challenges – and shortcomings – of most complex selling approaches is their failure to align to key business drivers and metrics (KPIs). Very closely linked to this challenge is the failure to quantify the impact of a solution EARLY in the process. In essence, people just aren’t focused on selling value (instead giving in to features, functions, and demos). Buyers want value; they want to understand how vendors help drive initiatives and solve their problems; they want vendors to explain in terms of dollars and key metrics. Having good common KPIs makes it easier to overcome this challenge.
Thus, we believe KPIs should meet some important, basic, criteria:
• Key means few and most important. We chose 5.
• Performance means it should easily tie to the top line.
• These “metrics” should be easily quantifiable and easily comparable.
This of course doesn’t mean there aren’t lots of other things an IT department – or any department, really – should measure, just that it would be extremely useful to have a common, small list that basically allows a “summary” view. We also contend that every other metric can be “rolled up” into one of these metrics: virtualization software – reduced Cost; network optimization appliance – better Application Performance; web security service – reduced Risk.
In the end there are really only two things you measure in business – whether a company, a department, or an individual: performance and cost (or think top line and bottom line). Great performance is wonderful, but not at unbearable cost. And cheaper is nice, but not without performance. Cost is relatively easy to measure (I would argue that, because of this, it receives too much attention). But performance requires a little more nuance. To provide some more context, our list of 5 KPIs for Sales is:
1. Sales Conversion Rate
3. Sales Cycle Time
4. Sales Costs
5. Sales Growth
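As a rough sketch of how several of the listed sales KPIs could be computed from the same underlying data, here is a toy example. The deal records, field names, and figures are all hypothetical assumptions for illustration:

```python
# A hypothetical set of deal records (won/lost, cost to pursue,
# cycle length in days, and closed value).
deals = [
    {"won": True,  "cost": 5_000, "days": 30, "value": 40_000},
    {"won": False, "cost": 2_000, "days": 45, "value": 0},
    {"won": True,  "cost": 7_000, "days": 60, "value": 90_000},
    {"won": False, "cost": 1_000, "days": 20, "value": 0},
]

conversion_rate = sum(d["won"] for d in deals) / len(deals)
avg_cycle_days = sum(d["days"] for d in deals) / len(deals)
total_cost = sum(d["cost"] for d in deals)
revenue = sum(d["value"] for d in deals)

print(f"Conversion rate: {conversion_rate:.0%}")          # 50%
print(f"Avg cycle time: {avg_cycle_days:.1f} days")       # 38.8 days
print(f"Sales cost / revenue: {total_cost / revenue:.2%}")  # 11.54%
```

The appeal of a small common list is visible even here: one dataset yields a performance view (conversion, cycle time) and a cost view (cost/revenue) side by side.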
Maybe a better way to ask the question is this: if all the Fortune 500 CEOs got together and had to jointly agree on the 5 metrics they would use to judge the effectiveness of all of their IT departments – and to easily compare between each other – what would those 5 metrics be?
To your specific comments of our suggested KPIs, my replies:
• Our implementation for KPIs will be based on industry and company size as you suggested
• While reach is an interesting idea as part of availability, I’m not sure how to easily measure and compare it. I would suggest anything that impacts availability will show up in this KPI. If users can’t access an application, then it’s down, whether it’s at a server or at their tablet.
• Our catch-all for applications is Application Performance. We’re suggesting this would include more than the traditional measurement of response times – anything associated with how an application performs, is deployed, is supported, etc. Rate of Application Change, and other ALM metrics, would be “sub” metrics under this KPI.
• Your comment on our Risk KPI highlights a challenge with any attempt at new definitions: folks will see things differently. We’re trying to say that “our” Risk measures the things normally associated with the folks who manage business continuity, security, and compliance. Of course things like backup or software change management involve risk, and are very important, but we would argue these are operational concerns that would be captured under KPIs like Availability and Application Performance.
• We’re least sold on employee satisfaction – not because it doesn’t feel like a really, really good idea to capture, but because it will be difficult to compare and quantify. If employee satisfaction goes down 3%, what’s the dollar impact of that?
Kevin, really appreciate the insight and offer to extend our discussion. For those interested, I’ve been involved in several other discussions on this topic, including with the folks at Intel. Happy to point you to those if interested. Just drop me an email: email@example.com
An interesting post, although I’m unclear how you would obtain the key comparison information for the Cost/Revenue KPI, i.e. “investing at 85% of the rate of similar insurance companies”; I can’t imagine many businesses wanting to openly share this information in a timeframe that would make it useful?
I like your take on Availability however, not least because the Reach is something that’s so often overlooked: I wonder how many businesses don’t get to make money from me simply because they don’t make it easy for me to interface from my BlackBerry? The same could be said for those businesses that only publish 0845/0870 numbers of course; one can only presume that they don’t wish to receive calls and/or do business with anybody who may be using a mobile phone on a regular monthly contract to ring around…
I guess my other pet hate – clients who hang on to woefully obsolete technology internally, and/or decline to provide adequate disk space for people to use systems properly – would also come under Availability? Arguably, though, it could go under Cost/Revenue as the cost impact of *not* spending adequate money on infrastructure? I’d appreciate your thoughts.
I am not an IT person; however, I am still able to understand the replies and explanations given by Kevin. I work in a power plant. Kevin, can you shed some light on “What should be the KPIs of an IT section in a power plant?”