Organizations we visit are consumed with “getting it right”, sometimes to the point of being immobilized by their own perfectionism. We do not contest that in some industries, and in some activities in every industry, there is a need for six-standard-deviation performance (3.4 defects per million opportunities) or better: some because of safety (power plant operation, balancing the grid, air traffic control, and the like), and some because it is mandated (compliance issues, account integrity in financial services). Where this level of performance is required, it is needed, and cost is not the issue.
But for most activities in an organization, quality and service can be maintained with slightly less obsessive attention to perfection. This is why, to truly manage cost and yet remain nimble enough to respond when challenged, managers must pay attention to what level of “perfection” is really required; once that level is reached, the quest for more certainty, or more risk reduction, must be brought to a close. Extra certainty and risk reduction take time, cost money, and delay a market response, all of which penalize the organization for no good purpose.
In studying organizations, I recognize that only the truly entrepreneurial ones are comfortable with the level of ambiguity and potential for problems that Pareto’s Law (20% of the effort yields about 80% of the results) holds out.
Yet applying the Pareto principle a second time (spending the second 20% of effort to capture 80% of the remaining fifth of the results) is far more often than not “good enough”.
Yes, yes, I know. Flawless execution and so forth. But far less work requires better than 96% than meets the eye (80% + 80% × 20% = 96%, and for only 40% of the effort).
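The arithmetic behind that 96% figure can be sketched in a few lines. This is an illustration only: it assumes each equal slice of effort captures roughly 80% of whatever results remain, which is the Pareto ratio treated as a rule of thumb, not a law of nature.

```python
def pareto_coverage(passes: int, capture: float = 0.8) -> float:
    """Fraction of total results captured after `passes` equal slices of effort,
    assuming each slice captures `capture` of whatever results remain."""
    remaining = 1.0
    for _ in range(passes):
        remaining *= (1 - capture)  # each pass leaves 20% of what was left
    return 1 - remaining

print(round(pareto_coverage(1), 2))  # first 20% of effort -> 0.8
print(round(pareto_coverage(2), 2))  # second 20% of effort -> 0.96
```

Note how quickly the curve flattens: a third pass would buy only another 3.2 points, which is exactly the point at which the quest for more certainty stops paying for itself.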
So what does this mean in practice?
It means delegating the non-critical details, to be worked out after the decision to proceed is made. An error or two won’t really cause that much trouble.
It means a willingness to recognize an error, recover from it, and carry on. Of course, if you plan your budget to the penny, and didn’t plan for errors…
It means accepting that everything that is done is imperfect, but good enough, and open for change in the future. Be like a vendor: there’s always a next release.
It means providing after-the-implementation services to support the business, fixing obvious flaws, enhancing what is delivered to fit the real world, and knowing that in a few years what you’ve just delivered will be ready for replacement. Nothing lasts long.
What organizations get from this sort of behaviour is the kind of increase in responsiveness typically attributed to wholesale change in methods and operations, but achieved within ways of working that are already known and understood.
Let me share an example from a six-month-long consulting engagement to assess organizational readiness for a major IT change: all new applications, new sourcing agreements, the works.
I observed that typically 60-70% of any person’s work in a day was focused on trying to be perfect, trying to avoid error, and continuing analysis past the point of reasonable returns for the effort put in.
Converting even half of this to actual progress-making effort would be like doubling the throughput of the unit: the odd recovery step here and there would be fully paid for before a misstep was even made.
Moreover, I observed that the accelerated pace did not feel stressful (a second engagement a year later gave me the opportunity to test this), as most of the stress felt by staff came from the quest for perfection itself.
As a result, there would be less overtime, and far less downtime, in such a situation. There’s the budget money for errors right there.
Overcoming the desire to be perfect is not easy, and changing expectations up and down the line is a non-trivial task.
Still, doubling productivity, or doubling the responsiveness of the organization, pays off directly on the income statement.
In turbulent times, when the economy is troubled, isn’t every bit of insurance against a failure to perform worth experimenting with?