For this month’s discussion by ASQ’s Influential Voices, Paul Borawski looks at Risk, Failure & Careers in Quality.
There is a bias toward avoiding the possibility of failure by avoiding actions that may lead to failure, or even by avoiding action altogether. This is a problem. The pressure in so many organizations to avoid failure means wise actions are not taken because they carry a risk of failure.
Many times, however, the criticism of such cultures gets a bit sloppy, in my opinion, and treats the very idea of avoiding failure as bad. Reducing the impact of failure is wise and sensible. We don’t want to sub-optimize the whole system in order to avoid as much failure as possible. But we don’t want to sub-optimize the whole system by treating failure as something to welcome, either.
Part of the problem is sloppy thinking about what failure is. Running an experiment and getting results that are not as positive as you had hoped is not failure. That is going to happen when you run experiments. The reason you run PDSAs on a small scale is to learn: to minimize the cost of running the experiments and to minimize the impact of disappointments.
Running an experiment and having results that negatively impact customers, or that result in unplanned costs, may well be failure. Though even in that case, calling it failure may be less than useful. I have often seen a new process that eliminated 10 problems for customers but added 2 attacked for the 2 new problems. While those new problems are not good, a net gain of 8 fewer problems should be seen as success, I would argue, not failure. Often, however, that is not how it is treated. And an attitude in which any new problem is blamed on those making a change, regardless of the overall system impact, definitely hampers improvement.
As I said in a previous post, Learn by Seeking Knowledge, Not Just from Mistakes:
The culture I want to develop is one where systems thinking leads to optimizing the overall system. To the extent that doing so requires taking risks that may include some failures, taking those risks is good. But we also need to use the long-known practices to reduce the costs of adverse results.
“Fail fast” is not really about failing, though it uses that word. It is about learning quickly, at low cost, by using wise strategies. If you see learning what won’t work as failing, then you want to encourage failure. But that is not a useful way to look at things, in my opinion. The strategy behind failing fast is wise. But it is really just about piloting changes or innovations on a small scale, learning about your customers quickly, and iterating quickly.
We should have mistake proofing in place to avoid as many failures as possible. Failing because we do a lousy job of running an experiment isn’t good, and it isn’t something to encourage. We want to develop a culture where risks are accepted when they are useful, but that doesn’t mean we want to take risks that are foolish and unhelpful. And minimizing damage to customers is critical. At times we must even risk bad results impacting customers, but that should be minimized.
Failure that is due to adopting a new process on a wide scale, when it should first have been iterated using PDSA on a small scale, is not something we want to accept as the cost of innovating. While we are innovating and taking risks, we need to use the existing tools and practices to maximize the learning we can achieve, learn as much as we can quickly, minimize the costs of any problems, and so on.
We should not attach a stigma to every failure. But neither should we accept every failure as the cost of innovating and improving. Many failures are due to poor management practices (failing to use PDSA, failing to understand variation…), and in such cases we need to examine the management weaknesses that led to the failure and take steps to improve the management system.
The goal is to maximize innovation and improvement. To the extent we need to take risks and accept some failures to achieve this, we should accept failure. But that doesn’t mean we stop continually trying to improve our management systems to reduce the costs of failure. Even while we take risks, we want to do so intelligently.
Related: Is Google Failing Too Often? – Mistake Proofing Deployment of Software Code – Making Changes and Taking Risks – One factor at a time (OFAT) Versus Factorial Designs (interactions must be tested early to optimize effective experimentation)
My ideology on how to link both risk and failure.
“Take the risk and go ahead with the plan if the impact of the failure is reduced to minimal loss”