The following are my comments, sparked by the question “Trust, but verify. Is this a good example of Profound Knowledge in action?” on the LinkedIn Deming Institute group.
Trust but verify makes sense to me. I think of “verify” as process measures that confirm the process is producing as it should. By verifying, you know when the process is failing and when to look for special causes (when using control chart thinking with an understanding of variation). There are many ways to verify that would be bad. But the idea of trust (respect for people) is not just a feel-good, “be nice to everyone and good things happen” notion in Deming’s System of Profound Knowledge.
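As a rough sketch of what that verification can look like (the data and process below are invented, purely for illustration), an individuals control chart flags points outside the 3-sigma limits as candidates for special-cause investigation, so you know when to look and when to leave the system alone:

```python
# A minimal, illustrative sketch (made-up data): an XmR-style individuals
# control chart check, flagging points outside 3-sigma limits as candidates
# for special-cause investigation.
def mean(xs):
    return sum(xs) / len(xs)

def control_limits(samples):
    """Limits estimated from the average moving range (Shewhart individuals chart)."""
    center = mean(samples)
    mr_bar = mean([abs(a - b) for a, b in zip(samples, samples[1:])])
    # 2.66 is the standard individuals-chart constant (3 / d2, with d2 = 1.128)
    return center - 2.66 * mr_bar, center + 2.66 * mr_bar

def special_cause_points(samples):
    """Indices of points outside the limits -- where to look for special causes."""
    lcl, ucl = control_limits(samples)
    return [i for i, x in enumerate(samples) if x < lcl or x > ucl]

daily_output = [101, 99, 102, 98, 100, 103, 97, 100, 130, 101]  # invented data
print(special_cause_points(daily_output))  # → [8]: the 130 warrants investigation
```

Points inside the limits are common-cause variation; reacting to them point by point is exactly the tampering the chart is meant to prevent.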
I see the PDSA improvement cycle as another example of a trust-but-verify idea. You trust the people at the gemba to do the improvement. They predict what will happen. But they verify what actually happens before they run off standardizing and implementing. I think many of us have seen what happens when the idea of letting those who do the work improve the process is adopted without a sensible support system (PDSA, training, systems thinking…). It may actually be better than what was in place, but it isn’t consistent with Deming’s management system to just trust the people without providing methods to improve (and education to help people be most effective). Systems must be in place to provide the best opportunity to succeed. Trusting the people who do the work is part of it.
I understand there are ways to verify that would be destructive. But I do believe you need process measures to verify systems are working. Just trusting people to do the right thing isn’t wise.
A checklist is another way of “not trusting.” I think checklists are great. It isn’t that I don’t trust people to try to do the right thing. I just don’t trust people alone, when systems can be designed with verification that improves performance. I hear people complain that checklists “don’t respect my expertise,” or take the attitude that checklists are “insulting to me as a professional” – you should just trust me.
Sorry, driving out fear (and building trust – one of Deming’s 14 points) is not about catering to every person’s desire. In Deming’s System of Profound Knowledge, respect for people is part of a system that requires understanding variation, systems thinking, psychology, and the theory of knowledge. Checklists (and other forms of verification) are not an indication of a lack of trust. They are a form of process measure (in a way) that has been proven to improve results.
I have a special affinity for the use of statistics to understand and improve. I imagine it is both genetic and psychological. My father was a statistician, and I have fond memories of applying statistical thinking to understand a result or system. I am also comfortable with numbers and, like most people, enjoy working with things I have an affinity for.
Mark Anderson’s Stats Made Easy blog brings statistical thinking to managers. This is not an easy thing to do; as one of his posts, Wrong more often than right but never in doubt, shows, we have an ability to ignore data we don’t want to know: “Kahneman examined the illusion of skill in a group of investment advisors who competed for annual performance bonuses. He found zero correlation on year-to-year rankings, thus the firm was simply rewarding luck. What I find most interesting is his observation that even when confronted with irrefutable evidence of misplaced confidence in one’s own ability to prognosticate, most people just carry on with the same level of self-assurance.”
The actual practice of experimentation (PDSA…) needs improvement. Too often the iteration component is entirely missing (only one experiment is done). That is likely partially a result of another big problem: the experiments are not nearly short enough. Mark offered very wise advice on the strategy of experimentation: break it into a series of smaller stages. “The rule-of-thumb I worked from as a process development engineer is not to put more than 25% of your budget into the first experiment, thus allowing the chance to adapt as you work through the project (or abandon it altogether).” And note the abandon-it-altogether option. Don’t just proceed with a plan if what you learn makes proceeding unwise: too often we act based on expectations rather than evidence.
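A back-of-the-envelope illustration of why staging pays off (all the numbers here are my assumptions, not Mark’s): if a cheap first stage can reveal a dead end, the expected total spend drops well below committing the whole budget up front.

```python
import random

# Hypothetical numbers to illustrate the staging rule of thumb: spend at most
# ~25% of the budget on the first experiment, so a dead-end project can be
# abandoned cheaply instead of consuming the whole budget.
random.seed(1)
BUDGET = 100.0
P_DEAD_END = 0.4  # assumed fraction of projects the first stage reveals as dead ends

def cost_all_in():
    return BUDGET  # the whole budget is committed before anything is learned

def cost_staged():
    first_stage = 0.25 * BUDGET
    if random.random() < P_DEAD_END:
        return first_stage  # abandon after the cheap first stage
    return BUDGET           # the idea survives; spend the rest

trials = 10_000
avg_staged = sum(cost_staged() for _ in range(trials)) / trials
print(f"all-in: {cost_all_in():.0f}, staged: {avg_staged:.0f}")
# staged average ≈ 0.4*25 + 0.6*100 = 70, versus 100 for all-in
```

The simulation is trivial on purpose: the point is that iteration plus the option to abandon is worth money, not that these particular probabilities hold anywhere.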
In Why coaches regress to be mean, Mark explained the problem with reacting to common cause variation and “learning” that it helped to do so. “A case in point is the flight instructor who lavishes praise on a training-pilot who makes a lucky landing. Naturally the next result is not so good. Later the pilot bounces in very badly — again purely by chance (a gust of wind). The instructor roars disapproval. That seems to do the trick — the next landing is much smoother.” When you ascribe special causation to common cause variation you often confirm your own biases.
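The instructor’s “lesson” can be reproduced with nothing but random numbers. In this sketch (purely simulated data), successive landings are independent draws, yet the landing after a roar-worthy bounce is better on average, and the landing after a praise-worthy one is worse: regression to the mean, no instruction required.

```python
import random

# Landing "quality" as pure common-cause variation: independent draws from a
# normal distribution. Thresholds and counts are arbitrary illustrations.
random.seed(0)
landings = [random.gauss(0, 1) for _ in range(100_000)]

after_bad, after_good = [], []
for prev, nxt in zip(landings, landings[1:]):
    if prev < -1.5:    # a bounce worthy of a roar of disapproval
        after_bad.append(nxt)
    elif prev > 1.5:   # a lucky landing worthy of lavish praise
        after_good.append(nxt)

avg_after_bad = sum(after_bad) / len(after_bad)
avg_after_good = sum(after_good) / len(after_good)
print(round(avg_after_bad, 2), round(avg_after_good, 2))
# both averages are near 0: "improvement" after the roar and "decline" after
# the praise happen by chance alone
```

Since the draws are independent, the next landing averages out near zero no matter what the previous one looked like, which is exactly why the instructor “learns” that roaring works and praising backfires.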
Mark’s blog doesn’t mention six sigma by name in his 2011 posts, but the statistical thinking expressed throughout the year makes it a must-read for those working in six sigma programs.
Yearly figures, it should be noted, are neither to be ignored nor viewed as all-important. The pace of the earth’s movement around the sun is not synchronized with the time required for either investment ideas or operating decisions to bear fruit. At GEICO, for example, we enthusiastically spent $900 million last year on advertising to obtain policyholders who deliver us no immediate profits. If we could spend twice that amount productively, we would happily do so though short-term results would be further penalized. Many large investments at our railroad and utility operations are also made with an eye to payoffs well down the road.
At Berkshire, managers can focus on running their businesses: They are not subjected to meetings at headquarters nor financing worries nor Wall Street harassment. They simply get a letter from me every two years and call me when they wish. And their wishes do differ: There are managers to whom I have not talked in the last year, while there is one with whom I talk almost daily. Our trust is in people rather than process. A “hire well, manage little” code suits both them and me.
Cultures self-propagate. Winston Churchill once said, “You shape your houses and then they shape you.” That wisdom applies to businesses as well. Bureaucratic procedures beget more bureaucracy, and imperial corporate palaces induce imperious behavior. (As one wag put it, “You know you’re no longer CEO when you get in the back seat of your car and it doesn’t move.”) At Berkshire’s “World Headquarters” our annual rent is $270,212. Moreover, the home-office investment in furniture, art, Coke dispenser, lunch room, high-tech equipment – you name it – totals $301,363. As long as Charlie and I treat your money as if it were our own, Berkshire’s managers are likely to be careful with it as well.
At bottom, a sound insurance operation requires four disciplines… (4) The willingness to walk away if the appropriate premium can’t be obtained. Many insurers pass the first three tests and flunk the fourth. The urgings of Wall Street, pressures from the agency force and brokers, or simply a refusal by a testosterone-driven CEO to accept shrinking volumes has led too many insurers to write business at inadequate prices. “The other guy is doing it so we must as well” spells trouble in any business, but none more so than insurance.
I don’t agree with everything he says. And what works at one company obviously won’t work everywhere. Copying doesn’t work. Learning from others, understanding what makes a practice work, and then determining how to incorporate some of the ideas into your organization can be valuable. I don’t believe in “Our trust is in people rather than process.” I do believe in “hire well, manage little.” Exactly what those phrases mean is not necessarily straightforward. I believe you need to focus on creating a Deming-based management system, and that will require educating and coaching managers on how to manage such a system. But management decisions about day-to-day operations should be left to those working on the processes in question (often workers who are not managers, sometimes supervisors and managers, and sometimes senior executives).
“We [Israel] said, ‘We’re not going to do this. You’re going to find a way that will take care of security without touching the efficiency of the airport.’”
“The whole time, they are looking into your eyes — which is very embarrassing. But this is one of the ways they figure out if you are suspicious or not. It takes 20, 25 seconds,” said Sela. Lines are staggered. People are not allowed to bunch up into inviting targets for a bomber who has gotten this far.
Lean thinking: customer focus, value stream (don’t take actions that destroy the value stream to supposedly meet some other goal), respect for people [this is a much deeper concept than treat employees with respect], evidence based decision making (do what works – “look into your eyes”), invest in your people (Israel’s solution requires people that are good at their job and committed to doing a good job – frankly it requires engaged managers which is another thing missing from our system).
The USA solution if something suspicious is found in bag screening? Evacuate the entire airport terminal. Very poor design (it is hard to over-emphasize how poor this is). It will take time to design fixes into physical space, as it always does in lean thinking. But it has been nearly 10 years. Where is the progress?
A screener at Ben-Gurion has a pair of better options. First, the screening area is surrounded by contoured, blast-proof glass that can contain the detonation of up to 100 kilos of plastic explosive. Only the few dozen people within the screening area need be removed, and only to a point a few metres away.
Second, all the screening areas contain ‘bomb boxes’. If a screener spots a suspect bag, he/she is trained to pick it up and place it in the box, which is blast proof. A bomb squad arrives shortly and wheels the box away for further investigation.
“This is a very small, simple example of how we can simply stop a problem that would cripple one of your airports,” Sela said.
Lean thinking: design the workspace to the task at hand. Obviously done in one place and not the other. Also it shows the thought behind designing solutions that do not destroy the value stream, unlike the approach taken in the USA. And the better solution puts a design in place that gives primacy to safety: the supposed reason for all the effort.
Metrics should be operationally defined so that the data is collected properly. Without operational definitions, data collected by more than one person will often include measurement error (in this case, the resulting data shows the results of different people measuring different things but calling the result the same thing).
And without operational definitions, those using the resulting data may well misinterpret what it is saying. Often data is presented without an operational definition, and people think the data says something that it does not. I find that most often when people say statistics lie, they really made an incorrect assumption about what the data said – usually because they didn’t understand the operational definition of the data. Data can’t lie. People can. And people can intentionally mislead with data. But far more often people unintentionally mislead with data that is misunderstood (often due to a failure to operationally define the data).
At New York Hospital, Eichenwald and infectious disease specialist Henry Shinefield conceived and developed a controversial program that entailed deliberately inoculating a newborn’s nostrils and umbilical stump with a comparatively harmless strain of staph before 80/81 could move in. Shinefield had found the protective strain – dubbed 502A – in the nostrils of a New York Hospital baby nurse. Like a benign Typhoid Mary, Nurse Lasky had been spreading her staph to many of the newborns in her care. Her babies remained remarkably healthy, while those under the care of other nurses were falling ill.
This is a great example of a positive special cause. How would you identify it? First you would have to stratify the data. It also shows that sometimes looking at the “who” is important. The problem is that we far too often look at who instead of the system, so some get the idea that it is never ok to stratify data by person. Just be careful: we often do that when it is not the right approach, and we can be fooled by random variation into thinking there is a cause (see the red bead experiment for an example). Still, it is possible to stratify data by person to good effect.
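A sketch of what that stratification looks like in practice (the names besides Lasky, and all the counts, are invented; only the shape of the 502A story is kept): the aggregate infection rate hides the signal, and grouping the same records by nurse exposes the positive special cause.

```python
from collections import defaultdict

# Invented records: (nurse, infected?) per newborn, roughly mimicking the
# pattern in the 502A story -- one nurse's babies stay remarkably healthy
# while those cared for by others fall ill at a much higher rate.
records = (
    [("Lasky", 0)] * 30 + [("Lasky", 1)] * 1 +
    [("Adams", 0)] * 20 + [("Adams", 1)] * 8 +
    [("Baker", 0)] * 22 + [("Baker", 1)] * 7
)

counts = defaultdict(lambda: [0, 0])  # nurse -> [infected, total]
for nurse, infected in records:
    counts[nurse][0] += infected
    counts[nurse][1] += 1

rates = {nurse: inf / total for nurse, (inf, total) in counts.items()}
for nurse, rate in sorted(rates.items(), key=lambda kv: kv[1]):
    print(f"{nurse}: {rate:.0%}")
# Lasky's rate is far below the others -- a candidate positive special cause,
# invisible in the aggregate rate and only revealed by stratifying.
```

The red bead caveat applies to the code too: with small counts, a gap this size should prompt investigation (as it did at New York Hospital), not an immediate causal conclusion.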
The following 20 pages in the book are littered with very interesting details many of which tie to thinking systemically and the perils of optimizing part of the system (both when considering the system to be one person and also when viewing it as society).
Many people state that data can lie. Obviously data can’t lie.
There are three kinds of lies: Lies, damn lies and statistics – popularized by Mark Twain (who attributed it to Benjamin Disraeli)
Many people don’t understand the difference between being manipulated because they can’t understand what the data really says, and data itself “lying” (which, of course, doesn’t even make sense). The same confusion comes in when someone draws the wrong conclusion from the data that exists (and then blames the data for “lying” instead of themselves for drawing a faulty conclusion). The data can be wrong (and can even be made faulty intentionally by someone). Or someone can draw the wrong conclusion from data that is correct. But in neither case is the data lying. It is also common to believe the data means something other than what it does (leading to a faulty conclusion).
For a very simple example, take believing that if the average height for adults in the USA is 5 feet 9 inches, half the people must be taller and half must be shorter. You could then draw the conclusion that half the adults must be shorter than 5 feet 9 inches. But that is not what an average height means (it is basically what the median means, though if you want to get technical, it doesn’t mean exactly that either). You might draw the conclusion that the average height of an adult in California is 5 feet 9 inches, but that is not supported by data on the average height of an adult in the whole country. The same holds for concluding that 5 feet 9 inches is the average height of a woman. In this simple example, hopefully people can see the faulty reasoning, but such reasoning often goes on without consideration.
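The mean/median distinction is easy to see with a few invented numbers: one tall outlier pulls the mean up, so more than half the values sit below it.

```python
# Invented heights (in inches) with one tall outlier. The mean and median
# differ, and more than half the values fall below the mean.
heights = [60, 62, 63, 64, 65, 66, 67, 68, 80]

mean_height = sum(heights) / len(heights)
median_height = sorted(heights)[len(heights) // 2]  # middle value (odd-length list)

below_mean = sum(1 for h in heights if h < mean_height)
print(round(mean_height, 1), median_height, below_mean)  # → 66.1 65 6
```

Here 6 of 9 values are below the mean of about 66.1, while by construction exactly half are below (and half above) the median of 65.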
In a great speech, Marissa Mayer speaks of how Google makes decisions using data, and that data is apolitical. One benefit of this, she says, is that Google makes decisions based on what the data supports, not political considerations. The belief that basing decisions on what the data supports leads to better decisions can seem false to those who accept the quote about three types of lies (or those who see some weakness in the point when people supposedly basing decisions on data don’t really understand how to do so).
Jeffrey Pfeffer Testifies to Congress About Evidence-Based Practices [the broken link was removed]:
In this short statement, I want to make five points as succinctly as possible, providing references for background and documentation for my arguments. First, organizations in both the public and private sector ought to base policies not on casual benchmarking, on ideology or belief, on what they have done in the past or what they are comfortable with doing, but instead should implement evidence-based management. Second, the mere prevalence or persistence of some management practice is not evidence that it works — there are numerous examples of widely diffused and quite persistent management practices, strongly advocated by practicing executives and consultants, where the systematic empirical evidence for their ineffectiveness is just overwhelming. Third, the idea that individual pay for performance will enhance organizational operations rests on a set of assumptions. Once those assumptions are spelled out and confronted with the evidence, it is clear that many — maybe all — do not hold in most organizations. Fourth, the evidence for the effectiveness of individual pay for performance is mixed, at best — not because pay systems don’t motivate behavior, but more frequently, because such systems effectively motivate the wrong behavior. And finally, the best way to encourage performance is to build a high performance culture. We know the components of such a system, and we ought to pay attention to this research and implement its findings.
Interesting article on applying game theory to business decisions. Game theory is a tool that is not often used. Though most organizations are probably better off improving how they use the rest of their management tools, it is fun to read about and does have merit in the right situations. 16,777,236 [the broken link was removed] – That’s the number of outcomes that are possible when eight competitors each consider three strategic options.
California Institute of Technology professor R. Preston McAfee, a leading game theorist who helped the U.S. government design auctions for broadband spectrum, says doubters ought to remember that game theory is a tool, not an answer. “Game theory is sometimes criticized because it doesn’t actually completely solve the problem,” McAfee says. “On the other hand, the exercise of applying game theory very often clears up things that you can dispense with—issues that aren’t salient to the decision process. Sometimes just thinking it through identifies strategies that you hadn’t thought available.”
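As a toy illustration of McAfee’s point that the exercise itself clears things up (the players, strategies, and payoffs below are all invented), even brute-force enumeration of a small game shows which options survive scrutiny:

```python
from itertools import product

# A two-player pricing game with three options each, solved by brute force:
# a pure-strategy Nash equilibrium is a profile where neither player gains
# by unilaterally switching. All payoffs are made up for illustration.
STRATEGIES = ["low", "mid", "high"]

# payoff[(row_strategy, col_strategy)] = (row_payoff, col_payoff)
payoff = {
    ("low", "low"):   (2, 2), ("low", "mid"):   (5, 1), ("low", "high"):  (6, 0),
    ("mid", "low"):   (1, 5), ("mid", "mid"):   (4, 4), ("mid", "high"):  (7, 2),
    ("high", "low"):  (0, 6), ("high", "mid"):  (2, 7), ("high", "high"): (6, 6),
}

def is_nash(r, c):
    """True if neither player can improve by deviating alone."""
    best_r = max(payoff[(alt, c)][0] for alt in STRATEGIES)
    best_c = max(payoff[(r, alt)][1] for alt in STRATEGIES)
    return payoff[(r, c)][0] == best_r and payoff[(r, c)][1] == best_c

equilibria = [(r, c) for r, c in product(STRATEGIES, STRATEGIES) if is_nash(r, c)]
print(equilibria)  # → [('low', 'low')]: mutual high pricing pays more but is unstable
```

The answer it produces (a prisoner’s-dilemma-like race to low prices) matters less than what the enumeration dispenses with: most of the nine profiles are immediately ruled out, which is exactly the clearing-up McAfee describes.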