Jerome Groopman (photo) is a doctor who discovered that he needed a doctor. When his hand was hurt, he went to six prominent surgeons and got four different opinions about what was wrong. Groopman was advised to have unnecessary surgery and got a seemingly made-up diagnosis for a nonexistent condition. Groopman, who holds a chair in medicine at Harvard Medical School, eventually found a doctor who helped…
“Usually doctors are right, but conservatively about 15 percent of all people are misdiagnosed. Some experts think it’s as high as 20 to 25 percent,” Groopman tells Steve Inskeep. “And in half of those cases, there is serious injury or even death to the patient.”
Errors in thinking: We use shortcuts. Most doctors, within the first 18 seconds of seeing a patient, will interrupt him telling his story and also generate an idea in his mind [of] what’s wrong. And too often, we make what’s called an anchoring mistake – we fix on that snap judgment.
An understanding of the theory of knowledge is helpful to counteract errors in thinking. How we think is not perfect, and an understanding of the weaknesses and faulty conclusions we are susceptible to making is helpful. That understanding can help us avoid jumping to faulty conclusions and design systems that counteract such behavior.
If the output of a year's work is a square, and the job is to produce dark squares, who do you pay more: the person who produced square A or the person who produced square B? Of course it is a trick question; the squares are the same color. But it doesn't look that way at first, does it? Optical illusions provide evidence that you cannot always trust what seems obvious.
Dr. Deming’s red bead experiment provides some additional insight into the idea that our management systems often use “evidence” to support our beliefs when in fact the “evidence” does not mean what we think it does. Dr. Deming included the theory of knowledge (how do we know what we know) as one of the four areas of his management system. It is the area of his work that is least appreciated and understood by managers today. Optical illusions provide a simple reminder of how easily we can think we know things that are not so.
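The point of the red bead experiment can be sketched in a few lines of code. This is a minimal simulation (my own illustration, not Deming's actual apparatus or numbers): several "willing workers" each scoop beads from the same box, where a fixed fraction of beads are red ("defective"). Every difference in their red counts comes from the sampling system, not from the workers, yet the numbers look like evidence of individual performance.

```python
# Minimal sketch of the red bead experiment: all variation in "defects"
# comes from the system (random sampling), not from the workers.
import random

random.seed(1)  # fixed seed so the illustration is reproducible

NUM_WORKERS = 6
BEADS_PER_SCOOP = 50
RED_FRACTION = 0.20  # the system's defect rate; identical for everyone

def scoop_red_count():
    """Count red beads in one scoop: each bead is red with probability 0.2."""
    return sum(random.random() < RED_FRACTION for _ in range(BEADS_PER_SCOOP))

results = {f"worker_{i + 1}": scoop_red_count() for i in range(NUM_WORKERS)}
for name, reds in results.items():
    print(f"{name}: {reds} red beads out of {BEADS_PER_SCOOP}")

# The counts differ worker to worker, but every worker used the identical
# process. Ranking, rewarding, or blaming them on these numbers rewards
# random noise -- "evidence" that does not mean what we think it does.
```

Running this a few times with different seeds makes the lesson vivid: the "best" and "worst" workers change from run to run, even though nothing about the workers changed at all.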
It is important to question what you believe; even when it is as obvious as the A square being darker than the B square. Understanding the ease with which we can reach false conclusions can be a powerful aid in improving management decision making.
Never be satisfied that your current viewpoint is complete and accurate; instead, create a climate of never-ending continual improvement. By continually questioning and seeking improvement we can avoid the traps our brains lay for us.
The “Illusion of Explanatory Depth”: How Much Do We Know About What We Know? (broken link 🙁 was removed) is an interesting post that touches on psychology and theory of knowledge.
Often (more often than I’d like to admit), my son… will ask me a question about how something works, or why something happens the way it does, and I’ll begin to answer, initially confident in my knowledge, only to discover that I’m entirely clueless. I’m then embarrassed by my ignorance of my own ignorance.
I wouldn’t be surprised, however, if it turns out that the illusion of explanatory depth leads many researchers down the wrong path, because they think they understand something that lies outside of their expertise when they don’t.
I really like the title – it is more vivid than “theory of knowledge.” It is important to understand the systemic weaknesses in how we think in order to improve our thought process. We must question our conclusions (more often than we believe we need to), especially when looking to improve how things are done.
If we question our beliefs and attempt to provide evidence supporting them, we will find that difficult to do for many of the things we believe. That should give us pause. We should recognize the risk of relying on beliefs without evidence and, when warranted, look into getting evidence of what is actually happening.
I attended the annual W. Edwards Deming Institute conference this weekend: it was quite good. Tom Nolan [the broken link was removed] led off the conference with: Developing and Applying Theory to Get Results.
Theory of knowledge is also something people have difficulty relating to what they do every day. The most obvious connection, I believe, is the understanding that much of what is “known” is not so. People manage with faulty beliefs. With an understanding of the theory of knowledge, decision making can be guided to avoid the pitfalls of basing decisions on faulty beliefs. This is, of course, just one aspect of how the theory of knowledge impacts Deming’s management system.
Tom Nolan also discussed some interesting work that Paul Carlie and Clayton Christensen are doing based on descriptive “theory” and normative theory. My simple explanation is that descriptive theory reports on what is seen. This can be interesting, but has problems when people assign causation based on just observation (without experimentation). Normative theory involves testing theories (such as is done with the scientific method). Good article on this by Carlie and Christensen: The Cycles of Theory Building in Management Research [the broken link was removed].
Tom also discussed the PDSA cycle (he co-authored the best book on applying the PDSA cycle to improve: The Improvement Guide). One point he made was that organizations often fail to properly “turn” the PDSA cycle: instead of running through it quickly 5-15 times, they make one huge, slow run through the cycle. One slow turn is much less effective than using the cycle as intended, to quickly test and adapt and test and adapt…
re: post on prediction [link broken, so removed] on the Deming Electronic Network:
Petter Ogland wrote:
…that intelligence more or less boils down to updating a predictive model of the world. As far as I can see, this is the C.I. Lewis epistemology that Shewhart and Deming based their philosophy upon.
…but is there any kind of operational definition for ‘prediction’ that would explain what Deming means when he uses this word in various contexts?
I think your first point is correct, which I see as: learning by predicting, then looking at the results, and then adjusting your understanding based on this new information is very powerful.
I believe Deming’s thoughts about prediction are most effectively put into action using the PDSA cycle. Specifically, you must predict the results in the planning phase (prior to piloting improvements). I find that this is rarely done. I don’t think the form of the prediction is critical (a narrative with loose numerical guesses, a precise numerical prediction…). The critical issue is making the prediction, then comparing the results to that prediction, and then figuring out how your original understanding can be improved based on the new data.
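The predict-then-compare habit described above can be sketched in code. This is a minimal illustration of my own (the function name, numbers, and tolerance are hypothetical, not from Deming or the post): record a prediction in the Plan step, then in the Study step compare the observed result to that prediction, so the gap between the two drives what you learn and what you re-plan.

```python
# Minimal sketch of prediction inside a PDSA cycle: the Plan step records
# a prediction; the Study step compares the observed result to it.

def study(prediction, observed, tolerance):
    """Study step: compare the observed result to the Plan-step prediction."""
    gap = observed - prediction
    if abs(gap) <= tolerance:
        return f"prediction held (gap {gap:+.1f}): adopt or expand the change"
    return f"prediction missed (gap {gap:+.1f}): revise your theory and re-plan"

# Plan: predict the pilot change will cut average handling time by about
# 5 minutes (a hypothetical number, for illustration only).
predicted_reduction = 5.0

# Do: run the pilot. Study: compare the measured reduction to the prediction.
print(study(predicted_reduction, observed=1.5, tolerance=1.0))
# -> prediction missed (gap -3.5): revise your theory and re-plan
```

The specific numbers matter far less than the discipline: without a recorded prediction, any result can be rationalized after the fact, and no learning occurs.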