I attended the annual W. Edwards Deming Institute conference this weekend: it was quite good. Tom Nolan [the broken link was removed] led off the conference with: Developing and Applying Theory to Get Results.
He discussed the theory of knowledge: how we know what we know. See my attempt to introduce the idea of the theory of knowledge within Deming’s management system. It is probably the least understood of Deming’s four areas of profound knowledge; the other areas are knowledge of variation, appreciation for a system, and psychology.
Theory of knowledge is also something people have difficulty relating to what they do every day. The most obvious connection, I believe, is the understanding that much of what is “known” is not so. People manage with faulty beliefs. With an understanding of the theory of knowledge, decision making can be guided to avoid the pitfalls of basing decisions on faulty beliefs. This is, of course, just one aspect of how the theory of knowledge impacts Deming’s management system.
Tom Nolan also discussed some interesting work that Paul Carlie and Clayton Christensen are doing based on descriptive “theory” and normative theory. My simple explanation is that descriptive theory reports on what is seen. This can be interesting, but it has problems when people assign causation based on observation alone (without experimentation). Normative theory involves testing theories (as is done with the scientific method). A good article on this by Carlie and Christensen: The Cycles of Theory Building in Management Research [the broken link was removed].
Tom also discussed the PDSA cycle (he co-authored the best book on applying the PDSA to improve: The Improvement Guide). One point he made was that he often finds organizations fail to properly “turn” the PDSA cycle: instead of running through it 5-15 times quickly, they do one huge run through the cycle. One slow turn is much less effective than using it as intended, to quickly test and adapt and test and adapt…
In my experience people have difficulty articulating a theory to test (which limits the learning that can be gained). He offered a strategy to help with this: write down the key outcome that is desired. Then list the main drivers that impact that outcome. Then list design changes for each driver to be tested with the PDSA cycle. This simple process seems likely, to me, to help improve the use of the PDSA cycle to improve. When PDSA does not involve testing a theory it becomes just trial and error (try one thing, then another, then another).
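The outcome → drivers → design-changes strategy above can be sketched as a simple data structure. This is only a hypothetical illustration of how the pieces relate; the outcome, drivers, and change ideas below are invented examples, not from the talk.

```python
# Hypothetical sketch of the strategy: one desired outcome, the drivers
# that influence it, and candidate design changes for each driver.
# Each (driver, change) pair would be one small, quick PDSA test.
from dataclasses import dataclass, field

@dataclass
class Driver:
    name: str
    change_ideas: list = field(default_factory=list)  # changes to test via PDSA

@dataclass
class ImprovementPlan:
    outcome: str  # the key outcome that is desired
    drivers: list = field(default_factory=list)

    def pdsa_tests(self):
        """Yield (driver, change) pairs; each pair is one PDSA cycle to run."""
        for driver in self.drivers:
            for change in driver.change_ideas:
                yield (driver.name, change)

# Invented example data for illustration only.
plan = ImprovementPlan(
    outcome="shorter patient wait times",
    drivers=[
        Driver("scheduling", ["stagger appointment slots"]),
        Driver("staffing", ["cross-train front-desk staff"]),
    ],
)

for driver, change in plan.pdsa_tests():
    print(f"PDSA cycle: test '{change}' (driver: {driver})")
```

The point of writing it down this way is that every change being tried is tied back to a driver and a predicted outcome, so an unexpected result tells you which part of your theory to revise, rather than leaving you with bare trial and error.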
If you use the PDSA cycle to test a theory and get a result that is not expected, this should cause you to adjust your theory and improve your understanding, which will improve your decisions going forward. He also discussed the importance of understanding operational definitions: data is not an abstract measure, but a measure based on the operational definitions used when collecting the data.
He also discussed the 100,000 lives campaign. Quotes he used: “Management is prediction” – W. Edwards Deming; “There is no substitute for knowledge” – W. Edwards Deming; “Knowing is not enough, we must apply” – Goethe.
I actually wrote more than I thought I would on this but still I only scratched the surface of what he presented. I hope to add a couple more posts on other thoughts from the conference.