The “Illusion of Explanatory Depth”: How Much Do We Know About What We Know? [the broken link was removed] is an interesting post that touches on psychology and the theory of knowledge.
Often (more often than I’d like to admit), my son… will ask me a question about how something works, or why something happens the way it does, and I’ll begin to answer, initially confident in my knowledge, only to discover that I’m entirely clueless. I’m then embarrassed by my ignorance of my own ignorance.
I wouldn’t be surprised, however, if it turns out that the illusion of explanatory depth leads many researchers down the wrong path, because they think they understand something that lies outside of their expertise when they don’t.
I really like the title – it is more vivid than “theory of knowledge.” It is important to understand the systemic weaknesses in how we think in order to improve our thought process. We must question our beliefs (more often than we believe we need to), especially when looking to improve on how things are done.
If we question our beliefs and attempt to provide evidence supporting them, we will find it difficult to do so for many of the things we believe. That should give us pause. We should recognize the risk of relying on beliefs without evidence and, when warranted, look into getting evidence of what is actually happening.
There are 3 ways to improve the figures: distort the data, distort the system and improve the system. Improving the system is the most difficult.
Another example of this in practice: Recount helps one university rise in the rankings [the broken link was removed]:
Behnke, who says he’s no fan of rankings, said he recently spoke to a provost at another institution who was capping class sizes at 19 to boost the “Classes Under 20” number.
I am sure “classes under 20” is a proxy for an intimate learning environment and interaction with knowledgeable professors who can teach well. You can’t directly measure the benefit to learning of interacting with a professor in a small group in order to create data to be used in ranking schools (Deming on unknown and unknowable figures). So classes with under 20 students, % of faculty with PhDs… are used as proxies for this idea.
If the proxy is the focus (as in school rankings) then distorting the system to create better looking data is a likely result. The purpose behind the action has great significance. If an institution desired to create a better learning environment, used say a cause and effect diagram to find a group of problems, and then determined one appropriate improvement step was to reduce class size (and perhaps another was to reduce the importance of tests, and another was to provide professors training on effective teaching strategies), that is a sensible path to improving the system.
It’s important to listen to customers – but not follow their words without skepticism. Ask them to design your next product and you’re likely to miss the mark, suggests this Harvard Business Review excerpt.
Does the customer invent new product or service? The customer generates nothing. No customer asked for electric lights… No customer asked for photography… No customer asked for an automobile… No customer asked for an integrated circuit.
In response to: Why executives order reorgs [the broken link was removed]
“We trained hard… but it seemed that every time we were beginning to form up into teams we would be reorganized. I was to learn later in life that we tend to meet any new situation by reorganizing; and a wonderful method it can be for creating the illusion of progress while producing confusion, inefficiency, and demoralization.”
These lines, from the Satyricon of Petronius written 2,000 years ago…
Unfortunately it seems this quote is not actually his [the broken link was removed. Peter Scholtes first told me this quote wasn’t accurate, when he was in the process of researching it for his book, The Leader’s Handbook]. Instead, apparently someone attributed the quote to him to give it the weight of time. I think the sentiment expressed rings true and speaks to the experience of many.
The Improvement Guide: A Practical Approach to Enhancing Organizational Performance is an excellent handbook on making changes that are improvements rather than just a way to create the illusion of progress. The book uses three simple questions to frame the improvement strategy.
What are we trying to accomplish?
How will we know that a change is an improvement?
What changes can we make that will result in improvement?
The second question is rarely used. Without that question it is much easier to make vague statements that seem like reasons to change and why it would be an improvement. But if you have to document how you will know the change is successful, it becomes more difficult to change just for the appearance of improvement.
Once the organization does that regularly, the next step is to actually measure the results and validate the success or failure of the improvement efforts.
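As a minimal sketch of what answering the second question and then validating the result might look like (all metric names and numbers here are invented for illustration): commit to a measure and a success criterion before making the change, then check the measured results against that criterion afterward.

```python
# Hypothetical example: the success measure and target are agreed upon
# *before* the change is made; afterward the results are measured and
# compared against that pre-stated criterion. All numbers are invented.

baseline_defect_rate = 0.042   # measured before the change
target = 0.030                 # success criterion, documented in advance

# measurements collected after the change was implemented
post_change_samples = [0.031, 0.028, 0.027, 0.033, 0.029]
post_change_rate = sum(post_change_samples) / len(post_change_samples)

# the improvement claim is validated against the pre-stated target,
# not against a vague after-the-fact impression of progress
improved = post_change_rate <= target
print(f"before: {baseline_defect_rate:.4f}, "
      f"after: {post_change_rate:.4f}, met target: {improved}")
```

The point of the sketch is the ordering: because the target was written down before the change, the organization cannot quietly redefine success afterward to create the appearance of improvement.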
As a first step, we hope to collaborate with interested Googlers to find better ways to learn what works around the world. Identifying powerful solutions to poverty that are useful to people in different settings, and that are market-driven, scalable, and sustainable, is our greatest challenge. Second, we’re hoping to strengthen how the world measures both social and financial returns to investments in delivering critical goods and services to the poor. Like Google, we hold a deep belief in the power of measuring everything we can.
Google has done a fantastic job of using data to make decisions. In fact, so much so that some think they may go overboard trying to find an algorithm for everything. My dinner with Sergey [the broken link was removed].
There have been a number of articles (Ford to slash vendors of key parts [the broken link was removed] – Ford Rethinks Supply Strategy) and posts (A “Kinder and Gentler” Lean Supply Base? – Ford Adopts Toyota-style Supplier Strategy [the broken link was removed]) about supplier management, many springing from Ford’s announcement [the broken link was removed].
My first thought on reading the stories about the press release was, didn’t Ford already say they were going to do this in the 1980’s or 1990’s?
John Weiser, Spring 1997 Executive-in-Residence, Graduate School of Management:
When Ford Motor Company embraced the Deming initiative, Ford’s president told his suppliers:
“We are in the process of making a major change when it comes to dealing with our supplying companies. My goal is that this will become a truly partnership effort, rather than the type of arm’s length relationship that has all too often been the way we have worked in the past.”
Americans’ Dirty Secret Revealed by Bjorn Carey
See also: Google News on washing hands [the broken link was removed] – Soap and Detergent Association press release [another broken link was removed]
A study released recently spawned a flurry of articles on washing hands. I have seen such reporting before and again I find it interesting (as sad as that might be). The stories repeatedly say things like: “Men’s hands dirtier than women’s.” The study actually focused on the percentage of people who washed their hands. While there is likely a correlation, making such leaps in reporting data is not wise. This example is often mirrored in the data use of organizations, where interpretations of the data are given as the facts instead of the data itself.
However that is not what I find most interesting. Instead I find the lack of operational definition interesting. In many of the articles they have quotes like:
In a recent telephone survey, 91 percent of the subjects claimed they always washed their hands after using public restrooms. But, when researchers observed people leaving public restrooms, only 83 percent actually did so.
Only 75 percent of men washed their hands compared to 90 percent of women, the observations revealed.
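As a small illustration of why an operational definition matters here (all observations below are invented, not from the study): the reported “washed their hands” percentage depends entirely on how “washed” is defined, so the same observations can yield very different figures.

```python
# Hypothetical illustration: the same set of restroom observations
# produces different "hand washing" rates depending on the operational
# definition of "washed". All data below are invented.

observations = [
    # (used_water, used_soap, seconds)
    (True, True, 20),
    (True, False, 5),
    (True, True, 8),
    (False, False, 0),
    (True, False, 15),
]

def rate(definition):
    """Percentage of observations that satisfy a given definition."""
    passed = sum(1 for obs in observations if definition(*obs))
    return 100 * passed / len(observations)

# definition 1: any contact with water counts as washing
any_water = rate(lambda water, soap, secs: water)

# definition 2: soap and at least 20 seconds of washing required
soap_20s = rate(lambda water, soap, secs: water and soap and secs >= 20)

print(f"'washed' = any water:           {any_water:.0f}%")
print(f"'washed' = soap for 20 seconds: {soap_20s:.0f}%")
```

With these invented observations the first definition reports 80% and the second reports 20%, which is why a figure like “83 percent washed their hands” is not meaningful without stating which definition the observers applied.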
Claims are often made about results that are justified only by unstated assumptions about the real-world conditions the data are meant to represent. Those claims are undermined when no evidence is provided that the assumptions are valid. Without operational definitions for the data there is a significant risk of making claims about what the data means that are not valid.