Tag Archives: evidence based management

The Importance of Critical Thinking and Challenging Assumptions

Many factors are important to effectively practicing the management improvement ideas I have discussed on this blog for over a decade. One of the most important is a culture that encourages critical thinking and the challenging of claims, decisions and assumptions.

I discussed this idea in Customers Are Often Irrational. There is a difference between saying that people wish to have their desires met and saying that they act in a manner that maximizes the benefits they wish to receive.

It is important to study customers' choices and learn from them. But being deceived by what those choices mean is easier than is usually appreciated. Often the decision made is contrary to the ideal choice based on the person's own beliefs. That is often poor decision making, not an indication that they really want a different result than the one they express (as comparing revealed and stated preferences can show). People who ignore the evidence behind climate change and condemn coastal areas to severe consequences don't necessarily prefer the consequences their decision leads to. It may well be that the decision to ignore the evidence is not based on a desire to suffer long term consequences in order to get short term benefits. It may be simply an inability to evaluate evidence effectively (fear of challenging ourselves to learn about matters we find difficult often provides a strong incentive to avoid doing so).

Knowing the difference between choosing short term benefits over long term consequences and failing to comprehend the long term consequences is important. Just as in this example, many business decisions have at their root a desire to pretend we can ignore the consequences of our decisions, and a desire to accept falsehoods that let us avoid coping with difficult problems.

photo of me with a blackboard in my father's office

Photo of me and my artwork in my father’s office by Bill Hunter

It is important to clearly articulate the details of the decision making process. We need to name the actual criticism (faulty logic, incorrect beliefs/assumptions...) that leads to what some feel is a poor conclusion. But we seem to shy away from questioning faulty claims (beliefs that are factually incorrect – that vaccines don't save people from harm, for example), lack of evidence (no data) or poor reasoning (drawing unsupported conclusions from a well defined set of facts).

Critical thinking is important to applying management improvement methods effectively. It is important to know when decisions are based on evidence and when decisions are not based on evidence. It can be fine to base some decisions on principles that are not subject to rational criticism. But it is important to understand the thought process that is taken to make each decision. If we are not clear on the basis (evidence or opinion regardless of evidence) we cannot be as effective in targeting our efforts to evaluate the results and continually improve the processes in our organizations.

Describing a decision as “irrational” is so imprecise that it is hard to evaluate how much merit the criticism has. If specific facts are called into question, or logical fallacies within the decision making process are explained, the criticism provides specific items to explore in evaluating whether it has merit.

When specific criticisms are made clear then those supporting such a decision can respond to the specific issues raised. And in cases where the merits of one course of action cannot be agreed to then such critical thought can often be used to create measures to be used to evaluate the effectiveness of the decision based on the results. Far too often the results are not examined to determine if they actually achieved what was intended. And even less often is care taken to examine the unintended consequences of the actions that were taken.

Continue reading

Integrating Technical and Human Management Systems

ASQ has asked the Influential Voices on quality management to look at the question of integrating technical quality and human management systems. How do different systems—technical or human—work together? How should they work together?

My view is that the management system must integrate these facets together. A common problem that companies face is that they bring in technical tools (such as control charts, PDSA improvement cycle, design of experiments, kanban, etc.) without an appreciation for the organization as a system. Part of understanding the organization as a system is understanding psychology within this context (as W. Edwards Deming discussed frequently and emphasized in his management system).

To try and implement quality tools without addressing the systemic barriers (due to the management system and specifically the human component of that system) is a path to very limited success. The failure to address how the organization’s existing management system drives behaviors that are often counter to the professed aims of the organization greatly reduces the ability to use technical tools to improve.

If the organization rewards those in one silo (say purchasing) based on savings they make in cutting the cost of supplies, it will be very difficult for the organization to optimize the system as a whole. If the purchasing department gets bonuses and promotions by cutting costs, that is where they will focus, and the total costs to the organization are not going to be their focus. Attempts to create ever more complex extrinsic incentives to make sure the incentives don't lead to sub-optimization are rarely effective. They can avoid the most obvious sub-optimization but rarely lead to anything close to actually optimizing the overall system.

image of the cover of Management Matters by John Hunter

Management Matters by John Hunter

It is critical to create an integrated system that focuses on letting people use their brains to continually improve the organization. This process doesn’t lend itself to easy recipes for success. It requires thoughtful application of good management improvement ideas based on the current capabilities of the organization and the short, medium and long term priorities the organization is willing to commit to.

There are principles that must be present:

  • a commitment to treating everyone in the organization as a valuable partner
  • allowing those closest to issues to figure out how to deal with them (and to provide them the tools, training and management system necessary to do so effectively) – see the last point
  • a commitment to continual improvement, learning and experimentation
  • providing everyone the tools (often, this means mental tools as much as physical tools or even quality tools such as a control chart). By mental tools, I mean the ability to use the quality tools and concepts. This often requires training and coaching in addition to a management system that allows it. Each of these is often a problem that is not adequately addressed in most organizations.
  • an understanding of what data is and is not telling us.

An integrated management system with an appreciation for the importance of people centered management is the only way to get the true benefit of the technical tools available.

I have discussed the various offshoots of the ideas discussed here, and delved into more detail, in many previous posts and in my book – Management Matters: Building Enterprise Capability. An article by my father also addresses this area very well, explaining how to capture and improve using two resources, largely untapped in American organizations: potential information and employee creativity. It is only by engaging the minds of everyone that the tools of “technical” quality will deliver even a decent fraction of the benefit they can provide when used well.

Continue reading

Don’t Ignore Customer Complaints

I find Paul Graham’s ideas very useful. I disagree with his recent tweet though.

tweets from Paul Graham

Update: See note at bottom of the post – Paul tweeted that his original tweet was wrong.

Base your assessment of the merit of an idea on the actual merit of the idea, not the category you place the person in that is expressing the idea.

His reply tweet addresses the problem with the first one in a very specific case. But your management system, “policies,” products and services have “bugs” too. Few customers will bother to voice those problems. Rather than ignoring some of what you hear, you should evaluate the merit of the complaint.

If the complaint is not something that should be addressed or explored fine. But that has nothing to do with the category of the person (“complainer” or not); it has to do with the merit of the complaint.

I understand some people are annoying because they make lots of meritless complaints. Ignoring the meritless complaints is fine with me. But just as I think ignoring advice because the person giving the advice doesn’t follow it is a bad practice I think having a policy of basing decisions on something other than the merit of the complaint/suggestion is unwise.

This is especially true since organizations on the whole do a lousy job of listening to customers and understanding customer desires. We need to greatly enhance the practice of customer focus, not seek to reduce it. Every organization is unique, however, and if customer focus is exceptionally great I can understand the idea of the tweet: we are devoted to customer focus and each new insight, but we have taken it too far and need to discriminate better. I still think discriminating based on the merit of the complaint is better than doing so based on our categorization of the complainer, but in that case (which is very rare in organizations) the advice isn't nearly as bad as it is for most organizations.

Continue reading

Taking Risks Based on Evidence

My opinion has long been that football teams are too scared to take an action that is smart but opens the coach to criticism. So instead of attempting to make it on 4th down (if you don’t understand American football, just skip this post) they punt because that is the decision that is accepted as reasonable.

So instead of doing what is wise they do what avoids criticism. Fear drives them to take the less advantageous action. Now, I have never looked hard at the numbers, but my impression is that going for it on 4th down is often well worth the risk. In a quick search I don't see the paper by a Harvard professor on going for it on 4th down (this article refers to it also – Fourth down: To punt or to go?), but I found one by a University of California, Berkeley economist: David Romer's “Do Firms Maximize? Evidence from Professional Football.”

On the 1,604 fourth downs in the sample for which the analysis implies that teams are on average better off kicking, they went for it only nine times. But on the 1,068 fourth downs for which the analysis implies that teams are on average better off going for it, they kicked 959 times.

My guess is that the advantage of going for it on 4th down is greater for high school than for college, and greater for college than for the pros (but I may be wrong). My guess is that this difference grows with the yardage needed. Basically my feeling is that variation is very high in high school and decreases with greater skill, experience and preparation. Also, kicking ability (punting and field goals) affects the choice of going for it on 4th down, and that ability increases dramatically in college. So if I am correct, pro coaches should be more aggressive on 4th down, but likely less aggressive than high school coaches should be.

But in any event the data should be explored and strategies should be tested.
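To make that exploration concrete, here is a minimal sketch of the expected-value comparison involved. The probabilities and point values are made up for illustration; they are not from Romer's paper.

```python
# Hypothetical numbers for illustration only -- not Romer's model or data.
def expected_points(success_prob, ep_success, ep_fail):
    """Expected points from going for it on 4th down."""
    return success_prob * ep_success + (1 - success_prob) * ep_fail

# An imagined 4th-and-2 near midfield:
go = expected_points(success_prob=0.60, ep_success=2.0, ep_fail=-1.5)
punt = -0.5  # assumed expected points after an average punt
print(f"go for it: {go:+.2f} expected points, punt: {punt:+.2f}")
```

Even with rough numbers, framing the decision this way turns "what avoids criticism" into "what the evidence suggests" – and the estimates themselves can then be tested and refined against real results.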

Continue reading

Resources for Using the PDSA Cycle to Improve Results

graphic image showing the PDSA cycle

PDSA Improvement cycle graphic from my book – Management Matters

Using the PDSA cycle (plan-do-study-act) well is critical to building an effective management system. This post provides some resources to help use the improvement cycle well.
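The shape of the cycle can be sketched in a few lines. This is a rough sketch only; the function names are illustrative, not from any particular library.

```python
# A minimal sketch of iterating the PDSA cycle. The plan/do/study/act
# callables are supplied by the user; names here are illustrative.
def pdsa(plan, do, study, act, cycles=3):
    theory = plan()                       # Plan: state a theory and a prediction
    for _ in range(cycles):
        result = do(theory)               # Do: run a small-scale test
        learning = study(theory, result)  # Study: compare results to the prediction
        theory = act(theory, learning)    # Act: adopt, adapt or abandon, then repeat
    return theory
```

The point of the skeleton is the loop: a single pass is not really PDSA, because the iteration is where the learning compounds.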

I have several posts on this blog about using the PDSA cycle to improve results including:

The authors and consultants with Associates for Process Improvement have the greatest collection of useful writing on the topic. They wrote two indispensable books on process improvement through experimentation: The Improvement Guide and Quality Improvement Through Planned Experimentation. And they have written numerous excellent articles, including:

Related: Good Process Improvement Practices – The Art of Discovery (George Box) – Planning requires prediction. Prediction requires a theory. (Ron Moen)

The Art of Discovery

Quality and The Art of Discovery by Professor George Box (1990):


Quotes by George Box in the video:

“I think of statistical methods as the use of science to make sense of numbers”

“The scientific method is how we increase the rate at which we find things out.”

“I think the quality revolution is nothing more, or less, than the dramatic expansion of scientific problem solving using informed observation and directed experimentation to find out more about the process, the product and the customer.”

“It really amounts to this, if you know more about what it is you are doing then you can do it better and you can do it cheaper.”

“We are talking about involving the whole workforce in the use of the scientific method and retraining our engineers and scientists in a more efficient way to run experiments.”

“Tapping into resources:

  1. Every operating system generates information that can be used to improve it.
  2. Everyone has creativity.
  3. Designed experiments can greatly increase the efficiency of experimentation.”

An informed observer and directed experimentation are necessary for the scientific method to be applied. He notes that the control chart is used to alert an informed observer to explain what is special about the conditions when a result falls outside the control limits. When the chart indicates a special cause is likely present (something not part of the normal system), an informed observer should think about what special cause could lead to the measured result. And it is important this is done quickly, as the knowledgeable observer's ability to determine what is special is much greater the closer in time they are to when the result was created.
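That signal can be sketched in a few lines of code, assuming a simple individuals chart with 3-sigma limits computed from a stable baseline (a simplification: real individuals charts usually derive sigma from moving ranges rather than the plain standard deviation used here):

```python
import statistics

def control_limits(baseline):
    """3-sigma limits from a stable baseline sample. (Simplified: real
    individuals charts usually derive sigma from moving ranges.)"""
    mean = statistics.mean(baseline)
    sd = statistics.stdev(baseline)
    return mean - 3 * sd, mean + 3 * sd

def special_cause_points(data, lcl, ucl):
    """Indices of results outside the limits -- the signal for an informed
    observer to promptly look for a special cause."""
    return [i for i, x in enumerate(data) if x < lcl or x > ucl]

lcl, ucl = control_limits([10, 11, 9, 10, 12, 10, 9, 11])
flagged = special_cause_points([10, 11, 25], lcl, ucl)  # only the 25 is flagged
```

Everything inside the limits is left alone (common cause variation); only the flagged points trigger the search for what was special.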

The video was posted by Wiley (with the permission of George's family), the publisher of George's recent autobiography, An Accidental Statistician: The Life and Memories of George E. P. Box, and many of his other books.

Related: Two resources, largely untapped in American organizations, are potential information and employee creativity – Statistics for Experimenters (book on directed experimentation by Box, Hunter and Hunter) – Highlights from 2009 George Box Speech – Introductory Videos on Using Design of Experiments to Improve Results (with Stu Hunter)

Indirect Improvement

Often the improvements that have the largest impact are focused on improving the effectiveness of thought and decision making. Improving the critical thinking in an organization has huge benefits over the long term.

My strategy for improving critical thinking is not to make it the focus of some new effort. Instead, the ability to reason more effectively will be an outcome of things such as: PDSA projects (where people learn that theories must be tested and that “solutions” often fail if you bother to look at the results...), understanding variation (using control charts; reading a bit of material on variation, using data effectively, correlation isn't causation, etc.), and using evidence based management (don't make decisions based on the authority of the person speaking but on the merit of what is said).

These things often take time. And they support each other. As people start to understand variation, the silly discussions of what special cause created a result that is within the expected outcomes of the existing process are eliminated. As people learn what conclusions can, and can't, be drawn from data, the discussions change. The improvement in how decisions are made is huge.

As people develop a culture of evidence based management, if HiPPOs (Highest Paid Person's Opinion) try to push through decisions based on authority without supporting evidence, those attempts are seen for what they are. This presents a choice: the organization either discourages those starting to practice evidence based decision making (reverting to old school authority based decision making) or the culture strengthens that practice and HiPPO decision making decreases.

Building critical thinking practices in the organization creates an environment that supports the principles and practices of management improvement. The way to build those skills is through using quality tools and practices, with reminders of the principles as projects are done (so, until understanding variation is universal, continually pointing out that general principle using the specific data in the current project).

The gains made through the direct application of the tools and practices are wonderful. But the indirect benefit of the improvement in critical thinking is larger.

Related: Data Can't Lie – Growing the Application of Management Improvement Ideas in Your Organization – Build Systems That Allow Quick Action – Don't Just Try and Run Faster – Bad Decisions Flow From Failing to Understand Data and Failing to Measure Results of Changes

Ackoff: Corporations Are Not Led By Those Seeking to Maximize Shareholder Value

If I had to limit myself to a handful of management experts, Russell Ackoff would definitely be in that group. Thankfully there is no such limit. Ackoff once again provides great insight with great wit in the above clip.

A corporation says that its principal value is maximizing shareholder value. That's nonsense. If that were the case executives wouldn't fly around on private jets and have Philippine mahogany lined offices and the rest of it. The principal function of those corporations is to provide those executives with the quality of work life that they like. And profit is merely a means which guarantees their ability to do it.

If we are going to talk about values, we got to talk about what the values are in action, not in proclamation.

Related: Ackoff, Idealized Design and Bell Labs – Dr. Russell Ackoff Webcast on Systems Thinking – A Theory of a System for Educators and Managers – CEOs Plundering Corporate Coffers

Customers Are Often Irrational

Penney Pinching

“The first rule is that there are no irrational customers,” Drucker wrote in Management: Tasks, Responsibilities, Practices. “Customers almost without exception behave rationally in terms of their own realities and their own situation.”

“in terms of their own realities and their own situation” is a huge caveat. Essentially, plenty of customers behave irrationally by any sensible definition of rational. I agree that to make them customers, and keep them as customers, you need to develop theories that can make sense of their behavior. And behaving irrationally doesn't mean behaving randomly (chaotically, unpredictably, uncontrollably). Customers can be predictably irrational (as a group).

Seeing that people will choose* to fly lousy airlines because the initial price quoted is a little cheaper than an alternative (or because they are in a frequent flyer program), you can say the customer is behaving rationally if you want. Coming up with some convoluted way to see their decision as rational, when based solely on their desired outcomes (and cost factors etc.) it is not, seems like a bad idea to me. Instead, figure out models for how they fail to behave rationally.

They consistently choose an option they shouldn't rationally want: saving an amount of money they care about far less than the pain they will experience, and then complain about having to suffer because they chose to deal with the badly run airline. That isn't rational. It is a common choice though.

The problem is not in thinking the customers are being irrational for not buying what you are selling. The problem is in thinking the customers will behave rationally. Your theory should not expect rational behavior.

There are plenty of other examples where customers make irrational decisions. I don't think calling them rational (within the irrationality of their “own realities”) makes sense. People will buy an item because they think it is a better bargain, paying more for the same thing, simply because the store originally charged more and now has it on sale. Anchoring isn't an understanding of how people are rational. It is an understanding of how psychology influences people in ways that are not rational.

Continue reading

Trust But Verify

The following are my comments, which were sparked by question “Trust, but verify. Is this a good example of Profound Knowledge in action?” on the Linked In Deming Institute group.

Trust but verify makes sense to me. I think of verify as process measures to verify the process is producing as it should. By verifying you know when the process is failing and when to look for special causes (when using control chart thinking with an understanding of variation). There are many ways to verify that would be bad. But the idea of trust (respect for people) is not just a feel-good, “be nice to everyone and good things happen”, in Deming’s System of Profound Knowledge.

I see the PDSA improvement cycle as another example of a trust-but-verify idea. You trust the people at the gemba to do the improvement. They predict what will happen. But they verify what does actually happen before they run off standardizing and implementing. I think many of us have seen what happens when the idea of letting those who do the work, improve the process, is adopted without a sensible support system (PDSA, training, systems thinking…). It may actually be better than what was in place, but it isn’t consistent with Deming’s management system to just trust the people without providing methods to improve (and education to help people be most effective). Systems must be in place to provide the best opportunity to succeed. Trusting the people that do the work, is part of it.

I understand there are ways to verify that would be destructive. But I do believe you need process measures to verify systems are working. Just trusting people to do the right thing isn’t wise.

A checklist is another way of “not-trusting.” I think checklists are great. It isn't that I don't trust people to try to do the right thing. I just don't trust people alone, when systems can be designed with verification that improves performance. I hear people complain that checklists “don't respect my expertise” or take the attitude that they are “insulting to me as a professional” – you should just trust me.

Sorry, driving out fear (and building trust – one of Deming's 14 points) is not about catering to every person's desire. In Deming's System of Profound Knowledge, respect for people is part of a system that requires understanding variation, systems thinking, an understanding of psychology and the theory of knowledge. Checklists (and other forms of verification) are not an indication of a lack of trust. They are a form of process measure (in a way) that has been proven to improve results.

Continue reading

2011 Management Blog Roundup: Stats Made Easy

The 4th Annual Management Blog Roundup is coming to a close soon. This is my 3rd and final review post looking back at 2011; the previous two posts looked at Gemba Panta Rei and the Lean Six Sigma Blog.

I have a special affinity for the use of statistics to understand and improve. I imagine it is both genetic and psychological. My father was a statistician and I have fond memories of applying statistical thinking to understand a result or system. I am also comfortable with numbers and, like most people, enjoy working with things I have an affinity for.

photo of Mark Anderson

Mark Anderson

Mark Anderson's Stats Made Easy blog brings statistical thinking to managers. And this is not an easy thing to do; as one of his posts shows, we have an ability to ignore data we don't want to know. Wrong more often than right but never in doubt: “Kahneman examined the illusion of skill in a group of investment advisors who competed for annual performance bonuses. He found zero correlation on year-to-year rankings, thus the firm was simply rewarding luck. What I find most interesting is his observation that even when confronted with irrefutable evidence of misplaced confidence in one’s own ability to prognosticate, most people just carry on with the same level of self-assurance.”

The actual practice of experimentation (PDSA...) needs improvement. Too often the iteration component is entirely missing (only one experiment is done). That is likely partially a result of another big problem: the experiments are not nearly short enough. Mark offered very wise advice on the Strategy of experimentation: Break it into a series of smaller stages. “The rule-of-thumb I worked from as a process development engineer is not to put more than 25% of your budget into the first experiment, thus allowing the chance to adapt as you work through the project (or abandon it altogether).” And note that “abandon it altogether” option. Don't just proceed with a plan if what you learn makes that option unwise: too often we act based on expectations rather than evidence.

In Why coaches regress to be mean, Mark explained the problem with reacting to common cause variation and “learning” that it helped to do so. “A case in point is the flight instructor who lavishes praise on a training-pilot who makes a lucky landing. Naturally the next result is not so good. Later the pilot bounces in very badly — again purely by chance (a gust of wind). The instructor roars disapproval. That seems to do the trick — the next landing is much smoother.” When you ascribe special causation to common cause variation you often confirm your own biases.
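A quick simulation shows the trap. If landing quality is pure common-cause variation, the landing after an unusually bad one tends to be better (and after an unusually good one, worse) even though praise and criticism have no effect at all:

```python
import random

random.seed(1)
# Landing quality modeled as pure common-cause variation: what the
# instructor says after one landing has no effect on the next.
landings = [random.gauss(0, 1) for _ in range(100_000)]

after_good = [landings[i + 1] for i in range(len(landings) - 1) if landings[i] > 1]
after_bad = [landings[i + 1] for i in range(len(landings) - 1) if landings[i] < -1]

# Both averages come out near zero: the next landing after a "great" one is
# typically worse, and after a "bad" one typically better, by chance alone.
mean_after_good = sum(after_good) / len(after_good)
mean_after_bad = sum(after_bad) / len(after_bad)
print(mean_after_good, mean_after_bad)
```

The instructor who roars after a bad landing "learns" that roaring works, when all they are seeing is regression to the mean.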

Mark's blog doesn't mention six sigma by name in his 2011 posts, but the statistical thinking expressed throughout the year makes it a must-read for those working in six sigma programs.

Related: 2009 Curious Cat Management Blog Carnival – 2010 Management Blog Review: Software, Manufacturing and Leadership

Eliminate the Waste of Waiting in Line with Queuing Theory

One thing that frustrates me is how managers fail to adopt proven strategies for decades. One very obvious example is using queuing theory to set up lines.

Yes, it may be even better to adopt strategies that eliminate as much waiting in line as possible. But if waiting in line still occurs and you are not having one queue served by multiple representatives, shame on you and your company.
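A small simulation makes the point (the arrival and service rates are assumed, for illustration). With the same customers and the same three servers, letting each customer pick a separate line at random produces far longer average waits than one shared line feeding whichever server frees up first:

```python
import random

def avg_wait(arrivals, services, num_servers, shared_queue):
    """Average customer wait. shared_queue=True: one line feeds all servers
    (the next customer goes to the first server to free up). False: each
    customer picks one of the separate lines at random and stays in it."""
    free_at = [0.0] * num_servers  # time each server next becomes free
    total_wait = 0.0
    for t, s in zip(arrivals, services):
        if shared_queue:
            i = min(range(num_servers), key=free_at.__getitem__)
        else:
            i = random.randrange(num_servers)
        start = max(t, free_at[i])
        total_wait += start - t
        free_at[i] = start + s
    return total_wait / len(arrivals)

random.seed(7)
n, servers = 20_000, 3
t, arrivals = 0.0, []
for _ in range(n):
    t += random.expovariate(2.5)  # assumed: ~2.5 arrivals per minute
    arrivals.append(t)
services = [random.expovariate(1.0) for _ in range(n)]  # ~1 minute service

wait_shared = avg_wait(arrivals, services, servers, shared_queue=True)
wait_separate = avg_wait(arrivals, services, servers, shared_queue=False)
print(f"shared queue: {wait_shared:.2f}, separate queues: {wait_separate:.2f}")
```

The shared queue wins because no server ever sits idle while a customer waits in the "wrong" line, which is exactly the waste queuing theory predicts.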

Related: Customer Focus and Internet Travel Search – YouTube Uses Multivariate Experiment To Improve Sign-ups 15% – Making Life Difficult for Customers

Warren Buffett’s 2010 Letter to Shareholders

Warren Buffett has published his always excellent annual shareholder letter. His letters provide excellent investing insight and good management ideas.

Yearly figures, it should be noted, are neither to be ignored nor viewed as all-important. The pace of the earth’s movement around the sun is not synchronized with the time required for either investment ideas or operating decisions to bear fruit. At GEICO, for example, we enthusiastically spent $900 million last year on advertising to obtain policyholders who deliver us no immediate profits. If we could spend twice that amount productively, we would happily do so though short-term results would be further penalized. Many large investments at our railroad and utility operations are also made with an eye to payoffs well down the road.

At Berkshire, managers can focus on running their businesses: They are not subjected to meetings at headquarters nor financing worries nor Wall Street harassment. They simply get a letter from me every two years and call me when they wish. And their wishes do differ: There are managers to whom I have not talked in the last year, while there is one with whom I talk almost daily. Our trust is in people rather than process. A “hire well, manage little” code suits both them and me.

Cultures self-propagate. Winston Churchill once said, “You shape your houses and then they shape you.” That wisdom applies to businesses as well. Bureaucratic procedures beget more bureaucracy, and imperial corporate palaces induce imperious behavior. (As one wag put it, “You know you’re no longer CEO when you get in the back seat of your car and it doesn’t move.”) At Berkshire’s “World Headquarters” our annual rent is $270,212. Moreover, the home-office investment in furniture, art, Coke dispenser, lunch room, high-tech equipment – you name it – totals $301,363. As long as Charlie and I treat your money as if it were our own, Berkshire’s managers are likely to be careful with it as well.

At bottom, a sound insurance operation requires four disciplines… (4) The willingness to walk away if the appropriate premium can’t be obtained. Many insurers pass the first three tests and flunk the fourth. The urgings of Wall Street, pressures from the agency force and brokers, or simply a refusal by a testosterone-driven CEO to accept shrinking volumes has led too many insurers to write business at inadequate prices. “The other guy is doing it so we must as well” spells trouble in any business, but none more so than insurance.

I don't agree with everything he says. And what works at one company obviously won't work everywhere. Copying doesn't work. Learning from others, understanding what makes an idea work, and then determining how to incorporate some of it into your organization can be valuable. I don't believe in “Our trust is in people rather than process.” I do believe in “hire well, manage little.” Exactly what those phrases mean is not necessarily straightforward. I believe you need to focus on creating a Deming based management system, and that will require educating and coaching managers on managing such a system. But the management decisions about day to day operations should be left to those who work on the processes in question (often workers who are not managers, sometimes supervisors and managers, and sometimes senior executives).

Related: Too often, executive compensation in the U.S. is ridiculously out of line with performance – Management Advice from Warren Buffett – Great Advice from Warren Buffett to University of Texas – Austin business school students – 2004 Warren Buffett Report
Continue reading

Airport Security with Lean Management Principles

The ‘Israelification’ of airports: High security, little bother

We [Israel] said, ‘We’re not going to do this. You’re going to find a way that will take care of security without touching the efficiency of the airport.”

“The whole time, they are looking into your eyes — which is very embarrassing. But this is one of the ways they figure out if you are suspicious or not. It takes 20, 25 seconds,” said Sela. Lines are staggered. People are not allowed to bunch up into inviting targets for a bomber who has gotten this far.

Lean thinking: customer focus, value stream (don’t take actions that destroy the value stream to supposedly meet some other goal), respect for people [this is a much deeper concept than treat employees with respect], evidence based decision making (do what works – “look into your eyes”), invest in your people (Israel’s solution requires people that are good at their job and committed to doing a good job – frankly it requires engaged managers which is another thing missing from our system).

The USA solution if something suspicious is found in bag screening? Evacuate the entire airport terminal. Very poor design (it is hard to over-emphasize how poor this is). It will take time to design fixes into physical space, as it always does in lean thinking. But it has been nearly 10 years. Where is the progress?

[Embedded video: The Colbert Report, “Tip/Wag – TSA, Bert & Dogs”]
A screener at Ben-Gurion has a pair of better options. First, the screening area is surrounded by contoured, blast-proof glass that can contain the detonation of up to 100 kilos of plastic explosive. Only the few dozen people within the screening area need be removed, and only to a point a few metres away.

Second, all the screening areas contain ‘bomb boxes’. If a screener spots a suspect bag, he/she is trained to pick it up and place it in the box, which is blast proof. A bomb squad arrives shortly and wheels the box away for further investigation.

“This is a very small simple example of how we can simply stop a problem that would cripple one of your airports,” Sela said.

Lean thinking: design the workspace to the task at hand. Obviously done in one place and not the other. Also it shows the thought behind designing solutions that do not destroy the value stream unlike the approach taken in the USA. And the better solution puts a design in place that gives primacy to safety: the supposed reason for all the effort.
Continue reading

Actionable Metrics

Metrics are valuable when they are actionable. Think about what will be done if certain results are shown by the data. If you can’t think of actions you would take, that metric may not be worth tracking.
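One way to make this concrete is to write down, before collecting a metric, the action each range of results would trigger. A minimal sketch (the defect-rate thresholds and actions here are hypothetical illustrations, not from the original post):

```python
# Hypothetical sketch: map each possible metric result to the action it
# would trigger. If every result maps to "no action", the metric is
# probably not worth tracking.
def action_for_defect_rate(defect_rate):
    """Return the planned response for a given defect rate (0.0 to 1.0)."""
    if defect_rate > 0.05:
        return "stop and investigate the process"
    if defect_rate > 0.02:
        return "review recent process changes"
    return "no action; continue monitoring"

print(action_for_defect_rate(0.08))  # stop and investigate the process
print(action_for_defect_rate(0.03))  # review recent process changes
print(action_for_defect_rate(0.01))  # no action; continue monitoring
```

If no plausible result would change what you do, that is the signal to drop the metric before collecting it.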

Metrics should be operationally defined so that the data is collected properly. Without operational definitions, data collected by more than one person will often include measurement error (in this case, different people measuring different things but calling the results by the same name).

And without operational definitions, those using the resulting data may well misinterpret what it is saying. Often data is presented without an operational definition and people think the data says something it does not. I find that most often when people say statistics lie, they really made an incorrect assumption about what the data said – usually because they didn’t understand the operational definition of the data. Data can’t lie. People can. And people can intentionally mislead with data. But far more often people unintentionally mislead with data that is misunderstood (often due to a failure to operationally define the data).
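To illustrate with invented numbers (not data from the post): two teams reporting “response time” for the same events can produce different figures if one counts only processing time and the other counts the full time the customer waited. Without an agreed operational definition, both numbers get labeled the same thing:

```python
# Hypothetical example: the same three requests, measured under two
# different operational definitions of "response time".
requests = [
    (40, 110),   # (queue wait in ms, processing in ms)
    (5, 95),
    (120, 200),
]

# Definition A: processing time only.
avg_a = sum(proc for _, proc in requests) / len(requests)

# Definition B: total time the customer experienced (wait + processing).
avg_b = sum(wait + proc for wait, proc in requests) / len(requests)

print(avg_a)  # 135.0 -- both get reported as "average response time",
print(avg_b)  # 190.0 -- yet they disagree substantially
```

Neither number is wrong; they simply measure different things, which is exactly why the definition must travel with the data.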

In response to: Metrics Manifesto: Raising the Standard for Metrics

Related: Outcome Measures – Evidence-based Management – Metrics and Software Development – Distorting the System (due to misunderstanding metrics) – Manage what you can’t measure

Data Can’t Lie

Many people state that data can lie. Obviously data can’t lie.

There are three kinds of lies: Lies, damn lies and statistics – Mark Twain

Many people don’t understand the difference between being manipulated because they can’t understand what the data really says and data itself “lying” (which, of course, doesn’t even make sense). The same confusion can arise when someone simply draws the wrong conclusion from the data that exists (and then blames the data for “lying” instead of themselves for drawing a faulty conclusion). The data can be wrong (and the data can even be made faulty intentionally by someone). Or someone can draw the wrong conclusion from data that is correct. But in neither case is the data lying. It is also common to believe the data means something other than what it does (thereby leading to a faulty conclusion).

For a very simple example: believing that if the average height for adults in the USA is 5 feet 9 inches, then half the people must be taller and half must be shorter. You could then draw the conclusion that half of adults must be shorter than 5 feet 9 inches. But that is not what an average height means (it is essentially what the median means, though if you want to get technical, not exactly that either). Similarly, you might draw the conclusion that the average height of an adult in California is 5 feet 9 inches, but that is not supported by data giving only the national average. The same holds for drawing the conclusion that 5 feet 9 inches is the average height of a woman. In this simple example, hopefully people can see the faulty reasoning, but such reasoning often goes on without consideration.
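The mean/median distinction is easy to check with a small sample (the heights below are invented for illustration): in skewed data, well over half the values can fall below the mean.

```python
import statistics

# Hypothetical heights in inches; the 93-inch outlier pulls the mean up.
heights = [60, 62, 64, 65, 66, 67, 68, 70, 75, 93]

mean_h = statistics.mean(heights)      # 69 (5 feet 9 inches)
median_h = statistics.median(heights)  # 66.5

shorter = sum(1 for h in heights if h < mean_h)
print(shorter)  # 7 -- 7 of 10 people are below the "average" height
```

So knowing the mean is 5 feet 9 inches tells you nothing about how many people are shorter than that; only the median supports the “half are shorter” conclusion.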

In a great speech, Marissa Mayer speaks of Google making decisions using data, and of that data being apolitical. One benefit of this, she says, is that Google makes decisions based on what the data supports, not on political considerations. The belief that basing decisions on what the data supports leads to better decisions can seem false to those who accept the quote about three kinds of lies (or to those who see the weakness in this point when people supposedly basing decisions on data don’t really understand how to do so).

Continue reading