Tag Archives: Statistics

Podcast Discussion on Management Matters

I continue to record podcasts as I promote my new book – Management Matters: Building Enterprise Capability. This is the second part, of two, of my podcast with Joe Dager of Business 901: Management Matters to a Curious Cat. The first part featured a discussion of two new deadly diseases facing companies.

Management Matters by John Hunter

Listen to this podcast.

Links to more information on some of the topics I mention in the podcast:

More podcasts: Process Excellence Network Podcast with John Hunter | Business 901 Podcast with John Hunter: Deming’s Management Ideas Today (2012) | Leanpub Podcast on Management Matters: Building Enterprise Capability

Introductory Videos on Using Design of Experiments to Improve Results

The video below shows Stu Hunter discussing design of experiments in 1966. It might be a bit slow going at first, but the full set of videos really does give you a quick overview of the many important aspects of design of experiments, including factorial designed experiments, fractional factorial designs, blocking and response surface designs. It really is quite good; if you find the start too slow, skip down to the second video and watch it.

My guess is that, for those without even a cursory understanding of design of experiments, the discussion may start moving faster than you can absorb the information. One of the great things about video is that you can pause to give yourself a chance to catch up, or replay a part you didn’t quite understand. You can also take a look at articles on design of experiments.

I believe design of experiments is an extremely powerful improvement methodology that is greatly underutilized. Six sigma is the only management improvement program that emphasizes factorial designed experiments.
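To make the most basic of these ideas concrete, here is a minimal sketch in Python of a 2^3 full factorial design and the estimation of main effects. The factor names and response values are made up purely for illustration:

```python
# A minimal sketch of a 2^3 full factorial design in coded units (-1/+1).
# The factor names and response values are hypothetical, purely to
# illustrate how main effects are estimated from such a design.
from itertools import product

factors = ["temperature", "pressure", "time"]  # hypothetical factors

# All 8 runs of the full factorial: every combination of low (-1) and high (+1).
design = list(product([-1, +1], repeat=len(factors)))

# Hypothetical measured responses, one per run, in the order generated above.
responses = [45.0, 48.0, 52.0, 60.0, 44.0, 47.0, 55.0, 65.0]

# Main effect of a factor = mean response at its high level
# minus mean response at its low level.
for i, name in enumerate(factors):
    high = [y for run, y in zip(design, responses) if run[i] == +1]
    low = [y for run, y in zip(design, responses) if run[i] == -1]
    effect = sum(high) / len(high) - sum(low) / len(low)
    print(f"main effect of {name}: {effect:+.2f}")
```

The same eight runs also contain the information needed to estimate the interactions between factors (temperature × pressure and so on), which is exactly what one-factor-at-a-time experiments give up.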

Related: One factor at a time (OFAT) Versus Factorial Designs | The purpose of Factorial Designed Experiments

Continue reading

2011 Management Blog Roundup: Stats Made Easy

The 4th Annual Management Blog Roundup is coming to a close soon. This is my 3rd and final review post looking back at 2011; the previous two posts looked at Gemba Panta Rei and the Lean Six Sigma Blog.

I have a special affinity for the use of statistics to understand and improve. I imagine it is both genetic and psychological. My father was a statistician and I have fond memories of applying statistical thinking to understand a result or system. I am also comfortable with numbers, and like most people I enjoy working with things I have an affinity for.

Mark Anderson

Mark Anderson’s Stats Made Easy blog brings statistical thinking to managers. This is not an easy thing to do; as one of his posts, Wrong more often than right but never in doubt, shows, we have a remarkable ability to ignore data we don’t want to know: “Kahneman examined the illusion of skill in a group of investment advisors who competed for annual performance bonuses. He found zero correlation on year-to-year rankings, thus the firm was simply rewarding luck. What I find most interesting is his observation that even when confronted with irrefutable evidence of misplaced confidence in one’s own ability to prognosticate, most people just carry on with the same level of self-assurance.”

The actual practice of experimentation (PDSA…) needs improvement. Too often the iteration component is entirely missing (only one experiment is done). That is likely partially a result of another big problem: the experiments are not nearly short enough. Mark offered very wise advice on the strategy of experimentation: break it into a series of smaller stages. “The rule-of-thumb I worked from as a process development engineer is not to put more than 25% of your budget into the first experiment, thus allowing the chance to adapt as you work through the project (or abandon it altogether).” And note that “abandon it altogether” option. Don’t just proceed with a plan if what you learn makes that option unwise: too often we act based on expectations rather than evidence.

In Why coaches regress to be mean, Mark explained the problem with reacting to common cause variation and “learning” that it helped to do so. “A case in point is the flight instructor who lavishes praise on a training-pilot who makes a lucky landing. Naturally the next result is not so good. Later the pilot bounces in very badly — again purely by chance (a gust of wind). The instructor roars disapproval. That seems to do the trick — the next landing is much smoother.” When you ascribe special causation to common cause variation you often confirm your own biases.
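To see how easily this illusion arises, here is a small simulation (my own illustration, not from Mark’s post) in Python. Landing quality is pure common cause variation – no skill changes at all – yet criticism after a bad landing appears to “work” and praise after a good one appears to backfire:

```python
# A small simulation of regression to the mean: landing quality is pure
# common cause variation, yet reacting to extremes makes criticism look
# effective and praise look harmful.
import random

random.seed(1)
landings = [random.gauss(0, 1) for _ in range(100_000)]

after_bad, after_good = [], []
for prev, nxt in zip(landings, landings[1:]):
    if prev < -1.5:      # "roar disapproval" after a bad landing
        after_bad.append(nxt)
    elif prev > 1.5:     # "lavish praise" after a good landing
        after_good.append(nxt)

print(f"mean landing after a bad one:  {sum(after_bad)/len(after_bad):+.2f}")
print(f"mean landing after a good one: {sum(after_good)/len(after_good):+.2f}")
# Both means are near 0: the "improvement" after criticism (and the
# "decline" after praise) is just the system returning to its average.
```

Both conditional means sit near the overall average: the next landing tends back toward the mean no matter what the instructor says.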

Mark’s blog doesn’t mention six sigma by name in his 2011 posts, but the statistical thinking expressed throughout the year makes it a must-read for those working in six sigma programs.

Related: 2009 Curious Cat Management Blog Carnival | 2010 Management Blog Review: Software, Manufacturing and Leadership

Dr. Deming in 1980 on Product Quality in Japan and the USA

I posted an interesting document to the Curious Cat Management Library: it includes Dr. Deming’s comments from a discussion organized by the General Accounting Office in 1980 on quality in Japan and the United States.

The document provides some interesting thoughts from Dr. Deming and others; Dr. Deming’s statements start on page 52 of the document. For those really interested in management improvement ideas it is a great read, though I imagine most managers wouldn’t enjoy it (it isn’t direct advice for today, but I found it very interesting).

Some selected quotes from the document follow. On his work with Japan in 1950:

This movement, I told them, will fail and nothing will happen unless management does their part. Management must know something about statistical techniques and know that if they are good in one place, they will work in another. Management must see that they are used throughout the company.
Quality control must take root with simple statistical techniques that management and everyone in the company must learn. By these techniques, people begin to understand the different kinds of variation. Then quality control just grows with statistical theory and further experience. All this learning must be guided by a master. Remarkable results may come quick, but one has no right to expect results in a hurry. The learning period never ends.

The statistical control of quality is not for the timid and the halfhearted. There is no way to learn except to learn it and do it. You can read about swimming, but you might drown if you had to learn it that way!

One of the common themes at that time was that Deming’s methods worked because Japanese people and culture were different. That wasn’t why the ideas worked, but it was an idea that many people who wanted to keep doing things the old way liked to believe.

There may be a lot of difference, I made the statement on my first visit there that a Japanese man was never too old nor too successful to learn, and to wish to learn; to study and to learn. I know that people here also study and learn. I’ll be eighty next month in October. I study every day and learn every day. So you find studious people everywhere, but I think that you find in Japan the desire to learn, the willingness to learn.

You didn’t come to hear me on this; there are other people here much better qualified than I am to talk. But in Japan, a man works for the company; he doesn’t work to please somebody. He works for the company, he can argue for the company and stick with it when he has an idea because his position is secure. He doesn’t have to please somebody. It is so here in some companies, but only in a few. I think this is an important difference.

At the time, the way QC circles worked in Japan was basically employee-led kaizen. So companies that tried to copy Japan told workers: now go make things better like the workers we saw in Japan were doing. With management not changing (and not understanding Deming’s ideas, lean thinking, variation, systems thinking…) and staff not given training to understand how to improve processes, it didn’t work very well. We (those reading this blog) may all now understand the advantages of one piece flow. But I can’t imagine many people would jump to that idea sitting in their QC circle without having been told about one piece flow (I know I wouldn’t have), and without all the supporting knowledge needed to make that concept work.

QC circles can make tremendous contributions. But let me tell you this, Elmer. If it isn’t obvious to the workers that the managers are doing their part, which only they can do, I think that the workers just get fed up with trying in vain to improve their part of the work. Management must do their part: they must learn something about management.

Continue reading

Management Improvement Carnival #139

Randall Munroe illustrates R.A. Fisher’s point that you must think in order to draw reasonable conclusions from data – in this case, the dangers of drawing false conclusions based on statistical significance. Click the image to see the full xkcd comic.


The Curious Cat Management Improvement Carnival has been published since 2006. We find great management blog posts and share them with you 3 times a month. We hope you find these posts interesting and find some new blogs to start reading. Follow John Hunter online: Google+, Twitter, LinkedIn, more.

  • Questioning the Value of the P-Value by Jon Miller – “Father of modern statistics Ronald A. Fisher invented the p-value as an informal measure of evidence against the null hypothesis. Although often overlooked, Fisher called on scientists to use other types of evidence such as the a priori plausibility of the hypothesis and the relative strengths of results from previous studies in combination with the p-value.”
  • Teachers Cheating and Incentives by Dan Ariely – “they began to do anything that would improve their performance on that measure even by a tiny bit—even if they messed up other employees in the process. Ultimately they were consumed with maximizing what they knew they would be measured on”
  • It’s About The Journey and Sometimes It Starts With Failure by Tim McMahon – “If we allow ourselves to become discouraged during the learning process we may give up right before we reach our goal. Anytime we learn from our efforts we are in the process of succeeding. Each lesson brings us closer to our intended result.”
  • When Patents Attack – “as many as 80 percent of software engineers say the patent system actually hinders innovation. It doesn’t encourage them to come up with new ideas and create new products. It actually gets in their way.” (I added “An outdated intellectual property system” as deadly management/economic disease number 9 – building on Deming’s 7 deadly diseases – a few years ago – John). Also from NPR: The Patent War
  • 3 Things You Can Do When Your Manager Doesn’t Support Continuous Improvement by Ron Pereira – “So keep fighting… keep learning… keep improving. If you do this, one thing is for certain, you and the organization you work for will be better off even if they don’t realize it.”
Continue reading

Best Selling Books In the Curious Cat Bookstore

The most popular books in July at Curious Cat Books were Statistics for Experimenters (1st edition), followed by Statistics for Experimenters (2nd edition) and The Leader’s Handbook by Peter Scholtes. These books are great; I am happy others have been finding them and reading them. Statistics for Experimenters is co-authored by my father.

Top sellers so far this year (adding together all editions, including Kindle):
1) The Leader’s Handbook
2) Statistics for Experimenters
3) New Economics
4) Abolishing Performance Appraisals
5) The Team Handbook
6) Out of the Crisis

The Leader’s Handbook is far and away in the lead. The order of popularity on Amazon overall is: 1) Out of the Crisis, 2) New Economics, 3) The Team Handbook, 4) Abolishing Performance Appraisals, 5) Statistics for Experimenters and 6) The Leader’s Handbook. The only thing that surprises me in the overall numbers is The Leader’s Handbook. The Amazon rankings are hugely biased by recent activity (it isn’t close to a ranking of sales this year), but I still expected The Leader’s Handbook to rank very well. It is the first book I recommend for almost any situation (the only exceptions are when there is a very specific need – for example, Statistics for Experimenters for multi-factorial designed experiments or The Improvement Guide for working on the process of improvement).

My guess is Curious Cat site users (and I am sure a fair number of people sent by search engines) are much more likely to buy those books I recommend over and over. Still many books I don’t promote are bought and some books I recommend consistently don’t rack up many sales through Curious Cat.

I started this as a simple Google+ update but then found it interesting enough to expand to a full post. Hopefully others find it interesting also.

Related: Using Books to Ignite Improvement | Workplace Management by Taiichi Ohno | Problems with Management and Business Books | Management Improvement Books (2005)

One factor at a time (OFAT) Versus Factorial Designs

Guest post by Bradley Jones

Almost a hundred years ago R. A. Fisher‘s boss published an article espousing OFAT (one factor at a time). Fisher responded with an article of his own laying out his justification for factorial design. I admire the courage it took to contradict his boss in print!

Fisher’s argument was mainly about efficiency – that you could learn as much about many factors as you learned about one in the same number of trials. Saving money and effort is a powerful and positive motivator.

The most common argument I read against OFAT these days has to do with inability to detect interactions and the possibility of finding suboptimal factor settings at the end of the investigation. I admit to using these arguments myself in print.

I don’t think these arguments are as effective as Fisher’s original argument.

To play the devil’s advocate for a moment consider this thought experiment. You have to climb a hill that runs on a line going from southwest to northeast but you are only allowed to make steps that are due north or south or due east or west. Though you will have to make many zig zags you will eventually make it to the top. If you noted your altitude at each step, you would have enough data to fit a response surface.

Obviously this approach is very inefficient but it is not impossible. Don’t mistake my intent here. I am definitely not an advocate of OFAT. Rather I would like to find more convincing arguments to persuade experimenters to move to multi-factor design.
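To make that argument concrete, here is a toy sketch in Python (my illustration, with a made-up response function, not Fisher’s) showing OFAT settling on suboptimal settings when two factors interact, while a full 2^2 factorial using about the same number of runs finds the best combination:

```python
# A toy illustration of why OFAT can settle on suboptimal factor settings
# when factors interact. The response function below is hypothetical.
def response(a, b):
    # Strong interaction: the best result needs BOTH factors high.
    return 10 + 2 * a - 1 * b + 8 * a * b  # a, b in {-1, +1}

# OFAT: vary A with B held at -1, pick the best A, then vary B.
best_a = max([-1, +1], key=lambda a: response(a, -1))
best_b = max([-1, +1], key=lambda b: response(best_a, b))
print("OFAT settles at:", (best_a, best_b), "giving", response(best_a, best_b))

# Full 2^2 factorial: all four corners, four runs total.
runs = [(a, b) for a in (-1, +1) for b in (-1, +1)]
best = max(runs, key=lambda r: response(*r))
print("Factorial finds:", best, "giving", response(*best))
```

The OFAT path never visits the high/high corner because, with the interaction present, raising either factor alone makes the result worse; only varying the factors together reveals the best settings.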

Related: The Purpose of Factorial Designed Experiments | Using Design of Experiments | articles by R.A. Fisher | articles on using factorial design of experiments | Does good experimental design require changing only one factor at a time (OFAT)? | Statistics for Experimenters

Factorial Designed Experiment Aim

Multivariate experiments are a very powerful management tool for learning and improving performance. Experiments in general, and designed factorial experiments in particular, are dramatically underused by managers. A question on LinkedIn asks:

When doing a DOE we select factors with levels to purposely induce changes in the response variable. Do we want the response variable to move within the specs of the customers? Or does it not matter, since we are learning about the process?

The aim needs to consider what you are trying to learn, the costs and the potential rewards. Weighing those factors will determine whether you want to keep results within specification or can try options that are likely to return results outside the specs.

If the effort is looking for breakthrough improvement and the cost of running experiments that might produce out-of-spec results is low, then the specs don’t matter much. If the costs of running experiments are very high (compared with the expected rewards), then you may well want to choose designed experiment values that you anticipate will still produce results within specs.

There are various ways costs come into play. Here I am mainly looking at net costs (costs minus revenue). For example, consider the case where results that are within spec can still be used (sold): the net cost of those experiment runs is then substantially lower.
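A tiny sketch of that net-cost arithmetic (all numbers here are hypothetical):

```python
# A small sketch of the net-cost idea above. All numbers are hypothetical.
run_cost = 1000.0           # cost of one experimental run
revenue_if_in_spec = 800.0  # output can be sold only if it is within spec

def expected_net_cost(p_in_spec: float) -> float:
    """Net cost of a run = cost minus expected revenue from usable output."""
    return run_cost - p_in_spec * revenue_if_in_spec

# Bold settings likely to go out of spec cost more per run...
print(expected_net_cost(p_in_spec=0.3))   # 760.0
# ...than conservative settings that stay within spec,
print(expected_net_cost(p_in_spec=0.95))  # 240.0
# so the cheaper the out-of-spec runs are relative to the potential
# learning, the freer you are to explore beyond the specs.
```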
Continue reading

How to Manage What You Can’t Measure

In Out of the Crisis, page 121, Dr. Deming wrote:

the most important figures that one needs for management are unknown or unknowable (Lloyd S. Nelson, director of statistical methods for the Nashua Corporation), but successful management must nevertheless take account of them.

So what do you do then? I am a strong advocate of Deming’s ideas on management. I see understanding systems thinking, psychology, the theory of knowledge and variation as the tools to use when you can’t get precise measures (or when you can).

Even if you can’t measure exactly what you want, you can learn about the area with related data. You are not able to measure the exact benefit of a happy customer, but you can get measures that give you evidence of the value and even its magnitude. And you can get measures of the costs of dissatisfied customers. I mention this to be clear that getting data is very useful; most organizations need to focus on gathering sensible data and using it well.

Without precise measures, though, you have to use judgment. Judgment will often be better with an understanding of theory and repeated attempts to test those theories and learn. Understanding variation can be applied even if you don’t have control charts and data. Over-reaction to common cause variation, treating it as though it had a special cause, is very common. Even without data, this idea can be used to guide your thinking.
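And when you do have even modest data, the arithmetic behind a basic control chart is simple enough to do anywhere. A minimal sketch in Python of XmR (individuals) chart limits, with made-up readings:

```python
# A minimal XmR (individuals) control chart calculation; the readings
# below are made up for illustration.
data = [10.2, 9.8, 10.5, 10.1, 9.6, 10.4, 10.0, 9.9, 10.7, 10.3]

# Moving ranges between consecutive points capture routine variation.
moving_ranges = [abs(b - a) for a, b in zip(data, data[1:])]
mean_x = sum(data) / len(data)
mean_mr = sum(moving_ranges) / len(moving_ranges)

# 2.66 is the standard XmR constant converting the mean moving range
# into 3-sigma limits for individual values.
ucl = mean_x + 2.66 * mean_mr
lcl = mean_x - 2.66 * mean_mr
print(f"center {mean_x:.2f}, limits [{lcl:.2f}, {ucl:.2f}]")

# Points inside the limits are common cause variation: reacting to them
# one by one (tampering) makes the process worse, not better.
```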

The danger is that we mistake measures for the thing itself. Measures are a proxy, and we need to understand the limitations of the data we use. The main point Deming was making was that we can’t just pretend the data we have tells us everything we need to know. We need to think. We need to understand that the data is useful, but its limitations need to be remembered.

Human systems involve people. To manage human systems you need to learn about psychology. Paying attention to what research can show about motivation, fear, trust, etc. is important and valuable. It aids management decisions when you can’t get the exact data that you would like. If people are unhappy you can see it. You may also be able to measure aspects of this (increased sick leave, increased turnover…). If people are unhappy they often will not be as pleasant to interact with as people who are happy. You can make judgments about the problems created by internal systems that rob people of joy in work and prevent them from helping customers.

For me the key is to use Deming’s management system to guide action when you can’t get clear data. We should keep trying to find measures that will help. In my experience, even where we can’t get definite data on exactly what we want, we often fail to gather related data that would help guide actions a great deal. Then we need to understand the limitations of the data we can gather. And then we need to continually improve and continually learn.

When you have clear data, Deming’s ideas are also valuable. But when the data is lacking it is even more important to take a systemic approach to making management decisions. Falling back into using the numbers you can get to drive decision making is a recipe for trouble.

Related: Manage what you can’t measure | Statistical Engineering Links Statistical Thinking, Methods and Tools | outcome measures

Actionable Metrics

Metrics are valuable when they are actionable. Think about what will be done if certain results are shown by the data. If you can’t think of actions you would take, that metric may not be worth tracking.

Metrics should be operationally defined so that the data is collected properly. Without operational definitions, data collected by more than one person will often include measurement error (in this case, the resulting data shows different people measuring different things but calling the results by the same name).

And without operational definitions, those using the resulting data may well misinterpret what it is saying. Often data is presented without an operational definition and people think the data says something it does not. I find that most often when people say statistics lie, it is really that they made an incorrect assumption about what the data said – usually because they didn’t understand the operational definition of the data. Data can’t lie. People can. And people can intentionally mislead with data. But far more often people unintentionally mislead with data that is misunderstood (often due to a failure to operationally define the data).
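One way to make an operational definition explicit is to write it down as precisely as you would write code. A small sketch in Python (the metric and thresholds are hypothetical, purely for illustration):

```python
# A sketch of making an operational definition explicit in code; the
# metric and grace period are hypothetical. Two people "measuring
# on-time delivery" can get very different numbers unless details like
# these are pinned down.
from datetime import date

def is_on_time(promised: date, shipped: date, grace_days: int = 0) -> bool:
    """Operational definition: a delivery is 'on time' if it ships no
    later than the promised date plus an agreed grace period. Counting
    from the ship date (not the arrival date) is part of the definition."""
    return (shipped - promised).days <= grace_days

# With no grace period this order is late; with a 2-day grace it is
# "on time". Same events, different definitions, different data.
print(is_on_time(date(2011, 7, 1), date(2011, 7, 2)))                # False
print(is_on_time(date(2011, 7, 1), date(2011, 7, 2), grace_days=2))  # True
```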

In response to: Metrics Manifesto: Raising the Standard for Metrics

Related: Outcome Measures | Evidence-based Management | Metrics and Software Development | Distorting the System (due to misunderstanding metrics) | Manage what you can’t measure