Tag Archives: in-process measures

How to Sustain Long Term Enterprise Excellence

This month Paul Borawski asked ASQ’s Influential Voices to explore sustaining excellence for the long term.

There are several keys to sustaining long term excellence. Unfortunately, experience shows that it is much easier to explain what is needed than to build a management system that delivers these practices over the long term. The forces pulling an organization off target often lead it astray.

Each of these concepts has a great deal more behind it than one post can explain. I provide some direct links below, and from those there are many more links to valuable information on these topics. I also believe my book provides valuable additional material on the subject – Management Matters: Building Enterprise Capability. Sustained long term excellence is the focus of the book. A system that consistently provides excellent performance is a result of building enterprise capability over the long term.

Related: Distorting the System, Distorting the Data or Improving the System - Sustaining and Growing the Adoption of Enterprise Excellence Ideas in Your Organization - Managing to Test Result Instead of Customer Value - Good Process Improvement Practices - Change is not Improvement - Managing Our Way to Economic Success: Two Untapped Resources by William G. Hunter - Software Process and Measurement Podcast With John Hunter - Customer Focus by Everyone

Special Cause Signal Isn’t Proof A Special Cause Exists

One of my pet peeves is when people say that a point outside the control limits is a special cause. It is not. It is an indication that a special cause likely exists, and that special cause thinking is the correct strategy to use to seek improvement. But that doesn’t mean there definitely was a special cause – it could be a false signal.

This post relies on an understanding of control charts and common and special causes (review these links if you need some additional context).

Similarly, a result that doesn’t signal a special cause (inside the control limits without raising some other flag, say a run of continually increasing points) does not mean a special cause is not present.

Control charts are useful because they help us maximize our effectiveness. We are biased toward using special cause thinking when it is not the most effective approach, so the control chart is a good way to keep us focused on common cause thinking for improvement. It is also very useful in flagging when it is time to immediately start using special cause thinking (since timing is key to effective special cause thinking).

However, if there is a result that is close to the control limit (but inside, so no special cause is indicated) and the person who works on the process every day thinks, "I noticed x (some special cause) earlier," they should not just ignore that. It very well could be a special cause that, because of other common cause variation, resulted in a data point that didn’t quite reach the special cause signal. Where the dot happened to land (just above or just below the control limit) does not determine whether a special cause existed.

The signal is just to help us systematically make the best choice between common cause and special cause thinking. The signal does not define whether a special cause (an assignable cause) exists or not. The control chart tool helps guide us to use the correct type of improvement strategy (common cause or special cause), but it is just a signaling device; it isn’t some arbiter of whether a special cause actually exists.
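To make the idea of a signal concrete, here is a minimal sketch (in Python, with made-up data) of computing individuals (XmR) chart limits and checking two common signals: a point outside the limits and a run of continually increasing points. The 2.66 factor on the average moving range is the standard individuals-chart constant, and the run length of 6 is one common convention; the data and function names are illustrative assumptions, not something from the post.

```python
# A minimal sketch of individuals (XmR) chart limits and two common signals.
# The data and names below are illustrative assumptions, not from the post.

def xmr_limits(values):
    """Return (center line, lower limit, upper limit) for an individuals chart,
    using the standard 2.66 factor on the average moving range."""
    center = sum(values) / len(values)
    moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
    avg_mr = sum(moving_ranges) / len(moving_ranges)
    return center, center - 2.66 * avg_mr, center + 2.66 * avg_mr

def points_outside_limits(values):
    """Flag points outside the limits. A flag is only a signal that a special
    cause likely exists -- not proof -- and no flag is not proof of absence."""
    center, lcl, ucl = xmr_limits(values)
    return [(i, x) for i, x in enumerate(values) if x < lcl or x > ucl]

def increasing_run(values, length=6):
    """Another common signal: a run of `length` continually increasing points."""
    run = 1
    for previous, current in zip(values, values[1:]):
        run = run + 1 if current > previous else 1
        if run >= length:
            return True
    return False

if __name__ == "__main__":
    data = [10.1, 9.8, 10.3, 9.9, 10.0, 10.2, 12.9, 10.1, 9.7, 10.0]
    print(xmr_limits(data))
    print(points_outside_limits(data))  # the 12.9 falls outside the upper limit here
    print(increasing_run(data))
```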

Continue reading

Trust But Verify

The following are my comments, which were sparked by the question “Trust, but verify. Is this a good example of Profound Knowledge in action?” on the LinkedIn Deming Institute group.

Trust but verify makes sense to me. I think of verify as process measures to verify the process is producing as it should. By verifying, you know when the process is failing and when to look for special causes (when using control chart thinking with an understanding of variation). There are many ways to verify that would be bad. But the idea of trust (respect for people) is not just a feel-good “be nice to everyone and good things happen” notion in Deming’s System of Profound Knowledge.

I see the PDSA improvement cycle as another example of a trust-but-verify idea. You trust the people at the gemba to do the improvement. They predict what will happen. But they verify what actually happens before they run off standardizing and implementing. I think many of us have seen what happens when the idea of letting those who do the work improve the process is adopted without a sensible support system (PDSA, training, systems thinking…). It may actually be better than what was in place, but it isn’t consistent with Deming’s management system to just trust the people without providing methods to improve (and education to help people be most effective). Systems must be in place to provide the best opportunity to succeed. Trusting the people who do the work is part of it.

I understand there are ways to verify that would be destructive. But I do believe you need process measures to verify systems are working. Just trusting people to do the right thing isn’t wise.

A checklist is another way of “not-trusting.” I think checklists are great. It isn’t that I don’t trust people to try to do the right thing. I just don’t trust people alone, when systems can be designed with verification that improves performance. I hear people complain that checklists “don’t respect my expertise” or have the attitude that they are “insulting to me as a professional” – you should just trust me.

Sorry, driving out fear (and building trust – one of Deming’s 14 points) is not about catering to every person’s desire. In Deming’s System of Profound Knowledge, respect for people is part of a system that requires understanding variation, systems thinking, psychology and the theory of knowledge. Checklists (and other forms of verification) are not an indication of a lack of trust. They are a form of process measure (in a way) that has been proven to improve results.

Continue reading

Annual Performance Reviews Are Obsolete

Sam Goodner, the CEO of Catapult Systems, wrote about his decision to eliminate the annual performance appraisal.

the most critical flaw of our old process was that the feedback itself was too infrequent and too far removed from the actual behavior to have any measurable impact on employee performance.

I decided to completely eliminate our annual performance review process and replace it with a real-time performance feedback dashboard.

I think this is a move in the right direction. I personally think it is a mistake to make the measures focused on the person. There should be performance dashboards (with in-process and outcome measures) that provide insight into the state of the processes in the company. Let those working in those processes see, in real time, the situation, weaknesses, strengths… and take action as appropriate (short term quick fixes, longer term focus on areas for significant improvement…). It could be that the company is doing this; the quick blog post is hardly a comprehensive look at their strategies. It does provide some interesting ideas.
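As a rough illustration of what a process-focused (rather than person-focused) dashboard might hold, here is a minimal sketch. The process name, measure names and values are hypothetical; this is not a description of Catapult Systems’ actual dashboard.

```python
# A minimal sketch of a dashboard organized around processes, not people.
# All names, kinds and values here are hypothetical examples.
from dataclasses import dataclass, field

@dataclass
class Measure:
    name: str
    kind: str                 # "in-process" or "outcome"
    values: list = field(default_factory=list)

    def latest(self):
        return self.values[-1] if self.values else None

@dataclass
class ProcessDashboard:
    process: str
    measures: dict = field(default_factory=dict)

    def add_measure(self, name, kind):
        self.measures[name] = Measure(name, kind)

    def record(self, name, value):
        self.measures[name].values.append(value)

    def snapshot(self):
        """Latest value of each measure, for a real-time view of the process."""
        return {m.name: (m.kind, m.latest()) for m in self.measures.values()}

# Hypothetical usage: the support process, not any one employee, is measured.
dash = ProcessDashboard("customer support")
dash.add_measure("hours to first response", "in-process")
dash.add_measure("customers reporting issue resolved (%)", "outcome")
dash.record("hours to first response", 3.5)
dash.record("customers reporting issue resolved (%)", 92)
print(dash.snapshot())
```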

I also worry about making too much of the feedback without an understanding of variation (and the “performance” results attributed to people due merely to variation) and systems thinking. I applaud the leadership for making a change and for the creative attempt; I just also worry a bit about how this would work in many organizations. But that is not really what matters. What matters is how it works for their organization, and I certainly believe this could work well in the right organization.

Related: Righter Performance Appraisal - When Performance-related Pay Backfires - The Defect Black Market - articles, books, posts on performance appraisal

Metrics and Software Development

Lean-based Metrics for Agile CM Environments [the broken link was removed] by Brad Appleton, Robert Cowham and Steve Berczuk:

Measure Up! Don’t use metrics to measure individuals in a way that compares their performance to others or isolates the value of their contributions from the rest of the team. The last of the seven principles of Lean software development tells us to “Optimize across the whole.” When measuring value or performance, it is often better to measure at the next level-up. Look at the big-picture because the integrated whole is greater than the sum of its decomposition into parts. Metrics on individuals and subparts often create suboptimization of the whole, unless the metric connects to the “big picture” in a way that emphasizes the success of the whole over any one part.

I agree that measuring individuals is normally not an effective way to improve. And “measuring up” can often be valuable. Often a fixation on small process measures can result in improvements that don’t actually improve the end result. But rather than the measure-up view, I find looking at outcome measures (to measure overall effectiveness) and process measures (to view how specific parts of the system contribute to the “big picture”) the most useful strategy.

The reason for process measures is not to improve those results alone; the process measures can be selected to measure key processes within the system. Say you find 3 process measures where the prediction is that if we can improve these, then this important outcome measure will improve (use PDSA to make sure your prediction is accurate – don’t fall into the trap of continuing to focus on improving a process measure even after the data shows it does not result in the predicted improvement to the overall results).
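As a rough sketch of that check, the snippet below compares the outcome change predicted in the Plan step against what the data shows in the Study step; the measure, numbers and tolerance are hypothetical, not taken from any real example.

```python
# A rough sketch of checking a PDSA prediction: did improving the chosen
# process measures deliver the predicted change in the outcome measure?
# The measure, numbers and tolerance below are hypothetical.

def pdsa_prediction_check(baseline, predicted, observed, tolerance=0.5):
    """Compare the observed change in the outcome measure to the predicted change."""
    predicted_change = predicted - baseline
    observed_change = observed - baseline
    prediction_held = abs(observed_change - predicted_change) <= tolerance
    return predicted_change, observed_change, prediction_held

# Plan: improving 3 process measures is predicted to lift on-time delivery
# from 88% to 93%. Study: the data shows 89.5%.
predicted_change, observed_change, prediction_held = pdsa_prediction_check(88.0, 93.0, 89.5)
print(predicted_change, observed_change, prediction_held)
# If prediction_held is False, revisit the theory linking those process
# measures to the outcome instead of pushing the process numbers further.
```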

Also, process measures are helpful in serving as indicators that something is going wrong (or potentially going better than normal). Process measures change quickly (good ones can be close to real time), thus facilitating immediate remedies and immediate examination of what led to the problem, to aid in avoiding that condition in the future.

Continue reading

Customer Un-focus

Counties caught in conundrum: getting Amish to take food stamps [the broken link was removed] by John Horton

Accepting public assistance is verboten within the Amish culture. It simply is not done. But Taylor is under orders to at least try to get them enrolled. The Ohio Department of Job & Family Services has asked Geauga and Holmes counties, which feature the state’s largest Amish populations, to lift dismal food-stamp participation rates.

Taylor and his Holmes counterpart, Dan Jackson, called the mandate a waste of tax dollars, time and resources. In their eyes, the directive is government bureaucracy that ignores the obvious in setting an unrealistic goal.

Taylor and Jackson said they’ve both asked the state to readjust participation goals for their counties. Carroll said the request is under consideration. This is the first year for the performance standard.

Data, such as participation rates, can be used as in-process measures to help you locate areas to look at for improvement. When you discover a good reason for the numbers, then look to other in-process measures. Don’t make the mistake of managing to the measure. The measure should help you manage. Improving the number is not the goal. Improving the situation that the number is a proxy for is the goal.

Related: Another Quota Failure Example - Forget Targets - Welfare waste

via: Amish Refusal to Accept Food Stamps Makes Welfare Workers Look Bad