Nice talk on fear of looking foolish. The speakers discuss the idea that visibility is good: don’t hide. Make everything visible and benefit from many people’s ideas. The talk focuses on software development, but the point applies to any work.
“Criticism is not evil” – very true. “At Google we are not allowed to submit code until there is code review.” Ultimately they are repeating Deming’s ideas: improve the system – people are not the problem, bad systems are the problem. Iterate quickly.
I suppose it’s a question of precision then. There are many things that you could argue are useful, if you argue backward from the end result. Yet they are not predictive, or repeatable to any degree of precision. In addition to “last things” there should also be the “next things” that a theory allows for or predicts. As a pragmatist, it’s hard to argue with results. As a Lean thinker, I have to argue for process and predictability.
There are strong ties between Deming’s ideas and the pragmatic philosophy; one paper offers a nice overview: Deming and Pragmatism.
I like George Box’s quote: “All models are wrong, but some are useful.” This can also be dangerous when people don’t understand the limits of that usefulness – they believe the model is more true than it is.
The pragmatists were concerned with the theory of knowledge – how we know what we know. They were very concerned with evaluating thought and beliefs. They believed in testing to determine whether theories were correct. This thinking underpins the Shewhart/Deming/PDSA cycle.
I believe the question raised in the original post is very similar to the struggle Shewhart went through in developing the control chart and the Shewhart cycle. He wanted to address exactly this issue: distinguishing things that merely appear to be useful (and we people are prone to being fooled this way) from things that are predictably useful.
In many companies the impact of multi-tasking is obscured by the fact that in spite of its prevalence most projects still finish on time. While this reliability is nice, it masks the even more significant opportunity to cut project durations substantially. If projects are being delivered on or close to schedule, and multi-tasking is occurring, it can only mean that the task estimates used in the plan are significantly inflated.
But understanding is not enough. The drivers of multi-tasking are built into the processes, measurements, and systems with which most companies manage their projects. We strive to keep people busy all of the time, to maximize the output of all of our resources and be efficient. Performance measures on project managers and executives motivate them to focus on delivering individual projects, without understanding the impact of their actions on the rest of the pipeline. Conventional scheduling and pipelining tools pay no attention to these factors and routinely overload resources, making multi-tasking nearly inevitable.
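The cost of this interleaving can be sketched numerically. A minimal example (not from the article; the durations are hypothetical, and it ignores switching overhead, which makes real multi-tasking even worse): three 10-day projects finish, on average, in 20 days when worked one at a time but 29 days when time-sliced.

```python
# Compare sequential work with round-robin multi-tasking
# across three hypothetical projects of 10 days of work each.

def sequential(durations):
    """Finish times when projects are worked one at a time."""
    finish, clock = [], 0
    for d in durations:
        clock += d
        finish.append(clock)
    return finish

def round_robin(durations, slice_=1):
    """Finish times when work is interleaved in small slices."""
    remaining = list(durations)
    finish = [None] * len(durations)
    clock = 0
    while any(r > 0 for r in remaining):
        for i, r in enumerate(remaining):
            if r > 0:
                work = min(slice_, r)
                remaining[i] -= work
                clock += work
                if remaining[i] == 0:
                    finish[i] = clock
    return finish

projects = [10, 10, 10]
print(sequential(projects))   # [10, 20, 30] -> average completion 20 days
print(round_robin(projects))  # [28, 29, 30] -> average completion 29 days
```

Nothing finishes early under multi-tasking: every project ends near the last possible day, which is why inflated task estimates are needed to stay “on schedule.”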
Measure Up! Don’t use metrics to measure individuals in a way that compares their performance to others or isolates the value of their contributions from the rest of the team. The last of the seven principles of Lean software development tells us to “Optimize across the whole.” When measuring value or performance, it is often better to measure at the next level up. Look at the big picture, because the integrated whole is greater than the sum of its parts. Metrics on individuals and subparts often create suboptimization of the whole, unless the metric connects to the “big picture” in a way that emphasizes the success of the whole over any one part.
I agree measuring individuals is normally not an effective way to improve, and “measuring up” can often be valuable. A fixation on small process measures can result in improvements that don’t actually improve the end result. But rather than the measure-up view alone, I find combining outcome measures (to gauge overall effectiveness) with process measures (to view specific parts of the system) the most useful strategy.
The reason for process measures is not to improve those results alone; process measures can be selected to track key processes within the system. For example, find three process measures such that, if we improve them, an important outcome measure should improve (using PDSA to make sure the prediction is accurate – don’t fall into the trap of continuing to focus on a measure after the data shows improving it does not produce the predicted improvement in overall results).
Also, process measures are helpful as indicators that something is going wrong (or potentially going better than normal). Process measures change quickly (good ones can be close to real time), facilitating immediate remedies and immediate examination of what led to the problem, to aid in avoiding that condition in the future.
An analysis of unique recordings of the Muslim pilgrimage in Mina/Makkah, Saudi Arabia suggests that high-density crowd flows can turn “turbulent” and cause people to fall. The resulting eruptions of pressure release bear analogies with earthquakes and are de facto uncontrollable.
Entrance of the previous Jamarat Bridge, where up to 3 million Muslims perform the stoning ritual within 24 hours.
On the 12th day of Hajj, about two-thirds of the pilgrims performed the stoning ritual within just 7 hours.
Every system has variation. Common cause variation is the variation due to the current system. Dr. Deming increased his estimate of variation due to the system (common cause variation) to 97% (earlier in his life he cited figures around 80%). Special cause variation is due to some special cause that is not part of the system.
The control chart (among other things) helps managers avoid tampering (taking action on common cause variation as though it were a special cause). To improve results produced by common cause variation, the system itself must be changed; a systemic improvement approach is needed.
To take action against a special cause, that isolated special cause can be examined. Unfortunately that approach (the one we tend to use almost all the time) is the wrong approach for systemic problems (which Deming estimated at 97% of the problems).
That doesn’t mean it is not possible to improve results by treating every problem as a special event; examining each failure in isolation is just not as effective. Examining the system that produced those results is the better method. The control chart provides a measurement of the system: it shows what the process is capable of producing and how much variation is in the system now.
If you would like to reduce the variation, picking the highest data values (within the control limits) and trying to study them to figure out why they are so high is not effective. Instead, study the whole system and figure out what systemic changes to make. One method to encourage this type of thinking is asking “why” 5 times, which seeks to find the systemic reasons for individual results.
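One way to see what the chart does is a minimal individuals-chart sketch (not from the post; the data are hypothetical). Limits sit at the mean plus or minus 2.66 times the average moving range, the standard constant for a Shewhart individuals chart; only points outside the limits signal a special cause worth isolated investigation.

```python
# Shewhart individuals chart: control limits from the average moving range.
# The constant 2.66 is 3/d2 with d2 = 1.128 for subgroups of size 2.

def control_limits(data):
    """Return (LCL, mean, UCL) for an individuals control chart."""
    mean = sum(data) / len(data)
    moving_ranges = [abs(b - a) for a, b in zip(data, data[1:])]
    mr_bar = sum(moving_ranges) / len(moving_ranges)
    return mean - 2.66 * mr_bar, mean, mean + 2.66 * mr_bar

def special_causes(data):
    """Indices and values of points outside the control limits."""
    lcl, _, ucl = control_limits(data)
    return [(i, x) for i, x in enumerate(data) if x < lcl or x > ucl]

weekly_defects = [12, 15, 11, 14, 13, 12, 16, 13, 35, 14]
print(special_causes(weekly_defects))  # [(8, 35)]: only week 9's spike is special
```

Everything inside the limits is common cause variation: investigating any of those points individually is tampering, and only changing the system will shift them.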
Managing the Constraint is mostly about managing the non-bottleneck systems and making them “aware” of how fast they should work – when they should slow down, when they should stop, and when they should increase pace and by how much. The Drum-Buffer-Rope system provides this system-wide awareness.
The Bottleneck, or Constraint, acts as a Drum – it sets the rhythm that the whole system should follow. In Lean Manufacturing, this is also called “Takt Time.”
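A minimal sketch of the Drum-Buffer-Rope idea (not from the talk; all rates and the buffer size are hypothetical): the feeder releases work only while the queue in front of the constraint (the drum) is below the rope/buffer target, so non-bottleneck speed stops mattering.

```python
# Tick-based Drum-Buffer-Rope sketch: the feeder completes a unit every
# `feed_every` minutes, the drum (constraint) every `drum_every` minutes.

def run(minutes, drum_every=5, feed_every=3, rope=None, buffer_start=5):
    """Return the WIP queued at the constraint after `minutes` of operation.

    With `rope` set, the feeder holds back whenever the queue has already
    reached the buffer target; with rope=None it always releases work.
    """
    queue = buffer_start
    for t in range(1, minutes + 1):
        if t % feed_every == 0 and (rope is None or queue < rope):
            queue += 1                      # feeder releases a unit
        if t % drum_every == 0 and queue > 0:
            queue -= 1                      # drum consumes a unit
    return queue

print(run(480))          # no rope: 69 units of WIP pile up at the constraint
print(run(480, rope=5))  # rope: WIP stays at the buffer target (about 5)
```

Either way the drum finishes the same number of units; the rope only removes the useless inventory that a “keep everyone busy” feeder would build up.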
I demonstrated these ideas recently by taking an updated version of my XIT Sustained Engineering paper from the TOCICO in Barcelona to the Lean Design and Development conference and recasting all the exploitation and subordination steps as waste reduction instead.
This is the dilemma: “Optimize everything” conflicts with “Only optimize the bottleneck”. I like both approaches and have used them both successfully. How is it possible that two of my favourite techniques disagree?
I like the way the post looks at this question. I must admit, my personal view is that the conflict is not as stark as it may appear.
“Subordination happens first!” In the 5 focusing steps, the third step is to subordinate the rest of the system to the decision made in step 2 to fully exploit the capacity constrained resource. I had observed in my work with the XIT Sustained Engineering group (the subject of my paper for the conference), that the subordination actions always had to happen first before the constraint could be fully exploited. However, this is counter-intuitive given the order of the steps. As Eli reminded the audience, step 2 is “Decide what (and how) to exploit.” This then leads to a set of subordination decisions which make exploitation possible. Subordination always happens first.
Using the Theory of Constraints 5 focusing steps and the drum-buffer-rope solution for production flow problems, it was possible to increase the productivity of a sustained engineering department by more than 200%. In the final quarter of the study period, a 25% increase (elevation) of the capacity constrained resource produced a 25% increase in overall system throughput – just as the theory and model would predict.
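The one-for-one relationship between elevating the constraint and system throughput can be shown in a few lines. A minimal sketch (the stage names and capacities are hypothetical, not from the paper):

```python
# System throughput is governed by the slowest stage (the constraint);
# capacities are hypothetical tasks per week.
stages = {"triage": 40, "engineering": 20, "verification": 35}
throughput = min(stages.values())   # 20: engineering is the constraint

stages["engineering"] = int(stages["engineering"] * 1.25)  # elevate by 25%
print(min(stages.values()))         # 25: a 25% rise in system throughput
```

Note the same arithmetic shows why elevating a non-constraint does nothing: raising triage or verification capacity leaves `min()` unchanged.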