Catalyzing innovation, problem solving, and discovery, the Second Edition provides experimenters with the scientific and statistical tools needed to maximize the knowledge gained from research data, illustrating how these tools may best be utilized during all stages of the investigative process. The authors’ practical approach starts with a problem that needs to be solved and then examines the appropriate statistical methods of design and analysis.
* Graphical Analysis of Variance
* Computer Analysis of Complex Designs
* Simplification by transformation
* Hands-on experimentation using Response Surface Methods
* Further development of robust product and process design using split plot arrangements and minimization of error transmission
* Introduction to Process Control, Forecasting and Time Series
The recipient of the 2008 William G. Hunter Award is Ronald Does. The Statistics Division of the American Society for Quality (ASQ) selects the recipient based on the attributes that characterized Bill Hunter’s (my father, John Hunter) career: consultant, educator for practitioners, communicator, and integrator of statistical thinking into other disciplines. In his acceptance speech Ronald Does said:
The first advice I received from my new colleagues was to read the book by Box, Hunter and Hunter. The reason was clear. Because I was not familiar with industrial statistics I had to learn this from the authors who were really practicing statisticians. It took them years to write this landmark book.
For the past 15 years I have been the managing director of the Institute for Business and Industrial Statistics. This is a consultancy firm owned by the University of Amsterdam. The interaction between scientific research and the application of quality technology via our consultancy work is the core operating principle of the institute. This is reflected in the type of people that work for the institute, all of whom are young professionals having strong ambitions in both the academic world and in business and industry.
The kickoff conference attracted approximately 80 statisticians and statistical practitioners from all over Europe. ENBIS was officially founded in June 2001 as “an autonomous Society having as its objective the development and improvement of statistical methods, and their application, throughout Europe, all this in the widest sense of the words.” Since the first meeting, membership has grown to about 1300 from nearly all European countries.
Since a full factorial test gathers additional data, it reveals all possible interactions, but as the numbers above show, there is a trade-off: more data means more information, but it also means a longer test duration. The minimum data requirements for full factorial are very high since you are running every possible combination.
Even when you use a full factorial test to get the same amount of information as a fractional factorial test, it takes more time, since you need more data to see statistically significant differences among the many experiments. You might wonder how a fractional factorial test can be accurate if interactions are possible.
Random interactions of high relevance are very rare, especially interactions among more than two factors. You should design tests that look for meaningful interactions based on true business requirements, rather than hoping for a random, low-influence interaction between a red button, a hero shot and a headline.
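As a minimal sketch of the run-count trade-off described above (the five two-level factors and the 2^(5-1) half-fraction are my own illustration, not from the text):

```python
from itertools import product

# Five two-level factors (illustrative names): each run is one combination.
factors = ["headline", "hero_shot", "button_color", "price_display", "layout"]

full = list(product([-1, +1], repeat=len(factors)))
print(len(full))  # 2**5 = 32 runs for the full factorial

# A half-fraction (2^(5-1), defining relation E = ABCD) keeps only the runs
# where the fifth factor's level equals the product of the first four.
half = [run for run in full if run[4] == run[0] * run[1] * run[2] * run[3]]
print(len(half))  # 16 runs: half the data, with main effects and
                  # two-factor interactions still estimable
```

Doubling the factor count roughly squares the full-factorial run count, which is why fractional designs matter once tests get large.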
The traditional approach to optimizing a product or process using computer simulation is to evaluate the effects of one design parameter at a time. The problem with this approach is that interactions between design factors and second-order effects are likely to result in a locally optimized design that will provide far less performance than the global optimum. Kodak researchers use DOE to develop tests that examine first-order, second-order, and multiple factor effects simultaneously with relatively few simulation runs. The result is that the analyst can iterate to a globally optimized design with a far higher level of certainty and in much less time than the traditional approach.
By using DOE to drive CFD, Kodak researchers were able to optimize the design of the printhead in considerably less time than competitors. The advantages of simulation were especially apparent late in the project, when researchers discovered a better ink formulation for one of the colors.
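A toy illustration (not the Kodak simulation; the response function and levels are invented) of how one-factor-at-a-time tuning can stop short of the optimum when factors interact:

```python
# Toy response surface with a strong interaction between the two factors.
def response(x, y):
    return 10 - (x + y - 3) ** 2 - 3 * (x - y) ** 2  # best at x = y = 1.5

levels = [0.0, 0.5, 1.0, 1.5, 2.0]

# One-factor-at-a-time: tune x with y fixed at 0, then tune y at that x.
best_x = max(levels, key=lambda x: response(x, 0.0))
best_y = max(levels, key=lambda y: response(best_x, y))
ofat = response(best_x, best_y)

# Factorial-style sweep: evaluate every level combination together.
full = max(response(x, y) for x in levels for y in levels)

print(ofat, full)  # 7.0 10.0 -- OFAT settles below the global optimum
```

Because the interaction term tilts the surface, the best x at y = 0 is not the best x at the true optimum, so the one-at-a-time path converges to a local compromise.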
Another difficulty in industrial experimentation is the existence of interactions. As has been stated, manufacturing processes are complex with many factors involved. In many processes these factors interact. This is particularly so for continuous processes such as plating or sputtering. Saying that the factors interact means more than that they are related to each other. It means that the effect of one (or more) factors on the response variable(s) changes when one (or more) other factor(s) changes its value.
In order to detect interactions and understand the nature of their effects, it is necessary to combine the interacting factors in the same experimental runs. The problem is that one does not necessarily know in advance whether the interactions exist. Sometimes they are predictable from theory. Sometimes they are discovered when the process behaves ‘strangely’.
In addition to their efficiency, factorial designs also offer the only method of detecting interactions through experimentation. Because numerous factors can be combined in the same series of experimental runs, the interactions can be detected and the nature of their effects can be evaluated when they are present.
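For illustration, a sketch of how a two-factor interaction is computed from a 2×2 factorial; the factor names and response values below are invented:

```python
# 2x2 factorial: temperature (lo/hi) x pressure (lo/hi), one response each.
# Hypothetical yields: the effect of temperature reverses with pressure.
y = {(-1, -1): 60.0, (+1, -1): 72.0,   # at low pressure, raising temp helps
     (-1, +1): 68.0, (+1, +1): 64.0}   # at high pressure, raising temp hurts

# Main effect of temperature: average change as temp goes lo -> hi.
temp_effect = ((y[(+1, -1)] - y[(-1, -1)]) + (y[(+1, +1)] - y[(-1, +1)])) / 2
# Interaction: half the difference between the temperature effect at high
# pressure and the temperature effect at low pressure.
interaction = ((y[(+1, +1)] - y[(-1, +1)]) - (y[(+1, -1)] - y[(-1, -1)])) / 2

print(temp_effect)  # 4.0  -- looks modest on average...
print(interaction)  # -8.0 -- ...because the effect reverses with pressure
```

Varying temperature alone at a single pressure would have reported only one of the two opposing effects; running both factors together is what exposes the reversal.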
The paper also explains analytic and enumerative studies. Dr. Deming stressed the importance of understanding the distinction between the two.
We are fielding a Design of Experiments concept to ensure we conduct the right amount of testing — not too much or too little, but just right. We will field this approach in phases as we must train our people and put the right tools in place. However, it is already showing great promise.
In a recent Benefield Anechoic Facility test, the 412th Electronic Warfare Group used Design of Experiments methodology to cut a two-month program to three weeks. This schedule reduction translated directly into savings and helped reduce the concept-to-fielding cycle time while still ensuring the system was thoroughly tested. While building these capabilities is critical, the most critical piece of the puzzle is our people. We must continue to develop engineers, pilots, navigators, program managers and maintainers to test these systems and “find stuff so the warfighter doesn’t.”
You may not realize that I first met Bill 38 years ago, when he was in Singapore helping us set up the first school of engineering in the country. He persuaded me to go to the graduate school at UW-Madison and I daresay that’s the best advice I ever got in my whole career. Now when I come to think of it, what Bill stood for in his lifetime has not been, and never will be, out of date. He had advocated the use of statistical thinking and the systems approach, which if anything is even more critical today in handling issues such as global warming and government effectiveness.
Also, statistical design of experiments has assumed an increasingly important role in performance improvement and optimization in the face of constrained resources, again something always in the minds of engineers, managers and business leaders. From time to time there are others who package statistical tools under labels Bill might not even have seen himself, such as “Design for Six Sigma”, but the underlying idea is still the same: recognize the existence of variation, and the earlier you anticipate it and do something about it, the better off you will be in the end.
Bill’s zeal in spreading the message and sharing his knowledge and expertise with people in other parts of the world is well known; I would even say that he had recognized that “the world is flat” way before the likes of Tom Friedman discovered the reality of globalization!
So that’s to share my thoughts with you, having been honored by the Bill Hunter award. I am copying this to Stu, and also to Doug, who chairs the committee for this award. I really enjoy the professional association and friendship with you all.
I had not realized Dad was helping set up the first school of engineering in Singapore. This is the kind of thing I mentioned in The Importance of Management Improvement, where I mention people telling me about the positive impact Dad had on their lives.
“A lot of big companies are developing their own software engineering variance of Six Sigma training,” said Siviy, “putting software-specific examples into the normal Six Sigma curriculum.” However, she said, it’s early in the adoption curve. “In the software world there is a real lack of case studies that show applications of Six Sigma in software engineering,” she said. And those that use Six Sigma in software are often reluctant to share examples because they consider it a competitive advantage.
Still, Siviy said, “At a lot of software conferences now you see a sprinkling of presentations that somehow touch on Six Sigma or Lean, and the quality and depth of questions have evolved tremendously. In general, and not just in Six Sigma, as the [software] industry matures you see a wave of interest in measurement and analytical techniques.”
McKesson is a prime example. “Measurement is key,” Childers said. “What you can’t or don’t measure, you don’t know.”
The Software Engineering Institute at Carnegie Mellon University has great materials. There is a danger in using those materials to become overly bureaucratic but the material was developed out of an excellent understanding of quality management (way back when that was the way this stuff was referred to). David Anderson provides some good insights, see: Stretching Agile to fit CMMI Level 3
Humans are very good at detecting patterns, but rather poor at detecting randomness. We expect random incidents of cancer to be spread homogeneously, when in fact true randomness results in random clusters, not homogeneity. It is a mistake for an experiment to consider a pool of 47,000 possibilities, and then only report on the 7 cases that seem interesting.
A proper experiment states its hypothesis before gathering evidence and then puts the hypothesis to the test. Remember when you did your seventh grade science fair experiment: you made up a hypothesis first (“Hamsters will get fatter from eating Lucky Charms than Wheaties”) and then did the experiment to confirm or refute the hypothesis. You can’t just make up a hypothesis after the fact to fit the data.
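A simulation sketch of the 47,000-cases problem (the sample size and cutoff are arbitrary choices of mine): when nothing at all is going on, a naive screen still flags dozens of “interesting” cases by chance.

```python
import random

random.seed(1)

# 47,000 'studies' in which the truth is: no effect whatsoever.
# Each study reports the mean of 30 draws from a standard normal.
trials = 47_000
n = 30
cutoff = 0.6  # naive 'looks interesting' threshold on the sample mean

interesting = 0
for _ in range(trials):
    mean = sum(random.gauss(0, 1) for _ in range(n)) / n
    if abs(mean) > cutoff:
        interesting += 1

print(interesting)  # dozens of 'interesting' cases from pure noise
```

Reporting only those flagged cases, and inventing a hypothesis for them afterward, is exactly the after-the-fact reasoning the paragraph above warns against.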
This is an excellent article discussing very common errors in how people use data. We have tendencies that lead us to draw faulty conclusions from data. Given that, it is important to understand the common mistakes so that we can counter these natural tendencies.
Website Optimizer, Google’s free multivariate testing application, helps online marketers increase visitor conversion rates and overall visitor satisfaction by continually testing different combinations of site content (text and images).
Rather than sitting in a room and arguing over what will work better, you can save time and eliminate the guesswork by simply letting your visitors tell you what works best. We’ll guide you through the process of designing and implementing your first experiment. Start optimizing your most important web pages and see detailed reports within hours.
Google provides an online slide show with audio (a good example of one way to share information online, in my opinion). The tool seems to limit experimental options to what is on the page (it does not appear, for example, that one variable could be current customer v. new visitor…). Still, it looks like a very easy way to do some simple multi-factorial experiments. Google offers a list of partners [the link that Google broke was removed] for those interested in consulting and more advanced features (and for those experts reading this, you can apply to be a partner).
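For a sense of how quickly combinations multiply in a multivariate test like this, a sketch with invented page sections and variants:

```python
from itertools import product

# Hypothetical page sections and their candidate variants.
variants = {
    "headline": ["Save time", "Save money", "Try it free"],
    "hero_image": ["photo_a", "photo_b"],
    "button_text": ["Buy now", "Get started"],
}

combos = list(product(*variants.values()))
print(len(combos))  # 3 * 2 * 2 = 12 page versions to split traffic across
```

Each added section or variant multiplies the count, which is why the traffic a page receives limits how many combinations a test can distinguish in a reasonable time.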