One factor at a time (OFAT) Versus Factorial Designs

Guest post by Bradley Jones

Almost a hundred years ago, R. A. Fisher’s boss published an article espousing OFAT (one factor at a time). Fisher responded with an article of his own laying out his justification for factorial design. I admire the courage it took to contradict his boss in print!

Fisher’s argument was mainly about efficiency: in the same number of trials, you could learn as much about each of many factors as you would learn about a single factor studied alone. Saving money and effort is a powerful and positive motivator.
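To put a number on that efficiency argument, here is a minimal sketch of my own (not from Fisher or this post): it compares the standard errors of the main-effect estimates from a full 2^3 factorial with those from a one-factor-at-a-time plan spending the same eight runs. The particular OFAT layout and the noise level are assumptions made purely for illustration.

```python
# Illustrative comparison (assumed designs and noise level, not from the post):
# precision of main-effect estimates for a full 2^3 factorial vs. an OFAT plan
# that uses the same 8 runs.
import numpy as np

sigma = 1.0  # assumed run-to-run noise standard deviation

# Full 2^3 factorial: all 8 combinations of low (-1) and high (+1) levels.
factorial = np.array([[a, b, c] for a in (-1, 1) for b in (-1, 1) for c in (-1, 1)])

# An OFAT plan spending the same 8 runs: a duplicated baseline, then each
# factor raised to its high level (twice) while the others stay at baseline.
ofat = np.array([[-1, -1, -1]] * 2 + [[1, -1, -1]] * 2 +
                [[-1, 1, -1]] * 2 + [[-1, -1, 1]] * 2)

def main_effect_se(design):
    """Least-squares standard errors of the three main-effect coefficients."""
    X = np.column_stack([np.ones(len(design)), design])  # intercept + x1, x2, x3
    cov = sigma ** 2 * np.linalg.inv(X.T @ X)
    return np.sqrt(np.diag(cov))[1:]                      # drop the intercept

print("factorial main-effect SEs:", main_effect_se(factorial))  # sigma/sqrt(8), about 0.354 each
print("OFAT main-effect SEs:     ", main_effect_se(ofat))       # sigma/2 = 0.50 each
```

In this particular comparison the factorial’s standard errors are smaller by a factor of about 1.4, because every one of the eight runs contributes to every effect estimate; an OFAT plan would need more runs to match that precision.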

The most common argument I read against OFAT these days has to do with its inability to detect interactions and the possibility of ending the investigation at suboptimal factor settings. I admit to using these arguments myself in print.

I don’t think these arguments are as effective as Fisher’s original argument.

To play the devil’s advocate for a moment, consider this thought experiment. You have to climb a hill whose ridge runs on a line from southwest to northeast, but you are only allowed to take steps due north or south or due east or west. Though you will have to make many zigzags, you will eventually make it to the top. If you noted your altitude at each step, you would have enough data to fit a response surface.
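Just to make that picture concrete, here is a toy sketch of such a climb; the hill function, starting point, and step-size rule are my own assumptions, invented only to illustrate the zig-zag path.

```python
# Toy OFAT-style hill climb (assumed hill, start point, and step rule):
# only steps due north/south or due east/west are allowed, yet the climber
# still zig-zags to the top of a ridge running from southwest to northeast.

def altitude(x, y):
    """Assumed hill: a ridge along the x = y (SW-NE) diagonal, with its peak at (5, 5)."""
    return -((x - y) ** 2) - 0.1 * ((x + y - 10) ** 2)

x, y = 0.0, -3.0                  # start off to one side of the ridge
step = 0.5
path = [(x, y, altitude(x, y))]   # altitude noted at every step

for _ in range(500):
    # Candidate moves: one step due east, west, north, or south.
    moves = [(x + step, y), (x - step, y), (x, y + step), (x, y - step)]
    best = max(moves, key=lambda p: altitude(*p))
    if altitude(*best) > altitude(x, y):
        x, y = best
        path.append((x, y, altitude(x, y)))
    elif step > 1e-3:
        step /= 2                 # no axis-aligned step helps; try shorter steps
    else:
        break                     # effectively at the top

print(f"zig-zag steps taken: {len(path) - 1}")
print(f"final point ({x:.2f}, {y:.2f}), altitude {altitude(x, y):.4f} (the peak is 0 at (5, 5))")
# The (x, y, altitude) triples collected in `path` are exactly the kind of
# data one could later use to fit a response surface.
```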

Obviously this approach is very inefficient, but it is not impossible. Don’t mistake my intent here: I am definitely not an advocate of OFAT. Rather, I would like to find more convincing arguments to persuade experimenters to move to multi-factor designs.

Related: The Purpose of Factorial Designed Experiments - Using Design of Experiments - articles by R.A. Fisher - articles on using factorial design of experiments - Does good experimental design require changing only one factor at a time (OFAT)? - Statistics for Experimenters


6 Responses to One factor at a time (OFAT) Versus Factorial Designs

  1. Matt Wrye says:

    I think it depends on what you are trying to understand.

    For example, if the purpose is to understand a new tool or process, then a factorial design could be beneficial. A newly received molding tool can be set up in a molding press and a factorial design run to understand which settings give the best results from the tool.

    If a person is problem solving, then I believe one factor at a time is best. My problem-solving mentor taught me that if you have to design a factorial with more than 3 factors, then you haven’t done due diligence in truly understanding the problem.

    I know that 2 or 3 factors is still a factorial design, but the number of trials is small enough to be easily managed. I really consider it a factorial design when the number of trials starts to explode, or when you have to make substitutions so that you can’t see all the factors because some are buried in together.

  2. I push for both efficiency and interaction detection. For non-stat people, the interaction argument is easier to understand. In fact, Brad, I use the mountain example (but my rising ridge goes from SE to NW 🙂 ) to show that the 1FAT (OFAT looks like zero FAT to me…) approach leads to what LOOKS like an optimum but is not - the worst kind of information gathering, because it effectively precludes more experiments.

    The efficiency argument really comes from two angles. The abstract one is the standard deviation of the effect estimates for a DOE vs. 1FAT, especially for fractional factorials (FFs) in the same number of runs. However, to me much more traction is gained when people begin to see that they really can study 5 or 10 factors AS A SYSTEM rather than 1 or 2 at a time. This encourages experimenters to collaborate (more factors get considered) instead of hiding in a corner and deciding on their own which small number to study.

    In fact, it was just earlier today that a “client” (a student at a corporation) presented the results of a 4-factor experiment. The “last” factor, machine, contributed almost all of the variation; the 3 “engineering factors” contributed far less. A big surprise to the company.

  3. Dave Olson says:

    One can argue very effectively against OFAT as follows: think about a 3-factor process for simplicity. Draw a cube, with each axis representing the range of values for one of the factors. Every point in that cube is a “process,” and responses, in principle, will differ at each point.

    Without doing a DOE to create a model of the connection between factors and responses, how can one optimize even such a simple linear, non-interactive process? What does an OFAT really give us? A list of trials that we tend to look at individually so that we can go shopping for a better process than the one we have. Do we create a predictive function for the process for each response? The answer to that is almost always a resounding “no,” because the basic philosophy behind an OFAT has nothing to do with such concerns, but is mostly about getting lucky and making enough of an improvement to get your boss off your back.

    I’ve seen the carnage of many an OFAT run by really bright and well-educated people, and find that they are most often pretty random and disorganized attempts to wander through the forest at night, hoping to bump into the biggest tree in the land.

    Efficiency arguments often, in my experience, fall on deaf ears because there seems to be a presumption that “my profound understanding of the subject matter” will lead in nearly a direct route to process perfection. If small incremental improvements are enough, most of us can do a few OFAT runs and make things a little better, but if we want to optimize (and to understand the cause and effect between factors and responses) there is no replacement for DOE.

    I sum this up with a quote from a book by Madhav Phadke written 25 years ago:

    “In practice, many design-improvement experiments, where only one factor is studied at a time, get terminated after studying only a few control factors because both the R & D budget and the experimenter’s patience run out. As a result, the quality improvement turns out to be only partial ….”

    (Phadke, M., Quality Engineering Using Robust Design, Prentice-Hall, New Jersey, 1989, p. 80)

  4. A. Ames says:

    It makes sense to do one variable at a time until you learn how to control the inputs, and at least roughly where they need to be set.

    To optimize a process you should study as many variables as possible, if only to prove there are no significant interactions.

    Variables in mechanical systems tend to be independent, whereas chemical systems can have many interactions.

    Imagine a process that produces an aluminum silicate zeolite by alkali (Na/K) dissolution of silica with leaching of aluminum from TiO2, where the concentration of zeolite determines a subsequent coating viscosity in a polymer matrix. Such systems were routinely optimized with 8-factor central composite designs, and all interactions were significant.

  5. John Hunter says:

    >>> It makes sense to do one variable at a time until you learn how to control the inputs, and at least roughly where they need to be set. <<<

    Adjusting only one variable at a time is time consuming and risky. There is lots of work on how to find optimum results. Knowledge of the system is important (to determine which variables are important and what values for those variables are reasonable). I would suggest that anyone who wants to adjust only 1 variable at a time, and only then move to multivariate experimentation, read more about design of experiments and evolutionary operation (EVOP) (Evolutionary Operation by George E. P. Box and Norman Draper).

    As stated above, knowledge of the system being improved is critical. If it is true that you can vary only one variable to find an optimal point and then move to multivariate experimentation, that is fine; it is just often not the case, and it is risky to do if the expert isn’t as wise as they think they are.

  6. Pingback: Understanding Design of Experiments (DoE) in Protein Purification » Curious Cat Management Improvement Blog
