One of my pet peeves is when people say that a point outside the control limits is a special cause. It is not. It is an indication that a special cause likely exists, and that special cause thinking is the correct strategy to use to seek improvement. But that doesn’t mean there definitely was a special cause – it could be a false signal.
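How often a false signal occurs can be made concrete. For a stable process whose variation is roughly normal (an assumption I am adding here, not a property of every process), the chance that any single point lands outside three-sigma limits is only about 0.27% – small, but not zero, so over many points some false signals are expected. A minimal sketch:

```python
import math

# Probability that a single point from a stable, normally distributed
# process falls outside three-sigma control limits: P(|Z| > 3).
# Phi(3) = 0.5 * (1 + erf(3 / sqrt(2))), so P(|Z| > 3) = 1 - erf(3 / sqrt(2)).
p_false_signal = 1 - math.erf(3 / math.sqrt(2))

print(f"{p_false_signal:.4%}")  # roughly 0.27%
```

With, say, 25 points on a chart of a perfectly stable process, the chance of at least one false signal is already around 6–7%, which is exactly why a single out-of-limit point is a signal to investigate rather than proof.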
Similarly, a result that doesn’t signal a special cause (inside the control limits and not raising some other flag, such as a run of continually increasing points) does not mean a special cause is not present.
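Both kinds of signal mentioned so far – a point outside the limits and a run of increasing points – are mechanical checks, which is why a chart can apply them without judgement. The sketch below is a hypothetical illustration, not any standard rule set: it estimates sigma from the average moving range (the usual XmR-chart approach, with bias constant d2 = 1.128), and the run length of six is one common convention among several.

```python
def control_signals(data, run_length=6):
    """Flag out-of-limit points and long increasing runs, XmR-style."""
    n = len(data)
    mean = sum(data) / n
    # Estimate sigma from the average moving range rather than the sample
    # standard deviation, so a single wild point inflates the limits less.
    moving_ranges = [abs(data[i] - data[i - 1]) for i in range(1, n)]
    sigma = (sum(moving_ranges) / len(moving_ranges)) / 1.128
    ucl, lcl = mean + 3 * sigma, mean - 3 * sigma

    signals = [(i, "outside limits")
               for i, x in enumerate(data) if x > ucl or x < lcl]

    # Run rule: run_length consecutive strictly increasing points.
    rising = 1
    for i in range(1, n):
        rising = rising + 1 if data[i] > data[i - 1] else 1
        if rising >= run_length:
            signals.append((i, "increasing run"))
    return signals


measurements = [10.0, 10.2, 9.8, 10.1, 9.9, 10.0, 10.2, 9.7, 10.1, 25.0]
print(control_signals(measurements))  # flags the final point as outside limits
```

Note that the code can only say "this point is past a line we drew"; whether an assignable cause actually produced that point is a question about the process, not the chart.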
Control charts are useful because they help us maximize our effectiveness. We are biased toward using special cause thinking even when it is not the most effective approach. So the control chart is a good way to keep us focused on common cause thinking for improvement. It is also very useful in flagging when it is time to immediately start using special cause thinking (since timing is key to effective special cause thinking).
However, if there is a result that is close to the control limit (but inside – so no special cause is indicated) and the person who works on the process every day thinks, I noticed x (some special cause) earlier, they should not just ignore that. It very well could be a special cause that, because of other common cause variation, resulted in a data point that didn’t quite reach the special cause signal. Where the dot happened to land (just above or just below the control limit) does not determine whether a special cause existed.
The signal is just to help us systemically make the best choice of common cause or special cause thinking. The signal does not define whether a special cause (an assignable cause) exists or not. The control chart tool helps guide us to use the correct type of improvement strategy (common cause or special cause). But it is just a signaling device; it isn’t some arbiter of whether a special cause actually exists.
The biggest reason to be cautious about using special cause thinking when the control chart does not signal the likelihood of a special cause is that we generally use special cause thinking far too often (our psychology is already heavily biased toward this strategy). We want to reduce how often we apply special cause thinking because we will be more effective using common cause strategies. But that doesn’t mean we should ignore real special causes just because the data doesn’t quite signal a special cause.
The special cause signal means there is a high likelihood of a special cause, and given that signal, using special cause thinking is a profitable strategy. But people who have been trained, and have the experience, to understand variation (and thus resist the urge to jump to special cause thinking far too often) should not ignore real special causes.
It is wise to temper how aggressively you invest in investigating a special cause signal depending on your knowledge of the situation. A signal of a special cause isn’t proof that one exists. Don’t become so enamored with the signals that you throw out judgement. Judgement should be grounded in data, an understanding of variation, and an understanding of the process. But it is dangerous to apply tools as though they magically divine whether a special cause exists.
A control chart helps you manage processes. But knowledge of the actual work process is important in combination with an understanding of variation. They both are valuable. Each one on its own is much less effective.