Wednesday, January 16, 2019
Six Sigma Heretic: The Truth About Control Charts

Don't believe what you see, at least at first.

Many people go through a point in their lives where they question the beliefs they hold most dear. Then there are those who question the entire basis for statistical process control (SPC) once they have learned its statistical underpinnings. I can't help you with the former, but I have something to say to the latter after the break.

There are lots of ways to mess up control charts, but today I'd like to discuss a concern I hear from students and clients once they have learned some statistics and then encounter one of the situations below. Interestingly, although one example involves a very large sample size (for continuous data) and the other a fairly small one (for discrete data), both problems stem from the same misunderstanding about control charts. In either case, you might be tempted either to spend money trying to control a process, only to find yourself losing more to the increased variation, or to abandon control charts altogether as tools that don't work for you.

Let me give you two scenarios.

Scenario 1

You have a high-volume process that automatically samples 50 units for your control chart. You decide to use an x-bar and s chart because you know s is a better chart to use when n > 7. This is what you get:


Figure 1 - A standard x-bar and s chart for n=50
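
To see why Figure 1 looks the way it does, here is a minimal sketch in Python, using simulated data since the chart's actual data aren't available (the process mean of 100 and sigma of 2 are my assumptions). It computes standard x-bar and s limits for subgroups of n = 50; note that the x-bar limits are three standard errors of the mean, 3·sigma/sqrt(n), so they become very tight as n grows:

```python
# Sketch: x-bar and s chart limits for subgroups of n = 50.
# Simulated data; the process mean (100) and sigma (2) are assumptions.
import numpy as np
from math import sqrt, lgamma, exp

def c4(n):
    # Unbiasing constant so that s-bar / c4 estimates process sigma.
    return exp(lgamma(n / 2) - lgamma((n - 1) / 2)) * sqrt(2 / (n - 1))

rng = np.random.default_rng(1)
n, k = 50, 25                       # subgroup size, number of subgroups
data = rng.normal(100, 2, (k, n))   # an in-control process

xbar = data.mean(axis=1)            # subgroup means
s = data.std(axis=1, ddof=1)        # subgroup standard deviations
grand_mean, s_bar = xbar.mean(), s.mean()

sigma_hat = s_bar / c4(n)           # estimated process sigma
ucl = grand_mean + 3 * sigma_hat / sqrt(n)   # limits shrink with sqrt(n)
lcl = grand_mean - 3 * sigma_hat / sqrt(n)
print(f"x-bar limits: {lcl:.3f} to {ucl:.3f}")   # roughly 100 +/- 0.85
```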


Is your process really that out of control? If so, should you have been reacting and adjusting the process all that time for those out-of-control signals? If you tell your production group to do that, they won’t be able to make anything at all—they will be spending all their time making tiny adjustments to the process.

Dang it! I guess SPC doesn’t work at our business.

Scenario 2

You have a low-volume process that only produces 100 units a day. This is a bleeding-edge process, and you're still doing product control as you learn more about the process variables. You only have a pass-fail test, so you're using a p-chart. However, you have just had an SPC class, and your instructor told you that the control limits on the p-chart are based (approximately, because the data are discrete) on the 99.73-percent confidence interval around the average failure rate. Because you measure everything you produce, there should be no sampling error in your estimate of the failure rate; you just measured the entire population, right? And if there's no error in the estimate of your nonconformance rate, does that mean any day that isn't exactly at your overall average is a day you're out of control, as in the chart below? Time for some heads to roll, I guess.


Figure 2 - A p-chart with limits +/- 0.5 units around the historical average
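
For reference, the textbook p-chart limits the instructor was describing look like this. The numbers below are hypothetical (a 5-percent average nonconformance rate and n = 100 units per day, neither taken from the scenario); notice that the limits are plus-or-minus three binomial standard errors wide, not zero-width, even when every unit is inspected:

```python
# Standard p-chart limits; p_bar and n are illustrative assumptions.
from math import sqrt

p_bar, n = 0.05, 100
se = sqrt(p_bar * (1 - p_bar) / n)       # binomial standard error
ucl = p_bar + 3 * se
lcl = max(0.0, p_bar - 3 * se)           # proportions can't go below zero
print(f"LCL = {lcl:.4f}, UCL = {ucl:.4f}")   # about 0 to 0.115
```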

Clues

Well, obviously there’s something screwy with these control charts. I’ll give you a couple of hints to see if you can figure out what is behind both of these scenarios. First, let’s say that in Scenario 1, instead of measuring all 50, you had only measured five for each point. Let’s keep it as an x-bar and s chart, even though with n = 5 we would probably choose an x-bar and r chart. That way we’re comparing the same things. Using the exact same data, but with n = 5 randomly chosen from the 50:


Figure 3 - A standard x-bar and s chart but using n = 5 from the 50
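
Part of what you're seeing is simple arithmetic: with the same estimated process sigma, the x-bar limits are 3·sigma/sqrt(n), so dropping from n = 50 to n = 5 widens them by a factor of sqrt(10), about 3.16. A quick sketch (the grand mean and sigma are assumed values, not the chart's):

```python
# How subgroup size alone changes x-bar limit width (values assumed).
from math import sqrt

grand_mean, sigma_hat = 100.0, 2.0
for n in (50, 5):
    half_width = 3 * sigma_hat / sqrt(n)
    print(f"n = {n:2d}: limits = {grand_mean:.1f} +/- {half_width:.3f}")
```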

Wow! That looks pretty good. That’s your first clue.

The second hint is from Scenario 2. What population are we really tracking here? Is it the nonconformance rate we had today? What’s going on?

We first need to review the purpose of the control chart. Control charts are a powerful tool for identifying whether a process is subject only to common cause variability (random variability that's common to the process) or to both common and special cause variability (some destabilizing effect is present in addition to the usual process variation). The whole reason for doing that is to avoid two errors. Statisticians call them alpha and beta, or producer's and consumer's risk. We can think of them as "reacting when I should not" and "not reacting when I should."

Why avoid reacting when I should not? Well, if adjusting the process costs money, then we waste money that we didn’t need to spend. But, as Dr. W. Edwards Deming showed, what’s even more important is that adjusting a process that’s subject only to common cause variability will actually add to the process variability. A process only affected by common cause variability is in statistical control.
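
Deming demonstrated this with his famous funnel experiment. Here's a small simulation of the same idea (my own sketch, with assumed parameters): "correcting" after every result of a purely common-cause process roughly doubles the variance compared with leaving it alone.

```python
# Tampering with a stable process: adjust opposite to each deviation
# (Deming's funnel, rule 2). Parameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
noise = rng.normal(0, 1, 10_000)   # pure common-cause variation

adjusted = np.empty_like(noise)
offset = 0.0
for i, e in enumerate(noise):
    adjusted[i] = offset + e       # result under the current adjustment
    offset -= adjusted[i]          # "correct" for the deviation just seen

print(f"variance, hands off:  {noise.var():.3f}")     # about 1.0
print(f"variance, tampering:  {adjusted.var():.3f}")  # about 2.0
```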

And of course we want to avoid not reacting when we should—if something is happening, we want to know about it as soon as possible. A process affected by special and common causes is out of statistical control; we can’t predict what’s going to happen next.

When you get right down to it, when to react or not is an economic question. When Walter Shewhart invented the control chart, he used the statistics as a heuristic (a cocktail-party word for a decision-making rule) for balancing these two errors. The reason we use three standard errors (the 99.73-percent confidence interval) on most control charts is that it takes a pretty clear difference before we adjust a process, which minimizes reacting when we shouldn't. At the same time, when we're pretty sure there has been a change in the process, we know that we should investigate and react.
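
If you want to check where that 99.73 percent comes from, it's simply the coverage of a normal distribution within three standard errors of its mean:

```python
# Coverage of a normal distribution within +/- 3 standard errors.
from statistics import NormalDist

coverage = NormalDist().cdf(3) - NormalDist().cdf(-3)
print(f"{coverage:.4%}")  # 99.7300%
```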

Alright, so if that is what we’re doing with control charts, why are they not working in our two scenarios?

Our purpose is to use the statistics as a heuristic for making economic decisions. We