You must balance risk and benefits when determining acceptability.
So I thought I was done with measurement system analysis after my last column, but I just finished reading Don Wheeler’s June 1 column, “Is the Part in Spec?” and the first thing I thought was, “Well, that was… complicated and ultimately unhelpful in answering the article’s title question.” I like a diversity of viewpoints, but they have to make sense. Does Wheeler’s? Let’s take a closer look.
To be fair, I corresponded with Wheeler as I wrote this response to make sure I was accurately portraying his viewpoint: namely, that the article is about whether a particular part conformed, given a particular measurement. That is exactly the point I find easy to miss in the article as written. The way I see it, the article talks about (and would be interpreted as) changing production limits for the process based on measurement error. However, doing so could drive some bad decisions, even if this was not what Wheeler intended. I showed the article to a number of people, and that is how they interpreted it as well.
To determine if a part is conforming to spec, we must understand the measurement system, among other important things. Repeated measurements will give us an estimate of the measurement error, σe = 54.458. So far so good. But then (to my way of thinking), the line of reasoning goes off the rails.
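If you want to see the mechanics of that first step, here is a minimal sketch in Python. The repeated readings are invented purely for illustration; only the 0.675 multiplier for the probable error and the column’s σe of 54.458 are taken from the source material.

```python
import statistics

# Hypothetical repeated readings of a single part on the same gauge.
# These values are invented for illustration; the sigma_e of 54.458 quoted
# above comes from Wheeler's own repeated-measures data, not from these numbers.
readings = [2512.0, 2461.0, 2548.0, 2497.0, 2430.0, 2519.0]

sigma_e = statistics.stdev(readings)   # estimate of measurement error
probable_error = 0.675 * sigma_e       # Wheeler's "probable error"
print(f"sigma_e = {sigma_e:.1f}, probable error = {probable_error:.1f}")
# With sigma_e = 54.458, the probable error works out to about 36.76,
# which is the 36.759 used later in this column.
```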
Wheeler’s article could very well give someone the idea that they need to set their manufacturing specifications to some tighter limit to ship conforming product, given a certain measurement error. Nothing could be further from the truth. As process engineers know, changing a manufacturing specification doesn't affect the process output; changing the process affects the process output.
There are two processes that affect the as-measured product conformance to customer specification—the manufacturing and the measurement processes. The manufacturing process could be helped first by attaining control, if you haven’t already, and second by decreasing variability around the customer target through designed experimentation. But before you do that, you probably should deal with the measurement process. Although that is not strictly necessary to make improvements, a gauge that is highly variable (but in control) will increase the sample size needed to detect improvements and thus make improvement more expensive.
Like any other process, a measurement system must show its capability to meet requirements. It also must show control, as I describe in my measurement system analysis (MSA) articles listed at the end of this column. Capability for measurement systems is determined in regard to being able to measure within the specification limits.
If you have a customer specification to which your gauge is not capable of measuring, setting tighter manufacturing limits will only result in you internally scrapping a lot of product that is probably within specification. Instead, if you have a gauge that is incapable of measuring to the customers’ specification—here comes a radical idea—consider fixing the measurement system or buying a new one.
This course of action should be obvious. Why is it not when using Wheeler’s approach and generating manufacturing limits? Because there is no calculation of the measurement system’s capability to measure to a specification. All the calculations of watershed specifications and probable errors so far remove the reader’s focus from the reality of measurement error that by the time you get through calculating such things and someone walks by and suggests buying a new gauge, you say, “Wait, what?”
Wheeler’s notion seems to allow for the backward view that changing a specification will magically cause the process to produce product that meets that specification. This is a throwback to the days before the “Red Bead Experiment” showed that changing the specification regardless of inherent process variation only stresses out our “willing workers.” In my experience, an earnest desire to meet a tighter specification does not, in fact, have an effect on the process output.
Wheeler's article doesn’t give a specification for the viscosity, so let’s make some up to play with and see how it works. Let’s say that, regardless of viscosity target, customers expect ±175 centistokes around that target as the tolerance. Let’s say one of the products is targeted to 2,500 centistokes.
Now that you have read Wheeler’s article, is your gauge even capable of making a decision about conformance? Or as your boss would ask, “Can you use it?”
If you are like most people, at this point you will realize that you don’t know how to answer that question, given the information in the article. Wheeler doesn’t answer that question. And that is the question your boss is going to want answered.
Wheeler tells us, “If you tighten the watershed specifications by three probable errors on each end, then all of the product you will end up shipping will have at least a 99-percent chance of conforming.” (This is where I get the idea that someone might think that tightening up the specifications is the way to ship conforming parts—a leap, I’ll grant you.)
All right, let’s follow his advice and see where it leads. You go to production and find out that the current product is in control with an as-measured capability of Cp = Cpk = Cpm = 1 (mean = 2,500, as-measured standard deviation = 58.324, normal distribution). This means that the as-measured spread, taken directly from the lab database and including both product and measurement error, exactly fills the customer’s specification width, centered and on target. You are currently making 99.73 percent of the product in specification.
Now we will use Wheeler’s calculations to generate the watershed specifications: 2,328.5 and 2,671.5. Coming within three probable errors on each end, based on our short repeated-measures study (3 × 36.759 = 110.277), means moving our internal rejection specifications to a lower limit of 2,438.777 and an upper limit of 2,561.223. If we use those as our internal rejection limits, we will now be making 29.385-percent scrap, assuming a normal distribution.
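If you want to check that arithmetic yourself, here is a small sketch under the stated assumptions (in-control process, mean = 2,500, as-measured standard deviation = 58.324, normal distribution). The limits and percentages are the ones quoted above.

```python
from math import erf, sqrt

def norm_cdf(x, mu, sigma):
    """Cumulative probability of a normal(mu, sigma) distribution at x."""
    return 0.5 * (1.0 + erf((x - mu) / (sigma * sqrt(2.0))))

mu, sigma = 2500.0, 58.324   # as-measured process: in control, Cp = Cpk = Cpm = 1
pe = 36.759                  # probable error from the repeated-measures study
lo = 2328.5 + 3 * pe         # lower watershed spec pulled in by three probable errors
hi = 2671.5 - 3 * pe         # upper watershed spec pulled in by three probable errors

in_customer_spec = norm_cdf(2675.0, mu, sigma) - norm_cdf(2325.0, mu, sigma)
scrap_at_tightened = 1.0 - (norm_cdf(hi, mu, sigma) - norm_cdf(lo, mu, sigma))

print(f"tightened limits: {lo:.3f} to {hi:.3f}")                     # 2438.777 to 2561.223
print(f"in customer spec as measured: {in_customer_spec:.2%}")       # ~99.73%
print(f"scrap against tightened limits: {scrap_at_tightened:.2%}")   # ~29.39%
```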
Wait, what?
The process going in was already making 99.73 percent in specification with the current gauge variability, so that 29.385-percent scrap is almost certainly all in specification. So why are we scrapping it, when it neither measures out of spec nor do we have any reason to believe the product itself is really out of spec? Because the manufacturing specs told us to.
How can this be? It fundamentally doesn’t make any sense to me to move internal rejection limits based on gauges with high measurement error in the absence of understanding the process variability. This approach is confusing, backwards, and, as it turns out, unnecessary.
I would use that measurement error of 54.458 and compare it to the customer’s specifications by calculating a %R&R.
We would discover that the viscosity gauge had a %R&R of 80.1 percent. So, about 80 percent of the customer specification is taken up by the measurement device itself. Still, the process as it stands, variable gauge and all, is minimally capable of producing product that meets the specification. Presumably, that is because the true product variability in the absence of measurement error is pretty low, so even when combined with that large measurement error, we still make almost everything in specification as measured.
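One common convention, the traditional 5.15σ study width compared to the full tolerance, reproduces that 80.1 percent figure; here is a minimal sketch assuming that convention and our made-up ±175 tolerance.

```python
sigma_e = 54.458                  # measurement error from the repeated-measures study
tolerance = 2 * 175.0             # made-up customer tolerance: +/-175 centistokes
pct_rr = 100.0 * (5.15 * sigma_e) / tolerance   # traditional 5.15-sigma gauge spread
print(f"%R&R = {pct_rr:.1f}%")    # ~80.1% of the specification consumed by the gauge
```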
By way of illustration, something like the graph in figure 1 is really going on.
Figure 1: Product viscosity
Note that the “real” product viscosity is tightly conforming to the nominal, that 80 percent of the spec is taken up by the measurement error alone, and that the as-measured variability is minimally capable at Cp = Cpk = Cpm = 1. (At this point, you should be thinking, “Thank goodness standard deviations are not additive!”)
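That parenthetical is the whole ballgame: variances add, standard deviations don’t. Assuming the product variation and the measurement error are independent, a quick sketch shows how little “real” product variation is implied by these numbers.

```python
from math import sqrt

sigma_total = 58.324   # as-measured standard deviation (product + measurement combined)
sigma_e = 54.458       # measurement error alone
# Variances add for independent sources, so the product-only variation is the
# square root of the difference of the variances.
sigma_product = sqrt(sigma_total**2 - sigma_e**2)
print(f"product-only sigma = {sigma_product:.1f}")   # ~20.9 centistokes
```

That is why the “real” product curve in figure 1 hugs the nominal so tightly.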
Hint: if the customer is going to tighten the specification, your actual product is performing better than the measurements would indicate, so look at the measurement process before thinking you need to improve the manufacturing process. With the low actual product variability and the high measurement error still resulting in minimal capability, this gauge might be acceptable for production.
Side note: Acceptability is different from capability. Capability is gauge error compared to specification width, as measured by %R&R, and is a calculation. Acceptability is whether it is reasonable to use that gauge in production, and is a business decision. If my product variability were higher (i.e., I was in control and making more product out of specification), or if the process were out of statistical control, I would run the chance of misclassifying product as conforming or nonconforming. I describe this in the MSA articles listed below. In such a case, this gauge would definitely be too risky. Right now, based on our historical data, the real product value doesn’t often get anywhere near the specifications, but—“Danger, Will Robinson!”—if you do have an excursion, you may not detect it right away, which might be bad. Perhaps very, very bad. However, if it goes on long enough, you will eventually detect it on your control chart. You must balance risk and benefits when determining acceptability.
If this process is a high priority for improvement or at risk for excursions, we would probably first investigate the measurement system for opportunities to decrease measurement error. If that failed, we might consider purchasing a new one. At no point would we consider moving in the internal rejection limits. That is just crazy talk.
If, on the other hand, the process were highly capable, say Cp = Cpk = Cpm = 2, both the process and the actual product variability would be low compared to the specification. With the same as-measured standard deviation, we would be there if our tolerance were ±350, so it would look something like the graph in figure 2.
Figure 2: Product viscosity in capable process
If I go out of control, I had better be investigating it, regardless of whether the point is inside or outside the manufacturing limits. I still don’t need tightened manufacturing limits, although here they are not as damaging as in the earlier example.
If the process is highly variable compared to the specification, say Cp = Cpk = Cpm = 0.5, then I have real problems because I have promised to do something (i.e., make specification) that I clearly am incapable of doing. In such a case, something like the graph in figure 3 is going on.
Figure 3: Viscosity is highly variable compared to specification.
Now this might be the one place where the tightened specifications make sense mathematically. However, I would not enjoy watching you tell your boss that not only is your process totally incapable of making the specification, but you also have to scrap 59.97 percent of what you make, as opposed to the 13.36 percent that, as measured, falls outside the customer’s specification.
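For the skeptical, here is a quick check of those two percentages, again assuming normality. Cp = Cpk = Cpm = 0.5 against the ±175 tolerance implies an as-measured standard deviation of about 116.7.

```python
from math import erf, sqrt

def frac_outside(mu, sigma, lo, hi):
    """Fraction of a normal(mu, sigma) distribution falling outside [lo, hi]."""
    cdf = lambda x: 0.5 * (1.0 + erf((x - mu) / (sigma * sqrt(2.0))))
    return 1.0 - (cdf(hi) - cdf(lo))

mu = 2500.0
sigma = (2 * 175.0) / (6 * 0.5)   # as-measured sigma implied by Cp = 0.5 and +/-175 tolerance
print(f"outside customer spec: {frac_outside(mu, sigma, 2325.0, 2675.0):.2%}")          # ~13.36%
print(f"scrap at tightened limits: {frac_outside(mu, sigma, 2438.777, 2561.223):.2%}")  # ~59.97%
```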
Fortunately, I don’t often see processes with such low capability anymore. And really, do you need tightened specifications to tell you that you had better get off your duff and work on reducing the process and measurement variability for this process? On your way to do that, don’t forget to go punish sales for taking this business. (Make them do the math above; they hate that.)
My point is that you cannot efficiently decide if a part is in or out of specification without knowing more about the process itself than just the measurement error. Following the recommendations from Wheeler’s article produces results ranging from meh to disastrous.
Rather than using watershed specifications and coming in by a certain number of probable errors (which to me is just one step up from Acceptance Sampling), I would recommend doing a proper measurement system analysis followed by a capability analysis to answer the question, “Is this part in spec?”
So don’t forget: Test for and attain control in your measurement process first, assess the capability of the measurement system to measure within the specification, then determine if the gauge is acceptable for use based on factors such as the total variability, cost to make multiple measures, cost of misclassification, and cost to replace the gauge. Move on and do a traditional capability study so that you can build a smart reaction plan. Based on what you find, assess your need for reducing variability of the gauge and/or the process, and use the tools in the quality sciences to do so. That sounds pretty straightforward, and I have given you the tools for much of that in the MSA articles listed at the end of this column.
Wheeler’s column, I fear, has not provided a framework to answer the question, “Is this part in spec?” It could mislead someone into making a bad decision by tightening the manufacturing limits in an effort to answer Wheeler’s final question, “How can we be sure we are shipping conforming product?” Following the process his column seems to promote could encourage bad behavior, and it would be hard to “gauge” the costs of doing so.
The approach I use to perform an MSA is detailed in this sequence of articles:
Letting You In On a Little Secret
The Mystery Measurement Theatre
Performing a Short-Term MSA Study