Measurement System Analysis
[Cause-and-effect diagram: sources of Observed Process Variation – Equipment (hardware, software, setup, calibration frequency, calibration technique), Procedures (sample preparation, operator procedure, data entry, calculations), Performance, Power Source, and Environment (cleanliness, humidity & temperature, vibration, lighting)]
Figure 7.3 Accuracy vs Precision – The Center of the Target is the Objective
% Contribution = [(σ²repeatability + σ²reproducibility) / σ²total] × 100    Eqn 7.3
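Eqn 7.3 can be checked with simple arithmetic. A minimal sketch in Python, using made-up variance components (the numbers are assumptions for illustration, not the case-study values):

```python
# Hypothetical variance components, e.g. as estimated by a Gage R&R ANOVA.
var_repeatability = 0.04    # sigma^2 repeatability (equipment variation)
var_reproducibility = 0.01  # sigma^2 reproducibility (operator variation)
var_part_to_part = 0.95     # sigma^2 part-to-part (true process variation)

var_total = var_repeatability + var_reproducibility + var_part_to_part

# Eqn 7.3: % Contribution of the measurement system to total variation
pct_contribution = (var_repeatability + var_reproducibility) / var_total * 100
print(f"% Contribution = {pct_contribution:.1f}%")
```

Because variances are additive, the measurement system here contributes 5% of the total observed variation.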
1 National Institute of Standards and Technology
• Code the samples such that the coding gives no indication of the expected
measurement value – this is called blind sample coding.
• Have each sample measured by an outside laboratory.
• These measurements will serve as your reference values.
• Ask each Operator to measure each sample three times in random sequence.
• Ensure that the Operators do not “compare notes”.
• We will utilize Minitab to analyze the measurement system described in Case Study III.
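The study design above can be sketched in a few lines of code. The following Python example (sample counts, operator counts, and the seed are illustrative assumptions) builds a blind-coded, randomized run list of the kind Minitab generates:

```python
import random
import string

# Illustrative sketch: 3 operators, 5 samples, 3 replicates each.
random.seed(42)  # fixed seed so the run list is reproducible

n_samples, n_operators, n_replicates = 5, 3, 3

# Blind codes: random letters that reveal nothing about the expected value
codes = random.sample(string.ascii_uppercase, n_samples)
blind_code = {part: code for part, code in enumerate(codes, start=1)}

run_list = []
for operator in range(1, n_operators + 1):
    runs = [blind_code[p] for p in range(1, n_samples + 1)] * n_replicates
    random.shuffle(runs)  # each operator measures in random sequence
    run_list.append((operator, runs))

for operator, runs in run_list:
    print(f"Operator {operator}: {runs}")
```

Each operator sees every coded sample three times, in an order that differs from the other operators, which is exactly what prevents "comparing notes".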
1 As reported by the Hong Kong R&D Center
1/28/2017 Ronald Morgan Shewchuk 18
Operational Excellence
Figure 7.8 Measurement System Analysis Steps – Variable Data
Open a new worksheet. Click on Stat → Quality Tools → Gage Study → Create Gage R&R Study Worksheet on the top menu.
Enter the Number of Operators, the Number of Replicates and the Number of Parts in the dialogue box. Click OK.
Name the adjoining column Silica Conc and transcribe the random sample measurement data to the relevant cells in the worksheet.
Click on Stat → Quality Tools → Gage Study → Gage R&R Study (Crossed) on the top menu.
Select C2 Parts for Part numbers, C3 Operators for Operators and C4 Silica Conc for Measurement data in the
dialogue box. Click the radio toggle button for ANOVA under Method of Analysis. Click Options.
Six (6) standard deviations will account for 99.73% of the Measurement System variation. Enter Lower Spec Limit
and Upper Spec Limit in the dialogue box. Click OK. Click OK.
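The ANOVA method selected in the dialogue box can be sketched outside Minitab. The following Python example (synthetic data; the part, operator, and noise parameters are assumptions, not the case-study values) estimates the crossed Gage R&R variance components from a parts × operators × replicates array:

```python
import numpy as np

# data[part, operator, replicate] -- synthetic measurements
rng = np.random.default_rng(0)
p, o, r = 5, 3, 3
true_part = rng.normal(15, 1.0, size=(p, 1, 1))   # part-to-part variation
oper_bias = rng.normal(0, 0.05, size=(1, o, 1))   # operator offsets
data = true_part + oper_bias + rng.normal(0, 0.1, size=(p, o, r))

grand = data.mean()
part_means = data.mean(axis=(1, 2))
oper_means = data.mean(axis=(0, 2))
cell_means = data.mean(axis=2)

# Sums of squares for the two-way crossed model with interaction
ss_part = o * r * ((part_means - grand) ** 2).sum()
ss_oper = p * r * ((oper_means - grand) ** 2).sum()
ss_cell = r * ((cell_means - grand) ** 2).sum()
ss_inter = ss_cell - ss_part - ss_oper
ss_error = ((data - cell_means[:, :, None]) ** 2).sum()

ms_part = ss_part / (p - 1)
ms_oper = ss_oper / (o - 1)
ms_inter = ss_inter / ((p - 1) * (o - 1))
ms_error = ss_error / (p * o * (r - 1))

# Expected-mean-square equations -> variance components (clipped at zero)
var_repeat = ms_error
var_inter = max((ms_inter - ms_error) / r, 0.0)
var_oper = max((ms_oper - ms_inter) / (p * r), 0.0)
var_part = max((ms_part - ms_inter) / (o * r), 0.0)
var_reprod = var_oper + var_inter
var_total = var_repeat + var_reprod + var_part

print(f"Repeatability    : {var_repeat:.4f}")
print(f"Reproducibility  : {var_reprod:.4f}")
print(f"Part-to-Part     : {var_part:.4f}")
print(f"%GRR contribution: {100 * (var_repeat + var_reprod) / var_total:.1f}%")
```

This is the same decomposition Minitab reports in the session window; Minitab additionally drops the interaction term when its p-value is large.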
[Gage R&R (ANOVA) Report for Silica Conc – graphical output: Components of Variation (Gage R&R, Repeat, Reprod, Part-to-Part); R Chart by Operators (UCL = 0.6693, R̄ = 0.26, LCL = 0); Xbar Chart by Operators (UCL = 15.593, X̿ = 15.327, LCL = 15.061); Silica Conc by Parts; Silica Conc by Operators; Parts * Operators Interaction]
A new graph is created in the Minitab project file with the Gage R&R analysis results.
Return to the session window by clicking on Window → Session on the top menu to view the ANOVA analytical results.
• Let us more closely examine the graphical output of the Gage R&R (ANOVA) Report
for Silica Conc.
• Figure 7.9 shows the components of variation.
• A good measurement system will have the lion’s share of variation coming from
the product, not the measurement system.
• Consequently, we would like the bars for repeatability and reproducibility to be
small relative to part-to-part variation.
• By contrast, the X-bar SPC chart of Figure 7.11 should be out of control.
• This seems counterintuitive, but it is a healthy indication that the variability present
is due to part-to-part differences rather than Operator-to-Operator differences.
• Let us now focus on the analytical output of the session window as captured in
Figure 7.8.
• Lovers of Gage R&Rs will typically look for four metrics as defined below and
expect these metrics to be within the acceptable or excellent ranges specified by
Gage R&R Metric Rules of Thumb as shown in Figure 7.15.
% Contribution = (σ²measurement / σ²total) × 100    Eqn 7.4

% Study Variation = (σmeasurement / σtotal) × 100    Eqn 7.5

Two-Sided Spec % P/T = [6σmeasurement / (USL − LSL)] × 100    Eqn 7.6

One-Sided Spec % P/T = (3σmeasurement / TOL) × 100    Eqn 7.7
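The four metrics of Eqns 7.4–7.6 follow directly from the variance components. A short Python sketch, using assumed variance components and spec limits (not the case-study values):

```python
import math

# Hypothetical Gage R&R results (assumed numbers for illustration)
var_measurement = 0.0281  # sigma^2 measurement = repeatability + reproducibility
var_total = 0.9617        # sigma^2 total
usl, lsl = 18.0, 12.0     # assumed spec limits

sigma_meas = math.sqrt(var_measurement)
sigma_total = math.sqrt(var_total)

pct_contribution = var_measurement / var_total * 100   # Eqn 7.4
pct_study_var = sigma_meas / sigma_total * 100         # Eqn 7.5
pct_pt_two_sided = 6 * sigma_meas / (usl - lsl) * 100  # Eqn 7.6

print(f"% Contribution    = {pct_contribution:.2f}%")
print(f"% Study Variation = {pct_study_var:.2f}%")
print(f"% P/T (two-sided) = {pct_pt_two_sided:.2f}%")
```

Note that % Contribution works on variances while % Study Variation works on standard deviations, so % Study Variation is always the larger number: squaring it and dividing by 100 recovers % Contribution.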
Return to the active worksheet by clicking on Window → Worksheet 1 *** on the top menu. Name the adjoining column Reference Conc
and enter the reference sample concentration values corresponding to each sample (Part) number.
Figure 7.16 Gage Linearity and Bias Study Steps – Variable Data
Click on Stat → Quality Tools → Gage Study → Gage Linearity and Bias Study on the top menu.
Select C2 Parts for Part numbers, C5 Reference Conc for Reference values and C4 Silica Conc for Measurement data in the dialogue box.
Click OK.
Gage Linearity and Bias Report for Silica Conc

Gage Linearity
Predictor   Coef      SE Coef   P
Constant    0.5443    0.1826    0.005
Slope       0.00835   0.01234   0.502
S = 0.167550    R-Sq = 1.1%

Gage Bias
Reference   Bias       P
Average     0.666667   0.000
12          0.711111   0.000
13.3        0.600000   0.000
14          0.622222   0.000
16.7        0.677778   0.000
17.3        0.722222   0.000

[Plot: Bias vs. Reference Value (12–17), showing individual biases, average bias, and the regression line with 95% CI]
A new graph is created in the Minitab project file with the Gage Linearity and Bias Study results.
• We can see there is a bias between the Hong Kong measurement system and
Minnesota Polymer’s measurement system.
• The bias is relatively constant over the silica concentration range of interest as
indicated by the regression line.
• The Minnesota Polymer measurement system is reading approximately 0.67 wt %
Silica higher than Hong Kong.
• This is not saying that the Hong Kong instrument is right and the Minnesota
Polymer instrument is wrong.
• It is merely saying that there is a difference between the two instruments which
must be investigated.
• This difference could have process capability implications if it is validated.
• Minnesota Polymer may be operating in the top half of the allowable spec range.
• The logical next step is for the Hong Kong R&D center to conduct an MSA of similar
design, ideally with the same sample set utilized by Minnesota Polymer.
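The bias and linearity numbers in the report are straightforward to reproduce. A Python sketch of the arithmetic, using made-up measurement pairs (reference value from the outside lab vs. mean on-site measurement; the numbers are illustrative, not the case data):

```python
import statistics

# (reference value, mean measured value) pairs -- illustrative only
reference = [12.0, 13.3, 14.0, 16.7, 17.3]
measured = [12.71, 13.90, 14.62, 17.38, 18.02]

bias = [m - ref for m, ref in zip(measured, reference)]
avg_bias = statistics.mean(bias)

# Linearity: regress bias on reference value; a slope near zero means
# the bias is constant across the concentration range of interest.
mean_x = statistics.mean(reference)
mean_y = avg_bias
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(reference, bias)) / \
        sum((x - mean_x) ** 2 for x in reference)

print(f"Average bias: {avg_bias:.3f}")
print(f"Bias-vs-reference slope: {slope:.4f}")
```

A large, consistent average bias with a near-zero slope is exactly the pattern described above: the instruments disagree by a fixed offset, not by a concentration-dependent amount.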
1 Intended outcome of script from Customer Satisfaction Team
Open a new worksheet. Click on Stat → Quality Tools → Create Attribute Agreement Analysis Worksheet on the top menu.
Enter the Number of samples, the Number of appraisers and the Number of replicates in the dialogue box. Click OK.
The worksheet is modified to include a randomized run order of the scripts (samples).
Name the adjoining columns Response and Reference. Transcribe the satisfaction level rating and the reference value of the script to
the appropriate cells.
Figure 7.17 Measurement System Analysis Steps – Attribute Data
Click on Stat → Quality Tools → Attribute Agreement Analysis on the top menu.
Select C4 Response for Attribute column, C2 Samples for Samples and C3 Appraisers for Appraisers in the dialogue box. Select C5
Reference for Known standard/attribute. Click OK.
[Assessment Agreement graphs – percent agreement (y-axis 60–90%) for Appraisers 1–3, within appraisers and appraiser vs. standard]
A new graph is created in the Minitab project file with the Attribute Assessment Agreement results.
Display the analytical MSA Attribute Agreement Results by clicking on Window → Session on the top menu.
• The attribute MSA results allow us to determine the percentage overall agreement,
the percentage agreement within appraisers (repeatability), the percentage
agreement between appraisers (reproducibility), the percentage agreement with
reference values (accuracy) and the Kappa Value (index used to determine how
much better the measurement system is than random chance).
• From the graphical results we can see that the Customer Service Agents were in
agreement with each other 90% of the time and were in agreement with the
expected (standard) result 90% of the time.
• From the analytical results we can see that the agreement between appraisers was
80% and the overall agreement vs the standard values was 80%.
• The Kappa Value for all appraisers vs the standard values was 0.90, indicative of
excellent agreement between the appraised values and reference values.
• Figure 7.18 provides benchmark interpretations for Kappa Values.
• Another way of looking at this case is that out of sixty expected outcomes there
were only three miscalls on rating customer satisfaction by the Customer Service
Agents included in this study.
• Mr. Lee can have confidence in the feedback of the Virtual Cable customer
satisfaction measurement system and proceed to identify and remedy the
underlying root causes of customer dissatisfaction.
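The Kappa calculation itself is simple enough to sketch by hand. The following Python example computes Cohen's kappa for one appraiser against the known standard, using made-up Satisfied/Dissatisfied calls (the ratings are illustrative, not the case data):

```python
from collections import Counter

# "S" = Satisfied, "D" = Dissatisfied -- made-up ratings for illustration
standard = ["S", "S", "D", "S", "D", "S", "D", "S", "S", "D"]
appraiser = ["S", "S", "D", "S", "D", "S", "S", "S", "S", "D"]

n = len(standard)
p_observed = sum(a == s for a, s in zip(appraiser, standard)) / n

# Expected agreement by chance, from each rater's marginal frequencies
std_freq = Counter(standard)
app_freq = Counter(appraiser)
p_chance = sum(std_freq[c] / n * app_freq[c] / n for c in std_freq)

kappa = (p_observed - p_chance) / (1 - p_chance)
print(f"Observed agreement: {p_observed:.2f}")
print(f"Kappa: {kappa:.3f}")
```

Kappa discounts the agreement that two raters would reach purely by chance, which is why a 90% raw agreement can map to a kappa noticeably below 0.90.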
[I-MR chart of the daily measurements: Individuals chart (X̄ = 14.67, LCL = 14.239); Moving Range chart (UCL = 0.5295, M̄R = 0.1621, LCL = 0); Days 1–28]
Open a new worksheet. Copy and paste the measurement data from the two instruments into the worksheet.
Select the reference instrument XRF-EDS1 for the X variables and XRF-EDS2 for the Y variables. Click OK.
[Scatterplot of XRF-EDS2 vs. XRF-EDS1 with least squares regression line; both axes span 13–17]
Hover your cursor over the least squares regression line. The R-sq = 98.1%. Correlation is good.
Return to the worksheet. Click on Stat → Regression → Orthogonal Regression on the top menu.
Select XRF-EDS2 for the Response (Y) and the reference instrument XRF-EDS1 for the Predictor (X) variables. Click Options.
Select 95 for the Confidence level. Click OK → then click OK one more time.
[Fitted line plot of XRF-EDS2 vs. XRF-EDS1 from the orthogonal regression; both axes span 13–17]
Click on Window → Session on the top menu. The session window indicates that the 95% Confidence Interval of the slope includes 1.0.
The two instruments are linear in accuracy.
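Minitab's orthogonal regression weights the errors in X and Y by a specified error-variance ratio. A minimal Python sketch of the ratio = 1 special case (total least squares), on synthetic stand-ins for the XRF-EDS1/XRF-EDS2 readings:

```python
import numpy as np

# Synthetic instrument readings (assumed, not the book's data)
rng = np.random.default_rng(1)
x = rng.uniform(13, 17, 30)        # reference instrument XRF-EDS1
y = x + rng.normal(0, 0.15, 30)    # second instrument XRF-EDS2

# Center the data, then take the first principal direction: its slope
# minimizes perpendicular (not vertical) distances to the fitted line.
pts = np.column_stack([x - x.mean(), y - y.mean()])
_, _, vt = np.linalg.svd(pts, full_matrices=False)
slope = vt[0, 1] / vt[0, 0]
intercept = y.mean() - slope * x.mean()

print(f"Orthogonal slope: {slope:.3f}, intercept: {intercept:.3f}")
```

For matched instruments the orthogonal slope should be statistically indistinguishable from 1.0, which is the criterion applied in the session window above.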
Return to the worksheet. Click on Stat → Basic Statistics → Paired t on the top menu.
Select XRF-EDS1 for Sample 1 and XRF-EDS2 for Sample 2 in the dialogue box. Click Options.
Select 95.0 for Confidence level. Select 0.0 for Hypothesized difference. Select Difference ≠ hypothesized difference for Alternative
hypothesis in the dialogue box. Click OK. Then click OK one more time.
Figure 7.20 Metrology Correlation and Matching Steps
The session window indicates that the 95% confidence interval for the mean difference includes zero. The P-Value for the paired t-Test
is above the significance level of 0.05. Therefore we may not reject the null hypothesis. There is no significant bias between the two
instruments.
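The paired-t arithmetic can be verified by hand. A Python sketch on synthetic paired readings (the numbers, and the hard-coded t critical value of 2.262 for 9 degrees of freedom, are illustrative assumptions):

```python
import math
import statistics

# Synthetic paired readings from the two instruments (not the book's data)
eds1 = [14.2, 15.1, 16.3, 13.8, 15.6, 14.9, 16.0, 15.3, 14.5, 15.8]
eds2 = [14.3, 15.0, 16.4, 13.7, 15.7, 15.0, 15.9, 15.4, 14.4, 15.9]

diffs = [b - a for a, b in zip(eds1, eds2)]
n = len(diffs)
mean_d = statistics.mean(diffs)
sd_d = statistics.stdev(diffs)
t_stat = mean_d / (sd_d / math.sqrt(n))

# 95% CI for the mean difference (two-sided t critical value, 9 df)
t_crit = 2.262
half_width = t_crit * sd_d / math.sqrt(n)
ci = (mean_d - half_width, mean_d + half_width)

print(f"Mean difference: {mean_d:.3f}, t = {t_stat:.2f}")
print(f"95% CI: ({ci[0]:.3f}, {ci[1]:.3f})")
print("No significant bias" if ci[0] <= 0 <= ci[1] else "Significant bias")
```

As in the session-window result, a confidence interval that straddles zero means the null hypothesis of no bias cannot be rejected.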