
UC-406
SAND94-8535
Unlimited Release
Printed March 1994

The Role of Variation, Error, and Complexity in Manufacturing Defects*

C. Martin Hinckley
Project and Administration Department
Sandia National Laboratories / California

Professor Philip Barkan
Stanford University
Stanford, CA 94305-4021

Abstract

Variation in component properties and dimensions is a widely recognized factor in product defects which can be quantified and controlled by Statistical Process Control methodologies. Our studies have shown, however, that traditional statistical methods are ineffective in characterizing and controlling defects caused by error. The distinction between error and variation becomes increasingly important as the target defect rates approach extremely low values. Motorola data substantiates our thesis that defect rates in the range of several parts per million can only be achieved when traditional methods for controlling variation are combined with methods that specifically focus on eliminating defects due to error.

Complexity in the product design, manufacturing processes, or assembly increases the likelihood of defects due to both variation and error. Thus complexity is also a root cause
of defects. Until now, the absence of a sound correlation between defects and complexity
has obscured the importance of this relationship.

We have shown that assembly complexity can be quantified using Design for Assembly
(DFA) analysis. High levels of correlation have been found between our complexity
measures and defect data covering tens of millions of assembly operations in two widely
different industries. The availability of an easily determined measure of complexity,
combined with these correlations, permits rapid estimation of the relative defect rates for
alternate design concepts. This should prove to be a powerful tool since it can guide
design improvement at an early stage when concepts are most readily modified.

* Based on a journal article submitted to the Journal of Quality Technology

Table of Contents

I. Introduction
II. Controlling Variation
    Statistical Quality Control
    Taguchi's Robust Design
    Motorola's 6 Sigma and the Process Capability Index
    Process Capability - An Insufficient Measure for Predicting Defects
    Limitations of the Variation Paradigm
III. Error - A Distinctly Different Defect Source than Variation
    Error Probabilities Are Not Predicted by Variation Models
    Probability - The Only Universal Method for Describing Error and Variation
    Error - A Critical Factor in Defect Creation
    Poka-Yoke and 100 Percent Inspection
    Evidence Supporting the Distinction Between Variation and Error
    Self and Source Inspection
IV. Complexity - A Third Source of Defects
    Quantifying Complexity
    Assessing Assembly Complexity
    Assembly Complexity - A Key Factor in Conformance Quality
V. A New Model of Conformance Quality
    Testing the Relationship Between Defects and Assembly Complexity
    Assessing the Level of Quality Control for Each Organization
    Comparison of Product Concepts - An Illustrated Example
VI. Conclusions

List of Illustrations

1. Population and sample from Ishikawa (1990)
2. The probability distribution functions for a hole and cylinder diameter
3. Part proposed by Olivera (1988) for the study of variation
4. The relationship between complexity, error, and variation defects
5. Defect rate versus manual assembly efficiency from Brannan (1991)
6. Defects per unit versus assembly complexity for two manufacturers
7. Alternate assembly concepts for a box
8. Predicted assembly time versus number of assembly operations

List of Tables

1. Estimated defects per unit for four product concepts



I. Introduction

Buzzell and Gale (1987) proclaimed that "Quality is King," affirming its dominant role in market share and Return on Investment. The performance of Japanese products in the marketplace reinforces this conclusion (Womack et al. (1990)). Our focus is motivated
by tremendous pressure to improve conformance quality, measured by the manufacturing
defect rate, to previously unimaginable limits.

Illustrating the required level of performance, Toyota asserted in 1990 that their North American suppliers had defect rates that were two orders of magnitude higher than their Japanese suppliers (Sanger (1990)). Similarly, the goal of Motorola's 6 sigma (Harry et al. (1988)) program has been to reduce defects by roughly 3 orders of magnitude. In both cases the new target for defect levels is in the range of 1 to 10 parts per million (ppm). By contrast, defects in the range of 2,000 to 20,000 ppm are perceived as normal using traditional Statistical Quality Control (SQC) (Ishikawa (1990)).

The consequences of this trend are profound, pointing to the need for a new rigor in understanding defects as an essential element for improvement. It is not enough to confront defect issues on the factory floor. A means of addressing potential defect sources early in the design process is essential, requiring a comprehensive approach.

Several concepts and methods have evolved which have the purpose of reducing defects.
The focus of these methods often leads to strategies for defect reduction that are limited,
particularly when the goal is to achieve defect rates below 10 ppm. We will first critically
review the sources of product defects and the traditional methodologies that have been
developed to address the defect sources.

II. Controlling Variation

Statistical Quality Control


Statistical Quality Control (SQC) was one of the earliest quality improvement methods to
be developed. It has become the backbone of several other techniques. It is based on the
principle that variation can be observed in all processes and that such variation can be
described statistically. Building upon this concept, sampling has been used to characterize
" process dispersion and control processes. Figure 1, based on the work of Ishikawa
(1990), illustrates a sampling, measurement, and feedback cycle which is the basis for this
m
method.
[Figure 1: a process-control feedback loop linking the process (aim), lot, sample, and data, with corrective action fed back to the process.]

Figure 1. Population and sample from Ishikawa (1990).

This traditional Quality Control method has two major shortcomings:

1. Defects or deviations are detected after they are created, as Figure 1 shows.
The number of defects produced is very sensitive to the response time of the
feedback cycle, which can involve many delays when the point of detection is
downstream from the defect source.
2. Traditional Quality Control focuses exclusively on the production process,
accepting the design as given. Since many defects are attributable to design, a
focus on eliminating defects in the production process misses many important
opportunities for defect reduction.
In one sense, the two major shortcomings of traditional quality control are related, since both center on downstream process control. This represents a serious limitation, since defect discovery generally comes too late for maximum effectiveness.

Taguchi's Robust Design

Taguchi's robust design (Ross (1988)) represents an important advance in this perspective. This method addresses defect problems during the design/development phase, and it considers defect issues relative to the product design as well as the production process. Taguchi robust design seeks defect reduction (and improved reliability) by reducing sensitivity to variation. Because it is largely an experimental procedure that comes late in the design process, however, its power in the earliest phases of design is limited.

Motorola's 6 Sigma and the Process Capability Index


Motorola's 6 sigma (Harry et al. (1988)) seeks to control defects caused by variation by assuring that design requirements have been correctly established in the design phase and that the production process capability matches these requirements. Like the Taguchi method, it can influence the design as well as the production process. An important measure used by Motorola's 6 sigma method to identify appropriate production processes is the Process Capability Index, Cp (Harry et al. (1988)). The Process Capability Index provides a measure of how well variation is controlled relative to tolerance limits. For bilateral tolerances, this index (which has also been called the "standard capability ratio" (Gryna (1988))) is defined as:

Cp = (USL - LSL) / (6σ) = (tolerance width) / (process capability)   (1)

where: USL = Upper Specification Limit
       LSL = Lower Specification Limit
       σ = the standard deviation of the production process

A related index, Cpk (Harry et al. (1988)), addresses "shifts and drifts" in the process mean. Motorola has based the 6 sigma method upon the likelihood of a mean shift that is equal to 1.5 standard deviations.

Process Capability - An Insufficient Measure for Predicting Defects

The Cp and Cpk indices can be used to predict how frequently the outcome of a process will exceed specified limits. However, the probability of a defect is significantly different from the probability of exceeding specification limits. By way of illustration, the distributions for two manufacturing processes, a cylinder turning operation and a hole forming operation, are shown in Figure 2. For the normal distributions given in Figure 2, the probability that a cylinder will exceed the Upper Specification Limit is 0.0013. Similarly, the probability that a hole has a diameter smaller than its Lower Specification Limit is 0.0013. However, the probability of encountering interference in a random assembly of the cylinder and hole from Figure 2 must be determined by examining the joint probability of having a large cylinder and a small hole that interfere. The probability of assembly interference in this case is only 2.13e-5.
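The joint-probability calculation behind this example can be sketched directly from the Figure 2 parameters; the sketch below assumes independent, normally distributed diameters and specification limits at 0.750 inches, consistent with the caption of Figure 2.

    from math import sqrt
    from scipy.stats import norm

    cyl_mean, cyl_sd = 0.749, 0.00033        # cylinder turning process (inches)
    hole_mean, hole_sd = 0.75175, 0.000583   # hole forming process (inches)
    limit = 0.750                            # assumed USL (cylinder) = LSL (hole)

    # Each process alone exceeds its limit with probability ~0.0013.
    print(norm.sf(limit, cyl_mean, cyl_sd))
    print(norm.cdf(limit, hole_mean, hole_sd))

    # Interference occurs when (hole - cylinder) < 0; for independent normals
    # the clearance is normal with the difference of the means and a
    # root-sum-square standard deviation.
    clearance_mean = hole_mean - cyl_mean
    clearance_sd = sqrt(cyl_sd**2 + hole_sd**2)
    print(norm.cdf(0.0, clearance_mean, clearance_sd))  # ~2e-5, the order quoted above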

[Figure 2: overlapping probability density curves for the cylinder and hole diameters on a logarithmic vertical scale; horizontal axis: Diameter (in), 0.746 to 0.754.]

Figure 2. The probability distribution functions (PDF) for a cylinder turning and hole forming operation based on an ANSI standard class LC4 clearance locational fit (Oberg et al. (1984)). Note the logarithmic vertical scale. The vertical lines indicate the assumed Lower Specification Limits (LSL) and Upper Specification Limits (USL). The cylinder has a mean diameter of 0.749 inches with a standard deviation of 0.00033 inches. The hole drilling distribution is based on a mean of 0.75175 inches with a standard deviation of 0.000583 inches.

This type of analysis reveals that there is a high probability with random assembly that an oversized cylinder will not cause interference during assembly. As this case illustrates, the probability of interference or a defect caused by variation is generally lower than the probability of exceeding the specification limits (assuming the limits have been set correctly). However, defect rates based on joint probability distributions are extremely sensitive to the accurate characterization of the tails of the distribution.

Accurate assessment of joint probabilities and the correct assignment of specification limits become increasingly difficult as the number of variables in a study grows.
Frequently, simplifying assumptions are used to make analysis of variation practical. For
example, a Root Sum of Squares (RSS) (Harry et al (1988)) method is commonly used in
conjunction with one dimensional analysis (Research (1988)) to define tolerance
specifications. However, more sophisticated three dimensional treatments of tolerance
variations that depend on Monte Carlo simulations have shown that the RSS method is
inaccurate for complex problems (Heggem (1992), Held (1993)). We have shown that
interference probabilities can be underestimated by orders of magnitude when complex
interfaces are evaluated as one dimensional rather than three dimensional problems
(Hinckley (1993)). As a consequence, the Process Capability Index is generally an
inaccurate predictor of defect rates even though it is a useful tool in controlling variation.

Based on the drilling variation illustrated in Figure 2, we note that the predicted probability of inadvertently forgetting to drill a hole (diameter = 0.0") is virtually zero, since this outcome differs from the mean diameter by 1280 standard deviations.

Limitations of the Variation Paradigm


As typically applied, each of the methods that focus on variation is based on three fundamental assumptions:

1) Sampling can be used to characterize all significant sources of variation,
2) All defects are a result of variation, and
3) Variation follows a Normal or Gaussian distribution.
Based on these assumptions, in practice sample sizes are generally small and "odd-ball"
readings are discarded or averaged in a manner that dilutes their significance. Such
practices, while convenient, effectively preclude an accurate assessment of the "tails" of
the distribution. Since defects arise from the tails of the distributions, the assumption of a
normal distribution can seriously underestimate the incidence of defects. Our work has
challenged each of the underlying assumptions that have led to current practices.
Understanding the nature of rare events, particularly those beyond three standard
deviations from the mean, and the limitations of statistical methods is particularly
important when the goal is to achieve extremely low defect rates below 10 ppm.

III. Error - A Distinctly Different Defect Source than Variation
Although variation is a useful way of describing the cause of many defects, it does not describe the cause of all defects. To illustrate, in the assembly of the simple box shown in Figure 3, an operator may occasionally forget to install a screw. In the assembly, each screw is either missing or present, a condition that can only be described in terms of probability rather than variation. Omitting a screw is an error rather than a variation. We define error as the execution of a prohibited action, the failure to perform a required action, or the misinterpretation of information essential for the correct execution of an action.

Figure 3. Part proposed by Olivera (1988) for the study of defects resulting from variation. Omitting a screw, an error, illustrates a type of defect.

In the fabrication and assembly of the box illustrated in Figure 3, there are several possible types of omission errors that can be identified. For example, 1) an operator may install a screw, but forget to tighten it, 2) occasionally an operator may forget to drill one of the clearance holes in the lid, or 3) the operator may forget to tap one of the holes that accepts a screw. Collectively, with the error of omitting a screw, four types of omission errors have been described for each threaded connector in this assembly.

In a clear departure from the predictions based on variation, Rook (1962) found, in a
study of over 23,000 production defects, that the probability of omitting an operation and
not detecting the omission using traditional production practice was approximately
0.00003. In other words, roughly one operation in 33,000 will be omitted without
detection, resulting in a defect rate of 30 ppm. Given two threaded fasteners and four
types of omission errors per fastening feature, the cumulative defect rate resulting from
omission errors for the simple box illustrated in Figure 3 would be approximately 240 ppm (60 ppm per part). This exceeds the defect rate goal of 1-10 ppm by roughly an order of magnitude.
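The tally behind the 240 ppm figure is simple enough to write out; the sketch below restates it, with Rook's undetected-omission probability as the only empirical input.

    p_omission = 30e-6   # undetected omission rate per operation (Rook (1962))
    fasteners = 2        # two screws attach the lid in Figure 3
    error_types = 4      # omit screw, omit tightening, omit drilling, omit tapping

    defect_rate_ppm = fasteners * error_types * p_omission * 1e6
    print(defect_rate_ppm)  # 240 ppm for the box, ~60 ppm per part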

Error Probabilities Are Not Predicted by Variation Models
The distributions describing process variation are useless in predicting the frequency of
errors. For example, variation in screw torque can only be measured for screws that are
installed and tightened. Thus, the probability that the screw is omitted or that an operator
forgets to tighten the screw is not predicted by the distribution of torque. Similarly, the
distribution of clearance hole diameters cannot be used to estimate the frequency of
omitting a drilling operation since the observed distribution can only be based on the
measurement of holes that have been drilled. Thus, the outcome of errors exceeds the
frequency and magnitude predicted by methods based on variation.

Because errors occur rarely in production, traditional sampling and statistical methods are
not useful in estimating their frequency. To illustrate, for a ninety percent confidence of
observing just fifteen screw omission errors while sampling one operation in a hundred,
we would have to perform over a hundred million operations! Thus, we cannot depend
upon sampling or statistical methods to predict how frequently errors are occurring. More
importantly, statistical methods are not effective in eliminating defects caused by errors.
Errors require different methods of control than variation, and consequently must be
treated as a separate source of defects.
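The sample-size argument can be checked with a Poisson model of the observed error count; the error rate and sampling fraction below are our assumptions for illustration, and with these inputs the required count lands in the tens of millions of operations, the same order of magnitude as the figure quoted above.

    from scipy.stats import poisson

    p_error = 30e-6         # assumed undetected omission rate per operation
    sample_fraction = 0.01  # inspect one operation in a hundred
    target = 15             # want to observe at least this many omissions
    confidence = 0.90

    # Smallest operation count with P(observed >= target) >= confidence.
    n_ops = 1_000_000
    while poisson.sf(target - 1, n_ops * sample_fraction * p_error) < confidence:
        n_ops += 1_000_000
    print(n_ops)  # tens of millions of operations with these inputs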

We have found support for this assessment in data drawn from 23 independently observed
studies of human performance which were obtained from research and production
environments (Hinckley (1993)). We have consistently rejected the normal distribution at
the highest levels of significance in these cases where it has been traditionally assumed for
decades, demonstrating that functions derived from Pareto's law provide a clearly superior
model of the data in every case. Since Pareto's law deviates from the conditions of the
Central Limit Theorem, this law provides a theoretical explanation for the observation that
traditional statistical methods consistently underestimated the frequency and magnitude of
rare events for these data, revealing a critical limitation of statistical methods in predicting
or controlling defect rates.
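To illustrate how strongly the tail model drives rare-event estimates, the sketch below compares the probability of an observation beyond six standard units under a normal model with that under a Pareto model; the Pareto shape parameter is our own arbitrary choice for illustration, not a value fitted to the studies cited above.

    from scipy.stats import norm, pareto

    threshold = 6.0
    print(norm.sf(threshold))           # ~1e-9 for a standard normal tail
    print(pareto.sf(threshold, b=1.5))  # ~0.07 for a Pareto tail with shape 1.5
    # The heavy-tailed model assigns the rare event many orders of magnitude
    # more probability, which is the sense in which normal-theory methods
    # underestimate rare events.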

Probability - The Only Universal Method for Describing Error and Variation


We can convert every distribution describing variation to probabilities. For example, we
can extrapolate from a given distribution to predict the probability of exceeding
specification limits, or the probability of interference. However, we cannot convert every
probability to a distribution.

As we have already shown, error can only be effectively described in terms of probability.
Errors either occur or they do not. A part is either present in the product or missing.
Consequently, the only universal method of describing both error and variation is
probability.

Error - A Critical Factor in Defect Creation


Although some writers have alluded to a difference between mistakes or error and variation (Lafferty (1992)), this distinction has not been accurately described in the literature. This is probably due to the fact that it is virtually impossible to accurately assess rare events using sampling methods. As shown by our example, the assumption that variation alone can characterize all defects is flawed and often prevents identification of major defect sources.

The relative control of variation and error differs for each organization. We have developed a technique which aids in assessing the relative strengths and weaknesses in controlling error and variation based on each organization's defect data and quality control philosophy. However, a discussion of this method is beyond the scope of this paper.

Errors of omission represent only one type of error that can occur. The following types of
errors were listed "in order of importance" in a recent book edited by the Nikkan Kogyo
Shimbun, Ltd. (1988):

1. Omitted processing
2. Processing errors
3. Errors setting up workpieces
4. Missing parts
5. Wrong parts
6. Processing the wrong piece
7. Misoperation
8. Adjustment error
9. Equipment not set up properly
10. Tools and jigs improperly prepared

While each of these errors is individually a rare occurrence, collectively they can have a significant impact on conformance quality. Harris et al. (1969) concluded that 80 percent of the defects in complex systems could be attributed to error. In an examination of 23,000 production defects, Rook (1962) found that 82 percent of all defects were caused by human errors. Voegtlen (1988) reported that 60 percent of product failures could be traced to workmanship defects. In our review of data for front end automotive headlamps by a major manufacturer, we also observed that more than 70 percent of 6,600 observed defects were caused by assembly or handling errors (Hinckley (1993)). In addition, NASA (Associated Press (1993)) reported that most space shuttle mishaps occurring since October 1990 are the result of human error. All of these studies point to human error as a key source of defects and failures.

Poka-Yoke and 100 Percent Inspection


Shingo (1986) makes an important distinction between error and product defects. While errors are inevitable, defects are not. He has shown that, in many cases, simple methods can provide early error detection that assures correction before a defect is passed to the next stage in the production process. He stated:

"We should recognize that people are, after all, only human and as such, they will, on rare occasions, inadvertently forget things. It is more effective to incorporate a checklist - i.e., a poka-yoke - into the operation so that if a worker forgets something, the device will signal that fact, thereby preventing defects from occurring. This, I think, is the quickest road leading to attainment of zero defects." (italics added) (Shingo (1986))

This is consistent with observations made by Rasmussen (1985), who concluded that the
frequency of errors derived from incident reports (such as defects) is dependent on the
opportunity for people to detect and correct the errors immediately. No amount of
vigilance or training will assure that unintentional errors will be recognized. A core
concept is that poka-yoke, in combination with 100 percent inspection, can catch virtually
every error.

Using these techniques, a washing machine drainpipe assembly line processing 30,000
units a month involving 23 workers achieved zero defects for six consecutive months
(Shingo (1986)). This level of performance is orders of magnitude better than the lowest
estimates of human error rates per Rook (1965). By using poka-yoke to detect and
correct error, defect probabilities can be less than error probabilities. Consequently,
defects are related more to the level of quality control than to the frequency of errors.

As illustrated, centering attention on error prevention and intervention is more productive than prediction of error rates for production problems. Poka-yoke can be incorporated in
the product to assure that a part cannot be assembled in the wrong position or orientation
(Tsuda (1993)). In some instances it is possible to design the product so that assembly
cannot proceed if a part is missing or incorrectly positioned. In this sense, poka-yoke is an
important design tool. In practice, however, poka-yoke is not a full solution because it is
largely limited to specific accessible process steps and the design is accepted as given.

Evidence Supporting the Distinction Between Variation and Error


The distinction between error and variation is critical in achieving the lowest defect rates
demanded by the marketplace. Using traditional Statistical Quality Control (SQC), defect
rates in the range of 2000 ppm can be achieved. Based on normally distributed variation
and Motorola's 6 sigma philosophy, defect rates can theoretically be reduced to 3.4 ppm through the control of variation. However, we estimated that the 6 sigma concepts would only reduce defect rates by roughly a factor of two, because this method does not address errors. After five years of aggressively applying the 6 sigma philosophy, data
supplied by Motorola (Gebala (1992)) showed that their average defect rate on several
products was more than two orders of magnitude greater than the 6 sigma goal of 3.4
ppm, a result consistent with our estimates.

Some may argue that Motorola has simply not achieved their 6 sigma goal. However, evidence that this is not the case can be obtained from information recently published by Motorola in September of 1993. Smith (1993) stated:

"Motorola elected to enter this market (electronic ballast) and set a quality goal of 6 sigma for initial delivery. This required a very strict TDU (total number of defects per unit) budget. But it became evident early in the project that achieving a Cp greater than 2 would go only part of the way. Mistake-proofing the design would also be required... Mistake-proofing the design is an essential factor in achieving the TDU goal. The design team is forced to investigate any opportunities for errors during manufacture and assembly, and to eliminate them." (italics added) (Smith (1993))

Motorola's own experience clearly demonstrates that mistake-proofing (the English translation of poka-yoke per Shingo (1986)) must be applied in conjunction with control of variation to achieve the lowest defect rates. With the addition of error control, Motorola was able to achieve defect rates in the tens of parts per million.

Self and Source Inspection


Shingo has also introduced several quality concepts that overcome some of the limitations of other methods. Self inspection and source inspection (Shingo (1986)) have the goal of detecting and eliminating defects at their production source. Self inspection has the objective of detecting defects as close to the point of generation as possible to reduce or eliminate delays in the feedback. By gaging tools, materials, and activities upstream of the process, source inspection makes it possible to eliminate many defects before they are created. However, these techniques, like Statistical Quality Control (SQC), do not address design as an important cause of defects.

IV. Complexity - A Third Source of Defects


There is another source of defects which we refer to as "complexity." Returning to the box example illustrated in Figure 3, if the lid could be attached with one screw instead of two, several opportunities for error would be eliminated and the number of features contributing to variation defects would be reduced. A snap fit lid could further reduce the probability of error and variation defects. Thus, many defects resulting from either error or variation can be avoided through reducing the complexity of the product design.

This view is consistent with numerous studies which have identified task complexity or
difficulty as a factor increasing the probability of errors (Swain et al (1983), Meister
(1961), Park (1987), Card et al (1978), Jerison (1963), Harris (1966), MacKenzie et al
(1991), Gatchell (1979), Stalk et al (1990)). Although this relationship is widely
recognized, Rasmussen (1981) stated that it has not been quantified in general terms. Our
own work has shown that complexity also increases defects resulting from variation as
well as errors (Hinckley (1993)).

The relationship between error, variation and complexity is illustrated in Figure 4. Since
both defects due to error and variation are rare events, we can treat them as independent
" defect sources. Neither error nor variation are, by themselves, a complete description of
the cause of defects. However, complexity influences the likelihood of defects in a
product resulting from either variation or error. Consequently, the link between
complexity and defects leads to a more global or general model of defects than either error or variation considered independently. In the remainder of the paper we will focus on the broader relationship between defects and complexity.

[Figure 4 appears here.]

Figure 4. The relationship between complexity, error, variation, and defects. Since excessive variation and error are both rare events, they can be treated as independent defect sources which, by themselves, are not complete models of defects. Complexity increases both errors and defects resulting from variation.

Some earlier efforts have been made to test on a broad basis the relationship between product complexity and defect rates (Womack et al (1990), Ekings (1988)). These efforts have led to weak correlations which we attribute to two factors: 1) oversimplified measures of complexity, and 2) differences in quality control between organizations.

Quantifying Complexity

At present, there are only a few relative and no absolute measures of complexity. Control of product complexity, a major source of defects, has thus far been hampered by this lack of a quantifiable basis of measurement.

We introduce a general perspective on complexity, proposing that a valid measure of this characteristic must have two essential elements:

a) a quantity measure - identifying the number of elements which contribute to complexity, and
b) a difficulty measure - a relative measure of the difficulty in generating or executing each of the elements.
A common weakness in efforts to understand the role of complexity has been a singular focus on the number of elements. It is generally easy to count the number of elements contributing to complexity, such as the number of assembly operations in a product. However, assessing the relative difficulty of the elements is challenging. At present, there are no generally accepted methods for comparing part complexity. However, Design for Assembly (DFA) methods have been developed in the last decade which we believe have the potential for assessing assembly complexity.

Assessing Assembly Complexity


" Design for Assembly (DFA) methodologies were developed with the objective of helping
designers identify opportunities for making assembly easier. In every method, the number
of parts, and assembly operations, quantity measures of complexity, are counted.

Virtually, all of these methodologies also translate handling, insertion, and securing actions
into an estimate of the nominal time needed to perform an assembly process. In general,
for every factor that increases the difficulty of the action or the complexity of the assembly
interface, there is an increase in the predicted time for execution. The assembly time per
operation provides an approximate relative comparison of the difficulty or complexity of
the dissimilar assembly activities. Thus, the Design for Assembly (DFA) methods address
both quantity and difficulty measures of complexity.

Assembly Complexity - A Key Factor in Conformance Quality


In late 1990 Motorola published a report (Brannan (1991)) which showed that the number of defects per million parts decreased dramatically with increases in the "manual assembly efficiency." Manual assembly efficiency is an arbitrary measure used in the Boothroyd Dewhurst (1985) Design for Assembly (DFA) method. Motorola's data illustrating the relationship between defects and assembly efficiency is shown in Figure 5.

"_ IE+I i _

Figure 5. Observed Defects per MillionParts versus the Manual AssemblyEfficiency


published by Motorola (Brannan (1991)). The line drawn by Motorola is not a curve fit.

To explain the relationship in Figure 5, consider that the number of assembly operations and the average time required to perform each operation generally decrease as the assembly efficiency increases. Assuming a constant probability of human error per unit time, the defect rate should increase as the assembly time per operation or complexity increases. Thus, the relationship between defects and assembly efficiency observed by Motorola is intuitively sound. Motorola did not attempt to explain this relationship.

The data provided by Motorola piqued our interest because it suggested that there may be
a quantifiable link between a criterion describing assembly complexity and product defects.
Such a correlation could be used to evaluate the conformance quality potential of product
concepts before tolerance studies are even initiated. In addition, a model based on such a
relationship could provide important insights useful in defining quality improvement
strategies (Barkan (1993)).

V. A New Model of Conformance Quality

Testing the Relationship Between Defects and Assembly Complexity


To test the relationship between assembly complexity measured using DFA and defect
rates, extensive data were obtained from Motorola (Gebala (1992)) and a disk drive
manufacturer. For each data set, there are many possible ways of comparing defect rates
to complexity measures. For example, defect rates can be expressed in terms of defects
per operation, defects per part, or defects per assembly, and assembly complexity could be
described in terms of total assembly time, average time per assembly operation, or average
assembly time per part. In our study, more than 50 univariate and multivariate
comparisons of defect rates and complexity measures were made for the data supplied by
each manufacturer.

While defects did not have any consistent correlation with part count, the defect-to-complexity relationship illustrated in Figure 6 was found to provide the highest correlation for the data of both manufacturers. This relationship links the Defects per Unit (DPU) to a function of the total DFA predicted assembly time (TM) and the number of assembly operations (Na). The relationship is remarkable not only for the high correlations (correlation coefficients (r) > 0.95) but also for the observed consistency of the data from two different manufacturers. Each point in the figure represents a different product or subassembly. Collectively the data reflect defect rates involving tens of millions of parts and assembly operations.

[Figure 6: log-log plot of defects per unit versus assembly complexity, with separate power-law curves for Motorola and the disk drive company.]

Figure 6. Defects per unit versus assembly complexity for Motorola (Gebala (1992)) and a disk drive manufacturer. The solid lines represent power curve fits to the data. Each point represents a different product or subassembly.

The equation defining the model of defects per unit (DPU) versus assembly complexity is as follows:

DPU = (TM - t0·Na)^k / c3   (1)

where t0, k, and c3 are three constants. The constant t0 represents a threshold assembly time for the simplest assembly operation. The constant k is the slope of the data in Figure 6, which is approximately equal to 1.3 for both sets of data, and the constant c3 is related to the vertical position of the data in the same figure.

Equation 1 is the core of a new model of production defects that is more general or global
in nature. This comprehensive model is more fundamentally sound and better validated by
correlation with production performance than any other current approach to predicting
defect rates. We are not aware of any methods or studies, for example, which have
combined all variations in a product to predict general defect rates in a manner that has
been correlated with production performance spanning a broad range of product
complexity.

The relationships illustrated in Figure 6 clearly show that complexity plays a dominant role in product defects. The differences in the two sets of data also demonstrate that large differences in the level of quality control can exist among manufacturers, as we had anticipated, explaining why poor correlations between defects and complexity have been observed in previous industry-wide studies. Figure 6 also demonstrates that the relative level of quality control can be easily compared, even for companies producing dissimilar products of different complexity. This promises to be particularly helpful in benchmarking studies.

Assessing the Level of Quality Control for Each Organization
To assess the global level of quality control for each organization, in the ideal case Design for Assembly (DFA) data and defect data would be collected for a variety of products currently in production which span a range of complexity for each company. The values of the constants can then be determined by curve fitting, as illustrated for two manufacturers in Figure 6. However, this is not always possible, for several reasons:

1. Products may be too complex to permit economical DFA evaluations,
2. Lack of cooperation or support prevents collection of defect and DFA data, or
3. Products are so similar in complexity that correlations would not be meaningful.

In these cases, if the defect rate and Design for Assembly (DFA) analysis for one existing product can be obtained, the relationship between defects and complexity can be estimated by assuming k = 1.3 and t0 = 2 sec/operation in the following equation:

c3 = (TM - 2·Na)^1.3 / DPU   (2)

For example, given 500 defects in every million of the boxes illustrated in Figure 3 (TM = 22.5 sec and 4 assembly operations), the value of c3 could be determined from Equation 2 to be approximately 64,700 sec^1.3 units per defect.
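As a check on that arithmetic, the calibration of Equation 2 from the single box data point can be computed directly:

    dpu_observed = 500e-6   # 500 defects per million units
    tm, na = 22.5, 4        # DFA time (sec) and operation count for the box

    c3 = (tm - 2.0 * na) ** 1.3 / dpu_observed
    print(round(c3))  # ~64,700 sec^1.3 units per defect, as quoted above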

Comparison of Product Concepts - An Illustrated Example


Even where data is limited, one of the most important benefits of this study is that it leads to a method of comparing the relative potential defect rates of product concepts, given a constant level of quality control. This method of comparing concept defect rates can be applied at the earliest stages of concept development, before detailed design is begun.

The techniques for comparing product concepts will be illustrated by examining assembly
options for the box described by Olivera (1988) as illustrated in Figure 3. The original
design and proposed alternate assembly concepts are shown in Figure 7. For each of the
proposed assembly concepts, the DFA assembly time and number of assembly operations
were determined. These values can be substituted into Equation 1 to estimate the defect
rates for a constant level of quality control. The results of this simple analysis, which was completed in about half an hour, are listed in Table 1. The comparison of the potential defect rates can generally be completed in a fraction of the time required to perform a detailed tolerance analysis of a single concept, yet with predictions that more accurately reflect the defect rates that are likely to be observed in production.


[Figure 7: four box sketches - Original Design; 1. Center Fastener; 2. Snap Cover; 3. Pin Lock Cover.]

Figure 7. Alternate assembly concepts for a box. The original concept was proposed by Olivera (1988).

Table 1. Estimated Defects per Unit (DPU) for the four product concepts illustrated in Figure 7. Calculations are based on Equation 1 using k = 1.3, c3 = 64,700, t0 = 2 sec/op. Note: 1.6 seconds has been added to Concept 3 for assembly operations which are not "top down."

                             Original     Concept 1   Concept 2   Concept 3
Concept                      (2 Screws)   Centered    Snap Lid    Lock Pin
                                          Screw
DFA Ass'y Time (TM, sec)     22.5         13.8        7           16.1
Number of Operations (Na)    4            3           2           4
Estimated DPU                0.0005       0.000223    0.000065    0.000234
Conformance Quality Rank     4            2           1           3
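Substituting the Table 1 inputs into Equation 1 reproduces the tabulated DPU values; the short sketch below does so with the same constants (k = 1.3, c3 = 64,700, t0 = 2 sec/op).

    k, c3, t0 = 1.3, 64_700, 2.0
    concepts = {
        "Original (2 screws)":        (22.5, 4),
        "Concept 1 (centered screw)": (13.8, 3),
        "Concept 2 (snap lid)":       (7.0, 2),
        "Concept 3 (lock pin)":       (16.1, 4),
    }
    for name, (tm, na) in concepts.items():
        dpu = (tm - t0 * na) ** k / c3   # Equation 1
        print(f"{name}: DPU ~ {dpu:.6f}")
    # Prints ~0.00050, 0.00022, 0.000065, and 0.00023, matching Table 1.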

From the results of this analysis, the snap fit lid will have a defect rate roughly seven times lower than the original product with two screws. Eliminating one screw, or using lock pins, will reduce the defect rate by about a factor of two relative to the baseline design. In the absence of other constraints, the snap fit lid would be selected; however, there are often many additional requirements which may favor a product concept having a potential defect rate higher than the minimum alternative. The method presented here simply allows consideration of the potential defect rate as one of the factors to be considered in selecting among design alternatives.

The difference in potential defect rates among alternative concepts can also be illustrated
graphically. Equation 1 reveals that the number of defects per unit (DPU) will be constant
as long as the following expression is equal to a constant (C):

C = TM - t0·Na   (3)

From Equation 3, lines of constant defects per unit (iso-DPU) can be plotted on the same
figure with the Design for Assembly data for the alternative concepts as shown in Figure
8. The slope of the lines of constant defects per unit (iso-DPU) is equal to the threshold
assembly time to. Note that the relative potential defect rate can be determined by the
perpendicular distance above or below the iso-DPU line passing through the baseline
concept. The defect rate decreases as distance below this line increases.
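Because DPU grows monotonically with C, Equation 3 alone is enough to rank concepts; the sketch below orders the Table 1 concepts by C without needing k or c3.

    t0 = 2.0
    concepts = {"Original": (22.5, 4), "Concept 1": (13.8, 3),
                "Concept 2": (7.0, 2), "Concept 3": (16.1, 4)}

    # Rank by C = TM - t0*Na; a lower C means a lower predicted defect rate.
    for name, (tm, na) in sorted(concepts.items(),
                                 key=lambda kv: kv[1][0] - t0 * kv[1][1]):
        print(name, tm - t0 * na)
    # Concept 2 (C = 3.0) ranks best; the Original concept (C = 14.5) ranks worst.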

[Figure 8: predicted assembly time (vertical axis, 0-30 sec) versus number of assembly operations (horizontal axis, 1-5), with iso-DPU lines at 100, 300, 500, and 700 ppm and points for the Original concept and Concepts 1-3.]

Figure 8. Predicted Design for Assembly (DFA) time versus the number of assembly operations for the four products from Figure 7. The solid line is for constant defects per unit equal to the baseline concept labeled "Original" (based on t0 = 2 sec/op). Numbers indicate the other concepts.

Concept 3 with the pin lock cover is not ideal from the standpoint of ease of assembly
because the pins are not inserted in a top-down manner. However, this product illustrates
a very important point. Comparing Concepts 1 and 3, the projected defect rates are
similar in spite of the fact that Concept 3 requires more assembly time and more assembly
operations! This relationship at first seems counter-intuitive, but results from the inverse
relationship of defect rates with the number of assembly operations from Equation 1. In
addition to the data presented earlier, one automotive manufacturer provided defect data
spanning billions of assembly operations. Although we could not evaluate the data in the
same manner as the Motorola and disk drive data because the automotive manufacturer
did not have Design for Assembly evaluations, one important and consistent trend was the
negative correlation with the number of assembly operations.

Comparing the results of Table 1 and Figure 8, we can see that the relative conformance quality of the concepts studied can be predicted by the graphical method without the need for lengthy calculations. This comparison is based on the assumption that the same criterion, such as 6 sigma, is used to control the level of variation of all concepts. A line of the appropriate slope can simply be drawn through the point representing the baseline concept, and the relative distance above or below the line can be used to estimate the relative potential defect rates, providing valuable input during concept comparison and selection with a minimum amount of analysis.

Because the level of quality control within an organization can change, the defect rates
predicted using this method should not be viewed as either absolute or unalterable. The
great advantage of this method is that it provides a useful relative estimate of the defect
rates of alternate concepts that is not sensitive to changes in the level of quality control.

VI. Conclusions
Achieving defect rates in the range of 1-10 parts per million, the new world class standard,
requires a clear understanding of the cause of defects. We have shown that errors and
variation are distinctly different defect sources. In contrast to the dispersion observed in every naturally occurring process, errors, such as omitting a part during assembly, can only be described in terms of probability. Errors in the production environment are such extremely rare events that they are poorly characterized using traditional statistical methods based on small samples, where outliers are often discarded. Although errors rarely occur, there are so many different types of error that error is often a major source of defects in products.

This reveals a critical limitation of statistical methods in achieving the highest levels of
conformance quality. Our assessment of the distinction between error and variation is
substantiated by experience at Motorola which has shown that extremely low defect rates
were only achieved when methods for eliminating error were used in conjunction with the
control of variation.

The distinct difference between error and variation suggests that a comprehensive or
global model of product defect rates cannot be based on either of these concepts
independently. Reducing product complexity eliminates opportunities for error and
decreases the probability of defects resulting from variation. Thus, the link between
complexity and defects is a more global or comprehensive model useful in predicting
defects. However, industry-wide studies have shown poor correlations between defect
rates and measures of complexity due to large differences in the level of quality control
between manufacturers.

Using Design for Assembly methods to measure product complexity, we have shown that defect rates are highly correlated with complexity within individual organizations. The identified link between complexity and defects provides a new means of benchmarking the relative quality control of organizations producing dissimilar products and aids in identifying important opportunities for reducing defects. Using the relationship between defects and complexity identified in this study, the potential defect rates of product concepts can be compared in the earliest stages of product development with minimum effort, providing a powerful new tool for product development.

The concepts presented in this study have a potentially profound impact on the understanding of defects and the opportunities for improvement. They open the way for the development of a quality strategy which can identify the most efficient method of reducing defects for each organization, depending on its relative strengths and weaknesses in the control of error, variation, and product complexity.

REFERENCES

" The Associated Press, "Space shuttle workers afraid to report errors, study finds". The Modesto
Bee, Modesto, July 27, 1993.
Barkan, P., & C. M. Hinckley, "The Benefits and Limitations of Structured Methodologies in
" Product Design". Manufacturing Review, Vol. 6, No. 3, September, 1993, pp. 211-219.
Boothroyd, G., Dewhurst, P. (1985). Product Design for Assembly. Boothroyd Dewhurst, Inc.,
Section 2, Kingston, 1985.
Brannan, B. (1991). "Six Sigma Quality and DFA-DFMA Case Study/Motorola Inc". Boothroyd
& Dewhurst DFM Insight Vol. 2, Winter, pg 1-3.
Buzzell, R. D., and B. T. Gale (1987). The PIMS (Profit Impact of Market Strategy) Principles-
Linking Strategy to Performance. Chapter 6, pp. 103-131.
Card, S., W. English, and B. Burr (1978). "Evaluation of mouse, rate-controlled isometric joystick,
step keys and text keys for text selection on a CRT". Ergonomics, 21(8), pp. 601-613.
Ekings, J. D. (1988), "Assembly Industries". Juran's Quality Control Handbook, Section 30, J. M.
Juran, editor in chief, Frank M. Gryna associate editor, Fourth Edition, McGraw-Hill, New
York, 1988., pp. 24.6.
Gatchell, S. M. (1979). "The Effect of Part Proliferation on Assembly Line Operators' Decision
Making Capabilities". Proceedings of the Human Factors Society, 23rd Annual Meeting,
(Santa Monica : The Human Factors Society).
Gebala, D. (1992), Correspondence, Motorola, Inc., Advanced Manufacturing Technologies, Fort
Lauderdale, FL, August 7, 1992.
Gryna, F. M. (1988). "Manufacturing Planning". Chapter 16, Juran's Quality Control Handbook,
J. M. Juran, editor in chief, Frank M. Gryna associate editor, Fourth Edition, McGraw-Hill,
New York, pp. 16.19-16.21.
Harris, D. H. (1966). "The Effect of Equipment Complexity on Inspection Performance". Journal
of Applied Psychology 50, 236-237.
Harris, D. H., and F. B. Chaney (1969). Human Factors in Quality Assurance. Wiley, New York,
p. 9.
Harry, M. J. and R. Stewart (1988), "Six Sigma Mechanical Design Tolerancing". Motorola
Publication No. 6t7-2-10/88, Scottsdale, AZ.
Heggem, R. (April 30, 1992), "Proactive Tolerance Analysis". Applied Computer Solutions, Santa
Clara.
Held, D. O. (June 1993), "Quality Advisor". Manufacturing Engineering, pp. 12.
Hinckley, C. M. (1993). A Global Conformance Quality Model - A New Strategic Tool for
Minimizing Defects Caused by Variation, Error, and Complexity. A Dissertation submitted
to the Department of Mechanical Engineering and the Committee of Graduate Studies of
Stanford University, Stanford.
Ishikawa, K. (1990). Guide to Quality Control. Asian Productivity Organization, Quality
Resources, White Plains, Seventh Printing.
Jerison, H. J. (1963). "On the Decrement Function in Human Vigilance". In D. N. Buckner and J.
J. McGrath (Eds.), Vigilance: A symposium, New York, McGraw-Hill, pp. 199-216.
Lafferty, J. P., (October 1992) "Quality Advisor," Manufacturing Engineering.
MacKenzie, I. S., A. Sellen, and W. Buxton (1991). "A Comparison of Input Devices in Elemental
Pointing and Dragging Tasks". Human Factors in Computing Systems-Reaching Through
Technology, CHI 1991 Conference Proceedings, Edited by S. P. Robertson, G. M. Olson, and
J. S. Olson, Addison Wesley Publishing Co., Reading, pp. 161-166.

Meister, D. (1961). Analysis of Human Initiated Equipment Failures During Category I Testing.
OSTF-1, Report REL R-54. General Dynamics/Astronautics, San Diego.
Nikkan Kogyo Shimbun, Ltd./Factory Magazine, Ed. (1988). Poka-yoke: Improving Product
Quality by Preventing Defects. Edited by Productivity Press, Cambridge.
Oberg, E., F. D. Jones, and H. L. Horton (1984). Machinery's Handbook - 22nd Edition. Edited
by H. H. Ryffel and J. H. Geronimo, Industrial Press Inc., New York, p. 1532.
Olivera, R. (1988). "6 Sigma/Fit Tolerance Analysis". Communications Sector, Motorola Inc.,
Schaumburg.
Park, K. S. (1987), Human Reliability - Analysis, Prediction, and Prevention of Human Errors,
Elsevier, Amsterdam, p. 197.
Rasmussen, J. (1981). Human Errors, A taxonomy for describing human malfunctions in
industrial installations. Report No. RISO-M-2304, Roskilde, Denmark: RISO National
Laboratories.
Rasmussen, J. (1985), "Trends in Human Reliability Analysis". Ergonomics, 28(8), pp. 1185-
1195.
Research Needs and Technological Opportunities in Mechanical Tolerancing (1988). Results of
an International Workshop, Select Panel on Research Opportunities in Mechanical
Tolerancing held in Orlando Florida, Sep 28-Oct 2, for the US National Science Foundation
Design and Manufacturing Systems Division, ASME CRTD-15, New York.
Rook, L. W., Jr. (1962). Reduction of Human Error in Production. SCTM 93-62(14), Sandia
National Laboratories, Division 1443, June 1962.
Rook, L. W., Jr. (1965). Motivation and Human Error. SC-TM-135, Sandia National
Laboratories, Albuquerque, p. 5.
Ross, P. J. (1988), Taguchi Techniques for Quality Engineering. McGraw-Hill, New York.
Sanger, D. E. (November 1, 1990), "U.S. Suppliers Get a Toyota Lecture". The New York Times.
Shingo, S. (1986). Zero Quality Control: Source Inspection and the Poka-yoke System.
Translated by Andrew P. Dillon, Productivity Press, Cambridge.
Smith, B. (1993), "Making a War on Defects-Six Sigma Design". IEEE Spectrum, September,
1993, pp. 43-47.
Stalk, G., Jr., and T. M. Hout (1990). Competing Against Time - How Time-Based Competition is
Reshaping Global Markets. The Free Press.
Swain, A. D. and H. E. Guttmann (1983), Handbook of Human Reliability Analysis with
Emphasis on Nuclear Power Plant Applications, NUREG/CR-1278, SAND80-0200, RX,
AN, Sandia National Laboratories, Albuquerque, August, 1983.
Tsuda, Y. (1993), "Implications of Fool Proofing in the Manufacturing Process". Quality Through
Engineering Design, Edited by W. Kuo, Elsevier, pp. 79-95.
Voegtlen, H. D. (1988). "Complex Industries". Section 31, Juran's Quality Control Handbook, J.
M. Juran, editor in chief, Frank M. Gryna associate editor, Fourth Edition, McGraw-Hill, New
York, pp. 31.1-31.24.
Womack, J. P., D. T. Jones & D. Roos (1990). The Machine that Changed the World. Rawson
Associates, New York, p. 93, 96.

UNLIMITED RELEASE

" INITIAL DISTRIBUTION

Susan Held
Co-Program Manager, Leadership Through Quality Staff
Albuquerque Operations Office
U. S. Department of Energy
P. O. Box 5400
Albuquerque, NM 87185-5400

Mark J. Kahnke
Director, Total Quality Management and Planning
AlliedSignal Inc., Kansas City Division, Mail Post 2A44
P.O. Box 419159
Kansas City, Mo. 64141-6159

Don E. Michel
President and General Manager
EG&G Mound Applied Technologies, Inc.
P.O. Box 3000
Miamisburg, OH 45342-3000

Melinda K. Bynum
TQM Director
Rocky Flats Plant
P.O. Box 464
Golden, CO 80402-0464

Anders W. Lundberg
Special Assistant for Weapons Safety Studies
Defense Technologies Engineering Division
Lawrence Livermore National Laboratories
P. O. Box 808, L-125
Livermore, CA 94551

Harry Flaugh
WX Deputy Division Leader
Los Alamos National Laboratories
WX-DO, MS P945
Los Alamos, NM 87545

Susan H. Alexander
Manager, Organizational Improvement Programs
Martin Marietta Energy Systems, Inc.
Y-12 Site, Building 9119, Mail Stop 8236
Oak Ridge, TN 37831

William F. Iema
Director - Quality Assurance
Martin Marietta Specialty Components
P.O. Box 2908
Largo, FL 34649

J. C. Drummond
Division Manager, Quality
Mason & Hanger -Silas Mason Co., Inc.
Building 12-28
P.O. Box 30020
Amarillo, TX 79177

Dennis L. Hayes
Manager, Tritium Program Coordinator
Westinghouse Savannah River Site
235-H
P.O. Box 516
Aiken, SC 29802

MS

0112 P.M. Stanford, 10000


0141 M.R. Kestenbaum, 11000
0149 C.P. Robinson, 4000
0151 G. Yonas, 9000
0185 K.G. Haug, 10100
0186 C.E. Emery, 3000
0360 A.R.C. Westwood, 1000
0366 R.T. Johnson, 1040
0429 W.C. Nickell, 5100
0455 G.R. Otey, 4100
0463 R.L. Hagengruber, 5000
0513 H.W. Schmitt, 2000
0517 M. Prickett, 2508
0724 D.L. Hartley, 6000
0872 A. Beradino, 5408
0953 W.E. Alzheimer, 2900
1067 M.L. Jones, 7000
1357 A.A. Trujillo, 12913
1357 T. Olascoaga, 12914
1359 D.W. Bushmire, 12911
1361 M.R. Baca, 12909
1363 B. Hawkinson, 12662
1365 J.K. Gabaldon, 12903
1369 C.M. Tapp, 12900

1369 R.R. Prairie, 12908
1380 W.D. Siemens, 4200
9001 J.C. Crawford, 8000
Attn: M.E. John, 8100
• W.J. McLean, 8300
P. N. Smith, 8500
R. C. Wayne, 8700
9005 J.B. Wright, 5300
9006 E.E. Ives, 5200
9007 T.M. Dyer, 8800
9014 D.R. Henson, 5371
9036 C.T. Yokomizo, 8007
9037 R.J. Detry, 8200
9103 G. Thomas, 8111
9103 P.R. Bryson, 8111
9105 L.A. Hiles, 8400
9203 E.T. Cull, 5354
9405 R.E. Stoltz, 8008
9408 A.J. West, 8414
9409 L.N. Tallerico, 8205
9901 L.A. West, 8600
9901 C.M. Hinckley, 8603

9021 Publications for OSTI(10), 8535


9021 Publications/Technical Library Processes, 8535
0899 Technical Library Processes Department (4), 7141
9018 Central Technical Files (3), 8523

