4. Quality Engineering: The Taguchi Method
4.1. Introduction 57
4.2. Electronic and Electrical Engineering 59
Functionality Evaluation of System Using Power (Amplitude) 61
Quality Engineering of System Using Frequency 65
4.3. Mechanical Engineering 73
Conventional Meaning of Robust Design 73
New Method: Functionality Design 74
Problem Solving and Quality Engineering 79
Signal and Output in Mechanical Engineering 80
Generic Function of Machining 80
When On and Off Conditions Exist 83
4.4. Chemical Engineering 85
Function of an Engine 86
General Chemical Reactions 87
Evaluation of Images 88
Functionality Such as Granulation or Polymerization Distribution 91
Separation System 91
4.5. Medical Treatment and Efficacy Experimentation 93
4.6. Software Testing 97
Two Types of Signal Factor and Software 97
Layout of Signal Factors in an Orthogonal Array 97
Software Diagnosis Using Interaction 98
System Decomposition 102
4.7. MT and MTS Methods 102
MT (Mahalanobis–Taguchi) Method 102
Application of the MT Method to a Medical Diagnosis 104
Design of General Pattern Recognition and Evaluation Procedure 108
Summary of Partial MD Groups: Countermeasure for Collinearity 114
4.8. On-line Quality Engineering 116
References 123
56 Taguchi’s Quality Engineering Handbook. Genichi Taguchi, Subir Chowdhury and Yuin Wu
Copyright © 2005 Genichi Taguchi, Subir Chowdhury, Yuin Wu.
4.1. Introduction
The term robust design is in widespread use in Europe and the United States. It
refers to the design of a product that causes no trouble under any conditions and
answers the question: What is a good-quality product? As a generic term, quality
or robust design has no meaning; it is merely an objective. A product that functions
under any conditions is obviously good. Again, saying this is meaningless. All en-
gineers attempt to design what will work under various conditions. The key issue
is not design itself but how to evaluate functions under known and unknown
conditions.
At the Research Institute of Electrical Communication in the 1950s, a telephone exchange and a telephone were designed to have 40- and 15-year design lives, respectively, as demanded by the Bell System. A successful crossbar telephone exchange was developed in the 1950s; however, it was replaced 20 years later by an electronic exchange. From this one could infer that a 40-year design life was not reasonable to expect.
We believe that design life is an issue not of engineering but of product plan-
ning. Thinking of it differently from the crossbar telephone exchange example,
rather than simply prolonging design life for most products, we should preserve
limited resources through our work. No one opposes the general idea that we
should design a product that functions properly under various conditions during
its design life. What is important to discover is how to assess proper functions.
The conventional method has been to examine whether a product functions
correctly under several predetermined test conditions. Around 1985, we visited the
Circuit Laboratory, one of the Bell Labs. They developed a new circuit using the following procedure: first, they developed a circuit achieving the objective function under a standard condition; once it satisfied the objective function, it was assessed under 16 different types of conditions, including different environments of use and after-loading conditions.
If the product did not work properly under some of the different conditions,
design constants (parameters) were changed so that it would function well. This
is considered parameter design in the old sense. In quality engineering (QE), to
achieve an objective function by altering design constants is called tuning. Under
the Taguchi method, functional improvement using tuning should be made under
standard conditions only after conducting stability design because tuning is im-
provement based on response analysis. That is, we should not take measures for
noises by taking advantage of cause-and-effect relationships.
The reason for this is that even if the product functions well under the 16
conditions noted above, we cannot predict whether it works under other condi-
tions. This procedure does not guarantee the product’s proper functioning within
its life span under various unknown conditions. In other words, QE is focused not
on response but on interaction between designs and signals or noises. Designing
parameters to attain an objective function is equivalent to studying first-order
moments. Interaction is related to second-order moments. First-order moments
involve a scalar or vector, whereas second-order moments are studied by two-
dimensional tensors. Quality engineering maintains that we should do tuning only
under standard conditions after completing robust design. A two-dimensional ten-
sor does not necessarily represent noises in an SN ratio. In quality engineering,
noise effects should have a continual monotonic tendency.
min over A, B, … of  Σ(i = 1 to 16) [ƒ(A, B, … , Ni) − m]²    (4.2)
The effective power is measured in watts, whereas the apparent power is measured
in volt-amperes. For example, suppose that input is defined as the following si-
nusoidal voltage:
W = (EI/2) cos(α − β)    (4.5)

apparent power = EI/2    (4.6)

reactive power = (EI/2)[1 − cos(α − β)]    (4.7)
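As a quick numeric check of equations (4.5) to (4.7), the sketch below splits a sinusoidal input into effective, apparent, and reactive power. The function name and sample values are ours, not the handbook's.

```python
import math

def power_split(E, I, alpha, beta):
    """Effective, apparent, and reactive power of a sinusoid with
    voltage amplitude E, current amplitude I, and phases alpha and
    beta in radians, following equations (4.5)-(4.7)."""
    apparent = E * I / 2                                # (4.6)
    effective = apparent * math.cos(alpha - beta)       # (4.5)
    reactive = apparent * (1 - math.cos(alpha - beta))  # (4.7)
    return effective, apparent, reactive
```

Note that, as these equations are written, effective + reactive = apparent by construction, which is exactly the decomposition of total variation the text pursues before moving to the Hermitian form.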
Since in the actual circuit (as in the mechanical engineering case of vibration), phases α and β vary, we need to decompose total variation into parts, including not only effective energy but also reactive energy. This is the reason that variation decomposition by a quadratic form in complex numbers, or positive Hermitian form, is required.
y = βM    (4.8)

ST = y1² + y2² + ⋯ + yn²    (4.9)

Using the Hermitian form, which deals with the quadratic form in complex numbers, the estimate of β is expressed as

β̂ = (M̄1y1 + M̄2y2 + ⋯ + M̄nyn) / (M̄1M1 + M̄2M2 + ⋯ + M̄nMn)    (4.11)

Se = ST − Sβ    (4.14)

VN = Ve = Se/(n − 1)    (4.15)
S = 10 log (1/r)(Sβ − Ve)    (4.17)

Now r (the effective divider), representing the magnitude of the input, is expressed as

r = M̄1M1 + M̄2M2 + ⋯ + M̄nMn    (4.18)
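To see how the Hermitian form behaves with complex signal levels, the sketch below recovers β from noiseless complex data using equations (4.11) and (4.18). The sample values are invented for illustration; `conjugate()` plays the role of the conjugate bar in the text.

```python
# Hermitian estimate of beta for complex signal levels M and outputs y,
# following equations (4.11) and (4.18).
M = [1 + 1j, 2 + 0j, 3j]          # hypothetical complex signal levels
beta_true = 0.8 - 0.2j            # assumed true coefficient
y = [beta_true * m for m in M]    # noiseless outputs, y = beta * M

r = sum((m.conjugate() * m).real for m in M)                 # (4.18)
beta_hat = sum(m.conjugate() * v for m, v in zip(M, y)) / r  # (4.11)
```

With noise-free data the estimate reproduces β exactly; with real measurements the residual goes into Se of equation (4.14).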
In cases where there is a three-level compounded noise factor N, data can be tabulated as shown in Table 4.1 when a signal M has k levels. L1, L2, and L3 are linear equations computed as

L1 = M̄1y11 + M̄2y12 + ⋯ + M̄ky1k
Table 4.1
Input/output data

                     Signal
Noise    M1    M2    ⋯    Mk    Linear Equation
N1       y11   y12   ⋯    y1k   L1
N2       y21   y22   ⋯    y2k   L2
N3       y31   y32   ⋯    y3k   L3
E = hf    (4.30)
In this case, E is too small to measure. Since we cannot measure frequency as part of wave power, we need to measure frequency itself as a quantity proportional to energy. As for output, we should keep the power in an oscillation system sufficiently stable. Since the system's functionality was discussed in the preceding section, we describe only a procedure for measuring the functionality based on frequency.

If a signal has one level and only the stability of frequency is in question, measurement of time and distance is regarded as essential. Stability for this case is nominal-the-best stability of frequency, which is used for measuring time and distance. More exactly, we sometimes take the square root of frequency.
Now

Sm = (y1 + y2 + y3 + y4)²/4    (f = 1)    (4.34)

ST = y1² + y2² + y3² + y4²    (f = 4)    (4.35)

Se = ST − Sm    (f = 3)    (4.36)

Ve = (1/3)Se    (4.37)
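Equations (4.34) to (4.37) can be sketched directly for n = 4 repeated frequency readings. The SN-ratio line below uses the standard nominal-the-best form η = 10 log {[(Sm − Ve)/n]/Ve}, which the surrounding discussion implies but does not print here, so treat that line as our assumption.

```python
import math

def nominal_the_best(y):
    """Nominal-the-best decomposition of n repeated measurements y,
    per equations (4.34)-(4.37)."""
    n = len(y)
    Sm = sum(y) ** 2 / n               # (4.34), f = 1
    ST = sum(v * v for v in y)         # (4.35), f = n
    Se = ST - Sm                       # (4.36), f = n - 1
    Ve = Se / (n - 1)                  # (4.37)
    eta = 10 * math.log10((Sm - Ve) / n / Ve)  # assumed SN-ratio form
    return Sm, ST, Se, Ve, eta
```

For example, four readings 10.0, 10.1, 9.9, 10.0 give Sm = 400 and Ve = 0.02/3, i.e. a very stable frequency.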
Table 4.2
Cases where signal factor exists
Noise
Signal N1 N2 Total
M1 y11 y12 y1
M2 y21 y22 y2
⋮ ⋮ ⋮ ⋮
Mk yk1 yk2 yk
Table 4.3
Decomposition of total variation
Factor f S V
M k SM VM
N 1 SN
e k⫺1 Se Ve
Total 2k ST
Now

VM = SM/k    (4.43)

Ve = Se/(k − 1)    (4.44)

VN = (SN + Se)/k    (4.45)
Thus, for modulation functions, we can define the same SN ratio as before.
Although y as output is important, before studying y we should research the fre-
quency stability of the transmitting wave and internal oscillation as discussed in
the preceding section.
S = 10 log (1/6r)(Sβ − Ve)    (4.52)

Now

VN = (SN(F) + Se)/(6k − 3)    (4.53)
Table 4.4
Case where modulated signal exists

                      Modulated Signal
Noise  Frequency  M1   M2   ⋯   Mk   Linear Equation
N1     F1         y11  y12  ⋯   y1k  L1
       F2         y21  y22  ⋯   y2k  L2
       F3         y31  y32  ⋯   y3k  L3
N2     F1         y41  y42  ⋯   y4k  L4
       F2         y51  y52  ⋯   y5k  L5
       F3         y61  y62  ⋯   y6k  L6
Table 4.5
Decomposition of total variation

Factor   f          S       V
β        1          Sβ
F        2          SF
N(F)     3          SN(F)
e        6(k − 1)   Se      Ve
Total    6k         ST
we should research and design only analog functions, because digital systems are included in all AM, FM, and PM; the only difference from analog systems is that the level values are not continuous but discrete. That is, if we improve the SN ratio as an analog function, we can minimize each interval of the discrete values and eventually enhance information density.
❒ Example
A phase-modulation digital system for a signal having a 30° phase interval, such as 0, 30, 60, ... , 330°, has a σ value given by the following equation, even if its analog functional SN ratio is −10 dB:

10 log (β²/σ²) = −10    (4.54)

By taking into account that the unit of signal M is degrees, we obtain the following small value of σ:

The fact that σ, representing the RMS error, is approximately 3° implies that there is almost no error as a digital system, because 3° is one-fifth of the function limit regarded as an error in a digital system. In Table 4.6 all data are expressed in radians. Since the SN ratio is 48.02 dB, σ is computed as follows:

The ratio of this to the function limit of ±15° is 68. Then, if noises are selected properly, almost no error occurs.
Table 4.6
Data for phase shifter (experiment 1 in L18 orthogonal array; rad)
Voltage
Temperature Frequency V1 V2 V3
T1   F1   77 + j101   248 + j322   769 + j1017
     F2   87 + j104   280 + j330   870 + j1044
     F3   97 + j106   311 + j335   970 + j1058
T2   F1   77 + j102   247 + j322   784 + j1025
     F2   88 + j105   280 + j331   889 + j1052
     F3   98 + j107   311 + j335   989 + j1068
❒ Example
This example deals with the stability design of a phase shifter to advance the phase by 45°. Four control factors are chosen and assigned to an L18 orthogonal array. Since the voltage output is taken into consideration, input voltages are selected as a three-level signal. However, the voltage V is also a noise for phase modulation. Additionally, as noises, temperature T and frequency F are chosen as follows:

Voltage (mV): V1 = 224, V2 = 707, V3 = 2234
Table 4.7
Data for phase advance angle (rad)
Voltage
Temperature Frequency V1 V2 V3 Total
T1 F1 0.919 0.915 0.923 2.757
F2 0.874 0.867 0.876 2.617
F3 0.830 0.823 0.829 2.482
T2 F1 0.924 0.916 0.918 2.758
F2 0.873 0.869 0.869 2.611
F3 0.829 0.823 0.824 2.476
Since we can adjust the phase angle to 45° or 0.785 rad (the target angle),
here we pay attention to the nominal-the-best SN ratio to minimize variability. Now
T and V are true noises and F is an indicative factor. Then we analyze as follows:
ST = 0.919² + 0.915² + ⋯ + 0.824² = 13.721679    (f = 18)    (4.58)

Sm = 15.701²/18 = 13.695633    (f = 1)    (4.59)

SF = (5.515² + 5.228² + 4.958²)/6 − Sm = 0.0258625    (f = 2)    (4.60)

Se = ST − Sm − SF = 0.0001835    (f = 15)    (4.61)

Ve = Se/15 = 0.00001223    (4.62)

η = 10 log [(1/18)(13.695633 − 0.00001223) / 0.00001223] = 47.94 dB    (4.63)

S = 10 log (1/18)(13.695633 − 0.00001223) = −1.187 dB    (4.64)
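The computation in equations (4.58) to (4.64) can be replayed directly from the Table 4.7 data; the sketch below reproduces the book's values (Sm = 13.695633, Se = 0.0001835, η ≈ 47.94 dB, S ≈ −1.187 dB) to within rounding.

```python
import math

# Table 4.7 phase-advance angles (rad); rows are (T, F) cells over V1..V3.
rows = [
    [0.919, 0.915, 0.923],  # T1, F1
    [0.874, 0.867, 0.876],  # T1, F2
    [0.830, 0.823, 0.829],  # T1, F3
    [0.924, 0.916, 0.918],  # T2, F1
    [0.873, 0.869, 0.869],  # T2, F2
    [0.829, 0.823, 0.824],  # T2, F3
]
flat = [y for row in rows for y in row]
n = len(flat)                                  # 18 data points
ST = sum(y * y for y in flat)                  # (4.58), f = 18
Sm = sum(flat) ** 2 / n                        # (4.59), f = 1
# Frequency totals pool the two temperature rows for each F level.
F_totals = [sum(rows[k]) + sum(rows[k + 3]) for k in range(3)]
SF = sum(t * t for t in F_totals) / 6 - Sm     # (4.60), f = 2
Se = ST - Sm - SF                              # (4.61), f = 15
Ve = Se / 15                                   # (4.62)
eta = 10 * math.log10((Sm - Ve) / n / Ve)      # (4.63), about 47.94 dB
S = 10 * math.log10((Sm - Ve) / n)             # (4.64), about -1.187 dB
```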
The target angle of 45° for signal S can be expressed in radians as follows:

S = 10 log [(45/180)(3.1416)] = −1.049 dB    (4.65)
As a next step, under optimal conditions, we calculate the phase advance angle
and adjust it to the value in equation (4.65) under standard conditions. This process
can be completed by only one variable.
When there are multiple levels for one signal factor, we should calculate the
dynamic SN ratio. When we design an oscillator using signal factors, we may select
three levels for a signal to change phase, three levels for a signal to change output,
three levels for frequency F, and three levels for temperature as an external con-
dition, and assign them to an L9 orthogonal array. In this case, for the effective
value of output amplitude, we set a proportional equation for the signal of voltage
V as the ideal function. All other factors are noises. As for the phase modulation
function, if we select the following as levels of a phase advance and retard signal,
M1 = −120°
M2 = −60°
M3 = M4 = 0° (dummy)
M5 = +60°
M6 = +120°
DESIGN BY SIMULATION
In electronic and electric systems, theories showing input/output relationships are
available for analysis. In some cases, approximate theories are available. Instead of
experimenting, we can study functionality using simulation techniques on the com-
puter. About half of the electrical and electronic design examples in References 1
New Method: Functionality Design

Quality engineering recommends improving functionality before a quality problem (functional variability) occurs. In other words, before a problem happens, a designer makes improvements. Quite a few researchers or designers face difficulties understanding how to change their designs. Quality engineering advises that we check on changes in functionality using SN ratios by taking advantage of any type of design change. A factor in a design change is called a control factor. Most control factors should be assigned to an L18 orthogonal array. A researcher or designer should have realized and defined both the objective and generic functions of the product being designed.
❒ Example
Imagine a vehicle steering function, more specifically, when we change the orien-
tation of a car by changing the steering angle. The first engineer who designed such
a system considered how much the angle could be changed within a certain range
of steering angle. For example, the steering angle range is defined by the rotation
of a steering wheel, three rotations for each, clockwise and counterclockwise, as
follows:
(−360°)(3) = −1080°
(360°)(3) = +1080°

In this case, the total steering angle adds up to 2160°. Accordingly, the engineer should have considered the relationship between the angle above and the steering curvature y, such as a certain curvature at a steering angle of −1080°, zero curvature at 0°, and a certain curvature at +1080°.
For any function, a developer of a function pursues an ideal relationship between
signal M and output y under conditions of use. In quality engineering, since any
function can be expressed by an input/output relationship of work or energy, es-
sentially an ideal function is regarded as a proportionality. For instance, in the
steering function, the ideal function can be given by
y = βM    (4.66)
In short, N1 represents conditions where the steering angle functions well, whereas
N2 represents conditions where it does not function well. For example, the former
represents a car with new tires running on a dry asphalt road. In contrast, the latter
is a car with worn tires running on a wet asphalt or snowy road.
By changing the vehicle design, we hope to mitigate the difference between N1
and N2. Quality engineering actively takes advantage of the interaction between
control factors and a compounded error factor N to improve functionality. Then we
measure data such as curvature, as shown in Table 4.8 (in the counterclockwise
turn case, negative signs are added, and vice versa). Each Mi value means angle.
Table 4.8 shows data of curvature y, or the reciprocal of the turning radius for
each signal value of a steering angle ranging from zero to almost maximum for both
clockwise and counterclockwise directions. If we turn left or right on a normal road
or steer on a highway, the range of data should be different. The data in Table 4.8 represent situations in which we are driving in a parking lot or steering through a hairpin curve at low speed. The speed factor K below, indicating different conditions of use and called an indicative factor, is often assigned to an outer orthogonal array. This is because a signal factor value varies in accordance with each indicative factor level.
K1: low speed (less than 20 km/h)
Table 4.8
Curvature data
      M1     M2     M3     M4   M5    M6    M7
      −720   −480   −240   0    240   480   720
N1    y11    y12    y13    y14  y15   y16   y17
N2    y21    y22    y23    y24  y25   y26   y27
S = β̂ = (L1 + L2)/(2r)    (4.75)

Ve = Se/12    (4.76)

VN = (SN×β + Se)/13    (4.77)
The reason that sensitivity S is calculated by equation (4.75) instead of using Sβ is that Sβ always takes a positive value. But sensitivity is used to adjust β to the target, and β may be either positive or negative, since the signal M takes both positive and negative values.
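A small sketch of equation (4.75) with invented curvature data shows why the signed estimate β̂ = (L1 + L2)/2r is used for sensitivity: it keeps the sign of β, while Sβ would not.

```python
# Signed sensitivity for the steering-data layout of Table 4.8.
# The curvature values below are hypothetical, chosen so that each
# noise row is exactly proportional to the signal.
M = [-720, -480, -240, 0, 240, 480, 720]        # steering angle (deg)
y_N1 = [-0.036, -0.024, -0.012, 0.0, 0.012, 0.024, 0.036]
y_N2 = [-0.030, -0.020, -0.010, 0.0, 0.010, 0.020, 0.030]

r = sum(m * m for m in M)                       # effective divider
L1 = sum(m * y for m, y in zip(M, y_N1))        # linear equations
L2 = sum(m * y for m, y in zip(M, y_N2))
beta_hat = (L1 + L2) / (2 * r)                  # (4.75), keeps the sign
SNb = (L1 - L2) ** 2 / (2 * r)                  # N x beta variation
```

Here β̂ is the average of the two per-row slopes (5.0e-5 and about 4.17e-5), and reversing the sign of every y flips the sign of β̂, which is the property the text is after.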
Table 4.9
Decomposition of total variation

Source   f    S       V
β        1    Sβ
N×β      1    SN×β    } pooled as VN
e        12   Se      Ve
Total    14   ST
tion. For example, the steering function discussed earlier is not a generic
but an objective function. When an objective function is chosen, we need to
check whether there is an interaction between control factors after allocating
them to an orthogonal array, because their effects (differences in SN ratio,
called gain) do not necessarily have additivity. An L18 orthogonal array is
recommended because its size is appropriate enough to check on the ad-
ditivity of gain. If there is no additivity, the signal and measurement char-
acteristics we used need to be reconsidered for change.
4. Another disadvantage cited is the paradigm shift in the orthogonal array’s role. In quality engineering, control factors are assigned to an orthogonal array. This is not because we need to calculate the main effects of the control factors on the measurement characteristic y. Since the levels of control factors are fixed, there is no need to measure their effects on y using orthogonal arrays. The real objective of using orthogonal arrays is to check the reproducibility of gains, as described in disadvantage 3.
In new product development, there is often no existing engineering knowledge. Engineers are said to struggle to find ways to improve the SN ratio. In fact, the use of orthogonal arrays enables engineers to perform R&D in a totally new area. In such applications, parameter-level intervals may be increased and many parameters may be studied together. If the system selected is not a good one, there will be little SN ratio improvement. Often, one orthogonal array experiment can tell us the limitation on SN ratios. This is an advantage of using an orthogonal array rather than a disadvantage.
Problem Solving and Quality Engineering

We are asked repeatedly how we would solve a problem using quality engineering. Here are some generalities that we use.
1. Even if the problem is related to a consumer’s complaint in the marketplace
or manufacturing variability, in lieu of questioning its root cause, we would
suggest changing the design and using a generic function. This is a technical
approach to problem solving through redesign.
2. To solve complaints in the marketplace, after finding parts sensitive to en-
vironmental changes or deterioration, we would replace them with robust
parts, even if it incurs a cost increase. This is called tolerance design.
3. If variability occurs before shipping, we would reduce it by reviewing process
control procedures and inspection standards. In quality engineering this is
termed on-line quality engineering. This does not involve using Shewhart control
charts or other management devices but does include the following daily
routine activities by operators:
a. Feedback control. Stabilize processes continuously for products to be pro-
duced later by inspecting product characteristics and correcting the
difference between them and target values.
b. Feedforward control. Based on the information from incoming materials or
component parts, predict the product characteristics in the next process
and calibrate processes continuously to match product characteristics with
the target.
c. Preventive maintenance. Change or repair tools periodically with or without
checkup.
Signal and Output in Mechanical Engineering

In terms of the generic function representing mechanical functionality,

y = βM    (4.78)

ST = y1² + ⋯ + yn²    (4.79)
By doing so, we can make ST equivalent to work, and as a result, the following
simple decomposition can be completed:
This type of decomposition has long been used in the telecommunication field.
In the case of a vehicle’s acceleration performance, M should be the square root of distance and y the time to reach it. After choosing two noise levels, we calculate the SN ratio and sensitivity using equation (4.80), which holds true for a case where energy or work is expressed through a square root. In many cases, however, it is not clear whether this is so. To make sure that (4.80) holds, we check the additivity of control factor effects using an orthogonal array. According to Newton’s law, the following formula shows that distance y is proportional to squared time t² under constant acceleration:
y = (b/2)t²    (4.81)

t = √(2/b) √y

Setting M = √y gives the ideal function

t = βM,   β = √(2/b)    (4.82)
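The change of variable in equations (4.81) and (4.82) is easy to verify numerically: for any constant acceleration b, the time t is exactly linear in M = √y, with slope β = √(2/b). The value of b below is arbitrary.

```python
import math

b = 3.0                               # assumed constant acceleration
beta = math.sqrt(2 / b)               # slope of the ideal function (4.82)

for y in (1.0, 4.0, 9.0, 16.0):       # distances
    t = math.sqrt(2 * y / b)          # time to reach y, from (4.81)
    M = math.sqrt(y)                  # square-root signal
    assert abs(t - beta * M) < 1e-12  # t = beta * M holds exactly
```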
Generic Function of Machining

The term generic function is used quite often in quality engineering. This is not an objective function but a function serving as the means to achieve the objective. Since a function means to make something work, work should be measured as output. For example, in the case of cutting, it is the amount of material cut. On the other hand, as work input, electricity consumption or fuel consumption can be selected. When
y = βM    (4.83)
Since in this case both the signal (amount of cut M) and the data (electricity consumption y) are observations, M and y have six levels and the effects of N are not considered. For a practical situation, we should take the square root of both y and M. The linear equation is expressed in square roots of M and y, and the effective divider is

r = M11² + M12² + ⋯ + M23²    (4.85)
Table 4.10
Data for machining
T1 T2 T3
N1 M11 M12 M13
y11 y12 y13
N2 M21 M22 M23
y21 y22 y23
Sβ = L²/r    (f = 1)    (4.87)

Se = ST − Sβ    (f = 5)    (4.88)

Ve = Se/5    (4.89)

S = 10 log (1/r)(Sβ − Ve)    (4.91)
Since we analyze based on square roots, the true value of S is equal to the estimate of β², that is, the consumption of electricity needed for cutting, which should be smaller. If we perform smooth machining with a small amount of electricity, we can obviously improve dimensional accuracy and flatness.
y = βT    (4.92)

ST = y11² + y12² + ⋯ + y23²    (f = 6)    (4.93)

Sβ = (L1 + L2)²/(2r)    (f = 1)    (4.94)

SN×β = (L1 − L2)²/(2r)    (f = 1)    (4.95)

Now

r = T1² + T2² + T3²    (4.98)

S = 10 log (1/2r)(Sβ − Ve)    (4.100)
Then

In this case, β² of the ideal function represents electricity consumption per unit time, which should be larger. To improve the SN ratio using equation (4.99) means to design a procedure for using a large amount of electricity smoothly. Since we minimize electricity consumption via equations (4.90) and (4.91), we can improve not only productivity but also quality by taking advantage of both analyses.
When On and Off Conditions Exist

Since quality engineering is a generic technology, a common problem occurs across the mechanical, chemical, and electronic engineering fields. We explain the functionality evaluation method for on and off conditions using an example of machining.
Recently, evaluation methods of machining have made significant progress, as
shown in an experiment implemented in 1997 by Ishikawajima–Harima Heavy
Industries (IHI) and sponsored by the National Space Development Agency of
Japan (NASDA). The content of this experiment was released in the Quality En-
gineering Symposium in 1998. Instead of using transformality [setting the product dimensions input through a numerically controlled (NC) machine as the signal and the corresponding dimensions after machining as the measurement characteristic], they measured input and output utilizing energy and work, taking advantage of the concept of a generic function.
In 1959, the following experiment was conducted at the Hamamatsu Plant of
the National Railway (now known as JR). In this experiment, various types of cut-
ting tools ( JIS, SWC, and new SWC cutting tools) and cutting conditions were
assigned to an L27 orthogonal array to measure the net cutting power needed for
a certain amount of cut. The experiment, based on 27 different combinations,
revealed that the maximum power needed was several times as large as the mini-
mum. This finding implies that excessive power may cause a rough surface or
variability in dimensions. In contrast, cutting with quite a small amount of power
means that we can cut sharply. Consequently, when we can cut smoothly and use power effectively, material surfaces can be machined flat and variability in dimensions can be reduced. In the Quality Engineering Symposium in June 1998, IHI released
Table 4.11
Cutting experiment
T1 T2 T3
N1 M: amount of cut M11 M12 M13
y: electricity consumption y11 y12 y13
N2 M: amount of cut M21 M22 M23
y: electricity consumption y21 y22 y23
we compute the total variation of output, ST, which is the sum of squares of the square roots of each cumulative electricity consumption:

ST = (√y11)² + (√y12)² + ⋯ + (√y23)² = y11 + y12 + ⋯ + y23    (f = 6)    (4.105)

This is equivalent to the total sum of electricity consumption. Subsequently, the total consumed energy ST can be decomposed as

ST = (electricity consumption used for cutting)
     + (electricity consumption used for loss or variability in cutting)    (4.106)
While an ideal relationship is defined by energy or work, we use the square
root of both sides of the relationship in analyzing the SN ratio. After we take the
square root of both sides, the new relationship is called a generic function.
We can compute the power used for cutting as the variation of the proportional term:

Sβ = (L1 + L2)²/(r1 + r2)    (4.107)

Now L indicates a linear equation using square roots and is calculated as

L1 = √M11 √y11 + √M12 √y12 + √M13 √y13
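Equations (4.105) to (4.107) can be sketched with hypothetical cumulative data. The M and y values below are invented, and the squared numerator in the Sβ line follows the form of equation (4.94), so treat both as assumptions.

```python
import math

# Hypothetical cumulative amount of cut M and electricity consumption y
# at times T1..T3 for the two noise levels N1, N2.
M = [[1.0, 2.1, 3.2], [0.9, 1.9, 3.0]]
y = [[2.1, 4.1, 6.5], [2.0, 4.0, 6.3]]

ST = sum(v for row in y for v in row)        # (4.105): sum of (sqrt(y))^2
L = [sum(math.sqrt(m) * math.sqrt(v) for m, v in zip(Mr, yr))
     for Mr, yr in zip(M, y)]                # linear equations in square roots
r = [sum(Mr) for Mr in M]                    # effective dividers r1, r2
S_beta = (L[0] + L[1]) ** 2 / (r[0] + r[1])  # (4.107): power used for cutting
S_loss = ST - S_beta                         # loss/variability term of (4.106)
```

Because the y rows here are nearly proportional to the M rows, almost all of ST is absorbed by Sβ and only a small residual is left as loss, which is the smooth-cutting situation the text describes.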
Function of an Engine

The generic function of an engine is a chemical reaction. Our interest in oxygen aspirated into an engine is in how it is distributed in the exhaust.

❏ Insufficient reaction. We set the fraction of oxygen contained in CO2 and CO to p.

❏ Sufficient reaction. We set the fraction of oxygen contained in CO2 to q.

❏ Side reaction (e.g., of NOx). We set the fraction of oxygen contained in side-reacted substances to 1 − p − q.
Based on this definition, ideally the chemical reaction conforms to the following exponential equations:

p = e^(−β1T)    (4.111)

p + q = e^(−β2T)    (4.112)

Here T represents the time needed for one cycle of the engine, and p and q are measurements in the exhaust. The total reaction rate β1 and the side reaction rate β2 are calculated as

β1 = (1/T) ln(1/p)    (4.113)

β2 = (1/T) ln[1/(p + q)]    (4.114)
It is desirable that β1 be large and β2 be close to zero. Therefore, we compute the SN ratio as

η = 10 log (β1²/β2²)    (4.115)

The sensitivity is

S = 10 log β1²    (4.116)

The larger the sensitivity, the more the engine’s output power increases; and the larger the SN ratio becomes, the less effect the side reaction has.

Cycle time T should be tuned such that the benefit achieved by the magnitude of the total reaction rate and the loss due to the magnitude of the side reaction are balanced optimally.
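The rate extraction in equations (4.113) to (4.116) is a one-liner per quantity. In the sketch below, the exhaust fractions are constructed from known rates so the round trip can be checked; the function name and sample values are ours.

```python
import math

def reaction_rates(p, q, T):
    """Total and side reaction rates from exhaust oxygen fractions
    p and p + q over one cycle time T, per (4.113)-(4.116)."""
    beta1 = math.log(1 / p) / T          # total reaction rate (4.113)
    beta2 = math.log(1 / (p + q)) / T    # side reaction rate (4.114)
    eta = 10 * math.log10(beta1 ** 2 / beta2 ** 2)  # SN ratio (4.115)
    S = 10 * math.log10(beta1 ** 2)                 # sensitivity (4.116)
    return beta1, beta2, eta, S
```

For example, p = e^(−2) and p + q = e^(−0.5) with T = 1 give β1 = 2, β2 = 0.5, and η = 10 log 16 ≈ 12.04 dB.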
Selecting noise N with two levels (such as one for the starting point of the engine and the other for 10 minutes after starting), we obtain the data shown in Table 4.12. According to the table, we calculate the reaction rates for N1 and N2. For an insufficient reaction,

β11 = (1/T) ln(1/p1)    (4.117)

β12 = (1/T) ln(1/p2)    (4.118)
Table 4.12
Function of engine based on chemical reaction
N1 N2
Insufficient reaction p p1 p2
Objective reaction q q1 q2
Total p1 ⫹ q1 p2 ⫹ q2
β21 = (1/T) ln[1/(p1 + q1)]    (4.119)

β22 = (1/T) ln[1/(p2 + q2)]    (4.120)
Since the total reaction rates β11 and β12 are larger-the-better, their SN ratio is as follows:

η1 = −10 log (1/2)(1/β11² + 1/β12²)    (4.121)

Since the side reaction rates β21 and β22 are smaller-the-better, their SN ratio is

η2 = −10 log (1/2)(β21² + β22²)    (4.122)

η = η1 + η2    (4.123)

S = η1    (4.124)
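With two noise levels, the two SN ratios combine additively per equation (4.123). In this sketch, the smaller-the-better line labeled (4.122) is the standard form we assume for the equation the extraction dropped, and the sample rates are invented.

```python
import math

def engine_sn(b11, b12, b21, b22):
    """Combine larger-the-better (total rates) and smaller-the-better
    (side rates) SN ratios, per equations (4.121)-(4.124)."""
    eta1 = -10 * math.log10(0.5 * (1 / b11 ** 2 + 1 / b12 ** 2))  # (4.121)
    eta2 = -10 * math.log10(0.5 * (b21 ** 2 + b22 ** 2))          # assumed (4.122)
    return eta1 + eta2, eta1       # overall SN (4.123), sensitivity (4.124)
```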
General Chemical Reactions

If side reactions barely occur and we cannot trust their measurements in a chemical reaction experiment, we can separate the point of time for measuring the total reaction rate (1 − p) from the point of time for measuring the side reaction rate (1 − p − q). For example, we can set T1 to 1 minute and T2 to 30 minutes, as illustrated in Table 4.13. When T1 = T2, we can use the procedure described in the preceding section.
Table 4.13
Experimental data
T1 T2
Insufficient reaction p p11 p12
Objective reaction q q11 q12
Total p11 ⫹ q11 p12 ⫹ q12
If the table shows that 1 − p11 is around 50% and 1 − (p12 + q12) ranges at least from 10 to 50%, this experiment is regarded as good enough. p11 and (p12 + q12) are used for calculating the total reaction rate and the side reaction rate, respectively:

β1 = (1/T1) ln(1/p11)    (4.125)

β2 = (1/T2) ln[1/(p12 + q12)]    (4.126)
Using the equations above, we can compute the SN ratio and sensitivity as

η = 10 log (β1²/β2²)    (4.127)
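When the two rates are read at different times T1 and T2, equations (4.125) to (4.127) become the short computation below; the sample fractions are invented, chosen to fall inside the ranges the text calls good enough.

```python
import math

T1, T2 = 1.0, 30.0     # minutes, as in the text's example
p11 = 0.5              # 1 - p11 = 50% converted at T1
p12_q12 = 0.8          # 1 - (p12 + q12) = 20% side-reacted at T2

beta1 = math.log(1 / p11) / T1        # total reaction rate (4.125)
beta2 = math.log(1 / p12_q12) / T2    # side reaction rate (4.126)
eta = 10 * math.log10(beta1 ** 2 / beta2 ** 2)  # SN ratio (4.127)
```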
Evaluation of Images

An image represents a picture, such as a landscape, transformed precisely onto each pixel of a flat surface. Sometimes an image of a human face is whitened compared to the actual one; however, quality engineering does not deal with this type of case because it is an issue of product planning and tuning.
In a conventional research method, we have often studied a pattern of three
primary colors (including a case of decomposing a gray color into three primary
colors, and mixing up three primary colors into a gray color) as a test pattern.
When we make an image using a pattern of three primary colors, a density curve
(a common logarithm of a reciprocal of permeability or reflection coefficient)
varies in accordance with various conditions (control factors). By taking measure-
ments from this type of density curve, we have studied an image. Although creating
a density curve is regarded as reasonable, we have also measured Dmax, Dmin, and
gamma from the curve. In addition to them (e.g., in television), resolution and
image distortion have been used as measurements.
Because the consumers’ demand is to cover a minimum density difference of
three primary colors (according to a filmmaker, this is the density range 0 to
10,000) that can be recognized by a human eye’s photoreceptor cell, the resolving
power should cover up to the size of the light-sensitive cell. However, quality en-
gineering does not pursue such technical limitations but focuses on improving
imaging technology. That is, its objective is to offer evaluation methods to improve
both quality and productivity.
For example, quality engineering recommends the following procedure for tak-
ing measurements:
1. Condition M1. Create an image of a test pattern using luminosity 10 times as
high and exposure time one-tenth as high as their current levels.
2. Condition M2. Create an image of a test pattern using as much luminosity
and exposure time as their current levels.
3. Condition M3. Create an image of a test pattern using luminosity one-tenth
as high and exposure time 10 times as high as their current levels.
At the three sensitivity curves, we select luminosities corresponding to a per-
meability or reflection coefficient of 0.5. For density, it is 0.301. For a more prac-
tical experiment, we sometimes select seven levels for luminosity, such as 1000,
100, 10, 1, 1/10, 1/100, and 1/1000 times as much as current levels. At the same
time, we choose exposure time inversely proportional to each of them.
After reading the logarithm of exposure time E at the value 0.301, the SN ratio and sensitivity are calculated for analysis. Next, we set the readings of exposure time E (multiplied by a certain decimal value) for each of the seven levels of the signal M (logarithm of luminosity) to y1, y2, ..., y7.
In this case, a small difference between the two sensitivity curves for N1 and N2 represents better performance. A good way to design such an experiment is to compound all noise levels into only two levels: no matter how many noise levels we have, we should compound them into two.
Since we have three levels, K1, K2, and K3, for the signal of the three primary colors, two levels for noise, and seven levels for the output signal, the total number of data is (7)(3)(2) = 42. For each of the three primary colors, we calculate the SN ratio and sensitivity. Here we show only the calculation for K1. Based on Table 4.14,
Table 4.14
Experimental data

              M1    M2    ...   M7    Linear Equation
K1    N1      y11   y12   ...   y17   L1
      N2      y21   y22   ...   y27   L2
K2    N1      y31   y32   ...   y37   L3
      N2      y41   y42   ...   y47   L4
K3    N1      y51   y52   ...   y57   L5
      N2      y61   y62   ...   y67   L6
we proceed with the calculation. Taking into account that M has seven levels, we view M4 = 0 as the standard point and subtract the value at M4 from those at M1, M2, M3, M5, M6, and M7; the differences are set to y1, y2, y3, y5, y6, and y7. By subtracting the reading y for M4 = 0 in the case of K1, we obtain the following linear equations:
L1 = M1y11 + M2y12 + ⋯ + M7y17
L2 = M1y21 + M2y22 + ⋯ + M7y27

where r = M1² + M2² + ⋯ + M7² is the effective divider. Then

Sβ = (L1 + L2)² / 2r        (4.132)

SN×β = (L1 − L2)² / 2r        (4.133)

ST = y11² + y12² + ⋯ + y27²        (4.134)

Ve = (1/12)(ST − Sβ − SN×β)        (4.135)

VN = (1/13)(ST − Sβ)        (4.136)

η1 = 10 log {[(1/2r)(Sβ − Ve)] / VN}        (4.137)

S1 = 10 log (1/2r)(Sβ − Ve)        (4.138)
As for K2 and K3, we calculate η2, S2, η3, and S3. The total SN ratio is computed as the sum of η1, η2, and η3. To balance densities ranging from low to high for K1, K2, and K3, we should equalize the sensitivities of the three primary colors, S1, S2, and S3, by solving simultaneous equations based on two control factors such that

S1 = S2 = S3        (4.139)
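The computation in equations (4.132) to (4.138) can be sketched in a few lines of Python. This is a minimal sketch, not a definitive implementation: the function name is ours, and it assumes the signal values M are taken relative to M4 = 0 so that r is simply the sum of squared signal values.

```python
import math

def sn_and_sensitivity(M, y1, y2):
    """SN ratio (dB) and sensitivity for one color K with two noise levels.

    M      : signal values (log luminosity, taken relative to M4 = 0)
    y1, y2 : outputs under noise conditions N1 and N2 (same length as M)
    """
    r = sum(m * m for m in M)                  # effective divider
    L1 = sum(m * y for m, y in zip(M, y1))     # linear equations
    L2 = sum(m * y for m, y in zip(M, y2))
    S_beta = (L1 + L2) ** 2 / (2 * r)          # eq. (4.132)
    S_Nbeta = (L1 - L2) ** 2 / (2 * r)         # eq. (4.133)
    S_T = sum(y * y for y in y1 + y2)          # eq. (4.134)
    Ve = (S_T - S_beta - S_Nbeta) / 12         # eq. (4.135)
    VN = (S_T - S_beta) / 13                   # eq. (4.136)
    eta = 10 * math.log10((S_beta - Ve) / (2 * r) / VN)  # eq. (4.137)
    S = 10 * math.log10((S_beta - Ve) / (2 * r))         # eq. (4.138)
    return eta, S
```

With nearly ideal data (output proportional to signal, small noise), the sensitivity recovers the slope in decibels and the SN ratio is large.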
The procedure discussed thus far is the latest method in quality engineering; previously, we used a method of calculating density against the logarithm of luminosity, log E. The weakness of the latter method is that the density range of the output differs at each experiment. That is, we should not conduct an experiment using the same luminosity range for films of ASA 100 and ASA 200; for the latter, we should use half the luminosity needed for the former. We have shown the procedure for doing so.
Quality engineering is a method of evaluating functionality with a single index, η. Selection of the means for improvement is left to specialized technologies and specialists. Top management has the authority over investment and personnel affairs. The result of their performance is evaluated based on a balance sheet that
cannot be manipulated. Similarly, although hardware and software can be designed
by specialists, those specialists cannot evaluate product performance at their own
discretion.
Functionality Such as Granulation or Polymerization Distribution

If you wish to limit granulation distribution within a certain range, you must classify granules below the lower limit as excessively granulated, ones around the center as targeted, and ones above the upper limit as insufficiently granulated, and proceed with analysis by following the procedure discussed earlier for chemical reactions. For polymerization distribution, you can use the same technique.
Separation System

Consider a process of extracting metallic copper from the copper sulfide contained in ore. When various substances are mixed with the ore and the temperature of the furnace is raised, deoxidized copper melts, flows out of the furnace, and solidifies in a mold. This is called a crude copper ingot. Since we expect to extract 100% of the copper contained in the ore and convert it into crude copper (the product), this ratio of extraction is termed the yield. The percentage of copper remaining in the furnace slag is regarded as the loss ratio p; if p = 0, the yield is 100%. Because in this case we wish to observe the ratio at a single point in time during the reaction, calibration will be complicated.
On the other hand, as the term crude copper implies, a considerable amount (approximately 1 to 2% in most cases) of ingredients other than copper remains. Therefore, we wish to bring the ratio of impurity contained in the copper ingot, q*, close to zero. Now, setting the mass of crude copper to A (kilograms), the mass of slag to B (kilograms) (this value may not be very accurate), the ratio of impurity in crude copper to q*, and the ratio of copper in slag to p*, we obtain Table 4.15 for input/output. From it we calculate the following two error ratios, p and q:

p = Bp* / [A(1 − q*) + Bp*]        (4.140)

q = Aq* / [Aq* + B(1 − p*)]        (4.141)
The error ratio p represents the ratio of copper molecules originally contained in the ore but mistakenly left in the slag after smelting; 1 − p is called the yield, and subtracting the yield from 1 gives the error ratio p. The error ratio q indicates the ratio of all noncopper molecules originally charged into the furnace but mistakenly included in the product, crude copper. Both ratios are calculated as mass ratios. Even if the yield 1 − p is large enough, if the error
Table 4.15
Input/output for copper smelting

              Output
Input         Product       Slag          Total
Copper        A(1 − q*)     Bp*           A(1 − q*) + Bp*
Noncopper     Aq*           B(1 − p*)     Aq* + B(1 − p*)
Total         A             B             A + B
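The error ratios of equations (4.140) and (4.141) are simple mass-balance computations over Table 4.15. A minimal sketch (function and variable names are ours):

```python
def error_ratios(A, B, p_star, q_star):
    """Error ratios p and q from the Table 4.15 quantities.

    A      : mass of crude copper (kg)
    B      : mass of slag (kg)
    p_star : ratio of copper in the slag
    q_star : ratio of impurity in the crude copper
    """
    copper_in_product = A * (1 - q_star)
    copper_in_slag = B * p_star
    noncopper_in_product = A * q_star
    noncopper_in_slag = B * (1 - p_star)
    # eq. (4.140): copper lost to slag / all copper charged
    p = copper_in_slag / (copper_in_product + copper_in_slag)
    # eq. (4.141): noncopper in product / all noncopper charged
    q = noncopper_in_product / (noncopper_in_product + noncopper_in_slag)
    return p, q
```

For example, A = 100 kg of crude copper with 1% impurity and B = 50 kg of slag with 2% copper gives p = 0.01 and q = 0.02.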
ratio q is also large, this smelting is considered inappropriate. After computing the
two error ratios p and q, we prepare Table 4.16.
Consider the two error ratios p and q in Table 4.15. If copper is supposed to melt well and move easily into the product when the temperature in the furnace is increased, the error ratio p decreases. However, since ingredients other than copper also melt well, the error ratio q rises. A factor that can decrease p while increasing q is regarded as an adequate variable for tuning. Although most factors have such characteristics to some degree, we consider it real technology to reduce both p and q rather than to obtain effects by tuning. In short, this is smelting technology with high functionality.
To find factor levels that reduce both p and q for a variety of factors, we should evaluate functionality after making an adjustment so as not to change the ratio of p to q. The SN ratio so obtained is called the standard SN ratio. Since the gain in SN ratio accords with that obtained when p = q, no matter how great the ratio of p to q, in most cases we calculate the SN ratio after p is adjusted to equal q. First, we compute p0, the common value when we set p = q = p0:

p0 = 1 / {1 + √[(1/p − 1)(1/q − 1)]}        (4.142)

Second, we calculate the standard SN ratio as

η = 10 log [(1 − 2p0)² / 4p0(1 − p0)]        (4.143)
The details are given in Reference 5.
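Equations (4.142) and (4.143) can be computed directly. A small sketch (the function name is ours):

```python
import math

def standard_sn_ratio(p, q):
    """Standard SN ratio (dB) after adjusting to p = q = p0.

    Implements eq. (4.142) for p0 and eq. (4.143) for eta.
    """
    p0 = 1 / (1 + math.sqrt((1 / p - 1) * (1 / q - 1)))        # eq. (4.142)
    eta = 10 * math.log10((1 - 2 * p0) ** 2 / (4 * p0 * (1 - p0)))  # eq. (4.143)
    return p0, eta
```

When p = q already, p0 equals that common value and no adjustment is needed; for p = q = 0.1 the standard SN ratio is about 2.5 dB.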
Once the standard SN ratio is computed, we determine an optimal condition
according to the average SN ratios for each control factor level, estimate SN ratios
Table 4.16
Input/output expressed by error ratios

              Output
Input         Product       Slag          Total
Copper        1 − p         p             1
Noncopper     q             1 − q         1
Total         1 − p + q     1 + p − q     2
for optimal and initial conditions, and calculate gains. After this, we conduct a
confirmatory experiment for the two conditions, compute SN ratios, and compare
estimated gains with those obtained from this experiment.
Based on the experiment under optimal conditions, we calculate p and q. If the sum of the losses for p and q is not minimized, tuning is done using factors (such as the temperature in the furnace) that influence sensitivity but do not affect the SN ratio. In tuning, we should change the adjusting factor level gradually, and this adjustment should be made only after the optimal SN ratio condition is determined.
The procedure detailed here is even applicable to the removal of harmful el-
ements or the segregation of garbage.
❒ Example
First, we consider a case of evaluating the main effects on a cancer cell and the
side effects on a normal cell. Although our example is an anticancer drug, this
analytic procedure holds true for thermotherapy and radiation therapy using ultrasonic or electromagnetic waves. If possible, by taking advantage of an L18 orthogonal array, we should study the drug and such therapies at the same time.
Quality engineering recommends that we experiment on cells or animals. We
need to alter the density of the drug to be assessed by h milligrams (e.g., 1 mg) per unit time (e.g., 1 minute). Next, we select one cancer cell and one normal cell and
designate them M1 and M2, respectively. In quality engineering, M is called a signal
factor. Suppose that M1 and M2 are placed in a certain solution (e.g., water or a
salt solution), the density is increased by 1 mg/minute, and the length of time that
each cell survives is measured. Imagine that the cancer and normal cells die at the
eighth and fourteenth minutes, respectively. We express these data as shown in
Table 4.17, where 1 indicates ‘‘alive’’ and 0 indicates ‘‘dead.’’ In addition, M1 and
Table 4.17
Data for a single cell
T1 T2 T3 T4 T5 T6 T7 T8 T9 T10 T11 T12 T13 T14 T15
M1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0
M2 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0
M2 are cancer and normal cells, and T1, T2, ... indicate the elapsed time: 1, 2, ... minutes.
For digital data regarding a dead-or-alive (or cured-or-not cured) state, we cal-
culate LD50 (lethal dose 50). In this case, the LD50 value of M1 is 7.5 because a
cell dies between T7 and T8, whereas that of M2 is 13.5. LD50 represents the quan-
tity of drug needed until 50% of cells die.
In quality engineering, both M and T are signal factors. Although both signal and noise factors are variables in use, a signal factor is a factor whose effect should exist, whereas a noise factor is one whose effect should be minimized. That is, if there were no difference between M1 and M2, or between the different amounts of drug (or, in this case, of time), this experiment would fail. We therefore regard both M and T as signal factors. Thus, we set X1 and X2 to the LD50 values of the cancer cell M1 and the normal cell M2, respectively. In this case, the smaller X1 becomes, the better the drug's performance; in contrast, X2 should be greater. Quality engineering terms the former the smaller-the-better characteristic and the latter the larger-the-better characteristic. The following equation calculates the SN ratio:

η = 20 log (X2 / X1)        (4.146)
Table 4.18
Dead-or-alive data
T1 T2 T3 T4 T5 T6 T7 T8 T9 T10 T11 T12 T13 T14 T15 T16 T17 T18 T19 T20
M1 N1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
N2 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
N3 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0
M2 N1′ 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0
N2′ 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0
N3′ 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0
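The LD50 readings are extracted mechanically from rows like those of Table 4.18: the cell dies between the last 1 and the first 0, so LD50 is taken as the midpoint. A minimal sketch, assuming each series is 1 (alive) up to some time unit and 0 afterward (the function name is ours):

```python
def ld50(survival):
    """LD50 from a 1/0 (alive/dead) series observed at unit time steps.

    A series that is alive through T5 and dead from T6 onward,
    [1,1,1,1,1,0,...], yields 5.5.
    """
    # last time unit (1-based) at which the cell is still alive
    last_alive = max(i + 1 for i, v in enumerate(survival) if v == 1)
    return last_alive + 0.5
```

Applying this to the M1/N1 and M2/N3′ rows of Table 4.18 reproduces the 5.5 and 19.5 entries of Table 4.19.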
Table 4.19
LD50 data

        N1      N2      N3
M1      5.5     3.5    11.5
M2     14.5     8.5    19.5
Since the data for M1 should have a small standard deviation as well as a small average, we calculate the average of the sum of the squared data and multiply its logarithm by −10. This is termed the smaller-the-better SN ratio:

η1 = −10 log (1/3)(5.5² + 3.5² + 11.5²) = −17.65 dB        (4.147)

On the other hand, because the LD50 value of the normal cell M2 should be as large as possible, we compute the average of the sum of the reciprocal squared data and multiply its logarithm by −10. This is the larger-the-better SN ratio:

η2 = −10 log (1/3)(1/14.5² + 1/8.5² + 1/19.5²) = 21.50 dB        (4.148)

η = η1 + η2 = −17.65 + 21.50 = 3.85 dB        (4.149)
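Equations (4.147) to (4.149) can be sketched directly in Python (the helper names are ours):

```python
import math

def smaller_the_better(data):
    """Smaller-the-better SN ratio (dB), as in eq. (4.147)."""
    return -10 * math.log10(sum(y * y for y in data) / len(data))

def larger_the_better(data):
    """Larger-the-better SN ratio (dB), as in eq. (4.148)."""
    return -10 * math.log10(sum(1 / (y * y) for y in data) / len(data))

eta1 = smaller_the_better([5.5, 3.5, 11.5])    # main effect on the cancer cell
eta2 = larger_the_better([14.5, 8.5, 19.5])    # side effect on the normal cell
eta = eta1 + eta2                               # total SN ratio, eq. (4.149)
```

Running this with the Table 4.19 data reproduces −17.65 dB, 21.50 dB, and 3.85 dB.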
For example, given two drugs, A1 and A2, we obtain the experimental data shown in Table 4.20. If we compare the drugs, N1, N2, and N3 for A1 should be consistent with N1′, N2′, and N3′ for A2. Now suppose that the data of A1 are the same as those in Table 4.19 and that a different drug, A2, has the LD50 data for M1 and M2 illustrated in Table 4.20. A1's SN ratio is given by equation (4.149). To compute A2's SN ratio, we calculate the SN ratio for the main effect, η1, as follows:

Table 4.20
Data for comparison experiment

           N1      N2       N3
A1  M1     5.5     3.5     11.5
    M2    14.5     8.5     19.5
A2  M1    18.5    11.5     20.5
    M2    89.5    40.5    103.5
η1 = −10 log (1/3)(18.5² + 11.5² + 20.5²) = −24.75 dB        (4.150)

Next, as a larger-the-better SN ratio, we calculate the SN ratio for the side effects, η2:
η2 = −10 log (1/3)(1/89.5² + 1/40.5² + 1/103.5²) = 35.59 dB        (4.151)
Therefore, we obtain the total SN ratio, combining the main and side effects, by adding them up:

η = −24.75 + 35.59 = 10.84 dB        (4.152)
Finally, we tabulate the results of the comparison of A1 and A2 in Table 4.21; this is called a benchmarking test. What we find from Table 4.21 is that the side-effect SN ratio of A2 is larger than A1's by 14.09 dB, or 25.6 times, whereas the main-effect SN ratio of A2 is smaller than A1's by 7.10 dB, or 1/5.1 times. On balance, A2's effect is larger than A1's by 6.99 dB, or 5 times. This reveals that if we increase the amount of A2, we can improve both the main and side effects by 3.49 dB each compared to A1: the main effect is enhanced 2.3 times and the side effect is reduced to 1/2.3. By checking the SN ratio using the operating window method, we can know whether the main and side effects are improved at the same time.
Table 4.21
Comparison of drugs A1 and A2
Main Effect Side Effect Total
A1 ⫺17.65 21.50 3.85
A2 ⫺24.75 35.59 10.84
Gain ⫺7.10 14.09 6.99
In the case of thermotherapy, the standard point is 36°C. If the cancer cell dies at 43°C, the LD50 value is 6.5°C, the difference from the standard point of 36°C. If the normal cell dies at 47°C, the LD50 value is 10.5°C. When an ultrasonic or electromagnetic wave is used in place of temperature, we need to select the wave's power (e.g., raise the power by 1 W/minute).
4.6. Software Testing

Two Types of Signal Factor and Software

Quality engineering supposes that all conditions of use necessarily belong to either a signal or a noise factor. Conditions are classified by whether the user employs them actively or passively; in either case, the effects of a signal factor are those that should exist. When we design software (a software product) for use on a system with computers, the software's inputs are considered a user's active signal factors. On the other hand, when we conduct inspection, diagnosis, or prediction using research data (including various sensing data), the entire group of data is regarded as consisting of passive signal factors. Signal factors should not only have a common function under various conditions of use but also have small errors.
In this section we explain how to use the SN ratio in taking measures against
bugs when we conduct a functional test on software that a user uses actively. Soft-
ware products have a number of active signal factors and consist of various levels
of signal factors. In this section we discuss ways to measure and analyze data to
find bugs in software.
Layout of Signal Factors in an Orthogonal Array

In testing software, there is a multiple-step process, that is, a number of signals. In quality engineering, we do not critique software design per se but discuss measures that enable us to find bugs in designed software. We propose a procedure for checking whether software contains bugs and for locating (diagnosing) them.
As we discussed before, the number of software signal factors is equivalent to the number of steps involved. In actuality, the number of signal factors is tremendous, and the number of levels differs completely from factor to factor. With software, we should consider how large an orthogonal array can be used, even when signal factors need to be tested at every step or data can be measured at a certain step. Using an L36 orthogonal array repeatedly is one practical method, as shown next.
❏ Procedure 1. We set the number of multilevel signal factors to k. If k is large, we select up to 11 signal factors of two levels and up to 12 factors of three levels. Therefore, if k ≤ 23, we should use an L36 orthogonal array. If k is more than 23, we should use a larger orthogonal array (e.g., if 24 ≤ k ≤ 59, L108 is to be used) or use an L36 orthogonal array repeatedly to allocate all factors. We should lay out all signal factors in the array after reducing the number of levels of each factor to two or three.
❏ Procedure 2. We conduct a test at the 36 conditions laid out in the orthogonal array. If we obtain an acceptable result for an experiment in the orthogonal array, we record a 0; if we do not, we record a 1. Basically, we judge 0 or 1 by looking at the output at the final step. Once all data (all 36 experiments in the case of L36) are zero, the test is complete. If even one datum of 1 remains among all the experimental combinations, the software is considered to have bugs.
❏ Procedure 3. As long as there are bugs, we need to improve the software design. To find out at which step a problem occurs when there are a number of bugs, we can measure intermediate data. When there are a small number of bugs, we analyze interactions, as discussed in the next subsection.
Software Diagnosis Using Interaction

For the sake of convenience, we explain the procedure using an L36 orthogonal array; the same holds true for other types of arrays. Following procedure 1 in the preceding section, we allocate 11 two-level factors and 12 three-level factors to an L36 orthogonal array, as shown in Table 4.22. Although single-level factors are not assigned to the orthogonal array, they must still be tested.
We denote the two-level signal factors by A, B, ..., K and the three-level signal factors by L, M, ..., W. Using the combinations illustrated in Table 4.22, we conduct a test and record whether or not the software functions properly for all 36 combinations. When we measure the output data for tickets and change in a ticket-selling system, we set 0 if both outputs are correct and 1 otherwise. In addition, we need to calculate an interaction for each case of 0 and 1.
Now suppose that the measurements taken are those shown in the Data column of Table 4.22. In analyzing the data, we sum the data for all pairs of factors from A to W. The total number of pairs is 253, starting with AB and ending with VW. Since we do not have enough space to show all combinations, we show two-way tables for only six pairs (AB, AC, AL, AM, LM, and LW) in Table 4.23. For practical use, we can use a computer to create two-way tables for all combinations.

Each two-way table comprises numbers representing the sum of the 0/1 data for four conditions; for AB these are A1B1 (Nos. 1–9), A1B2 (Nos. 10–18), A2B1 (Nos. 19–27), and A2B2 (Nos. 28–36). Although there are 253 two-way tables in all, we illustrate only six. After creating two-way tables for all possible pairs, we need to consider the results.
Based on Table 4.23, we calculate combined effects. We need to examine the two-way tables for combinations whose error ratio is 100%, because if an error exists in the software, it leads to a 100% error. In this case, using a computer, we should output tables only for L2 and L3 with A2, M3 with A2, M2 with L3, W2 and W3 with L2, and W2 and W3 with L3. Of the 253 two-way tables, we output only the 100% error tables, so we do not need to output those for AB and AC.

We need to enumerate not just the six two-way tables shown but all 100% error tables from all 253 possible tables, because bugs in software are caused primarily by combinations of signal factor effects. It is the software designers' job to correct the errors.
Table 4.22
Layout and data of L36 orthogonal array
A B C D E F G H I J K L M N O P Q R S T U V W
No. 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 Data
1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0
2 1 1 1 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 2 2 1
3 1 1 1 1 1 1 1 1 1 1 1 3 3 3 3 3 3 3 3 3 3 3 3 1
4 1 1 1 1 1 2 2 2 2 2 2 1 1 1 1 2 2 2 2 3 3 3 3 0
5 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 3 3 3 3 1 1 1 1 0
6 1 1 1 1 1 2 2 2 2 2 2 3 3 3 3 1 1 1 1 2 2 2 2 1
7 1 1 2 2 2 1 1 1 2 2 2 1 1 2 3 1 2 3 3 1 2 2 3 0
8 1 1 2 2 2 1 1 1 2 2 2 2 2 3 1 2 3 1 1 2 3 3 1 0
9 1 1 2 2 2 1 1 1 2 2 2 3 3 1 2 3 1 2 2 3 1 1 2 1
10 1 2 1 2 2 1 2 2 1 1 2 1 1 3 2 1 3 2 3 2 1 3 2 0
11 1 2 1 2 2 1 2 2 1 1 2 2 2 1 3 2 1 3 1 3 2 1 3 1
12 1 2 1 2 2 1 2 2 1 1 2 3 3 2 1 3 2 1 2 1 3 2 1 0
13 1 2 2 1 2 2 1 2 1 2 1 1 2 3 1 3 2 1 3 3 2 1 2 0
14 1 2 2 1 2 2 1 2 1 2 1 2 3 1 2 1 3 2 1 1 3 2 3 1
15 1 2 2 1 2 2 1 2 1 2 1 3 1 2 3 2 1 3 2 2 1 3 1 0
16 1 2 2 2 1 2 2 1 2 1 1 1 2 3 2 1 1 3 2 3 3 2 1 0
17 1 2 2 2 1 2 2 1 2 1 1 2 3 1 3 2 2 1 3 1 1 3 2 1
18 1 2 2 2 1 2 2 1 2 1 1 3 1 2 1 3 3 2 1 2 2 1 3 1
19 2 1 2 2 1 1 2 2 1 2 1 1 2 1 3 3 3 1 2 2 1 2 3 0
20 2 1 2 2 1 1 2 2 1 2 1 2 3 2 1 1 1 2 3 3 2 3 1 1
21 2 1 2 2 1 1 2 2 1 2 1 3 1 3 2 2 2 3 1 1 3 1 2 1
22 2 1 2 1 2 2 2 1 1 1 2 1 2 2 3 3 1 2 1 1 3 3 2 0
23 2 1 2 1 2 2 2 1 1 1 2 2 3 3 1 1 2 3 2 2 1 1 3 1
24 2 1 2 1 2 2 2 1 1 1 2 3 1 1 2 2 3 1 3 3 2 2 1 1
25 2 1 1 2 2 2 1 2 2 1 1 1 3 2 1 2 3 3 1 3 1 2 2 0
26 2 1 1 2 2 2 1 2 2 1 1 2 1 3 2 3 1 1 2 1 2 3 3 1
27 2 1 1 2 2 2 1 2 2 1 1 3 2 1 3 1 2 2 3 2 3 1 1 1
28 2 2 2 1 1 1 1 2 2 1 2 2 1 3 3 3 2 2 1 3 1 2 1 1
29 2 2 2 1 1 1 1 2 2 1 2 2 1 3 3 3 2 2 1 3 1 2 1 1
30 2 2 2 1 1 1 1 2 2 1 2 3 2 1 1 1 3 3 2 1 2 3 2 1
31 2 2 1 2 1 2 1 1 1 2 2 1 3 3 3 2 3 2 2 1 2 1 1 0
32 2 2 1 2 1 2 1 1 1 2 2 2 1 1 1 3 1 3 3 2 3 2 2 1
33 2 2 1 2 1 2 1 1 1 2 2 3 2 2 2 1 2 1 1 3 1 3 3 1
34 2 2 1 1 2 1 2 1 2 2 1 1 3 1 2 3 2 3 1 2 2 3 1 1
35 2 2 1 1 2 1 2 1 2 2 1 2 1 2 3 1 3 1 2 3 3 1 2 1
36 2 2 1 1 2 1 2 1 2 2 1 3 2 3 1 2 1 2 3 1 1 2 3 1
Table 4.23
Supplemental tables
(1) AB two-way table

         B1     B2     Total
A1        4      4       8
A2        6      8      14
Total    10     12      22
Once designers fix the 100% error combinations, they perform the test again on the L36 orthogonal array. If 100% error combinations remain, they correct them again. Although this procedure sounds imperfect, it quite often streamlines the debugging task many times over.
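The exhaustive two-way tabulation described above is easy to automate. In this sketch (names ours), every pair of factor columns is scanned, failures are accumulated per level pair, and the cells in which every run failed (100% error) are reported.

```python
from itertools import combinations

def two_way_tables(levels, data):
    """Flag 100% error cells across all factor-pair two-way tables.

    levels : list of per-run factor-level tuples (one entry per factor)
    data   : 0/1 result per run (1 = failure)
    Returns tuples (factor_i, level_i, factor_j, level_j) for cells
    in which every run failed.
    """
    k = len(levels[0])
    suspects = []
    for i, j in combinations(range(k), 2):
        cells = {}                      # (level_i, level_j) -> [failures, runs]
        for row, d in zip(levels, data):
            cell = cells.setdefault((row[i], row[j]), [0, 0])
            cell[0] += d
            cell[1] += 1
        for (li, lj), (fail, total) in cells.items():
            if fail == total:           # 100% error combination
                suspects.append((i, li, j, lj))
    return suspects
```

For an L36 layout, `levels` would hold the 36 rows of Table 4.22 and `data` the 0/1 results; the returned pairs are the combinations the designers must inspect.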
System Decomposition

When, following the method based on Table 4.22, we find it extremely difficult to seek root causes because of the sheer number of bugs, it is more effective for debugging to break the signal factors into small groups instead of selecting all of them at once. Some ways to do so are described below.
1. After splitting signal factors into two groups, we lay out factors in each of
them to an L18 orthogonal array. Once we correct all bugs, we repeat a test
based on an L36 orthogonal array containing all signal factors. We can reduce
bugs drastically by two L18 tests, thereby simplifying the process of seeking
root causes in an L36 array with few bugs.
2. When we are faced with difficulties finding causes in an L36 array because of too many bugs, we halve the number of signal factors by picking alternate factors, such as A, C, E, G, ..., W, and check for bugs. Rather than selecting every other factor, we can choose the half that we consider important. If the number of bugs does not change, all of the bugs are caused by combinations of that half of the signal factors. We then halve the number of factors again and continue this process until all root causes are detected. In contrast, if no bug is found in the first half of the signal factors, we test the second half. If there happens to be no bug in that test either, we can conclude that interactions between the first and second halves generate the bugs. To clarify the causes, after dividing each of the first and second halves into two groups, we investigate all four possible combinations of the two groups.
3. From Table 4.22, by correcting 100% errors regarding A, L, and M based on
Table 4.23, we then check for bugs in an L36 orthogonal array. Once all bugs
are eliminated, this procedure is complete.
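The halving strategy of item 2 can be sketched as a recursive search. This is only a sketch: `has_bug` is a hypothetical tester that re-runs the orthogonal-array test varying only the given subset of factors, and the interaction case (neither half fails alone) is simplified to returning both halves rather than testing the four sub-combinations.

```python
def localize(factors, has_bug):
    """Halve the factor set until a bug-causing subset is isolated.

    has_bug(subset) reports whether any bug appears when only `subset`
    is varied (the remaining factors held at a fixed level).
    """
    if len(factors) == 1 or not has_bug(factors):
        return factors if has_bug(factors) else []
    half = len(factors) // 2
    first, second = factors[:half], factors[half:]
    if has_bug(first):
        return localize(first, has_bug)
    if has_bug(second):
        return localize(second, has_bug)
    # neither half fails alone: an interaction between the halves is at fault
    return first + second
```

If a single factor C causes the bug, the search narrows to it in log2(k) rounds; if the bug needs factors from both halves, the combined set is returned for finer splitting.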
We have thus far shown some procedures for finding the basic causes of bugs.
However, we can check them using intermediate output values instead of the final
results of a total system. This is regarded as a method of subdividing a total system
into subsystems.
4.7. MT and MTS Methods

MT (Mahalanobis–Taguchi) Method

The MT method is based on defining distance only in the unit cluster of a population and defining variable distance in a population in consideration of the variable interrelationships among items.
The MTS method also defines a group of items close to their average as a unit
cluster in such a way that we can use it to diagnose or monitor a corporation. In
fact, both the MT and MTS methods have started to be applied to various fields
because they are outstanding methods of pattern recognition. What is most im-
portant in utilizing multidimensional information is to establish a fundamental
database. That is, we should consider what types of items to select or which groups
to collect to form the database. These issues should be determined by persons
expert in a specialized field.
In this section we detail a procedure for streamlining a medical checkup or
clinical examination using a database for a group of healthy people (referred to
subsequently as ‘‘normal’’ people). Suppose that the total number of items used
in the database is k. Using data for a group of normal people—for example, the
data of people who are examined and found to be in good health in any year after
annual medical checkups for three years in a row (if possible, the data of hundreds
of people is preferable)—we create a scale to characterize the group.
As a scale we use the distance measure of P. C. Mahalanobis, an Indian statistician, introduced in his 1936 paper. In calculating the Mahalanobis distance in
a certain group, the group needs to have homogeneous members. In other words,
a group consisting of abnormal people should not be considered. If the group
contains people with both low and high blood pressure, we should not regard the
data as a single distribution.
Indeed, we may consider a group of people suffering only from hepatitis type
A as being somewhat homogeneous; however, the group still exhibits a wide de-
viation. Now, let’s look at a group of normal people without hepatitis. For gender
and age we include both male and female and all adult age brackets. Male and
female are denoted by 0 and 1, respectively. Every item is dealt with as a quantitative measurement in the calculation; after regarding the 0 and 1 data for male and female as continuous variables, we calculate an average m and standard deviation σ. For every item for normal people, we compute the average value m and standard deviation σ and convert the data as below. This is called normalization:

Y = (y − m) / σ        (4.153)
Suppose that we have n normal people. When we convert y1, y2, ..., yn into Y1, Y2, ..., Yn using equation (4.153), Y1, Y2, ..., Yn have an average of 0 and a standard deviation of 1, which eases interpretation. Selecting two of the k items arbitrarily and dividing the sum of the normalized products by n (i.e., calculating the covariance), we obtain a correlation coefficient.
After forming the matrix of correlation coefficients between each pair of the k items and calculating its inverse matrix, we compute the following square of D, representing the Mahalanobis distance:

D² = (1/k) Σij aij Yi Yj        (4.154)
Here aij stands for the (i, j)th element of the inverse matrix. Y1, Y2, ..., Yk are converted from y1, y2, ..., yk based on the following equations:

Y1 = (y1 − m1) / σ1
Y2 = (y2 − m2) / σ2
⋮
Yk = (yk − mk) / σk        (4.155)

In these equations, for the k items regarding a group of normal people, the average of each item is m1, m2, ..., mk and the standard deviation is σ1, σ2, ..., σk. The data of the k items from a person, y1, y2, ..., yk, are normalized to obtain Y1,
Y2, ... , Yk. What is important here is that ‘‘a person’’ whose identity is unknown in
terms of normal or abnormal is an arbitrary person. If the person is normal, D 2
has a value of approximately 1; if not, it is much larger than 1. That is, the Ma-
halanobis distance D 2 indicates how far the person is from normal people.
For practical purposes, we use the following decibel value y (representing not the SN ratio but the magnitude of the noise):

y = 10 log D²        (4.156)
Therefore, if a certain person belongs to a group of normal people, the average
of y is 0 dB, and if the person stays far from normal people, y increases because
the magnitude of abnormality is enlarged. For example, if y is 20 dB, in terms of
D 2 the person is 100 times as far from the normal group as normal people are. In
most cases, normal people stay within the range of 0 to 2 dB.
Application of the MT Method to a Medical Diagnosis

Although, as discussed in the preceding section, we enumerate all necessary items in the medical checkup case, we need to beware of selecting an item that is derived from two other items. For example, among height, weight, and obesity, we need to narrow the items down to two, because otherwise we cannot compute the inverse of the matrix of correlation coefficients.
When we calculate the Mahalanobis distance using a database for normal peo-
ple, we determine the threshold for judging normality by taking into account the
following two types of error loss: the loss caused by misjudging a normal person
as abnormal and spending time and money to do precise tests; and the loss caused
by misjudging an abnormal person as normal and losing the chance of early
treatment.
❒ Example
The example shown in Table 4.24 is not a common medical examination but a
special medical checkup to find patients with liver dysfunction, studied by Tatsuji
Kanetaka at Tokyo Teishin Hospital [6]. In addition to the 15 items shown in Table 4.24, age and gender are included. The total number of items is 17.

Table 4.24
Physiological examination items

Examination Item                        Acronym    Normal Value
Total protein                           TP         6.5–7.5 g/dL
Albumin                                 Alb        3.5–4.5 g/dL
Cholinesterase                          ChE        0.60–1.00 ΔpH
Glutamate oxaloacetate transaminase     GOT        2–25 units
Glutamate pyruvate transaminase         GPT        0–22 units
Lactate dehydrogenase                   LDH        130–250 units
Alkaline phosphatase                    ALP        2.0–10.0 units
γ-Glutamyl transpeptidase               γ-GTP      0–68 units
Leucine aminopeptidase                  LAP        120–450 units
Total cholesterol                       TCh        140–240 mg/dL
Triglyceride                            TG         70–120 mg/dL
Phospholipids                           PL         150–250 mg/dL
Creatinine                              Cr         0.5–1.1 mg/dL
Blood urea nitrogen                     BUN        5–23 mg/dL
Uric acid                               UA         2.5–8.0 mg/dL
By selecting data from 200 people (several hundred would be desirable, but only 200 were chosen because of the capacity of a personal computer) diagnosed as being in good health for three years running at the annual medical checkup given by Tokyo Teishin Hospital, the researchers established a database of normal people. The data of normal people who were healthy for two consecutive years may also be used. Thus, we can compute the Mahalanobis distance based on the database of 200 people. Some people assume that, for raw data, D² follows an F-distribution with an average of approximately 1, with 17 degrees of freedom for the numerator and infinite degrees of freedom for the denominator. However, the distribution type is not important here, and pursuing it misses the point. We simply compare the Mahalanobis distance with the degree of each patient's dysfunction.

If we use decibel values in place of raw data, the data for a group of normal people should cluster around 0 dB with a range of a few decibels. To minimize the loss from diagnosis error, we should determine a threshold. We show a simple method of detecting judgment error next.
Table 4.25 demonstrates a case in which people whose data are more than 6 dB away from those of the normal group are judged not normal. For 95 new people coming to a medical checkup, Kanetaka analyzed the actual diagnostic error by comparing the current diagnosis, a diagnosis using the Mahalanobis distance, and close (precise) examination.
Category 1 in Table 4.25 is considered normal. Category 2 comprises a group
of people who have no liver dysfunction but had ingested food or alcohol despite
being prohibited from doing so before a medical checkup. Therefore, category 2
should be judged normal, but the current diagnosis inferred that 12 of 13 normal
people were abnormal. Indeed, the Mahalanobis method misjudged 9 normal peo-
ple as being abnormal, but this number is three less than that obtained using the
current method.
Category 3 consists of a group of people suffering from slight dysfunctions. Both
the current and Mahalanobis methods overlooked one abnormal person. Yet both
of them detected 10 of 11 abnormal people correctly.
For category 4, a cluster of 5 people suffering from moderate dysfunctions, both
methods found all 5. Since category 2 is a group of normal persons, we combine
categories 1 and 2 as the liver dysfunction (−) group and categories 3 and 4 as
the liver dysfunction (+) group, and summarize the diagnostic errors for each method
in Table 4.26. For these contingency tables, each discriminability ρ (0: no
discriminability; 1: 100% discriminability) is calculated by the following equations
(see "Separation System" in Section 4.4 for the theoretical background). A1's
discriminability:
ρ1 = [(28)(15) − (51)(1)]²/[(79)(16)(29)(66)] = 0.0563    (4.157)
A2's discriminability:

ρ2 = [(63)(15) − (16)(1)]²/[(79)(16)(64)(31)] = 0.344    (4.158)
Table 4.25
Medical checkup and discriminability

                A1: Current Method        A2: Mahalanobis Method
Category^a      Normal      Abnormal      Normal      Abnormal      Total
1               27          39            59          7             66
2               1           12            4           9             13
3               1           10            1           10            11
4               0           5             0           5             5
Total           29          66            64          31            95

^a 1, normal; 2, normal but temporarily abnormal due to food and alcohol; 3, slightly abnormal; 4, moderately abnormal.
Table 4.26
2 × 2 Contingency tables

A1: Current method
                         Diagnosis
Liver dysfunction     Normal    Abnormal    Total
− (normal)            28        51          79
+ (abnormal)          1         15          16
Total                 29        66          95

A2: Mahalanobis method
                         Diagnosis
Liver dysfunction     Normal    Abnormal    Total
− (normal)            63        16          79
+ (abnormal)          1         15          16
Total                 64        31          95

A1's SN ratio:

η1 = 10 log(0.0563/0.9437) = −12.2 dB    (4.159)
A2's SN ratio:

η2 = 10 log(0.344/0.656) = −2.8 dB    (4.160)
Therefore, a medical examination using the Mahalanobis distance is better by
9.4 dB, or 8.7 times, than the current item-by-item examination. According to Table
4.25, both methods have identical discriminability for abnormal people. However,
the current method diagnosed 51 of 79 normal people as being, or possibly being,
abnormal, thereby causing futile close examinations, whereas the Mahalanobis method
judged only 16 of 79 to be abnormal. That is, the latter eliminates such wasteful
checkups for 35 normal people.
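The calculations in equations (4.157) through (4.160) can be sketched in a few lines. The function names below are illustrative, not from the handbook; the tables are those of Table 4.26.

```python
# Sketch of the discriminability and SN-ratio calculations of equations
# (4.157)-(4.160), applied to the 2 x 2 contingency tables of Table 4.26.
import math

def discriminability(table):
    """table = [[n11, n12], [n21, n22]]; returns
    rho = (n11*n22 - n12*n21)^2 / (row1 * row2 * col1 * col2)."""
    (n11, n12), (n21, n22) = table
    r1, r2 = n11 + n12, n21 + n22          # row totals
    c1, c2 = n11 + n21, n12 + n22          # column totals
    return (n11 * n22 - n12 * n21) ** 2 / (r1 * r2 * c1 * c2)

def sn_ratio_db(rho):
    """Omega transform: eta = 10 log10(rho / (1 - rho)), in dB."""
    return 10 * math.log10(rho / (1 - rho))

a1 = [[28, 51], [1, 15]]   # A1: current method (normal/abnormal rows)
a2 = [[63, 16], [1, 15]]   # A2: Mahalanobis method

rho1, rho2 = discriminability(a1), discriminability(a2)
eta1, eta2 = sn_ratio_db(rho1), sn_ratio_db(rho2)
print(round(rho1, 4), round(eta1, 1))   # 0.0563, -12.2 dB
print(round(rho2, 3), round(eta2, 1))   # 0.344, -2.8 dB
print(round(eta2 - eta1, 1))            # gain of 9.4 dB
```

The gain of 9.4 dB corresponds to a factor of 10^0.94 ≈ 8.7 in discriminability.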
We do not need to sort out items regarded as essential. Now suppose that the
number of items (or groups of items) to be studied is l and that an appropriate
two-level orthogonal array is LN. When l = 30, L32 is used, and when l = 100,
L108 or L124 is selected.
Once control factors are assigned to LN, we formulate an equation to calculate
the Mahalanobis distance using the selected items, because each experimental
condition in the orthogonal array shows which items are assigned. Following
procedure 4, we compute the SN ratio for each experiment in orthogonal array LN
and then calculate each control factor's effect. If certain chosen items
contribute negatively or only slightly to improving the SN ratio, we select an
optimal condition that excludes them. This is the procedure for sorting out the
items used for the Mahalanobis distance.
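The selection step above can be sketched as follows. This is only an illustration: the SN-ratio function is a stub with hypothetical values, whereas in practice each run would rebuild the Mahalanobis distance from the items at level 1 and compute the SN ratio of procedure 4.

```python
# Illustrative sketch of item selection with a two-level orthogonal array.
L8 = [  # standard L8 array: 7 columns, levels 1 (use item) / 2 (omit)
    [1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 2, 2, 2, 2],
    [1, 2, 2, 1, 1, 2, 2], [1, 2, 2, 2, 2, 1, 1],
    [2, 1, 2, 1, 2, 1, 2], [2, 1, 2, 2, 1, 2, 1],
    [2, 2, 1, 1, 2, 2, 1], [2, 2, 1, 2, 1, 1, 2],
]

def sn_ratio(items):
    """Stub SN ratio (dB): rewards more items, penalizes item 3, which
    here plays the role of a harmful or redundant item (hypothetical)."""
    return 5.0 + 0.8 * len(items) - (2.0 if 3 in items else 0.0)

etas = [sn_ratio([i for i, lv in enumerate(run) if lv == 1]) for run in L8]

# Main effect of each item: mean SN ratio at level 1 minus at level 2.
effects = []
for item in range(7):
    lv1 = [e for e, run in zip(etas, L8) if run[item] == 1]
    lv2 = [e for e, run in zip(etas, L8) if run[item] == 2]
    effects.append(sum(lv1) / len(lv1) - sum(lv2) / len(lv2))

keep = [i for i, e in enumerate(effects) if e > 0]   # item 3 is dropped
```

Items whose main effect on the SN ratio is negative are excluded from the optimal condition.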
MTS METHOD
Although an orthogonalizing technique using principal components exists for
normalizing data in a multidimensional space, it is often unrelated to economic
considerations and of little practical use because it rests too heavily on its
mathematical background. Here we introduce a new procedure for orthogonalizing
data in a multidimensional space that at the same time reflects the researcher's
objectives.
We select X1, X2, ..., Xk as the k-dimensional variables and define the following
as the n groups of data in the Mahalanobis space:

X11, X12, ..., X1n
X21, X22, ..., X2n
⋮
Xk1, Xk2, ..., Xkn

All of the data above are normalized; that is, for each variable Xi, the n data
Xi1, Xi2, ..., Xin have a mean of zero and a variance of 1. The order X1, X2, ...,
Xk reflects cost or priority; fixing this order is an important step and should be
done by engineers. In lieu of X1, X2, ..., Xk, we introduce new variables x1, x2,
..., xk that are mutually orthogonal:
x1 = X1    (4.161)

The regression of X2 on x1 is

X2 = b21x1    (4.162)

In this case b21 is not only the regression coefficient but also the correlation
coefficient. Then the remaining part of X2, excluding the part related to x1
(the regression part), that is, the part independent of x1, is

x2 = X2 − b21x1    (4.163)
Σj x1j x2j = Σj x1j(X2j − b21x1j) = 0    (4.164)

Thus, x1 and x2 become orthogonal. On the other hand, whereas x1's variance is
1, x2's variance σ2², called the residual contribution, is calculated by the
equation

σ2² = 1 − b21²    (4.165)
The reason is that if we compute the mean square of the residuals, we obtain the
following result:

(1/n) Σj (X2j − b21x1j)² = (1/n) Σj X2j² − (2/n)b21 Σj X2j x1j + (1/n)b21² Σj x1j²
                         = 1 − 2b21² + b21²
                         = 1 − b21²    (4.166)
We express the third variable, X3, with x1 and x 2:
X3 = b31x1 + b32x2    (4.167)
By multiplying both sides of equation (4.167) by x1, calculating the sum over the
elements of the base space, and dividing by n, we obtain

b31 = (1/(nV1)) Σj X3j x1j    (4.168)

V1 = (1/n) Σj x1j²    (4.169)

Similarly, by multiplying both sides of equation (4.167) by x2, we have b32:

b32 = (1/(nV2)) Σj X3j x2j    (4.170)

V2 = (1/n) Σj x2j²    (4.171)
The variable x3 is thus the residual part of X3: the part that cannot be expressed
by X1 and X2, or equivalently by x1 and x2. In sum,

x3 = X3 − b31x1 − b32x2    (4.172)
In the Mahalanobis space, x1, x 2, and x 3 are orthogonal. Since we have already
proved the orthogonality of x1 and x 2, we prove here that of x1 and x 3 and that of
x 2 and x 3. First, considering the orthogonality of x1 and x 3, we have
Σj x1j(X3j − b31x1j − b32x2j) = Σj x1jX3j − b31 Σj x1j² − b32 Σj x1jx2j = 0    (4.173)
4.7. MT and MTS Methods 111
This follows from equation (4.168), which defines b31, together with the
orthogonality of x1 and x2. Similarly, the orthogonality of x2 and x3 is proved
from equation (4.170), which defines b32:

Σj x2j(X3j − b31x1j − b32x2j) = 0    (4.174)
For the remaining variables, we can proceed with similar calculations. The var-
iables normalized in the preceding section can be rewritten as follows:
x1 = X1
x2 = X2 − b21x1
x3 = X3 − b31x1 − b32x2
⋮
xk = Xk − bk1x1 − bk2x2 − ··· − bk(k−1)xk−1    (4.175)
Each of the orthogonalized variables x2, ..., xk does not have a variance of 1.
In an actual case we should calculate each variance right after computing the n
groups of variables x1, x2, ..., xk in the Mahalanobis space. As for the degrees
of freedom used in calculating a variance, we can regard them as n for x1, n − 1
for x2, n − 2 for x3, ..., n − k + 1 for xk. Alternatively, we can use n degrees
of freedom for all of them, because quite often n » k.
V1 = (1/n)(x11² + x12² + ··· + x1n²)

V2 = (1/(n − 1))(x21² + x22² + ··· + x2n²)

⋮

Vk = (1/(n − k + 1))(xk1² + xk2² + ··· + xkn²)    (4.176)
Now setting normalized, orthogonal variables to y1, y2, ... , yk, we obtain
y1 = x1/√V1

y2 = x2/√V2

⋮

yk = xk/√Vk    (4.177)
All of the normalized y1, y2, ..., yk are orthogonal and have a variance of 1. The
database completed in the end contains m and σ as the mean and standard deviation
of the initial observations of each item, and b21, V2, b31, b32, V3, ..., bk1,
bk2, ..., bk(k−1), Vk as the coefficients and variances for normalization. Since
we select n groups of data, there
are k means m1, m2, ..., mk, k standard deviations σ1, σ2, ..., σk, (k − 1)k/2
coefficients, and k variances in this case. Thus, the number of necessary memory
items is as follows:

k + k + (k − 1)k/2 + k = k(k + 5)/2    (4.178)
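The successive orthogonalization of equations (4.161) through (4.176) can be sketched as follows. This is a minimal sketch with hypothetical base-space data; the helper names are not from the handbook, and the variance divisor n is used throughout, as the text permits when n » k.

```python
# Sketch of the successive orthogonalization of (4.161)-(4.176): each
# variable, taken in priority order, is regressed on the already
# orthogonalized variables and replaced by its residual.

def standardize(v):
    """Normalize one item's base-space data to mean 0, variance 1."""
    n = len(v)
    m = sum(v) / n
    s = (sum((e - m) ** 2 for e in v) / n) ** 0.5
    return [(e - m) / s for e in v]

def orthogonalize(base):
    """base: k lists of n standardized observations (X1 ... Xk).
    Returns orthogonalized variables x, coefficients b[i][j], and
    variances V (divisor n throughout)."""
    n = len(base[0])
    x, b, V = [], [], []
    for Xi in base:
        xi, bi = list(Xi), []
        for j in range(len(x)):
            # b = sum(Xi * x_j) / (n V_j), cf. equations (4.168), (4.170)
            bij = sum(u * w for u, w in zip(Xi, x[j])) / (n * V[j])
            bi.append(bij)
            xi = [u - bij * w for u, w in zip(xi, x[j])]
        x.append(xi)
        b.append(bi)
        V.append(sum(u * u for u in xi) / n)
    return x, b, V

raw = [
    [5.2, 4.8, 5.9, 4.1, 5.0],            # item X1 (highest priority)
    [130.0, 118.0, 141.0, 109.0, 122.0],  # item X2
    [80.0, 72.0, 90.0, 60.0, 78.0],       # item X3
]
base = [standardize(v) for v in raw]
x, b, V = orthogonalize(base)
```

After the call, the cross sums Σ xᵢxⱼ vanish for i ≠ j, and V[0] = 1 because x1 = X1 is already standardized.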
The correlation matrix in the normalized Mahalanobis space turns out to be an
identity matrix. The correlation matrix R is then expressed as

    | 1  0  ···  0 |
R = | 0  1  ···  0 |    (4.179)
    | ⋮  ⋮   ⋱   ⋮ |
    | 0  0  ···  1 |
Therefore, the inverse matrix A of R is also an identity matrix:

          | 1  0  ···  0 |
A = R⁻¹ = | 0  1  ···  0 |    (4.180)
          | ⋮  ⋮   ⋱   ⋮ |
          | 0  0  ···  1 |
Using these results, we can calculate the Mahalanobis distance D 2 as
D² = (1/k)(y1² + y2² + ··· + yk²)    (4.181)
Now, taking measured data from which the means m have already been subtracted and
which have been divided by the standard deviations σ to give X1, X2, ..., Xk, we
first compute x1, x2, ..., xk:

x1 = X1
x2 = X2 − b21x1
x3 = X3 − b31x1 − b32x2
⋮
xk = Xk − bk1x1 − ··· − bk(k−1)xk−1    (4.182)
Thus, y1, y2, ... , yk are calculated as
y1 = x1
y2 = x2/√V2
y3 = x3/√V3
⋮
yk = xk/√Vk    (4.183)
When we calculate y1, y2, ..., yk and D² in equation (4.181) for an arbitrary
object, if the object belongs to the Mahalanobis space, D² is supposed to take a
value of about 1, as discussed earlier. Otherwise, D² becomes much larger than 1
in most cases.
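The evaluation of a new object against a stored unit space, equations (4.182), (4.183), and (4.181), can be sketched for k = 2 items. The stored constants below (means m, standard deviations s, coefficient b21) are hypothetical values of the kind the completed database would hold.

```python
# Sketch of scoring an observation against a stored unit space with
# k = 2 items, following (4.182)-(4.183) and (4.181).
m = [5.0, 120.0]        # means of the raw items in the unit space
s = [0.6, 11.0]         # standard deviations in the unit space
b21 = 0.7               # regression coefficient of X2 on x1
V2 = 1 - b21 ** 2       # residual variance of x2, cf. (4.165)

def mahalanobis_d2(obs):
    # normalize with the unit-space mean and standard deviation
    X1, X2 = [(o - mi) / si for o, mi, si in zip(obs, m, s)]
    x1 = X1                      # (4.182)
    x2 = X2 - b21 * x1
    y1 = x1                      # (4.183); V1 = 1 for x1
    y2 = x2 / V2 ** 0.5
    return (y1 ** 2 + y2 ** 2) / 2   # (4.181), divided by k = 2

print(mahalanobis_d2([5.1, 122.0]))  # near the unit space: D^2 well below 1
print(mahalanobis_d2([7.5, 90.0]))   # far from it: D^2 much larger than 1
```

An observation consistent with the unit space scores near or below 1; an inconsistent one scores far above it.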
Once the normalized orthogonal variables y1, y2, ... , yk are computed, the next
step is the selection of items.
A1: Only y1 is used
⋮
Ak: y1, y2, ... , yk are used
Now suppose that the values of the signal factor levels are known. (We do not
treat the case of unknown values here.) Although some signals may belong to the
base space, l levels of a signal not included in the base space are normally
used. We set the levels to M1, M2, ..., Ml. Although l can be as small as 3, we
should choose as large an l value as possible so that errors can be estimated.
In the case of Ak, where all items are used, after calculating the Mahalanobis
distances for M1, M2, ..., Ml and taking the square root of each, we create Table
4.27. What is important here is not D² but D itself. As a next step, we compute
dynamic SN ratios. Although we show only the case where all items are used, we
can create a table similar to Table 4.27 and calculate the SN ratio for the other
cases, such as when partial sets of items are assigned to an orthogonal array.
Based on Table 4.27, we can compute the SN ratio as follows:
Total variation:

ST = D1² + D2² + ··· + Dl²    (f = l)    (4.184)

Variation of the proportional term:

Sβ = (M1D1 + M2D2 + ··· + MlDl)²/r    (f = 1)    (4.185)

Effective divider:

r = M1² + M2² + ··· + Ml²    (4.186)

Error variation:

Se = ST − Sβ    (f = l − 1)    (4.187)
Table 4.27
Signal values and Mahalanobis distances

Signal-level value       M1    M2    ···    Ml
Mahalanobis distance     D1    D2    ···    Dl
SN ratio (where Ve = Se/(l − 1) is the error variance):

η = 10 log [(1/r)(Sβ − Ve)/Ve]    (4.188)

On the other hand, for the calibration equation, we calculate

β = (M1D1 + M2D2 + ··· + MlDl)/r    (4.189)

and, for an object with distance D, we estimate

M̂ = D/β    (4.190)
In addition, for A1, A2, ..., Ak−1, we need to compute the SN ratios η1, η2, ...,
ηk−1. Table 4.28 summarizes the results. According to the table, we determine the
number of items by balancing SN ratio against cost; the cost is not the calculation
cost but the cost of measuring the items. We do not explain here the use of loss
functions to select items.
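The dynamic SN ratio of equations (4.184) through (4.190) can be sketched with hypothetical signal levels and distances (the numbers below are illustrative, not from the handbook):

```python
# Sketch of the dynamic SN ratio (4.184)-(4.190) for signal levels
# M1..Ml and Mahalanobis distances D1..Dl (note: D, not D^2, is used).
import math

M = [1.0, 2.0, 3.0]          # signal-level values (hypothetical)
D = [0.9, 2.1, 2.9]          # observed Mahalanobis distances (hypothetical)

l = len(M)
ST = sum(d * d for d in D)                      # (4.184), f = l
r = sum(m * m for m in M)                       # (4.186), effective divider
Sb = sum(m * d for m, d in zip(M, D)) ** 2 / r  # (4.185), f = 1
Se = ST - Sb                                    # (4.187), f = l - 1
Ve = Se / (l - 1)                               # error variance
eta = 10 * math.log10((Sb - Ve) / r / Ve)       # (4.188)
beta = sum(m * d for m, d in zip(M, D)) / r     # (4.189)
print(round(eta, 1), round(beta, 3))
```

For a new object with distance D_new, its signal level is then estimated by M̂ = D_new/beta, following (4.190).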
SUMMARY OF PARTIAL MD GROUPS: COUNTERMEASURE FOR COLLINEARITY
This approach, which summarizes the distances calculated from each subset of
items in order to solve collinearity problems, can be applied widely. The reason
is that we are free to select the type of scale to use: what is important is to
select a scale that expresses patients' conditions accurately, no matter what
correlations exist in the Mahalanobis space. The point is to be consistent with
patients' conditions as diagnosed by doctors.
Now let's go through a new construction method. For example, suppose that we have
0/1 data on a 64 × 64 grid for computer recognition of handwriting. If we use the
0's and 1's directly, 4096 elements exist. Thus, the Mahalanobis space formed by
a unit set (suppose that we are dealing with data on whether a computer can
recognize the character "A" as "A"; for example, 200 sets in total if 50 people
each write the character four times) requires a 4096 × 4096 matrix. This takes
too long to handle with current computer capability. How to condense the character
information is an issue of information system design. In fact, a technique has
already been proposed that substitutes just 128 data items, consisting of the 64
column sums and the 64 row sums, for all the data in the 64 × 64 grid. Since the
total of the 64 column sums and the total of the 64 row sums are identical, they
involve collinearity. Therefore, we cannot create a 128 × 128 unit space by using
the relationship 64 + 64 = 128.
In another method, we first create a unit space using the 64 row sums and introduce
the Mahalanobis distance, then create a unit space using the 64 column sums and
Table 4.28
Number of items, SN ratios, and costs

Number of items    1     2     3     ···    k
SN ratio           η1    η2    η3    ···    ηk
Cost               C1    C2    C3    ···    Ck
calculate the Mahalanobis distance. For a few letters similar to "B," we calculate
Mahalanobis distances. If 10 persons, N1, N2, ..., N10, each write three letters
similar to "B," we obtain 30 signals. After each of the 10 persons writes the
three letters "D," "E," and "R," we compute the Mahalanobis distances of all the
signals from the unit space of "B." This is shown in Table 4.29.
We calculate a discriminability SN ratio according to this table. Because we do
not know the true differences among the M's, we compute SN ratios for unknown
true values as follows:
Total variation:

ST = D11² + D12² + ··· + D3,10²    (f = 30)    (4.191)

Signal effect:

SM = (D1² + D2² + D3²)/10    (f = 3)    (4.192)

VM = SM/3    (4.193)

Se = ST − SM    (f = 27)    (4.194)

Ve = Se/27    (4.195)
By calculating row sums and column sums separately, we compute the following SN
ratio using VM and Ve:

η = 10 log [(1/10)(VM − Ve)/Ve]    (4.196)

The error variance, which is the reciprocal of the antilog value, can be computed
as

σ² = 10^(−η/10)    (4.197)
The SN ratio for selection of items is calculated by setting the following two levels:
❏ Level 1: item used
❏ Level 2: item not used
We allocate them to an orthogonal array. By calculating the SN ratio using equa-
tion (4.196), we choose optimal items.
Table 4.29
Mahalanobis distances for signals

                          Noise
Signal        N1      N2      ···     N10       Total
M1 (D)        D11     D12     ···     D1,10     D1
M2 (E)        D21     D22     ···     D2,10     D2
M3 (R)        D31     D32     ···     D3,10     D3
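With hypothetical distances standing in for the Dij of Table 4.29, the SN ratio for unknown true values, equations (4.191) through (4.197), can be sketched as:

```python
# Sketch of (4.191)-(4.197): discriminability SN ratio when the true
# signal differences are unknown. D[i][j] is the Mahalanobis distance
# of noise (writer) j under signal (letter) i; numbers are hypothetical.
import math

D = [
    [2.1, 2.5, 1.9, 2.3, 2.0, 2.4, 2.2, 1.8, 2.6, 2.2],  # M1 ("D")
    [3.0, 3.4, 2.8, 3.1, 2.9, 3.3, 3.2, 2.7, 3.5, 3.1],  # M2 ("E")
    [4.2, 4.6, 4.0, 4.3, 4.1, 4.5, 4.4, 3.9, 4.7, 4.3],  # M3 ("R")
]
n, k = len(D[0]), len(D)                    # 10 noises, 3 signals
ST = sum(d * d for row in D for d in row)            # (4.191), f = 30
totals = [sum(row) for row in D]                     # D1, D2, D3
SM = sum(t * t for t in totals) / n                  # (4.192), f = 3
VM = SM / k                                          # (4.193)
Ve = (ST - SM) / (n * k - k)                         # (4.194)-(4.195)
eta = 10 * math.log10((VM - Ve) / (n * Ve))          # (4.196)
sigma2 = 10 ** (-eta / 10)                           # (4.197)
print(round(eta, 2), sigma2)
```

The same calculation is repeated for the row-sum and column-sum unit spaces, and the averages of VM and Ve are used.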
Although some items may be common to several groups of items, we should be careful
not to have collinearity within any one group. Indeed, it may seem somewhat unusual
for common items to be included in several different groups, even though each
common item is thereby emphasized; however, this is not so unreasonable, considering
that conventional single-correlation analysis completely neglects other, less
related items.
A key point is to judge whether the summarized Mahalanobis distance corresponds
well to the SN ratio calculated from the Mahalanobis distances of items not
included in the unit space. Although we can create Mahalanobis spaces by any
procedure, we can judge them only by using SN ratios.
❒ Example
Although automobile keys are generally produced as a set of four identical keys,
this production system is regarded as dynamic because each set has different
dimensions and shape. Each key has approximately 9 to 11 cuts, and each cut can
take several different dimensions. If there are four possible dimensions in
0.5-mm steps, a key with 10 cuts has 4¹⁰ = 1,048,576 variations.
In actuality, each key is produced so that it has a few cuts of differing
dimensions; the number of key types in the market is then approximately 10,000.
Each key set is produced to the dimensions indicated by a computer. By using a
master key we can check whether a given key is produced as indicated by the
computer. Additionally, to manage a machine, we can examine particular-
4.8. On-line Quality Engineering
The control cost is

control cost = B/n0 + C/u0    (4.198)

The dispersion caused by the adjustment limit D0 is

D0²/3    (4.199)

On the other hand, the magnitude of dispersion for an inspection interval n0 and
an inspection time lag of l is

((n0 + 1)/2 + l)(D0²/u0)    (4.200)

so the quality loss per product due to dispersion is

(A/Δ²)[D0²/3 + ((n0 + 1)/2 + l)(D0²/u0)]    (4.201)
Adding equations (4.198) and (4.201) to calculate the following economic loss L0
for the current control system, we have 33.32 cents per product:

L0 = B/n0 + C/u0 + (A/Δ²)[D0²/3 + ((n0 + 1)/2 + l)(D0²/u0)]
   = 12/300 + 58/19,560 + (1.90/30²)[20²/3 + (301/2 + 50)(20²/19,560)]
   = 4.00 + 0.30 + 28.15 + 0.87
   = 33.32 cents    (4.202)
Assuming that the annual operation time is 1600 hours, we can see that in the
current system the following amount of money is spent annually to control quality:

(A/Δ²)σm²    (4.204)
The optimal control system improves on the loss in equation (4.203). In short, it
is equivalent to determining an optimal inspection interval n and adjustment limit
D, both of which are calculated by the following formulas:

n = √(2u0B/A) (Δ/D0)    (4.205)
  = √[(2)(19,560)(12)/1.90] (30/20)
  = 745
  ≈ 600 (once every two hours)    (4.206)

D = [(3C/A)(D0²/u0)Δ²]^(1/4)    (4.207)
  = [((3)(58)/1.90)(20²/19,560)(30²)]^(1/4)
  = 6.4    (4.208)
  ≈ 7.0 μm    (4.209)
Then, by setting the optimal measurement interval to 600 sets and the adjustment
limit to ±7.0 μm, we can reduce the loss L:

L = B/n + C/u + (A/Δ²)[D²/3 + ((n + 1)/2 + l)(D²/u)]    (4.210)

u = u0(D²/D0²) = (19,560)(7²/20²) = 2396    (4.211)

Therefore, we need to change the adjustment interval from the current level of 65
hours to 8 hours. However, the total loss L decreases as follows:

L = 12/600 + 58/2396 + (1.90/30²)[7²/3 + (601/2 + 50)(7²/2396)]
  = 2.00 + 2.42 + 3.45 + 1.51
  = 9.38 cents    (4.212)
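The loss-function comparison of equations (4.202) through (4.212) can be sketched as follows, with the symbols and values taken from the text (B = checking cost, C = adjustment cost, A = loss at the tolerance limit Δ, n0/u0/D0 = current checking interval, mean adjustment interval, and adjustment limit, lag = time lag):

```python
# Sketch of the feedback-control loss calculation and optimization,
# (4.202)-(4.212). Loss is returned in cents per product.
B, C, A = 12.0, 58.0, 1.90
n0, u0, D0 = 300, 19_560, 20.0
Delta, lag = 30.0, 50

def loss_cents(n, u, D):
    """Checking cost + adjustment cost + quality loss, cf. (4.202)/(4.210)."""
    quality = (A / Delta**2) * (D**2 / 3 + ((n + 1) / 2 + lag) * D**2 / u)
    return 100 * (B / n + C / u + quality)

# Optimal checking interval (4.205) and adjustment limit (4.207),
# before rounding to the practical values 600 and 7.0 used in the text.
n_opt = (2 * u0 * B / A) ** 0.5 * (Delta / D0)        # about 745
D_opt = (3 * C / A * D0**2 / u0 * Delta**2) ** 0.25   # about 6.4
u_new = u0 * 7.0**2 / D0**2                           # (4.211), about 2396

print(round(loss_cents(n0, u0, D0), 2))       # current loss, ~33.3 cents
print(round(loss_cents(600, u_new, 7.0), 2))  # improved loss, ~9.38 cents
```

The sketch reproduces the roughly 24-cent-per-product improvement derived in the text.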
Management in Manufacturing
The procedure described in the preceding section balances the checkup and
adjustment costs against the necessary quality level, leading finally to the
optimal allocation of operators in a production plant. Now, given that a checkup
takes 10 minutes and an adjustment 30 minutes, the daily work time required under
the optimal system is

(10 min)(daily number of checkups) + (30 min)(daily number of adjustments)
= (10)[(8)(300)/n] + (30)[(8)(300)/u]    (4.215)
= (10)(2400/600) + (30)(2400/2396)
= 40 + 30.1 = 70 minutes    (4.216)
Assuming that one shift lasts 8 hours, the required number of workers is

70/[(8)(60)] = 0.146 worker    (4.217)

which implies that one operator is sufficient. By comparison, the current system
requires

(10)[(8)(300)/300] + (30)[(8)(300)/19,560] = 83.6 minutes    (4.219)

or 83.6/480 = 0.174 worker, so we can reduce the workforce by only 0.028 worker in
this process. On the other hand, to compute the process capability index Cp, we
estimate the current standard deviation σ0 using the standard deviation σm of the
measurement error:

σ0 = √[D0²/3 + ((n0 + 1)/2 + l)(D0²/u0) + σm²]    (4.220)
   = √[20²/3 + (301/2 + 50)(20²/19,560) + 2²]
   = 11.9 μm    (4.221)
Cp = (2)(30)/[(6)(11.9)] = 0.84    (4.222)
The standard deviation in the optimal feedback system is

σ = √[7²/3 + (601/2 + 50)(7²/2396) + 2²] = 5.24 μm    (4.223)
Then we can not only reduce the required workforce by 0.028 worker but also enhance
the process capability index Cp from the current 0.84 to

Cp = (30)(2)/[(6)(5.24)] = 1.91    (4.224)
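The process-capability estimates of equations (4.220) through (4.224) can be sketched with the values from the text (lengths in micrometers):

```python
# Sketch of the process-capability estimates (4.220)-(4.224).
def process_sigma(n, u, D, lag=50, sigma_m=2.0):
    """Standard deviation combining the adjustment-limit term, the
    drift between checkups, and the measurement error sigma_m."""
    var = D**2 / 3 + ((n + 1) / 2 + lag) * D**2 / u + sigma_m**2
    return var ** 0.5

Delta = 30.0
sigma0 = process_sigma(300, 19_560, 20.0)   # current system
sigma1 = process_sigma(600, 2396, 7.0)      # optimal system
Cp0 = 2 * Delta / (6 * sigma0)
Cp1 = 2 * Delta / (6 * sigma1)
print(round(sigma0, 1), round(Cp0, 2))   # ~11.9, ~0.84
print(round(sigma1, 2), round(Cp1, 2))   # ~5.24, ~1.91
```

Tightening the adjustment limit more than halves the standard deviation, which is what lifts Cp from 0.84 to 1.91.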
at 45% off the original price, the sales volume doubles. This holds true when we
offer new jobs to half the workers, with the sales volume remaining at the same
level. Now, provided that the mean adjustment interval decreases to one-fourth
because of increased variability after the production speed is raised, how does
the cost eventually change? When the frequency of machine trouble quadruples, u
decreases from its current level of 2396 to 599, one-fourth of 2396. Therefore,
we take u0 = 599 and D0 = 7 μm as the new current levels. Taking these into
account, we reconsider the loss function: the cost factor A is multiplied by 0.6
and the production volume is doubled. In general, as production conditions change,
the optimal inspection interval and adjustment limit also change, so we need to
recalculate n and D. Substituting 0.6A for A in the formulas, we obtain
n = √[2u0B/(0.6A)] (Δ/D0)
  = √[(2)(599)(12)/((0.6)(1.90))] (30/7)    (4.225)

D = [((3)(58)/((0.6)(1.90)))(7²/599)(30²)]^(1/4)
  ≈ 10 μm    (4.226)

u = (599)(10²/7²) = 1222    (4.228)

L = B/n + C/u + (0.6A/Δ²)[D²/3 + ((n + 1)/2 + 2l)(D²/u)]
  = 12/600 + 58/1222 + [(0.6)(1.90)/30²][10²/3 + (601/2 + 100)(10²/1222)]
  = 2.00 + 4.75 + 4.22 + 4.15
  = 15.12 cents    (4.229)
Suppose that the production volume remains the same, (300)(1600) = 480,000 sets.
We can then save the following amount of money on an annual basis:

(1.99 − 1.29)(480,000) = $336,000    (4.232)
References
1. Genichi Taguchi, 1987. System of Experimental Design. Dearborn, Michigan: Unipub/
American Supplier Institute.
2. Genichi Taguchi, 1984. Reliability Design Case Studies for New Product Development.
Tokyo: Japanese Standards Association.
3. Genichi Taguchi et al., 1992. Technology Development for Electronic and Electric Industries.
Quality Engineering Application Series. Tokyo: Japanese Standards Association.
4. Measurement Management Simplification Study Committee, 1984. Parameter Design
for New Product Development. Tokyo: Japanese Standards Association.
5. Genichi Taguchi et al., 1989. Quality Engineering Series, Vol. 2. Tokyo: Japanese Stan-
dards Association.
6. Tatsuji Kanetaka, 1987. An application of Mahalanobis distance. Standardization
and Quality Control, Vol. 40, No. 10.