3a. Factorial Experiment

The document discusses a factorial experiment investigating the effects of exhaust index and pump heater voltage on pressure inside a vacuum tube. It provides background on factorial experiments and details of an example experiment, including the data collection process and analysis, which involves a two-way ANOVA to evaluate main and interaction effects.

TI142312 - QUALITY ENGINEERING

Factorial Experiment

NANI KURNIATI, PhD

DEPARTMENT OF INDUSTRIAL ENGINEERING


INSTITUT TEKNOLOGI SEPULUH NOPEMBER (ITS)
Factorial Designs:
Introduction and Two-Level Factorial Experiments

•A complete (or full) factorial design is a design containing every possible combination of the levels of all factors.
A Classic Example*
Pressure Inside a Vacuum Tube
*Source: Hicks, C. R., Fundamental Concepts in the Design of Experiments, 3rd edition. Holt, Rinehart & Winston, 1982.
•An experiment is performed to determine the effect of:
▪Exhaust index (measured in seconds), and
▪Pump heater voltage (in volts)
on the pressure inside a vacuum tube (measured in microns of mercury). (What the heck is a vacuum tube?)
▪The experiment consists of choosing
•3 exhaust indexes (60, 90, 150) and
•2 voltages (127, 220)
at fixed levels, performing
•2 replications
at each of the resulting 6 treatment combinations, and performing the resulting 12 runs according to a completely randomized design.
Example: Data Layout

Data Layout – Order of Experimentation:

•Chosen completely at random
▪Note some treatment combinations are run twice before others are performed once.

                  Exhaust Index
  Voltage     60        90        150
   127       4, 9      1, 11     6, 12
   220       2, 8      3, 5      7, 10

▪What if one were to replicate by first randomizing 6 runs for each treatment combination and then randomizing 6 more?
•This would be a “restriction on randomization” and the design would no longer be completely randomized.
▪Strictly speaking, this would introduce an additional blocking factor into the experiment.
Example: Data

Data – Results from Experimentation:

                  Exhaust Index
  Voltage     60        90        150
   127      48, 58    28, 33     7, 15
   220      62, 54    14, 10     6, 9
[Figure: side-by-side boxplots of Pressure by Exhaust Index (levels 60, 90, 150) and of Pressure by Voltage (levels 127, 220); means are indicated by solid circles.]
General Two-Factor Factorial Experiment

Statistical (effects) model:

  y_ijk = μ + τ_i + β_j + (τβ)_ij + ε_ijk,
  i = 1, 2, …, a;  j = 1, 2, …, b;  k = 1, 2, …, n.

•Factor A is often called the row treatment factor while Factor B is called the column treatment factor.
Sum of Squares Decomposition

It can be shown, via some straightforward algebra, that:

  Σ_{i=1..a} Σ_{j=1..b} Σ_{k=1..n} (y_ijk − ȳ···)²
      = bn Σ_{i=1..a} (ȳ_i·· − ȳ···)²  +  an Σ_{j=1..b} (ȳ_·j· − ȳ···)²
      + n Σ_{i=1..a} Σ_{j=1..b} (ȳ_ij· − ȳ_i·· − ȳ_·j· + ȳ···)²
      + Σ_{i=1..a} Σ_{j=1..b} Σ_{k=1..n} (y_ijk − ȳ_ij·)²

which has the form

  SST = SSA + SSB + SSAB + SSE.
▪The degrees of freedom associated with each sum of
squares also partition in the same way, i.e.,
dfT = dfA + dfB + dfAB + dfE,
where
dfT = abn – 1; dfA = a – 1; dfB = b – 1;
dfAB = (a – 1)(b – 1); and dfE = ab(n – 1).
Relevant Statistical Results

•Forming the corresponding mean squares and assuming fixed effects, it can be shown that:

  E(MSA) = σ² + [bn/(a − 1)] Σ_{i=1..a} τ_i²
  E(MSB) = σ² + [an/(b − 1)] Σ_{j=1..b} β_j²
  E(MSAB) = σ² + {n/[(a − 1)(b − 1)]} Σ_{i=1..a} Σ_{j=1..b} (τβ)_ij²

and

  E(MSE) = σ²

•Also, when the null hypotheses of no main effects and no interaction effects are true, then MSA, MSB, MSAB, and MSE all estimate σ².
•However, if there are differences between row treatment effects or column treatment effects, or if interaction is present, then the corresponding mean squares will be larger than MSE (sound familiar?).
Accounting for Interaction Between Treatments

Reconsider the decomposition for a moment


SST = SSA + SSB + SSAB + SSE
We can consider each combination as a “treatment” level
and consider this as just the model component of the sums
of squares.
Example: Two-Way ANOVA via Excel

Data (long format):

  Voltage  EI  Treatment  Data
    1      1      1        48
    1      1      1        58
    1      2      2        28
    1      2      2        33
    1      3      3         7
    1      3      3        15
    2      1      4        62
    2      1      4        54
    2      2      5        14
    2      2      5        10
    2      3      6         6
    2      3      6         9

Key (totals):
  Voltage: 1 → 189, 2 → 155
  EI:      1 → 222, 2 → 85, 3 → 37
  Treatment: 1 → 106, 2 → 61, 3 → 22, 4 → 116, 5 → 24, 6 → 15

  Source        SS         DF   MS        F-stat   p-value
  Average       9861.33     1
  Treatments    4987.67     5    997.533   43.059   0.00013
    Voltage       96.33     1     96.333    4.158   0.08754
    EI          4608.17     2   2304.083   99.457   0.00003
    Interaction  283.17     2    141.583    6.112   0.03569
  Error          139.00     6     23.167
  Total        14988.00    12
Example: Interpretation

Since interaction is significant in this example, one should


be cautious in interpreting the main effects.
•The significant interaction here means that the effect of
exhaust index on vacuum tube pressure at one voltage is
different from its effect at the other voltage.
•Thus, when interaction is present, one generally is advised
not to compare row means without specifying a level for the
column mean, or vice-versa
▪e.g., it is potentially misleading to compare the effect of
exhaust index 60 versus exhaust index 90 without
considering the voltage level.
It would be appropriate, however, to develop comparisons amongst the 6 treatment combinations as follow-up to a one-way analysis of variance on those combinations.
Consider Text Example, Table 7.11

Two factors are of interest in a study of animal survival time under different poisons:
• Poison (I, II, III)
• Treatment (A, B, C, D)

Groups of four animals are randomly allocated to the three poisons and four treatments.
• Thus, a 3 x 4 factorial design with four replications.

              Treatment
  Poison     A       B       C       D
    I      0.31    0.82    0.43    0.45
           0.45    1.10    0.45    0.71
           0.46    0.88    0.63    0.66
           0.43    0.72    0.76    0.62
    II     0.36    0.92    0.44    0.56
           0.29    0.61    0.35    1.02
           0.40    0.49    0.31    0.71
           0.23    1.24    0.40    0.38
    III    0.22    0.30    0.23    0.30
           0.21    0.37    0.25    0.36
           0.18    0.38    0.24    0.31
           0.23    0.29    0.22    0.33

No blocking, both factors of equal importance, and interaction among the factors is a possibility.
Consider Text Example, Table 7.11
Data in long format (Interaction = treatment-combination index; the four replicates are listed together):

  Interaction  Poison  Treat  Data
       1         1       1    0.31, 0.45, 0.46, 0.43
       2         1       2    0.82, 1.10, 0.88, 0.72
       3         1       3    0.43, 0.45, 0.63, 0.76
       4         1       4    0.45, 0.71, 0.66, 0.62
       5         2       1    0.36, 0.29, 0.40, 0.23
       6         2       2    0.92, 0.61, 0.49, 1.24
       7         2       3    0.44, 0.35, 0.31, 0.40
       8         2       4    0.56, 1.02, 0.71, 0.38
       9         3       1    0.22, 0.21, 0.18, 0.23
      10         3       2    0.30, 0.37, 0.38, 0.29
      11         3       3    0.23, 0.25, 0.24, 0.22
      12         3       4    0.30, 0.36, 0.31, 0.33

Key (totals):
  Poison: 1 → 9.88, 2 → 8.71, 3 → 4.42
  Treat:  1 → 3.77, 2 → 8.12, 3 → 4.71, 4 → 6.41
  Interaction: 1 → 1.65, 2 → 3.52, 3 → 2.27, 4 → 2.44, 5 → 1.28, 6 → 3.26,
               7 → 1.50, 8 → 2.67, 9 → 0.84, 10 → 1.34, 11 → 0.94, 12 → 1.30

  Source      SS      DF   MS      F-stat
  Model       2.204   11   0.200    9.01
  Error       0.801   36   0.022
  Corrected   3.005   47

Data from Text
  Source        SS       DF   MS       F
  Average       11.030    1   11.030
  Poison         1.033    2    0.517   23.222
  Treat          0.921    3    0.307   13.806
  Interaction    0.250    6    0.042    1.874
  Error          0.801   36    0.022
  Total         14.036   48

Results x 1000
  Source        SS           DF   MS
  Average       11030.419     1   11030.419
  Poison         1033.013     2     516.506
  Treat           921.206     3     307.069
  Interaction     250.137     6      41.690
  Error           800.725    36      22.242
  Total         14035.500    48
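The ANOVA entries above follow from the poison/treatment/cell totals in the same way as in the vacuum-tube example. A minimal sketch (plain Python, names chosen here) that recovers the text's sums of squares from the raw survival times:

```python
# Two-way ANOVA for the Table 7.11 survival data: 3 poisons x 4 treatments,
# n = 4 replicates per cell.  cells[(poison, treat)] holds the four times.
cells = {
    (1, 1): [0.31, 0.45, 0.46, 0.43], (1, 2): [0.82, 1.10, 0.88, 0.72],
    (1, 3): [0.43, 0.45, 0.63, 0.76], (1, 4): [0.45, 0.71, 0.66, 0.62],
    (2, 1): [0.36, 0.29, 0.40, 0.23], (2, 2): [0.92, 0.61, 0.49, 1.24],
    (2, 3): [0.44, 0.35, 0.31, 0.40], (2, 4): [0.56, 1.02, 0.71, 0.38],
    (3, 1): [0.22, 0.21, 0.18, 0.23], (3, 2): [0.30, 0.37, 0.38, 0.29],
    (3, 3): [0.23, 0.25, 0.24, 0.22], (3, 4): [0.30, 0.36, 0.31, 0.33],
}
n, a, b = 4, 3, 4                           # replicates, poisons, treatments
N = a * b * n
grand = sum(sum(v) for v in cells.values())
cf = grand ** 2 / N                         # "Average"

ss_poison = sum(sum(sum(cells[(p, t)]) for t in range(1, 5)) ** 2
                for p in range(1, 4)) / (b * n) - cf
ss_treat = sum(sum(sum(cells[(p, t)]) for p in range(1, 4)) ** 2
               for t in range(1, 5)) / (a * n) - cf
ss_cells = sum(sum(v) ** 2 for v in cells.values()) / n - cf    # "Model"
ss_inter = ss_cells - ss_poison - ss_treat
ss_error = sum(y ** 2 for v in cells.values() for y in v) - cf - ss_cells
```

Note that the "Model" line of the first table is exactly the between-cells sum of squares, which then splits into Poison + Treat + Interaction.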
Factorials with More Than Two Factors

The basic procedure is a straightforward generalization of the two-factor case.
•The experiment consists of a total of abc···n observations.
•A completely randomized design requires these to be run in a random order.
Sum of squares (& degrees of freedom) decomposition
is also similar to the two-factor case:
SST = SSA + SSB + SSAB
+ SSC + SSAC + SSBC + SSABC
+ ··· + SSX + SSAX + SSBX + ··· + SSAB···X
+ ··· + SSE.
Quantitative and Qualitative Factors

The basic ANOVA procedure treats every factor as if it were qualitative.
•Sometimes an experiment will involve both quantitative and
qualitative factors
•This can be accounted for in the analysis to produce
regression models for the quantitative factors at each
level (or combination of levels) of the qualitative factors.
▪The resulting response curves and/or response surfaces are
often a considerable aid in the practical interpretation of the
results of an experiment.
▪Regression enables us to:
•Determine if relationships exist between the factors (the independent variables) and the responses (the dependent variables) and assess their importance.
•Describe the nature of the relationship.
•Assess the degree of accuracy of the relationship.
Factorial Experiments
Recall that a complete (or full) factorial design is an
experimental design containing every possible combination
of the levels of all factors.
•If there are k factors, with each factor i = 1, 2, ..., k having ni levels, then the resulting design is called an n1 x n2 x ··· x nk design and contains n1n2···nk design points.
▪If n1 = n2 = ··· = nk = n, then it is referred to as an n^k design.

We now focus on 2^k designs, which have two levels for each factor.
•The two levels are usually called low and high and could be
either quantitative or qualitative.
•Very widely used in industrial experimentation.
▪Especially useful for screening of potentially important
factors.
•Form a basic “building block” for other very useful designs.
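The "every possible combination" definition can be made concrete with a short sketch. The standard-library `itertools.product` enumerates a full factorial's design points (the helper name `full_factorial` is just a label chosen here):

```python
from itertools import product

def full_factorial(levels_per_factor):
    """Enumerate every treatment combination of a full factorial design.

    levels_per_factor: one list of levels per factor.
    """
    return list(product(*levels_per_factor))

# The 3 x 2 vacuum-tube design: 3 exhaust indexes and 2 voltages -> 6 points.
runs = full_factorial([[60, 90, 150], [127, 220]])

# A 2^3 design in coded units has 2 * 2 * 2 = 8 design points.
coded = full_factorial([[-1, +1]] * 3)
```

The number of design points is the product n1n2···nk, exactly as stated above.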
2^2 Factorial Designs

The Simplest 2^k Design:

•Two factors, each with two levels
▪Representation of factor levels:

  Verbal          Symbolic       Coded Variables
  A       B       A     B        x1     x2
  low     low     –     –        –1     –1
  low     high    –     +        –1     +1
  high    low     +     –        +1     –1
  high    high    +     +        +1     +1

  (Qualitative description of factors)   (Quantitative description of factors)

[Figure: the design region — the square in the (x1, x2) plane with corners at (–1, –1), (+1, –1), (–1, +1), and (+1, +1).]
Main Effects

def: The main effect of a factor in a two-level experimental design is defined as the change in the (average) response produced by a change in the level of that factor.
•Example: Suppose the responses observed from a 2^2 experiment are the following:

  x1    x2    Responses
  -1    -1    6, 2, 4
  +1    -1    9, 11, 10
  -1    +1    10, 12, 14
  +1    +1    15, 15, 12

Average response when B is low (x2 = -1): (6 + 2 + 4 + 9 + 11 + 10)/6 = 7
Average response when B is high (x2 = +1): (10 + 12 + 14 + 15 + 15 + 12)/6 = 13
Main Effect of B = 13 – 7 = 6

Average response when A is low (x1 = -1): (6 + 2 + 4 + 10 + 12 + 14)/6 = 8
Average response when A is high (x1 = +1): (9 + 11 + 10 + 15 + 15 + 12)/6 = 12
Main Effect of A = 12 – 8 = 4
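The "average at high minus average at low" rule is easy to mechanize. A minimal sketch for the 2^2 data above (the function name `main_effect` is a label chosen here):

```python
# Main effects for the 2^2 example: average response at a factor's high
# level minus the average response at its low level.
runs = [  # (x1, x2, replicate responses)
    (-1, -1, [6, 2, 4]),
    (+1, -1, [9, 11, 10]),
    (-1, +1, [10, 12, 14]),
    (+1, +1, [15, 15, 12]),
]

def main_effect(factor_index):
    hi = [y for run in runs if run[factor_index] == +1 for y in run[2]]
    lo = [y for run in runs if run[factor_index] == -1 for y in run[2]]
    return sum(hi) / len(hi) - sum(lo) / len(lo)

effect_A = main_effect(0)   # 12 - 8
effect_B = main_effect(1)   # 13 - 7
```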
Interaction Effect

def: The interaction effect between two factors in a two-level experimental design is defined as the average difference between the effect of one factor at the high level of the other and the effect of that factor at the low level of the other.
•Example:

  x1    x2    Responses
  -1    -1    6, 2, 4
  +1    -1    9, 11, 10
  -1    +1    10, 12, 14
  +1    +1    15, 15, 12

Average effect of A (x1) when B is low (x2 = -1):
  (9 + 11 + 10 – 6 – 2 – 4)/6 = 3
Average effect of A (x1) when B is high (x2 = +1):
  (15 + 15 + 12 – 10 – 12 – 14)/6 = 1

Interaction Effect AB = 1 – 3 = –2
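The same number falls out of the contrast method: attach the sign x1·x2 to each cell mean and average. The sketch below (names chosen here) computes the AB interaction both ways for the 2^2 data:

```python
# AB interaction for the 2^2 example, two equivalent ways.
runs = [
    (-1, -1, [6, 2, 4]),
    (+1, -1, [9, 11, 10]),
    (-1, +1, [10, 12, 14]),
    (+1, +1, [15, 15, 12]),
]
cell_mean = {(x1, x2): sum(ys) / len(ys) for x1, x2, ys in runs}

# Half the simple effect of A at each level of B, as on the slide.
effect_A_at_B_low = (cell_mean[(+1, -1)] - cell_mean[(-1, -1)]) / 2   # 3
effect_A_at_B_high = (cell_mean[(+1, +1)] - cell_mean[(-1, +1)]) / 2  # 1
interaction_AB = effect_A_at_B_high - effect_A_at_B_low               # -2

# Same number from the contrast signs x1 * x2.
contrast_AB = sum(x1 * x2 * cell_mean[(x1, x2)] for x1, x2, _ in runs) / 2
```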
A 2^3 Example – Worsted Yarn

Three factors, each with two levels.
•The observed value is the “number of cycles to failure of worsted yarn under cycles of repeated loading.”

  Factors                                          Coded Variables     Response
  A: Length  B: Amplitude  C: Load                                     Y = Cycles
  (mm)       (mm)          (g)        x1    x2    x3                   to failure    y = log10 Y
  250         8            40         -1    -1    -1                     674          2.83
  350         8            40         +1    -1    -1                    3636          3.56
  250        10            40         -1    +1    -1                     170          2.23
  350        10            40         +1    +1    -1                    1140          3.06
  250         8            50         -1    -1    +1                     292          2.47
  350         8            50         +1    -1    +1                    2000          3.30
  250        10            50         -1    +1    +1                      90          1.95
  350        10            50         +1    +1    +1                     360          2.56

▪Log10 of cycles to failure is used as the response because of the “large differences in magnitude between the original observations.”
2^3 Example: A Regression Perspective

First let's assume that the factors are quantitative and there are no interaction effects.
•In this case, the conjectured model is:

  y = β0 + β1 x1 + β2 x2 + β3 x3 + ε

•Consider fitting this model to the data via least squares (regression) to estimate each of the main effects.
▪We thus want to find values for the coefficients that minimize

  Σ_{j=1..N} (y_j − β0 − β1 x1j − β2 x2j − β3 x3j)²

where
  y_j = response at the jth design point,
  x_ij = (coded) level of factor i at the jth design point, and
  N = number of design points = 2^3 = 8.
2^3 Example: Regression Coefficients

•Example: β̂ = (X′y)/N, where y = (2.83, 3.56, 2.23, 3.06, 2.47, 3.30, 1.95, 2.56)′ and X has columns (1, x1, x2, x3):

      1  -1  -1  -1
      1   1  -1  -1
      1  -1   1  -1
  X = 1   1   1  -1
      1  -1  -1   1
      1   1  -1   1
      1  -1   1   1
      1   1   1   1

  β̂0 = ( 2.83 + 3.56 + 2.23 + 3.06 + 2.47 + 3.30 + 1.95 + 2.56)/8 =  2.745
  β̂1 = (−2.83 + 3.56 − 2.23 + 3.06 − 2.47 + 3.30 − 1.95 + 2.56)/8 =  0.375
  β̂2 = (−2.83 − 3.56 + 2.23 + 3.06 − 2.47 − 3.30 + 1.95 + 2.56)/8 = −0.295
  β̂3 = (−2.83 − 3.56 − 2.23 − 3.06 + 2.47 + 3.30 + 1.95 + 2.56)/8 = −0.175

Thus, the fitted model is ŷ = 2.745 + 0.375x1 – 0.295x2 – 0.175x3.
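The simple form β̂ = (X′y)/N holds here because the coded columns of X are orthogonal, so X′X = 8·I and no matrix inversion is needed. A minimal sketch verifying the four coefficients:

```python
# Least-squares coefficients for the yarn model y = b0 + b1*x1 + b2*x2 + b3*x3.
# With orthogonal coded columns, beta_hat = (X'y) / 8.
X = [  # columns: intercept, x1, x2, x3 (standard order)
    [1, -1, -1, -1], [1,  1, -1, -1], [1, -1,  1, -1], [1,  1,  1, -1],
    [1, -1, -1,  1], [1,  1, -1,  1], [1, -1,  1,  1], [1,  1,  1,  1],
]
y = [2.83, 3.56, 2.23, 3.06, 2.47, 3.30, 1.95, 2.56]

beta = [round(sum(X[j][i] * y[j] for j in range(8)) / 8, 4) for i in range(4)]
```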
2^3 Example: Computing the Main Effects

•Next, let's compute the main effect of each factor as the difference between the average response when the factor is at its high level and the average response when it is at its low level.
▪Factor A (x1), responses arranged by (x2, x3):

               (-1,-1)  (1,-1)  (-1,1)  (1,1)    Total    Average
  x1 = +1:      3.56    3.06    3.30    2.56     12.48     3.12
  x1 = -1:      2.83    2.23    2.47    1.95      9.48     2.37
  Difference:                                     3.00     0.75

•The average effect of Factor A is estimated to be A = 0.75.
•Similarly, the average effects of Factors B and C turn out to be B = –0.59 and C = –0.35.
▪Note that we had previously found

  β̂1 = 0.375;  β̂2 = −0.295;  β̂3 = −0.175

so the estimated main effects are twice the values of the corresponding regression coefficients.
Regression Coefficients & Average Effects

It is NOT A COINCIDENCE that the estimated average effects are twice the values of the corresponding regression coefficients.
•This will happen in general because, in the fitted model, we have

  ŷ = β̂0 + β̂1 x1 + β̂2 x2 + β̂3 x3

so that

  ∂ŷ/∂x_i = β̂_i

and, thus:
▪The regression coefficients estimate the change in the response as the result of a one-unit change in the corresponding coded variable, while
▪The average effect of factor i measures the result of an implicit two-unit change in the corresponding coded variable, since that is the distance between the coded levels (–1 to +1).
2^3 Example: Interactions via Regression (Continued)

•From the given data, we can then obtain:

  β̂0 = 2.745, β̂1 = 0.375, β̂2 = −0.295, β̂3 = −0.175,
  β̂12 = −0.015, β̂13 = −0.015, β̂23 = −0.020, β̂123 = −0.040

▪It, perhaps, should not be a surprise that the coefficient on the x1x2 term is –0.015, or half of the average effect (–0.03) previously computed for the AB interaction.
▪What may be surprising is that the intercept and coefficients for x1, x2, and x3 are exactly the same as before.
•This does not happen, in general, in regression when new variables are added to the regression equation.
•It happens here, however, because the columns of the design matrix are orthogonal.
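The orthogonality claim is easy to check numerically: in the full 2^3 model matrix (intercept, main effects, and all interaction columns), every pair of distinct columns has inner product zero, so X′X = 8·I and each coefficient is computed independently of the others. A sketch (helper name `row` chosen here):

```python
# All eight columns of the full 2^3 model matrix are mutually orthogonal.
base = [(-1, -1, -1), (1, -1, -1), (-1, 1, -1), (1, 1, -1),
        (-1, -1, 1), (1, -1, 1), (-1, 1, 1), (1, 1, 1)]

def row(x1, x2, x3):
    # Columns: 1, x1, x2, x3, x1x2, x1x3, x2x3, x1x2x3.
    return [1, x1, x2, x3, x1 * x2, x1 * x3, x2 * x3, x1 * x2 * x3]

X = [row(*p) for p in base]

# Inner products of every pair of distinct columns.
off_diagonal = [sum(X[r][i] * X[r][j] for r in range(8))
                for i in range(8) for j in range(8) if i != j]
diagonal = [sum(X[r][i] ** 2 for r in range(8)) for i in range(8)]
```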
Computation of Average Effects in 2k Designs

The preceding example showed that the average effects can be computed simply from the vector of responses and the “signs” within the design matrix corresponding to the coded variables.
•We can generalize this by defining the following notation (stated here for a design with 3 factors):
(1) = (total) response when all factors are at their low levels;
a = (total) response when Factor A (only) is at its high level;
b = (total) response when Factor B (only) is at its high level;
c = (total) response when Factor C (only) is at its high level;
ab = (total) response when A & B are at their high levels;
ac = (total) response when A & C are at their high levels;
bc = (total) response when B & C are at their high levels;
abc = (total) response when A, B & C are at their high levels.
•By defining these in relation to the total response, we allow
for replications at each design point.
Signs for Effects in a 23 Design
Effect Yarn
Response I A B AB C AC BC ABC Example
(1) + - - + - + + - 2.83
a + + - - - - + + 3.56
b + - + - - + - + 2.23
ab + + + + - - - - 3.06
c + - - + + - - + 2.47
ac + + - - + + - - 3.30
bc + - + - + - + - 1.95
abc + + + + + + + + 2.56
•To obtain the average effect of any factor, attach the sign on
the appropriate column to the corresponding response and:
(1) sum to produce a contrast which yields the total effect
of the factor, and then
(2) divide by n times the number of +'s to obtain the
average effect, where here
n = number of replications performed at each design
point.
2^3 Example: Computation of Selected Effects

•Using the preceding sign table with the data from the worsted yarn experiment, we can compute:

  A = (1/4n)[a + ab + ac + abc − b − c − bc − (1)]
    = (1/4)[3.56 + 3.06 + 3.30 + 2.56] − (1/4)[2.83 + 2.23 + 2.47 + 1.95]
    = 3.12 − 2.37 = ȳA+ − ȳA− = 0.75

  AB = (1/4n)[ab + (1) + abc + c − b − a − bc − ac]
     = (1/4)[3.06 + 2.83 + 2.56 + 2.47] − (1/4)[2.23 + 3.56 + 1.95 + 3.30]
     = 2.73 − 2.76 = ȳAB+ − ȳAB− = −0.03
Formulas for Computing the Effects in a 2^3 Design

In general, for a 2^k design with n replications per treatment combination, we have:

  Effect = Contrast / (n · 2^(k−1))

•Thus, for a 2^3 design we have:

  A   = ȳA+ − ȳA−     = (1/4n)[a + ab + ac + abc − b − c − bc − (1)]
  B   = ȳB+ − ȳB−     = (1/4n)[b + ab + bc + abc − a − c − ac − (1)]
  C   = ȳC+ − ȳC−     = (1/4n)[c + ac + bc + abc − a − b − ab − (1)]
  AB  = ȳAB+ − ȳAB−   = (1/4n)[ab + (1) + abc + c − b − a − bc − ac]
  AC  = ȳAC+ − ȳAC−   = (1/4n)[ac + (1) + abc + b − a − c − ab − bc]
  BC  = ȳBC+ − ȳBC−   = (1/4n)[bc + (1) + abc + a − b − c − ab − ac]
  ABC = ȳABC+ − ȳABC− = (1/4n)[abc − bc − ac + c − ab + b + a − (1)]
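These formulas can be collapsed into one routine: a main-effect column's sign at a design point is the factor's coded level, and an interaction column's sign is the product of its parent factors' levels. A sketch for the yarn data with n = 1 (the function name `effect` is a label chosen here):

```python
# Average effects for the 2^3 worsted-yarn data from
# Effect = Contrast / (n * 2**(k-1)).
y = {  # standard-order responses, keyed by coded levels (x1, x2, x3)
    (-1, -1, -1): 2.83, (1, -1, -1): 3.56, (-1, 1, -1): 2.23, (1, 1, -1): 3.06,
    (-1, -1, 1): 2.47, (1, -1, 1): 3.30, (-1, 1, 1): 1.95, (1, 1, 1): 2.56,
}
n, k = 1, 3

def effect(*factors):
    """factors: indices (0 = A, 1 = B, 2 = C); e.g. effect(0, 1) is AB."""
    contrast = 0.0
    for levels, resp in y.items():
        sign = 1
        for f in factors:
            sign *= levels[f]
        contrast += sign * resp
    return contrast / (n * 2 ** (k - 1))

A, B, C = effect(0), effect(1), effect(2)
AB, ABC = effect(0, 1), effect(0, 1, 2)
```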
Judging the Statistical Significance of Effects

To this point, we have simply shown how various effects can be estimated via regression or via some simple computational formulas.
•We have not, however, determined which of the estimated
effects are significantly different from zero.
▪We will use the standard analysis of variance for factorial
designs to do so.
▪We can also exploit the structure of the two-level designs to
develop computationally straightforward means of
computing the appropriate sums of squares.
Computing the Sums of Squares in a 2^k Design

For a 2^k design, the sums of squares can be computed simply via:

  SS_Effect = (Contrast)² / (n · 2^k)

•Thus, for example, in a 2^3 design we have:

  SSA   = (1/8n)[a + ab + ac + abc − b − c − bc − (1)]²
  SSB   = (1/8n)[b + ab + bc + abc − a − c − ac − (1)]²
  SSC   = (1/8n)[c + ac + bc + abc − a − b − ab − (1)]²
  SSAB  = (1/8n)[ab + (1) + abc + c − b − a − bc − ac]²
  SSAC  = (1/8n)[ac + (1) + abc + b − a − c − ab − bc]²
  SSBC  = (1/8n)[bc + (1) + abc + a − b − c − ab − ac]²
  SSABC = (1/8n)[abc − bc − ac + c − ab + b + a − (1)]²
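Applied to the surface-finish cell totals used later in the Yates table (16, 22, 20, 27, 21, 23, 18, 30 in standard order, with n = 2), the rule SS = Contrast²/(n·2^k) gives the sums of squares directly. A sketch (the function name `sum_sq` is a label chosen here):

```python
import math

# Sums of squares for the 2^3 surface-finish example, n = 2 replicates,
# computed from SS = Contrast**2 / (n * 2**k) using the cell TOTALS.
totals = {  # (x1, x2, x3) -> total of the n = 2 replicates
    (-1, -1, -1): 16, (1, -1, -1): 22, (-1, 1, -1): 20, (1, 1, -1): 27,
    (-1, -1, 1): 21, (1, -1, 1): 23, (-1, 1, 1): 18, (1, 1, 1): 30,
}
n, k = 2, 3

def sum_sq(*factors):
    contrast = sum(t * math.prod(lv[f] for f in factors)
                   for lv, t in totals.items())
    return contrast ** 2 / (n * 2 ** k)

SSA, SSB, SSC = sum_sq(0), sum_sq(1), sum_sq(2)
SSAB, SSABC = sum_sq(0, 1), sum_sq(0, 1, 2)
```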
Example: Surface Finish

An experiment was performed to investigate the surface finish of a metal part.
•The experiment is a 2^3 factorial design in the factors feed rate (A), depth of cut (B), and tool angle (C), with n = 2 replicates.
▪The data table (not reproduced here) lists the treatment combinations in standard order.
Summary of Computations: Surface Finish Example

•Computing the estimated effects and sums of squares in Excel using the preceding formulas, we can obtain the results shown in the worksheet (not reproduced here).
▪New columns are formed by multiplying elements of the main-effects columns.
ANOVA Table: Surface Finish Example

•Using the preceding sums of squares, we can also produce the corresponding ANOVA table (not reproduced here).
▪Only Factor A (Feed Rate) is significant at the 5% level.
▪Factor B is significant at the 10% level.
Further Assessment of the Significance of Effects

Fact: If an effect is negligible, then its estimated (average) effect is normally distributed with mean zero and variance σ²/(n·2^(k−2)).
•Given this, the standard error of any effect estimate in a 2^k design is defined to be:

  s.e.(effect) = sqrt( MSE / (n·2^(k−2)) )

▪Thus a 95% confidence interval on an estimated effect is given by

  Effect estimate ± t(0.025, df(MSE)) × [s.e.(effect)]

•If the 95% CI does not contain zero, then the corresponding factor is considered to be significant at the α = 0.05 level of significance.
▪If the estimated effects are plotted in a normal probability plot, then the negligible effects should plot roughly in a straight line.
Example: Screening for Significant Effects

•For the surface finish example, we have MSE = 2.4375 and df(MSE) = 8, so that

  t(0.025, 8) × s.e.(effect) = 2.306 × sqrt( 2.4375 / (2·2^(3−2)) ) ≈ 1.800

▪Thus, effects whose magnitudes are smaller than 1.80 are considered to not be significant at the 5% level.

[Figure: normal probability plot of the estimated effects (A, B, AB, ABC, C, AC, BC); only A falls well off the straight line formed by the other effects.]
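The screening cutoff can be verified in a few lines; the t quantile t(0.025, 8) = 2.306 is taken from tables, and the effect estimates are those from the Yates/contrast computations later in these notes:

```python
import math

# Screening cutoff for the surface-finish effects: MSE = 2.4375 on 8 df,
# t(0.025, 8) = 2.306, n = 2 replicates, k = 3 factors.
MSE, n, k = 2.4375, 2, 3
t_crit = 2.306                        # upper 2.5% point of t with 8 df

se_effect = math.sqrt(MSE / (n * 2 ** (k - 2)))
margin = t_crit * se_effect           # half-width of the 95% CI on an effect

# Estimated effects; only A exceeds the cutoff, so only feed rate is
# significant at the 5% level.
effects = {"A": 3.375, "B": 1.625, "AB": 1.375, "ABC": 1.125,
           "C": 0.875, "AC": 0.125, "BC": -0.625}
significant = [name for name, e in effects.items() if abs(e) > margin]
```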
Surface Finish Example: Fit Via Regression
Regression Analysis: Response versus A, B, C, AB, AC, BC, ABC

The regression equation is


Response = 11.1 + 1.69 A + 0.812 B + 0.438 C + 0.687 AB
+ 0.062 AC - 0.313 BC + 0.562 ABC

Predictor Coef SE Coef T P


Constant 11.0625 0.3903 28.34 0.000
A 1.6875 0.3903 4.32 0.003
B 0.8125 0.3903 2.08 0.071
C 0.4375 0.3903 1.12 0.295
AB 0.6875 0.3903 1.76 0.116
AC 0.0625 0.3903 0.16 0.877
BC -0.3125 0.3903 -0.80 0.446
ABC 0.5625 0.3903 1.44 0.188

S = 1.561 R-Sq = 79.0% R-Sq(adj) = 60.7%

Analysis of Variance
Source DF SS MS F P
Regression 7 73.437 10.491 4.30 0.029
Residual Error 8 19.500 2.437
Total 15 92.937
Yarn Example Revisited

Data (Again):

  Factors                                          Coded Variables     Response
  A: Length  B: Amplitude  C: Load                                     Y = Cycles
  (mm)       (mm)          (g)        x1    x2    x3                   to failure    y = log10 Y
  250         8            40         -1    -1    -1                     674          2.83
  350         8            40         +1    -1    -1                    3636          3.56
  250        10            40         -1    +1    -1                     170          2.23
  350        10            40         +1    +1    -1                    1140          3.06
  250         8            50         -1    -1    +1                     292          2.47
  350         8            50         +1    -1    +1                    2000          3.30
  250        10            50         -1    +1    +1                      90          1.95
  350        10            50         +1    +1    +1                     360          2.56
Yarn Example: Fitted Model
Fractional Factorial Fit: Response versus A, B, C

Estimated Effects and Coefficients for Response (coded units)

Term Effect Coef


Constant 2.7950
A 0.8500 0.4250
B -0.4900 -0.2450
C -0.2500 -0.1250
A*B 0.0700 0.0350
A*C 0.0700 0.0350
B*C 0.0600 0.0300
A*B*C 0.0200 0.0100

Analysis of Variance for Response (coded units)

Source DF Seq SS Adj SS Adj MS


Main Effects 3 2.05020 2.05020 0.683400
2-Way Interactions 3 0.02680 0.02680 0.008933
3-Way Interactions 1 0.00080 0.00080 0.000800
Residual Error 0 0.00000 0.00000 0.000000
Total 7 2.07780
What IS Going On in the Previous Example

In general, the number of parameters that can be estimated can be no more than the number of observations.
•To test for significance in a full factorial 23 design, we need to
estimate 9 quantities:
▪1 mean effect,
▪3 main effects,
▪4 interactions, and
▪the variance of the random error terms.
•Thus, with only one replication at each of the 8 design points,
we do not have enough information to estimate all the
desired parameters.
Dealing with the Problem

We have two recourses when we lack sufficient information to estimate all desired model parameters:
•Replicate the experiment.
▪Get more data!
•Rely on the “sparsity of effects” principle that states that
most systems are dominated by the main effects and low-
order interactions.
▪Very often true in many real-world contexts.
•Some authors claim that they have never seen a “real”
four-factor or higher interaction.
•Three-way interactions are also relatively rare.
•Large portion of two-way interactions also are often
negligible.
▪This principle enables us to “pool” or combine the higher-
order interactions as an estimate of error.
Yarn Example: Fitted Model w/o Interactions
Fractional Factorial Fit: Response versus A, B, C

Estimated Effects and Coefficients for Response (coded units)

Term        Effect     Coef      SE Coef    T        P
Constant               2.7450    0.02475   110.91    0.000
A           0.7500     0.3750    0.02475    15.15    0.000
B          -0.5900    -0.2950    0.02475   -11.92    0.000
C          -0.3500    -0.1750    0.02475    -7.07    0.002

Analysis of Variance for Response (coded units)

Source           DF   Seq SS    Adj SS    Adj MS     F        P
Main Effects      3   2.06620   2.06620   0.688733   140.56   0.000
Residual Error    4   0.01960   0.01960   0.004900
Total             7   2.08580
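The residual line above is exactly the pooled interaction variation. A sketch verifying the pooling numerically for the single-replicate yarn data (names chosen here; SS = Contrast²/8 since n = 1, k = 3):

```python
# "Sparsity of effects" pooling for the single-replicate yarn 2^3: the four
# interaction sums of squares are combined into an error estimate with 4 df.
y = {
    (-1, -1, -1): 2.83, (1, -1, -1): 3.56, (-1, 1, -1): 2.23, (1, 1, -1): 3.06,
    (-1, -1, 1): 2.47, (1, -1, 1): 3.30, (-1, 1, 1): 1.95, (1, 1, 1): 2.56,
}

def ss(*factors):
    contrast = 0.0
    for lv, resp in y.items():
        s = 1
        for f in factors:
            s *= lv[f]
        contrast += s * resp
    return contrast ** 2 / 8          # n * 2**k with n = 1, k = 3

ss_pooled = ss(0, 1) + ss(0, 2) + ss(1, 2) + ss(0, 1, 2)  # AB+AC+BC+ABC
df_pooled = 4
mse = ss_pooled / df_pooled
```

The result matches the "Residual Error" row of the fit without interactions (Seq SS 0.01960 on 4 df, MS 0.0049).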
Surface Finish Example: Fit Via Regression

Recall the previous results from fitting a (full) regression model to the surface finish data.
• What is the “best” model for this data?
The regression equation is
Response = 11.1 + 1.69 A + 0.812 B + 0.438 C + 0.687 AB
+ 0.062 AC - 0.313 BC + 0.562 ABC

Predictor Coef SE Coef T P


Constant 11.0625 0.3903 28.34 0.000
A 1.6875 0.3903 4.32 0.003
B 0.8125 0.3903 2.08 0.071
C 0.4375 0.3903 1.12 0.295
AB 0.6875 0.3903 1.76 0.116
AC 0.0625 0.3903 0.16 0.877
BC -0.3125 0.3903 -0.80 0.446
ABC 0.5625 0.3903 1.44 0.188
S = 1.561 R-Sq = 79.0% R-Sq(adj) = 60.7%
Analysis of Variance
Source DF SS MS F P
Regression 7 73.437 10.491 4.30 0.029
Residual Error 8 19.500 2.437
Total 15 92.937
Selecting a Parsimonious Model

What is the “best” model for a given set of data?
•This typically is regarded as one that:
▪Is significant, as indicated by the overall F-statistic,
▪Exhibits a relatively small error component (MSE),
▪Has a "large" R2, and
▪Is "parsimonious," that is, it involves the fewest
parameters.
•In general, the process of developing a parsimonious model entails sequentially adding (or deleting) one or more variables to (or from) the regression model.
▪This process is often facilitated by considering what happens when variables are either added to or deleted from a model. Specifically:
•Add significant terms to a model.
•Delete insignificant terms from a model.
Yates Method vs. Contrast Method

Yates Algorithm versus Contrast Method

         A    B    C    y    (1)   (2)   (3)   Divisor   Estimate
  (1)   -1   -1   -1   16    38    85    177     16      11.0625
  a      1   -1   -1   22    47    92     27      8       3.375
  b     -1    1   -1   20    44    13     13      8       1.625
  ab     1    1   -1   27    48    14     11      8       1.375
  c     -1   -1    1   21     6     9      7      8       0.875
  ac     1   -1    1   23     7     4      1      8       0.125
  bc    -1    1    1   18     2     1     -5      8      -0.625
  abc    1    1    1   30    12    10      9      8       1.125

Contrast Method

         A    B    C    AB   AC   BC   ABC    y
  (1)   -1   -1   -1    1    1    1    -1    16
  a      1   -1   -1   -1   -1    1     1    22
  b     -1    1   -1   -1    1   -1     1    20
  ab     1    1   -1    1   -1   -1    -1    27
  c     -1   -1    1    1   -1   -1     1    21
  ac     1   -1    1   -1    1   -1    -1    23
  bc    -1    1    1   -1   -1    1    -1    18
  abc    1    1    1    1    1    1     1    30

  Effect estimate   3.375    1.625    0.875   1.375   0.125   -0.625   1.125
  Sum Squares      45.5625  10.5625  3.0625  7.5625  0.0625   1.5625  5.0625
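Yates' algorithm itself is short enough to sketch: starting from the responses in standard order, repeat k passes in which the first half of the new column holds pairwise sums and the second half pairwise differences; then divide the final column by n·2^k (first entry) and n·2^(k−1) (the rest). The function name `yates` is a label chosen here; note the Yates output is ordered mean, A, B, AB, C, AC, BC, ABC, whereas the contrast-method row above lists A, B, C, AB, AC, BC, ABC.

```python
# Yates' algorithm for a 2^k design, applied to the surface-finish cell
# totals (n = 2 replicates) in standard order (1), a, b, ab, c, ac, bc, abc.
def yates(y, n=1):
    col = list(y)
    k = len(y).bit_length() - 1          # len(y) = 2**k
    for _ in range(k):
        sums = [col[i] + col[i + 1] for i in range(0, len(col), 2)]
        diffs = [col[i + 1] - col[i] for i in range(0, len(col), 2)]
        col = sums + diffs
    est = [col[0] / (n * 2 ** k)] + [c / (n * 2 ** (k - 1)) for c in col[1:]]
    return col, est

contrasts, estimates = yates([16, 22, 20, 27, 21, 23, 18, 30], n=2)
```

The final column and estimates agree with the contrast-method computations, which is the point of the comparison: Yates trades one sign table for k passes of adds and subtracts.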
Questions?
