Taguchi Case Study

2. FACTORIAL DESIGNS

2.1 History
Factorial designs were used in the 19th century by John Bennet Lawes and Joseph Henry Gilbert of the Rothamsted Experimental Station.

Ronald Fisher argued in 1926 that "complex" designs (such as factorial designs) were more
efficient than studying one factor at a time.[2]

Fisher thought that a factorial design allows the effect of several factors and even interactions
between them to be determined with the same number of trials as are necessary to determine any
one of the effects by itself with the same degree of accuracy.

The term "factorial" may not have been used in print before 1935, when Fisher used it in his
book The Design of Experiments.

2.2 Definition
In statistics, a full factorial experiment is an experiment whose design consists of two or more
factors, each with discrete possible values or "levels", and whose experimental units take on all
possible combinations of these levels across all such factors, as shown in Figure 2.1. A full
factorial design may also be called a fully crossed design. Such an experiment allows the
investigator to study the effect of each factor on the response variable, as well as the effects of
interactions between factors on the response variable.


For the vast majority of factorial experiments, each factor has only two levels. For example, with two factors each taking two levels, a factorial experiment would have four treatment combinations in total, and is usually called a 2 x 2 factorial design.
If the number of combinations in a full factorial design is too high to be logistically feasible, a
fractional factorial design may be done, in which some of the possible combinations (usually at
least half) are omitted.

Fig.2.1

Probably the easiest way to begin understanding factorial designs is by looking at an example.
Let's imagine a design where we have an educational program where we would like to look at a
variety of program variations to see which works best. For instance, we would like to vary the
amount of time the children receive instruction with one group getting 1 hour of instruction per
week and another getting 4 hours per week. And, we'd like to vary the setting with one group
getting the instruction in-class (probably pulled off into a corner of the classroom) and the other
group being pulled-out of the classroom for instruction in another room.

Let's begin by doing some defining of terms. In factorial designs, a factor is a major independent
variable. In this example we have two factors: time in instruction and setting. A level is a
subdivision of a factor. In this example, time in instruction has two levels and setting has two
levels. Sometimes we depict a factorial design with a numbering notation. In this example, we
can say that we have a 2 x 2 (spoken "two-by-two") factorial design. In this notation, the number
of numbers tells you how many factors there are and the number values tell you how many
levels. If I said I had a 3 x 4 factorial design, you would know that I had 2 factors and that one
factor had 3 levels while the other had 4. Order of the numbers makes no difference and we
could just as easily term this a 4 x 3 factorial design. The number of different treatment groups
that we have in any factorial design can easily be determined by multiplying through the number
notation. For instance, in our example we have 2 x 2 = 4 groups. In our notational example, we
would need 3 x 4 = 12 groups.
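To make the multiplication rule concrete, here is a minimal Python sketch (the factor names and level labels are just the ones from this example) that enumerates the treatment groups:

```python
from itertools import product

# Factors from the example above; the labels are illustrative only.
factors = {
    "time_in_instruction": ["1 hr/week", "4 hr/week"],
    "setting": ["in-class", "pull-out"],
}

# Each treatment group is one combination of factor levels, so the number
# of groups is the product of the numbers of levels (here 2 x 2 = 4).
groups = list(product(*factors.values()))
print(len(groups))
for g in groups:
    print(g)
```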

We can also depict a factorial design in design notation. Because of the treatment level
combinations, it is useful to use subscripts on the treatment (X) symbol. We can see in the figure
that there are four groups, one for each combination of levels of factors. It is also immediately
apparent that the groups were randomly assigned and that this is a posttest-only design.

Now, let's look at a variety of different results we might get from this simple 2 x 2 factorial
design. Each of the following figures describes a different possible outcome. And each outcome
is shown in table form (the 2 x 2 table with the row and column averages) and in graphic form
(with each factor taking a turn on the horizontal axis). You should convince yourself that the
information in the tables agrees with the information in both of the graphs. You should also
convince yourself that the pair of graphs in each figure show the exact same information graphed
in two different ways. The lines that are shown in the graphs are technically not necessary -- they
are used as a visual aid to enable you to easily track where the averages for a single level go
across levels of another factor. Keep in mind that the values shown in the tables and graphs are
group averages on the outcome variable of interest. In this example, the outcome might be a test
of achievement in the subject being taught. We will assume that scores on this test range from 1
to 10 with higher values indicating greater achievement. You should study carefully the
outcomes in each figure in order to understand the differences between these cases.
2.3 The advantage of Factorial Design
A two-way design enables us to examine the joint (or interaction) effect of the
independent variables on the dependent variable. An interaction means that the effect
one independent variable has on the dependent variable is not the same for all levels of the
other independent variable. We cannot get this information by running separate one-way
analyses.

Factorial Designs are widely used in experiments involving several factors.

There are several special cases of the general factorial design that are important because
they are widely used, and form the basis of other designs of considerable practical value.

Factorial design can lead to more powerful tests by reducing the error (within-cell)
variance. This point will become clear when we compare the results of one-way analyses
with the results of a two-way analysis or t-tests.

With factorial designs, we don't have to compromise when answering these questions. We
can have it both ways if we cross each of our two times in instruction conditions with
each of our two settings.

2.4 The Main Effects


A main effect is an outcome that is a consistent difference between levels of a factor. For
instance, we would say there's a main effect for setting if we find a statistical difference between
the averages for the in-class and pull-out groups, at all levels of time in instruction. The first
figure depicts a main effect of time. For all settings, the 4 hour/week condition worked better
than the 1 hour/week one. It is also possible to have a main effect for setting (and none for time).

Fig. 2.2a Main effect of time; Fig. 2.2b Main effect of setting

In the second main effect graph we see that in-class training was better than pull-out training for
all amounts of time.

Fig. 2.3 Main effect on time and setting

Finally, it is possible to have a main effect on both variables simultaneously as depicted in the
third main effect figure. In this instance 4 hours/week always works better than 1 hour/week and
in-class setting always works better than pull-out.
2.5 Interaction Effects

Fig. 2.4 Interaction effect

If we could only look at main effects, factorial designs would be useful. But, because of the way
we combine levels in factorial designs, they also enable us to examine the interaction effects that
exist between factors. An interaction effect exists when differences on one factor depend on the
level you are on another factor. It's important to recognize that an interaction is between factors,
not levels. We wouldn't say there's an interaction between 4 hours/week and in-class treatment.
Instead, we would say that there's an interaction between time and setting, and then we would go
on to describe the specific levels involved.

How do you know if there is an interaction in a factorial design? There are three ways you can
determine there's an interaction. First, when you run the statistical analysis, the statistical table
will report on all main effects and interactions. Second, you know there's an interaction when you
can't talk about the effect of one factor without mentioning the other factor. If you can say at the end
of our study that time in instruction makes a difference, then you know that you have a main
effect and not an interaction (because you did not have to mention the setting factor when
describing the results for time). On the other hand, when you have an interaction it is impossible
to describe your results accurately without mentioning both factors. Finally, you can always spot
an interaction in the graphs of group means -- whenever there are lines that are not parallel there
is an interaction present! If you check out the main effect graphs above, you will notice that all of
the lines within a graph are parallel. In contrast, for all of the interaction graphs, you will see that
the lines are not parallel.
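For a 2 x 2 design, the "non-parallel lines" check has a simple numeric counterpart: the difference of the simple effects across levels. A small sketch, using made-up group means that follow the cross-over pattern described below:

```python
# Hypothetical cell means for a 2 x 2 design, keyed by (time, setting).
means = {
    ("1 hr", "in-class"): 5.0, ("1 hr", "pull-out"): 7.0,
    ("4 hr", "in-class"): 7.0, ("4 hr", "pull-out"): 5.0,
}

# The means-plot lines are parallel exactly when the simple effect of
# setting is the same at both levels of time, so the difference of
# differences is a quick numeric check for a 2 x 2 interaction.
effect_at_1hr = means[("1 hr", "in-class")] - means[("1 hr", "pull-out")]
effect_at_4hr = means[("4 hr", "in-class")] - means[("4 hr", "pull-out")]
print(effect_at_4hr - effect_at_1hr)  # nonzero -> non-parallel lines
```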

Fig. 2.5 Interaction effects

In the first interaction effect graph, we see that one combination of levels -- 4 hours/week and in-class setting -- does better than the other three. In the second interaction we have a more complex "cross-over" interaction. Here, at 1 hour/week the pull-out group does better than the in-class group while at 4 hours/week the reverse is true. Furthermore, both of these combinations of levels do equally well.

2.6 Summary
Factorial design has several important features. First, it has great flexibility for exploring or
enhancing the signal (treatment) in our studies. Whenever we are interested in examining
treatment variations, factorial designs should be strong candidates as the designs of choice.
Second, factorial designs are efficient. Instead of conducting a series of independent studies we
are effectively able to combine these studies into one. Finally, factorial designs are the only
effective way to examine interaction effects.
So far, we have only looked at a very simple 2 x 2 factorial design structure. You may want to
look at some factorial design variations to get a deeper understanding of how they work. You
may also want to examine how we approach the statistical analysis of factorial designs, which we turn to next.

2.7 Calculations
A two-factor factorial design is an experimental design in which data is collected for all possible
combinations of the levels of the two factors of interest.

If equal sample sizes are taken for each of the possible factor combinations then the design is a
balanced two-factor factorial design.

A balanced a × b factorial design is a factorial design for which there are a levels of factor A, b levels of factor B, and n independent replications taken at each of the ab treatment combinations. The design size is N = abn.

The effect of a factor is defined to be the average change in the response associated with a change in the level of the factor. This is usually called a main effect.

If the average change in response across the levels of one factor is not the same at all levels of the other factor, then we say there is an interaction between the factors.

Table 2.1 Calculations

where n_ij is the number of observations in cell (i, j).

EXAMPLE (a 2 × 2 balanced design): A virologist is interested in studying the effects of a = 2 different culture media (M) and b = 2 different times (T) on the growth of a particular virus.

She performs a balanced design with n = 6 replicates for each of the 4 M × T treatment combinations. The N = 24 measurements were taken in a completely randomized order. The results:

Table 2.2

The effect of changing T from 12 to 18 hours on the response depends on the level of M:

For medium 1, the T effect = 37.16 - 23.3 = 13.86

For medium 2, the T effect = 32 - 26 = 6

The effect on the response of changing M from medium 1 to 2 depends on the level of T:

For T = 12 hours, the M effect = 26 - 23.3 = 2.7

For T = 18 hours, the M effect = 32 - 37.16 = -5.16

If either of these pairs of estimated effects is significantly different, then we say there exists a significant interaction between factors M and T. For the 2 × 2 design example:

If 13.86 is significantly different from 6 for the T effects, then we have a significant M × T interaction. Or,

If 2.7 is significantly different from -5.16 for the M effects, then we have a significant M × T interaction.
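The simple-effect arithmetic above is easy to reproduce; the following sketch recomputes the four conditional effects directly from the cell means of Table 2.2:

```python
# Cell means from Table 2.2 (virus growth example): medium by time (hours).
cell_mean = {("M1", 12): 23.3, ("M1", 18): 37.16,
             ("M2", 12): 26.0, ("M2", 18): 32.0}

# T effect within each medium, and M effect at each time, as in the text.
t_effect_m1 = cell_mean[("M1", 18)] - cell_mean[("M1", 12)]   # 13.86
t_effect_m2 = cell_mean[("M2", 18)] - cell_mean[("M2", 12)]   # 6.0
m_effect_t12 = cell_mean[("M2", 12)] - cell_mean[("M1", 12)]  # 2.7
m_effect_t18 = cell_mean[("M2", 18)] - cell_mean[("M1", 18)]  # -5.16
print(t_effect_m1, t_effect_m2, m_effect_t12, m_effect_t18)
```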

There are two ways of defining an interaction between two factors A and B:

If the average change in response between the levels of factor A is not the same at all levels of factor B, then an interaction exists between factors A and B.

The lack of additivity of factors A and B, or the nonparallelism of the mean profiles of A and B, is called the interaction of A and B.

When we assume there is no interaction between A and B, we say the effects are additive.

An interaction plot or treatment means plot is a graphical tool for checking for potential interactions between two factors. To make an interaction plot:

1. Calculate the cell means for all ab combinations of the levels of A and B.

2. Plot the cell means against the levels of factor A.

3. Connect and label the means that share the same level of factor B.

The roles of A and B can be reversed to make a second interaction plot; a minimal plotting sketch follows below.
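Here is a minimal matplotlib sketch of such a plot, reusing the cell means from the example above (the axis labels are assumptions for illustration):

```python
import matplotlib.pyplot as plt

# Cell means keyed by (level of B, level of A), reusing Table 2.2.
cell_mean = {("M1", 12): 23.3, ("M1", 18): 37.16,
             ("M2", 12): 26.0, ("M2", 18): 32.0}

levels_a = [12, 18]            # plot factor A (time) on the x-axis
for b in ["M1", "M2"]:         # one labeled line per level of factor B
    plt.plot(levels_a, [cell_mean[(b, a)] for a in levels_a],
             marker="o", label=b)

plt.xlabel("Time (hours)")
plt.ylabel("Mean response")
plt.legend(title="Medium")
plt.show()                     # non-parallel lines suggest an interaction
```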


Interpretation of the interaction plot: Parallel lines usually indicate no significant interaction. Severe lack of parallelism usually indicates a significant interaction. Moderate lack of parallelism suggests a possible significant interaction may exist.

Statistical significance of an interaction effect depends on the magnitude of the MSE: for small values of the MSE, even small interaction effects (less nonparallelism) may be significant.

When an A × B interaction is large, the corresponding main effects of A and B may have little practical meaning. Knowledge of the A × B interaction is often more useful than knowledge of the main effects.

We usually say that a significant interaction can mask the interpretation of significant main effects. That is, the experimenter must examine the levels of one factor, say A, at fixed levels of the other factor to draw conclusions about the main effect of A.

It is possible to have a significant interaction between two factors, while the main effects for both factors are not significant. This would happen when the interaction plot shows interactions in different directions that balance out over one or both factors (such as an X pattern). This type of interaction, however, is uncommon.
2.8 The Interaction Model

The interaction model for a two-factor completely randomized design is:

$y_{ijk} = \mu + \tau_i + \beta_j + (\tau\beta)_{ij} + \varepsilon_{ijk}$

where $\mu$ is the baseline mean, $\tau_i$ is the $i$th factor A effect, $\beta_j$ is the $j$th factor B effect, $(\tau\beta)_{ij}$ is the $(i,j)$th A × B interaction effect, and $\varepsilon_{ijk}$ is the random error of the $k$th observation from the $(i,j)$th cell.

We assume $\varepsilon_{ijk} \sim \text{IID } N(0, \sigma^2)$. For now, we will also assume all effects are fixed. If $(\tau\beta)_{ij}$ is removed from the interaction model, we have the additive model:

Equation 2.1
$y_{ijk} = \mu + \tau_i + \beta_j + \varepsilon_{ijk}$

If we impose the constraints

Equation 2.2
$\sum_{i=1}^{a}\tau_i = 0, \qquad \sum_{j=1}^{b}\beta_j = 0, \qquad \sum_{i=1}^{a}(\tau\beta)_{ij} = \sum_{j=1}^{b}(\tau\beta)_{ij} = 0$

then the least squares estimates of the model parameters are

$\hat{\mu} = \bar{y}_{\cdot\cdot\cdot}, \quad \hat{\tau}_i = \bar{y}_{i\cdot\cdot} - \bar{y}_{\cdot\cdot\cdot}, \quad \hat{\beta}_j = \bar{y}_{\cdot j\cdot} - \bar{y}_{\cdot\cdot\cdot}, \quad \widehat{(\tau\beta)}_{ij} = \bar{y}_{ij\cdot} - \bar{y}_{i\cdot\cdot} - \bar{y}_{\cdot j\cdot} + \bar{y}_{\cdot\cdot\cdot}$

If we substitute these estimates into the interaction model we get

Equation 2.3
$y_{ijk} = \hat{\mu} + \hat{\tau}_i + \hat{\beta}_j + \widehat{(\tau\beta)}_{ij} + \hat{\varepsilon}_{ijk} = \bar{y}_{\cdot\cdot\cdot} + (\bar{y}_{i\cdot\cdot} - \bar{y}_{\cdot\cdot\cdot}) + (\bar{y}_{\cdot j\cdot} - \bar{y}_{\cdot\cdot\cdot}) + (\bar{y}_{ij\cdot} - \bar{y}_{i\cdot\cdot} - \bar{y}_{\cdot j\cdot} + \bar{y}_{\cdot\cdot\cdot}) + \hat{\varepsilon}_{ijk}$

where $\hat{\varepsilon}_{ijk}$ is the $k$th residual from the $(i,j)$th treatment cell, and $\hat{\varepsilon}_{ijk} = y_{ijk} - \bar{y}_{ij\cdot}$.

For the 2 × 2 design:
$\bar{y}_{\cdot\cdot\cdot} = 29.625$; the time means are 24.6 (T = 12) and 34.586 (T = 18); the media means are 30.25 (M1) and 29.00 (M2).
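The least squares estimates are just contrasts of cell, row, column, and grand means, so they can be computed in a few lines. A sketch using the rounded cell means above (which is why the grand mean comes out 29.615 rather than the 29.625 obtained from the raw data):

```python
import numpy as np

# Cell means y-bar_ij. (rows: media M1, M2; columns: times 12 h, 18 h).
ybar = np.array([[23.3, 37.16],
                 [26.0, 32.0]])

grand = ybar.mean()                      # mu-hat
row_eff = ybar.mean(axis=1) - grand      # tau-hat_i   (media effects)
col_eff = ybar.mean(axis=0) - grand      # beta-hat_j  (time effects)
# (tau beta)-hat_ij = y-bar_ij. - y-bar_i.. - y-bar_.j. + y-bar...
inter = (ybar - ybar.mean(axis=1, keepdims=True)
              - ybar.mean(axis=0, keepdims=True) + grand)
print(grand, row_eff, col_eff, inter, sep="\n")
```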

2.9 Statistical Analysis of the Fixed-Effects Model

$SS_{AB}$ = the A × B interaction sum of squares, with df = (a - 1)(b - 1).

Fig. 2.6 Statistical Analysis of the Fixed-Effects Model

Balanced Two-Factor Factorial ANOVA Table

Table 2.3
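In practice the ANOVA table is rarely computed by hand. Here is a sketch using statsmodels; the data frame below is illustrative only (not the experiment's real data), and the layout it assumes is one row per observation with factor columns A and B and response y:

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Illustrative balanced data: 2 x 2 cells with n = 2 replicates each.
df = pd.DataFrame({"A": ["M1"] * 4 + ["M2"] * 4,
                   "B": [12, 12, 18, 18] * 2,
                   "y": [22.9, 23.7, 37.0, 37.3, 26.5, 25.5, 31.4, 32.6]})

model = smf.ols("y ~ C(A) * C(B)", data=df).fit()  # full interaction model
print(sm.stats.anova_lm(model, typ=2))             # two-factor ANOVA table
```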
2.10 Factorial Design Variations

Here, we'll look at a number of different factorial designs. We'll begin with a two-factor design
where one of the factors has more than two levels. Then we'll introduce the three-factor design.
Finally, we'll present the idea of the incomplete factorial design.

A 2x3 Example
Fig. 2.7 2x3 Example main effect of setting

For these examples, let's construct an example where we wish to study the effect of different treatment combinations for cocaine abuse. Here, the dependent measure is a severity-of-illness rating done by the treatment staff. The outcome ranges from 1 to 10, where higher scores indicate more severe illness: in this case, more severe cocaine addiction. Furthermore, assume that the levels of treatment are:

Factor 1: Treatment

o psychotherapy

o behavior modification

Factor 2: Setting

o inpatient

o day treatment

o outpatient

Fig. 2.8 main effect of treatment


Note that the setting factor in this example has three levels.

The first figure shows what a main effect for setting might look like. You have to be very
careful in interpreting these results because higher scores mean the patient is doing worse. It's
clear that inpatient treatment works best, day treatment is next best, and outpatient treatment is
worst of the three. It's also clear that there is no difference between the two treatment levels
(psychotherapy and behavior modification). Even though both graphs in the figure depict the
exact same data, I think it's easier to see the main effect for setting in the graph on the lower left
where setting is depicted with different lines on the graph rather than at different points along the
horizontal axis.

The second figure shows a main effect for treatment with psychotherapy performing better
(remember the direction of the outcome variable) in all settings than behavior modification. The
effect is clearer in the graph on the lower right where treatment levels are used for the lines. Note
that in both this and the previous figure the lines in all graphs are parallel indicating that there are
no interaction effects.

Now, let's look at a few of the possible interaction effects. In the first case, we see that day
treatment is never the best condition. Furthermore, we see that psychotherapy works best with
inpatient care and behavior modification works best with outpatient care.

The other interaction effect example is a bit more complicated. Although there may be some
main effects mixed in with the interaction, what's important here is that there is a unique
combination of levels of factors that stands out as superior: psychotherapy done in the inpatient
setting. Once we identify a "best" combination like this, it is almost irrelevant what is going on
with main effects.

2.11 Incomplete Factorial Design

Fig. 2.9 Incomplete Factorial Design


It's clear that factorial designs can become cumbersome and have too many groups even with
only a few factors. In much research, you won't be interested in a fully-crossed factorial design
like the ones we've been showing that pair every combination of levels of factors. Some of the
combinations may not make sense from a policy or administrative perspective, or you simply
may not have enough funds to implement all combinations. In this case, you may decide to
implement an incomplete factorial design. In this variation, some of the cells are intentionally
left empty -- you don't assign people to get those combinations of factors.

One of the most common uses of incomplete factorial design is to allow for a control or placebo
group that receives no treatment. In this case, it is actually impossible to implement a group that
simultaneously has several levels of treatment factors and receives no treatment at all. So, we
consider the control group to be its own cell in an incomplete factorial rubric (as shown in the
figure). This allows us to conduct both relative and absolute treatment comparisons within a
single study and to get a fairly precise look at different treatment combinations.

2.12 Blocking in Factorial Design

We have discussed factorial designs in the context of a completely randomized experiment.


Sometimes it is not feasible or practical to completely randomize all of the runs in a factorial. For example, the presence of a nuisance factor may require that the experiment be run in blocks. We discussed the basic concept of blocking in the context of a single-factor experiment earlier; we now show how blocking can be incorporated in a factorial.

Consider a factorial experiment with two factors (A and B) and n replicates. The linear statistical model for this design is

Equation 2.4
$y_{ijk} = \mu + \tau_i + \beta_j + (\tau\beta)_{ij} + \varepsilon_{ijk}$

where $\tau_i$, $\beta_j$, and $(\tau\beta)_{ij}$ represent the effects of factor A, factor B, and the AB interaction, respectively. Now suppose that to run this experiment a particular raw material is required. This raw material is available in batches that are not large enough to allow all abn treatment combinations to be run from the same batch. However, if a batch contains enough material for ab observations, then an alternative design is to run each of the n replicates using a separate batch of raw material. Consequently, the batches of raw material represent a randomization restriction, or a block, and a single replicate of a complete factorial experiment is run within each block. The effects model for this new design is

Equation 2.5
$y_{ijk} = \mu + \tau_i + \beta_j + (\tau\beta)_{ij} + \delta_k + \varepsilon_{ijk}$

where $\delta_k$ is the effect of the $k$th block. Of course, within a block the order in which the treatment combinations are run is completely randomized. The model (Equation 2.5) assumes that interaction between blocks and treatments is negligible. This was assumed previously in the analysis of randomized block designs. If these interactions do exist, they cannot be separated from the error component. In fact, the error term in this model really consists of the $(\tau\delta)_{ik}$, $(\beta\delta)_{jk}$, and $(\tau\beta\delta)_{ijk}$ interactions. The analysis of variance is outlined in Table 2.3 below. The layout closely resembles that of a factorial design, with the error sum of squares reduced by the sum of squares for blocks. Computationally, we find the sum of squares for blocks as the sum of squares between the $n$ block totals $\{y_{\cdot\cdot k}\}$. In the previous example, the randomization was restricted to within a batch of raw material. In practice, a variety of phenomena may cause randomization restrictions, such as time, operators, and so on. For example, if we could not run the entire factorial experiment on one day, then the experimenter could run a complete replicate on day 1, a second replicate on day 2, and so on. Consequently, each day would be a block.

Table 2.3 ANOVA for a two-factor factorial in a randomized complete block

As an example, consider improving the detection of targets on a radar scope: an experiment is designed using three levels of ground clutter and two filter types. We will consider these as fixed-type factors. The experiment is performed by randomly selecting a treatment combination (ground clutter level and filter type) and then introducing a signal representing the target into the scope. The intensity of this target is increased until the operator observes it. The intensity level at detection is then measured as the response variable. Because of operator availability, it is convenient to select an operator and keep him or her at the scope until all the necessary runs have been made. Furthermore, operators differ in their skill and ability to use the scope. Consequently, it seems logical to use the operators as blocks. Four operators are randomly selected. Once an operator is chosen, the order in which the six treatment combinations are run is randomly determined. Thus, we have a 3 × 2 factorial experiment run in a randomized complete block. The linear model for this experiment is

Equation 2.6
$y_{ijk} = \mu + \tau_i + \beta_j + (\tau\beta)_{ij} + \delta_k + \varepsilon_{ijk}$

where $\tau_i$ represents the ground clutter effect, $\beta_j$ represents the filter type effect, $(\tau\beta)_{ij}$ is the interaction, $\delta_k$ is the block effect, and $\varepsilon_{ijk}$ is the NID(0, $\sigma^2$) error component. The sums of squares for ground clutter, filter type, and their interaction are computed in the usual manner.

2.13 Fractional Factorial Designs


The learning objectives for this lesson include:

Understanding the application of fractional factorial designs, one of the most important designs for screening

Becoming familiar with the terms "design generator", "alias structure" and "design resolution"

Knowing how to analyze fractional factorial designs in which there aren't normally enough degrees of freedom for error

Becoming familiar with the concept of "foldover", either on all factors or on a single factor, and the application of each case

Being introduced to Plackett-Burman designs as another class of major screening designs

Introduction to Fractional Factorial Designs

What we did in the preceding section is consider just one replicate of a full factorial design and run it in blocks. The treatment combinations in each block of a full factorial can be thought of as a fraction of the full factorial.

In setting up the blocks within the experiment, we have been picking the effects we know would be confounded and then using these to determine the layout of the blocks.

We begin with a simple example. In an example where we have k = 3 treatment factors with $2^3 = 8$ runs, we select $2^p = 2$ blocks, and use the 3-way interaction ABC to confound with blocks and to generate the following design.

Table 2.4 Sign table for a three-factor design

Trt   A   B   C   AB  AC  BC  ABC  I
(1)   -   -   -   +   +   +   -    +
a     +   -   -   -   -   +   +    +
b     -   +   -   -   +   -   +    +
ab    +   +   -   +   -   -   -    +
c     -   -   +   +   -   -   +    +
ac    +   -   +   -   +   -   -    +
bc    -   +   +   -   -   +   -    +
abc   +   +   +   +   +   +   +    +

Here are the two blocks that result using ABC as the generator:

Table 2.5
Block   1     2
ABC     -     +
        (1)   a
        ab    b
        ac    c
        bc    abc
A fractional factorial design is useful when we can't afford even one full replicate of the full factorial design. In a typical situation our total number of runs is $N = 2^{k-p}$, which is a fraction of the total number of treatments.

Using our example above, where k = 3 and p = 1, we have $N = 2^2 = 4$.

So, in this case, either one of the blocks above is a one-half fraction of a $2^3$ design. Just as in the block designs, where we had AB confounded with blocks and were not able to say anything about AB, here, where ABC is confounded in the fractional factorial, we cannot say anything about the ABC interaction.

Let's take a look at the first block, which is a half fraction of the full design. ABC is the generator of the 1/2 fraction of the $2^3$ design. Now, take just the fraction of the full design where ABC = -1 and place it within its own table:
Table 2.6

Trt   A   B   C   AB  AC  BC  ABC  I
(1)   -   -   -   +   +   +   -    +
ab    +   +   -   +   -   -   -    +
ac    +   -   +   -   +   -   -    +
bc    -   +   +   -   -   +   -    +
Notice the contrasts defining the main effects: in this fraction each main-effect column is just -1 times a two-factor interaction column (A pairs with BC, B with AC, and C with AB), so these effects are aliased.

In this half fraction of the design we have 4 observations, and therefore 3 degrees of freedom with which to estimate effects. These degrees of freedom estimate the following effects: A - BC, B - AC, and C - AB. Thus, this design is only useful if the 2-way interactions are not important, since the effects we can estimate are the combined effects of main effects and 2-way interactions.

This is referred to as a Resolution III design. It is called a Resolution III design because the generator ABC has three letters, and the properties of this design and all Resolution III designs are such that the main effects are confounded with 2-way interactions.
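The half fraction and its alias pattern can be generated mechanically; a minimal sketch in coded ±1 units:

```python
from itertools import product

# Full 2^3 design in coded units; ABC is the generator of the fraction.
runs = [dict(zip("ABC", lv)) for lv in product([-1, 1], repeat=3)]
half = [r for r in runs if r["A"] * r["B"] * r["C"] == -1]  # block ABC = -1

for r in half:
    # In this fraction each main-effect column is -1 times its alias:
    # A = -BC, B = -AC, C = -AB.
    print(r, r["A"] == -r["B"] * r["C"], r["B"] == -r["A"] * r["C"])
```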

2.14 Notation
Fractional designs are expressed using the notation $l^{k-p}$, where l is the number of levels of each factor investigated, k is the number of factors investigated, and p describes the size of the fraction of the full factorial used. Formally, p is the number of generators: assignments as to which effects or interactions are confounded, i.e., cannot be estimated independently of each other (see below). A design with p such generators is a $1/(l^p)$ fraction of the full factorial design. For example, a $2^{5-2}$ design is 1/4 of a two-level, five-factor factorial design. Rather than the 32 runs that would be required for the full $2^5$ factorial experiment, this experiment requires only eight runs.
In practice, one rarely encounters l > 2 levels in fractional factorial designs, since response surface methodology is a much more experimentally efficient way to determine the relationship between the experimental response and factors at multiple levels. In addition, the methodology to generate such designs for more than two levels is much more cumbersome.

The levels of a factor are commonly coded as +1 for the higher level, and -1 for the lower level. For a three-level factor, the intermediate value is coded as 0.

To save space, the points in a two-level factorial experiment are often abbreviated with strings of plus and minus signs. The strings have as many symbols as factors, and their values dictate the level of each factor: conventionally, - for the first (or low) level, and + for the second (or high) level. The points in a two-factor experiment can thus be represented as --, +-, -+, and ++.

The factorial points can also be abbreviated by (1), a, b, and ab, where the presence of a letter indicates that the specified factor is at its high (or second) level and the absence of a letter indicates that the specified factor is at its low (or first) level (for example, "a" indicates that factor A is on its high setting, while all other factors are at their low (or first) setting). (1) is used to indicate that all factors are at their lowest (or first) values.

2.15 Generation
In practice, experimenters typically rely on statistical reference books to supply the "standard"
fractional factorial designs, consisting of the principal fraction. The principal fraction is the set
of treatment combinations for which the generators evaluate to + under the treatment
combination algebra. However, in some situations, experimenters may take it upon themselves to
generate their own fractional design.
A fractional factorial experiment is generated from a full factorial experiment by choosing an alias structure. The alias structure determines which effects are confounded with each other. For example, the five-factor $2^{5-2}$ design can be generated by using a full factorial experiment involving three factors (say A, B, and C) and then choosing to confound the two remaining factors D and E with interactions generated by D = A*B and E = A*C. These two expressions are called the generators of the design. So, for example, when the experiment is run and the experimenter estimates the effects for factor D, what is really being estimated is a combination of the main effect of D and the two-factor interaction involving A and B.

An important characteristic of a fractional design is the defining relation, which gives the set of interaction columns equal in the design matrix to a column of plus signs, denoted by I. For the above example, since D = AB and E = AC, then ABD and ACE are both columns of plus signs, and consequently so is BCDE. In this case the defining relation of the fractional design is I = ABD = ACE = BCDE. The defining relation allows the alias pattern of the design to be determined.
Table 2.7 Treatment combinations for a $2^{5-2}$ design

Treatment combination   I   A   B   C   D = AB   E = AC
de                      +   -   -   -   +        +
a                       +   +   -   -   -        -
be                      +   -   +   -   -        +
abd                     +   +   +   -   +        -
cd                      +   -   -   +   +        -
ace                     +   +   -   +   -        +
bc                      +   -   +   +   -        -
abcde                   +   +   +   +   +        +
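Table 2.7 can be reproduced programmatically from the generators; a small sketch:

```python
from itertools import product

# Full 2^3 factorial in A, B, C; generators D = A*B and E = A*C.
for a, b, c in product([-1, 1], repeat=3):
    run = {"A": a, "B": b, "C": c, "D": a * b, "E": a * c}
    # Label each run by the letters that sit at their high (+1) level.
    label = "".join(f for f in "abcde" if run[f.upper()] == 1) or "(1)"
    print(label, run)   # yields de, a, be, abd, cd, ace, bc, abcde
```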

2.16 Resolution
An important property of a fractional design is its resolution, or ability to separate main effects and low-order interactions from one another. Formally, the resolution of the design is the minimum word length in the defining relation, excluding (1). The most important fractional designs are those of resolution III, IV, and V. Resolutions below III are not useful, and resolutions above V are wasteful in that the expanded experimentation has no practical benefit in most cases: the bulk of the additional effort goes into the estimation of very high-order interactions, which rarely occur in practice. The $2^{5-2}$ design above is resolution III since its defining relation is I = ABD = ACE = BCDE.
Table 2.8 Resolution table

Resolution I (example: $2^{1-1}$ with defining relation I = A)
Not useful: an experiment of exactly one run only tests one level of a factor and hence can't even distinguish between the high and low levels of that factor.

Resolution II (example: $2^{2-1}$ with defining relation I = AB)
Not useful: main effects are confounded with other main effects.

Resolution III (example: $2^{3-1}$ with defining relation I = ABC)
Estimate main effects, but these may be confounded with two-factor interactions.

Resolution IV (example: $2^{4-1}$ with defining relation I = ABCD)
Estimate main effects unconfounded by two-factor interactions; estimate two-factor interaction effects, but these may be confounded with other two-factor interactions.

Resolution V (example: $2^{5-1}$ with defining relation I = ABCDE)
Estimate main effects unconfounded by three-factor (or lower) interactions; estimate two-factor interaction effects unconfounded by other two-factor interactions; estimate three-factor interaction effects, but these may be confounded with two-factor interactions.

Resolution VI (example: $2^{6-1}$ with defining relation I = ABCDEF)
Estimate main effects unconfounded by four-factor (or lower) interactions; estimate two-factor interaction effects unconfounded by three-factor (or lower) interactions; estimate three-factor interaction effects, but these may be confounded with other three-factor interactions.
The resolution described is only used for regular designs. Regular designs have a run size that equals a power of two, and only full aliasing is present. Nonregular designs are designs where the run size is a multiple of 4; these designs introduce partial aliasing, and generalized resolution is used as the design criterion instead of the resolution described previously.
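For a regular design the resolution is just the shortest word in the defining relation, which is trivial to compute; a sketch for the design above:

```python
# Defining relation of the 2^(5-2) design: I = ABD = ACE = BCDE.
defining_relation = ["ABD", "ACE", "BCDE"]

# Resolution = minimum word length in the defining relation.
resolution = min(len(word) for word in defining_relation)
print(resolution)   # 3 -> a resolution III design
```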

Chapter 3 : Response Surface Method

3.1 Introduction
3.1.1 History of Response Surface Method (RSM)
In their review paper, Mead and Pike trace the origin of RSM back to the use of response curves in the 1930s; Yates then worked on the approach in 1935.
In November 1966, the paper "A Review of Response Surface Methodology: A Literature Survey" was published by Hill and Hunter. Its purpose was to review the practical applications of RSM in chemical and related fields.
In December 1976, another paper, "A Review of Response Surface Methodology from a Biometric Viewpoint" by Mead and Pike, appeared.
With the passage of time, many statisticians have worked on improving RSM.

3.1.2 Application of RSM


The most frequent applications of RSM are in the industrial area.
RSM is important in designing, formulating, developing, and analyzing new scientific studies and products.
It is also effective in improving existing studies and products.
The most common applications of RSM are in the industrial, biological and clinical sciences, social sciences, food sciences, and the physical and engineering sciences.

3.1.3 Definition of RSM


As an important subject in the statistical design of experiments, the Response Surface
Methodology (RSM) is a collection of mathematical and statistical techniques useful for the
modeling and analysis of problems in which a response of interest is influenced by several
variables and the objective is to optimize this response (Montgomery 2005).

For example, the growth of a plant is affected by a certain amount of water X1 and sunshine X2. The plant can grow under any combination of treatments X1 and X2, so water and sunshine can vary continuously. When treatments are from a continuous range of values, response surface methodology is useful for developing, improving, and optimizing the response variable. In this case, the plant growth y is the response variable, and it is a function of water and sunshine. It can be expressed as

Y = f(X1, X2) + e

The variables X1 and X2 are independent variables on which the response y depends. The dependent variable y is a function of X1, X2, and the experimental error term, denoted as e. The error term e represents any measurement error on the response, as well as other types of variation not accounted for in f. It is a statistical error that is assumed to be normally distributed with mean zero and variance $\sigma^2$. In most RSM problems, the true response function f is unknown. In order to develop a proper approximation for f, the experimenter usually starts with a low-order polynomial in some small region.

We usually represent the response surface graphically, as in Figure 3.1:


Figure 3.1 A three-dimensional response surface

If the response can be defined by a linear function of independent variables, then the
approximating function is a first-order model. A first-order model with 2 independent variables
can be expressed as

$y = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \varepsilon$    (3-1)

If there is a curvature in the response surface, then a higher degree polynomial should be used.
The approximating function with 2 variables is called a second-order model:

$y = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \beta_{11} x_1^2 + \beta_{22} x_2^2 + \beta_{12} x_1 x_2 + \varepsilon$    (3-2)
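Both models are ordinary least squares fits, so they can be estimated with a plain regression routine. A sketch with entirely made-up runs on two coded variables (a small central-composite-style point set, so that all six second-order coefficients are estimable):

```python
import numpy as np

# Illustrative coded design points and invented responses.
x1 = np.array([-1, 1, -1, 1, -1.414, 1.414, 0.0, 0.0, 0.0])
x2 = np.array([-1, -1, 1, 1, 0.0, 0.0, -1.414, 1.414, 0.0])
y = np.array([8.0, 12.0, 9.0, 17.0, 7.5, 14.0, 9.5, 13.0, 11.5])

# First-order model (3-1): y = b0 + b1*x1 + b2*x2 + e
X1 = np.column_stack([np.ones_like(x1), x1, x2])
b1, *_ = np.linalg.lstsq(X1, y, rcond=None)

# Second-order model (3-2): adds x1^2, x2^2 and the cross product x1*x2.
X2 = np.column_stack([X1, x1**2, x2**2, x1 * x2])
b2, *_ = np.linalg.lstsq(X2, y, rcond=None)
print(b1)
print(b2)
```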

3.1.4 Experimental Strategy


1. RSM revolves around the assumption that the response is a function of a set of independent (design) variables x1, x2, ..., xk, and that this function can be approximated in some region by a polynomial model:

y = f(x1, x2, ..., xk)

Here the response variable is y, which depends on the k independent variables.

2. If the factors are given, then directly estimate the effects and interactions of the model.

3. If the factors are unknown, then first identify them by using a screening method.

4. Estimate the interaction effects using a 1st-order model.

5. If curvature is found, then use RSM: a 2nd-order model will be used to approximate the response variable.

6. Make the graph and find the stationary point (maximum response, minimum response, or saddle point) by using the obtained values of x1, x2, ..., xk.

3.1.5 Response Surface Methods and Designs


Response Surface Methods are designs and models for working with continuous treatments when
finding the optima or describing the response is the goal (Oehlert 2000).
The first goal for Response Surface Method is to find the optimum response. When there is more
than one response then it is important to find the compromise optimum that does not optimize
only one response (Oehlert 2000). When there are constraints on the design
data, then the experimental design has to meet requirements of the constraints. The second goal
is to understand how the response changes in a given direction by adjusting the design variables.
In general, the response surface can be visualized graphically. The graph is helpful to see the
shape of a response surface; hills, valleys, and ridge lines.
Hence, the function f(x1, x2) can be plotted versus the levels of x1 and x2, as shown in Figure 3.2.
Figure 3.2 Response surface plot

In this graph, each value of x1 and x2 generates a y-value. This three-dimensional graph shows
the response surface from the side and it is called a response surface plot.
Sometimes, it is less complicated to view the response surface in two-dimensional graphs. Contour plots show contour lines of x1 and x2 pairs that have the same response value y. An example of a contour plot is shown in Figure 3-3.
Figure 3-3 Contour plot.

In order to understand the surface of a response, graphs are helpful tools. But, when there are more than two independent variables, graphs are difficult or almost impossible to use to illustrate the response surface, since it is beyond three dimensions. For this reason, response surface models are essential for analyzing the unknown function f.

3.2 Methods of optimization


We use two types of model in RSM.
1st Order Model.
2nd Order Model.
3.2.1 Analysis of a First-Order Response Surface
The relationship between the response variable y and independent variables is usually unknown.
In general, the low-order polynomial model is used to describe the response surface f.

A polynomial model is usually a sufficient approximation in a small region of the response surface. Therefore, depending on the approximation of the unknown function f, either first-order or second-order models are employed. The approximated function f is a first-order model when the response is a linear function of the independent variables. A first-order model with N experimental runs carried out on q design variables and a single response y can be expressed as in equation (3-1):

$y = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \varepsilon$

The response y is a function of the design variables x1, x2 (denoted as f), plus the experimental error. A first-order model is a multiple-regression model.

3.2.2 Designs for Fitting the First-Order Model


A first-order model is used to describe flat surfaces that may or may not be tilted. This model is not suitable for analyzing maxima, minima, or ridge lines. The first-order approximation of the function f is reasonable when f is not too curved in that region and the region is not too big. The first-order model is assumed to be an adequate approximation of the true surface in a small region of the x's (Montgomery 2005).

3.2.3 Analysis of a Second-Order Response Surface


When there is curvature in the response surface, the first-order model is insufficient. A second-order model is useful in approximating a portion of the true response surface with parabolic curvature. The second-order model includes all the terms in the first-order model, plus all quadratic terms like $\beta_{11} x_1^2$ and all cross-product terms like $\beta_{12} x_1 x_2$. It is usually expressed as in equation (3-2):

$y = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \beta_{11} x_1^2 + \beta_{22} x_2^2 + \beta_{12} x_1 x_2 + \varepsilon$

The second-order model is flexible, because it can take a variety of functional forms and
approximates the response surface locally. Therefore, this model is usually a good estimation of
the true response surface.

3.2.4 Designs for Fitting Second-Order Model


There are many designs available for fitting a second-order model. The most popular one is the central composite design (CCD). This design was introduced by Box and Wilson. It consists of factorial points (from a $2^q$ design or a $2^{q-k}$ fractional factorial design), center points, and axial points.

A CCD is often developed through sequential experimentation. When a first-order model shows evidence of lack of fit, axial points can be added, along with more center points, to estimate the quadratic terms and develop the CCD. The number of center points $n_c$ at the origin and the distance $\alpha$ of the axial runs from the design center are the two parameters of the CCD. The center runs contain information about the curvature of the surface; if the curvature is significant, the additional axial points allow the experimenter to obtain an efficient estimation of the quadratic terms. Figure 3.4 illustrates the graphical view of a central composite design for q = 2 factors.
Figure 3.4 Central Composite Design for q = 2

There are a couple of ways of choosing $\alpha$ and $n_c$. First, the CCD can be run in incomplete blocks. A block is a set of relatively homogeneous experimental conditions, so the experimenter divides the observations into groups that are run in each block. An incomplete block design may be conducted when all treatment combinations cannot be run in each block. In order to protect the shape of the response surface, the block effects need to be orthogonal to the treatment effects. This can be done by choosing the correct $\alpha$ and $n_c$ in the factorial and axial blocks.

Also, $\alpha$ and $n_c$ can be chosen so that the CCD is not blocked. If the precision of the estimated response surface at some point x depends only on the distance from x to the origin, not on the direction, then the design is said to be rotatable (Oehlert 2000). When a rotatable design is rotated about the center, the variance of y remains the same. Since the reason for using response surface analysis is to locate an unknown optimum, it makes sense to use a rotatable design that provides equal precision of estimation of the surface in all directions. The choice of $\alpha = 2^{q/4}$ for the full factorial, or $\alpha = 2^{(q-k)/4}$ for a fractional factorial, will make the CCD rotatable.
3.2.5 Multiple Regression Model
The relationship between a set of independent variables and the response y is determined by a mathematical model called a regression model. When there are more than two independent variables, the regression model is called a multiple-regression model. In general, a multiple-regression model with q independent variables takes the form

$y_i = \beta_0 + \beta_1 x_{i1} + \beta_2 x_{i2} + \cdots + \beta_q x_{iq} + \varepsilon_i, \qquad i = 1, \ldots, n$

where n > q. The parameter $\beta_j$ measures the expected change in the response y per unit increase in $x_j$ when the other independent variables are held constant. The ith observation on the jth independent variable is denoted by $x_{ij}$. The data structure for the multiple-regression model is shown in Table 3.1.

Table 3.1 Data for Multiple-Regression Model


3.3 Methods of RSM
There are two methods in RSM for obtaining the optimum response, and we move toward the optimum point with these two methods:

Method of Steepest Ascent

Method of Steepest Descent

3.3.1 Steepest Ascent Method

This is a procedure for moving sequentially in the direction of the maximum increase in the response, in order to reach the optimum response. The initial estimate of the optimum operating conditions will often be far from the actual optimum. In such circumstances, the objective of the experimenter is to move rapidly to the general vicinity (nearest point) of the optimum, using a simple and economically efficient experimental procedure. When we are remote from the optimum, we usually assume that a 1st-order model is an adequate approximation to the true surface in a small region of the x's.

Figure 3.5 Steepest Ascent Method
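Given a fitted first-order model, the steepest ascent path moves from the design center in proportion to the coefficients (steepest descent, described next, simply uses the opposite sign). A sketch with illustrative coefficients:

```python
import numpy as np

b = np.array([1.2, 0.8])            # illustrative (b1, b2) from a fitted
                                    # first-order model in coded units

direction = b / np.linalg.norm(b)   # unit direction of steepest ascent
for step in range(1, 4):            # points along the path, step size 0.5
    print(step, np.round(0.5 * step * direction, 3))
# For steepest descent, use -direction instead.
```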

3.3.2 Steepest Descent Method


If minimization is desired, then we call this technique the method of steepest descent.

Figure 3.6 Steepest Descent Method


CHAPTER 4: TAGUCHI METHOD
4.1 Introduction of TAGUCHI Method

Dr. Taguchi of Nippon Telephones and Telegraph Company, Japan, developed a method based on orthogonal array experiments which gives much-reduced variance for the experiment with optimum settings of the control parameters.

Taguchi envisaged a new method of conducting the design of experiments based on well-defined guidelines. This method uses a special set of arrays called orthogonal arrays. These standard arrays stipulate the way of conducting the minimal number of experiments which could give full information on all the factors that affect the performance parameter. The crux of the orthogonal arrays method lies in choosing the level combinations of the input design variables for each experiment.

Thus the marriage of design of experiments with optimization of control parameters to obtain the best results is achieved in the Taguchi method. Taguchi calls common-cause variation the "noise". Noise factors are classified into three categories: outer noise, inner noise, and between-product noise. Taguchi's approach is not to eliminate or ignore the noise factors; Taguchi techniques aim to reduce the effect or impact of the noise on the product quality.

4.2 Taguchi's rule for manufacturing

Taguchi realized that the best opportunity to eliminate variation is during the design of a product
and its manufacturing process. Consequently, he developed a strategy for quality engineering that
can be used in both contexts. The process has three stages:

System design:

This is design at the conceptual level, involving creating a prototype of the product that will meet functional requirements, and creating the process that will build it.

Parameter (measure) design:

This involves finding optimal settings of product and process parameters in order to optimize performance characteristics.

Tolerance design:

With a successfully completed parameter design, and an understanding of the effect that the various parameters have on performance, resources can be focused on reducing and controlling variation in the critical few dimensions.

Parameter design

Controllable factor: a factor that can be set at any value or level desired by the user during real field operation. Examples: feed rate of a machine, status of a switch (on or off).

Uncontrollable factor (noise): the user is unable to control the value or level of an uncontrollable factor during real field operation. Examples: environmental temperature, number of participants for a seminar. However, the experimenter must be able to control the uncontrollable factors during the experiment (i.e., be able to set them to certain given levels), so that their influence on the response can be observed and studied in a predetermined way.

Purpose of the parameter design:

The purpose of the parameter design is to find the best run (combination of the levels of the controllable factors), so that:

(i) The value of the yield (response) will be very stable, i.e., least influenced by the uncontrollable factors.

(ii) The value of the yield will be maximized.


Fig. 4.1 Inner Array and Outer Array

The inner and outer orthogonal arrays can be selected according to the following parameter
values:

Number of factors
Number of levels for each factor
Number of runs

Most of the orthogonal arrays are fractional factorial designs. For example, the inner L9 orthogonal array is a $3^{4-2}$ fractional design:

Number of runs $= 3^{4-2} = 3^4 / 3^2 = 81/9 = 9$

The outer L8 orthogonal array here is simply a $2^3$ factorial design:

Number of runs $= 2^3 = 8$
Fig. 4.2 Parameter design

4.2.1 The Taguchi method treats optimization problems in:

4.2.1.1 Static problems

Generally, a process to be optimized has several control factors which directly decide the target or desired value of the output. The optimization then involves determining the best control factor levels so that the output is at the target value. Such a problem is called a static problem.

This is best explained using the P-Diagram shown in Figure 4.3 (P stands for Process or Product). Noise is shown to be present in the process but should have no effect on the output! This is the primary aim of the Taguchi experiments: to minimize variations in output even though noise is present in the process. The process is then said to have become robust.

Fig. 4.3 P-Diagram for static problems (control factors and noise enter the process; the output is the response)
There are three signal-to-noise (S/N) ratios of common interest for the optimization of static problems.

I. Smaller the better:

This is usually the chosen S/N ratio for all undesirable characteristics, like defects, for which the ideal value is zero. Also, when an ideal value is finite and its maximum or minimum value is defined (like maximum purity is 100%, or maximum Tc is 92 K, or minimum time for making a telephone connection is 1 sec), then the difference between the measured data and the ideal value is expected to be as small as possible. The generic form of the S/N ratio then becomes

$\dfrac{S}{N}_{\text{smaller}} = -10 \log_{10}\!\left(\dfrac{1}{n}\sum_{i=1}^{n} y_i^2\right)$    (Eq. 4-1)

II. Larger the better:

$\dfrac{S}{N}_{\text{larger}} = -10 \log_{10}\!\left(\dfrac{1}{n}\sum_{i=1}^{n} \dfrac{1}{y_i^2}\right)$    (Eq. 4-2)

This case has been converted to smaller-the-better by taking the reciprocals of the measured data and then taking the S/N ratio as in the smaller-the-better case.

III. Nominal the best:

$\dfrac{S}{N}_{\text{nominal}} = 10 \log_{10}\!\left(\dfrac{\bar{y}^2}{s^2}\right)$    (Eq. 4-3)

This case arises when a specified value is MOST desired, meaning that neither a smaller nor a larger value is desirable.

Examples are:

(i) Most parts in mechanical fittings have dimensions which are of the nominal-the-best type.

(ii) Ratios of chemicals or mixtures are of the nominal-the-best type, e.g. aqua regia (1:3 of HNO3:HCl), or the ratio of sulphur, KNO3 and carbon in gunpowder.

(iii) Thickness should be uniform in deposition/growth/plating/etching.
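The three S/N ratios are one-liners to compute from a set of replicate measurements; a sketch (the replicate values are invented):

```python
import numpy as np

def sn_smaller_the_better(y):
    # Eq. 4-1: -10 log10( (1/n) * sum(y_i^2) )
    y = np.asarray(y, dtype=float)
    return -10 * np.log10(np.mean(y**2))

def sn_larger_the_better(y):
    # Eq. 4-2: -10 log10( (1/n) * sum(1 / y_i^2) )
    y = np.asarray(y, dtype=float)
    return -10 * np.log10(np.mean(1.0 / y**2))

def sn_nominal_the_best(y):
    # Eq. 4-3: 10 log10( ybar^2 / s^2 )
    y = np.asarray(y, dtype=float)
    return 10 * np.log10(y.mean() ** 2 / y.var(ddof=1))

replicates = [9.8, 10.1, 10.0, 9.9]   # invented measurements
print(sn_nominal_the_best(replicates))
```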

4.3 ORTHOGONAL ARRAYS

The Taguchi method is based on performing evaluations or experiments to test the sensitivity of a set of response variables to a set of control parameters (or independent variables) by considering experiments in an orthogonal array, with an aim to attain the optimum setting of the control parameters. Orthogonal arrays provide a best set of well-balanced (minimum) experiments. Table 4.1 shows eighteen standard orthogonal arrays along with the number of columns at different levels for these arrays.

An array name indicates the number of rows and columns it has, and also the number of levels in each of the columns. For example, array L4 (2³) has four rows and three 2-level columns. Similarly, the array L18 (2¹3⁷) has 18 rows: one 2-level column and seven 3-level columns. Thus, there are eight columns in the array L18. The number of rows of an orthogonal array represents the requisite number of experiments. The number of rows must be at least equal to the degrees of freedom associated with the factors, i.e., the control variables. In general, the number of degrees of freedom associated with a factor (control variable) is equal to the number of levels for that factor minus one. For example, a case study has one factor (A) with 2 levels and five factors (B, C, D, E, F) each with 3 levels; Table 4.2 depicts the degrees of freedom calculated for this case. The number of columns of an array represents the maximum number of factors that can be studied using that array.

Table 4.1 Standard orthogonal arrays

Orthogonal   Number    Maximum number   Maximum number of columns at these levels
array        of rows   of factors       2-level  3-level  4-level  5-level

L4           4         3                3        -        -        -
L8           8         7                7        -        -        -
L9           9         4                -        4        -        -
L12          12        11               11       -        -        -
L16          16        15               15       -        -        -
L16'         16        5                -        -        5        -
L18          18        8                1        7        -        -
L25          25        6                -        -        -        6
L27          27        13               -        13       -        -
L32          32        31               31       -        -        -
L32'         32        10               1        -        9        -
L36          36        23               11       12       -        -
L36'         36        16               3        13       -        -
L50          50        12               1        -        -        11
L54          54        26               1        25       -        -
L64          64        63               63       -        -        -
L64'         64        21               -        -        21       -
L81          81        40               -        40       -        -

The signal-to-noise (S/N) ratios, which are log functions of the desired output, serve as the objective functions for optimization, help in data analysis, and allow prediction of the optimum results. The Taguchi method treats optimization problems in two categories: static problems and dynamic problems. For simplicity, a detailed explanation of only the static problems is given in the following text. Next, the complete procedure followed to optimize a typical process using the Taguchi method is explained with an example.
Table 4.2 Degrees of freedom for one factor (A) at 2 levels and five factors (B, C, D, E, F) at 3 levels

Factors          Degrees of freedom
Overall mean     1
A                2 - 1 = 1
B, C, D, E, F    5 × (3 - 1) = 10
Total            12

4.3.1 A typical orthogonal array


While there are many standard orthogonal arrays available, each of the arrays is meant for a specific number of independent design variables and levels. For example, if one wants to conduct an experiment to understand the influence of 4 different independent variables, with each variable having 3 set values (level values), then an L9 orthogonal array might be the right choice. The L9 orthogonal array is meant for understanding the effect of 4 independent factors, each having 3 factor level values. This array assumes that there is no interaction between any two factors. While in many cases the no-interaction assumption is valid, there are some cases where there is clear evidence of interaction. A typical case of interaction would be the interaction between material properties and temperature.

Table 4.3 Layout of L9 orthogonal array.

L9 (3^4) orthogonal array

Experiment #   Variable 1   Variable 2   Variable 3   Variable 4   Performance
                                                                   parameter value
1              1            1            1            1            p1
2              1            2            2            2            p2
3              1            3            3            3            p3
4              2            1            2            3            p4
5              2            2            3            1            p5
6              2            3            1            2            p6
7              3            1            3            2            p7
8              3            2            1            3            p8
9              3            3            2            1            p9
Table 4.3 shows an L9 orthogonal array. In total, nine experiments are to be conducted,
each based on the combination of level values shown in the table. For example, the third
experiment is conducted by keeping independent design variable 1 at level 1, variable 2 at
level 3, variable 3 at level 3, and variable 4 at level 3.

4.3.2 Properties of an orthogonal array


The orthogonal arrays have the following special properties that reduce the number of
experiments to be conducted.

1 The vertical column under each independent variable of the above table has a special
combination of level settings. All the level settings appear an equal number of times. For
the L9 array, under variable 4, levels 1, 2 and 3 each appear three times. This is called the
balancing property of orthogonal arrays.

2 All the level values of independent variables are used for conducting the experiments.

3 The sequence of level values for conducting the experiments shall not be changed. This
means one cannot conduct experiment 1 with variable 1 at the level 2 setting and experiment 4
with variable 1 at the level 1 setting. The reason is that the factor columns of the array
are mutually orthogonal: the inner product of the weight vectors corresponding to any two
columns is zero. If the above 3 levels are normalized between -1 and 1, then the weighting
factors for level 1, level 2 and level 3 are -1, 0 and 1, respectively. Hence the inner
product of the weighting factors of independent variable 1 and independent variable 3 is

(-1 × -1 + -1 × 0 + -1 × 1) + (0 × 0 + 0 × 1 + 0 × -1) + (1 × 1 + 1 × -1 + 1 × 0) = 0,

as verified numerically in the sketch below.
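
As a minimal sketch of this orthogonality check, assuming Python with NumPy (not part of
the original text), the following recodes the levels of the L9 array of Table 4.3 to the
weights -1, 0, +1 and verifies that every pair of columns has a zero inner product:

import numpy as np

# L9(3^4) orthogonal array from Table 4.3, levels coded 1..3.
L9 = np.array([
    [1, 1, 1, 1], [1, 2, 2, 2], [1, 3, 3, 3],
    [2, 1, 2, 3], [2, 2, 3, 1], [2, 3, 1, 2],
    [3, 1, 3, 2], [3, 2, 1, 3], [3, 3, 2, 1],
])

# Normalize the levels 1, 2, 3 to the weights -1, 0, +1.
weights = L9 - 2

# The inner product of every pair of columns should be zero.
for i in range(4):
    for j in range(i + 1, 4):
        print(f"columns {i + 1} and {j + 1}:",
              int(np.dot(weights[:, i], weights[:, j])))

All six pairs print 0, which confirms the balancing and orthogonality properties at once.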

4.3.3 Minimum number of experiments to be conducted


The design of experiments using the orthogonal array is, in most cases, efficient when compared
to many other statistical designs. The minimum number of experiments required by the
Taguchi method can be calculated based on the degrees-of-freedom approach:

N_Taguchi = 1 + Σ_{i=1}^{NV} (L_i − 1)    (Eq. 3-4)

where NV is the number of independent variables and L_i is the number of levels of the i-th variable.

For example, in the case of a study with 8 independent variables, one having 2 levels and the
remaining 7 having 3 levels (the L18 orthogonal array), the minimum number of experiments
required by the above equation is 16. Because of the balancing property of orthogonal arrays,
the total number of experiments must be a multiple of both 2 and 3; hence the number of
experiments for this case is 18. A quick check of this calculation is sketched below.
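
As a one-function sketch of Eq. 3-4 in Python (a hypothetical helper, not from the
original text):

def n_taguchi(levels):
    """Minimum number of experiments from Eq. 3-4: 1 + sum of (L_i - 1)."""
    return 1 + sum(L - 1 for L in levels)

# Example from the text: one 2-level factor and seven 3-level factors.
print(n_taguchi([2] + [3] * 7))  # -> 16, rounded up to the 18 runs of L18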

4.3.4 Assumptions of the Taguchi method


The additive assumption implies that the individual or main effects of the independent variables
on the performance parameter are separable. Under this assumption, the effect of each factor can be
linear, quadratic or of higher order, but the model assumes that there exist no cross-product
effects (interactions) among the individual factors. That means the effect of independent variable
1 on the performance parameter does not depend on the level settings of any other
independent variable, and vice versa. If, at any time, this assumption is violated, the
additivity of the main effects does not hold, and the variables interact.

4.3.5 Designing an experiment


The design of an experiment involves the following steps

1 Selection of independent variables

2 Selection of number of level settings for each independent variable

3 Selection of orthogonal array

4 Assigning the independent variables to each column

5 Conducting the experiments

6 Analyzing the data

7 Inference

The details of the above steps are given below.

1- Selection of the independent variables


Before conducting the experiment, the knowledge of the product/process under investigation is
of prime importance for identifying the factors likely to influence the outcome. In order to
compile a comprehensive list of factors, the input to the experiment is generally obtained from
all the people involved in the project.

2- Deciding the number of levels

Once the independent variables are decided, the number of levels for each variable is chosen.
The selection of the number of levels depends on how the performance parameter is affected by
different level settings. If the performance parameter is a linear function of the independent
variable, then the number of level settings shall be 2. However, if the relationship is not
linear, one could go for 3, 4 or more levels, depending on whether the relationship
is quadratic, cubic or of higher order.
If the exact nature of the relationship between the independent variable and the
performance parameter is unknown, one could choose 2 level settings. After analyzing the
experimental data, one can decide whether the assumed number of levels is right or not,
based on the percent contribution and the error calculations.

3- Selection of an orthogonal array

Before selecting the orthogonal array, the minimum number of experiments to be conducted shall
be fixed based on the total number of degrees of freedom [5] present in the study. The minimum
number of experiments that must be run to study the factors shall be at least equal to the total
degrees of freedom available. In counting the total degrees of freedom, the investigator commits
1 degree of freedom to the overall mean of the response under study. The number of degrees of
freedom associated with each factor under study equals one less than the number of levels
available for that factor. Hence the total degrees of freedom without interaction effects is
1 + Σ(L_i − 1), as already given by Eq. 3-4. For example, in the case of 11 independent
variables, each having 2 levels, the total degrees of freedom is 12. Hence the selected
orthogonal array shall have at least 12 experiments; an L12 orthogonal array satisfies this
requirement.

Once the minimum number of experiments is decided, the further selection of the orthogonal array
is based on the number of independent variables and the number of factor levels for each
independent variable, as sketched below.
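
A minimal sketch of this selection step in Python, using only the run counts from
Table 4.1 (an assumption for illustration; a real selection must also match the number
and levels of the available columns):

# Run counts of some standard arrays from Table 4.1 (name -> rows).
STANDARD_ARRAYS = {"L4": 4, "L8": 8, "L9": 9, "L12": 12, "L16": 16,
                   "L18": 18, "L25": 25, "L27": 27, "L32": 32, "L36": 36}

def smallest_array(levels):
    """Smallest standard array whose run count covers the total dof."""
    dof = 1 + sum(L - 1 for L in levels)
    feasible = {name: rows for name, rows in STANDARD_ARRAYS.items()
                if rows >= dof}
    return min(feasible, key=feasible.get)

# Example from the text: 11 two-level variables -> 12 dof -> L12.
print(smallest_array([2] * 11))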

4- Assigning the independent variables to columns


The order in which the independent variables are assigned to the vertical columns is
essential. In the case of mixed-level variables and interactions between variables, the
variables must be assigned to the right columns as stipulated by the orthogonal array [3].

Finally, before conducting the experiment, the actual level values of each design variable shall
be decided. It should be noted that the significance and the percent contribution of the
independent variables change depending on the level values assigned. It is the designer's
responsibility to set proper level values.

5- Conducting the experiment

Once the orthogonal array is selected, the experiments are conducted as per the level
combinations. It is necessary that all the experiments be conducted. The interaction columns and
dummy variable columns shall not be considered for conducting the experiment, but are needed
while analyzing the data to understand the interaction effect. The performance parameter under
study is noted down for each experiment to conduct the sensitivity analysis.

6- Analysis of the data

Since each experiment is a combination of different factor levels, it is essential to segregate
the individual effects of the independent variables. This can be done by summing up the
performance parameter values for the corresponding level settings. For example, in order to
find the main effect of the level 1 setting of independent variable 2, sum the performance
parameter values of experiments 1, 4 and 7. Similarly, for level 2, sum the results of
experiments 2, 5 and 8, and so on.

Once the mean value of each level of a particular independent variable is calculated, the sum
of squared deviations of each mean value from the grand mean is calculated. This sum of
squared deviations for a particular variable indicates whether the performance parameter is
sensitive to changes in the level setting. If the sum of squared deviations is close to zero or
insignificant, one may conclude that the design variable does not influence the performance of
the process. In other words, by conducting the sensitivity analysis and performing analysis of
variance (ANOVA), one can decide which independent factor dominates over the others and the
percentage contribution of that particular independent variable.

7- Inference
From the above experimental analysis, it is clear that the higher the value of the sum of
squares of an independent variable, the more influence it has on the performance parameter.
One can also calculate the ratio of the individual sum of squares of a particular independent
variable to the total sum of squares of all the variables. This ratio gives the percent
contribution of the independent variable to the performance parameter, as the following
sketch illustrates.
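
The level-mean and sum-of-squares bookkeeping of steps 6 and 7 can be sketched as follows,
assuming Python with NumPy and hypothetical performance values p1..p9 (the design matrix
is the L9 array of Table 4.3):

import numpy as np

# L9 design of Table 4.3 and hypothetical performance values p1..p9.
L9 = np.array([[1, 1, 1, 1], [1, 2, 2, 2], [1, 3, 3, 3],
               [2, 1, 2, 3], [2, 2, 3, 1], [2, 3, 1, 2],
               [3, 1, 3, 2], [3, 2, 1, 3], [3, 3, 2, 1]])
p = np.array([12.0, 15.0, 11.0, 14.0, 18.0, 13.0, 16.0, 17.0, 10.0])

grand_mean = p.mean()
ss = {}
for var in range(4):
    level_means = [p[L9[:, var] == lvl].mean() for lvl in (1, 2, 3)]
    # Squared deviations of the level means from the grand mean,
    # weighted by the three runs taken at each level.
    ss[f"variable {var + 1}"] = 3 * sum((m - grand_mean) ** 2
                                        for m in level_means)

total = sum(ss.values())
for name, s in ss.items():
    print(f"{name}: SS = {s:6.2f}, contribution = {100 * s / total:5.1f} %")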

In addition to the above, one can find a near-optimal solution to the problem. This
near-optimum value may not be the global optimum; however, it can be used as an
initial/starting value for a standard optimization technique.

4.3.6 Robust Design


A main cause of poor yield in manufacturing processes is manufacturing variation. These
manufacturing variations include variation in temperature or humidity, variation in raw materials,
and drift of process parameters. These sources of noise/variation are variables that are
impossible or expensive to control.

The objective of robust design is to find the controllable process parameter settings for which
noise or variation has a minimal effect on the product's or process's functional characteristics.
It should be noted that the aim is not to find the parameter settings for the uncontrollable
noise variables, but for the controllable design variables. To attain this objective, the
control parameters, also known as inner array variables, are systematically varied as stipulated
by the inner orthogonal array. For each experiment of the inner array, a series of new
experiments is conducted by varying the level settings of the uncontrollable noise variables.
The level combinations of the noise variables are set using the outer orthogonal array.

The influence of noise on the performance characteristics can be found using the S/N ratio,
where S is the standard deviation of the performance parameter values for each inner array
experiment and N is the total number of experiments in the outer orthogonal array. This ratio
indicates the functional variation due to noise. Using this result, it is possible to predict
which control parameter settings will make the process insensitive to noise.
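
As a concrete sketch, one common static-problem form is the smaller-the-better S/N ratio,
−10 log10 of the mean squared response (an assumption here, since the text does not fix the
formula); it is computed once per inner-array run over that run's outer-array measurements:

import numpy as np

def sn_smaller_is_better(y):
    """Smaller-the-better S/N ratio: -10 * log10(mean of y^2)."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(y ** 2))

# Hypothetical data: each row holds the outer-array (noise) measurements
# for one inner-array (control) run.
outer_results = [[0.31, 0.27, 0.35],
                 [0.22, 0.25, 0.21],
                 [0.40, 0.33, 0.38]]
for run, y in enumerate(outer_results, start=1):
    print(f"inner run {run}: S/N = {sn_smaller_is_better(y):.2f} dB")

The control settings whose runs show the highest S/N are the least sensitive to noise.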

However, when the functional characteristics are not affected by external noise, there is no
need to conduct the experiments using the outer orthogonal arrays. This is true in the case of
experiments conducted using computer simulation, as the repeatability of a computer-simulated
experiment is very high.
4.3.7 Example
Consider an example in which the injection-molded polyurethane bumpers of a truck's front
fenders suffer from too much porosity. A team of engineers conducted a Taguchi OA design to
study the effects of several factors related to the porosity.

Table 4.4: Factors

The team also decided to consider the interactions AB and BC. They took two measurements of
porosity (for porosity values, smaller is better). They also decided to use the L8 orthogonal
array for this design. Table 4.5 shows the alias relations for the L8 array.

Table 4.5: L8 (2^7) array

In Table 4.5, column 3 is the alias of the interaction of columns 1 and 2, while column 6 is
the alias of the interaction of columns 2 and 4. Since interactions AB and BC are considered
significant in this example, the main factors A, B, C, D and E are assigned to columns 1, 2,
4, 5 and 7, and the interaction factors AB and BC are assigned to columns 3 and 6.

Table 4.6 shows the L8 orthogonal array and the porosity measurements the team took.

Table 4.6: L8 Orthogonal Array


CHAPTER 5: Case Study
5.1 Introduction
Nontraditional machining processes have the ability to machine highly
alloyed materials irrespective of their mechanical properties. Among
nontraditional machining processes, electrochemical machining (ECM) has
tremendous potential in terms of the versatility of its applications. However,
the main problem of ECM is the difficulty in determining the optimum
values of the machining parameters such as wire diameter, wire feed rate,
and applied voltage. These optimum values of the machining parameters can
improve the process characteristics such as the metal removal rate (MRR)
and surface roughness. Design of experiment (DOE) involves designing a set
of experiments, in which all relevant factors are varied systematically. When
the results of these experiments are analyzed, they help to identify optimal
conditions and the factors that most influence the results as well as details
such as the existence of interactions and synergies between factors. The
present study proposes an application of fractional factorial design to
execute sufficient experimental procedures for determining the
significant and insignificant factors, as well as for investigating a
reliable mathematical model between the input and output factors in
nontraditional machining processes. The experiments are designed, and the
results are systematically analyzed to determine the optimum combination of
the input parameters which leads to a maximum MRR.

5.2 Fractional factorial design

As the number of factors in a 2^k factorial design increases, the number of
runs required for a complete replicate of the design rapidly outgrows the
resources of most experimenters. For example, a complete replicate of the
2^6 design requires 64 runs. In this design, only 6 of the 63 degrees of
freedom (dof) correspond to the main effects, and only 15 degrees of
freedom correspond to two-factor interactions. The remaining 42 degrees of
freedom are associated with three-factor and higher interactions. If the
experimenter can reasonably assume that certain high-order interactions are
negligible, information on the main effects and low-order interactions may be
obtained by running only a fraction of the complete factorial experiment.
These fractional factorial designs are among the most widely used types of
designs for product and process design and for process improvement.
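
The degree-of-freedom bookkeeping above is just binomial counting; a quick check in
Python (not from the source):

from math import comb

k = 6  # factors in the full 2^6 design
total_dof = 2 ** k - 1
main = comb(k, 1)        # 6 main-effect dof
two_factor = comb(k, 2)  # 15 two-factor-interaction dof
print(main, two_factor, total_dof - main - two_factor)  # -> 6 15 42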

Table 1 Working conditions of the WECT process

Table 2 Experimental parameters and levels in the WECT process

5.3 Experimental work

5.3.1 Experimental setup

The experiments of the WECT process are carried out on a laboratory
electrochemical test rig. In designing a wire electrochemical turning test rig,
many factors should be considered, such as low cost, simple assembly, and
flexible change.

Table 3 Alias structure

Table 4 The recommended design of experiments and MRR results


Table 5 ANOVA table

5.3.2 Test conditions and measurements


The indigenous WECT experimental setup has been designed and developed
successfully to analyze the influence of the predominant machining
parameters, i.e., applied voltage (A), wire axial feed rate (B), wire diameter
(C), electrolyte concentration (D), overlap distance (E), and rotational speed
(F), on the desired machining performance characteristic, i.e., MRR. The
experimental conditions of the WECT process parameters are listed in Table 1.
These conditions were chosen through preliminary experiments and literature
surveys.

Each specimen was weighed before and after machining using a digital
scale (Sartorius, type 1712, 0.0001 g). The specimen diameter was
measured using a digital micrometer (Mitutoye, up to 25 mm, 0.001 mm). The
metal removal rate was calculated using the following equation:

MRR = (W_b − W_a) / t        (1)

where

W_b is the specimen weight before machining (g),

W_a is the specimen weight after machining (g),

t is the machining time (min).
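
As a trivial sketch of Eq. (1) in Python (hypothetical readings; the actual measurements
are those behind Table 4):

def mrr(w_before_g, w_after_g, time_min):
    """Metal removal rate of Eq. (1): (Wb - Wa) / t, in g/min."""
    return (w_before_g - w_after_g) / time_min

# Hypothetical weighing: 0.0042 g removed in 5 min.
print(f"{mrr(12.3456, 12.3414, 5.0):.5f} g/min")  # ~0.00084 = 0.84e-3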

5.4 Practical application and discussion

5.4.1 One-quarter fraction design

A one-quarter fraction design (2^(6−2)) is performed to study the effects of 6
factors in 16 runs. The design is to be run in a single block. The one-quarter
fraction design uses a selected subset of basic factors (A, B, C, and D) and
generators (ABCE and BCDF) for the remaining factors (E and F). Generators
are interactions of the basic factors that give the levels for the remaining
factors. The alias structure is represented in Table 3. The alias structure
shows which main effects and interactions are confounded with each other,
while the high-order interactions are assumed negligible. Since this design
is of resolution IV, the main effects will be clear of the two-factor
interactions. However, each two-factor interaction will be confounded with
at least one other two-factor interaction or a block effect. After designing
the experiments, the MRR (10^−3 g/min) is measured using Eq. (1) and the
results are illustrated in Table 4.
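
A minimal sketch of how the 16 runs follow from the generators, assuming Python with
NumPy (the actual run order and levels are those of Table 4):

import itertools
import numpy as np

# Full 2^4 design in the basic factors A, B, C, D (coded -1/+1).
base = np.array(list(itertools.product([-1, 1], repeat=4)))
A, B, C, D = base.T

# The generators I = ABCE and I = BCDF imply E = ABC and F = BCD.
E = A * B * C
F = B * C * D

design = np.column_stack([A, B, C, D, E, F])
print(design.shape)  # (16, 6): the sixteen runs of the 2^(6-2) design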

Table 6 Effect estimates


Fig. 1 Pareto chart

5.4.3 Significance tests

Using different techniques such as the Pareto chart, analysis of variance (ANOVA),
and the normal probability plot, the experiments are analyzed to determine the
significant factors.

The ANOVA (Table 5) partitions the variability in MRR into separate pieces for
each of the effects. It then tests the statistical significance of each effect by
comparing the mean square against an estimate of the experimental error. In
this case, four effects (A, D, E, and AF+DE) have P values less than 0.05,
indicating that they are significantly different from 0 at the 95.0 %
confidence level.

The Pareto chart (Fig. 1) of effects is often an effective tool for communicating
the results of an experiment. The estimated effects and interactions are sorted
from the largest absolute value to the smallest absolute value. The magnitude of
each effect is represented by a column, and often, a line going across the
columns indicates how large an effect has to be (i.e., how long a column must
be) to be statistically significant.
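
A sketch of such a chart with matplotlib, using hypothetical effect magnitudes and a
hypothetical significance line (the real values are those of Table 6 and Fig. 1):

import matplotlib.pyplot as plt

# Hypothetical effect estimates; the actual values are in Table 6.
effects = {"A": 2.1, "D": -1.8, "E": 1.5, "AF+DE": -1.2,
           "B": 0.4, "C": 0.3, "F": 0.2}
names = sorted(effects, key=lambda k: abs(effects[k]), reverse=True)
magnitudes = [abs(effects[n]) for n in names]

plt.barh(names[::-1], magnitudes[::-1])   # largest effect on top
plt.axvline(x=0.9, linestyle="--")        # hypothetical significance threshold
plt.xlabel("|effect estimate|")
plt.title("Pareto chart of effects")
plt.tight_layout()
plt.show()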

The normal probability plot (Fig. 2) confirms the results of the Pareto chart and
the ANOVA.

Fig. 2 Normal probability plot

5.4.4 Main effect plot


The main effect plot (Fig. 3) acts as an approximate tool for determining the
significant and insignificant factors. Where the difference between the
response values at the low level and the high level of a factor is large, that
factor is considered to be significant (A, D, and E). This plot also helps us
decide at which level we should maintain the significant factors in order to
achieve our target (maximizing MRR).
Fig. 3 Main effect plot

5.4.5 Interaction plot


Another graphic statistical tool is the interaction plot (Fig. 4). This type of
chart illustrates the interaction effects between independent variables. This
plot also helps to conclude at which level we should maintain the factors
of the significant interactions in order to achieve our goal (maximizing MRR).
Fig. 4 Interaction plot

5.4.6 Surface and contour plots


Response surface plots such as contour and surface plots are useful for
establishing desirable response values and operating conditions. A surface
plot generally displays a three-dimensional view that may provide a clearer
representation of the response. Both contour and surface plots help
experimenters to understand the nature of the relationship between the two
factors.

5.4.7 Prediction model


Depending on the significant factors, a regression model can be investigated
for predicting the MRR. The model takes the form

MRR = b_0 + b_1 X_1 + b_4 X_4 + b_5 X_5 + b_45 X_4 X_5 + b_16 X_1 X_6

where X_1, X_4, X_5, X_4X_5, and X_1X_6 are coded variables that correspond to
the factors A, D, and E and the DE and AF interactions, respectively. The
adequacy of the resultant model is checked by a graphical representation of
the predicted model values versus the experimental values of MRR. It is
obvious from Fig. 5 that the predicted values obtained by using the developed
mathematical model are in good agreement with the actual experimental values.
This ensures that the model is adequate and reliable for further prediction
within the specified range.

Fig. 5 Predicted vs measured values for MRR
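
A sketch of how such a model could be fitted by least squares from the coded design
columns, assuming Python with NumPy (the coefficient values themselves are not
reproduced here):

import numpy as np

def fit_mrr_model(X1, X4, X5, X6, y):
    """Least-squares fit of MRR on the significant coded terms
    X1, X4, X5, X4*X5 and X1*X6 (columns coded -1/+1, as in Table 4)."""
    X = np.column_stack([np.ones_like(X1, dtype=float),
                         X1, X4, X5, X4 * X5, X1 * X6])
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs  # b0, b1, b4, b5, b45, b16

Called with the coded columns for A, D, E and F from Table 4 and the measured MRR
vector, it returns the coefficients b_0 ... b_16 of the model above.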

5.5 Optimum configurations


Based on the results of this implementation, the optimum configurations for
maximizing the MRR are as given in Table 7.

Table 7 Optimum configurations
