18MEO113T - DOE - Unit 2 - AY2023-24 Even
Handled by
A.C. Arun Raj,
Assistant Professor,
Department of Mechanical Engineering,
SRM IST, Kattankulathur.
Disclaimer
The content in this presentation is drawn from various sources and is used for educational
purposes only. Thanks to all the sources.
Unit 2
2
Unit 2
3. Selection of design parameters so that the product will work well under a wide variety of field conditions
4
Need for DOE methodology
5
Need for DOE methodology
– Optimizing a process
– Designing a product
– Formulating a product
6
Need for DOE methodology
• Characterizing a process
7
Need for DOE methodology
• Optimizing a process
8
Need for DOE methodology
• Optimizing a process
9
Need for DOE methodology
• Designing a Product
10
Need for DOE methodology
• Designing a Product
11
Need for DOE methodology
• Formulating a Product
12
Need for DOE methodology
13
Need for DOE methodology
14
Barriers in the Successful Application of DOE
• Educational barriers
• Management barriers
• Cultural barriers
• Communication barriers
• Other barriers
15
Barriers in the Successful Application of DOE
• Educational barriers
– The word ‘statistics’ invokes fear in many industrial engineers.
– The fundamental problem begins with the current statistical education for the engineering
community in their academic curriculum. The courses currently available in ‘engineering
statistics’ often tend to concentrate on the theory of probability, probability distributions and
more mathematical aspects of the subject, rather than practically useful techniques such as
DOE, Taguchi method, robust design, gauge capability studies, Statistical Process Control
(SPC), etc.
16
Barriers in the Successful Application of DOE
• Management barriers
– Managers often don’t understand the importance of DOE in problem solving or don’t
appreciate the competitive value it brings into the organisation. In many organisations,
managers encourage their engineers to use the so-called ‘home-grown’ solutions for
process and quality-related problems.
– These ‘home-grown’ solutions are consistent with the OVAT approach to experimentation,
as managers are always after quick-fix solutions which yield short-term benefits to their
organisations.
17
Barriers in the Successful Application of DOE
• Management barriers
– Responses from managers with high resistance to change may include the following
• DOE tells me what I already know.
– Many managers do not instinctively think statistically, mainly because they are not
convinced that statistical thinking adds any value to management and decision-making.
18
Barriers in the Successful Application of DOE
• Cultural barriers
– Cultural barriers are one of the principal reasons why DOE is not commonly used in many
organisations.
– Many organisations are not culturally ready for the introduction and implementation of
advanced quality improvement techniques such as DOE and Taguchi.
– The best way to overcome this barrier is through intensive training programs and by
demonstrating the successful application of such techniques by other organisations during
the training.
19
Barriers in the Successful Application of DOE
• Communication barriers
– Research has indicated that there is very little communication between the academic and
industrial worlds.
– For the successful initiative of any quality improvement programme, these communities
should work together and make this barrier less formidable.
20
Barriers in the Successful Application of DOE
• Other barriers
– Negative experiences with DOE may make companies reluctant to use DOE again. The
majority of negative DOE experiences can be classified into two groups. The first relates to
technical issues and the second to non-technical issues.
– not choosing the appropriate levels for the process variables, etc.; non-linearity or curvature effects of process variables
should be explored to determine the best operating process conditions;
– failing to assess the impact of ‘uncontrolled variables’ which can influence the output of the process. Experimenters should try to
understand how the ‘uncontrolled variables’ influence the process behaviour and devise strategies to minimise their impact as
much as possible;
– lacking awareness of the assumptions behind the data analysis, of the different alternatives available and of why they are needed, etc.
21
Barriers in the Successful Application of DOE
22
Barriers in the Successful Application of DOE
• Educational barriers
– The current statistical education for the engineering community in the academic curriculum is not sufficient.
• Management barriers
– Managers encourage their engineers to use ‘home-grown’ solutions for process- and quality-related problems.
• Cultural barriers
– A principal reason: reluctance and fear of embracing DOE.
• Communication barriers
– Gap between academia and industry; lack of knowledge spanning both engineering and statistics.
• Other barriers
– Technical and non-technical issues might bring negative experiences with DOE, which makes engineers reluctant
to use DOE.
23
Barriers in the Successful Application of DOE
24
Barriers in the Successful Application of DOE
25
A Practical Methodology for DOE
These are:
1. planning phase
2. designing phase
3. conducting phase
4. analysing phase.
26
Planning Phase
• Many engineers pay special attention to the statistical details of DOE and
very little attention to the non-statistical details.
28
Planning Phase
30
Planning Phase
31
Planning Phase
2. Selection of Response or Quality Characteristic
• Experimenters should define the measurement system prior to performing
the experiment in order to understand what to measure, where to measure
and who is doing the measurements, etc. so that various components of
variation (measurement system variability, operator variability, part
variability, etc.) can be evaluated.
33
Planning Phase
34
Planning Phase
• Uncontrollable variables (or noise variables) are those which are difficult or
expensive to control in actual production environments.
35
Planning Phase
36
Planning Phase
37
Planning Phase
• However, for qualitative variables, more than two levels may be required. If
a non-linear function is expected by the experimenter, then it is advisable to
study variables at three or more levels.
• This would assist in quantifying the nonlinear (or curvature) effect of the
process variable on the response function.
38
Planning Phase
• In order to effectively interpret the results of the experiment, it is highly desirable to have a
good understanding of the interaction between two process variables (Marilyn, 1993).
• The best way to relate to interaction is to view it as an effect, just like a factor or process
variable effect.
• In the context of DOE, we generally study two-factor (second-order) interactions.
• The number of two-factor interactions within an experiment can easily be obtained by using
a simple equation, shown below:
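For k factors, the count is the standard combination formula:

Number of two-factor interactions = k(k − 1)/2

For example, an experiment with four factors contains 4 × 3 / 2 = 6 two-factor interactions.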
39
Planning Phase
40
Designing Phase
• Two people flipping two different coins would result in the effect of the person and the effect of the coin being
confounded.
• It is good practice to have the design matrix ready for the team prior to
executing the experiment.
41
Designing Phase
• The design matrix generally reveals all the settings of factors at different
levels and the order of running a particular experiment.
42
Conducting Phase
• This is the phase in which the planned experiment is carried out and the
results are evaluated.
– availability of materials/parts, operators, machines, etc. required for carrying out the
experiment;
43
Conducting Phase
• The following steps may be useful while performing the experiment in order
to ensure that it is performed according to the prepared experimental design
matrix (or layout).
– The person responsible for the experiment should be present throughout the experiment. In
order to reduce the operator-to-operator variability, it is best to use the same operator for
the entire experiment.
– Monitor the experimental trials. This is to find any discrepancies while running the
experiment. It is advisable to stop running the experiment if any discrepancies are found.
– Record the observed response values on the prepared data sheet or directly into the
computer.
– Any experiment deviations and unusual occurrences must be recorded and analysed.
44
Analysing Phase
• Having performed the experiment, the next phase is to analyze and interpret
the results so that valid and sound conclusions can be derived.
– Determine the design parameter levels that yield the optimum performance.
45
Analysing Phase
• Statistical methods should be used to analyze the data.
• Analysis of variance (ANOVA) is widely used to test the statistical significance of the effects through the
F-test.
• Confidence interval estimation is also part of the data analysis. Empirical models are
developed relating the dependent (response) and independent variables (factors).
• Residual analysis and model adequacy checking are also part of the data analysis
procedure.
• Graphical analysis and normal probability plots of the effects may be preferred by industry.
46
Analytical Tools of DOE
Main Effects Plot
• A main effects plot is a plot of the mean response values at each level of a
design parameter or process variable.
• One can use this plot to compare the relative strength of the effects of
various factors.
• The sign and magnitude of a main effect would tell us the following:
– The sign of a main effect tells us the direction of the effect, that is,
whether the average response value increases or decreases.
– The magnitude tells us the strength of the effect.
47
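As an illustration of how such a plot is built, here is a minimal Python sketch with made-up response data (not taken from the lecture examples):

```python
# Sketch: main effects plot for one 2-level factor.
# The mean response is computed at each coded level (-1 = low, +1 = high);
# the slope of the line (sign and magnitude of the effect) is what the plot conveys.
import numpy as np
import matplotlib.pyplot as plt

factor = np.array([-1, +1, -1, +1, -1, +1, -1, +1])                     # coded settings per run
response = np.array([26.0, 32.0, 28.0, 35.0, 25.0, 33.0, 27.0, 36.0])   # illustrative data

low_mean = response[factor == -1].mean()
high_mean = response[factor == +1].mean()
effect = high_mean - low_mean          # sign = direction, magnitude = strength

plt.plot([-1, +1], [low_mean, high_mean], marker="o")
plt.xticks([-1, +1], ["low", "high"])
plt.xlabel("Factor level")
plt.ylabel("Mean response")
plt.title(f"Main effects plot (effect = {effect:.2f})")
plt.show()
```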
Analytical Tools of DOE
Main Effects Plot
48
Analytical Tools of DOE
Main Effects Plot
Effect of a process variable on the response (figure)
49
Analytical Tools of DOE
Interactions Plots
– An interactions plot is a powerful graphical tool which plots the mean
response of two factors at all possible combinations of their settings.
– If the lines are parallel, this indicates that there is no interaction between
the factors.
– Non-parallel lines are an indication of the presence of interaction
between the factors.
50
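A minimal sketch of an interactions plot in Python (the mean values are illustrative, not from the lecture data); parallel lines would indicate no interaction, non-parallel lines indicate interaction:

```python
# Sketch: interactions plot for two 2-level factors A and B.
import matplotlib.pyplot as plt

# Mean response at each (A, B) combination -- illustrative values only
means = {(-1, -1): 20.0, (+1, -1): 30.0, (-1, +1): 28.0, (+1, +1): 25.0}

for b, style in [(-1, "o-"), (+1, "s--")]:
    plt.plot([-1, +1], [means[(-1, b)], means[(+1, b)]], style,
             label=f"B {'low' if b < 0 else 'high'}")

plt.xticks([-1, +1], ["A low", "A high"])
plt.ylabel("Mean response")
plt.legend()
plt.title("Interactions plot (non-parallel lines indicate interaction)")
plt.show()
```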
Analytical Tools of DOE
Interactions Plots
51
Analytical Tools of DOE
Cube Plots
– Cube plots display the average response values at all combinations of
process or design parameter settings.
– One can easily determine the best and worst combinations of factor
levels for achieving the desired optimum response.
– A cube plot is useful to determine the path of steepest ascent or descent
for optimisation problems.
52
Analytical Tools of DOE
Cube Plots
– Figure below illustrates an example of a cube plot for a cutting tool life optimisation study with
three tool parameters: cutting speed, tool geometry and cutting angle.
– The graph indicates that tool life increases when cutting speed is set at low level and cutting angle
and tool geometry are set at high levels.
– The worst condition occurs when all factors are set at low levels.
53
Analytical Tools of DOE
54
Analytical Tools of DOE
Pareto Plot of Factor Effects
– To detect the factor and interaction effects that are most important to
the process or design optimisation
– It displays the absolute values of the effects, and draws a reference
line on the chart.
– Any effect that extends past this reference line is potentially important.
55
Analytical Tools of DOE
Pareto Plot of Factor Effects
– The graph shows that factors B and C
and interaction AC are most important.
– It is always good practice to check the
findings from a Pareto chart against a
Normal Probability Plot (NPP) of the
estimates of the effects.
– The Pareto chart reflects the 80:20 principle.
56
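A hypothetical sketch of such a Pareto plot in Python (the effect values and the reference line are made up; Minitab derives its reference line from a t-value or Lenth's pseudo standard error, which is not reproduced here):

```python
# Sketch: Pareto plot of absolute factor/interaction effects.
import matplotlib.pyplot as plt

effects = {"A": 1.2, "B": 6.5, "C": 5.1, "AB": 0.8, "AC": 3.9, "BC": 0.5}  # illustrative
reference = 2.3                                   # assumed significance threshold

names = sorted(effects, key=lambda k: abs(effects[k]))
plt.barh(names, [abs(effects[k]) for k in names])
plt.axvline(reference, color="red", linestyle="--", label="reference line")
plt.xlabel("|effect|")
plt.legend()
plt.title("Pareto plot of effects")
plt.show()
```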
Analytical Tools of DOE
Pareto Plot of Factor Effects
– Let us take an example, where we need to prepare a chart of feedback analysis
for XYZ restaurant, as per the reviews and ratings received from the customers.
Here the customers are given a checklist of four points based on which they have
to rate the restaurant out of 10. The four points are:
• Taste of the food
• Quality of the food
• Price
• Presentation
57
Analytical Tools of DOE
Pareto Plot of Factor Effects
58
Analytical Tools of DOE
NPP (Normal Probability Plot) of Factor Effects
– For NPP, the main and interaction effects of factors or process (or design)
parameters should be plotted against cumulative probability (%).
– Inactive main and interaction effects tend to fall roughly along a straight
line, whereas active effects tend to appear as extreme points falling off
each end of the straight line (Benski, 1989).
59
Analytical Tools of DOE
NPP of Factor Effects
– These active effects are
judged to be statistically
significant.
– The results are identical
to those of a Pareto plot
of factor/interaction effects.
60
Analytical Tools of DOE
NPP of Residuals
– One of the key assumptions for the statistical analysis
of data from industrial experiments is that the data
come from a normal distribution.
– In order to check the data for normality, it is best to
construct an NPP (Normal Probability Plot ) of the
residuals.
– NPPs are useful for evaluating the normality of a data
set, even when there is a fairly small number of
observations.
– Here a residual is the difference between the
observed value (obtained from the experiment) and
the predicted or fitted value.
61
Analytical Tools of DOE
NPP of Residuals
– If the residuals fall approximately along a straight line, they are then
normally distributed.
– In contrast, if the residuals do not fall fairly close to a straight line, they
are then not normally distributed and hence the data do not come from a
normal population.
– In the graph , points fall fairly close to a straight line, indicating that the data
are approximately normal.
62
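A minimal sketch of how such a check might be done in Python with scipy (the observed/predicted values are illustrative only):

```python
# Sketch: normal probability plot of residuals.
# residual = observed response - fitted (predicted) response
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

observed  = np.array([26.0, 32.0, 55.0, 40.0, 28.0, 33.0, 53.0, 42.0])   # illustrative
predicted = np.array([27.3, 31.1, 54.4, 39.2, 27.3, 31.1, 54.4, 39.2])   # illustrative
residuals = observed - predicted

stats.probplot(residuals, dist="norm", plot=plt)   # points near the line => approx. normal
plt.title("Normal probability plot of residuals")
plt.show()
```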
Analytical Tools of DOE
NPP of Residuals
63
Analytical Tools of DOE
Response Plots
64
Analytical Tools of DOE
Response Contour Plots
• In this Figure, the tool life increases with an
increase in cutting angle and a decrease in
cutting speed.
• If the regression model (i.e. first-order model)
contains only the main effects and no interaction
effect, the fitted response surface will be a plane
(i.e. contour lines will be straight).
• If the model contains interaction effects, the
contour lines will be curved and not straight.
• The contours produced by a second-order
model will be elliptical in nature.
65
Analytical Tools of DOE
Response Surface Plots
– A response surface plot shows the relationship between a dependent variable (Y) and two
independent variables (X and Z); the plot displays the fitted surface in three dimensions.
66
Analytical Tools of DOE
Response Surface Plots
– Moreover, a fitted surface, as in the latter figure, can be used to find a direction of
potential improvement for a process.
67
Model for Predicting Response Function
• This regression model is used to predict the response for different
combinations of the process parameters (or design parameters) at their best
levels.
– A positive sign indicates that as the predictor variable increases, the response variable also
increases.
– A negative sign indicates that as the predictor variable increases, the response variable
decreases.
Example: The coefficient value represents the mean change in the response
given a one unit change in the predictor. For example, if a coefficient is +3, the
mean response value increases by 3 for every one unit change in the predictor.
69
Overview of Linear Models
70
Least Squares Regression
71
Introduction to Regression Analysis
72
Linear Regression Model
73
Simple Linear Regression Model
74
Simple Linear Regression Model
Dependent variable (Y)
The sum of squares of the errors should be minimal; only then can we say that we have the best regression line.
75
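A minimal sketch of fitting a simple linear regression by least squares in Python (illustrative data), showing the sum of squared errors (SSE) that the fitted line minimises:

```python
# Sketch: least-squares fit of y = b0 + b1*x and its SSE.
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 4.3, 5.9, 8.2, 9.8])          # illustrative data

b1, b0 = np.polyfit(x, y, deg=1)                  # least-squares slope and intercept
y_hat = b0 + b1 * x
sse = np.sum((y - y_hat) ** 2)                    # minimised by the least-squares line

print(f"y_hat = {b0:.3f} + {b1:.3f} x   (SSE = {sse:.4f})")
```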
Simple Linear Regression Model
76
Simple Linear Regression Model
77
Regression Model
78
Regression Model
79
Model for Predicting Response Function
• For factors at 2-levels, the regression coefficients are obtained by
dividing the estimates of effects by 2.
• The term ‘ε’ is the random error component, which is approximately normally and
independently distributed with mean zero and constant variance σ².
• The regression coefficient β12 corresponds to the interaction between the process parameters x1
and x2
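Since the model equation itself is not reproduced in this text version of the slide, for reference a first-order model with two-factor interaction terms for three coded factors (matching the β12 notation above) can be written as:

y = β0 + β1x1 + β2x2 + β3x3 + β12x1x2 + β13x1x3 + β23x2x3 + ε

where β0 is the grand mean of the response and each βi equals half the corresponding effect estimate.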
80
Model for Predicting Response Function
• For example, the regression model for the cutting tool life optimisation study is
given by
• The response values obtained from the above equation are called predicted values, and the
actual response values obtained from the experiment are called observed values.
81
Model for Predicting Response Function
• We can predict the cutting tool life for various combinations of these tool
parameters.
• For instance, if all the cutting tool life parameters are kept at low level settings,
the predicted tool life then would be
82
Model for Predicting Response Function
• The observed value of tool life (refer to cube plot) is 26 h.
• The difference between the observed value and predicted value (i.e. residual) is
− 1.332.
• Similarly, if all the cutting tool life parameters are kept at the optimal condition (i.e.
cutting speed = low, tool geometry = high and cutting angle = high), the predicted
tool life would then be
83
Model for Predicting Response Function
• The number of confirmatory runs at the optimal settings can vary from 4 to 20
(4 runs if expensive, 20 runs if cheap).
84
Confidence Interval for the Mean Response
• The statistical confidence interval (CI) (at 99% confidence limit) for the mean
response can be computed using the equation
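The equation itself is not reproduced in this text version; a commonly used form, consistent with the worked numbers later in this section (approximately a 99% limit), is:

CI = ȳ ± 3(SD/√n)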
where
ȳ = mean response obtained from confirmation
SD = standard deviation of response obtained from confirmation
trials
n = number of samples (or confirmation runs).
85
Confidence Interval for the Mean Response
• For the cutting tool life example, five samples were collected from the process at the
optimal condition (i.e. cutting speed = low, tool geometry = high and cutting angle =
high).
• Confirmation trials: ȳ = 53.71 h, SD = 0.654 h, n = 5
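Applying the interval form sketched above to these confirmation-trial values gives:

CI = 53.71 ± 3(0.654/√5) = 53.71 ± 0.88 → (52.83, 54.59) h

so the predicted value of 54.384 h quoted on the next slide lies inside this interval.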
86
Confidence Interval for the Mean Response
• As the predicted value (54.384) based on the regression model falls within the
statistical CI, we will consider our model good.
87
Confidence Interval for the Mean Response
• If the results from the confirmation trials or runs fall outside the statistical CI, possible causes must be
identified. Some of the possible causes may be
– measurement error
– wrong assumptions regarding interactions
88
Confidence Interval for the Mean Response
• If the results from the confirmatory trials or runs are within the CI, then improvement action on the
process is recommended.
• The new process or design parameters should be implemented with the involvement of top
management.
• After the solution has been implemented, control charts on the response(s) or key process
parameters should be constructed for constantly monitoring, analyzing, managing and improving the
process performance.
89
Screening Designs
• In many process development and manufacturing applications, the number of potential process or design parameters (factors) is large.
• Screening reduces the number of process or design parameters (or factors) by identifying the key factors that affect product quality or process performance.
• This reduction allows one to focus process improvement efforts on the few really important factors, the ‘vital few’.
90
Types of Factorial Design
91
Full Factorial Design
92
Full Factorial Design
93
Full Factorial Design
94
Fractional Factorial Design
95
96
97
98
99
100
101
Resolution Design
102
Resolution Design
103
104
Screening Designs
• Screening designs provide an effective way to consider a large number of process or design
parameters (or factors) in a minimum number of experimental runs or trials (i.e. with minimum
resources and budget).
• The purpose of screening designs is to identify and separate out those factors that demand further
investigation.
• For screening designs, experimenters are generally not interested in investigating the nature of the
interactions among the factors.
105
P-B design
• Focus on screening designs expounded by R.L. Plackett and J.P. Burman in 1946 – hence the
name P–B designs.
• P–B designs are based on Hadamard matrices in which the number of experimental runs or trials is
a multiple of four, i.e. N = 4, 8, 12, 16 and so on, where N is the number of trials/runs (Plackett
and Burman, 1946).
• P–B designs are suitable for studying up to k = (N−1)/(L−1) factors, where L is the number of levels.
• For instance, using a 12-run experiment, it is possible to study up to 11 process or design parameters
at 2-levels.
106
P-B design
• One of the interesting properties of P–B designs is that all main effects are estimated with the
same precision. This implies that one does not have to anticipate which factors are most likely to
be important.
• Resolution III design → no main effect is confounded (aliased) with any other main effect, but
main effects are confounded with two-factor interactions.
107
P-B design
• The aim of a P–B design is to study as many factors as possible in a minimum number of trials and to
identify those that need to be studied in further rounds of experimentation in which interactions can be
investigated.
• Thus, they can determine which factors are important, i.e. only the main effects are of
interest (not interaction effects).
• Geometric P–B designs are equivalent to fractional factorial designs; non-geometric P–B designs have
run sizes that are multiples of four but not powers of two.
• When k = N − 1, there are no degrees of freedom available for estimating the error.
108
P-B Designs
Advantages:
• A limited number of runs to evaluate a large number of factors.
• Important main effects can be selected for more in-depth study.
109
Disadvantage of P-B Designs
• Their disadvantage is their complexity.
• Geometric P–B designs are resolution III designs, and therefore main effects are confounded with two-factor interactions.
110
P-B Design vs. Fractional factorial
111
P-B Designs
• Geometric
– run sizes are powers of two (N = 4, 8, 16, 32, etc.)
– Geometric designs are identical to fractional factorial designs in which one may be able
to study the interactions between factors.
• Non-geometric
– run sizes are multiples of four but not powers of two, e.g. runs of 12, 20, 24, 28, etc.
112
P-B Designs
113
P-B Designs
114
P-B Designs
• Step 1: Assign the levels for the first factor as per the P–B design generator table.
• Step 2: Keep the last run/trial at the low level for all factors.
• Step 3: Assign the levels for consecutive factors by cyclically shifting the previous factor’s column, as represented (see the sketch after this list).
(Generator vectors table)
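A minimal Python sketch of the cyclic construction described in the steps above. The generator row used here is a commonly tabulated 12-run Plackett–Burman vector; it is an assumption for illustration and should be checked against the generator table referenced on the slide.

```python
# Sketch: build a 12-run, up-to-11-factor Plackett-Burman design.
# Column 1 comes from an assumed, commonly tabulated generator vector;
# each further factor column is a cyclic shift of the previous one (Step 3),
# and the final run sets every factor to its low level (Step 2).
import numpy as np

generator = np.array([+1, +1, -1, +1, +1, +1, -1, -1, -1, +1, -1])  # runs 1-11, factor 1

columns = [np.roll(generator, k) for k in range(11)]        # one shifted copy per factor
design = np.column_stack(columns)                           # 11 runs x 11 factors so far
design = np.vstack([design, -np.ones(11, dtype=int)])       # run 12: all factors low (-1)

print(design.shape)   # (12, 11): 12 runs, up to 11 two-level factors
print(design)
```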
P-B Designs
117
Ex 1. Paperboard product
Number of factors: 7
Number of levels: 2
Hence, it is a 2⁷ design.
Experimental design:
Full factorial 2⁷ = 128 runs
P–B design = 8 runs
119
Ex 1. Paperboard product
120
Ex 1. Paperboard product
Finding key main effects
• Minitab software was used for plotting.
• The aim is to identify the key main effects that were most influential on the response (i.e.
force).
Active Effects
121
Ex 1. Paperboard product
Pareto chart – substantiation (significant effects: C, E and B)
122
Ex 1. Paperboard product
Main effects
Conclusion: Main effects C (press roll pressure), E (paste type) and B (amount of
additive) are found to have significant impact on the mean puncture resistance
(i.e. the force required to penetrate the paper board).
123
Ex 1. Paperboard product
Design Matrix
In order to analyze the factors affecting variability in force, we need to
calculate the SD of observations at each experimental design point.
124
Ex 1. Paperboard product
Normal plot: Effects on SD
The normal plot indicates that only factor F (cure time) influenced the variation in
the puncture resistance (i.e. force). Further analysis of factor F has revealed that
variability is maximum when cure time is set at high level (i.e. 5 days).
125
Ex 1. Paperboard product
Conclusion
• Factors C, B and E have a significant impact on process average, whereas
factor F has a significant impact on process variability.
• Other factors such as A, D and G can be set at their economic levels since
they do not appear to influence either the process average or the process
variability.
• The next stage of the experimentation would be to consider the interaction
among the factors and select the optimal settings from the experiment that
yields maximum force with minimum variability. [ Full or fractional factorial with
resolution 4]
126
Ex 2. Plastic Extrusion Process
127
Ex 2. Plastic Extrusion Process
• Design Matrix
128
Ex 2. Plastic Extrusion Process
129
Ex 2. Plastic Extrusion Process
• Main Effects
130
Introduction to full factorial design, basic concepts of 2², 2³ and 2ᵏ designs
and 3-levels.
• Factorial designs would enable an experimenter to study the joint effect of the factors (process or design parameters) on a response.
131
Introduction to full factorial design, basic concepts of 2², 2³ and 2ᵏ designs
• The factorial approach, most importantly, varies the factors simultaneously rather than one factor
at a time.
• Using this approach, the tester can examine both main effects (the effect of an individual factor on the response) and interaction effects.
132
Introduction to full factorial design, basic concepts of 2², 2³ and 2ᵏ designs
133
Introduction to full factorial design, basic concepts of 2², 2³ and 2ᵏ designs
134
Introduction to full factorial design, basic concepts of 2², 2³ and 2ᵏ designs
135
Introduction to full factorial design, basic concepts of 2², 2³ and 2ᵏ designs
136
Introduction to full factorial design, basic concepts of 2², 2³ and 2ᵏ designs
137
Introduction to full factorial design, basic concepts of 2², 2³ and 2ᵏ designs
138
2² factorial designs
F value = variance of the group means (Mean Square Between) / mean of the within-group variances (Mean Squared Error)
139
Example 2² design
140
Example 2² design
141
Example 2² design
142
2² factorial designs
143
F-Distribution Table (alpha = 0.05) for Critical Value
144
F-Distribution Table (alpha = 0.025) for Critical
Value
145
F-Distribution Table (alpha = 0.1) for Critical Value
146
Example 2² design
• As an example, consider an investigation into the effect of the concentration of the reactant
and the amount of the catalyst on the conversion (yield) in a chemical process. The
objective of the experiment was to determine if adjustments to either of these two factors
would increase the yield. Let the reactant concentration be factor A and let the two levels of
interest be 15 and 25 percent. The catalyst is factor B, with the high level denoting the use
of 2 pounds of the catalyst and the low level denoting the use of only 1 pound. The
experiment is replicated three times, so there are 12 runs.
147
2² factorial designs
148
Example 2² design
149
Example 2² design
150
2² factorial designs
Example:
SST = [28² + 36² + 18² + 31² + 25² + 32² + 19² + 30² + 27² + 32² + 23² + 29²] − [80 + 100 + 60 + 90]²/4n
Example 2² design
152
Example 2² design
F critical values: factor A: F(1,8) = 5.3177; factor B: F(1,8) = 5.3177; factor AB: F(1,8) = 5.3177
153
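A minimal computational sketch (not the lecture's Minitab output) of the ANOVA for this 2² example, using the replicate data implied by the SST expression above (treatment totals (1) = 80, a = 100, b = 60, ab = 90 with n = 3 replicates); the resulting F values can be compared against the critical value F(1,8) = 5.32 quoted above.

```python
# Sketch: manual ANOVA for a replicated 2^2 factorial (Yates' notation).
import numpy as np

n = 3
data = {                      # treatment combination -> replicate observations
    "(1)": [28, 25, 27],      # A low,  B low
    "a":   [36, 32, 32],      # A high, B low
    "b":   [18, 19, 23],      # A low,  B high
    "ab":  [31, 30, 29],      # A high, B high
}
t = {k: sum(v) for k, v in data.items()}            # treatment totals

# Contrasts and sums of squares for a 2^2 factorial
contrast_A  = t["a"] + t["ab"] - t["b"] - t["(1)"]
contrast_B  = t["b"] + t["ab"] - t["a"] - t["(1)"]
contrast_AB = t["ab"] + t["(1)"] - t["a"] - t["b"]

SS_A, SS_B, SS_AB = (c ** 2 / (4 * n) for c in (contrast_A, contrast_B, contrast_AB))
all_obs = np.concatenate([np.array(v, dtype=float) for v in data.values()])
SS_T = np.sum(all_obs ** 2) - all_obs.sum() ** 2 / (4 * n)
SS_E = SS_T - SS_A - SS_B - SS_AB
MS_E = SS_E / (4 * n - 4)                           # 8 error degrees of freedom

for name, ss in [("A", SS_A), ("B", SS_B), ("AB", SS_AB)]:
    print(f"{name}: SS = {ss:6.2f}, F = {ss / MS_E:5.2f}")   # compare with F(1,8) = 5.32
```

With these data the sketch gives F ≈ 53.2 for A, ≈ 19.1 for B and ≈ 2.1 for AB, so both main effects exceed the critical value while the interaction does not.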
F-Distribution Table (alpha = 0.05) for Critical Value
154
Example (2) 2² design
An article in the AT&T Technical Journal describes the application of two-level
factorial designs to integrated circuit manufacturing. A basic processing step in
this industry is to grow an epitaxial layer on polished silicon wafers. The wafers
are mounted on a susceptor and positioned inside a bell jar. Chemical vapors are
introduced through nozzles near the top of the jar. The susceptor is rotated, and heat
is applied.
Factor A is deposition time and factor B is arsenic flow rate.
The two levels of deposition time are short (−) and long (+).
The two levels of arsenic flow rate are 55% (−) and 59% (+).
n = 4 replications
155
Example (2) 2² design
156
Example (2) 2² design
157
2³ factorial design
158
2³ factorial design
159
2³ factorial design
160
2³ factorial design
161
Example: 2³ factorial design
162
Example: 2³ factorial design
163
Thank you
164