
Run Design of Experiment (DOE) on Neural-Network (NN) Model

By Zivorad R. Lazic

Abstract

Mathematical modeling is the process of identifying relationships between input variables (factors) and output variables (responses). A model can be viewed as a "black box".

The goal of predictive modeling technology is to accurately predict outcomes and uncover relationships in data. A model relates a set of inputs, such as process variables, to outputs, such as fiber strength. Once built, the model can take new manufacturing inputs and produce a prediction of outputs. The closeness with which these predicted outputs match actual performance is a quality measure of the model.

Nonlinear Multi-Factor Predictive Modeling (NLMPM) based on a Neural Network (NN) [1] was used to generate a model of Rayon Fiber Strength vs. 16 Process Factors from five years of production data.

The objective of this optimization was to run a simulation environment on the desktop in which we can change controllable variables and see the marginal differences such changes make to the output being optimized. Such changes are extremely complex, and multiple changes happening at the same time make prediction even more difficult. Not only can optimization predict how much an output will be affected by changing a manufacturing factor, but it can also predict how the output will change when multiple other factors under the decision-maker's control are adjusted at the same time.

NN prediction capabilities are limited to a very narrow range of process factors, because in large-scale production operators try to keep factor settings as constant as possible. This means that if you need to search for the real optimum, you need a wider factor space, which is usually not available in a real process industry. To overcome this drawback, an NN model was first built from five years of real process data, and a DOE was then run on the NN model to explore a wider factor space and find a local or real optimum.

[1] Software: "Insights 6.0", Pavilion Technologies, Inc., Austin, TX

1.0 Introduction

Optimization is a methodology that makes a complex process as fully effective, or optimal, as possible. Optimization includes three important components: objective functions (f), variables/factors (x), and constraints. Mathematically speaking, the goal of optimization is to minimize, maximize, or reach a target value for the objective function subject to the defined constraints (stated formally below). The process of determining the relationships between process factors and responses/properties for a given process is known as mathematical modeling: identifying relationships between process factors (inputs) and responses (outputs). A model can be viewed as a "black box".

Once a mathematical model is constructed, an optimization procedure can be used to define the optimum.
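Stated formally (a standard formulation consistent with the three components above; g_i and h_j denote inequality and equality constraints):

```latex
\min_{x}\ (\text{or } \max_{x})\ \ f(x)
\quad \text{subject to} \quad
g_i(x) \le 0, \qquad h_j(x) = 0, \qquad x_{\min} \le x \le x_{\max}
```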

Mathematical modeling can be divided into three approaches:

- Fundamental (First-Principles) Modeling;
- Empirical Modeling (experimental or historic process data);
- Hybrid Modeling (systematic use of both historic process data and first-principles equations in model development).

First-Principles Modeling belongs to theoretical/physical models; it requires in-depth process knowledge, often resorts to linear approximation of nonlinear systems, and is usually computationally very expensive.

Empirical modeling is very often the only possible approach. It can be developed quickly and its results are often accurate, but it carries no concept of the physics and depends on the quality of the data generated by the experiment or process. If an experiment is used to generate the data, the Design of Experiments (DOE) technique for experimentation/optimization is the most capable and efficient tool for planning and conducting experiments and for analyzing the data generated from experiments on a small-scale or full-scale process that includes complex interactions among many process factors. The second source of data can be a "Data Historian", a fully integrated database of process and quality data. As manufacturing processes and objectives become more complex, experimental process optimization becomes more difficult and too risky in terms of producing off-spec product, which is why Nonlinear Multi-Factor Predictive Modeling (NLMPM) based on a Neural Network (NN) was used to generate the model.

These tools are useful when you have a wealth of data and not much physical knowledge. With NNs you do not have to worry about the form of the empirical model. The Neural Network modeling technique is a mathematical approach that mimics neuro-biological processes. The technique came out of psychological research, originated by McCulloch & Pitts (1943), developed by Hebb (1949), and revolutionized by Rumelhart (the back-propagation method) in the 1980s.

The biggest drawback of this approach is very poor extrapolation, which means there is no way to find an optimum beyond the observed process conditions. This was the main reason to apply DOE to generate data from the NN model and to verify the optimal conditions with a minimum number of production runs.

One common type of industrial process, rayon fiber production, is a good example of this kind of complex system.
2.0 Neural-Network (NN) Predictive Modeling

Rayon is a manufactured regenerated cellulosic fiber. Rayon is produced from naturally occurring polymers; therefore it is not a truly synthetic fiber, nor is it a natural fiber; it is considered a manufactured regenerated cellulosic fiber.

Regular rayon has lengthwise lines called striations, and its cross-section is an indented circular shape. Filament rayon yarns vary from 80 to 980 filaments per yarn and vary in size from 40 to 5000 denier. Staple fibers range from 1.5 to 15 denier and are mechanically or chemically crimped. Rayon fibers are naturally very bright, but the addition of delustering pigments cuts down on this natural brightness.

Most commercial rayon manufacturing today utilizes the viscose process. All of the early viscose production involved batch processing. In more recent times, processes have been modified to allow some semi-continuous production; for easier understanding, the viscose process is described here as a batch operation. Purified cellulose for rayon production usually comes from specially processed wood pulp. It is sometimes referred to as "dissolving cellulose" or "dissolving pulp" to distinguish it from lower-grade pulps used for papermaking and other purposes. Dissolving cellulose is characterized by a high α-cellulose content, i.e., it is composed of long-chain molecules, relatively free from lignin, hemicelluloses, and other short-chain carbohydrates. The process includes the following steps:

Steeping

The cellulose sheets are saturated with a solution of caustic soda (or sodium hydroxide) and allowed to

steep for enough time for the caustic solution to penetrate the cellulose and convert some of it into “soda

cellulose”, the sodium salt of cellulose. This is necessary to facilitate controlled oxidation of the cellulose

chains and the ensuing reaction to form cellulose xanthate.

Pressing

The soda cellulose is squeezed mechanically to remove excess caustic soda solution.
Shredding

The soda cellulose is mechanically shredded to increase surface area and make the cellulose easier to

process. This shredded cellulose is often referred to as “white crumb”.

Aging

The white crumb is allowed to stand in contact with the oxygen of the ambient air. Because of the high

alkalinity of white crumb, the cellulose is partially oxidized and degraded to lower molecular weights.

This degradation must be carefully controlled to produce chain lengths short enough to give manageable

viscosities in the spinning solution, but still long enough to impart good physical properties to the fiber

product.

Xanthation

The properly aged white crumb is placed into a churn, or other mixing vessel, and treated with gaseous carbon disulfide. The soda cellulose reacts with the CS2 to form xanthate ester groups. The carbon disulfide also reacts with the alkaline medium to form inorganic impurities which give the cellulose mixture a characteristic yellow color, and this material is referred to as "yellow crumb". Because accessibility to the CS2 is greatly restricted in the crystalline regions of the soda cellulose, the yellow crumb is essentially a block copolymer of cellulose and cellulose xanthate.

Dissolving

The yellow crumb is dissolved in aqueous caustic solution. The large xanthate substituents on the cellulose force the chains apart, reducing the interchain hydrogen bonds and allowing water molecules to solvate and separate the chains, leading to solution of the otherwise insoluble cellulose. Because of the blocks of un-xanthated cellulose in the crystalline regions, the yellow crumb is not completely soluble at this stage. Because the cellulose xanthate solution (or, more accurately, suspension) has a very high viscosity, it has been termed "viscose".


Ripening

The viscose is allowed to stand for a period of time to "ripen". Two important processes occur during ripening: redistribution and loss of xanthate groups. The reversible xanthation reaction allows some of the xanthate groups to revert to cellulosic hydroxyls and free CS2. This free CS2 can then escape or react with other hydroxyls on other portions of the cellulose chain. In this way, the ordered, or crystalline, regions are gradually broken down and more complete solution is achieved. The CS2 that is lost reduces the solubility of the cellulose and facilitates regeneration of the cellulose after it is formed into a filament.

Filtering

The viscose is filtered to remove undissolved materials that might disrupt the spinning process or cause

defects in the rayon filament.

Degassing

Bubbles of air entrapped in the viscose must be removed prior to extrusion or they would cause voids, or

weak spots, in the fine rayon filaments.

Spinning

The viscose is forced through a spinneret, a device with many small holes. Each hole produces a fine filament of viscose. As the viscose exits the spinneret, it comes in contact with a solution of sulfuric acid, sodium sulfate and, usually, Zn++ ions. Several processes occur at this point which cause the cellulose to be regenerated and precipitate from solution. Water diffuses out from the extruded viscose to increase the concentration in the filament beyond the limit of solubility. The xanthate groups form complexes with the Zn++ which draw the cellulose chains together. The acidic spin bath converts the xanthate functions into unstable xantheic acid groups, which spontaneously lose CS2 and regenerate the free hydroxyls of cellulose. (This is similar to the well-known reaction of carbonate salts with acid to form unstable carbonic acid, which loses CO2.) The result is the formation of fine filaments of cellulose, or rayon.
Stretching

The rayon filaments are stretched while the cellulose chains are still relatively mobile. This causes the

chains to stretch out and orient along the fiber axis. As the chains become more parallel, interchain

hydrogen bonds form, giving the filaments the properties necessary for use as textile fibers.

Cutting

If the rayon is to be used as staple (i.e., discrete lengths of fiber), the group of filaments (termed "tow") is passed through a rotary cutter to provide a fiber which can be processed in much the same way as cotton.

Washing

The freshly regenerated rayon contains many salts and other water soluble impurities which need to be

removed. Several different washing techniques may be used.

Drying

Drying is the last step in this process.

The historic data were collected from the production line for sixteen process factors and eight properties over the last five years (15082 data records/rows). The process factors include Cl2 concentration in bleach, lye content in viscose, cellulose content in viscose, viscose maturity, sulfur content in viscose, hemicellulose content in viscose, berol concentration in spinbath, spinneret type, sodium sulfate concentration in spinbath, viscose throughput per day, zinc sulfate concentration in spinbath, sulfuric acid concentration in spinbath, spinning speed, viscose viscosity, stretch and 2-nd KKF filterability (measured by Coulter Counter).

The measured properties/responses include Conditioned Fiber Strength (CONSTR), Fiber Yellowness, Fiber Whiteness, Dyeability, Sliver Cohesion, and three different splinter counts: Spl_1, Spl_2&3 and Spl_4.
The most important fiber property is Conditioned Fiber Strength (CONSTR), and the objective is to model it and to find the settings for optimal fiber strength, which means maximum fiber strength at minimum cost. The second objective is to use the model to predict certain properties, because certain critical properties cannot be measured on-line or cannot be measured quickly enough to be useful in controlling the process. Lab results are often too late, too infrequent, or too inaccurate to be useful for control purposes.

The complete process modeling and optimization includes a few steps:

1. Format and load data files;
2. Data preprocessing;
3. Create and train a model;
4. Use sensitivity analysis to determine which input factors have the most effect on the fiber strength;
5. Apply DOE to the NN model for the most important process factors to find the optimum.

The raw data usually come from different systems with different types of formats, and the data may be sampled at different time frequencies. Extracting data from raw data files and using it to produce a dataset suitable for building and training a model is the so-called formatting procedure. Synchronizing the sampling intervals is done in the preprocessing step.

The data preprocessing includes a Time Merge routine, which puts all data (process and QC data) on a uniform sampling interval. The Time Merge tool will interpolate, extrapolate or compress variables as necessary to place all variables on the sampling interval specified in the interval box. As a general rule, the value for the interval should be the interval at which you wish to predict the outputs. However, it should be no smaller than the smallest sampling interval of the inputs. The formatted data are presented in Table#1.1 and Table#1.2.

Table#1.1 Five Years, Process Data and Fiber Strength
Table#1.2 Five Years, Process Data and Fiber Strength
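As a rough illustration of the Time Merge step described above (a minimal pandas sketch; the file names, column layout and the one-day target interval are assumptions, not the "Insights" implementation):

```python
import pandas as pd

# Hypothetical files: process variables sampled every few hours, QC/lab
# results sampled roughly once per day, both keyed by a timestamp column.
process = pd.read_csv("process.csv", parse_dates=["timestamp"], index_col="timestamp")
quality = pd.read_csv("quality.csv", parse_dates=["timestamp"], index_col="timestamp")

interval = "1D"  # the interval at which we want to predict the outputs

# Compress the finer-sampled process data by averaging, interpolate the
# coarser QC data, and join everything on the uniform sampling interval.
merged = process.resample(interval).mean().join(
    quality.resample(interval).interpolate(method="time"), how="inner"
)
merged = merged.interpolate(limit_direction="both")  # fill remaining gaps
```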


The third step includes building, training and verifying the model. When the model is built, the "Insights" software starts to train it: the software tunes the model by passing alternately through a set of training data and a set of testing data. Each combined training/testing pass is called an "epoch". During the training pass of each epoch, the software modifies the internal structure of the model based on the error between the original output value (from the actual data) and the predicted output value (from the model). An additional set of data is reserved for validating the model after it is trained, and is never seen by the model during training. When you train a model, you can stop it manually or you can let the software stop it based on various criteria. The stopping criteria are:

- Auto Stop: training stops when the model is no longer improving on the test set;
- Final Epoch: training stops when the software has completed the specified number of epochs;
- Train Relative Error: training stops when the relative error for the training set is equal to or less than the specified error.
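The original work used Pavilion's "Insights"; as a rough open-source analogue (a sketch only, with synthetic data standing in for the 16 process factors), scikit-learn's MLPRegressor exposes the same three kinds of stopping criteria:

```python
from sklearn.datasets import make_regression
from sklearn.neural_network import MLPRegressor

# Synthetic stand-in for the real dataset: 16 inputs, one output.
X, y = make_regression(n_samples=2000, n_features=16, noise=0.1, random_state=0)

model = MLPRegressor(
    hidden_layer_sizes=(16,),
    early_stopping=True,      # "Auto Stop": hold out a test set and stop
    validation_fraction=0.2,  # when it is no longer improving
    n_iter_no_change=10,
    max_iter=500,             # "Final Epoch": hard cap on training passes
    tol=1e-4,                 # "Train Relative Error": minimum improvement
    random_state=0,
)
model.fit(X, y)
print(model.n_iter_, "epochs")
```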

The lower these Relative Errors are, the better the model is. A Relative Error of 0.0 indicates that the model predicts the data perfectly. A Relative Error greater than 1.0 indicates that the model is worse than a model that constantly predicts the mean of the data. R2 is a standard measure commonly used in linear regression. Unlike Relative Error, a higher number for R2 indicates better performance. The Relative Error decreases while R2 increases as the model improves during training. The relationship between these two measures is:

R2 = 1 - (Relative Error)^2

Relative Error is always positive.
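A short sketch of this relationship, assuming the usual definition of Relative Error as the model's RMSE divided by the RMSE of a constant mean predictor (consistent with the statement that a value above 1.0 is worse than predicting the mean):

```python
import numpy as np

def relative_error(y_true, y_pred):
    """RMSE of the model divided by the RMSE of predicting the mean."""
    rmse_model = np.sqrt(np.mean((y_true - y_pred) ** 2))
    rmse_mean = np.sqrt(np.mean((y_true - np.mean(y_true)) ** 2))
    return rmse_model / rmse_mean  # 0.0 = perfect, >1.0 = worse than mean

def r_squared(y_true, y_pred):
    # R2 = 1 - (Relative Error)^2, as stated above.
    return 1.0 - relative_error(y_true, y_pred) ** 2
```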

The Relative Error, R2 and the strip chart with original and predicted values are shown in Graph#1.

Graph#1 The Relative Error, R2 and the strip chart with original and predicted values

When training begins, the discrepancy between the predicted and actual output values is relatively high. At this point, the model cannot generalize very well from the data; that is, it cannot yet accurately predict output values from input values. As training progresses, the model continues to modify its internal structure to better represent the relationship between input and output variables. The best prediction was reached after only eight epochs. As a general rule, an R2 greater than 0.35 or so (or, equivalently, a Relative Error of 0.80 or less) indicates that the model is useful for prediction and set point recommendation.

3.0 Analyzing the Model

A few tools are available to analyze the predictive model:

- Predicted vs. Actual;
- Sensitivity vs. Rank;
- Sensitivity Report;
- Output vs. Percent;
- Sensitivity vs. Percent.

A Predicted vs. Actual analysis is used to run a dataset through a trained model and compare the model's predicted output values with the actual output values recorded in the dataset. The principal reasons for doing this are:

- To validate a model by running it on data that it never saw during training;
- To identify points that a trained model did not learn well;
- To view the training/testing Relative Error and R2 for individual variables in a model with multiple output variables;
- To generate predicted model values and include them in the dataset.

The Predicted vs. Actual plot is a "scatter plot" of the output Conditioned Fiber Strength (CONSTR) value predicted by the model against the actual output as recorded in the dataset (Graph#2).

Graph#2 The Predicted vs. Actual Plot (Scatter Plot)

A "perfect model" line, and a line displaying the best fit to the plotted points, are drawn on the plot. This plot visually displays the quality of the modeling fit, which is seen to be quite good.
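A minimal matplotlib sketch of such a plot (the synthetic y_actual and y_pred arrays are hypothetical stand-ins for the recorded and predicted CONSTR values):

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
y_actual = rng.normal(2.4, 0.05, 500)         # stand-in for recorded CONSTR
y_pred = y_actual + rng.normal(0, 0.01, 500)  # stand-in for model output

plt.scatter(y_actual, y_pred, s=8, alpha=0.5)
lims = [y_actual.min(), y_actual.max()]
plt.plot(lims, lims, "k--", label="perfect model")  # predicted = actual
slope, intercept = np.polyfit(y_actual, y_pred, 1)  # least-squares best fit
plt.plot(lims, [intercept + slope * v for v in lims], "r-", label="best fit")
plt.xlabel("Actual CONSTR")
plt.ylabel("Predicted CONSTR")
plt.legend()
plt.show()
```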

The Sensitivity vs. Rank tool shows which input variables have the greatest effect on the output variable. The influence of a change in an input on an output is called the sensitivity, or gain. The strategy for process optimization is to identify the most significant factors and optimize them first. This provides a good global (or static) optimization of the process.

The three types of sensitivity analysis are: Sensitivity vs. Rank analysis, which plots the average sensitivities in descending order (Graph#3); Output vs. Percent analysis, which creates plots of the input-output relationships (Graph#4); and Sensitivity vs. Percent analysis, which creates plots that are the derivatives of the Output vs. Percent plots (Graph#5).

Graph#3 Sensitivity vs. Rank analysis plots
Graph#4 Output vs. Percent plots of the input-output relationships
Graph#5 Sensitivity vs. Percent analysis plots

In the Sensitivity Report, the variables are ranked according to their Average Absolute values. The sensitivity values show the average impact of the inputs on the output. Average sensitivity is the average change of the output variable as the input variable increases from its minimum to its maximum value (normalized). A positive Average value indicates that, on average, the output value increases as the input variable increases. A negative Average value indicates that, on average, the output value decreases as the input value increases. Average Absolute is the average of the magnitude (absolute value) of the change in the output variable as the input variable increases from its minimum to its maximum value. Average Absolute is always positive. Average Absolute gives a general indication of the strength of the effect of an input on an output. Combined with Average, it can be used to tell whether the input-output relationship is linear, monotonic, or without a causal connection (Table#2).

Table#2 Sensitivity Report

The Output vs. Percent plot shows the details of how the output varies across the range of each input variable (Graph#4).
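A rough analogue of these sensitivity measures (a sketch, not the vendor's algorithm): sweep each input from its minimum to its maximum while holding the other inputs at their means, and accumulate the signed and absolute changes in the model's prediction.

```python
import numpy as np

def sensitivity_report(predict, X, n_steps=21):
    """Approximate Average and Average Absolute sensitivity per input.

    predict: callable mapping an (n, k) array to n predictions
    X: (n, k) array of historical input data
    """
    base = X.mean(axis=0)
    report = []
    for j in range(X.shape[1]):
        grid = np.tile(base, (n_steps, 1))
        grid[:, j] = np.linspace(X[:, j].min(), X[:, j].max(), n_steps)
        steps = np.diff(predict(grid))      # output change per input step
        average = steps.mean()              # signed: direction of the effect
        average_abs = np.abs(steps).mean()  # magnitude: strength of the effect
        report.append((j, average, average_abs))
    # Rank by Average Absolute, as in Table#2.
    return sorted(report, key=lambda r: -r[2])
```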

4.0 Findings

From Table#2 it is easy to see that the five most significant process factors affecting Conditioned Fiber Strength (CONSTR) are:

1. 2-nd KKF Filterability;
2. Fiber Stretching;
3. Viscose Viscosity;
4. Spinning Speed;
5. Spinbath H2SO4 Concentration.


Three process factors have a negative sign (2-nd KKF Filterability, Spinning Speed and Spinbath H2SO4 Concentration), which means lower is better regarding fiber strength. The two other factors have a positive sign (Fiber Stretching and Viscose Viscosity), which means higher is better.

This was the so-called screening step, performed without experiments, based on historic process data over a period of five years. The five selected process factors were then included in the optimization process using a "software designed experiment".

A Central Composite Rotatable Design (Box-Wilson design) [2] was used for the first three selected process factors to perform the software designed experiment on the NN model. This means 20 runs (including six replicated center points) were conducted on the NN model, and the predicted values for fiber strength were used to perform the process optimization (Table#3).

Table#3.1 The Central Composite Rotatable Design

[2] Software: Design Expert, Stat-Ease, Inc., Minneapolis, MN
Table#3.2 The Central Composite Rotatable Design Matrix

The factor descriptions and levels are presented below:

No.  Factor Name             Lower Level (-)  Higher Level (+)
1    2-nd KKF Filterability  30               70
2    Fiber Stretching        60               70
3    Viscose Viscosity       40               80
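A sketch of how such a design can be constructed in plain numpy (DOE packages generate the equivalent matrix). For k = 3 factors, a rotatable CCD has 8 factorial corners, 6 axial points at ±α with α = (2^k)^(1/4) ≈ 1.682, and here 6 replicated center points, for 20 runs in total:

```python
import itertools
import numpy as np

k = 3
alpha = (2 ** k) ** 0.25  # ~1.682 makes the design rotatable

factorial = np.array(list(itertools.product([-1.0, 1.0], repeat=k)))  # 8 corners
axial = np.vstack([alpha * np.eye(k), -alpha * np.eye(k)])            # 6 star points
center = np.zeros((6, k))                                             # 6 replicates
design = np.vstack([factorial, axial, center])                        # 20 x 3, coded

# Map coded levels to actual units using the factor levels above; the
# axial runs fall outside the (-1, +1) ranges, as in a circumscribed CCD.
lo = np.array([30.0, 60.0, 40.0])   # filterability, stretching, viscosity
hi = np.array([70.0, 70.0, 80.0])
runs = (lo + hi) / 2 + design * (hi - lo) / 2
```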

Now, using the same model, we use the "Insights" Set Points & What Ifs tool to simulate the behavior of the process under the various operating conditions specified in Table#3. This tool allows you to perform "What If" simulations, in other words, to do a "software designed experiment" rather than actually performing the designed experiment in the plant. This graphical interface (Table#4) allows you to change input values/factors and predict the output (fiber strength) for the 20 new inputs.
Table#4 “Insights” Set points & What Ifs tool
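In the hypothetical sketches above, this step amounts to running the 20 design rows through the trained model ('model' from the training sketch, 'runs' from the design sketch; the column positions of the three designed factors are an assumption):

```python
import numpy as np

# Schematic: the NN expects all 16 factors, so hold the other 13 at their
# historical means and substitute the three designed factors (assumed to
# sit in columns 0-2) with the 20 CCD runs from the sketch above.
X_doe = np.tile(X.mean(axis=0), (len(runs), 1))
X_doe[:, :3] = runs
predicted_strength = model.predict(X_doe)  # 20 predicted CONSTR values
```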

With the aid of the software, the Fiber Strength response data were fitted to a quadratic polynomial. Statistical analysis indicates, with a high degree of confidence of more than 99.99 percent (Table#5), that this model is significant. Therefore, although the equation only approximates the true relationship, it is more than adequate for empirical prediction in a wider experimental space than the NN predictive model alone.
Table#5

Response: FIBER STRENGTH

ANOVA for Response Surface Quadratic Model
Analysis of variance table [Partial sum of squares]

Source     Sum of Squares  DF  Mean Square  F Value     Prob > F
Model      0.15            9   0.016        5140.52     < 0.0001   significant
A          0.046           1   0.046        14540.31    < 0.0001
B          0.096           1   0.096        30572.46    < 0.0001
C          1.098E-003      1   1.098E-003   349.68      < 0.0001
A^2        2.420E-004      1   2.420E-004   77.07       0.0003
B^2        7.325E-004      1   7.325E-004   233.29      < 0.0001
C^2        1.009E-008      1   1.009E-008   3.215E-003  0.9570
AB         1.250E-005      1   1.250E-005   3.98        0.1026
AC         2.450E-005      1   2.450E-005   7.80        0.0383
BC         4.500E-006      1   4.500E-006   1.43        0.2849
Residual   1.570E-005      5   3.140E-006
Cor Total  0.15            14

Std. Dev.  1.772E-003      R-Squared        0.9999
Mean       2.42            Adj R-Squared    0.9997
C.V.       0.073           Pred R-Squared   0.9991
PRESS      1.262E-004      Adeq Precision   221.363

Final Equation in Terms of Actual Factors:

FIBER STRENGTH =
    -0.45126
    -3.39675E-003 * FILTERABILITY
    +0.075048 * STRETCHING
    +1.16682E-003 * VISCOSITY
    +1.58078E-005 * FILTERABILITY^2
    -4.40040E-004 * STRETCHING^2
    -1.02090E-007 * VISCOSITY^2
    -1.25000E-005 * FILTERABILITY * STRETCHING
    -4.37500E-006 * FILTERABILITY * VISCOSITY
    -7.50000E-006 * STRETCHING * VISCOSITY

The equation contains second-order terms which capture interactions. The relatively small but statistically significant coefficient on AC (Filterability x Viscosity) indicates significant antagonism between these factors.
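Given the fitted equation, finding the optimum settings can be sketched with scipy (the coefficients are copied verbatim from the equation above; using the factorial ranges from the design table as search bounds is an assumption):

```python
import numpy as np
from scipy.optimize import minimize

def fiber_strength(x):
    f, s, v = x  # filterability, stretching, viscosity (actual units)
    return (-0.45126
            - 3.39675e-3 * f + 0.075048 * s + 1.16682e-3 * v
            + 1.58078e-5 * f**2 - 4.40040e-4 * s**2 - 1.02090e-7 * v**2
            - 1.25000e-5 * f * s - 4.37500e-6 * f * v - 7.50000e-6 * s * v)

# Maximize strength by minimizing its negative inside the explored region.
res = minimize(lambda x: -fiber_strength(x), x0=np.array([50.0, 65.0, 60.0]),
               bounds=[(30, 70), (60, 70), (40, 80)])
print("optimal settings:", res.x, "predicted strength:", -res.fun)
```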

Based on this model, the software can produce extremely useful maps. Graph#6 shows 3D response surfaces, with projected contour maps, for the fiber strength.

Graph#6 Response Surface Plots

[Graph#6: four DESIGN-EXPERT response-surface panels showing FIBER STRENGTH plotted against pairs of the factors FILTERABILITY, STRETCHING and VISCOSITY, with the remaining factor held at a fixed value.]

The optimal process factor settings were verified on the production line.


5.0 Discussion

Designed experiments (DOE) are often used to determine which input variables/factors a particular response/property is most sensitive to, i.e., which factors are the most important. Such designs are so-called screening designs. Another use of a designed experiment is to "find the recipe", that is, to determine the settings, or setpoints, that optimize the output/property. Since it is difficult and very expensive to perform designed experiments in the plant, it is a major advantage to be able to simulate such experiments using a model. As demonstrated above, this approach allows you to perform a "software designed experiment" that simulates a designed experiment in the plant. You can simulate all possibilities and strategies in an off-line software package and then conduct a verification run in the run-time plant environment.

References

1. McCulloch, W. S. and Pitts, W. H., "A logical calculus of the ideas immanent in nervous activity", Bulletin of Mathematical Biophysics, 5:115-133 (1943)

2. Hebb, D. O., The Organization of Behavior, John Wiley & Sons, New York (1949)

3. Rumelhart, D. E., Hinton, G. E., and Williams, R. J., "Learning internal representations by error propagation" (1986)

4. Box, G. E. P., Hunter, W. G. and Hunter, J. S., Statistics for Experimenters, Wiley, New York (1978)

5. Montgomery, D. C., Design and Analysis of Experiments, 3rd ed., Wiley, New York (1991)

6. Lazic, Z. R., Design of Experiments in Chemical Engineering, Wiley-VCH, Weinheim (2004)

7. Anderson, M. J. and Whitcomb, P. J., "Optimization of Paint Formulation Made Easy with Computer-Aided Design of Experiments for Mixtures", J. Coatings Tech., p. 71 (July 199)

8. Bayne, C. K. and Rubin, I. B., Practical Experimental Designs and Optimization Methods for Chemists, VCH Publishers, Inc., Deerfield Beach, Florida (1986)

9. Box, G. E. P. and Draper, N. R., Empirical Model-Building and Response Surfaces, John Wiley & Sons, Inc., New York (1987)

10. Morgan, E., Chemometrics: Experimental Design, John Wiley & Sons, Inc., New York (1991)

11. Cornell, J. A., Experiments with Mixtures, John Wiley & Sons, Inc., New York (1981)

12. Hunter, J. S., "Applying Statistics to Solving Chemical Problems", Chemtech, 17, 167-169 (1987)

13. Hendrix, C., "What Every Technologist Should Know About Experimental Design", Chemtech, 9, 167-174 (1979)

14. Bishop, T., et al., "Another Look at the Statistician's Role in Experimental Planning and Design", The American Statistician, 36, 387-389 (1982)
