
Recent Developments and Challenges in Surrogate Model Based Optimal Design of Engineering Systems

Dr. R. BALU
Abstract— Computer simulation has become the backbone of modern engineering design. Multi-disciplinary optimal design through simulation uses sophisticated models that take long computing times, even on today's most powerful supercomputers. In this context, surrogate models come to the designer's rescue. These are basically cheap-to-compute models fitted to the data generated by the expensive-to-run computer simulation models. This paper addresses some of the basic issues in surrogate model development, reviews some of the recent research in this area and focuses on the challenges that lie ahead.

Keywords—Optimal Design, Surrogate Models, Engineering Systems, Computer Simulation.

I. INTRODUCTION

Over the past few decades, there has been an exponential growth in the ability of engineers to evolve sophisticated models to simulate various practical phenomena and to determine with precision how a complex product or device will perform even before the "metal is cut". This has been made possible by the availability of computing resources, both in terms of memory and speed, at very affordable cost. The development of efficient numerical algorithms for solving complicated mathematical model equations, which are in general highly non-linear partial differential equations with complex boundary conditions, has also aided the proliferation of these simulation tools in various engineering disciplines. There has been, therefore, a drastic and perceptible change in the way we design and develop today's practical engineering systems. The conventional build-test-break-build approach of yesteryear, typically followed by the Wright Brothers in their design of the first flying machine, has been replaced by the simulate-build method followed, for example, in the design of the Mars Airborne Geophysical Explorer.

_______________________________________________
The author is with the Valia Koonambaikulathamma College of Engineering and Technology (VKCET), Chavaracode, Trivandrum District, Kerala, India, as Vice Principal (e-mail: balshyam2003@yahoo.com).

II. OPTIMAL DESIGN SCENARIO

The engineering design process is mainly concerned with making decisions, based on analysis, which bear directly on the end product. It often takes months of analysis by a dedicated team of engineers to arrive at key decisions at a given stage of a design project. An important aspect of the impact of the computational approach to engineering design is that hundreds of feasible designs can be evaluated, and design constraints emanating from more than one discipline can be taken care of, at the conceptual design stage itself. A greater potential now exists than ever before to use conventional optimization tools, with this emerging capability, to evolve optimum, cost-effective products and devices. A typical design of an aerospace vehicle, for example, can consider aerodynamic, structural, propulsion, weight, manufacturing and other aspects at the conceptual stage itself. This has led to the concept of multi-disciplinary optimisation.

One of the first obstacles to carrying out design optimisation tasks using these sophisticated simulation tools is their long running times. For example, a typical computational flow simulation over an aircraft may require hours or perhaps even days of computational (CPU) time, even while running on a parallel supercomputer system. To design an optimum configuration of the aircraft, thousands of such flow simulation runs are required. The design tasks cannot, therefore, be completed in a time-bound manner. Thus, in modern design offices the computational power needed to support advanced designs may be enormous. Even with the latest, most powerful parallel supercomputer systems, these requirements cannot be easily met.

The second difficulty is the lack of gradient information in some of the most complicated simulations, which is needed by gradient-based optimization (search) procedures. Computing these gradients numerically involves several simulation runs: to compute the first derivative of the objective function with respect to a given variable one needs at least two runs, the second derivatives need at least three runs, and the cross derivatives need at least four runs. This problem becomes insurmountable if the design involves a large number of independent parameters. Optimisation based on evolutionary algorithms, of course, does not require gradient information, but such methods require many iterations to converge, implying a large number of simulations, which again proves to be prohibitively costly.
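To make the run counts above concrete, a minimal sketch of one-sided finite-difference gradient estimation is given below; the expensive_simulation function is a hypothetical stand-in for a long-running analysis code, and the counts assume one baseline run is shared across all variables.

```python
import numpy as np

def finite_difference_gradient(f, x, h=1e-3):
    """Estimate df/dx_i by one-sided differences.

    Needs 1 baseline run plus 1 extra run per design variable,
    i.e. k + 1 expensive simulations for a k-dimensional design.
    """
    x = np.asarray(x, dtype=float)
    f0 = f(x)                      # baseline run (1 simulation)
    grad = np.zeros_like(x)
    for i in range(x.size):        # one extra run per variable
        x_step = x.copy()
        x_step[i] += h
        grad[i] = (f(x_step) - f0) / h
    return grad

# Hypothetical cheap stand-in for an expensive objective (e.g. a CFD run).
def expensive_simulation(x):
    return (x[0] - 0.3) ** 2 + 2.0 * (x[1] - 0.7) ** 2

print(finite_difference_gradient(expensive_simulation, [0.5, 0.5]))
```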
III. SURROGATE MODELS

A. ISSUES IN DEVELOPMENT

The basic idea in evolving a surrogate model is the judicious use of the available computational resources and budget. Investing in fast mathematical approximations to the data generated by a computer-intensive simulation model helps in gaining insight into the design problem at hand, in exploring many design trade-offs, and in visualizing the intricacies of the overall design. The designer can take recourse to high-fidelity simulation code runs to test the ideas so generated, and can also update and modify the approximation model itself. Thus the surrogate model is a cheap-to-compute model of a computer-intensive model – a model for a model.1

While the basic concept of a surrogate model sounds logical and simple, there are many challenges in developing the best surrogate model for a given set of data. The pertinent basic questions are:

• At what sampling points in the design space should the computer-intensive model be run to generate the data from which the surrogate model is developed?

• Which approximation method best represents the model data?

• Can the surrogate model be used to segregate the important parameters from the not-so-important ones in the design problem?

• Can we rely on the surrogate model to do trade-off studies and arrive at design decisions?

• How should noise in the computer-generated simulation model data be dealt with?

• Can the surrogate model be continuously improved or updated by having recourse to computer-intensive original model runs during an optimization run?

• Can a single surrogate model serve the entire design space, or should we develop multiple surrogates in different local regions?

• Can a surrogate model be constructed to bridge the gap between the predictions of a high-fidelity model and a low-fidelity model for the same problem, so that the accuracy of the high-fidelity model is obtained in simulations at the expense of the low-fidelity model?

• Can the surrogate model be used to identify and explore potential regions of the design space for optimum designs during the optimization runs? Once identified, can they be further used to exploit the local regions?

The above questions are the focal areas of active research in surrogate model development today.

B. APPLICATIONS

Surrogate models help us to gain increasing insight into a design problem. Such models seek to provide answers in the gaps between the limited analysis runs that can be afforded with the available computer resources. They can also be used to bridge seamlessly the various levels of sophistication built into varying-fidelity, physics-based simulation codes. They may also unify data obtained partly from computer simulations and partly from field experiments. The main aim is to use all the available data pertaining to a given problem and evolve a simple yet powerful and usable model that can be used to back up design decisions.

The simplest and currently the most common use of surrogate models is to augment the data generated by a single expensive computer simulation code that needs to be run over a range of input parameter values. The basic idea is to use the surrogate model as a curve fit to the available data, so that the response at any new design point can be predicted without recourse to the expensive simulation code runs. The underlying assumption is that, once built, the surrogate model is as good as the original, with good prediction accuracy, while being hundreds or thousands of times faster than the 'mother' code.

Another common use of surrogates is to act as a calibrator for prediction codes with limited accuracy. It is quite common, while developing a software model of a physical process, that a simplified approach is used to achieve acceptable run times. For example, in computational flow simulation a very rapid panel method may be used in place of computer-intensive full Navier-Stokes models. A surrogate model may well be trained to bridge the two codes by using it to model the differences between the results from each code, one a fast, simpler code and the other a slow, complex code. The idea is to gain the accuracy of the sophisticated code at the expense of running only the faster code. Such multi-fidelity, multi-level approaches can be extended gainfully to deal with data coming from physical experiments and their known, established correlations with computational predictions.
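As a rough illustration of the difference-modelling idea above, the sketch below fits a cheap correction surrogate to the discrepancy between a hypothetical low-fidelity and high-fidelity code at a few common sample points; the two model functions and the quadratic correction are assumptions made only for illustration.

```python
import numpy as np

# Hypothetical stand-ins: a cheap low-fidelity code and an
# expensive high-fidelity code evaluated at the same points.
def low_fidelity(x):
    return np.sin(2 * np.pi * x)

def high_fidelity(x):
    return np.sin(2 * np.pi * x) + 0.3 * x**2 + 0.1

# A handful of expensive high-fidelity runs.
x_train = np.linspace(0.0, 1.0, 5)
delta = high_fidelity(x_train) - low_fidelity(x_train)

# Cheap correction surrogate: here a quadratic fit to the difference.
correction = np.polynomial.Polynomial.fit(x_train, delta, deg=2)

def bridged_model(x):
    """Low-fidelity prediction plus the learned correction."""
    return low_fidelity(x) + correction(x)

x_new = 0.37
print(bridged_model(x_new), high_fidelity(x_new))  # should be close
```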
Another important use of surrogates is their ability to deal with noisy or missing data. Results coming from physical experiments are commonly subject to random errors, which have to be taken care of while using the data; it is also possible that some experiments fail to yield any results at all. The non-repeatable random error associated with physical experiments does not exist in computer simulations, as these are deterministic in nature. Computational noise instead stems from the fact that certain simulation runs fail to converge: no numerical scheme is fool-proof, and many will fail in unexpected ways. Surrogate models come in handy as fillers or filters to smooth the data and span any gaps.

Finally, surrogate models can be used to gain insight into the functional relationships among the variables, to identify the important ones and to isolate the not-so-important ones. Surrogate models based on appropriate methods can be used to demonstrate which variables are important and have the most profound impact on the final product, along with their approximate functional form. This helps design engineers focus on such parameters and understand them with greater clarity. Along with visualisation tools, contour maps and plots can be generated using the surrogates to better visualize the intricate relations between the parameters. This might not have been possible using only the computer-intensive simulation code runs.

Having set the background for the necessity of building surrogate models in the modern engineering design process, the various stages to be followed in building a good and reliable surrogate can now be looked into. These are detailed in Figure 1: a sampling plan defines the conditions of the computer simulations and/or physical experiments; high-fidelity simulations or observations generate the data for quantitative evaluation; a surrogate (kriging, RBF, ANN or polynomial) is constructed from that data; and finally optimisation is carried out with gradient-based or evolutionary algorithms.

Figure 1. Surrogate Model Framework for Engineering Design Optimisation

IV. SAMPLING PLAN

The first step in constructing a cheap-to-evaluate surrogate model, say f*(x), that replicates an expensive-to-compute black-box function f(x) for a given engineering design problem is a well conceived sampling plan. Assume that the design problem is governed by a k-dimensional vector of design variables x ∈ D ⊂ R^k, where D is the design space or design domain. f(x) is assumed to be continuous and is considered as the quality, cost or performance metric of the design problem. If the range of the k variables is non-dimensionalised to the interval [0, 1], then the design space D represents a k-dimensional hypercube. Apart from the assumption of continuity, the only insight we can obtain about the function f is through 'n' discrete observations or samples

{ x^(i) → y^(i) = f(x^(i)),  i = 1, 2, 3, …, n }.   (1)

As these simulations are expensive to compute, the sampling points have to be distributed judiciously. The number of sampling points is determined primarily by the available resources and budget, both in terms of computer simulation runs and/or physical experiments. The challenge is to use this sparse set of observations to construct an approximation f* to f, which will subsequently be used as the cheap alternative for evaluating any design x ∈ D.

A. Latin Hypercube Sampling

A mathematically well posed surrogate model need not necessarily generalize well: it may still be very poor at predicting data at new or unseen locations in the k-dimensional design space. The ability to predict reasonably well depends strongly on the sampling plan. Some sampling plans need a fixed number of sampling points, and the designer has no choice in the matter. Suppose a certain level of accuracy is achieved by sampling a one-dimensional space at n locations; to achieve the same level of accuracy in k dimensions, one can intuitively infer that a minimum of n^k sampling points is needed. Sampling at every possible combination of each of the design variables thus becomes a laborious task. This sampling plan is referred to as full factorial design in the literature. The main drawback of full factorial design is that the projections of the points onto the individual axes overlap; the sampling can be improved if these projections are made as uniform as possible. This can be done by splitting the range of each variable into a relatively large number of equal-sized bins and generating random sub-samples within these bins. Hence, to extract as much information as possible from a limited set of simulation data, modern sampling of simulation experiments uses methods with a built-in feature known as the space-filling property. A natural development of this idea is to generate a sampling that is stratified in all k dimensions. This sampling scheme is known as Latin hypercube sampling (LHS). A major advantage of LHS is that the number of samples n can be tailored to match the available computational budget and resources.
The number of samples n is not restricted to any power of k, which is especially useful if the dimension k of the design space is very large. A typical LHS plan for a ten-point, three-dimensional case is shown in Figure 2. This method yields a randomized sampling plan which guarantees multi-dimensional stratification but does not ensure sufficient space-filling character; for example, placing all n points along the main diagonal of the design space will not fill the available space uniformly. A measure of 'good' and 'bad' Latin hypercubes is therefore necessary.
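A minimal sketch of how such a stratified plan can be generated is given below; it assumes the usual construction in which each variable's [0, 1] range is split into n bins and a random permutation decides which bin each sample occupies in each dimension.

```python
import numpy as np

def latin_hypercube(n, k, rng=None):
    """Generate an n-point Latin hypercube sample in the unit k-cube.

    Each of the k columns is a random permutation of the n bins,
    with one random point drawn inside every bin, so projecting the
    plan onto any axis gives exactly one point per bin.
    """
    rng = np.random.default_rng(rng)
    plan = np.empty((n, k))
    for j in range(k):
        bins = rng.permutation(n)                 # which bin each sample gets
        plan[:, j] = (bins + rng.random(n)) / n   # random location within the bin
    return plan

X = latin_hypercube(n=10, k=3, rng=42)   # e.g. the ten-point, three-variable case
print(X)
```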
This measure is defined in the following way. Let d1, d2, …, dm denote the unique values of the distances between all possible pairs of points in a sampling plan S, sorted in ascending order, and let J1, J2, …, Jm be defined such that Jj is the number of pairs of points in S separated by the distance dj. We identify S as a maximin plan if it maximizes d1 and, among plans for which this is true, minimizes J1. This definition can be applied to any set of sampling plans, but since we wish to retain the desired properties of Latin hypercube sampling, we restrict the scope to a narrower set by requiring further that S maximizes d2 and, among plans for which this is true, minimizes J2, and so on until we reach Jm. This method of identifying the best sampling plan becomes computationally intensive if we have a large number of sample points.
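A rough sketch of the first stage of this ranking, the (d1, J1) comparison, is shown below: it computes the pairwise distances of a plan and picks, from several random Latin hypercubes (using the latin_hypercube helper sketched earlier), the one with the largest smallest distance, breaking ties with the smallest count J1; extending the tie-breaking to d2, J2, … follows the same pattern.

```python
import numpy as np
from itertools import combinations

def distance_profile(plan):
    """Return (d1, J1): the smallest pairwise distance and how often it occurs."""
    dists = np.array([np.linalg.norm(a - b) for a, b in combinations(plan, 2)])
    d1 = dists.min()
    j1 = int(np.isclose(dists, d1).sum())
    return d1, j1

def best_maximin(plans):
    """Pick the plan that maximizes d1 and, among ties, minimizes J1."""
    return max(plans, key=lambda p: (distance_profile(p)[0], -distance_profile(p)[1]))

# Compare a few random Latin hypercubes (latin_hypercube as sketched earlier).
candidates = [latin_hypercube(10, 3, rng=s) for s in range(20)]
chosen = best_maximin(candidates)
print(distance_profile(chosen))
```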

Figure 2. Three-Variable Ten-Point Latin Hypercube Sampling Plan

A comparison of different sampling plans for a two-dimensional case with 100 sampling points is shown in Figure 3.

B. Other Sampling Plans

In the literature, other sampling plans based on random numbers, such as Monte Carlo sampling, are also used, where k random numbers define a point in the design space. A special sequence, called the Hammersley sequence, has recently been found to be particularly suitable, as it has a better built-in space-filling feature.2 The sequence is generated from the following idea. Any integer n can be written in radix-R notation (R being an integer) as

n = n_m n_{m-1} … n_1 n_0 = n_0 + n_1 R + n_2 R² + … + n_m R^m,   (2)

where m = [log_R n] = [ln n / ln R], the square brackets representing the integer part. A unique fraction between 0 and 1, called the inverse radix number, can be constructed by reversing the order of the digits of n around the decimal point as follows:

φ_R(n) = 0.n_0 n_1 … n_m = n_0 R^(-1) + n_1 R^(-2) + … + n_m R^(-(m+1)).   (3)

The Hammersley points in the k-dimensional unit cube are given by the sequence (4), where R1, R2, …, R_{k-1} are the first (k-1) prime numbers. The Hammersley points are then given by (5).

Figure 3. Comparison of Different Sampling Plans for n = 100, k = 2
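A minimal sketch of the construction described by equations (2) and (3) is given below: each integer is expanded in a prime radix, its digits are mirrored about the decimal point to form the inverse radix number, and the first coordinate is taken as n/N, which is one common way of assembling Hammersley-type points; treat this as an illustration of the idea rather than a reproduction of the paper's exact equations (4) and (5).

```python
import numpy as np

def inverse_radix(n, R):
    """Inverse radix number of integer n in base R (the idea of equation (3)):
    write n in radix-R digits and reverse them about the decimal point."""
    value, factor = 0.0, 1.0 / R
    while n > 0:
        n, digit = divmod(n, R)
        value += digit * factor
        factor /= R
    return value

def hammersley(N, k, primes=(2, 3, 5, 7, 11, 13)):
    """N Hammersley-type points in the k-dimensional unit cube:
    first coordinate n/N, remaining coordinates inverse radix numbers
    in the first (k-1) primes."""
    pts = np.empty((N, k))
    for n in range(1, N + 1):
        pts[n - 1, 0] = n / N
        for j in range(1, k):
            pts[n - 1, j] = inverse_radix(n, primes[j - 1])
    return pts

print(hammersley(N=10, k=3))
```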
V. SURROGATE MODEL CONSTRUCTION

The methods described above identify 'n' sampling points in the k-dimensional design space, so that data generated by costly computer simulations at these locations represent the design space as thoroughly as possible. The challenge the designer faces is that n is usually restricted by the available computational resources. The next step in the design process is learning about the characteristics of the function f(x) through the available data pairs [(x1, y1), (x2, y2), …, (xn, yn)]. The space of candidate models is infinite, and any number of hyper-surfaces can be fitted to pass through these known observations. Here one has to note the curse of dimensionality: no number of observations will, by itself, lead to empirical models that predict correctly at new sites, and such models will hence generalize poorly. Any sampling plan, however carefully it is chosen, will always push the sampling points toward the edges of the hypercube as the number of dimensions k (the number of parameters in the optimisation problem) increases. The extreme example in this case is the 'needles in the haystack' function shown below:

f*(x) = y^(i) if x = x^(i) for some i = 1, 2, …, n;  f*(x) = 0 otherwise.   (6)

Though the above model predicts the n available data points correctly, it predicts zero everywhere else. To circumvent this difficulty, we select a generic model structure f*(x, w), whose exact shape is determined by the vector of model parameters w, and pose the model construction as a parameter estimation problem. We then have to choose the w's such that the model best fits the data that we have already generated. There are basically two approaches to this model construction. The first is known as maximum likelihood estimation (MLE). Given a set of parameters w and the model f*(x, w), we can compute the probability of the data set

(7)

having resulted from f, where ε is a small constant margin around each data point. If we assume that the errors are independently and randomly distributed according to a normal distribution with standard deviation σ, the probability of the data set is

(8)

If the data points are obtained from physical experiments, ε can be considered as representing the measurement errors. But one may wonder, in the case of deterministic computer simulation data, how this factor should be interpreted. It can arise, for example in flow simulation, from an inadequate number of grid points or a lack of full convergence of the solution. Suitable values of ε can be determined by proper studies of grid sensitivity and convergence; based on one's experience, even a partially converged solution can be used. We view the above equation in an inverse way, that is: given the data, what is the likelihood of the parameters? The model parameters are estimated by minimising the difference between the estimated and the actual data at all the observed points. Further, if we assume that the parameters σ and ε are constants, this reduces to the well known least-squares criterion

w* = arg min_w Σ_{i=1}^{n} ( y^(i) − f*(x^(i), w) )².   (9)

The other approach to modeling is called cross-validation. This involves splitting the data set randomly into q roughly equal subsets, then removing each of these subsets in turn and fitting the model to the aggregated remaining (q−1) subsets. A loss function L is then computed, which measures the error between the predictor and the points in the subset that was set aside in each iteration; the contributions to L are summed over all q iterations

(10)

This approach is computer intensive if the data set is quite large, and is quite ideal if the data set contains only a few data points.
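The q-fold splitting described above can be sketched as follows; the quadratic loss and the simple polynomial surrogate are assumptions chosen only to keep the example self-contained.

```python
import numpy as np

def cross_validation_loss(x, y, fit, predict, q=5, rng=0):
    """Sum of held-out squared errors over q folds (the loss L above)."""
    rng = np.random.default_rng(rng)
    folds = np.array_split(rng.permutation(len(x)), q)
    loss = 0.0
    for held_out in folds:
        train = np.setdiff1d(np.arange(len(x)), held_out)
        model = fit(x[train], y[train])          # fit on the (q-1) aggregated subsets
        resid = y[held_out] - predict(model, x[held_out])
        loss += float(np.sum(resid ** 2))        # quadratic loss on the held-out subset
    return loss

# Example with a simple quadratic polynomial surrogate (an assumption).
x = np.linspace(0.0, 1.0, 20)
y = np.sin(2 * np.pi * x) + 0.1 * np.random.default_rng(1).normal(size=20)
fit = lambda xs, ys: np.polynomial.Polynomial.fit(xs, ys, deg=2)
predict = lambda model, xs: model(xs)
print(cross_validation_loss(x, y, fit, predict, q=5))
```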
VI. RADIAL BASIS FUNCTION MODELS

In this section we describe the various forms of the function f*(x, w) that are used to approximate the actual function f(x). The simplest model for f* is the mth-order polynomial model

f*(x, w) = w0 + w1 x + w2 x² + … + wm x^m.   (11)

In the sense of maximum likelihood, the parameters w0, …, wm are determined through the least-squares formulation

(12)
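As a small illustration of fitting such a polynomial surrogate by least squares, the sketch below uses numpy's polynomial fit; the cubic degree and the test function are assumptions made purely for illustration.

```python
import numpy as np

# A few expensive observations (the test function is a cheap stand-in).
x_train = np.linspace(0.0, 1.0, 8)
y_train = np.exp(-x_train) * np.cos(4.0 * x_train)

# Least-squares estimate of the polynomial weights w0..wm (here m = 3).
poly_surrogate = np.polynomial.Polynomial.fit(x_train, y_train, deg=3)

# Cheap prediction at a new design point.
print(poly_surrogate(0.42))
```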


Polynomial models for the multi-dimensional case become very complex. In this context, the function f* is instead treated as a combination of several well known basis functions. Consider the function f observed without error according to the sampling plan X = {x^(1), x^(2), …, x^(n)}ᵀ, yielding the responses y = {y^(1), y^(2), …, y^(n)}ᵀ. The radial basis function approximation to f is taken in the form

f*(x) = wᵀψ = Σ_{i=1}^{nc} w_i ψ( ‖x − c^(i)‖ ),   (13)

where c^(i) is the ith of the nc basis function centres and ψ is the nc-vector containing the values of the basis functions themselves, evaluated at the Euclidean distances between the prediction site x and the basis function centres c^(i). This formulation is, in principle, identical to a single-layer artificial neural network with inputs x, hidden units ψ, weights w acting as linear transfer functions, and output y. Some examples of common basis functions are shown below:

TABLE 1. TYPES OF BASIS FUNCTIONS

Linear:                 ψ(r) = r
Cubic:                  ψ(r) = r³
Thin plate spline:      ψ(r) = r² ln r
Gaussian:               ψ(r) = exp( −r² / (2σ²) )
Multiquadric:           ψ(r) = ( r² + σ² )^½
Inverse multiquadric:   ψ(r) = ( r² + σ² )^(−½)

The main advantage of using radial basis functions is that, whatever functional form we choose, the weights are very easy to determine from the relation

(14)

The notable feature of the above equation is that it is linear in w, yet it can model highly non-linear multi-dimensional surfaces. The only condition we impose is that the matrix Ψ is square, namely nc = n. If the basis centres coincide with the data points (c^(i) = x^(i)), then w can be determined from

Ψw = y,   (15)

where the elements ψ_ij of Ψ are the basis functions evaluated at the Euclidean distances ‖x^(i) − x^(j)‖.
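A minimal sketch of this interpolation step is shown below: it builds the square matrix Ψ with a Gaussian basis, solves Ψw = y for the weights, and evaluates the surrogate at a new point; the one-dimensional test function and the width σ are assumptions.

```python
import numpy as np

def gaussian_basis(r, sigma=0.2):
    return np.exp(-r**2 / (2.0 * sigma**2))

def fit_rbf(x_train, y_train, basis=gaussian_basis):
    """Solve Psi w = y with centres placed at the data points (c_i = x_i)."""
    r = np.abs(x_train[:, None] - x_train[None, :])   # pairwise distances
    psi = basis(r)                                    # square n x n matrix Psi
    return np.linalg.solve(psi, y_train)

def predict_rbf(x_new, x_train, w, basis=gaussian_basis):
    """f*(x) = sum_i w_i * psi(|x - x_i|)."""
    return basis(np.abs(x_new - x_train)) @ w

# Hypothetical expensive function, sampled at a few points.
f = lambda x: np.sin(2 * np.pi * x) + 0.5 * x
x_train = np.linspace(0.0, 1.0, 9)
w = fit_rbf(x_train, f(x_train))
print(predict_rbf(0.33, x_train, w), f(0.33))
```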
VII. KRIGING MODELS

Of particular interest in surrogate-based optimization is the special basis function of the form

ψ( x^(i), x^(j) ) = exp( − Σ_{l=1}^{k} θ_l | x_l^(i) − x_l^(j) |^{p_l} ).   (15)

One can see the similarity between the above function and the Gaussian basis function. The method that uses this function as the basis is known as kriging, which was first proposed by Danie Krige, a South African mining engineer, and developed by Matheron.3 The σ factor of the Gaussian function has been replaced by the vector θ, which allows the width of the basis function to vary for each variable. Also, whereas in the Gaussian formulation the exponent p has the value 2, giving a smooth surface through the x^(i), in kriging it is allowed to vary in the range p_l ∈ [1, 2] for each dimension of x. This gives added flexibility in generating a surrogate model. In fact, with p_l = 2 and constant θ, kriging reduces to the standard Gaussian formulation.

To build a kriging model from a set of sample data X = {x^(1), x^(2), …, x^(n)}ᵀ with observed responses y = {y^(1), y^(2), …, y^(n)}ᵀ, an expression has to be found for the predicted value of y at a new point x. The observed responses are treated as though they are the result of a stochastic process (even if they come from a deterministic computer code), so y is considered a random vector. This random field has a chosen mean µ, and the random variables are correlated with each other through the kriging basis function:

cor[ Y(x^(i)), Y(x^(j)) ] = exp( − Σ_{l=1}^{k} θ_l | x_l^(i) − x_l^(j) |^{p_l} ).   (16)

From this we can construct the n × n correlation matrix Ψ of all the observed data,

(17)

and, by the definition of the covariance matrix, we can write

Cov( Y, Y ) = σ² Ψ.   (18)

To make a kriging prediction, we use the fact that the likelihood of the observed data is a maximum to determine the model parameters p and θ, and then use the relation

ŷ(x) = µ̂ + ψᵀ Ψ⁻¹ ( y − 1 µ̂ ),   (19)

where ψ is the vector of correlations between the new point x and the observed points, 1 is an n-vector of ones and µ̂ is the maximum-likelihood estimate of the mean. One of the main advantages of kriging is that the value of θ for each variable indicates the relative importance of that parameter in evaluating y, and hence non-significant parameters in a design problem can be identified.
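A rough sketch of these pieces is given below: it assembles the correlation matrix of equations (16)-(17) for fixed θ and p and evaluates the predictor of equation (19); in practice θ and p would be tuned by maximizing the likelihood, which is omitted here, and the sample data are assumptions.

```python
import numpy as np

def kriging_corr(A, B, theta, p):
    """Correlation matrix with elements exp(-sum_l theta_l |a_l - b_l|^p_l)."""
    diff = np.abs(A[:, None, :] - B[None, :, :])          # pairwise |x_i - x_j| per dim
    return np.exp(-np.sum(theta * diff ** p, axis=2))

def kriging_predict(x_new, X, y, theta, p, nugget=1e-10):
    """Predictor y_hat(x) = mu_hat + psi^T Psi^{-1} (y - 1 mu_hat)."""
    Psi = kriging_corr(X, X, theta, p) + nugget * np.eye(len(X))
    ones = np.ones(len(X))
    Psi_inv_y = np.linalg.solve(Psi, y)
    Psi_inv_1 = np.linalg.solve(Psi, ones)
    mu_hat = ones @ Psi_inv_y / (ones @ Psi_inv_1)         # ML estimate of the mean
    psi = kriging_corr(np.atleast_2d(x_new), X, theta, p).ravel()
    return mu_hat + psi @ np.linalg.solve(Psi, y - mu_hat * ones)

# Assumed sample data and fixed hyper-parameters (normally found by MLE).
X = np.array([[0.0, 0.0], [0.5, 0.2], [0.2, 0.8], [0.9, 0.6], [0.7, 0.95]])
y = np.sin(3 * X[:, 0]) + np.cos(2 * X[:, 1])
theta, p = np.array([5.0, 5.0]), np.array([2.0, 2.0])
print(kriging_predict([0.4, 0.4], X, y, theta, p))
```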

VIII. BAYESIAN APPROACH TO MODELING

Recently, there has been increasing use of Bayesian statistical approaches for developing surrogate models. The parameters that are unknown in the model are determined from the data as they accumulate, for example during an optimisation exercise. The systematic procedure is shown in Figure 4. The prior information about the parameters is combined, using Bayes' theorem, with the data that have just accumulated to form a posterior distribution, from which inferences about the possible values of the parameters are drawn. The first step is the same as in the general formulation of statistics, including the frequentist approach; combining the prior information with the data is the unique feature of this approach. Once a new posterior is obtained, it becomes the next prior whenever new data accumulate.4

Figure 4. Principle of Bayesian Method of Modeling

In mathematical terms, Bayes' theorem can be stated as

P(H|D) = P(D|H) P(H) / P(D),   (20)

where H is the hypothesis (the value of the parameter) and D is the data. P(H) is the prior probability of H before the data D were acquired. P(D|H) is the conditional probability of seeing the data given that H is true; this is called the likelihood. P(D) is the marginal probability of D. P(H|D) is the posterior probability, the probability that H is true given the data and the previous state of knowledge (or belief) about H. P(D) is the probability of seeing the data under all possible hypotheses H_i. Given a set of exhaustive and mutually exclusive hypotheses H_i, we have

P(D) = Σ_i P(D|H_i) P(H_i).   (21)

Since P(D) is a normalizing constant, we can see the essence of the Bayesian approach by rewriting equation (20) for P(H|D) as

P(H|D) ∝ P(D|H) P(H).   (22)

In general, the computation of P(D) is intractable. But with the recent increase in computer power, many good approximations have been built using Markov chain Monte Carlo (MCMC) methods to accurately determine P(D) over all the possible H_i. Applying Bayesian concepts to kriging models and considering a variable mean µ, Joseph and Hung have developed a new method called blind kriging, which has proven to be a more robust way of developing a surrogate model from a limited data base.5
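A tiny numerical sketch of equations (20)-(22) over a discrete set of hypotheses is given below; the three candidate parameter values, the uniform prior and the Gaussian likelihood are all assumptions made purely for illustration.

```python
import numpy as np

# Three mutually exclusive hypotheses about an unknown model parameter.
hypotheses = np.array([0.2, 0.5, 0.8])
prior = np.array([1/3, 1/3, 1/3])            # P(H): no initial preference

def likelihood(data, h, sigma=0.1):
    """P(D|H): assumed Gaussian measurement model centred on the parameter h."""
    return np.prod(np.exp(-(data - h)**2 / (2 * sigma**2)))

data = np.array([0.47, 0.52, 0.49])           # newly accumulated observations
unnormalized = np.array([likelihood(data, h) * p for h, p in zip(hypotheses, prior)])
posterior = unnormalized / unnormalized.sum() # divide by P(D) = sum_i P(D|H_i) P(H_i)

print(dict(zip(hypotheses, posterior)))       # this posterior becomes the next prior
```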

IX. EXPLOITING THE SURROGATE MODELS IN THE OPTIMISATION PROCESS

The methods discussed above enable us to build a model f*(x) for the real function f(x). The next step is to use f*(x) to search for the optimum x′ and, possibly, to believe that it is the true optimum of f(x). This can be checked by actually evaluating f(x′). The success of this approach depends on how well f*(x) emulates the true f(x). It is worth noting at this juncture that there are broadly two main approaches to the optimization process. Local optimizers based on gradient information are good exploiters and are, in general, very efficient when f(x) is smooth and uni-modal, but they give less than satisfactory results when f(x) exhibits long valleys and multiple local optima. Once trapped in the valley of a local optimum, the search has to be re-launched from a new starting point chosen randomly, which involves wasteful exploration of unpromising regions of the search space. The second major class of optimal search methods are genetic algorithms, which are known to be good global explorers. They are good at leaving regions of poor objective function value quickly while simultaneously exploring several promising sites. This exploration capability is further enhanced by a properly selected space-filling sampling plan, as discussed earlier. In comparison to local search methods, what these methods lack is high convergence speed and precision in the exploitation of local optima. Because the surrogate model f* is only an approximation of the true function f which we wish to optimize, it is prudent to enhance the accuracy of the model using further function calls, called in-filling, in addition to the original sampling plan. The accuracy needs to be improved solely in the regions of the optimum predicted by the surrogate model, to obtain the local optimum value very quickly.

The Gaussian process based models discussed earlier permit the calculation of an estimated error in the model, so it is possible to judiciously place the in-fill points where the uncertainty in the model predictions is highest. This represents a key advantage of Gaussian process based models. It can be shown1 that the error is

(23)

and the update point can be placed where s²(x) is maximum. Another, more elegant strategy for locating the update point is to use the probability of improvement, which can be defined as

(24)

where I(x) is the difference between y(x) and y_min. The update point is placed where P[I(x)] is maximum. The integral in equation (24) can be evaluated using the error function as

(25)

The global optimum will be found when P[I(x)] tends to zero. More generally, instead of using the probability P[I(x)], we can also estimate the expected improvement at any x, E[I(x)], as

(26)

which appears to be a more robust indicator for finding the global minimum. Equation (26) can again be evaluated using error functions. Criteria for finding potential optimum points are an active field of research, and many strategies have to be explored for a particular problem.

To illustrate the above ideas, a sample function

(27)

is selected and three initial sample points are chosen. The surrogate model is built using Gaussian radial basis functions. In the range of x between 0 and 1, this function has two minima: a gentle minimum near 0.2 and a second, sharper minimum near 0.8. With updates using P[I(x)], the first minimum is located fairly well but the second, global minimum is not properly explored. This is illustrated in Figure 5 in terms of the update points suggested by the values of P[I(x)].

Figure 5. Radial Basis Function Surrogate Model for Sample Function

Using the expected improvement index E[I(x)], the global optimum is correctly traced, as seen in Figure 6.

Figure 6. The Expected Improvement Function Distribution
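A hedged sketch of a surrogate-update loop in this spirit is given below. It uses the standard expressions for the probability of improvement and expected improvement under a Gaussian error model, P[I(x)] = Φ((y_min − ŷ)/s) and E[I(x)] = (y_min − ŷ)Φ(z) + s φ(z) with z = (y_min − ŷ)/s, written via the error function; the Gaussian-correlation predictor, its crude error estimate and the two-minima test function are simplified stand-ins rather than the paper's exact equations (23)-(27).

```python
import numpy as np
from math import erf, sqrt, pi, exp

def norm_cdf(z):
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def norm_pdf(z):
    return exp(-0.5 * z * z) / sqrt(2.0 * pi)

def prob_improvement(y_hat, s, y_min):
    """P[I(x)] under a Gaussian error model with mean y_hat and std s."""
    return norm_cdf((y_min - y_hat) / s) if s > 0 else 0.0

def expected_improvement(y_hat, s, y_min):
    """E[I(x)] = (y_min - y_hat) * Phi(z) + s * phi(z)."""
    if s <= 0:
        return 0.0
    z = (y_min - y_hat) / s
    return (y_min - y_hat) * norm_cdf(z) + s * norm_pdf(z)

# Assumed test function with a gentle and a sharp minimum on [0, 1].
f = lambda x: (x - 0.2)**2 - 0.6 * np.exp(-200.0 * (x - 0.8)**2)

# Simplified Gaussian-correlation surrogate with a crude error estimate.
def surrogate(x_new, X, y, theta=200.0):
    psi = np.exp(-theta * (x_new - X)**2)
    Psi = np.exp(-theta * (X[:, None] - X[None, :])**2) + 1e-10 * np.eye(len(X))
    y_hat = psi @ np.linalg.solve(Psi, y)
    s = np.sqrt(max(np.var(y) * (1.0 - psi @ np.linalg.solve(Psi, psi)), 0.0))
    return y_hat, s

X = np.array([0.0, 0.5, 1.0])             # three initial samples
grid = np.linspace(0.0, 1.0, 201)
for _ in range(8):                         # in-fill updates
    y = f(X)
    scores = [expected_improvement(*surrogate(x, X, y), y.min()) for x in grid]
    X = np.append(X, grid[int(np.argmax(scores))])

print(sorted(X)[:5], f(X).min())           # updates cluster near the global minimum
```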

X. CONCLUSIONS

Optimal design of a complex device or product often involves exploring a broad design space, or region of design variable values, spanning several disciplines. Many detailed analysis and simulation programs are available, but they can be extremely expensive for exploring broad design spaces. One solution has been to simplify the simulations and obtain data from more approximate simulations; for these simulations, accuracy is sacrificed to reduce computational time.

Recent developments in efficient sampling plans and Bayesian-statistics-based methods for surrogate model development bring efficient global optimization of engineering systems design closer to practical reality. The various methods of model construction have their own strengths and weaknesses, and hence their use in practical cases requires prior knowledge and experience. Since no method is universal, further research is required to make them more and more efficient. Thus there is a need to optimise the optimal design process itself.

ACKNOWLEDGMENT

The author wishes to thank the management of Valia Koonambhaikulathamma College of Engineering and Technology, Chavarcode, Quilon, Kerala, for their kind permission to publish this paper in the ICRAME 2010 proceedings.

REFERENCES

[1] Alexander I. J. Forrester, Andras Sobester and Andy J. Keane, 'Engineering Design via Surrogate Modelling – A Practical Guide', John Wiley and Sons, 2008.

[2] Jayant R. Kalagnanam and Urmila M. Diwekar, 'An Efficient Sampling Technique for Off-line Quality Control', Technometrics, Vol. 39, No. 3, pp. 306-319, 1997.

[3] Georges Matheron, 'Principles of Geostatistics', Economic Geology, Vol. 58, pp. 1246-1266, 1963.

[4] John W. Stevens, 'What is Bayesian Statistics?', Bayesian Statistics in Health Economics, University of Sheffield, UK, April 2009.

[5] V. R. Joseph and Ying Hung, 'Blind Kriging: A New Method for Developing Metamodels', Journal of Mechanical Design, Vol. 130, No. 3, February 2008.
