

CFD Open Series


Patch 2.60

Optimization Problem

Adapted & Edited by: Ideen Sadrehaghighi

ANNAPOLIS, MD

Contents

1 Optimization Problem ............................................................................................................... 4


1.1 Mathematical Optimization ............................................................................................................................... 4
1.1.1 Single Constraint Lagrange Multiplier .............................................................................................. 5
1.1.1.1 Case Study - Solve the Following System of Equations (Method of Lagrange Multipliers) ........ 6
1.1.2 References..................................................................................................................................................... 6
1.2 Design Variables ..................................................................................................................................................... 6
1.3 Optimization Types ............................................................................................................................................... 6
1.4 Non-Geometric Parametric Optimization – An Example ....................................................................... 7
1.5 Geometric Parameterization ............................................................................................................................. 8
1.5.1 Non-Uniform Rational B-Splines (NURBS)...................................................................................... 8
1.5.2 Radial Basis Function ............................................................................................................. 9
1.5.3 Class/Shape Function Transformation (CST) Method ............................................ 10
1.5.4 Singular Value Decomposition (SVD) .......................................................................... 11
1.5.5 Hicks-Henne Bump Functions ........................................................................................... 11
1.5.6 Free-Form Deformation (FFD) .......................................................................................... 13
1.5.7 References.................................................................................................................................. 14
1.6 Some Common Terminologies ....................................................................................................................... 15
1.6.1 Identify Independent / Dependent Variables ............................................................................. 16
1.6.2 Continuous vs. Discrete Optimization ............................................................................................ 17
1.6.3 Unconstrained vs. Constrained Optimization ............................................................................. 17
1.6.4 Deterministic vs. Stochastic Optimization.................................................................................... 18
1.6.5 Quantity of Objective Functions ..................................................................................... 18
1.6.6 References.................................................................................................................................................. 18
1.7 Optimization Algorithms.................................................................................................................................. 18
1.7.1 Differentiable Objective Function .................................................................................................... 18
1.7.1.1 First-Order Derivative .............................................................................................................. 18
1.7.1.1.1 Gradient .................................................................................................................................... 19
1.7.1.1.2 Partial Derivative.................................................................................................................. 19
1.7.1.2 Second-Order Derivative ......................................................................................................... 19
1.7.1.2.1 Hessian Matrix ....................................................................................................................... 19
1.7.1.2.2 Bracketing Algorithms ....................................................................................................... 19
1.7.1.2.3 Local Descent Algorithms ................................................................................................. 19
1.7.1.2.4 First-Order Algorithms ...................................................................................................... 20
1.7.1.2.5 Second-Order Algorithms ................................................................................................. 20
1.7.2 Non-Differential Objective Function ............................................................................................... 20
1.7.2.1 Direct Algorithms ....................................................................................................................... 21
1.7.2.2 Stochastic Algorithms ............................................................................................................... 21
1.7.2.3 Population Algorithms ............................................................................................................. 21
1.7.3 Reference.................................................................................................................................................... 22
1.8 Optimization Framework .................................................................................................................. 22
1.8.1 Single vs. Multi-Objective Optimization ........................................................................................ 22
1.8.1.1 Various Methods to Solve Multiple Objective Optimization ..................................... 24
1.8.1.2 Pareto Optimality ....................................................................................................................... 24
1.8.1.3 Case Study - Multi-Objective (Point) Optimization ...................................................... 25
1.8.1.3.1 Acceleration Technique for Multi-Level Optimization ........................................ 26
1.8.2 Multi-Objective vs. Multi-Level Optimization ............................................................................. 26

1.8.3 Single vs. Multi-level Optimization .................................................................................................. 27


1.8.4 References.................................................................................................................................................. 27
1.9 Constraint Handling ........................................................................................................................................... 28


List of Figures:
Figure 1.1.1 Global Maximum of f (x, y) ..................................................................................................................... 4
Figure 1.1.2 Example of Numerical Optimization .................................................................................................. 4
Figure 1.1.3 The red curve shows the constraint g(x, y) = c. The blue curves are contours of f(x, y).
The point where the red constraint tangentially touches a blue contour is the maximum
of f(x, y) along the constraint, since d1 > d2 .................................................................................................................. 5
Figure 1.3.1 Optimization Types ................................................................................................................... 7
Figure 1.4.1 Optimization of Axial Fan Efficiency – Courtesy of CFD Support ........................................... 8
Figure 1.5.2 B-Spline Approximation of NACA0012 (left) and RAE2822 (right) Airfoils ..................... 9
Figure 1.5.3 Basis functions for six design variable configurations of the CST method ...................... 11
Figure 1.4.3 Three sets of Hicks-Henne Bump functions with different settings of t (n = 5, ai = 1, hi ϵ
[0.1; 0.9]). .................................................................................................................................................................................. 12
Figure 1.4.4 Two distributions for Hicks-Henne bump functions (n = 10) on the NACA 0012 airfoil.
Red dashed lines indicate bump maximum positions. ........................................................................................... 13
Figure 1.4.5 View of FFD box enclosing the embedded object, including the control points shown in
spheres. ...................................................................................................................................................................................... 13
Figure 1.4.6 The base functions on the range t in [0,1] for cubic Bézier curves: .................................... 14
Figure 1.5.1 Optimization Classification (Courtesy of Martins & Ning) ..................................................... 16
Figure 1.5.2 Schematic of a Gradient-Based Optimization with Two Design Variables ..................... 17
Figure 1.6.1 Different Search and Optimization Techniques .......................................................................... 23
Figure 1.6.2 Pareto Optimal ......................................................................................................................................... 24
Figure 1.6.3 Pareto Front in Aircraft Design [Antoine and Kroo] ................................................................. 25
Figure 1.6.4 High Performance Low Drag for Single and Multiple Design Points (Courtesy of Kenway
& Martins41) ............................................................................................................................................................................. 26
Figure 1.7.1 Concept of using Parallel Evaluation Strategy of Feasible and Infeasible Solutions to
Guide Optimization Direction in a GA ........................................................................................................................... 28

1 Optimization Problem
1.1 Mathematical Optimization
In mathematics and computer science, an optimization problem is the problem of finding the best
solution from all feasible solutions. In the simplest terms, an optimization problem consists of
maximizing or minimizing a real function by systematically choosing input values from within an
allowed set and computing the value of the
function. Figure 1.1.1 shows a graph of a
paraboloid given by z = f(x, y) = − (x² + y²) + 4.
The global maximum at (x, y, z) = (0, 0, 4) is
indicated by a blue dot.

Figure 1.1.1 Global Maximum of f(x, y)

Optimization is the process of obtaining the most suitable solution to a given problem; for a specific problem only a single solution may exist, while for other problems there may exist multiple potential solutions [Skinner and Zare-Behtash]1. Thus, optimization is the process of finding the 'best' solution, where 'best' implies that the solution is not the exact solution but is sufficiently superior. Optimization tools should be used for supporting decisions rather than for making decisions, i.e., they should not substitute for the decision-making process (Savic)2. Another example of numerical optimization is defined as

Minimize f(x) = 4x₁² − x₁ − x₂ − 2.5 by varying x₁, x₂

subject to c₁(x) = x₂² − 1.5x₁² + 2x₁ − 1 ≥ 0
and c₂(x) = x₂² + 2x₁² + 2x₁ − 4.25 ≤ 0

Eq. 1.1.1
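As a numerical sanity check of Eq. 1.1.1, a brute-force scan over a box of candidate points can bracket the constrained minimum. This sketch is our own illustration; the search box [−3, 3]² and the grid resolution are assumptions, not part of [Martins]3:

```python
# Objective and constraints of Eq. 1.1.1
def f(x1, x2):
    return 4 * x1**2 - x1 - x2 - 2.5

def feasible(x1, x2):
    c1 = x2**2 - 1.5 * x1**2 + 2 * x1 - 1.0     # must satisfy c1 >= 0
    c2 = x2**2 + 2.0 * x1**2 + 2 * x1 - 4.25    # must satisfy c2 <= 0
    return c1 >= 0.0 and c2 <= 0.0

# Scan a coarse grid over the assumed box [-3, 3] x [-3, 3]
best = None
n = 400
for i in range(n + 1):
    for j in range(n + 1):
        x1 = -3.0 + 6.0 * i / n
        x2 = -3.0 + 6.0 * j / n
        if feasible(x1, x2):
            val = f(x1, x2)
            if best is None or val < best[0]:
                best = (val, x1, x2)

print("approximate constrained minimum (f, x1, x2):", best)
```

A gradient-based solver would refine this estimate; the scan merely confirms that a feasible region exists and locates its best grid point.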
with results established in Figure 1.1.2 [Martins]3. Most current design optimization approaches are heavily dependent on user training and experience, requiring an array of specialized optimization tools and compact shape parametrization. This constitutes a major obstacle to robustness and reliability. In general, the role of the parameterization (to be examined in detail in the next chapter) is to provide an efficient interface between the optimization method and a solver to form an optimization framework [Kedward et al.]4. Another persistent difficulty in aerodynamic optimization is the ability to define an analysis method that is capable of operating as many times as required (often thousands of times) and integrating it appropriately with an optimization strategy. The methods employed must execute with realistic run times, dependent on computational resources, but must also be sophisticated enough to capture the information needed to analyze the local geometry that feeds into a globally optimal system.

Figure 1.1.2 Example of Numerical Optimization

Many optimization problems, especially those involving large design spaces with coupled variables, inherently fall into the

category of Multi-Disciplinary Design Optimization (MDO), which in turn require a Multi-Objective


(MO) compromise to be an effective design (to be visited later). The main motivation for applying
MDO is that the performance of a real system is driven not only by the performance of individual
disciplines but also by their coupled interactions. It is no longer acceptable to consider the aerodynamic analysis alone; its far-reaching coupled effects on other disciplines must also be taken into account if a truly optimal design is to be reached [Sobieszczanski-Sobieski and Haftka]5.
1.1.1 Single Constraint Lagrange Multiplier
According to Wikipedia, for the case of only one constraint and two choice variables (as exemplified
in Figure 1.1.3), consider the optimization problem:

Maximize f(x, y) subject to g(x, y) = 0


Eq. 1.1.2
(Sometimes an additive constant is shown separately rather than being included in g, in which case
the constraint is written g(x , y) = c, as in Figure 1.1.3). We assume that both f and g have
continuous first partial derivatives. We introduce a new variable (λ) called a Lagrange
multiplier and study the Lagrange function
which is defined by

ℒ(x, y, λ) = f(x, y) − λg(x, y)


Eq. 1.1.3
where the λ term may be either added or
subtracted. If f(x0, y0) is a maximum of f(x,
y) for the original constrained problem
and ∇g(x, y) ≠ 0, then there exists λ0 such
that (x0, y0, λ0) is a stationary point for the
Lagrange function (stationary points are those points where the first partial derivatives of ℒ are zero).
The assumption ∇g ≠ 0 is called constraint qualification. However, not all stationary points yield a solution of the original problem, as the method of Lagrange multipliers yields only a necessary condition for optimality in constrained problems.

Figure 1.1.3 The red curve shows the constraint g(x, y) = c. The blue curves are contours of f(x, y). The point where the red constraint tangentially touches a blue contour is the maximum of f(x, y) along the constraint, since d1 > d2

Sufficient conditions for a
minimum or maximum also exist, but if a
particular candidate solution satisfies the sufficient conditions, it is only guaranteed that that
solution is the best one locally – that is, it is better than any permissible nearby points.
The global optimum can be found by comparing the values of the original objective function at the
points satisfying the necessary and locally sufficient conditions.
The method of Lagrange multipliers relies on the intuition that at a maximum, f(x, y) cannot be
increasing in the direction of any such neighboring point that also has g = 0. If it were, we could walk
along g = 0 to get higher, meaning that the starting point wasn't actually the maximum. Viewed in
this way, it is an exact analogue to testing if the derivative of an unconstrained function is 0, that is,
we are verifying that the directional derivative is 0 in any relevant (viable) direction. We can
visualize contours of f given by f(x , y) = d for various values of d, and the contour of g given by g(x
, y) = c. Suppose we walk along the contour line with g = c. We are interested in finding points
where f almost does not change as we walk, since these points might be maxima. There are two ways
this could happen:

1. We could touch a contour line of f, since by definition f does not change as we walk along its
contour lines. This would mean that the tangents to the contour lines of f and g are parallel
here.
2. We have reached a "level" part of f, meaning that f does not change in any direction.
1.1.1.1 Case Study - Solve the Following System of Equations (Method of Lagrange Multipliers)

∇f(x, y, z) = λ∇g(x, y, z) , g(x, y, z) = k


Eq. 1.1.4
Notice that the system of equations from the method actually has four equations; we just wrote the system in a simpler form. To see this, let's take the first equation and substitute the definition of the gradient vector.

⟨fx , fy , fz⟩ = λ⟨gx , gy , gz⟩ = ⟨λgx , λgy , λgz⟩


Eq. 1.1.5
In order for these two vectors to be equal the individual components must also be equal. So, we
actually have three equations here.

fx = λgx ,  fy = λgy ,  fz = λgz
Eq. 1.1.6
These three equations, along with the constraint g(x, y, z) = k, give four equations with four unknowns: x, y, z, and λ. Further examples and information can be obtained from Paul's Online Notes [Paul Dawkins].
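To make Eq. 1.1.4 through Eq. 1.1.6 concrete, consider the textbook-style example f(x, y) = x + y subject to g(x, y) = x² + y² = 1 (our own choice, not from the text). The conditions fx = λgx and fy = λgy give 1 = 2λx = 2λy, so x = y = ±1/√2. A minimal sketch verifying the stationarity conditions and that the positive root is the constrained maximum:

```python
import math
import random

# f(x, y) = x + y on the unit circle g(x, y) = x^2 + y^2 - 1 = 0.
# Eq. 1.1.6 reads fx = λ*gx and fy = λ*gy, i.e. 1 = 2λx and 1 = 2λy.
x = y = 1 / math.sqrt(2)           # candidate stationary point (positive root)
lam = 1 / (2 * x)                  # λ recovered from 1 = 2λx

assert abs(1 - lam * 2 * x) < 1e-12    # fx = λ gx holds
assert abs(1 - lam * 2 * y) < 1e-12    # fy = λ gy holds
assert abs(x**2 + y**2 - 1) < 1e-12    # the constraint g = 0 holds

# Sample the constraint curve: no point on the circle beats the candidate
random.seed(1)
for _ in range(1000):
    t = random.uniform(0, 2 * math.pi)
    assert math.cos(t) + math.sin(t) <= x + y + 1e-12
```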
1.1.2 References
1 S. N. Skinner and H. Zare-Behtash, "State-of-the-Art in Aerodynamic Shape Optimization Methods", Applied Soft Computing, September 2017, DOI: 10.1016/j.asoc.2017.09.030.
2 Dragan Savic, "Single-objective vs. Multi-objective Optimization for Integrated Decision Support", Centre for Water Systems, School of Engineering and Computer Science, University, UK.
3 Joaquim R. R. A. Martins, "Multidisciplinary Design Optimization", 7th International Fab Lab Forum and Symposium on Digital Fabrication, Lima, Peru, August 18, 2011 (remote presentation).
4 L. J. Kedward, A. D. J. Payot, T. C. S. Rendall, C. B. Allen, "Efficient Multi-Resolution Approaches for Exploration of External Aerodynamic Shape and Topology", AIAA, 2018.
5 J. Sobieszczanski-Sobieski and R. T. Haftka, "Multidisciplinary Aerospace Design Optimization: Survey of Recent Developments", 34th Aerospace Sciences Meeting and Exhibit, AIAA 96-0711, Reno, Nevada, 1996.

1.2 Design Variables


The choice of design variables defines the basic properties of the optimization problem. Based on the design variables, structural optimization problems are separated into material, sizing, shape, and topology optimization. The numerical effort of the sensitivity analysis, as well as the overall robustness of the optimization problem, is strongly related to the choice of design variables1.

1.3 Optimization Types


• Size optimization is widely used to find optimal solutions for key product characteristics,
such as cross-sectional thicknesses, material choice, and other part parameters.
• Shape optimization enhances an existing geometry by adjusting the height, length, or radii
of the design – morphing the part to distribute stress more evenly. Shape optimization is part
of the field of optimal control theory. The typical problem is to find the shape which is optimal

1 Wikipedia

in that it minimizes a certain cost functional while satisfying given constraints2. Examples of shape optimization are numerous, with some case studies given in the following chapters.
• Topology optimization is a mathematical method that optimizes material layout within a given design space, for a given set of loads, boundary conditions and constraints, with the goal of maximizing the performance of the system. Figure 1.3.1 displays the different types of optimization. A prime example is the paper by [Ghasemi & Elham, 2021], which explores multi-stage aerodynamic topology optimization using operator-based analytical differentiation.

Figure 1.3.1 Optimization Types
• Material optimization problems utilize material parameters as design variables whereas
topology and geometry of the structural model remain constant. Examples for material
variables are distribution of concrete reinforcement, direction of fiber angles or layer
sequence in composite materials.

1.4 Non-Geometric Parametric Optimization – An Example


Parametric optimization is the process of finding the combination of parameters that leads to the best results according to the optimization function. The optimization function is a formula that measures how good the results are [Lubos Pirkl at CFDSUPPORT, 2021]. The optimization function can be a very simple function of the results of the simulation: for example the efficiency, power, pressure drop, drag coefficient, material stress, or any quantity we can pull out of the simulation results. It can also be a very complex combination of functions with limits and logical operations on selected quantities. We typically optimize for its minimal or maximal value. Consider a simple model of an axial fan stage and pick two parameters: parameter 1 is the speed of rotation and parameter 2 is the volumetric flow rate. The optimization function is the efficiency. Let's try to find the set of parameters corresponding to the highest efficiency. After running a couple of simulations to explore the parametric space, the optimization function plot may look like Figure 1.4.1. The blue points correspond to the predefined DoE (Design of Experiments) simulation runs, the green points are the optimization algorithm search, and the red point marks the best point found so far. And yes: every point is a complete single-point simulation run. The following picture shows a few more basic statistics from this axial fan project.

2 Wikipedia

Figure 1.4.1 Optimization of Axial Fan Efficiency – Courtesy of CFD Support
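The DoE-then-search workflow described above can be sketched generically. The efficiency function below is a hypothetical stand-in for a full single-point CFD run, and all parameter ranges are illustrative assumptions, not CFD Support's actual model:

```python
import random

def efficiency(rpm, flow):
    """Hypothetical stand-in for one complete single-point CFD simulation."""
    return 1.0 - ((rpm - 1800) / 1000) ** 2 - ((flow - 2.5) / 2.0) ** 2

# Design of Experiments: coarse predefined sweep (the "blue points")
doe = [(rpm, flow) for rpm in range(1000, 2601, 400)
                   for flow in (1.0, 2.0, 3.0, 4.0)]
best = max(doe, key=lambda p: efficiency(*p))

# Optimization search around the best DoE point (the "green points")
random.seed(0)
for _ in range(200):
    cand = (best[0] + random.uniform(-200, 200),
            best[1] + random.uniform(-0.5, 0.5))
    if efficiency(*cand) > efficiency(*best):
        best = cand          # the "red point": best found so far

print("best (rpm, flow):", best, "efficiency:", efficiency(*best))
```

In a real project each call to `efficiency` would launch a full simulation, which is why the DoE stage is kept coarse and the search stage local.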

1.5 Geometric Parameterization


1.5.1 Non-Uniform Rational B-Splines (NURBS)
Among the many ideas proposed for generating an arbitrary surface, approximate techniques using spline functions have gained wide popularity. The most commonly used approximate representation is the Non-Uniform Rational B-Spline (NURBS) function. NURBS provide a powerful geometric tool for representing both analytic shapes (conics, quadrics, surfaces of revolution, etc.) and free-form surfaces [Tiller, 1983]1. The surface is influenced by a set of control points and weights where, unlike in interpolating schemes, the control points need not lie on the surface itself. By changing the control points and corresponding weights, the designer can influence the surface with a great degree of flexibility without compromising the accuracy of the design. NURBS are a generalization of B-spline and Bezier representations, thus the family of curves and surfaces that can be represented with NURBS is much wider. The relation for a NURBS curve is

X(r) = Σᵢ₌₀ⁿ Rᵢ,ₚ(r) Dᵢ ,   i = 0, …, n ,   where  Rᵢ,ₚ(r) = Nᵢ,ₚ(r) ωᵢ / Σⱼ₌₀ⁿ Nⱼ,ₚ(r) ωⱼ

Eq. 1.5.1

where X(r) is the vector valued surface coordinate in the r-direction, Di are the control points
(forming a control polygon), ωi are weights, Ni,p(r) are the p-th degree B-Spline basis function (see
Eq. 1.5.1), and Ri,p(r) are known as the Rational Basis Functions satisfying:

Σᵢ₌₀ⁿ Rᵢ,ₚ(r) = 1 ,   Rᵢ,ₚ(r) ≥ 0

Eq. 1.5.2
Figure 1.5.2 illustrates a six control point representation of a generic airfoil. The points at the leading and trailing edges are fixed. Two control points at the 0% chord are used to affect the bluntness of the section. A similar procedure can be applied to other airfoil geometries such as the NACA four or five digit series.
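Eq. 1.5.1 can be evaluated directly using the Cox-de Boor recursion for the basis functions Nᵢ,ₚ(r). The quadratic three-point example below is a minimal illustration; the control points, weights, and knot vector are our own assumptions:

```python
def bspline_basis(i, p, r, knots):
    """Cox-de Boor recursion for the B-spline basis N_{i,p}(r), r in [0, 1)."""
    if p == 0:
        return 1.0 if knots[i] <= r < knots[i + 1] else 0.0
    left = right = 0.0
    if knots[i + p] != knots[i]:
        left = ((r - knots[i]) / (knots[i + p] - knots[i])
                * bspline_basis(i, p - 1, r, knots))
    if knots[i + p + 1] != knots[i + 1]:
        right = ((knots[i + p + 1] - r) / (knots[i + p + 1] - knots[i + 1])
                 * bspline_basis(i + 1, p - 1, r, knots))
    return left + right

def nurbs_point(r, ctrl, weights, p, knots):
    """Evaluate X(r) = sum_i R_{i,p}(r) D_i of Eq. 1.5.1 (rational basis)."""
    terms = [bspline_basis(i, p, r, knots) * weights[i] for i in range(len(ctrl))]
    denom = sum(terms)  # partition of unity denominator, Eq. 1.5.2
    return tuple(sum(t * c[k] for t, c in zip(terms, ctrl)) / denom
                 for k in range(2))

# Quadratic NURBS arc with three control points and a clamped knot vector
ctrl = [(0.0, 0.0), (1.0, 1.0), (2.0, 0.0)]
weights = [1.0, 2.0, 1.0]
knots = [0, 0, 0, 1, 1, 1]
print(nurbs_point(0.5, ctrl, weights, 2, knots))
```

Doubling the middle weight pulls the curve toward its control point, which is exactly the lever the text describes for deforming circular segments into other conics.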

Figure 1.5.2 B-Spline Approximation of NACA0012 (left) and RAE2822 (right) Airfoils

As an example, Figure 1.5.2 shows the NACA0012 and RAE2822 airfoils parameterized using a B-Spline curve of order 4 with control points. The procedure is easily applicable to 3D, for example to a common wing and fuselage [Kenway et al.]2. The choice of the number of control points and their locations is best determined using an inverse B-Spline interpolation of the initial data. The algorithm yields a system of linear equations with a positive and banded coefficient matrix; therefore, it can be solved safely using techniques such as Gaussian elimination without pivoting.
The procedure can be easily extended to cross-sectional configurations, when critical cross-sections
are denoted by several circular conic sections, and the intermediate surfaces have been generated
using linear interpolation. Increasing the weights would deform the circular segments to other conic
segments (elliptic, parabolic, etc.) as desired for different flight regions. In this manner, the number
of design parameters can be kept to a minimum, which is an important factor in reducing costs. An efficient gradient-based algorithm for aerodynamic shape optimization, integrating geometry parameterization and mesh movement, is presented by [Hicken and Zingg]3.
1.5.2 Radial Basis Function4
A radial basis function (RBF) is a real-valued function φ whose value depends only on the distance
between the input and some fixed point, either the origin, so that φ(x) = φ(||x||), or some other fixed
point (c), called a center, so that φ(x) = φ (||x -c||). Any function φ that satisfies the property φ(x)
= φ(||x||) is a radial function. The distance is usually Euclidean distance, although other metrics are

sometimes used. They are often used as a collection {φk}k which forms a basis for some function space
of interest, hence the name.
Sums of radial basis functions are typically used to approximate given functions. This approximation
process can also be interpreted as a simple kind of neural network; this was the context in which they
were originally applied to machine learning, in work by [David Broomhead] and [David Lowe] in
19885-6 which stemmed from [Michael J. D. Powell]'s seminal research from 1977. RBFs are also used
as a kernel in support vector classification. The technique has proven effective and flexible enough
that radial basis functions are now applied in a variety of engineering applications. Radial basis
functions are typically used to build up function approximations of the form

y(x) = Σᵢ₌₁ᴺ ωᵢ φ(‖x − xᵢ‖)

Eq. 1.5.3
where the approximating function y(x) is represented as a sum of N radial basis functions, each associated with a different center xᵢ and weighted by an appropriate coefficient ωᵢ. The weights ωᵢ can be estimated using the matrix methods of linear least squares, because the approximating function is linear in the weights.
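A minimal sketch of Eq. 1.5.3: Gaussian basis functions fitted by a linear solve so that the approximation interpolates sampled data. The target function, the centers, and the width parameter are illustrative assumptions:

```python
import numpy as np

def phi(r, eps=5.0):
    """Gaussian radial basis function; the width eps is an illustrative choice."""
    return np.exp(-(eps * r) ** 2)

# Sample a target function at a few centers x_i
centers = np.linspace(0.0, 1.0, 9)
targets = np.sin(2 * np.pi * centers)

# Interpolation matrix A[j, i] = phi(|x_j - x_i|); solve A w = targets for weights
A = phi(np.abs(centers[:, None] - centers[None, :]))
weights = np.linalg.solve(A, targets)

def y(x):
    """RBF approximation y(x) = sum_i w_i * phi(|x - x_i|) of Eq. 1.5.3."""
    return phi(np.abs(x - centers)) @ weights

# Because y is linear in the weights, the solve makes it interpolate the data
assert np.allclose([y(c) for c in centers], targets, atol=1e-6)
```

With more samples than centers, `np.linalg.lstsq` would replace `solve`, which is the least-squares estimation the text mentions.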
1.5.3 Class/Shape Function Transformation (CST) Method
In order to present a general parameterization technique for any type of geometries and to overcome
the mentioned limits, [Kulfan]7, [Kulfan & Bussoletti]8 and [Ceze]9 among others developed the
method of Class/Shape Function Transformation (CST). This method provides the mathematical
description of the geometry through a combination of a shape function and class function. The
class function provides for a wide variety of geometries. The shape function replaces the complex
non-analytic function with a simple analytic function that has the ability to control the design
parameters and uses only a few scalable parameters to define a large design space for aerodynamic
analysis. The advantage of CST lies in the fact that it is not only efficient in terms of a low number of design variables but also allows the use of industry-related design parameters such as the radius of the leading edge or the maximum thickness and its location10.
Any smooth airfoil can be represented by the general 2D CST equations. The only things that
differentiate one airfoil from another in the CST method are two arrays of coefficients that are built
into the defining equations. These coefficients control the curvature of the upper and lower surfaces
of the airfoil. This gives a set of design variables which allows for aerodynamic optimization. This
method of parameterization captures the entire design space of smooth airfoils and is therefore
useful for any application requiring a smooth airfoil. The upper and lower surface defining equations
are as follows:

ςU(ψ) = C_{N2}^{N1}(ψ) · SU(ψ) + ψ·ΔςU
ςL(ψ) = C_{N2}^{N1}(ψ) · SL(ψ) + ψ·ΔςL ,      where ψ = x/c and ς = z/c

Eq. 1.5.4
The last terms define the upper and lower trailing edge thicknesses. Eq. 1.5.4 uses the general class function to define the basic profile and the shape function to create the specific shape within that geometry class. The general class function is defined as:

C_{N2}^{N1}(ψ) = ψ^{N1} · (1 − ψ)^{N2}

Eq. 1.5.5

For a general NACA type symmetric airfoil with a


round nose and pointed aft end, N1 is 0.5 and N2
is 1.0 in the class function. This classifies the final
shape as being within the "airfoil" geometry class,
which forms the basis of CST airfoil
representation. This means that all other airfoils
represented by the CST method are derived from
the class function airfoil. Further details can be
found in [Lane and Marshall]11, or [Ceze et al.].
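A minimal sketch of Eq. 1.5.4 and Eq. 1.5.5 with a Bernstein-polynomial shape function, the standard CST choice; the coefficient values and the N2 = 1.0 exponent below are our own illustrative assumptions:

```python
import math

def class_fn(psi, n1=0.5, n2=1.0):
    """CST class function C(psi) = psi^N1 * (1 - psi)^N2 (round-nose airfoil class)."""
    return psi ** n1 * (1 - psi) ** n2

def shape_fn(psi, coeffs):
    """Shape function as a Bernstein-polynomial sum with coefficients A_i."""
    n = len(coeffs) - 1
    return sum(a * math.comb(n, i) * psi**i * (1 - psi)**(n - i)
               for i, a in enumerate(coeffs))

def cst_surface(psi, coeffs, dz_te=0.0):
    """Upper surface of Eq. 1.5.4: zeta(psi) = C(psi)*S(psi) + psi*dz_te."""
    return class_fn(psi) * shape_fn(psi, coeffs) + psi * dz_te

# Illustrative coefficients; psi = x/c runs from 0 (LE) to 1 (TE)
coeffs = [0.17, 0.16, 0.15, 0.14, 0.15, 0.14]
assert cst_surface(0.0, coeffs) == 0.0            # surface starts at the leading edge
assert abs(cst_surface(1.0, coeffs)) < 1e-12      # closes at the trailing edge (dz_te = 0)
```

The coefficient array is exactly the set of design variables an optimizer would vary; each entry bends one region of the upper surface.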
The 2D process for airfoils is easily extended to wings as a simple extrusion of parameterized airfoils. This greatly increases the number of design variables for an optimization scheme. However, it is no less powerful. By controlling the distribution of airfoils, any smooth wing can be represented. Characteristics such as sweep, taper, geometric twist, and aerodynamic twist can also be included. The definition of a 3D surface follows a similar structure to that of a 2D surface. For a complete description of the method, readers are encouraged to consult [Su et al.]12. Figure 1.5.3 illustrates the basis functions for six design variables.

Figure 1.5.3 Basis functions for six design variable configurations of the CST method
1.5.4 Singular Value Decomposition (SVD)13
In linear algebra, the Singular Value Decomposition (SVD) is a factorization of a real or complex matrix. It is the generalization of the eigendecomposition of a normal matrix (for example, a symmetric matrix with non-negative eigenvalues) to any m × n matrix via an extension of the polar decomposition. It has many useful applications in signal processing and statistics. Suppose M is an m × n matrix whose entries come from the field K, which is either the field of real numbers or the field of complex numbers. Then the singular value decomposition of M exists, and is a factorization of the form
𝐌 = 𝐔 Σ 𝐕*
Eq. 1.5.6
Where
• U is an m × m unitary matrix over K (if K = ℝ, unitary matrices are orthogonal matrices),
• Σ is a diagonal m × n matrix with non-negative real numbers on the diagonal,
• V is an n × n unitary matrix over K, and V∗ is the conjugate transpose of V.

The diagonal entries σi of Σ are known as the singular values of M. A common convention is to list
the singular values in descending order. In this case, the diagonal matrix, Σ, is uniquely determined
by M (though not the matrices U and V if M is not square).
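A minimal numerical check of Eq. 1.5.6 can be made with NumPy's `numpy.linalg.svd`; the matrix entries below are arbitrary illustration values.

```python
import numpy as np

# A 4 x 3 real matrix M (entries are arbitrary illustration values).
M = np.array([[1.0, 0.0, 2.0],
              [0.0, 3.0, 0.0],
              [2.0, 0.0, 1.0],
              [0.0, 1.0, 0.0]])

# U is 4x4 unitary, s holds the singular values, Vh is the conjugate transpose V*.
U, s, Vh = np.linalg.svd(M, full_matrices=True)

# Rebuild the diagonal 4x3 Sigma and verify the factorization M = U Sigma V*.
Sigma = np.zeros(M.shape)
np.fill_diagonal(Sigma, s)
assert np.allclose(M, U @ Sigma @ Vh)
# Singular values are non-negative and returned in descending order.
assert np.all(s >= 0.0) and np.all(np.diff(s) <= 0.0)
```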
1.5.3 Hicks-Henne Bump Functions
Hicks and Henne14 introduced an analytical approach that takes a baseline geometry and adds a
linear combination of bump functions to the upper and lower surface to create a new shape. For 2D
problems, the parameterized geometry function can be expressed by:
y = y_base + ∑i=1..n bi(x) ,   bi(x) = ai [sin(π x^(log 0.5 / log hi))]^ti ,   0 ≤ x ≤ 1
Eq. 1.5.7
where n is the number of bump functions; bi (x) is the bump function (or basis function) proposed by
Hicks and Henne; ai represents the maximum bump amplitude and acts as the weighting coefficient;
hi locates the maximum point of the bump and ti controls the width of the bump. By setting all the
coefficients ai to zero, the baseline geometry is recovered.
By inspecting Eq. 1.5.7, it is apparent that every bump function is defined by three parameters that
can either be fixed or varying during optimization. To ensure the parameterization is a linear
function of the design variables, only the bump amplitude coefficients ai are allowed to vary and thus
treated as design variables, while the other two parameters are fixed. For the bump maximum
positions hi, two approaches are employed in this study:
a) even distribution over the range of [0.5/n; 1 - 0.5/n]; and
b) uneven distribution described by a "one-minus-cosine" function:
hi = (1/2) [1 − cos(iπ/(n + 1))] ,   i = 1, 2, . . . , n
Eq. 1.5.8

Figure 1.5.4 Three sets of Hicks-Henne Bump functions with different settings of t (n = 5, ai = 1, hi ϵ
[0.1; 0.9]).

Figure 1.5.4 shows three sets of Hicks-Henne bump functions with different settings of t. It is
observed that the bump width narrows as t increases, which indicates that a relatively smaller
value of t provides more global shape control, whereas a relatively larger value of t generates more
local shape control. For the bump width control parameter, t, a constant value is specified within the
SU2 code. In this study, in addition to the default setting t = 3, a range of values of t is defined and the
impact on the optimization results is investigated.
A comparison of these two distributions is shown in Figure 1.5.5, where a set of ten bump functions
is distributed on the NACA 0012 airfoil. It is not unexpected that the "one-minus-cosine"
distribution results in bump functions clustered at the leading edge (LE) and trailing edge (TE) of the
airfoil.
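Eq. 1.5.7 and Eq. 1.5.8 can be sketched as follows. This is a minimal illustration; the helper names and sample parameter values are our own, and this is not the SU2 implementation.

```python
import math

def hicks_henne_bump(x, a, h, t):
    """b(x) = a * sin(pi * x**(log 0.5 / log h))**t  (Eq. 1.5.7)."""
    if x <= 0.0 or x >= 1.0:
        return 0.0
    return a * math.sin(math.pi * x ** (math.log(0.5) / math.log(h))) ** t

def cosine_peaks(n):
    """'One-minus-cosine' bump maximum positions h_i (Eq. 1.5.8)."""
    return [0.5 * (1.0 - math.cos(i * math.pi / (n + 1))) for i in range(1, n + 1)]

def deform(x, y_base, amplitudes, peaks, t=3.0):
    """New surface ordinate: baseline plus the linear combination of bumps."""
    return y_base + sum(hicks_henne_bump(x, a, h, t)
                        for a, h in zip(amplitudes, peaks))

# Each bump reaches its maximum amplitude exactly at x = h.
peak_value = hicks_henne_bump(0.3, 1.0, 0.3, 3.0)
```

Setting all amplitudes ai to zero recovers the baseline geometry, as stated in the text.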

Figure 1.5.5 Two distributions for Hicks-Henne bump functions (n = 10) on the NACA 0012 airfoil. Red
dashed lines indicate bump maximum positions.

1.5.4 Free-Form Deformation (FFD)


Free-Form Deformation (FFD), initially proposed by [Sederberg and Parry]15, is a widely used
parameterization method. The basic FFD concept can be visualized by embedding a flexible object
inside a flexible volume and deforming both of them simultaneously by perturbing the lattice of the
volume. The FFD control volume (or FFD box) has the topology of a cube when deforming three-
dimensional (3D) objects, or of a rectangular plane for 2D objects, and thus it can be parameterized as
either a trivariate volume or a bivariate surface. Both Bezier curves and uniform B-splines are used

Figure 1.5.6 View of FFD box enclosing the embedded object, including the control points shown in
spheres.

as FFD blending functions. Figure 1.5.6 illustrates the FFD box encapsulating a rectangular wing
and the RAE 2822 airfoil, where a lattice of control points is uniformly spaced on the surface of the FFD
box. The parameterized Bezier volume can be described using the following equation:

𝐗(ξ, η, ζ) = ∑i=0..l ∑j=0..m ∑k=0..n 𝐏ijk Bi^l(ξ) Bj^m(η) Bk^n(ζ)
Eq. 1.5.9
where l, m, n are the degrees of the FFD blending functions; ξ, η, ζ ∈ [0, 1] are the parametric
coordinates; 𝐏ijk are the Cartesian coordinates of control point (i, j, k); and 𝐗 are the corresponding
Cartesian coordinates (x, y, z) for a given (ξ, η, ζ) in the Bezier volume. Bi^l(ξ), Bj^m(η), and Bk^n(ζ)
are the Bernstein (basis) polynomials, which are expressed as

Bi^l(ξ) = [l! / (i!(l − i)!)] ξ^i (1 − ξ)^(l−i)
Bj^m(η) = [m! / (j!(m − j)!)] η^j (1 − η)^(m−j)
Bk^n(ζ) = [n! / (k!(n − k)!)] ζ^k (1 − ζ)^(n−k)
Eq. 1.5.10
The control points of the FFD box are defined as the design variables, the number of which depends on
the degree of the chosen Bernstein polynomials. FFD is numerically executed in three steps. First, the
embedded object is mapped from physical space to the parametric space of the FFD box; the parametric
coordinates (ξ, η, ζ) of each surface mesh node are determined and remain unchanged during the
optimization. Note that this mapping is evaluated only once. Second, the FFD control points are
perturbed, which deforms the FFD box as well as the embedded object. Third, once the FFD box has
been deformed, the new Cartesian coordinates X = (x, y, z) of the embedded object in physical space
are computed algebraically using Eq. 1.5.9. A key feature of the FFD parameterization approach is
that multiple control points can be grouped together to perform specific motions and thus achieve a
desired shape deformation, such as redefining airfoil camber and thickness, or applying changes to
wing twist and sweep; see [Yang & Da Ronch]16. As an example, the cubic Bezier basis functions are
shown in Figure 1.5.7.

Figure 1.5.7 The basis functions on the range ξ ∈ [0, 1] for cubic Bezier curves: blue: y = (1 − ξ)^3,
green: y = 3(1 − ξ)^2 ξ, red: y = 3(1 − ξ) ξ^2, and cyan: y = ξ^3.
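The FFD evaluation of Eq. 1.5.9 with the Bernstein polynomials of Eq. 1.5.10 can be sketched as follows; this is an illustrative pure-Python version, and the 2x2x2 identity lattice is our own test case.

```python
import math

def bernstein(n, i, u):
    """B_i^n(u) = n!/(i!(n-i)!) * u**i * (1-u)**(n-i)  (Eq. 1.5.10)."""
    return math.comb(n, i) * u ** i * (1.0 - u) ** (n - i)

def ffd_point(P, xi, eta, zeta):
    """Map parametric (xi, eta, zeta) through the control lattice
    P[i][j][k] = (x, y, z) according to Eq. 1.5.9."""
    l, m, n = len(P) - 1, len(P[0]) - 1, len(P[0][0]) - 1
    x = [0.0, 0.0, 0.0]
    for i in range(l + 1):
        for j in range(m + 1):
            for k in range(n + 1):
                w = bernstein(l, i, xi) * bernstein(m, j, eta) * bernstein(n, k, zeta)
                for d in range(3):
                    x[d] += w * P[i][j][k][d]
    return tuple(x)

# A 2x2x2 lattice spanning a unit cube: before any control point is
# perturbed, the mapping is the identity on the parametric coordinates.
P = [[[(float(i), float(j), float(k)) for k in range(2)]
      for j in range(2)] for i in range(2)]
```

Perturbing a single entry of P moves every embedded point whose blending weight for that control point is nonzero, which is exactly the smooth shape control described above.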
1.5.5 References
1 Tiller, W., "Rational B-Splines for Curve and Surface Representation", Computer Graphics, 1983.
2 Gaetan K. W. Kenway, Joaquim R. R. A. Martins, and Graeme J. Kennedy, "Aerostructural Optimization
of the Common Research Model Configuration", American Institute of Aeronautics and Astronautics.
3 Jason E. Hicken and David W. Zingg, "Aerodynamic Optimization Algorithm with Integrated Geometry
Parameterization and Mesh Movement", AIAA Journal, Vol. 48, No. 2, February 2010.
4 Wikipedia.
5 Radial Basis Function networks, Archived 2014-04-23 at the Wayback Machine.
6 Broomhead, David H.; Lowe, David (1988). "Multivariable Functional Interpolation and Adaptive
Networks" (PDF). Complex Systems. 2: 321–355. Archived from the original (PDF) on 2014-07-14.
7 Kulfan, B. M., "Universal Parametric Geometry Representation Method", Journal of Aircraft, 2008.
8 Kulfan, B. M. and Bussoletti, J. E., "Fundamental Parametric Geometry Representations for Aircraft
Component Shapes", 11th AIAA/ISSMO Multidisciplinary Analysis and Optimization Conference, 2006.
9 Marco Ceze, Marcelo Hayashi and Ernani Volpe, "A Study of the CST Parameterization
Characteristics", AIAA 2009-3767.
10 Arash Mousavi, Patrice Castonguay and Siva K. Nadarajah, "Survey of Shape Parameterization
Techniques and Its Effect on Three-Dimensional Aerodynamic Shape Optimization", AIAA 2007-3837.
11 Kevin A. Lane and David D. Marshall, "A Surface Parameterization Method for Airfoil Optimization
and High Lift 2D Geometries Utilizing the CST Methodology", AIAA 2009-1461.
12 Hua Su, Chunlin Gong, and Liangxian Gu, "Three-Dimensional CST Parameterization Method Applied
in Aircraft Aeroelastic Analysis", Hindawi, International Journal of Aerospace Engineering, Volume 2017.
13 Wikipedia.
14 Hicks, R. M. and Henne, P. A., "Wing Design by Numerical Optimization", Journal of Aircraft, 1978.
15 Sederberg, T. W. and Parry, S. R., "Free-Form Deformation of Solid Geometric Models", ACM SIGGRAPH
Computer Graphics, Vol. 20, No. 4, 1986, pp. 151–160.
16 Guangda Yang and Andrea Da Ronch, "Aerodynamic Shape Optimization of Benchmark Problems
Using SU2", AIAA, January 2018.

1.6 Some Common Terminologies


An important step in the optimization process is classifying your optimization model, since
algorithms for solving optimization problems are tailored to a particular type of problem. Here we
provide some guidance to help you classify your optimization model. According to (Martins & Ning,
2021), optimization problems can be classified by attributes associated with the different aspects of
the problem (Figure 1.6.1). The two main aspects are the problem formulation and the objective
and constraint function characteristics. Other researchers, such as (Miguel Matos Neves), describe the
optimization problem as:
➢ Deterministic - In deterministic optimization the input data for the given problem are known
accurately. This is the case, for example, in many structural optimization problems.
➢ Heuristic - It is not unusual to encounter problems that are computationally very hard. In
such cases the time spent in computations (e.g. line search techniques) is not justified, and
there is some evidence that for these problems heuristic search techniques (in which e.g.
randomness is used to find better solutions) are better methods. Examples of heuristic
optimization methods are simulated annealing, genetic algorithms and evolutionary algorithms.
➢ Stochastic - There are plenty of problems where the input data for the given problem are
NOT known accurately. Then optimization under uncertainty, or stochastic optimization, is
chosen, in which the uncertainty is considered under the assumption of given (assumed
known) probability distributions. This is the case, e.g., where human behavior influences
the performance being optimized, as in economic applications.
➢ Robust - There exists an intermediate special type of problem where parameters are known,
but only within certain bounds. For these cases robust optimization techniques are usually more
adequate, e.g. cases where a structure is loaded differently in different scenarios which do not
occur simultaneously. In robust optimization, one looks for a compromise between the
different optima found in each scenario, without necessarily calculating these different optima.

Figure 1.6.1 Optimization Classification (Courtesy of Martins & Ning)

1.6.1 Identify Independent / Dependent Variables


According to Lauren Thomas, in scientific research we often want to study the effect of one variable
on another. The variables in a study of a cause/effect relationship are called the independent and
dependent variables:
• The independent variable is the cause. Its value is independent of other variables in your
study.
• The dependent variable is the effect. Its value depends on changes in the independent
variable.
For example, in airfoil aerodynamic optimization the surface coordinates of the airfoil are the
independent variables, while the drag coefficient CD is the dependent variable. The dependent
variables lead to the objective function (i.e., CD, CL, etc.), with or without constraints, while the
independent variables (i.e., the design variables) are sometimes called the degrees of freedom of the
system.
1.6.2 Continuous vs. Discrete Optimization
Some models only make sense if the variables take on values from a discrete set, often a subset of
integers, whereas other models contain variables that can take on any real value. Models with
discrete variables are discrete optimization problems; models with continuous variables are
continuous optimization problems. Continuous optimization problems tend to be easier to solve
than discrete optimization problems; the smoothness of the functions means that the objective
function and constraint function values at a point x can be used to deduce information about points
in a neighborhood of x. However, improvements in algorithms coupled with advancements in
computing technology have dramatically increased the size and complexity of discrete optimization
problems that can be solved efficiently. Continuous optimization algorithms are important in discrete
optimization because many discrete optimization algorithms generate a sequence of continuous sub
problems.
1.6.3 Unconstrained vs. Constrained Optimization
Another important distinction is between problems in which there are no constraints on the
variables and problems in which there are constraints on the variables. Unconstrained
optimization problems arise directly in many practical applications; they also arise in the
reformulation of constrained optimization problems in which the constraints are replaced by a
penalty term in the objective function. Constrained optimization problems arise from applications
in which there are explicit constraints on the variables. The constraints on the variables can vary
widely from simple bounds to systems of equalities and inequalities that model complex
relationships among the variables. Constrained optimization problems can be further classified
according to the nature of the constraints (e.g., linear, nonlinear, convex) and the smoothness of the
functions (e.g., differentiable or nondifferentiable). Gradient-based algorithms typically use an
iterative two-step method to reach the optimum, as described by [Venter]1. The first step is to use
gradient information to find the search direction, and the second step is to move in that direction until
no further progress can be made or until a new constraint is reached. The second step is known as
the line search and provides the optimum step size. The two-step process is repeated until the
optimum is found; see Figure 1.6.2 2. Depending on the scenario, different search directions are
required.
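The two-step procedure described above can be sketched as follows. This is a minimal illustration with a simple backtracking rule standing in for the full line search; the quadratic test function is our own.

```python
def grad_descent_linesearch(f, grad, x0, iters=50):
    """Two-step scheme: (1) search direction from the gradient,
    (2) backtracking line search for the step size."""
    x = list(x0)
    for _ in range(iters):
        d = [-g for g in grad(x)]            # step 1: steepest-descent direction
        alpha, fx = 1.0, f(x)                # step 2: shrink alpha until f decreases
        while f([xi + alpha * di for xi, di in zip(x, d)]) >= fx and alpha > 1e-12:
            alpha *= 0.5
        x = [xi + alpha * di for xi, di in zip(x, d)]
    return x

# Quadratic test function with its minimum at (1, -0.5).
f = lambda x: (x[0] - 1.0) ** 2 + 2.0 * (x[1] + 0.5) ** 2
grad = lambda x: [2.0 * (x[0] - 1.0), 4.0 * (x[1] + 0.5)]
xopt = grad_descent_linesearch(f, grad, [0.0, 0.0])   # close to (1, -0.5)
```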

Figure 1.6.2 Schematic of a Gradient-Based Optimization with Two Design Variables



1.6.4 Deterministic vs. Stochastic Optimization


In deterministic optimization, it is assumed that the data for the given problem are known accurately.
However, for many actual problems, the data cannot be known accurately for a variety of reasons.
The first reason is simple measurement error. The second and more fundamental reason is
that some data represent information about the future (e.g., product demand or price for a future
time period) and simply cannot be known with certainty. In stochastic optimization, the uncertainty
is incorporated into the model. Stochastic programming models take advantage of the fact that
probability distributions governing the data are known or can be estimated; the goal is to find some
policy that is feasible for all (or almost all) the possible data instances and optimizes the expected
performance of the model.
1.6.5 Quantity of Objective Functions
Most optimization problems have a single objective function; however, there are interesting cases
when optimization problems have no objective function or multiple objective functions. In the former
case (e.g., feasibility and complementarity problems), the goal is to find a solution that satisfies the
complementarity conditions. Multi-objective optimization problems arise in many fields, such as
engineering, economics, and logistics, when optimal decisions need to be taken in the presence of
trade-offs between two or more conflicting objectives. For example, developing a new component
might involve minimizing weight while maximizing strength, or choosing a portfolio might involve
maximizing the expected return while minimizing the risk. In practice, problems with multiple
objectives are often reformulated as single-objective problems by either forming a weighted
combination of the different objectives or by replacing some of the objectives by constraints.
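The weighted-combination reformulation mentioned above can be sketched as follows; the two 1-D objectives and the weights are our own illustrative choices.

```python
def weighted_sum(objectives, weights):
    """Collapse multiple objectives F_i(x) into a single scalar objective."""
    def f(x):
        return sum(w * obj(x) for w, obj in zip(weights, objectives))
    return f

# Two conflicting 1-D objectives: F1 pulls x toward 0, F2 pulls x toward 1.
F1 = lambda x: x ** 2
F2 = lambda x: (x - 1.0) ** 2

f = weighted_sum([F1, F2], [0.7, 0.3])
# Crude grid search over [0, 1]; the analytic compromise is at x = 0.3.
best_x = min((i / 1000.0 for i in range(1001)), key=f)
```

Changing the weights moves the compromise along the trade-off between the two objectives, which is why a single weighted run cannot, by itself, reveal the whole Pareto set.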
1.6.6 References
1 Venter, G., "Review of Optimization Techniques", in R. Blockley and W. Shyy (Eds.), Encyclopedia of
Aerospace Engineering, Chichester, West Sussex, UK: John Wiley and Sons.
2 Schematic picture of a gradient-based optimization algorithm for the case with two design variables.
The response values are indicated by the iso-curves and the star represents the optimum solution for
a) unconstrained optimization and b) constrained optimization. The infeasible region violating the
constraints is marked by the shaded areas.

1.7 Optimization Algorithms


Optimization refers to a procedure for finding the input parameters or arguments to a function that
result in the minimum or maximum output of the function[1]. The most common type of optimization
problems encountered are continuous function optimization, where the input arguments to the
function are real-valued numeric values, e.g. floating point values. The output from the function is
also a real-valued evaluation of the input values. There are many different types of optimization
algorithms that can be used for continuous function optimization problems, and perhaps just as many
ways to group and summarize them. One approach to grouping optimization algorithms is based on
the amount of information available about the target function that is being optimized that, in turn,
can be used and harnessed by the optimization algorithm.
1.7.1 Differentiable Objective Function
A differentiable function is a function where the derivative can be calculated for any given point in
the input space. The derivative of a function for a value is the rate or amount of change in the function
at that point. It is often called the slope.
1.7.1.1 First-Order Derivative
Slope or rate of change of an objective function at a given point. The derivative of the function with
more than one input variable (e.g. multivariate inputs) is commonly referred to as the gradient.

1.7.1.1.1 Gradient
Derivative of a multivariate continuous objective function. A derivative for a multivariate objective
function is a vector, and each element in the vector is called a partial derivative, or the rate of change
for a given variable at the point assuming all other variables are held constant.
1.7.1.1.2 Partial Derivative
Element of a derivative of a multivariate objective function. We can calculate the derivative of the
derivative of the objective function, that is the rate of change of the rate of change in the objective
function. This is called the second derivative.
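When analytic derivatives are unavailable, the gradient and second derivatives defined above can be estimated by finite differences. This is an illustrative sketch; the step sizes and the test function are our own choices.

```python
def fd_gradient(f, x, h=1e-6):
    """Central-difference estimate of the gradient (vector of partials)."""
    g = []
    for i in range(len(x)):
        xp = list(x); xp[i] += h
        xm = list(x); xm[i] -= h
        g.append((f(xp) - f(xm)) / (2.0 * h))
    return g

def fd_hessian(f, x, h=1e-4):
    """Central-difference estimate of the Hessian (matrix of second partials)."""
    n = len(x)
    H = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            xpp = list(x); xpp[i] += h; xpp[j] += h
            xpm = list(x); xpm[i] += h; xpm[j] -= h
            xmp = list(x); xmp[i] -= h; xmp[j] += h
            xmm = list(x); xmm[i] -= h; xmm[j] -= h
            H[i][j] = (f(xpp) - f(xpm) - f(xmp) + f(xmm)) / (4.0 * h * h)
    return H

f = lambda x: x[0] ** 2 + 3.0 * x[0] * x[1]   # gradient (2x1 + 3x2, 3x1); Hessian [[2, 3], [3, 0]]
g = fd_gradient(f, [1.0, 2.0])
H = fd_hessian(f, [1.0, 2.0])
```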
1.7.1.2 Second-Order Derivative
Rate at which the derivative of the objective function changes. For a function that takes multiple input
variables, this is a matrix and is referred to as the Hessian matrix.
1.7.1.2.1 Hessian Matrix
Second derivative of a function with two or more input variables. Simple differentiable functions can
be optimized analytically using calculus. Typically, the objective functions that we are interested in
cannot be solved analytically. Optimization is significantly easier if the gradient of the objective
function can be calculated, and as such, there has been a lot more research into optimization
algorithms that use the derivative than those that do not. Some groups of algorithms that use gradient
information include:
• Bracketing Algorithms
• Local Descent Algorithms
• First-Order Algorithms
• Second-Order Algorithms
Note: this taxonomy is inspired by the 2019 book “Algorithms for Optimization.” Let’s take a closer
look at each in turn.
1.7.1.2.2 Bracketing Algorithms
Bracketing optimization algorithms are intended for optimization problems with one input variable
where the optima is known to exist within a specific range. Bracketing algorithms are able to
efficiently navigate the known range and locate the optima, although they assume only a single
optima is present (referred to as unimodal objective functions). Some bracketing algorithms may be
able to be used without derivative information if it is not available. Examples of bracketing algorithms
include:
• Fibonacci Search
• Golden Section Search
• Bisection Method
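A golden-section search, one of the bracketing methods listed above, can be sketched as follows; this minimal version assumes a unimodal objective on the bracket and re-evaluates the probes each iteration for clarity rather than efficiency.

```python
import math

def golden_section(f, lo, hi, tol=1e-8):
    """Bracketing search: shrink [lo, hi] around the minimum of a unimodal f."""
    invphi = (math.sqrt(5.0) - 1.0) / 2.0     # 1/phi, about 0.618
    a, b = lo, hi
    c = b - invphi * (b - a)                  # lower probe
    d = a + invphi * (b - a)                  # upper probe
    while b - a > tol:
        if f(c) < f(d):                       # minimum lies in [a, d]
            b, d = d, c
            c = b - invphi * (b - a)
        else:                                 # minimum lies in [c, b]
            a, c = c, d
            d = a + invphi * (b - a)
    return 0.5 * (a + b)

# Unimodal objective on [0, 2] with its minimum at x = 0.7.
xmin = golden_section(lambda x: (x - 0.7) ** 2, 0.0, 2.0)
```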
1.7.1.2.3 Local Descent Algorithms
Local descent optimization algorithms are intended for optimization problems with more than one
input variable and a single global optima (e.g. unimodal objective function). Perhaps the most
common example of a local descent algorithm is the line search algorithm.
• Line Search
There are many variations of the line search (e.g. the Brent-Dekker algorithm), but the procedure
generally involves choosing a direction to move in the search space, then performing a bracketing
type search in a line or hyperplane in the chosen direction. This process is repeated until no further
improvements can be made. The limitation is that it is computationally expensive to optimize each
directional move in the search space.

1.7.1.2.4 First-Order Algorithms


First-order optimization algorithms explicitly involve using the first derivative (gradient) to choose
the direction to move in the search space. The procedures involve first calculating the gradient of the
function, then following the gradient in the opposite direction (e.g. downhill to the minimum for
minimization problems) using a step size (also called the learning rate). The step size is a
hyperparameter that controls how far to move in the search space, unlike “local descent algorithms”
that perform a full line search for each directional move. A step size that is too small results in a
search that takes a long time and can get stuck, whereas a step size that is too large will result in zig-
zagging or bouncing around the search space, missing the optima completely. First-order algorithms
are generally referred to as gradient descent, with more specific names referring to minor extensions
to the procedure, e.g.:
• Gradient Descent
• Momentum
• Adagrad
• RMSProp
• Adam
The gradient descent algorithm also provides the template for the popular stochastic version of the
algorithm, named Stochastic Gradient Descent (SGD), that is used to train artificial neural network
(deep learning) models. The important difference is that the gradient is approximated rather than
calculated directly, using the prediction error on training data, such as one sample (stochastic), all
examples (batch), or a small subset of training data (mini-batch). The extensions designed to
accelerate the gradient descent algorithm (momentum, etc.) can be and are commonly used with SGD.
• Stochastic Gradient Descent
• Batch Gradient Descent
• Mini-Batch Gradient Descent
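A bare-bones SGD on a one-parameter least-squares problem illustrates the single-sample gradient update described above; the learning rate, data, and epoch count are our own illustrative choices.

```python
import random

def sgd(data, w0, lr=0.1, epochs=500, seed=0):
    """Minimize the squared error of y ~ w*x one random sample at a time:
    the gradient is approximated from a single example per update."""
    rng = random.Random(seed)
    w = w0
    for _ in range(epochs):
        x, y = rng.choice(data)
        g = 2.0 * (w * x - y) * x    # gradient of the single-sample loss
        w -= lr * g
    return w

# Noise-free samples of y = 3x; SGD should recover w close to 3.
data = [(k / 10.0, 3.0 * k / 10.0) for k in range(1, 11)]
w = sgd(data, w0=0.0)
```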
1.7.1.2.5 Second-Order Algorithms
Second-order optimization algorithms explicitly involve using the second derivative (Hessian) to
choose the direction to move in the search space. These algorithms are only appropriate for those
objective functions where the Hessian matrix can be calculated or approximated. Examples of
second-order optimization algorithms for univariate objective functions include:
• Newton’s Method
• Secant Method
Second-order methods for multivariate objective functions are referred to as Quasi-Newton Methods.
• Quasi-Newton Method
There are many Quasi-Newton Methods, and they are typically named for the developers of the
algorithm, such as:
• Davidon-Fletcher-Powell
• Broyden-Fletcher-Goldfarb-Shanno (BFGS)
• Limited-memory BFGS (L-BFGS)
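Newton's method for a univariate objective can be sketched as follows; the quartic test function is our own, and the starting point is deliberately chosen inside a region of positive curvature.

```python
def newton(fprime, fsecond, x0, iters=20):
    """Newton's method: use curvature (the second derivative) to jump
    toward a stationary point of the objective."""
    x = x0
    for _ in range(iters):
        x -= fprime(x) / fsecond(x)
    return x

# Minimize f(x) = x**4 - 3*x**2 + 2, starting near its right-hand minimum.
fp = lambda x: 4.0 * x ** 3 - 6.0 * x     # f'
fpp = lambda x: 12.0 * x ** 2 - 6.0       # f''
xstar = newton(fp, fpp, x0=2.0)           # converges to sqrt(1.5)
```

Quasi-Newton methods follow the same template but replace `fsecond` (the Hessian, in several dimensions) with an approximation built from successive gradients.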
Now that we are familiar with the so-called classical optimization algorithms, let’s look at algorithms
used when the objective function is not differentiable [1].
1.7.2 Non-Differentiable Objective Function
Optimization algorithms that make use of the derivative of the objective function are fast and
efficient. Nevertheless, there are objective functions where the derivative cannot be calculated,
typically because the function is complex for a variety of real-world reasons, or where the derivative
can be calculated only in some regions of the domain, or is not a good guide. Some difficulties with
objective functions for the classical algorithms described in the previous section include:
• No analytical description of the function (e.g. simulation).
• Multiple global optima (e.g. multimodal).
• Stochastic function evaluation (e.g. noisy).
• Discontinuous objective function (e.g. regions with invalid solutions).
As such, there are optimization algorithms that do not expect first- or second-order derivatives to be
available. These algorithms are sometimes referred to as black-box optimization algorithms as they
assume little or nothing (relative to the classical methods) about the objective function. A grouping
of these algorithms include:
• Direct Algorithms
• Stochastic Algorithms
• Population Algorithms
Let’s take a closer look at each in turn.
1.7.2.1 Direct Algorithms
Direct optimization algorithms are for objective functions for which derivatives cannot be calculated.
The algorithms are deterministic procedures and often assume the objective function has a single
global optima, e.g. unimodal. Direct search methods are also typically referred to as a “pattern search”
as they may navigate the search space using geometric shapes or decisions, e.g. patterns. Gradient
information is approximated directly (hence the name) from the result of the objective function
comparing the relative difference between scores for points in the search space. These direct
estimates are then used to choose a direction to move in the search space and triangulate the region
of the optima. Examples of direct search algorithms include:
• Cyclic Coordinate Search
• Powell’s Method
• Hooke-Jeeves Method
• Nelder-Mead Simplex Search
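The cyclic coordinate search listed above can be sketched as follows. This is a minimal derivative-free version; the step-halving rule and the test function are our own choices.

```python
def cyclic_coordinate_search(f, x0, step=1.0, tol=1e-8):
    """Direct (derivative-free) search: probe along one coordinate direction
    at a time, shrinking the step when no move improves the objective."""
    x = list(x0)
    while step > tol:
        improved = False
        for i in range(len(x)):
            for delta in (step, -step):
                trial = list(x)
                trial[i] += delta
                if f(trial) < f(x):
                    x = trial
                    improved = True
                    break
        if not improved:
            step *= 0.5            # no direction helped: refine the pattern
    return x

f = lambda x: (x[0] - 2.0) ** 2 + (x[1] + 1.0) ** 2
xopt = cyclic_coordinate_search(f, [0.0, 0.0])   # close to (2, -1)
```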
1.7.2.2 Stochastic Algorithms
Stochastic optimization algorithms are algorithms that make use of randomness in the search
procedure for objective functions for which derivatives cannot be calculated. Unlike the deterministic
direct search methods, stochastic algorithms typically involve a lot more sampling of the objective
function, but are able to handle problems with deceptive local optima. Stochastic optimization
algorithms include:
• Simulated Annealing
• Evolution Strategy
• Cross-Entropy Method
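A bare-bones simulated annealing loop illustrates the idea: always accept improvements, and occasionally accept uphill moves while the temperature is high. The cooling schedule, neighborhood size, and test function are our own illustrative choices; a production implementation would tune all three.

```python
import math, random

def simulated_annealing(f, x0, iters=20000, t0=1.0, seed=1):
    """Randomized search: accept improvements, and accept uphill moves
    with probability exp(-delta/T) while the temperature T cools."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best, fbest = x, fx
    for k in range(1, iters + 1):
        T = t0 / k                            # simple cooling schedule
        cand = x + rng.uniform(-0.5, 0.5)     # random neighbor
        fc = f(cand)
        if fc < fx or rng.random() < math.exp(-(fc - fx) / T):
            x, fx = cand, fc
        if fx < fbest:                        # track the best point seen
            best, fbest = x, fx
    return best

# Multimodal 1-D objective: global minimum f = 0 at x = 0, with
# deceptive local minima; the search starts far away at x0 = 3.
f = lambda x: x ** 2 + 2.0 * math.sin(5.0 * x) ** 2
xbest = simulated_annealing(f, x0=3.0)
```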
1.7.2.3 Population Algorithms
Population optimization algorithms are stochastic optimization algorithms that maintain a pool (a
population) of candidate solutions that together are used to sample, explore, and hone in on an
optima. Algorithms of this type are intended for more challenging objective problems that may have
noisy function evaluations and many global optima (multimodal), and finding a good or good enough
solution is challenging or infeasible using other methods. The pool of candidate solutions adds
robustness to the search, increasing the likelihood of overcoming local optima. Examples of
population optimization algorithms include:
• Genetic Algorithm

• Differential Evolution
• Particle Swarm Optimization
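A minimal differential evolution sketch (the DE/rand/1/bin scheme) illustrates the population-based approach; the population size, F, CR, and the test function are our own choices, and bound handling is omitted for brevity.

```python
import random

def differential_evolution(f, bounds, pop_size=20, F=0.7, CR=0.9, gens=200, seed=2):
    """DE/rand/1/bin: perturb each member with the scaled difference of two
    others, crossover with the target, keep the better of target and trial."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = rng.sample([p for j, p in enumerate(pop) if j != i], 3)
            jrand = rng.randrange(dim)           # ensure at least one mutated gene
            trial = [a[d] + F * (b[d] - c[d])
                     if (rng.random() < CR or d == jrand) else pop[i][d]
                     for d in range(dim)]
            if f(trial) <= f(pop[i]):            # greedy one-to-one selection
                pop[i] = trial
    return min(pop, key=f)

# Simple 2-D test objective with its minimum at (1, -2).
f = lambda x: (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2
xbest = differential_evolution(f, [(-5.0, 5.0), (-5.0, 5.0)])
```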
1.7.3 Reference
[1] Jason Brownlee, "How to Choose an Optimization Algorithm", Machine Learning Mastery
(website), Dec 2020.

1.8 Optimization Framework

By convention, the standard form defines a minimization problem; a maximization problem can be
treated by negating the objective function. In CFD analysis, we mostly deal with continuous
optimization. Most optimization methods use an iterative procedure: an initial set of design
variables x, which in the context of aerodynamic optimization is referred to as the baseline
configuration, is updated until a minimum of f(x) is identified or the optimization process runs
out of allocated time/iterations. The workflow for an optimization problem is:
1 Identify the independent/dependent variables (Design Space);
2 Parametrization of Design Space;
3 Types of Design Variables (Discrete and/or Continuous);
4 The level of information fidelity required from the flow solver, depending on the problem;
5 Sensitivity analysis and obtaining the gradient of the objective function;
6 Searching for the optimum (optimization algorithm);
7 Single or Multi-Objective Optimization;
8 Constraints Handling;
It is important to note that no optimization procedure guarantees that the global optimum of the
objective function f(x) will be found; the process may only converge towards a locally optimal
solution. Typically, in this situation there are three possibilities:
➢ Restart the optimization process to investigate if the same solution is found;
➢ Approach the design problem with a different optimization methodology to compare solution
quality at a high computational expense;
➢ Accept the optimum found knowing that while it is superior to the baseline configuration it
may not be the optimal solution.
1.8.1 Single vs. Multi-Objective Optimization
Many real-world decision making problems need to achieve several objectives: minimize risks,
maximize reliability, minimize deviations from desired levels, minimize cost, etc.1. The main goal of
single-objective (SO) optimization is to find the “best” solution, which corresponds to the minimum
or maximum value of a single objective function that lumps all different objectives into one. This
type of optimization is useful as a tool which should provide decision makers with insights into the
nature of the problem, but usually cannot provide a set of alternative solutions that trade different
objectives against each other. On the contrary, in a multi-objective (MO) continuous nonlinear
optimization with conflicting objectives, there is no single optimal solution. The interaction among
different objectives gives rise to a set of compromised solutions, largely known as the trade-off, non-
dominated, non-inferior or Pareto-optimal solutions. The consideration of many objectives in the
design or planning stages provides three major improvements to the procedure that directly
supports the decision-making process [Cohon, 1978]:
• A wider range of alternatives is usually identified when a multi-objective methodology is
employed.
• Consideration of multiple objectives promotes more appropriate roles for the participants in
the planning and decision-making processes, i.e. “analyst” or “modeler” who generates
alternative solutions, and “decision maker” who uses the solutions generated by the analyst

to make informed decisions.


• Models of a problem will be more realistic if many objectives are considered.

Single Objective Function:

Minimize f(x)                                        Objective function
subject to:
gi(x) ≤ 0 ,  i = 1, 2, . . . , m                     Inequality constraints
hn(x) = 0 ,  n = 1, 2, . . . , p                     Equality constraints
x = {x1, x2, … , xndv}T                              Design variables
xk^l ≤ xk ≤ xk^u                                     Parameterized (side) constraints

Multiple Objectives:

Minimize F(x) = [F1(x), F2(x), . . . , Fk(x)]T
subject to:
gj(x) ≤ 0 ,  j = 1, 2, . . . , m                     Inequality constraints
hL(x) = 0 ,  L = 1, 2, . . . , e                     Equality constraints

F(x) ∈ E^k are also called objectives, criteria, payoff functions, cost functions, or value
functions, where k is the number of objective functions, m is the number of inequality
constraints, and e is the number of equality constraints. x ∈ E^n is a vector of design
variables (also called decision variables), where n is the number of independent
variables.

Single-objective optimization identifies a single optimal alternative which can be used within
the multi-objective framework. This does not involve accumulating different objectives into a single
objective function, but entails setting all except one of them as constraints in the optimization

Figure 1.8.1 Different Search and Optimization Techniques



process. However, most design and planning problems are characterized by a large and often infinite
number of alternatives. Thus, multi-objective methodologies are more likely to identify a wider range
of these alternatives, since they do not need to specify for which level of one objective a single optimal
solution is obtained for another. Figure 1.8.1 illustrates different optimization techniques and
searches. Note that direct search methods perform hill climbing in the function space by moving in a
direction related to the local gradient, whereas in indirect methods the solution is sought by solving a
set of equations resulting from setting the gradient of the objective function to zero1. A more orthodox
classification is given by [Andersson]2, where optimization methods are divided into derivative and
non-derivative methods.
1.8.1.1 Various Methods to Solve Multiple Objective Optimization
A large number of approaches exist in the literature to solve multi-objective optimization problems.
These are aggregating (combining), population-based non-Pareto, and Pareto-based techniques. In
case of aggregating techniques, different objectives are generally combined into one using weighting
or a goal-based method. One of the techniques in the population-based non-Pareto approach is the
Vector Evaluated Genetic Algorithm (VEGA). Here, different sub-populations are used for the different
objectives. Pareto-based approaches include the Multiple Objective GA (MOGA), the Non-dominated
Sorting GA (NSGA), and the Niched Pareto GA. Note that all these techniques are essentially non-exclusive in
nature. Simulated annealing (SA) performs reasonably well in solving single-objective optimization
problems. However, its application for solving multi-objective problems has been limited, mainly
because it finds a single solution in a single run instead of a set of solutions. This appears to be a
critical bottleneck in multi-objective optimization. However, SA has been found to have some
favorable characteristics for multimodal search. The advantage of SA stems from its good selection
technique3.
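A minimal sketch of single-objective simulated annealing may clarify the selection mechanism mentioned above: worse moves are accepted with a temperature-dependent Boltzmann probability, which is what gives SA its favorable multimodal search behavior. The neighborhood rule, cooling schedule, and all parameter values below are illustrative assumptions, not prescriptions from the references:

```python
import math
import random

def simulated_annealing(f, x0, step=0.5, t0=1.0, cooling=0.995, iters=5000, seed=0):
    """Minimal SA for minimizing a function f of a list of floats."""
    rng = random.Random(seed)
    x, fx = list(x0), f(x0)
    best, fbest = list(x), fx
    t = t0
    for _ in range(iters):
        # Propose a random neighbor of the current point
        cand = [xi + rng.uniform(-step, step) for xi in x]
        fc = f(cand)
        # Always accept improvements; accept worse moves with Boltzmann probability
        if fc < fx or rng.random() < math.exp((fx - fc) / max(t, 1e-12)):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = list(x), fx
        t *= cooling  # geometric cooling schedule
    return best, fbest

# Usage: minimize a multimodal (Rastrigin-type) function in 2-D
f = lambda x: sum(xi * xi - 10.0 * math.cos(2.0 * math.pi * xi) + 10.0 for xi in x)
xb, fb = simulated_annealing(f, [3.0, -2.0])
```

As the text notes, a single SA run returns one solution, which is why multi-objective use typically requires repeated runs or an archive of non-dominated candidates.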
1.8.1.2 Pareto Optimality
In contrast to single-objective optimization, a solution to a multi-objective problem is more of a
concept than a definition. Typically, there is no single global solution, and it is often necessary to
determine a set of points that all fit a predetermined definition for an optimum. The predominant
concept in defining an optimal point is that of Pareto optimality (Pareto, 1906), which is defined as
follows:

Definition (Pareto Optimal): A point x∗ ∈ X is Pareto optimal if there does not exist another
point x ∈ X such that F(x) ≤ F(x∗) and Fi(x) < Fi(x∗) for at least one function.

Figure 1.8.2 Pareto Optimal

All Pareto optimal points lie on the boundary of the feasible criterion space. Often, algorithms
provide solutions that may not be Pareto optimal but may satisfy other criteria, making them
significant for practical applications. For each solution that is contained in the Pareto set, one can
only improve one objective by accepting a trade-off in at least one other objective. That is, roughly
speaking, in a two-dimensional problem, we are interested in finding the lower left boundary of the
reachable set in objective space. Figure 1.8.2 shows a Pareto frontier (in red), the set of Pareto
optimal solutions (those that are not dominated by any other feasible solutions). The boxed points
represent feasible choices, and smaller values are preferred to larger ones. Point C is not on the
Pareto frontier because it is dominated by both point A and point B. Points A and B are not strictly
dominated by any other, and hence lie on the frontier4.
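The definition above translates directly into a brute-force non-dominated filter, shown here as an O(n²) sketch suitable for small point sets; the sample points are hypothetical objective vectors, loosely mimicking points A, B, and C of the figure:

```python
def pareto_front(points):
    """Return the non-dominated subset of a list of objective vectors (minimization).

    A point q dominates p if q <= p in every objective and q < p in at least one.
    """
    front = []
    for p in points:
        dominated = any(
            all(qi <= pi for qi, pi in zip(q, p)) and any(qi < pi for qi, pi in zip(q, p))
            for q in points
        )
        if not dominated:
            front.append(p)
    return front

# A and B survive; C = (3.5, 4.5) is dominated (worse than both A and B)
pts = [(1.0, 4.0), (3.0, 2.0), (3.5, 4.5)]
front = pareto_front(pts)  # -> [(1.0, 4.0), (3.0, 2.0)]
```

Production multi-objective codes use faster non-dominated sorting (as in NSGA), but the dominance test itself is exactly this comparison.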
As an example, we want to investigate the trade-off between two (or more) conflicting objectives

in the design of a supersonic aircraft. We might want to simultaneously minimize aerodynamic drag
and sonic boom, without knowing in advance what the trade-off is: how much would the drag increase
for a given reduction in sonic boom (Figure 1.8.3)? In this situation there is no single best design;
there is a set of designs that are the best possible for that combination of the two objectives. In other
words, for these optimal solutions, the only way to improve one objective is to worsen another. One
common approach uses composite weighted functions in conjunction with gradient-based methods
for various weights, such as the one proposed by Jameson et al.11:

Figure 1.8.3 Pareto Front in Aircraft Design [Antoine and Kroo]

        F(x) = w1 F1(x) + w2 F2(x) + ⋯ + wn Fn(x) ,   w1 + w2 + ⋯ + wn = 1
Eq. 1.8.1
The overall gradient used to assess the loss or gain for the design is created by summing the gradient
of each objective multiplied by its respective weight.
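Eq. 1.8.1 can be sketched as a sweep over the weights of a two-objective toy problem; the quadratic objectives below are illustrative stand-ins for drag and sonic boom, not the model used in the reference:

```python
from scipy.optimize import minimize

# Two hypothetical competing objectives of a single design variable x
F1 = lambda x: (x[0] - 1.0) ** 2   # favors x = +1
F2 = lambda x: (x[0] + 1.0) ** 2   # favors x = -1

def weighted_composite(w1):
    """Minimize F = w1*F1 + w2*F2 with w1 + w2 = 1 (Eq. 1.8.1)."""
    w2 = 1.0 - w1
    res = minimize(lambda x: w1 * F1(x) + w2 * F2(x), x0=[0.0])
    return res.x[0]

# Sweeping the weights traces out points on the Pareto front:
# for these quadratics the optimum is x = 2*w1 - 1
trade_off = [weighted_composite(w) for w in (0.0, 0.25, 0.5, 0.75, 1.0)]
```

Each weight vector yields one point of the front, which is why weighted-sum scalarization pairs naturally with the gradient-based optimizers discussed in the text.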
1.8.1.3 Case Study - Multi-Objective (Point) Optimization
(Kenway and Martins, 2016)5, among others, have used multi-point optimization strategies in order
to consider several operating conditions simultaneously. For a more realistic and robust design it is
crucial to take into account more than one operating condition, especially off-design conditions,
which introduce additional multi-objective requirements into the optimization. The single-point
optimization achieved an 8.6 drag count reduction and the shock-wave over the upper surface of the
wing was almost entirely eliminated. Drag divergence curves in this work show the nature of the
single-point optimization: a significant dip in the drag at the design condition, but significantly
deteriorated performance at off-design conditions relative to the baseline.
The multi-point optimization, accounting for 3 design conditions, found that drag at the nominal
operating condition increased by 2.8 counts and produced double shocks on the upper surface of the
wing, as visible in Figure 1.8.4. However, at the sacrifice of performance at the nominal operating
condition, the multi-point design was found to perform substantially better at off-design conditions
over the entire range of Mach numbers.
1.8.1.3.1 Acceleration Technique for Multi-Level Optimization
An acceleration technique that reduces the overall computational cost of the optimization is sought.
Aerodynamic shape optimization is a computationally intensive endeavor, where the majority of the
computational effort is spent in the flow solutions and gradient evaluations. Therefore, many CFD
researchers have tried to develop more efficient flow and adjoint solvers. Commonly used methods,
such as multigrid, pre-conditioning, and variations on Newton-type methods, can improve the
convergence of the solver, thus reducing the overall optimization time. Our flow solver has been
significantly improved over the years to provide efficient and reliable flow solutions. Another area of
improvement is the efficiency of the gradient computation. As mentioned before, the adjoint method
efficiently computes gradients with respect to large numbers of shape design variables. For our
adjoint implementation, the cost of computing the gradient of a single function of interest with
respect to hundreds or even thousands of shape design variables is lower than the cost of one flow
solution.

Figure 1.8.4 High Performance Low Drag for Single and Multiple Design Points (Courtesy of
Kenway & Martins5)
1.8.2 Multi-Objective vs. Multi-Level Optimization
According to [Houssam Abbas] of the University of Pennsylvania, a multi-objective problem doesn't quite
optimize two objectives simultaneously: rather, it treats both objectives as equally important,
and will give you a trade-off curve (so-called Pareto front). At some points of that curve, you are
making a trade-off in favor of objective 1, at others, in favor of objective 2. All points along the curve
are feasible for the same set of constraints, and this set of constraints does not depend on either
objective. A multi-level program is different: you really care about one objective, say F(x), and you
want the optimum of F(x) over a set S which happens to be defined using another optimization (the
lower-level program). For different values of x you get different values of S, but this is not a trade-off
as in the bi-objective case: here you are seeking the optimum solution, and there is exactly one
optimum value (though perhaps many optimizers). Indeed, we are talking about two separate
modeling frameworks. In fact, the two can be combined in a model where, for example, we have
several objectives at the so-called "upper level" of the bi-level program. We consider the following
multi-objective multi-level programming problem6:

Multi-Level Optimization:
        Minimize (over x, y):  F(x, y)
        subject to:  G(x, y) ≤ 0 ,  y ∈ S(x)
where S(x) denotes the set of solutions of the lower-level problem:
        Minimize (over y):  f(x, y)   subject to  g(x, y) ≤ 0
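The nested structure can be sketched by literally nesting two solvers: the lower-level problem is solved afresh at every upper-level trial point. The quadratic functions f and F below are hypothetical choices with a unique lower-level optimizer, picked only so the sketch is well posed:

```python
from scipy.optimize import minimize, minimize_scalar

# Hypothetical lower-level problem: S(x) = argmin_y f(x, y), with f = (y - x)^2 + y^2,
# which has the unique optimizer y*(x) = x / 2
def lower_level(x):
    res = minimize_scalar(lambda y: (y - x) ** 2 + y ** 2)
    return res.x

# Upper-level objective F(x, y) evaluated at the lower-level response y in S(x)
def upper_level(x):
    y = lower_level(x[0])
    return (x[0] - 2.0) ** 2 + y ** 2

# Analytically: minimize (x - 2)^2 + (x/2)^2  ->  x* = 1.6, y* = 0.8
res = minimize(upper_level, x0=[0.0], method="Nelder-Mead")
```

This brute-force nesting is expensive (one inner solve per outer evaluation), which is one motivation for the acceleration techniques and decomposition methods discussed in this chapter.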

[Fathi & Shadaram]7 introduced a Multi-Level, Multi-Objective, and Multi-Point aerodynamic
optimization of an axial compressor blade. Their approach builds an objective function from a
summation of penalty terms that limit violations of the constraints. To reduce the computational
effort, the optimization procedure works on two levels: fast but approximate prediction methods are
used to find a near-optimum geometry at the first level, which is then verified and refined by a more
accurate but expensive Navier–Stokes solver. A genetic algorithm and gradient-based optimization
were used to optimize the parameters of the first and second levels, respectively. The following
matrix best describes this, with columns representing the multiple objectives and rows the multiple
levels:

        [ Obj 1 − Level 1   ⋯   Obj N − Level 1 ]
        [        ⋮          ⋱          ⋮        ]
        [ Obj 1 − Level M   ⋯   Obj N − Level M ]
Eq. 1.8.2
1.8.3 Single vs. Multi-level Optimization
Common for single-level optimization methods is a central optimizer that makes all design decisions.
The two methods presented here are distinguished by the kind of consistency that is maintained
during the optimization. The most common and basic single-level optimization method is the
Multi-Disciplinary Feasible (MDF) formulation, described by [Cramer et al.]8. The method is also called
All-in-One by [Kodiyalam and Sobieszczanski-Sobieski]9. The single-level optimization methods
presented in the previous sections have a central optimizer making all design decisions. Distribution
of the decision making process is enabled using multi-level optimization methods, where a system
optimizer communicates with a number of subspace optimizers. Several multi-level optimization
methods have been presented in the literature. These include some of the most well-known ones,
such as Concurrent subspace optimization (CSSO), originally developed by [Sobieszczanski-
Sobieski]10 at NASA Langley Research Center. The original formulation is inspired by the idea to
optimize one subspace, with its corresponding design variables, at a time, holding the other variables
constant. The method has diverged into different variants, which makes it impossible to present a
unified approach; these variants are investigated in the following sections.
1.8.4 References
1 Dragan Savic, “Single-objective vs. Multi-objective Optimization for Integrated Decision Support”,
Centre for Water Systems, School of Engineering and Computer Science, University of Exeter, UK.
2 Johan Andersson, “A survey of multi-objective optimization in engineering design”, Technical Report
LiTH-IKP-R-1097.
3 Bandyopadhyay, S., Saha, S., “Some Single- and Multi-objective Optimization Techniques”, Chapter 2,
ISBN 978-3-642-35450-5, 2013.
4 Wikipedia.
5 G. Kenway and J. R. R. A. Martins, “Aerodynamic Shape Optimization of the CRM Configuration
Including Buffet-Onset Conditions”, 54th AIAA Aerospace Sciences Meeting, AIAA 2016-1294, CA.
6 Jane J. Ye, “Necessary optimality conditions for multi-objective bi-level programs”, 2010.
7 A. Fathi, A. Shadaram, “Multi-Level Multi-Objective Multi-Point Optimization System for Axial Flow
Compressor 2D Blade Design”, Arabian Journal for Science and Engineering, 2013.
8 Cramer, E. J., Dennis Jr., J. E., Frank, P. D., Lewis, R. M., and Shubin, G. R., “Problem formulation for
multidisciplinary optimization”, SIAM Journal on Optimization, 4(4), 754-776, 1994.
9 Kodiyalam, S., and Sobieszczanski-Sobieski, J., “Multidisciplinary design optimization – some formal
methods, framework requirements, and application to vehicle design”, International Journal of Vehicle
Design, 2001.
10 Sobieszczanski-Sobieski, J., “Optimization by decomposition: a step from hierarchic to non-hierarchic
systems”, 2nd NASA/Air Force Symposium on Recent Advances in Multidisciplinary Analysis and
Optimization, Hampton, Virginia, USA, 1998.
11 Jameson, A., Leoviriyakit, K., and Shankaran, S., “Multi-point Aero-Structural Optimization of Wings
Including Planform Variations”, 45th AIAA Aerospace Sciences Meeting and Exhibit, AIAA 2007-764,
Reno, NV, 8–11 Jan 2007.

1.9 Constraint Handling


Constraint handling in aerodynamic, and indeed any industrial optimization problem, plays a
consequential role in the quality and robustness of an optimized solution within the defined design
space. Geometric parametrization itself poses a constrained optimization problem since, in addition
to minimizing the objective f(x), the design variables must satisfy some geometric constraints.
Constraint-management techniques found in the literature have been classified by [Koziel &
Michalewicz]3 and [Sienz & Innocente]4 as:
➢ strategies that preserve only feasible solutions with no constraint violations: infeasible
solutions are deleted;
➢ strategies that allow feasible and infeasible solutions to co-exist in a population, however
penalty functions penalize the infeasible solutions (constraint based reasoning);
➢ strategies that create feasible solutions only;
➢ strategies that artificially modify solutions to boundary constraints if boundaries are
exceeded; and
➢ strategies that repair/modify infeasible solutions.
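The penalty-function strategy in the second bullet can be sketched as a quadratic exterior penalty that converts the constrained problem into an unconstrained one; the toy objective, constraint, and penalty weight r below are illustrative assumptions:

```python
from scipy.optimize import minimize

# Hypothetical constrained problem: minimize f(x) = x1^2 + x2^2
# subject to g(x) = 1 - x1 - x2 <= 0   (i.e. x1 + x2 >= 1); true optimum x = [0.5, 0.5]
f = lambda x: x[0] ** 2 + x[1] ** 2
g = lambda x: 1.0 - x[0] - x[1]

def penalized(x, r=100.0):
    # Quadratic exterior penalty: only violated constraints (g > 0) are charged,
    # so infeasible candidates co-exist in the search but are pushed toward feasibility
    return f(x) + r * max(0.0, g(x)) ** 2

res = minimize(penalized, x0=[0.0, 0.0], method="Nelder-Mead")
# Exterior penalties approach the constrained optimum from the infeasible side
# as r grows; with finite r the solution retains a small constraint violation.
```

This direct enforcement of penalties on the objective is exactly the transformation discussed next, and it explains why performance degrades as the number of constraints grows: each added penalty term further shrinks the well-scored region of the search space.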
Most commonly, optimizations apply weighted penalties to the objective function if the constraint(s)
are violated (Eq. 1.8.1). Penalty functions are often deemed to ease the optimization process, and
bring the advantage of transforming constrained problems into unconstrained ones by enforcing the
penalties directly on the objective function. With this method, Pareto-optimal solutions with good
diversity and reliable convergence can be obtained easily by many algorithms when the number of
constraints is small (fewer than 20). It becomes more difficult to reach Pareto-optimal solutions
efficiently as the number of constraints increases, and the number of analyses of objectives and
constraints quickly becomes prohibitively expensive for many applications.

Figure 1.9.1 Concept of using Parallel Evaluation Strategy of Feasible and Infeasible Solutions
to Guide Optimization Direction in a GA

This is because the selection pressure decreases due to the reduced region in

3 S. Koziel and Z. Michalewicz. “Evolutionary Algorithms, Homomorphous Mappings, and Constrained
Parameter Optimization”. Evolutionary Computation, 7(1):19-44, 1999.
4 J. Sienz and M.S. Innocente. “Particle Swarm Optimisation: Fundamental Study and its Application to
Optimisation and to Jetty Scheduling Problems”. Trends in Engineering Computational Technology, 2008.

which feasible solutions exist. (Kato et al., 2015)5 suggest that in certain circumstances Pareto-
optimal solutions may exist in between regions of solution feasibility and infeasibility. This is
illustrated in Figure 1.9.1, where feasible and infeasible solutions are evaluated in parallel to guide
the optimization search direction towards feasible design spaces. This is intuitively true for single-
discipline aerodynamic optimization problems, where small modifications to design variables can
often largely impact the performance, rendering designs infeasible. Algorithmic understanding of
infeasible solutions can help improve feasible solutions through algorithm learning/training and
constraint-based reasoning. (Robinson et al., 2006)6, comparing the performance of alternative
trust-region constraint-handling methods, showed that reapplying knowledge of constraint
information to a variable-complexity wing design optimization problem reduced high-fidelity
function calls by 58%, and additionally compared performance with alternative constraint-
management techniques.
Elsewhere, (Gemma and Mastroddi, 2015)7 demonstrated that for multi-disciplinary, multi-objective
aircraft optimizations, the objective space of feasible and infeasible design candidates is likely to
share no such definitive boundary. With the adoption of flutter constraints, structural constraints,
and mission constraints, solutions defined as infeasible under certain conditions would otherwise be
accepted, hence forming complex Pareto fronts. Interdisciplinary considerations such as this help to
develop and balance conflicting constraints. For example, structural properties which may be
considered feasible, but are perhaps heavier than necessary, will inflict aero-elastic instabilities at
lower frequencies. In the aerospace industry alone there are several devoted open-source
aerodynamic optimization algorithms with built-in constraint-handling capability. Some studies
have also adopted MATLAB's optimization toolbox for successful optimization constraint
management.

5 T. Kato, K. Shimoyama, and S. Obayashi. “Evolutionary Algorithm with Parallel Evaluation Strategy of
Feasible and Infeasible Solutions Considering Total Constraint Violation”. IEEE, pages 986-993, 2015.
6 T.D. Robinson, K.E. Willcox, M.S. Eldred, and R. Haimes. “Multi-fidelity Optimization for Variable-
Complexity Design”. 11th AIAA/ISSMO Multidisciplinary Analysis and Optimization Conference, pages
1-18, Portsmouth, VA, 2006. AIAA 2006-7114.
7 S. Gemma and F. Mastroddi. “Multi-Disciplinary and Multi-Objective Optimization of an Unconventional
Aircraft Concept”. 16th AIAA/ISSMO Multidisciplinary Analysis and Optimization Conference, AIAA
2015-2327, pages 1-20, Dallas, TX, 2015.
