US - TMC - 05 - Optimization 2022
Mathematical Background
An optimization or mathematical programming problem can generally be stated as:
Find $x$, which minimizes or maximizes $f(x)$ subject to

$$d_i(x) \le a_i \quad i = 1, 2, \ldots, m$$

$$e_i(x) = b_i \quad i = 1, 2, \ldots, p$$

where the $d_i(x)$ are inequality constraints and the $e_i(x)$ are equality constraints.
Optimization problems can be classified on the basis of the form of f(x):
If f(x) and the constraints are linear, we have linear programming.
If f(x) is quadratic and the constraints are linear, we have quadratic programming.
If f(x) is not linear or quadratic and/or the constraints are nonlinear, we have nonlinear programming.
PART A
ONE-DIMENSIONAL
UNCONSTRAINED OPTIMIZATION
How do we distinguish a global optimum from a local one?
By graphing to gain insight into the behavior of the function.
Golden-Section Search
A unimodal function has a single maximum or minimum in a given interval. For a unimodal function:
- First pick two points [xl, xu] that bracket the extremum.
- Pick a third point within this interval to determine whether a maximum has occurred.
- Then pick a fourth point to determine whether the maximum has occurred within the first three or the last three points.
The two subinterval lengths must satisfy two conditions:

$$\ell_0 = \ell_1 + \ell_2$$

$$\frac{\ell_1}{\ell_0} = \frac{\ell_2}{\ell_1}$$

The first condition specifies that the sum of the two sublengths $\ell_1$ and $\ell_2$ must equal the original interval length. The second says that the ratio of the lengths must be equal. Substituting the first condition into the second and defining $R = \ell_2/\ell_1$ gives

$$\frac{\ell_1}{\ell_1 + \ell_2} = \frac{\ell_2}{\ell_1} \;\Rightarrow\; \frac{1}{1 + R} = R \;\Rightarrow\; R^2 + R - 1 = 0$$

whose positive root is

$$R = \frac{-1 + \sqrt{1 - 4(-1)}}{2} = \frac{\sqrt{5} - 1}{2} = 0.61803\ldots$$

This value is known as the Golden Ratio.
The method starts with two initial guesses, $x_l$ and $x_u$, that bracket one local extremum of $f(x)$. Next, two interior points $x_1$ and $x_2$ are chosen according to the golden ratio:

$$d = \frac{\sqrt{5} - 1}{2}(x_u - x_l)$$

$$x_1 = x_l + d$$

$$x_2 = x_u - d$$
Example #1
Use the golden-section search to find the maximum of
$$f(x) = 2\sin x - \frac{x^2}{10}$$
within the interval $x_l = 0$ and $x_u = 4$.

First, the golden ratio is used to create the two interior points:
$$d = \frac{\sqrt{5} - 1}{2}(4 - 0) = 2.472$$
$$x_1 = 0 + 2.472 = 2.472 \qquad x_2 = 4 - 2.472 = 1.528$$

Because $f(x_2) > f(x_1)$, the maximum is in the interval defined by $x_l$, $x_2$, and $x_1$. Thus, for the new interval, the lower bound remains $x_l = 0$, and $x_1$ becomes the upper bound, that is, $x_u = 2.472$. In addition, the former $x_2$ value becomes the new $x_1$, that is, $x_1 = 1.528$. Further, we do not have to recalculate $f(x_1)$ because it was determined on the previous iteration as $f(1.528) = 1.765$.

The function evaluation at the new $x_2$ is $f(0.944) = 1.531$. Since this value is less than the function value at $x_1$, the maximum is in the interval prescribed by $x_2$, $x_1$, and $x_u$.
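A minimal Python sketch of the golden-section search for a maximum, applied to $f(x) = 2\sin x - x^2/10$ (function and parameter names are illustrative, not from the lecture):

```python
import math

def golden_section_max(f, xl, xu, tol=1e-5, max_iter=100):
    """Golden-section search for the maximum of a unimodal f on [xl, xu]."""
    R = (math.sqrt(5) - 1) / 2              # golden ratio, ~0.61803
    d = R * (xu - xl)
    x1, x2 = xl + d, xu - d
    f1, f2 = f(x1), f(x2)
    for _ in range(max_iter):
        if f1 > f2:                         # maximum lies in [x2, xu]
            xl, x2, f2 = x2, x1, f1         # old x1 becomes the new x2 (reuse f1)
            x1 = xl + R * (xu - xl)
            f1 = f(x1)
        else:                               # maximum lies in [xl, x1]
            xu, x1, f1 = x1, x2, f2         # old x2 becomes the new x1 (reuse f2)
            x2 = xu - R * (xu - xl)
            f2 = f(x2)
        if (xu - xl) < tol:
            break
    return (x1, f1) if f1 > f2 else (x2, f2)

f = lambda x: 2 * math.sin(x) - x**2 / 10
print(golden_section_max(f, 0.0, 4.0))      # approaches x ~ 1.4276, f ~ 1.7757
```

Note that each iteration reuses one of the previous function evaluations, so only one new evaluation of f is required per step.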
Parabolic Interpolation
Parabolic interpolation takes advantage of the fact that a second-order polynomial often provides a good approximation to the shape of f(x) near an optimum. The vertex of the parabola through three points estimates the optimum:

$$x_3 = \frac{f(x_0)(x_1^2 - x_2^2) + f(x_1)(x_2^2 - x_0^2) + f(x_2)(x_0^2 - x_1^2)}{2f(x_0)(x_1 - x_2) + 2f(x_1)(x_2 - x_0) + 2f(x_2)(x_0 - x_1)}$$

where $x_0$, $x_1$, and $x_2$ are the initial guesses, and $x_3$ is the value of $x$ that corresponds to the maximum value of the parabolic fit to the guesses.
Example #2
Use parabolic interpolation to find the maximum of
$$f(x) = 2\sin x - \frac{x^2}{10}$$
with initial guesses $x_0 = 0$, $x_1 = 1$, and $x_2 = 4$.
Substituting into the formula yields $x_3 = 1.5055$, which has a function value of $f(1.5055) = 1.7691$. For the next iteration, the process is repeated with $x_3$ replacing one of the original guesses.
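A minimal Python sketch of iterated parabolic interpolation; the rule for discarding an old guess after each step is an assumption (the endpoint on the far side of the new estimate is dropped):

```python
import math

def parabolic_step(f, x0, x1, x2):
    """Vertex of the parabola through (x0, f0), (x1, f1), (x2, f2)."""
    f0, f1, f2 = f(x0), f(x1), f(x2)
    num = f0*(x1**2 - x2**2) + f1*(x2**2 - x0**2) + f2*(x0**2 - x1**2)
    den = 2*f0*(x1 - x2) + 2*f1*(x2 - x0) + 2*f2*(x0 - x1)
    return num / den

f = lambda x: 2 * math.sin(x) - x**2 / 10

x0, x1, x2 = 0.0, 1.0, 4.0
for _ in range(5):
    x3 = parabolic_step(f, x0, x1, x2)      # first step gives x3 = 1.5055
    if x3 > x1:
        x0, x1 = x1, x3                     # drop the lower endpoint
    else:
        x2, x1 = x1, x3                     # drop the upper endpoint
print(x1, f(x1))                            # converges toward x ~ 1.4276
```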
Newton's Method
An open method similar to Newton-Raphson root finding can be used to locate an optimum of $f(x)$ by finding the root of $f'(x)$:

$$x_{i+1} = x_i - \frac{f'(x_i)}{f''(x_i)}$$
Example #3
Use Newton's method to find the maximum of
$$f(x) = 2\sin x - \frac{x^2}{10}$$
with an initial guess of $x_0 = 2.5$.
Thus, within four iterations, the result converges rapidly on the true value.
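A minimal sketch of Newton's method for this example, with the analytic first and second derivatives entered by hand:

```python
import math

f   = lambda x: 2 * math.sin(x) - x**2 / 10
df  = lambda x: 2 * math.cos(x) - x / 5     # f'(x)
d2f = lambda x: -2 * math.sin(x) - 1 / 5    # f''(x)

x = 2.5                                     # initial guess
for i in range(4):
    x = x - df(x) / d2f(x)                  # Newton step on f'(x) = 0
    print(i + 1, x, f(x))
# x converges rapidly to ~1.4276, where f ~ 1.7757
```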
PART B
MULTIDIMENSIONAL
UNCONSTRAINED OPTIMIZATION
Techniques to find the minimum or maximum of a function of several variables are described.
DIRECT METHODS
Random Search
Based on evaluating the function at randomly selected values of the independent variables. (A simple implementation is sketched below.)
Advantages
Works even for discontinuous and nondifferentiable functions.
Always finds the global optimum rather than a local optimum.
Disadvantages
As the number of independent variables grows, the task can become onerous.
Not efficient; it does not account for the behavior of the underlying function.
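A minimal Python sketch of a random search for a maximum over a rectangular domain (the bounds, sample count, and test function are illustrative):

```python
import random

def random_search(f, xmin, xmax, ymin, ymax, n=10000, seed=1):
    """Evaluate f at n uniformly random points and keep the best (maximum)."""
    rng = random.Random(seed)
    best = (None, None, float("-inf"))
    for _ in range(n):
        x = xmin + (xmax - xmin) * rng.random()
        y = ymin + (ymax - ymin) * rng.random()
        fxy = f(x, y)
        if fxy > best[2]:
            best = (x, y, fxy)
    return best

f = lambda x, y: 2*x*y + 2*x - x**2 - 2*y**2   # function from Example #5
print(random_search(f, -2.0, 4.0, -2.0, 4.0))  # approaches (2, 1), f = 2
```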
Univariate and Pattern Searches
More efficient than random search, and still does not require derivative evaluation. The basic univariate approach changes one variable at a time while the others are held constant, so the problem reduces to a sequence of one-dimensional searches.
Pattern directions can be used to shoot directly along the ridge toward the maximum.
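A minimal sketch of a univariate search, reusing the golden_section_max routine from the sketch above for each one-dimensional maximization (the search bounds and cycle count are illustrative):

```python
def univariate_search(f, x, y, lo=-4.0, hi=4.0, cycles=10):
    """Alternate 1-D maximizations in x and y, holding the other variable fixed."""
    for _ in range(cycles):
        x, _ = golden_section_max(lambda xx: f(xx, y), lo, hi)  # optimize x
        y, _ = golden_section_max(lambda yy: f(x, yy), lo, hi)  # optimize y
    return x, y, f(x, y)

f = lambda x, y: 2*x*y + 2*x - x**2 - 2*y**2
print(univariate_search(f, -1.0, 1.0))   # converges toward (2, 1), f = 2
```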
GRADIENT METHODS
Gradients and Hessians
The Gradient
If f(x, y) is a two-dimensional function, the gradient vector tells us:
What direction is the steepest ascent?
How much will we gain by taking that step?
$$\nabla f = \frac{\partial f}{\partial x}\mathbf{i} + \frac{\partial f}{\partial y}\mathbf{j}$$

read as "del f." The gradient represents the directional derivative of f(x, y) at the point x = a and y = b.
For n dimensions:

$$\nabla f(\mathbf{x}) = \begin{bmatrix} \dfrac{\partial f}{\partial x_1}(\mathbf{x}) \\[4pt] \dfrac{\partial f}{\partial x_2}(\mathbf{x}) \\ \vdots \\ \dfrac{\partial f}{\partial x_n}(\mathbf{x}) \end{bmatrix}$$
Example #4
Employ the gradient to evaluate the steepest-ascent direction for the function
$$f(x, y) = xy^2$$
at the point (2, 2). Assume that positive x is pointed east and positive y is pointed north.
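A minimal sketch evaluating this gradient analytically and converting it to a direction measured from east (the x axis):

```python
import math

# partial derivatives of f(x, y) = x * y**2
dfdx = lambda x, y: y**2         # df/dx
dfdy = lambda x, y: 2 * x * y    # df/dy

x, y = 2.0, 2.0
gx, gy = dfdx(x, y), dfdy(x, y)              # gradient = (4, 8)
theta = math.degrees(math.atan2(gy, gx))
print(gx, gy, theta)                         # steepest ascent ~63.4 deg north of east
```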
The Hessian
Assuming that the partial derivatives are continuous at and near the point being evaluated, the determinant of the Hessian can be used to classify an extremum:
$$|H| = \frac{\partial^2 f}{\partial x^2}\frac{\partial^2 f}{\partial y^2} - \left(\frac{\partial^2 f}{\partial x \, \partial y}\right)^2$$

If $|H| > 0$ and $\partial^2 f/\partial x^2 > 0$, then $f(x, y)$ has a local minimum.
If $|H| > 0$ and $\partial^2 f/\partial x^2 < 0$, then $f(x, y)$ has a local maximum.
If $|H| < 0$, then $f(x, y)$ has a saddle point.
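A minimal sketch applying this test to the function from Example #5, f(x, y) = 2xy + 2x − x² − 2y², whose second partials are constant:

```python
# second partial derivatives of f(x, y) = 2xy + 2x - x**2 - 2y**2
fxx = -2.0       # d2f/dx2
fyy = -4.0       # d2f/dy2
fxy =  2.0       # d2f/dxdy

H = fxx * fyy - fxy**2           # |H| = (-2)(-4) - 2**2 = 4
if H > 0 and fxx > 0:
    print("local minimum")
elif H > 0 and fxx < 0:
    print("local maximum")       # fires here: |H| > 0 and fxx < 0
else:
    print("saddle point" if H < 0 else "inconclusive")
```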
Finite-Difference Approximations
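When analytic partial derivatives are unavailable, the gradient can be approximated with centered finite differences. A minimal sketch (the step size delta is an assumption):

```python
def grad_fd(f, x, y, delta=1e-6):
    """Centered finite-difference approximation of the gradient of f(x, y)."""
    dfdx = (f(x + delta, y) - f(x - delta, y)) / (2 * delta)
    dfdy = (f(x, y + delta) - f(x, y - delta)) / (2 * delta)
    return dfdx, dfdy

f = lambda x, y: x * y**2
print(grad_fd(f, 2.0, 2.0))      # ~(4.0, 8.0), matching the analytic gradient
```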
The Steepest Ascent Method
Start at an initial point (x0, y0) and determine the direction of steepest ascent, that is, the gradient. Then search along the direction of the gradient, h0, until we find the maximum. The process is then repeated.
The problem has two parts:
Determining the "best direction" and
Determining the "best value" along that search direction.
The steepest ascent method uses the gradient approach as its choice for the "best" direction.
To transform a function of x and y into a function of h along the gradient direction:

$$x = x_0 + \frac{\partial f}{\partial x} h$$

$$y = y_0 + \frac{\partial f}{\partial y} h$$
For example, if $x_0 = 1$ and $y_0 = 2$ and $\nabla f = 3\mathbf{i} + 4\mathbf{j}$, then

$$x = 1 + 3h$$

$$y = 2 + 4h$$
Example #5
Suppose we have the following two-dimensional function:
$$f(x, y) = 2xy + 2x - x^2 - 2y^2$$
Develop a one-dimensional version of this equation along the gradient direction at the point x = −1 and y = 1.
Setting the partial derivatives $\partial f/\partial x = 2y + 2 - 2x$ and $\partial f/\partial y = 2x - 4y$ to zero gives a pair of equations that can be solved for the optimum, x = 2 and y = 1. The second partial derivatives can also be determined and evaluated at the optimum.
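A minimal sketch of the steepest ascent method for this function, reusing golden_section_max from the earlier sketch for the 1-D search along h (the h search interval is an assumption):

```python
f    = lambda x, y: 2*x*y + 2*x - x**2 - 2*y**2
dfdx = lambda x, y: 2*y + 2 - 2*x
dfdy = lambda x, y: 2*x - 4*y

x, y = -1.0, 1.0                            # starting point from Example #5
for _ in range(10):
    gx, gy = dfdx(x, y), dfdy(x, y)         # steepest-ascent direction
    g = lambda h: f(x + gx*h, y + gy*h)     # 1-D function of h along the gradient
    h, _ = golden_section_max(g, 0.0, 2.0)  # line search for the best step
    x, y = x + gx*h, y + gy*h
print(x, y, f(x, y))                        # converges toward (2, 1), f = 2
```

On the first iteration the gradient at (−1, 1) is (6, −6), so the 1-D function is g(h) = f(−1 + 6h, 1 − 6h), the line search gives h = 0.2, and the estimate moves to (0.2, −0.2).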
PART C
CONSTRAINED OPTIMIZATION
LINEAR PROGRAMMING
An optimization approach that deals with meeting a desired objective, such as maximizing profit or minimizing cost, in the presence of constraints such as limited resources.
Mathematical functions representing both the objective and the constraints are linear.
Standard Form
A basic linear programming problem consists of two major parts:
The objective function
A set of constraints
For a maximization problem, the objective function is generally expressed as

$$\text{Maximize } Z = c_1 x_1 + c_2 x_2 + \cdots + c_n x_n$$
The constraints can be represented generally as

$$a_{i1} x_1 + a_{i2} x_2 + \cdots + a_{in} x_n \le b_i$$
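A minimal sketch solving a small problem in this standard form with scipy.optimize.linprog; the coefficients are illustrative, not from the lecture, and the objective is negated because linprog minimizes:

```python
from scipy.optimize import linprog

# illustrative problem: maximize Z = 150*x1 + 175*x2
# subject to 7*x1 + 11*x2 <= 77, 10*x1 + 8*x2 <= 80, x1 >= 0, x2 >= 0
c = [-150, -175]                 # negated objective coefficients
A_ub = [[7, 11], [10, 8]]
b_ub = [77, 80]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, -res.fun)           # optimum ~ (4.89, 3.89) with Z ~ 1414
```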
Possible outcomes that can generally be obtained in a linear programming problem:
1. Unique solution. The maximum of the objective function intersects a single point.
2. Alternate solutions. The problem has an infinite number of optima corresponding to a line segment.
3. No feasible solution.
4. Unbounded problems. The problem is under-constrained and therefore open-ended.
Any Questions?
hvusynh@hcmiu.edu.vn