
Module 5: CLASSICAL OPTIMIZATION OVERVIEW

Introduction to Optimization
The goal of any decision is to minimize the effort required or to maximize the desired benefit.

Optimization plays an important role in the design of engineering systems, particularly in design engineering, manufacturing engineering, and systems engineering.
Engineering Applications of Optimization
Design of aerospace structures for minimum weight
Optimal trajectories of space vehicles
Minimum weight design of structures for random loading
Optimal design of linkages, cams, gears, machine tools and other mechanical components
Selection of machining conditions in metal cutting processes for minimum production cost
Design of material handling equipment like conveyors, trucks and cranes for minimum cost
Design of pumps, turbines and heat transfer equipment for maximum efficiency
Optimum design of control systems
Optimal production planning, controlling and scheduling
Analysis of statistical data and building empirical models from experimental results to
obtain the most accurate representation of the physical phenomenon
Planning of maintenance and replacement of equipment to reduce operation costs
Allocation of resources or services among several activities to maximize the benefit
General Statement of Optimization Problem
Classification of Optimization
Classification based on:
• The existence of constraints: constrained and unconstrained problems
• The nature of the design variables: parameter (static) and trajectory (dynamic) optimization problems
• The physical structure of the problem: optimal-control and non-optimal-control problems
• The nature of the equations involved: linear, nonlinear, geometric, and quadratic programming problems
• The permissible values of the design variables: integer and real-valued programming problems
• The deterministic nature of the variables involved: deterministic and stochastic programming problems
• The separability of the functions: separable and non-separable programming problems
• The number of objective functions: single- and multi-objective programming problems
Classical Optimization Techniques
• The classical optimization techniques are useful for finding the optimum solution, i.e., the unconstrained maxima or minima, of continuous and differentiable functions.

• These are analytical methods that make use of differential calculus to locate the optimum solution.

• The classical methods have limited scope in practical applications, as practical problems often involve objective functions that are not continuous and/or differentiable.

• Yet the study of these classical techniques forms the basis for developing most of the numerical techniques that have evolved into advanced methods better suited to today's practical problems.
Classical Optimization Techniques
• These methods assume that the function is twice differentiable with respect to the design variables and that the derivatives are continuous.

• Three main types of problems can be handled by the classical optimization techniques:
– single-variable functions
– multivariable functions with no constraints
– multivariable functions with both equality and inequality constraints

• In problems with equality constraints, the Lagrange multiplier method can be used.

• If the problem has inequality constraints, the Kuhn-Tucker conditions can be used to identify the optimum solution.
Classical Optimization Techniques
• Optimization techniques:
• Classical – analytical methods based on differential calculus
• Non-classical – based on numerical processes
• Limitation of classical techniques:
• Functions must be continuous and differentiable, at least to the first order.
• Not applicable to non-differentiable or discontinuous functions.
Classical Optimization Techniques
• The feasible region contains an infinite number of points, so identifying the optimal solution is not an easy task.
• Certain conditions must be fulfilled before a solution can be declared optimal.
• We therefore need to study the necessary and sufficient conditions for optimality of a nonlinear programming problem.
Classical Optimization Techniques - Single Variable Optimization Problem

Goal: Identification of the optimal point is the primary concern.

Problem: Let f(x) be a continuous function defined on the interval [a, b] ⊂ R.
The decision x* ∈ [a, b] is to be found such that f(x*) is a maximum or a minimum.
Single Variable Optimization Problem - Terminology

Relative or Local Minima


• A function of one variable f(x) has a relative or local minimum at x = x* if f(x*) ≤ f(x* + h) for all sufficiently small positive and negative values of h.

Relative or Local Maxima

• A point x* is called a relative or local maximum if f(x*) ≥ f(x* + h) for all sufficiently small positive and negative values (close to zero) of h.

(Figure: a curve annotated with its local minima, local maxima, and global minimum.)
Single Variable Optimization Problem - Terminology
Global or Absolute Minima
• A function of one variable f (x) is said to have a global or absolute minimum at x = x*
if f (x*) ≤ f (x) for all x, and not just for all x close to x*, in the domain over which f
(x) is defined.
Global or Absolute Maxima
• Similarly, a point x* will be a global or absolute maximum of f (x) if f(x*) ≥ f (x) for
all x in the domain.
Single Variable Optimization Problem - Terminology

Stationary Points
• A point x* is a stationary point of f(x) if f′(x*) = 0. A stationary point may be a relative minimum, a relative maximum, or a point of inflection.
Single Variable Optimization - Problem
• A single-variable optimization problem is one in which the value x = x* is to be found in the interval [a, b] such that x* minimizes f(x).
• For example, determine the maximum and minimum values of the function
f(x) = 3x⁴ − 4x³ − 24x² + 48x + 15
• Two theorems provide the necessary and sufficient conditions for the relative
minimum of a function of a single variable
• Necessary condition
• Sufficient condition
Single Variable Optimization - Solution

Necessary condition (to identify a stationary point of a given nonlinear function):

If a function f(x) is defined in the interval a ≤ x ≤ b and has a stationary point (relative minimum, relative maximum, or point of inflection) at x = x*, where a < x* < b, and if the first derivative df(x)/dx = f′(x) exists as a finite number at x = x*, then f′(x*) = 0.

The theorem does not say that the function necessarily has a minimum or maximum at every point where the derivative is zero. Hence, a sufficient condition is needed.


Single Variable Optimization - Solution
Sufficient condition
Let f′(x*) = f″(x*) = … = f^(n−1)(x*) = 0, but f^(n)(x*) ≠ 0.
Then f(x*) is
i. a local minimum value of f(x) if f^(n)(x*) > 0 and n is even;
ii. a local maximum value of f(x) if f^(n)(x*) < 0 and n is even;
iii. neither a minimum nor a maximum if n is odd; in this case x* is a point of inflection.
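This test is mechanical enough to script. Below is a minimal sympy sketch (the helper name classify is our own, and sympy is assumed to be installed) that differentiates until it finds the first non-vanishing derivative and applies the rule above to the polynomial used in Example 1:

```python
# Sketch: higher-order derivative test for a stationary point of f(x).
import sympy as sp

x = sp.symbols('x')

def classify(f, x0, max_order=8):
    """Classify stationary point x0 via the first non-zero derivative at x0."""
    g = sp.diff(f, x)                      # first derivative
    for n in range(2, max_order + 1):
        g = sp.diff(g, x)                  # g is now the n-th derivative
        val = g.subs(x, x0)
        if val != 0:
            if n % 2 == 1:
                return 'point of inflection'
            return 'local minimum' if val > 0 else 'local maximum'
    return 'inconclusive'

f = 3*x**4 - 4*x**3 - 24*x**2 + 48*x + 15  # the function from Example 1
for x0 in sp.solve(sp.diff(f, x), x):      # stationary points: -2, 1, 2
    print(x0, classify(f, x0))
```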
Single Variable Optimization – Example 1
Determine the maximum and minimum values of the function:
f(x) = 3x⁴ − 4x³ − 24x² + 48x + 15

Solution:
To find the stationary points, apply the necessary condition, i.e., equate the first derivative to zero: f′(x) = 0.

f′(x) = 12x³ − 12x² − 48x + 48
      = 12(x³ − x² − 4x + 4)
      = 12(x − 1)(x² − 4)
      = 12(x − 1)(x + 2)(x − 2)

Stationary points: x = 1, x = 2, and x = −2
Single Variable Optimization – Example 1

From the graph we could identify the relative minima, global minimum, relative maxima, and global maximum. But to find them analytically, we need the second derivative.

f′(x) = 12(x³ − x² − 4x + 4)
f″(x) = 36x² − 24x − 48

at x = 1: f″(1) = −36 < 0 ….. the function has a relative maximum
at x = 2: f″(2) = 48 > 0 ….. the function has a relative minimum
at x = −2: f″(−2) = 144 > 0 ….. the function has a relative minimum

Conclusion:
at x = 1, f(x) has a relative maximum, and the value is f(1) = 38
at x = 2, f(x) has a relative minimum, and the value is f(2) = 31
at x = −2, f(x) has a relative minimum, and the value is f(−2) = −97
Single Variable Optimization – Example 2

Determine the stationary points of the function f(x) = 2x⁶ − 6x⁴ + 6x² + 10.

Solution:
f(x) = 2x⁶ − 6x⁴ + 6x² + 10
f′(x) = 12x⁵ − 24x³ + 12x = 0
      = 12x(x² − 1)² = 0
      = 12x(x + 1)²(x − 1)² = 0

Stationary points: x = 0, x = +1, x = −1

f″(x) = 60x⁴ − 72x² + 12
• at x = 0: f″(0) = 12 > 0 (relative minimum)
• at x = +1: f″(1) = 0
• at x = −1: f″(−1) = 0

We cannot draw a conclusion for x = +1 and x = −1, since f″(x) = 0 there; we need the third derivative.

Single Variable Optimization – Example 2
f‴(x) = 240x³ − 144x
• at x = +1: f‴(1) = 96 > 0
• at x = −1: f‴(−1) = −96 < 0

Here n = 3: the order of the first non-vanishing derivative is ODD and its values are non-zero, so these points are neither minima nor maxima.
x = +1 and x = −1 are points of inflection.
Multi Variable Optimization – Unconstrained NLP

Generalized unconstrained NLPP:

Optimize f(X)
where X = (x1, x2, x3, x4, …, xn) ∈ Rⁿ

We need to investigate the necessary and sufficient conditions for the minimum and maximum of an unconstrained function of several variables.
Multi Variable Optimization – Unconstrained NLP

Necessary Condition (first-order condition)

If f(x1, x2, x3, x4, …, xn) has an extreme point (maximum or minimum) at X = X* and if the first-order partial derivatives of f(X) exist at X*, then

∂f(X*)/∂x1 = ∂f(X*)/∂x2 = … = ∂f(X*)/∂xn = 0
Multi Variable Optimization – Unconstrained NLP

Sufficient Condition (second-order condition)

1. At the extreme point, if the matrix of second partial derivatives (the Hessian matrix) is positive definite, then X* is a relative minimum (the function is strictly convex there).

2. At the extreme point, if the Hessian matrix is negative definite, then X* is a relative maximum (the function is strictly concave there).

3. At the extreme point, if the Hessian matrix is indefinite (it has both positive and negative eigenvalues), then X* is a saddle point. If the Hessian is only semidefinite, the second-order test is inconclusive.
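For a symmetric Hessian, a convenient numerical check of these three cases is the sign pattern of its eigenvalues. A minimal numpy sketch (the helper name classify_hessian and the tolerance are our own choices):

```python
# Sketch: classify a symmetric Hessian matrix by the signs of its eigenvalues.
import numpy as np

def classify_hessian(H, tol=1e-10):
    w = np.linalg.eigvalsh(H)      # eigenvalues of a symmetric matrix
    if np.all(w > tol):
        return 'positive definite -> relative minimum'
    if np.all(w < -tol):
        return 'negative definite -> relative maximum'
    if np.any(w > tol) and np.any(w < -tol):
        return 'indefinite -> saddle point'
    return 'semidefinite -> test inconclusive'

# Hessian of the two-variable example further below at the point (0, -8/3):
H = np.array([[4.0, 0.0],
              [0.0, -8.0]])
print(classify_hessian(H))         # eigenvalues -8, 4 -> saddle point
```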
Multi Variable Optimization – Unconstrained NLP
Hessian matrix

• The Hessian matrix is the square matrix of second-order partial derivatives of a scalar-valued function f(x1, x2, x3, …, xn):

H_ij = ∂²f / (∂x_i ∂x_j)

        | ∂²f/∂x1²     ∂²f/∂x1∂x2   …   ∂²f/∂x1∂xn |
  H  =  | ∂²f/∂x2∂x1   ∂²f/∂x2²     …   ∂²f/∂x2∂xn |
        | ⋮            ⋮                ⋮           |
        | ∂²f/∂xn∂x1   ∂²f/∂xn∂x2   …   ∂²f/∂xn²   |

• If the second-order partial derivatives of f are all continuous, then the Hessian matrix is a symmetric matrix, i.e., H_ij = H_ji.
Multi Variable Optimization – Unconstrained NLP
For example, generate the Hessian matrix of
f(x1, x2, x3) = (x1 − x2)² + 2x3²

First partial derivatives:
∂f/∂x1 = 2(x1 − x2)
∂f/∂x2 = 2(x1 − x2)(−1) = −2(x1 − x2)
∂f/∂x3 = 4x3

Second partial derivatives:
∂²f/∂x1² = 2,    ∂²f/∂x1∂x2 = −2,  ∂²f/∂x1∂x3 = 0
∂²f/∂x2∂x1 = −2, ∂²f/∂x2² = 2,     ∂²f/∂x2∂x3 = 0
∂²f/∂x3∂x1 = 0,  ∂²f/∂x3∂x2 = 0,   ∂²f/∂x3² = 4

        |  2  −2  0 |
  H  =  | −2   2  0 |
        |  0   0  4 |
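The same matrix can be generated mechanically; a short sketch using sympy's built-in hessian helper (sympy assumed available):

```python
# Sketch: reproduce the Hessian of f(x1,x2,x3) = (x1 - x2)^2 + 2*x3^2.
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
f = (x1 - x2)**2 + 2*x3**2

H = sp.hessian(f, (x1, x2, x3))
print(H)   # Matrix([[2, -2, 0], [-2, 2, 0], [0, 0, 4]])
```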
Working Rule to Find the Extreme Points of Functions of n-Variables
Let us consider u = f(X1, X2, X3, …, Xn) as a function of X1, X2, X3, …, Xn.
Let H be the Hessian matrix of u evaluated at a stationary point X*. Its leading principal minors are defined as

H1 = ∂²u/∂x1²,  H2 = the determinant of the top-left 2×2 block of H,  …,  Hn = det H.

Hence, the following cases will arise:
• If H1 > 0, H2 > 0, …, Hn > 0 (all leading minors positive), H is positive definite and X* is a relative minimum.
• If H1 < 0, H2 > 0, H3 < 0, … (the leading minors alternate in sign, starting negative), H is negative definite and X* is a relative maximum.
• If the non-zero minors follow neither pattern, H is indefinite and X* is a saddle point; if some leading minors are zero, the test is inconclusive.
Multi Variable Optimization – Unconstrained NLP : Example 1
Find the extreme points of the function
f(x1, x2) = x1³ + x2³ + 2x1² + 4x2² + 6

Solution:
The necessary conditions for the existence of an extreme point are:

∂f/∂x1 = 3x1² + 4x1 = x1(3x1 + 4) = 0
∂f/∂x2 = 3x2² + 8x2 = x2(3x2 + 8) = 0

These equations are satisfied at the points: (0, 0), (0, −8/3), (−4/3, 0), and (−4/3, −8/3)
Multi Variable Optimization – Unconstrained NLP : Example 1
Sufficiency conditions (Hessian matrix):

∂f/∂x1 = 3x1² + 4x1;  ∂f/∂x2 = 3x2² + 8x2

∂²f/∂x1² = 6x1 + 4,  ∂²f/∂x2² = 6x2 + 8,  ∂²f/∂x1∂x2 = ∂²f/∂x2∂x1 = 0

Evaluated at a stationary point (x1*, x2*):

        | 6x1 + 4     0     |
  H  =  |    0     6x2 + 8  |
Multi Variable Optimization – Unconstrained NLP : Example 1
The leading principal minors of H are H1 = 6x1 + 4 and H2 = det H = (6x1 + 4)(6x2 + 8). Evaluating them at each stationary point:

Extreme point X | H1 | H2  | Nature of H       | Nature of X      | f(X)
(0, 0)          | +4 | +32 | Positive definite | Relative minimum | 6
(0, −8/3)       | +4 | −32 | Indefinite        | Saddle point     | 418/27
(−4/3, 0)       | −4 | −32 | Indefinite        | Saddle point     | 194/27
(−4/3, −8/3)    | −4 | +32 | Negative definite | Relative maximum | 50/3
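The whole classification can be reproduced programmatically. A sketch with sympy (solving the first-order conditions, then testing the leading minors as in the working rule above):

```python
# Sketch: reproduce the classification table for
# f(x1, x2) = x1**3 + x2**3 + 2*x1**2 + 4*x2**2 + 6.
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
f = x1**3 + x2**3 + 2*x1**2 + 4*x2**2 + 6
grad = [sp.diff(f, v) for v in (x1, x2)]
H = sp.hessian(f, (x1, x2))

for point in sp.solve(grad, [x1, x2], dict=True):
    Hp = H.subs(point)
    H1 = Hp[0, 0]            # first leading principal minor
    H2 = Hp.det()            # second leading principal minor
    if H1 > 0 and H2 > 0:
        nature = 'relative minimum'
    elif H1 < 0 and H2 > 0:
        nature = 'relative maximum'
    else:
        nature = 'saddle point'
    print(point, H1, H2, nature, f.subs(point))
```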
Multi Variable Optimization – Unconstrained NLP : Example 2
Introduction – Optimization Problem
 There are many different optimization, or "solving," methods, some better suited to particular types of problems than others.
 Linear problems are typically handled by linear programming, while nonlinear and combinatorial problems call for methods such as genetic algorithms, tabu search, and scatter search.
 The optimization problem can be defined as a computational problem in which the objective is to find the best of all possible solutions.

 Classical optimization theory uses differential calculus to determine points of maxima and minima (extrema) for unconstrained and constrained functions. The methods may not be suitable for efficient numerical computation, but the underlying theory provides the basis for most nonlinear programming algorithms.
Introduction – Optimization Problem

 Using optimization to solve design problems provides unique insights into situations.
 The model can compare the current design to the best possible one and includes information about limitations and the implied costs of arbitrary rules and policy decisions.
 A well-designed optimization model can also aid in what-if analysis, revealing where improvements can be made or where trade-offs may need to be made.
 The application of optimization to engineering problems spans multiple disciplines.
Introduction – Optimization Problem
 Design variables: the variables with which the design problem is parameterized:
X = (X1, X2, X3, …, Xn)
 Objective: the quantity to be minimized (or maximized), usually denoted by f(x) (the "cost function")
 Constraint: a condition that has to be satisfied:
 Inequality constraint: g(x) ≤ 0
 Equality constraint: h(x) = 0
Solving - Optimization Problem

 Optimization problems are typically solved using an iterative algorithm:

(Diagram: the model receives constants and design variables and returns responses; the optimizer receives those responses and their derivatives, the design sensitivities, and proposes updated design variables.)
Defining an Optimization Problem

1. Choose design variables and their bounds
2. Formulate the objective
3. Formulate the constraints (restrictions)
4. Choose a suitable optimization algorithm
Example – Design of a SODA Can

 Design a SODA can to hold a specified amount of SODA and meet other requirements.
 The cans will be produced in the billions, so it is desirable to minimize the cost of manufacturing.
 Since the cost is related directly to the surface area of the sheet metal used, it is reasonable to minimize the sheet metal required to fabricate the can.
Example – Design of a SODA Can
Requirements:
1. The diameter of the can should be no more than 8 cm and no less than 3.5 cm.
2. The height of the can should be no more than 18 cm and no less than 8 cm.
3. The can is required to hold at least 400 ml of fluid.
Example – Design of a SODA Can

Design variables
D = diameter of the can (cm)
H = height of the can (cm)

Objective function
The design objective is to minimize the surface area of the sheet metal. For a closed cylinder this is

S = πDH + (π/2)D²  (the lateral surface plus the two end caps)
Example – Design of a SODA Can
The constraints must be formulated in terms of the design variables.
The first constraint is that the can must hold at least 400 ml of fluid (400 ml = 400 cm³):

(π/4)D²H ≥ 400

The other constraints on the size of the can are:

3.5 ≤ D ≤ 8 and 8 ≤ H ≤ 18

The problem has two independent design variables and five explicit constraints.
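This small NLP can be handed to a numerical solver directly. A sketch using scipy (the starting point x0 and the use of scipy.optimize.minimize are our own choices, not part of the original example):

```python
# Sketch: solve the can design problem numerically.
# minimize S(D, H) = pi*D*H + (pi/2)*D**2
# subject to (pi/4)*D**2*H >= 400, 3.5 <= D <= 8, 8 <= H <= 18.
import numpy as np
from scipy.optimize import minimize

def surface(v):
    D, H = v
    return np.pi * D * H + 0.5 * np.pi * D**2

volume = {'type': 'ineq',   # scipy expects constraints in the form g(v) >= 0
          'fun': lambda v: 0.25 * np.pi * v[0]**2 * v[1] - 400.0}

res = minimize(surface, x0=[6.0, 10.0],
               bounds=[(3.5, 8.0), (8.0, 18.0)],
               constraints=[volume])
print(res.x, res.fun)       # optimal (D, H) and the minimal surface area
```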
Optimization Problem – Characteristics

Linearity
 Nonlinear objective functions can have multiple local optima:

(Figure: plots of a nonlinear f over x and over (x1, x2), each showing several local optima.)

● Challenge: finding the global optimum.
Optimization Problem – Characteristics
An extreme point of a function f(X) defines either a maximum or a minimum.
Figure 18.1 illustrates the maxima and minima of a single-variable function f(x) over the interval [a, b]. The points x1, x2, x3, x4, and x6 are all extrema of f(x), with x1, x3, and x6 as maxima and x2 and x4 as minima.
Because f(x6) is the largest of the maxima, f(x6) is a global or absolute maximum, and f(x1) and f(x3) are local or relative maxima. Similarly, f(x4) is a local minimum and f(x2) is a global minimum.
Optimization Problem – Characteristics

 Although x1 (in Figure 18.1) is a maximum point, it differs from the remaining local maxima in that the value of f corresponding to at least one point in the neighborhood of x1 equals f(x1).
 In this respect, x1 is a weak maximum, whereas x3 and x6 are strong maxima.
Optimization Problem – Characteristics

Single‐Variable Optimization
Let f(x) be a continuous function of a single variable x defined on the interval [a, b].

Local Maxima
A function f(x) of a single variable is said to have a local (relative) maximum at x = X0 if f(X0) ≥ f(X0 + h) for all sufficiently small positive and negative values of h.

Local Minima
A function f(x) of a single variable is said to have a local (relative) minimum at x = X0 if f(X0) ≤ f(X0 + h) for all sufficiently small positive and negative values of h.
Unconstrained optimization problems

Single‐Variable Optimization
A function of one variable f(x) is said to have a relative or local minimum at x = x* if f(x*) ≤ f(x* + h) for all sufficiently small positive and negative values of h. Similarly, a point x* is called a relative or local maximum if f(x*) ≥ f(x* + h) for all values of h sufficiently close to zero.

A function f(x) is said to have a global or absolute minimum at x* if f(x*) ≤ f(x) for all x, and not just for all x close to x*, in the domain over which f(x) is defined. Similarly, a point x* will be a global maximum of f(x) if f(x*) ≥ f(x) for all x in the domain.
Unconstrained optimization problems
Example 1
Assume the following relationship for the revenue and cost functions. Find the level of output x, where x is measured in tons per week, at which profit is maximum.

Draw a table of variations to study the derivative around the stationary and critical points. A table of variations must contain:
 all stationary and critical points
 the value of the function at the stationary and critical points
 the intervals between and around the stationary and critical points
 the sign of the derivative in these intervals

There are therefore two stationary points (X = −1 and X = 2). There are, however, no critical points, since the derivative is well defined for all X.
Example 2
The profit P earned by a company on some item is a function of the number of units produced, say x, and is given by
P = 800x − 2x²

If the company's expenditure on interest, rent, and salary of the staff is Rs. 1 lakh, show that the company will always be in loss.
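A quick check of the claim (this worked arithmetic is ours, not part of the original slide): P′(x) = 800 − 4x vanishes at x = 200, where P is largest. The sketch below confirms that even this maximum profit falls short of the fixed expenditure:

```python
# Sketch: compare the maximum of P(x) = 800x - 2x**2 with the fixed cost of Rs. 1 lakh.
x_star = 800 / 4                        # P'(x) = 800 - 4x = 0  ->  x = 200
p_max = 800 * x_star - 2 * x_star**2    # maximum gross profit
print(x_star, p_max, p_max - 100000)    # 200.0 80000.0 -20000.0 -> always a loss
```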
Optimization Problem – Characteristics
Unimodality
 Bracketing and sectioning methods work best for unimodal functions:
"A unimodal function consists of exactly one monotonically increasing part and one monotonically decreasing part."
Optimization Problem – Characteristics

Convexity

 Convex set:
“A set S is convex if for every two points x1, x2 in S, the connecting line also lies
completely inside S”
Lagrange Multipliers

 The method of Lagrange multipliers gives a set of necessary conditions to identify optimal points of equality-constrained optimization problems.
 This is done by converting a constrained problem into an equivalent unconstrained problem with the help of certain unspecified parameters known as Lagrange multipliers.
Lagrange Multipliers
Finding an Optimum solution using Lagrange Multipliers

 The classical problem formulation

minimize f(x1, x2, ..., xn)
subject to h1(x1, x2, ..., xn) = 0

can be converted to

minimize L(x, λ) = f(x) − λ h1(x)

where L(x, λ) is the Lagrangian function and λ is an unspecified positive or negative constant called the Lagrange multiplier.
Lagrange Multipliers

1. The original problem is rewritten as:
minimize L(x, λ) = f(x) − λ h1(x)
2. Take the derivatives of L(x, λ) with respect to each xi and set them equal to zero.
• If there are n variables (i.e., x1, ..., xn), this gives n equations with n + 1 unknowns (the n variables xi and one Lagrange multiplier λ).
3. Express all xi in terms of the Lagrange multiplier λ.
4. Plug x, expressed in terms of λ, into the constraint h1(x) = 0 and solve for λ.
5. Calculate x using the value just found for λ.
 Note that the n derivatives and one constraint equation result in n + 1 equations for n + 1 unknowns! (A code sketch of this procedure follows.)
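The five steps map directly onto a computer algebra system. A minimal sympy sketch on a small illustrative problem of our own (minimize x1² + x2² subject to x1 + x2 − 10 = 0):

```python
# Sketch: the Lagrange multiplier procedure with sympy.
import sympy as sp

x1, x2, lam = sp.symbols('x1 x2 lam')
f = x1**2 + x2**2
h = x1 + x2 - 10

L = f - lam * h                                    # step 1: the Lagrangian
eqs = [sp.diff(L, v) for v in (x1, x2)] + [h]      # steps 2-4: n + 1 equations
sol = sp.solve(eqs, [x1, x2, lam], dict=True)[0]   # step 5: solve for x and lam
print(sol, f.subs(sol))   # {x1: 5, x2: 5, lam: 10}, minimum value 50
```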
Multiple constraints

 The Lagrange multiplier method can be used for any number of equality constraints.
 Suppose we have a classical problem formulation with k equality constraints:

minimize f(x1, x2, ..., xn)
subject to h1(x1, x2, ..., xn) = 0
......
hk(x1, x2, ..., xn) = 0

This can be converted to

minimize L(x, λ) = f(x) − λᵀ h(x)

where λᵀ is the transpose of the vector of Lagrange multipliers, which has length k.
EXAMPLE
A factory manufactures HONDA CITY and HONDA CIVIC cars.
Determine the optimal number of HONDA CITY and HONDA CIVIC cars produced if the factory capacity is 90 cars per day and the cost of manufacturing is
C(x, y) = 6x² + 12y², where x is the number of HONDA CITY cars and y is the number of HONDA CIVIC cars produced.
EXAMPLE(Cont.)
 VARIABLES
x = number of HONDA CITY cars produced
y = number of HONDA CIVIC cars produced

 COST of manufacturing:
C(x, y) = 6x² + 12y²

 OBJECTIVE: MINIMIZE COST

 CONSTRAINT: 90 cars per day
x + y = 90

 The original problem is rewritten as:
minimize L(x, y, λ) = C(x, y) − λ (x + y − 90)
EXAMPLE(Cont.)
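The worked solution on the original slide did not survive extraction; the following sympy sketch reconstructs it with the same procedure as above:

```python
# Sketch: solve the car-production example with the Lagrange procedure.
import sympy as sp

x, y, lam = sp.symbols('x y lam')
C = 6*x**2 + 12*y**2
h = x + y - 90

L = C - lam * h
sol = sp.solve([sp.diff(L, x), sp.diff(L, y), h], [x, y, lam], dict=True)[0]
print(sol, C.subs(sol))   # {x: 60, y: 30, lam: 720} -> minimum cost 32400
```

Setting ∂L/∂x = 12x − λ = 0 and ∂L/∂y = 24y − λ = 0 gives x = 2y; the capacity constraint then yields x = 60 CITY and y = 30 CIVIC cars, at a cost of 32,400.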
Practice Questions
The gross domestic product (GDP) of a certain country following a national crisis (at t = 0) is approximated by
G(t) = −0.4t³ + 4.8t² + 20,  0 ≤ t ≤ 12
where G(t) is measured in billions of dollars. When during this time period is the GDP at its highest?

We compute
G′(t) = −1.2t² + 9.6t and G″(t) = −2.4t + 9.6
We now solve for the critical points of G:
G′(t) = 0
−1.2t² + 9.6t = 0
t(−1.2t + 9.6) = 0
t = 0, 8
Since t = 0 is an endpoint, it is not a critical point. Furthermore, since G′(t) is defined for all t in the domain of G, we conclude that G has a single critical point, at t = 8.
We now compare the value of G at the critical point and at both boundary points:
G(0) = 20, G(8) = 122.4, G(12) = 20
Therefore, the GDP is at its highest after 8 years.
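A two-line numeric check of those three values (our own addition):

```python
# Quick check of G at the critical point and the endpoints.
G = lambda t: -0.4*t**3 + 4.8*t**2 + 20
print(G(0), G(8), G(12))   # 20.0, 122.4, ~20.0 -> the maximum is at t = 8
```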
 Use the method of Lagrange multipliers to find the minimum value of
48X + 96Y − X² − 2XY − 9Y²
subject to the constraint
5X + Y − 54 = 0

 Use the method of Lagrange multipliers to find the minimum value of
X1² + X2² − 4X1 − 6X2 + 4
subject to the constraint
X1 + X2 − 10 = 0

 Use the method of Lagrange multipliers to find the minimum value of
f(x, y) = x² + 4y² − 2x + 8y
subject to the constraint
x + 2y = 7
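One way to check an answer to the last question, reusing the sympy pattern from the Lagrange multiplier section (our own verification, not part of the original practice set):

```python
# Sketch: check the last practice question with sympy.
import sympy as sp

x, y, lam = sp.symbols('x y lam')
f = x**2 + 4*y**2 - 2*x + 8*y
h = x + 2*y - 7

L = f - lam * h
sol = sp.solve([sp.diff(L, x), sp.diff(L, y), h], [x, y, lam], dict=True)[0]
print(sol, f.subs(sol))   # {x: 5, y: 1, lam: 8} -> minimum value 27
```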
Unconstrained optimization algorithms

 Single-variable methods
🞑 0th order (involving only f )
🞑 1st order (involving f and f ’ )
🞑 2nd order (involving f, f ’ and f ” )
 Multiple variable methods
🞑 Gradient Descent Methods
🞑 Simplex Method
🞑 Sequential Linear Programming
🞑 Sequential Quadratic Programming
🞑 Etc.
