
Constrained Optimization

minimize f(x), where xT = (x1, x2, · · · , xn)
subject to
hi(x) = 0; i = 1, 2, · · · , m
gj(x) ≤ 0; j = 1, 2, · · · , r

There is a restriction on the number of equality constraints: m ≤ n. However, there is no restriction on the number of inequality constraints.

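A quick illustration (a sketch, not part of the original slides): a problem in this form can be handed to an off-the-shelf solver. The example below uses scipy.optimize.minimize on the "active inequality" problem that appears later in these slides; note that SciPy's 'ineq' convention is fun(x) ≥ 0, the opposite of the gj(x) ≤ 0 convention used here, so g is negated.

```python
# Sketch: minimize (x1-1.5)^2 + (x2-1.5)^2 subject to x1+x2-2 <= 0.
import numpy as np
from scipy.optimize import minimize

f = lambda x: (x[0] - 1.5)**2 + (x[1] - 1.5)**2  # objective f(x)
g = lambda x: x[0] + x[1] - 2.0                  # constraint g(x) <= 0

# SciPy expects 'ineq' constraints as fun(x) >= 0, hence the sign flip.
res = minimize(f, x0=np.zeros(2),
               constraints=[{'type': 'ineq', 'fun': lambda x: -g(x)}])
print(res.x)  # -> approximately [1.0, 1.0]
```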


Equality Constraints

Let us now consider the equality constrained problem:

minimize f(x)
subject to
hi(x) = 0; i = 1, 2, · · · , m
We assume f and h are twice continuously differentiable functions.



Lagrange Multiplier Theorem - Necessary Condition

It states that for a local minimum x∗, there exist scalars λ1, λ2, · · · , λm, called Lagrange multipliers, such that

∇f(x∗) + Σᵢ₌₁ᵐ λi ∇hi(x∗) = 0
hi(x∗) = 0; i = 1, 2, · · · , m

This gives n + m equations in n + m unknowns, where n and m are the numbers of design variables and Lagrange multipliers respectively.



Lagrange Multiplier Theorem - Proof

minimize f(x1, x2)
subject to h(x1, x2) = 0

Suppose the constraint can be solved as x2 = φ(x1). The problem reduces to the unconstrained problem minimize f(x1, φ(x1)), with h(x1, φ(x1)) = 0 holding identically. Differentiating both with respect to x1 at the minimum,

∂f/∂x1 + (∂f/∂x2)(dx2/dx1) = 0   (1)
∂h/∂x1 + (∂h/∂x2)(dx2/dx1) = 0   (2)

From (2),

dx2/dx1 = −(∂h/∂x1)/(∂h/∂x2)
On substituting this in (1),

∂f/∂x1 − (∂f/∂x2)(∂h/∂x1)/(∂h/∂x2) = 0

Let λ = −(∂f/∂x2)/(∂h/∂x2). Then

∂f/∂x1 + λ ∂h/∂x1 = 0   (3)

From the definition of λ,

∂f/∂x2 + λ ∂h/∂x2 = 0   (4)

Writing (3) and (4) in vector form, and generalizing to m constraints,

∇f(x∗) + Σᵢ₌₁ᵐ λi ∇hi(x∗) = 0
Example

minimize x1 + x2
subject to x1² + x2² = 2

[Figure: the constraint circle x1² + x2² = 2 with x∗ = (−1, −1); ∇f(x∗) = (1, 1) and ∇h(x∗) = (−2, −2)]



The Lagrange multiplier is λ = 1/2. It is free in sign.

∇f(x∗) = −λ∇h(x∗)

Similarly, for m constraints,

∇f(x∗) = −Σᵢ₌₁ᵐ λi ∇hi(x∗)

It requires that ∇h1(x∗), ∇h2(x∗), · · · , ∇hm(x∗) be linearly independent.

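The theorem can be checked symbolically on the example above. A sketch assuming SymPy (not part of the slides): solving ∇f + λ∇h = 0 together with h = 0 returns both stationary points, the minimizer (−1, −1) with λ = 1/2 and the maximizer (1, 1) with λ = −1/2.

```python
# Sketch: Lagrange conditions for f = x1 + x2, h = x1^2 + x2^2 - 2.
import sympy as sp

x1, x2, lam = sp.symbols('x1 x2 lam', real=True)
f = x1 + x2
h = x1**2 + x2**2 - 2

# Stationarity in each variable plus the constraint itself.
eqs = [sp.diff(f, v) + lam * sp.diff(h, v) for v in (x1, x2)] + [h]
print(sp.solve(eqs, [x1, x2, lam], dict=True))
# Two stationary points: x = (-1, -1) with lam = 1/2 (the minimum)
# and x = (1, 1) with lam = -1/2 (the maximum).
```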


Example - No Lagrange Multiplier

minimize x1 + x2
subject to
(x1 − 1)² + x2² = 1
(x1 − 2)² + x2² = 4

The optimum is x∗ = (0, 0), where

∇f(x∗) = (1, 1); ∇h1(x∗) = (−2, 0); ∇h2(x∗) = (−4, 0)

∇f(x∗) ≠ −Σᵢ₌₁² λi ∇hi(x∗) for any λ1, λ2, because ∇h1(x∗) and ∇h2(x∗) are linearly dependent.



[Figure: the two constraint circles meeting only at x∗ = (0, 0); ∇f(x∗) = (1, 1), ∇h1(x∗) = (−2, 0), ∇h2(x∗) = (−4, 0)]


Lagrangian Function

The Lagrangian function is

L(x, λ) = f(x) + Σᵢ₌₁ᵐ λi hi(x)

By making the Lagrangian stationary with respect to x and λ,

∇x L(x∗, λ∗) = 0; ∇λ L(x∗, λ∗) = 0

which gives

∇f(x∗) + Σᵢ₌₁ᵐ λi ∇hi(x∗) = 0
hi(x∗) = 0

It converts a constrained problem into an unconstrained problem.



Inequality Constraints - active

Minimize (x1 − 1.5)² + (x2 − 1.5)²
subject to
x1 + x2 − 2 ≤ 0
−x1 ≤ 0; −x2 ≤ 0

[Figure: the triangular feasible region with the optimum x∗ = (1, 1) on the line x1 + x2 = 2, so the first constraint is active]



Inequality Constraints - Inactive

Minimize (x1 − 0.5)² + (x2 − 0.5)²
subject to
x1 + x2 − 2 ≤ 0
−x1 ≤ 0; −x2 ≤ 0

[Figure: the same feasible region with the optimum x∗ = (0.5, 0.5) in the interior; all constraints are inactive]



General Problem

minimize f (x)
subject to
hi(x) = 0; i = 1, 2, · · · , m
gj(x) ≤ 0; j = 1, 2, · · · , r

By adding a slack variable sj to each gj(x),

gj(x) + sj² = 0

The Lagrange multiplier µj corresponding to gj(x) must be nonnegative:

µj ≥ 0; j = 1, 2, · · · , r



Karush-Kuhn-Tucker Optimality Conditions - Necessary

The Lagrangian function is

L(x, λ, µ, s) = f(x) + Σᵢ₌₁ᵐ λi hi(x) + Σⱼ₌₁ʳ µj (gj(x) + sj²)

By making the Lagrangian stationary with respect to x, λ, µ and s,

∇f(x∗) + Σᵢ₌₁ᵐ λ∗i ∇hi(x∗) + Σⱼ₌₁ʳ µ∗j ∇gj(x∗) = 0
hi(x∗) = 0; i = 1, · · · , m
gj(x∗) + sj² = 0; j = 1, · · · , r
2µ∗j sj = 0; j = 1, · · · , r
µ∗j ≥ 0; j = 1, · · · , r



Example

minimize ½(x1² + x2² + x3²)
subject to
x1 + x2 + x3 ≤ −3

The Lagrangian function is

L = ½(x1² + x2² + x3²) + µ(x1 + x2 + x3 + 3 + s²)

By the KKT conditions,

x1∗ + µ∗ = 0
x2∗ + µ∗ = 0
x3∗ + µ∗ = 0
x1∗ + x2∗ + x3∗ + 3 + s² = 0
2µ∗s = 0
Example - contd

Case 1: The constraint is active, i.e., s = 0.

x1∗ = −1; x2∗ = −1; x3∗ = −1; µ∗ = 1

As it satisfies the KKT conditions, it is a candidate local minimum.

Case 2: The constraint is inactive, i.e., µ = 0.

x1∗ = 0; x2∗ = 0; x3∗ = 0; µ∗ = 0; s² = −3

Since the slack variable s² is negative, Case 2 is infeasible and cannot be the solution.

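The case enumeration can be reproduced symbolically. A sketch assuming SymPy (illustrative, not from the slides):

```python
# Enumerate the complementarity cases 2*mu*s = 0 for
# L = (1/2)(x1^2 + x2^2 + x3^2) + mu*(x1 + x2 + x3 + 3 + s^2).
import sympy as sp

x1, x2, x3, mu, s = sp.symbols('x1 x2 x3 mu s', real=True)
stationarity = [x1 + mu, x2 + mu, x3 + mu]   # dL/dxi = 0
g = x1 + x2 + x3 + 3                         # g(x) <= 0

# Case 1: constraint active (s = 0), so g(x) = 0.
print(sp.solve(stationarity + [g], [x1, x2, x3, mu]))
# -> {x1: -1, x2: -1, x3: -1, mu: 1}; mu >= 0, a valid KKT point.

# Case 2: constraint inactive (mu = 0) forces x = 0, and then
# g(0) + s^2 = 3 + s^2 = 0 has no real root: the case is infeasible.
print(sp.solve(sp.Eq(3 + s**2, 0), s))       # -> []
```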


KKT Necessary Conditions - Alternate form

From the KKT conditions,

gj(x∗) + sj² = 0; j = 1, · · · , r
2µ∗j sj = 0; j = 1, · · · , r

The first gives −gj(x∗) = sj². Since sj² ≥ 0,

−gj(x∗) ≥ 0, i.e., gj(x∗) ≤ 0

Multiplying the second by sj/2 gives µ∗j sj² = 0, and substituting sj² = −gj(x∗),

µ∗j gj(x∗) = 0



The Lagrangian function is now

L(x, λ, µ) = f(x) + Σᵢ₌₁ᵐ λi hi(x) + Σⱼ₌₁ʳ µj gj(x)

and the necessary conditions become

∇f(x∗) + Σᵢ₌₁ᵐ λ∗i ∇hi(x∗) + Σⱼ₌₁ʳ µ∗j ∇gj(x∗) = 0
hi(x∗) = 0; i = 1, · · · , m
gj(x∗) ≤ 0; j = 1, · · · , r
µ∗j gj(x∗) = 0; j = 1, · · · , r
µ∗j ≥ 0; j = 1, · · · , r



Example

Minimize −(x1x2 + x2x3 + x3x1)
subject to x1 + x2 + x3 = 3

The Lagrangian function is

L = −(x1x2 + x2x3 + x3x1) + λ(x1 + x2 + x3 − 3)

KKT necessary conditions (first order) are

−x2∗ − x3∗ + λ∗ = 0
−x1∗ − x3∗ + λ∗ = 0
−x2∗ − x1∗ + λ∗ = 0
x1∗ + x2∗ + x3∗ = 3

x1∗ = 1; x2∗ = 1; x3∗ = 1; λ∗ = 2

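Since the first-order conditions above are linear in (x1, x2, x3, λ), they can be solved as a single linear system. A numeric sketch (not from the slides):

```python
# Solve the four linear first-order conditions for (x1, x2, x3, lam).
import numpy as np

A = np.array([[ 0., -1., -1., 1.],   # -x2 - x3 + lam = 0
              [-1.,  0., -1., 1.],   # -x1 - x3 + lam = 0
              [-1., -1.,  0., 1.],   # -x1 - x2 + lam = 0
              [ 1.,  1.,  1., 0.]])  #  x1 + x2 + x3   = 3
b = np.array([0., 0., 0., 3.])
print(np.linalg.solve(A, b))         # -> [1. 1. 1. 2.]
```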
Example
Consider the problem
minimize ½(x1² + x2² + x3²)
subject to
x1 + x2 + x3 ≤ −3

The Lagrangian function is

L = ½(x1² + x2² + x3²) + µ(x1 + x2 + x3 + 3)

By the KKT conditions,

x1∗ + µ∗ = 0
x2∗ + µ∗ = 0
x3∗ + µ∗ = 0
x1∗ + x2∗ + x3∗ + 3 ≤ 0
µ∗(x1∗ + x2∗ + x3∗ + 3) = 0
µ∗ ≥ 0
Example - contd

Case 1: The constraint is active, i.e., g(x) = 0.

x1∗ = −1; x2∗ = −1; x3∗ = −1; µ∗ = 1

As it satisfies the KKT conditions, it can be the solution.¹

Case 2: The constraint is inactive, i.e., µ = 0.

x1∗ = 0; x2∗ = 0; x3∗ = 0; µ∗ = 0

Since x∗ = (0, 0, 0) violates the constraint x1 + x2 + x3 ≤ −3, it cannot be the solution.

¹ Since this problem is convex, it is a local minimum.
Sensitivity

Minimize f (x)
subject to aT x = b
Let us assume that x∗ is a local minimum and λ∗ is a
corresponding Lagrange multiplier. If the level of the constraint b
is changed to b + ∆b, the minimum x∗ will change to x∗ + ∆x.
Since

b + ∆b = aT(x∗ + ∆x) = aT x∗ + aT ∆x = b + aT ∆x

it follows that ∆b = aT ∆x.
The variations ∆x and ∆b are related by

aT ∆x = ∆b



Using the Lagrange multiplier condition,

∇f(x∗) + λ∗∇h(x∗) = 0

Since h(x) is affine, ∇h(x∗) = a, so

∇f(x∗) = −λ∗a

Using a first-order approximation, the objective function variation can be written as

f(x∗ + ∆x) ≈ f(x∗) + ∇f(x∗)T ∆x
∆f = f(x∗ + ∆x) − f(x∗) = ∇f(x∗)T ∆x
∆f = −λ∗ aT ∆x

Since aT ∆x = ∆b,

∆f = −λ∗∆b



∆f/∆b = −λ∗

Thus the Lagrange multiplier λ∗ gives the rate of optimal cost decrease as the level of the constraint increases.

In the case where there are multiple constraints,

aiT x = bi; i = 1, 2, · · · , m

the function variation becomes

∆f = −Σᵢ₌₁ᵐ λ∗i aiT ∆x = −Σᵢ₌₁ᵐ λ∗i ∆bi



Example - Sensitivity

Minimize −(x1x2 + x2x3 + x3x1)
subject to x1 + x2 + x3 = 4

As this has already been minimized for x1 + x2 + x3 = 3, the problem can be solved by sensitivity analysis:

∆f = −λ∗∆b
∆f = f∗(4) − f∗(3) = −λ∗∆b

As f∗ = −3 and λ∗ = 2 for b = 3 (see the earlier example),

f∗(4) = f∗(3) − λ∗∆b = −3 − 2 × 1 = −5

It gives an approximate value! If it were minimized using the Lagrange multiplier theorem, fmin = −5.333 for b = 4.
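A two-line numeric comparison of the first-order estimate against the exact optimum (a sketch, not from the slides); for b = 4 the exact minimizer is x = (4/3, 4/3, 4/3) by symmetry.

```python
lam_star, f3 = 2.0, -3.0         # multiplier and optimal value at b = 3
approx_f4 = f3 - lam_star * 1.0  # sensitivity estimate for b = 4
exact_f4 = -3 * (4 / 3)**2       # -(x1x2 + x2x3 + x3x1) at x = (4/3, 4/3, 4/3)
print(approx_f4, exact_f4)       # -> -5.0 -5.333...
```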
Why µ nonnegative?

Minimize f(x)
subject to g(x) ≤ 0

Let x∗ and µ∗ satisfy the KKT conditions, and let f∗ be the minimum objective value. Consider the modified problem

Minimize f(x)
subject to g(x) ≤ e

where e ≥ 0. From sensitivity analysis,

∆f/e = (f∗(e) − f∗(0))/e = −µ∗

If µ∗ were negative, f∗(e) > f∗(0): relaxing the constraint would increase the cost. But enlarging the feasible region can never increase the minimum, so µ∗ must be nonnegative.
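This argument can be verified in closed form on the earlier quadratic example, minimize ½(x1² + x2² + x3²) subject to x1 + x2 + x3 ≤ −3 + e. A sketch assuming SymPy; the symmetric active-case solution xi = (e − 3)/3 is derived by hand and is not in the slides.

```python
# Verify df*/de = -mu* for the perturbed constraint level e (0 <= e < 3).
import sympy as sp

e = sp.symbols('e', nonnegative=True)
xi = (e - 3) / 3                       # each coordinate of the minimizer
f_star = sp.Rational(3, 2) * xi**2     # f* = (1/2) * 3 * xi^2
mu_star = -xi                          # stationarity: xi + mu = 0
print(sp.simplify(sp.diff(f_star, e) + mu_star))  # -> 0, i.e. df*/de = -mu*
# mu* = (3 - e)/3 >= 0: relaxing the constraint lowers the cost.
```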
Example - Power System

[Figure: three generators with outputs P1, P2, P3 supplying a common demand PD]

Minimize F
subject to P1 + P2 + P3 = PD

where F is the total cost in $/hr:

F = Σᵢ₌₁³ (ai + bi Pi + ci Pi²)

The Lagrangian function is

L = Σᵢ₌₁³ (ai + bi Pi + ci Pi²) + λ(P1 + P2 + P3 − PD)
The first order KKT conditions are

2c1P1 + b1 + λ = 0
2c2P2 + b2 + λ = 0
2c3P3 + b3 + λ = 0
P1 + P2 + P3 − PD = 0

From the first three equations, with Fi = ai + biPi + ciPi²,

dFi/dPi = −λ

If the equality constraint is written instead as

PD − (P1 + P2 + P3) = 0

then dFi/dPi = λ. Since the objective function is convex and the constraint is affine, the first order necessary (KKT) conditions are enough to say that the solution obtained is indeed a minimum.
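As a concrete check, the dispatch KKT system is linear in (P1, P2, P3, λ) and can be solved directly. The cost coefficients and demand below are made-up illustrative values, not data from the course.

```python
# Economic dispatch via the linear KKT system
#   2*ci*Pi + bi + lam = 0 (i = 1..3),  P1 + P2 + P3 = PD.
import numpy as np

b = np.array([2.0, 1.5, 1.0])     # bi (assumed, $/MWh)
c = np.array([0.01, 0.02, 0.02])  # ci (assumed, $/MW^2h)
PD = 300.0                        # assumed demand in MW

A = np.zeros((4, 4))
A[:3, :3] = np.diag(2 * c)        # 2*ci*Pi terms
A[:3, 3] = 1.0                    # +lam in each stationarity row
A[3, :3] = 1.0                    # power balance row
rhs = np.concatenate([-b, [PD]])

P1, P2, P3, lam = np.linalg.solve(A, rhs)
print(P1, P2, P3, -lam)           # outputs and the common incremental cost
# Every unit runs at the same incremental cost dFi/dPi = -lam.
```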
