Class Constrained Optimization 2024
minimize f(x), where x^T = (x_1, x_2, · · · , x_n)
subject to
h_i(x) = 0;  i = 1, 2, · · · , m
g_j(x) ≤ 0;  j = 1, 2, · · · , r
There is a restriction on the number of equality constraints, namely m ≤ n. However, there is no restriction on the number of inequality constraints.
minimize f (x)
subject to
h_i(x) = 0;  i = 1, 2, · · · , m
We assume f and h are twice continuously differentiable functions.
Together with the constraints h_i(x) = 0, the stationarity conditions give n + m equations in n + m unknowns, where n and m are the numbers of design variables and Lagrange multipliers, respectively.
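As a concrete illustration, the n + m stationarity equations can be solved symbolically. A minimal sketch assuming SymPy is available; the problem data below is illustrative, not from the slides:

```python
# Hypothetical example: minimize x1^2 + x2^2 subject to x1 + x2 - 1 = 0.
import sympy as sp

x1, x2, lam = sp.symbols('x1 x2 lam')
f = x1**2 + x2**2
h = x1 + x2 - 1
L = f + lam * h  # Lagrangian

# n + m = 3 equations: dL/dx1 = 0, dL/dx2 = 0, and the constraint h = 0.
eqs = [sp.diff(L, x1), sp.diff(L, x2), h]
print(sp.solve(eqs, (x1, x2, lam)))  # {x1: 1/2, x2: 1/2, lam: -1}
```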
minimize f (x1 , x2 )
subject to h(x1 , x2 ) = 0
Suppose x2 = φ(x1 )
minimize f (x1 , φ(x1 ))
subject to h(x1 , φ(x1 )) = 0
\[
\frac{\partial f}{\partial x_1} + \frac{\partial f}{\partial x_2}\frac{dx_2}{dx_1} = 0 \qquad (1)
\]
\[
\frac{\partial h}{\partial x_1} + \frac{\partial h}{\partial x_2}\frac{dx_2}{dx_1} = 0 \qquad (2)
\]
From (2),
\[
\frac{dx_2}{dx_1} = -\frac{\partial h/\partial x_1}{\partial h/\partial x_2}
\]
On substituting this in (1),
\[
\frac{\partial f}{\partial x_1} - \frac{\partial f}{\partial x_2}\,\frac{\partial h/\partial x_1}{\partial h/\partial x_2} = 0
\]
Let
\[
\lambda = -\frac{\partial f/\partial x_2}{\partial h/\partial x_2}
\]
Then
\[
\frac{\partial f}{\partial x_1} + \lambda \frac{\partial h}{\partial x_1} = 0 \qquad (3)
\]
From the definition of λ,
\[
\frac{\partial f}{\partial x_2} + \lambda \frac{\partial h}{\partial x_2} = 0 \qquad (4)
\]
From (3) and (4), ∇f(x^*) + λ∇h(x^*) = 0. Generalizing to m equality constraints,
\[
\nabla f(x^*) + \sum_{i=1}^{m} \lambda_i \nabla h_i(x^*) = 0
\]
Example
minimize x_1 + x_2
subject to x_1^2 + x_2^2 = 2
[Figure: contours of x_1 + x_2 and the circle x_1^2 + x_2^2 = 2 in the (x_1, x_2)-plane; at the minimum x^* = (−1, −1), ∇f(x^*) = (1, 1) and ∇h(x^*) = (−2, −2).]
∇f (x∗ ) = −λ∇h(x∗ )
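Reading the multiplier off these gradients:
\[
(1, 1) = -\lambda(-2, -2) \;\Rightarrow\; \lambda = \tfrac{1}{2}
\]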
Similarly, for m constraints,
\[
\nabla f(x^*) = -\sum_{i=1}^{m} \lambda_i \nabla h_i(x^*)
\]
minimize x_1 + x_2
subject to
(x_1 − 1)^2 + x_2^2 = 1
(x_1 − 2)^2 + x_2^2 = 4
The optimum is x^* = (0, 0)^T, with
\[
\nabla f(x^*) = \begin{pmatrix} 1 \\ 1 \end{pmatrix}; \quad
\nabla h_1(x^*) = \begin{pmatrix} -2 \\ 0 \end{pmatrix}; \quad
\nabla h_2(x^*) = \begin{pmatrix} -4 \\ 0 \end{pmatrix}
\]
However, there exist no λ_1, λ_2 such that
\[
\nabla f(x^*) = -\sum_{i=1}^{2} \lambda_i \nabla h_i(x^*)
\]
The constraint gradients are linearly dependent at x^* while ∇f(x^*) lies outside their span; x^* is not a regular point, so no Lagrange multipliers exist.
[Figure: the two circles in the (x_1, x_2)-plane with ∇f(x^*) = (1, 1), ∇h_1(x^*) = (−2, 0) and ∇h_2(x^*) = (−4, 0) drawn at x^* = (0, 0).]
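A quick numerical confirmation that no multipliers can satisfy the condition at x^* = (0, 0); a sketch assuming NumPy, with variable names of my choosing:

```python
import numpy as np

# Gradients at x* = (0, 0), taken from the example above.
grad_f = np.array([1.0, 1.0])
H = np.array([[-2.0, -4.0],
              [ 0.0,  0.0]])  # columns: grad h1, grad h2

# Least-squares attempt at solving grad_f = -H @ lam.
lam, *_ = np.linalg.lstsq(H, -grad_f, rcond=None)
residual = np.linalg.norm(H @ lam + grad_f)
print(residual)  # 1.0: nonzero residual, so no multipliers exist
```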
The Lagrangian function is
\[
L(x, \lambda) = f(x) + \sum_{i=1}^{m} \lambda_i h_i(x)
\]
and setting ∂L/∂λ_i = 0 recovers the constraints h_i(x) = 0.
It converts a constrained problem to an unconstrained problem.
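For instance, the stationarity system ∇L = 0 for the circle example above can be handed to a root finder as an unconstrained problem; a sketch assuming SciPy:

```python
from scipy.optimize import fsolve

# grad L = 0 for L = x1 + x2 + lam*(x1^2 + x2^2 - 2).
def grad_L(z):
    x1, x2, lam = z
    return [1 + 2 * lam * x1,      # dL/dx1
            1 + 2 * lam * x2,      # dL/dx2
            x1**2 + x2**2 - 2]     # dL/dlam = h(x)

print(fsolve(grad_L, [-0.5, -0.5, 1.0]))  # approx [-1, -1, 0.5]
```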
Minimize [objective shown only graphically]
subject to
x_1 + x_2 − 2 ≤ 0;  −x_1 ≤ 0;  −x_2 ≤ 0
[Figure: contours in the (x_1, x_2)-plane; the minimum x^* = (1, 1) lies on the boundary, so the constraint x_1 + x_2 − 2 ≤ 0 is active.]
Minimize [objective shown only graphically]
subject to
x_1 + x_2 − 2 ≤ 0;  −x_1 ≤ 0;  −x_2 ≤ 0
[Figure: contours in the (x_1, x_2)-plane; the minimum x^* = (0.5, 0.5) lies in the interior, so all constraints are inactive.]
minimize f(x)
subject to
h_i(x) = 0;  i = 1, 2, · · · , m
g_j(x) ≤ 0;  j = 1, 2, · · · , r
By adding a slack variable s_j to each inequality constraint g_j(x),
g_j(x) + s_j^2 = 0;  µ_j ≥ 0;  j = 1, 2, · · · , r
Lagrangian function:
\[
L(x, \lambda, \mu, s) = f(x) + \sum_{i=1}^{m} \lambda_i h_i(x) + \sum_{j=1}^{r} \mu_j \left( g_j(x) + s_j^2 \right)
\]
\[
\nabla f(x^*) + \sum_{i=1}^{m} \lambda_i \nabla h_i(x^*) + \sum_{j=1}^{r} \mu_j \nabla g_j(x^*) = 0
\]
h_i(x^*) = 0;  i = 1, · · · , m
g_j(x^*) + s_j^2 = 0;  j = 1, · · · , r
2µ_j^* s_j = 0;  j = 1, · · · , r
µ_j^* ≥ 0;  j = 1, · · · , r
minimize (1/2)(x_1^2 + x_2^2 + x_3^2)
subject to
x_1 + x_2 + x_3 ≤ −3
The Lagrangian function is
\[
L = \tfrac{1}{2}(x_1^2 + x_2^2 + x_3^2) + \mu (x_1 + x_2 + x_3 + 3 + s^2)
\]
By the KKT conditions,
x_1^* + µ^* = 0
x_2^* + µ^* = 0
x_3^* + µ^* = 0
x_1^* + x_2^* + x_3^* + 3 + s^2 = 0
2µ^* s = 0
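Resolving the complementarity condition by cases, a step the slide leaves implicit: from 2µ^* s = 0, either µ^* = 0 or s = 0. If µ^* = 0, then x_1^* = x_2^* = x_3^* = 0 and feasibility would require 3 + s^2 = 0, which is impossible. Hence s = 0, x_i^* = −µ^*, and −3µ^* + 3 = 0, so
\[
\mu^* = 1, \qquad x^* = (-1, -1, -1)
\]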
Example - contd
g_j(x^*) + s_j^2 = 0;  j = 1, · · · , r
2µ_j^* s_j = 0;  j = 1, · · · , r
From the first condition, −g_j(x^*) = s_j^2. Since s_j^2 ≥ 0,
−g_j(x^*) ≥ 0, i.e., g_j(x^*) ≤ 0
Multiplying the second condition by s_j/2 gives µ_j^* s_j^2 = 0, and hence
µ_j^* g_j(x^*) = 0
\[
\nabla f(x^*) + \sum_{i=1}^{m} \lambda_i \nabla h_i(x^*) + \sum_{j=1}^{r} \mu_j \nabla g_j(x^*) = 0
\]
h_i(x^*) = 0;  i = 1, · · · , m
g_j(x^*) ≤ 0;  j = 1, · · · , r
µ_j^* g_j(x^*) = 0;  j = 1, · · · , r
µ_j^* ≥ 0;  j = 1, · · · , r
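These conditions translate directly into a numerical test. A minimal sketch assuming NumPy; the function name and interface are mine, with grads_h and grads_g being lists of gradient vectors evaluated at the candidate point:

```python
import numpy as np

def kkt_satisfied(grad_f, grads_h, grads_g, h_vals, g_vals, lam, mu,
                  tol=1e-8):
    """Check the four KKT conditions at a candidate point."""
    # Stationarity: grad f + sum(lam_i grad h_i) + sum(mu_j grad g_j) = 0
    r = grad_f + sum(li * gh for li, gh in zip(lam, grads_h)) \
               + sum(mj * gg for mj, gg in zip(mu, grads_g))
    return (np.linalg.norm(r) < tol
            and all(abs(v) < tol for v in h_vals)              # h_i(x*) = 0
            and all(v <= tol for v in g_vals)                  # g_j(x*) <= 0
            and all(abs(mj * v) < tol
                    for mj, v in zip(mu, g_vals))              # mu_j g_j = 0
            and all(mj >= -tol for mj in mu))                  # mu_j >= 0
```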
Minimize − (x1 x2 + x2 x3 + x3 x1 )
subject to x1 + x2 + x3 = 3
Lagrangian function
L = −(x1 x2 + x2 x3 + x3 x1 ) + λ(x1 + x2 + x3 − 3)
−x_2^* − x_3^* + λ^* = 0
−x_1^* − x_3^* + λ^* = 0
−x_1^* − x_2^* + λ^* = 0
x_1^* + x_2^* + x_3^* = 3
Solving these, x^* = (1, 1, 1), λ^* = 2 and f(x^*) = −3.
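The four linear equations above can be checked symbolically; a sketch assuming SymPy:

```python
import sympy as sp

x1, x2, x3, lam = sp.symbols('x1 x2 x3 lam')
eqs = [-x2 - x3 + lam,      # dL/dx1 = 0
       -x1 - x3 + lam,      # dL/dx2 = 0
       -x1 - x2 + lam,      # dL/dx3 = 0
       x1 + x2 + x3 - 3]    # constraint
print(sp.solve(eqs, (x1, x2, x3, lam)))  # {x1: 1, x2: 1, x3: 1, lam: 2}
```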
Example
Consider the problem
minimize (1/2)(x_1^2 + x_2^2 + x_3^2)
subject to
x_1 + x_2 + x_3 ≤ −3
The Lagrangian function is
\[
L = \tfrac{1}{2}(x_1^2 + x_2^2 + x_3^2) + \mu (x_1 + x_2 + x_3 + 3)
\]
By the KKT conditions,
x_1^* + µ^* = 0
x_2^* + µ^* = 0
x_3^* + µ^* = 0
x_1^* + x_2^* + x_3^* + 3 ≤ 0
µ^* ≥ 0
Example - contd
Complementarity requires µ^*(x_1^* + x_2^* + x_3^* + 3) = 0. With x_i^* = −µ^*, the choice µ^* = 0 is infeasible, so −3µ^* + 3 = 0, giving µ^* = 1 and x^* = (−1, −1, −1).
Since this problem is convex, the KKT point is indeed a minimum.
Sensitivity
Minimize f (x)
subject to aT x = b
Let us assume that x∗ is a local minimum and λ∗ is a
corresponding Lagrange multiplier. If the level of the constraint b
is changed to b + ∆b, the minimum x∗ will change to x∗ + ∆x.
Since
b + ∆b = a^T(x^* + ∆x) = a^T x^* + a^T ∆x = b + a^T ∆x,
the variations ∆x and ∆b are related by a^T ∆x = ∆b.
From ∇f(x^*) + λ∇h(x^*) = 0 with h(x) = a^T x − b,
\[
\Delta f \approx \nabla f(x^*)^T \Delta x = -\lambda^* a^T \Delta x = -\lambda^* \Delta b
\]
With m constraints a_i^T x = b_i, i = 1, 2, · · · , m, this generalizes to
\[
\Delta f = -\sum_{i=1}^{m} \lambda_i^* \Delta b_i
\]
Minimize − (x1 x2 + x2 x3 + x3 x1 )
subject to x1 + x2 + x3 = 4
As we have already minimized for x1 + x2 + x3 = 3, this problem
can be solved by sensitivity analysis.
∆f = −λ^* ∆b
f^*(4) − f^*(3) = −λ^* ∆b
Since f^*(3) = −3 and λ^* = 2 for b = 3 (refer slide # 23),
f^*(4) = f^*(3) − λ^* ∆b = −3 − 2 × 1 = −5
It gives an approximate value!
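Indeed, solving the b = 4 problem exactly (by the same symmetry as before, x_i^* = 4/3) gives
\[
f^*(4) = -3\left(\tfrac{4}{3}\right)^2 = -\tfrac{16}{3} \approx -5.33
\]
so the sensitivity estimate −5 is only first-order accurate.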
Minimize f(x)
subject to g(x) ≤ 0
Let x^* and µ^* satisfy the KKT conditions and let f^* be the minimum objective value.
Consider the modified problem
Minimize f (x)
subject to g (x) ≤ e
where e ≥ 0. From sensitivity analysis,
\[
\frac{\Delta f}{e} = \frac{f^*(e) - f^*(0)}{e} = -\mu^*
\]
If µ^* were negative, then f^*(e) > f^*(0); but relaxing the constraint (e ≥ 0) enlarges the feasible set, so the minimum cannot increase. This contradiction shows that µ^* must be nonnegative.
Example - Power System
[Figure: three generators with outputs P_1, P_2, P_3 supplying a total demand P_D.]
Minimize F
subject to P1 + P2 + P3 = PD
where F is the cost in $/hr.
\[
F = \sum_{i=1}^{3} \left( a_i + b_i P_i + c_i P_i^2 \right)
\]
The Lagrangian function is
\[
L = \sum_{i=1}^{3} \left( a_i + b_i P_i + c_i P_i^2 \right) + \lambda (P_1 + P_2 + P_3 - P_D)
\]
The first-order KKT conditions are
2c_1 P_1 + b_1 + λ = 0
2c_2 P_2 + b_2 + λ = 0
2c_3 P_3 + b_3 + λ = 0
P_1 + P_2 + P_3 − P_D = 0
From the first three equations,
\[
\frac{dF_i}{dP_i} = 2c_i P_i + b_i = -\lambda
\]
If the equality constraint is instead written as P_D − (P_1 + P_2 + P_3) = 0, the sign of the multiplier flips and
\[
\frac{dF_i}{dP_i} = \lambda
\]
Since the objective function is convex and the constraint is affine, the first-order necessary (KKT) conditions are enough to conclude that the solution obtained is indeed a minimum.
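Because the stationarity equations are linear in P_i, the dispatch has a closed form; a minimal sketch assuming NumPy, with made-up cost coefficients:

```python
import numpy as np

# Assumed (illustrative) cost coefficients F_i = a_i + b_i*P_i + c_i*P_i^2.
b = np.array([2.0, 1.5, 1.0])     # $/MWh
c = np.array([0.01, 0.02, 0.02])  # $/MW^2 h
PD = 300.0                        # total demand in MW

# From 2*c_i*P_i + b_i + lam = 0:  P_i = -(b_i + lam) / (2*c_i).
# Substituting into P1 + P2 + P3 = PD and solving for lam:
lam = -(PD + np.sum(b / (2 * c))) / np.sum(1 / (2 * c))
P = -(b + lam) / (2 * c)
print(lam, P, P.sum())  # P.sum() recovers PD
```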