Duality Theory
Linear Optimization
4. Duality theory
Based on the book Introduction to Linear Optimization by D. Bertsimas and J.N. Tsitsiklis
Outline
Sec. 4.1 We discuss how we can obtain lower bounds for LP problems.
Sec. 4.2 We start with an LP problem, called the primal, and introduce
another LP problem, called the dual.
Sec. 4.3 We study the relation between these two problems.
Sec. 4.4 We interpret the meaning of an optimal solution to the dual
problem.
Sec. 4.5 We develop the dual simplex method.
Sec. 4.6 We study Farkas’ lemma, which uncovers the deeper structure
of LP.
4.1 Motivation
Motivation
▶ Consider the LP problem:
      minimize   x1 + 3x2
      subject to x1 + 3x2 ≥ 2.
▶ The constraint itself tells us that x1 + 3x2 ≥ 2 for every feasible solution,
so 2 is a lower bound on the optimal cost (and it is attained, e.g., at x = (2, 0)).
Motivation
minimize x1 + 3x2
subject to x1 + x2 ≥ 2
x2 ≥ 1.
▶ How can we obtain a lower bound?
▶ Multiply the second constraint by 2 and add it to the first:
      x1 + x2 ≥ 2
 + 2 · (x2 ≥ 1)
 = x1 + 3x2 ≥ 4.
▶ Hence the optimal cost is at least 4.
Motivation
▶ Let’s see a more interesting example:
minimize x1 + 3x2
subject to x1 + x2 ≥ 2
x2 ≥ 1
x1 − x2 ≥ 3.
▶ How can we obtain a lower bound?
▶ Multiply the constraints by nonnegative multipliers p1 , p2 , p3 and add them up.
For the resulting left-hand side to match the objective, we need:
      p1 · (x1 + x2 ≥ 2)              p1 + p3 = 1
   + p2 · (x2 ≥ 1)                    p1 + p2 − p3 = 3
   + p3 · (x1 − x2 ≥ 3)               p1 , p2 , p3 ≥ 0
   = x1 + 3x2 ≥ B,     where   B = 2p1 + p2 + 3p3 .
▶ There are many different ways to obtain lower bounds:
▶ p1 = 1, p2 = 2, p3 = 0 gives us the lower bound B = 4.
▶ p1 = 0, p2 = 4, p3 = 1 gives us the lower bound B = 7.
Motivation
▶ A natural question is: What is the best lower bound for the
optimal cost that we can obtain in this way?
▶ The best lower bound B can be obtained by maximizing
2p1 + p2 + 3p3 over the constraints that we have derived.
Motivation
▶ We obtain:
Primal                              Dual
minimize   1x1 + 3x2                maximize   2p1 + 1p2 + 3p3
subject to 1x1 + 1x2 ≥ 2            subject to 1p1 + 0p2 + 1p3 = 1
           0x1 + 1x2 ≥ 1                       1p1 + 1p2 − 1p3 = 3
           1x1 − 1x2 ≥ 3,                      p1 , p2 , p3 ≥ 0.
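▶ As a quick numerical sanity check (a sketch of mine, assuming scipy is available; not
part of the slides), we can solve both problems with scipy.optimize.linprog and observe
that the best lower bound coincides with the primal optimal cost, namely 7:

import numpy as np
from scipy.optimize import linprog

# Primal: minimize x1 + 3x2  s.t.  x1 + x2 >= 2,  x2 >= 1,  x1 - x2 >= 3  (x free).
# linprog expects "<=" rows, so each ">=" row is negated; default bounds x >= 0 are overridden.
c = np.array([1.0, 3.0])
A_ge = np.array([[1.0, 1.0], [0.0, 1.0], [1.0, -1.0]])
b_ge = np.array([2.0, 1.0, 3.0])
primal = linprog(c, A_ub=-A_ge, b_ub=-b_ge, bounds=[(None, None)] * 2)

# Dual: maximize 2p1 + p2 + 3p3  s.t.  p1 + p3 = 1,  p1 + p2 - p3 = 3,  p >= 0.
# linprog minimizes, so we minimize the negated objective (default bounds give p >= 0).
dual = linprog([-2.0, -1.0, -3.0],
               A_eq=[[1.0, 0.0, 1.0], [1.0, 1.0, -1.0]], b_eq=[1.0, 3.0])

print(primal.fun)   # 7.0, attained at x = (4, 1)
print(-dual.fun)    # 7.0, attained at p = (0, 4, 1): the best lower bound equals the optimal cost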
Motivation
▶ Consider a more general primal LP problem:
      minimize   c′ x
      subject to Ax ≥ b.
▶ Every vector p ≥ 0 yields the valid inequality
      (p′ A)x ≥ p′ b.
▶ If, in addition, p′ A = c′ , then p′ b is a lower bound on the optimal cost.
▶ Now consider a primal LP problem in standard form:
      minimize   c′ x
      subject to Ax = b
                 x ≥ 0.
▶ We denote by p the dual variables associated with Ax = b and
by q the dual variables associated with x ≥ 0.
▶ The variables q must be nonnegative, but the variables p can be free.
▶ Each pair of vectors p, q with q ≥ 0 yields the inequality
      (p′ A + q′ I)x ≥ p′ b + q′ 0.
▶ Thus, to obtain lower bounds we only need to consider the dual
LP problem:
      maximize   p′ b + q′ 0
      subject to p′ A + q′ I = c′
                 q ≥ 0.
Motivation: standard form
maximize p′ b + q′ 0
subject to p′ A + q′ I = c′
q ≥ 0.
▶ We can now drop the q variables and obtain the equivalent
dual LP problem:
maximize p′ b
subject to p′ A ≤ c′ .
▶ Summarizing, we have derived the following two pairs of primal and dual problems:

Primal                          Dual
minimize   c′ x                 maximize   p′ b
subject to Ax ≥ b               subject to p ≥ 0
           x free,                         p′ A = c′ .

Primal                          Dual
minimize   c′ x                 maximize   p′ b
subject to Ax = b               subject to p free
           x ≥ 0,                          p′ A ≤ c′ .
▶ For each constraint in the primal (other than the sign
constraints), we introduce a variable in the dual problem.
▶ For each variable in the primal, we introduce a constraint in
the dual.
The dual problem
▶ Let A be a matrix with rows a′i and columns Aj .
▶ Given a primal problem, on the left, its dual is defined to be
the problem on the right:
Primal                                  Dual
min    c′ x                             max    p′ b
s. t.  a′i x ≥ bi ,   i ∈ M1 ,          s. t.  pi ≥ 0,         i ∈ M1 ,
       a′i x = bi ,   i ∈ M3 ,                 pi free,        i ∈ M3 ,
       xj ≥ 0,        j ∈ N1 ,                 p′ Aj ≤ cj ,    j ∈ N1 ,
       xj free,       j ∈ N3 ,                 p′ Aj = cj ,    j ∈ N3 .
▶ Depending on whether the primal constraint is an equality
or inequality constraint, the corresponding dual variable is
either free or sign-constrained, respectively.
The dual problem
▶ Let A be a matrix with rows a′i and columns Aj .
▶ Given a primal problem, on the left, its dual is defined to be
the problem on the right:
Primal                                  Dual
min    c′ x                             max    p′ b
s. t.  a′i x ≥ bi ,   i ∈ M1 ,          s. t.  pi ≥ 0,         i ∈ M1 ,
       a′i x ≤ bi ,   i ∈ M2 ,                 pi ≤ 0,         i ∈ M2 ,
       a′i x = bi ,   i ∈ M3 ,                 pi free,        i ∈ M3 ,
       xj ≥ 0,        j ∈ N1 ,                 p′ Aj ≤ cj ,    j ∈ N1 ,
       xj ≤ 0,        j ∈ N2 ,                 p′ Aj ≥ cj ,    j ∈ N2 ,
       xj free,       j ∈ N3 ,                 p′ Aj = cj ,    j ∈ N3 .
▶ Depending on whether a variable in the primal problem is
free or sign-constrained, we have an equality or inequality
constraint, respectively, in the dual problem.
Example 4.1
▶ Consider the following primal problem:
Primal
min x1 + 2x2 + 3x3
s. t. − x1 + 3x2 = 5
2x1 − x2 + 3x3 ≥ 6
x3 ≤ 4
x1 ≥ 0
x2 ≤ 0
x3 free
Example 4.1
▶ Its dual is shown on the right:
Primal                          Dual
min    x1 + 2x2 + 3x3           max    5p1 + 6p2 + 4p3
s. t.  − x1 + 3x2      = 5      s. t.  p1 free
       2x1 − x2 + 3x3  ≥ 6             p2 ≥ 0
       x3              ≤ 4             p3 ≤ 0
       x1 ≥ 0                          − p1 + 2p2  ≤ 1
       x2 ≤ 0                          3p1 − p2    ≥ 2
       x3 free                         3p2 + p3    = 3
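▶ The transformation rules can also be mechanized. The sketch below (my own encoding of
the table, not code from the book) builds the dual data from a description of the primal
and reproduces the dual of Example 4.1:

import numpy as np

DUAL_VAR_SIGN = {'>=': '>=0', '<=': '<=0', '=': 'free'}    # primal constraint -> dual variable sign
DUAL_CON_SENSE = {'>=0': '<=', '<=0': '>=', 'free': '='}   # primal variable   -> dual constraint sense

def dual_of(c, A, b, con_sense, var_sign):
    """Dual of:  min c'x  s.t.  a_i'x (con_sense[i]) b_i,  x_j with sign var_sign[j]."""
    A = np.asarray(A, dtype=float)
    return {
        'objective': ('max', np.asarray(b, dtype=float)),      # maximize p'b
        'constraints': [(A[:, j], DUAL_CON_SENSE[s], cj)       # p'A_j (sense) c_j
                        for j, (s, cj) in enumerate(zip(var_sign, c))],
        'p_sign': [DUAL_VAR_SIGN[s] for s in con_sense],
    }

d = dual_of(c=[1, 2, 3],                                   # Example 4.1
            A=[[-1, 3, 0], [2, -1, 3], [0, 0, 1]],
            b=[5, 6, 4],
            con_sense=['=', '>=', '<='],
            var_sign=['>=0', '<=0', 'free'])
print(d['p_sign'])                       # ['free', '>=0', '<=0']
for coeffs, sense, rhs in d['constraints']:
    print(coeffs, sense, rhs)            # -p1 + 2p2 <= 1;  3p1 - p2 >= 2;  3p2 + p3 = 3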
Example 4.1
▶ We now transform the dual into an equivalent minimization
problem (and multiply the last three constraints by −1). We start from:
Dual
max 5p1 + 6p2 + 4p3
s. t. p1 free
p2 ≥ 0
p3 ≤ 0
− p1 + 2p2 ≤ 1
3p1 − p2 ≥ 2
3p2 + p3 = 3
Example 4.1
▶ We transform the dual into an equivalent minimization
problem (and multiply the last three constraints by −1):
Dual
− min − 5p1 − 6p2 − 4p3
s. t. p1 free
p2 ≥ 0
p3 ≤ 0
p1 − 2p2 ≥ −1
− 3p1 + p2 ≤ −2
− 3p2 − p3 = −3
Example 4.1
▶ Then, on the left, we show its dual:
Dual of dual                            Dual
− max  − x1 − 2x2 − 3x3                 − min  − 5p1 − 6p2 − 4p3
s. t.  x1 − 3x2 = −5                    s. t.  p1 free
       − 2x1 + x2 − 3x3 ≤ −6                   p2 ≥ 0
       − x3 ≥ −4                               p3 ≤ 0
       x1 ≥ 0                                  p1 − 2p2 ≥ −1
       x2 ≤ 0                                  − 3p1 + p2 ≤ −2
       x3 free                                 − 3p2 − p3 = −3
Example 4.1
▶ We transform it into an equivalent minimization problem
(and multiply the first three constraints by −1).
Dual of dual
− max − x1 − 2x2 − 3x3
s. t. x1 − 3x2 = −5
− 2x1 + x2 − 3x3 ≤ −6
− x3 ≥ −4
x1 ≥ 0
x2 ≤ 0
x3 free
Example 4.1
▶ We have obtained the primal problem we started with!
Dual of dual
min x1 + 2x2 + 3x3
s. t. − x1 + 3x2 = 5
2x1 − x2 + 3x3 ≥ 6
x3 ≤ 4
x1 ≥ 0
x2 ≤ 0
x3 free
The dual problem
▶ The primal problem considered in Example 4.1 had all of
the ingredients of a general LP problem.
▶ This suggests that the conclusion reached at the end of the
example should hold in general.
Theorem 4.1
If we transform the dual into an equivalent minimization
problem and then form its dual, we obtain a problem
equivalent to the original problem.
Proof: Exercise.
Hint: Follow the steps performed in Example 4.1 with abstract
symbols replacing specific numbers.
Example 4.2
Consider the pair of primal and dual problems:
Primal                          Dual
minimize   c′ x                 maximize   p′ b
subject to Ax ≥ b               subject to p ≥ 0,
           x free,                         p′ A = c′ .
▶ We transform the primal by replacing the free vector x with x+ − x− ,
where x+ , x− ≥ 0, and then form the dual of the transformed problem:
Primal                                  Dual
minimize   c′ x+ − c′ x−                maximize   p′ b
subject to Ax+ − Ax− ≥ b                subject to p ≥ 0
           x+ ≥ 0                                  p′ A ≤ c′
           x− ≥ 0,                                 − p′ A ≤ −c′ .
▶ The two constraints p′ A ≤ c′ and −p′ A ≤ −c′ together amount to p′ A = c′ ,
so the two dual problems are equivalent.
Example 4.3
Consider a feasible standard form problem and its dual:
Primal                          Dual
minimize   c′ x                 maximize   p′ b
subject to Ax = b               subject to p′ A ≤ c′ .
           x ≥ 0,
▶ If the last equality constraint a′m x = bm is redundant, then it can be eliminated.
▶ This means that there exist scalars γ1 , . . . , γm−1 such that
      am = ∑_{i=1}^{m−1} γi ai ,      bm = ∑_{i=1}^{m−1} γi bi .
▶ For any dual vector p we then have
      p′ b = ∑_{i=1}^{m} pi bi = ∑_{i=1}^{m−1} pi bi + pm ∑_{i=1}^{m−1} γi bi = ∑_{i=1}^{m−1} (pi + γi pm ) bi ,
      p′ A = ∑_{i=1}^{m} pi a′i = ∑_{i=1}^{m−1} pi a′i + pm ∑_{i=1}^{m−1} γi a′i = ∑_{i=1}^{m−1} (pi + γi pm ) a′i .
▶ After eliminating the redundant constraint, the dual of the reduced primal, with
dual variables q1 , . . . , qm−1 , is
      maximize   ∑_{i=1}^{m−1} qi bi
      subject to ∑_{i=1}^{m−1} qi a′i ≤ c′ .
▶ Setting qi = pi + γi pm therefore maps any feasible solution p of the original dual
to a feasible solution q of the new dual with the same cost, so the two dual problems
are equivalent.
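▶ A small numerical illustration of this equivalence (a toy instance of my own, solved with
scipy, which is assumed to be available):

import numpy as np
from scipy.optimize import linprog

# Primal: min x1 + x2  s.t.  x1 = 1,  x2 = 2,  x1 + x2 = 3,  x >= 0.
# The last row is redundant:  a3 = a1 + a2  and  b3 = b1 + b2  (gamma1 = gamma2 = 1).
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
b = np.array([1.0, 2.0, 3.0])
c = np.array([1.0, 1.0])

def dual_opt(A, b, c):
    """Optimal cost of  max p'b  s.t.  A'p <= c  (p free); linprog minimizes, so negate."""
    res = linprog(-b, A_ub=A.T, b_ub=c, bounds=[(None, None)] * len(b))
    return -res.fun

print(dual_opt(A, b, c))            # 3.0: dual of the full problem
print(dual_opt(A[:2], b[:2], c))    # 3.0: dual after eliminating the redundant constraint
# E.g. p = (0, 0, 1) is feasible for the full dual with cost 3; it maps to
# q = (p1 + 1*p3, p2 + 1*p3) = (1, 1), feasible for the reduced dual with the same cost.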
Theorem 4.2
Suppose that we have transformed an LP problem Π1 into
another LP problem Π2 by a sequence of transformations of
the following types:
(a) Replace a free variable with the difference of two
nonnegative variables.
(b) Replace an inequality constraint by an equality constraint
involving a nonnegative slack variable.
(c) If some row of the matrix A in a feasible standard form
problem is a linear combination of the other rows,
eliminate the corresponding equality constraint.
Then, the duals of Π1 and Π2 are equivalent.
Proof: Exercise.
4.3 The duality theorem
The duality theorem
▶ Weak duality: if x is a feasible solution to the primal problem and p is a
feasible solution to the dual problem, then
      p′ b ≤ c′ x.
▶ In particular, if p̄ is a dual feasible solution, then every x in the primal
feasible set P satisfies
      c′ x ≥ p̄′ b,
and hence so does any optimal primal solution x∗ :
      c′ x∗ ≥ p̄′ b.
Corollary 4.1
(a) If the optimal cost in the primal is −∞, then the dual
problem must be infeasible.
(b) If the optimal cost in the dual is +∞, then the primal
problem must be infeasible.
▶ Example: the primal problem below has optimal cost −∞, and its dual is infeasible:
Primal                          Dual
minimize   x1                   maximize   p1
subject to x1 ≤ 1.              subject to p1 ≤ 0
                                           p1 = 1.
Corollary 4.2
Let x and p be feasible solutions to the primal and the dual,
respectively, and suppose that p′ b = c′ x. Then, x and p are
optimal solutions to the primal and the dual, respectively.
▶ Indeed, if p∗ is dual feasible and c′ x∗ = p∗′ b for some primal feasible x∗ , then
every x in the primal feasible set P satisfies
      c′ x ≥ p∗′ b = c′ x∗ ,
so x∗ is optimal for the primal (and, symmetrically, p∗ is optimal for the dual).
▶ Each of the primal and the dual problems can have a finite optimal cost, be
unbounded, or be infeasible. This leads to nine possible combinations for the
primal and the dual.
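▶ The following sketch (assuming scipy; the random instance is mine, not from the book)
illustrates the combination in which both problems have a finite optimum: the primal and
dual optimal costs of a randomly generated feasible, bounded standard form problem coincide.

import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
m, n = 3, 6
A = rng.normal(size=(m, n))
b = A @ rng.uniform(1, 2, size=n)     # b = A x0 for some x0 > 0, so the primal is feasible
c = rng.uniform(0, 1, size=n)         # c >= 0, so the primal cost is bounded below by 0

primal = linprog(c, A_eq=A, b_eq=b)                              # min c'x, Ax = b, x >= 0
dual = linprog(-b, A_ub=A.T, b_ub=c, bounds=[(None, None)] * m)  # max p'b, A'p <= c, p free

print(primal.fun, -dual.fun)                  # the two optimal costs...
print(np.isclose(primal.fun, -dual.fun))      # ...agree up to solver tolerance: True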
The duality theorem
▶ Example: it is also possible for both the primal and the dual to be infeasible:
Primal                          Dual
minimize   x1 + 2x2             maximize   p1 + 3p2
subject to x1 + x2 = 1          subject to p1 + 2p2 = 1
           2x1 + 2x2 = 3.                  p1 + 2p2 = 2.
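▶ A quick solver check (scipy assumed; linprog reports status code 2 for an infeasible problem):

from scipy.optimize import linprog

primal = linprog([1, 2], A_eq=[[1, 1], [2, 2]], b_eq=[1, 3],
                 bounds=[(None, None)] * 2)                  # min x1 + 2x2, x free
dual = linprog([-1, -3], A_eq=[[1, 2], [1, 2]], b_eq=[1, 2],
               bounds=[(None, None)] * 2)                    # max p1 + 3p2, p free
print(primal.status, dual.status)   # 2 2 : both problems are reported infeasible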
Complementary slackness
▶ Let x and p be feasible solutions to the primal and the dual problem, respectively.
Then x and p are optimal solutions for the two respective problems if and only if
      pi (a′i x − bi ) = 0,    ∀ i,
      (cj − p′ Aj )xj = 0,     ∀ j.
Intuitive explanation:
▶ A constraint a′i x ≥ bi which is not active at an optimal solution can be removed
from the problem without affecting the optimal cost.
▶ There is no point in using such a constraint to obtain a lower bound, so we may
take the corresponding multiplier to be pi = 0.
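▶ As a concrete check (plain numpy, my own script), the primal feasible point x = (4, 1),
with cost 7, and the dual feasible point p = (0, 4, 1) from the motivating example of
Section 4.1 satisfy both conditions, confirming that both are optimal:

import numpy as np

A = np.array([[1.0, 1.0],       # x1 + x2 >= 2
              [0.0, 1.0],       # x2      >= 1
              [1.0, -1.0]])     # x1 - x2 >= 3
b = np.array([2.0, 1.0, 3.0])
c = np.array([1.0, 3.0])

x = np.array([4.0, 1.0])        # primal feasible: A @ x >= b
p = np.array([0.0, 4.0, 1.0])   # dual feasible: p >= 0 and p @ A == c (x is free,
                                # so the dual constraints hold with equality)
print(p * (A @ x - b))          # [0. 0. 0.]  ->  p_i (a_i'x - b_i) = 0 for all i
print((c - p @ A) * x)          # [0. 0.]     ->  (c_j - p'A_j) x_j = 0 for all j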
Complementary slackness
▶ Consider a primal problem in standard form. The first complementary slackness
condition is automatically satisfied, since a′i x = bi for every feasible x.
▶ The second condition requires that whenever xj > 0, the corresponding dual
constraint is active:
      p′ Aj = cj .
▶ If x is a nondegenerate optimal basic feasible solution with basis matrix B, these
conditions hold for all basic variables and determine the dual optimal solution
uniquely:
      p′ = c′B B−1 .
Optimal dual variables as marginal costs
▶ Consider a standard form problem
      minimize   c′ x
      subject to Ax = b
                 x ≥ 0,
and let B be the basis matrix of a nondegenerate optimal basic feasible solution,
so that xB = B−1 b > 0.
▶ If b is replaced by b + d for a sufficiently small vector d, the same basis remains
optimal, and the optimal cost changes by
      p′ d = p1 d1 + p2 d2 + · · · + pm dm ,
where p′ = c′B B−1 is the associated optimal solution of the dual.
▶ Thus, each pi can be interpreted as the marginal cost per unit increase of the
requirement bi .
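▶ A small numerical illustration (assuming scipy; applied, for concreteness, to the
motivating example of Section 4.1, whose unique dual optimum is p∗ = (0, 4, 1)):
perturbing b2 changes the optimal cost at the rate p∗2 = 4, while a small change in b1
leaves it unchanged, since p∗1 = 0.

import numpy as np
from scipy.optimize import linprog

def opt_cost(b):
    """Optimal cost of  min x1 + 3x2  s.t.  A x >= b  (x free); ">=" rows negated for linprog."""
    A = np.array([[1.0, 1.0], [0.0, 1.0], [1.0, -1.0]])
    res = linprog([1.0, 3.0], A_ub=-A, b_ub=-np.asarray(b, dtype=float),
                  bounds=[(None, None)] * 2)
    return res.fun

base = opt_cost([2, 1, 3])
print(base)                            # 7.0
print(opt_cost([2, 1.1, 3]) - base)    # 0.4 = p2* * 0.1
print(opt_cost([2.5, 1, 3]) - base)    # 0.0 = p1* * 0.5  (the first constraint is inactive)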
Standard form problems and the dual simplex method
▶ When the simplex method terminates with an optimal basis B, all reduced costs
are nonnegative:
      c̄′ = c′ − c′B B−1 A ≥ 0′ .
▶ Equivalently, the vector p′ = c′B B−1 satisfies p′ Aj ≤ cj for every j, that is,
      p′ A ≤ c′ ,
so p is a feasible solution to the dual.
▶ We can think of the simplex method as an algorithm that:
▶ maintains primal feasibility and
▶ works towards dual feasibility.
▶ A method with this property is called a primal algorithm.
Standard form problems and the dual simplex method
▶ An alternative is to
▶ start with a dual feasible solution and
▶ work towards primal feasibility.
▶ A method of this type is called a dual algorithm.
▶ In this section, we present a dual simplex method,
implemented in terms of the full tableau.
▶ We argue that it solves the dual problem, and we show that it
moves from one basic feasible solution of the dual problem to
another.
▶ An alternative implementation that only keeps track of the
matrix B−1 , instead of the entire tableau, is called a revised
dual simplex method. (Exercise 4.23)
The dual simplex method
▶ Consider a standard form primal problem and its dual:
Primal                          Dual
minimize   c′ x                 maximize   p′ b
subject to Ax = b               subject to p′ A ≤ c′ .
           x ≥ 0,
The dual simplex method
▶ We start with a basis matrix B whose reduced costs are all nonnegative,
      c̄ ≥ 0,
so that p′ = c′B B−1 satisfies p′ A ≤ c′ , i.e., it is dual feasible. The basic solution
xB = B−1 b may, however, have negative components (primal infeasibility).
▶ If xB ≥ 0, the current basic solution is also primal feasible, hence optimal.
▶ Otherwise, choose some ℓ with xB(ℓ) < 0; the variable xB(ℓ) exits the basis.
Consider the ℓ-th row of the tableau,
      (xB(ℓ) , v1 , . . . , vn ),
where v1 , . . . , vn are the entries of the ℓ-th row of B−1 A.
▶ If vi ≥ 0 for every i, the dual optimal cost is +∞ and the primal is infeasible.
▶ Otherwise, among the columns with vj < 0 (all of which have c̄j ≥ 0), pick one
that minimizes the ratio c̄j / |vj | ; the corresponding variable xj enters the basis.
▶ After pivoting on vj , each reduced cost becomes
      c̄i + (c̄j / |vj |) vi ≥ 0,
so dual feasibility is preserved; the dual cost strictly increases whenever c̄j > 0,
so in that case no basis can be repeated and the method terminates.
▶ Let us now consider the possibility that the reduced cost c̄j in
the pivot column is zero.
▶ In this case, the zeroth row of the tableau does not change
and the dual cost c′B B−1 b remains the same.
▶ The proof of termination given earlier does not apply and the
algorithm can cycle.
▶ This can be avoided by employing a suitable anticycling rule, such as a
lexicographic pivoting rule.
The geometry of the dual simplex method
▶ Consider the dual problem
      maximize   p′ b
      subject to p′ A ≤ c′ .
▶ A basis matrix B determines the dual basic solution p′ = c′B B−1 , characterized
by the m active dual constraints
      p′ AB(i) = cB(i) ,    i = 1, . . . , m.
▶ If this p also satisfies the remaining constraints p′ A ≤ c′ , it is a basic feasible
solution of the dual.
Example 4.8
▶ The tableaux below apply the dual simplex method to the problem
      minimize   x1 + x2
      subject to x1 + 2x2 − x3 = 2
                 x1 − x4 = 1
                 x1 , x2 , x3 , x4 ≥ 0,
with both equalities multiplied by −1, so that the starting basis (x3 , x4 ) corresponds
to the identity matrix: the reduced costs c̄ = (1, 1, 0, 0) are nonnegative (dual feasible),
while xB = (−2, −1) is negative (primal infeasible).
Initial tableau:
             x1     x2     x3     x4
   0          1      1      0      0
x3 = −2      −1     −2      1      0
x4 = −1      −1      0      0      1
First pivot (x3 exits, x2 enters):
             x1     x2     x3     x4
  −1         1/2     0     1/2     0
x2 = 1       1/2     1    −1/2     0
x4 = −1      −1      0      0      1
Second pivot (x4 exits, x1 enters):
             x1     x2     x3     x4
 −3/2         0      0     1/2    1/2
x2 = 1/2      0      1    −1/2    1/2
x1 = 1        1      0      0     −1
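▶ The two pivots above can be reproduced with a compact full-tableau implementation
(plain numpy; a sketch of my own, not code from the book), which prints the tableau
after each pivot:

import numpy as np

def dual_simplex(T, basis):
    """Full-tableau dual simplex.  T[0] = [-cost, reduced costs cbar >= 0];
    T[1 + i] = [x_B(i), i-th row of B^-1 A];  basis[i] = index of the i-th basic variable."""
    T = np.asarray(T, dtype=float).copy()
    while True:
        xB = T[1:, 0]
        if np.all(xB >= -1e-12):              # primal feasible -> optimal
            return T, basis
        l = int(np.argmin(xB))                # a row with x_B(l) < 0; x_B(l) exits
        row = T[1 + l, 1:]
        neg = row < -1e-12
        if not neg.any():                     # no v_j < 0: dual cost is +infinity
            raise ValueError("primal problem is infeasible")
        ratios = np.full(row.shape, np.inf)
        ratios[neg] = T[0, 1:][neg] / np.abs(row[neg])
        j = int(np.argmin(ratios))            # entering column: minimizes cbar_j / |v_j|
        T[1 + l] /= T[1 + l, 1 + j]           # pivot on v_j ...
        for r in range(T.shape[0]):
            if r != 1 + l:
                T[r] -= T[r, 1 + j] * T[1 + l]
        basis[l] = j
        print(np.round(T, 3))                 # print the tableau after each pivot

T0 = [[ 0,  1,  1, 0, 0],                     # zeroth row: -cost, reduced costs
      [-2, -1, -2, 1, 0],                     # x3 row
      [-1, -1,  0, 0, 1]]                     # x4 row
dual_simplex(T0, basis=[2, 3])                # pivots: x3 out / x2 in, then x4 out / x1 in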
The geometry of the dual simplex method
▶ This sequence of tableaux corresponds to the path
A − B − C.
▶ In the primal space, the path traces a sequence of infeasible
basic solutions until, at optimality, it becomes feasible.
▶ In the dual space, the algorithm behaves exactly like the
primal simplex method: it moves through a sequence of
(dual) basic feasible solutions, while at each step improving
the cost function.
Farkas’ lemma and linear inequalities
▶ Consider a set of standard form constraints
Ax = b
x ≥ 0.
▶ Suppose that there exists some vector p such that
      p′ A ≥ 0′   and   p′ b < 0.
▶ Then, for any x ≥ 0, we have
      0 ≤ p′ Ax   and   p′ b < 0,
so p′ Ax ≠ p′ b, and therefore Ax ≠ b.
▶ We conclude that if such a vector p exists, the standard form constraints have
no feasible solution.
▶ Such a vector p is a certificate of infeasibility.
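▶ For instance, for the infeasible system x1 + x2 = 1, 2x1 + 2x2 = 3, x ≥ 0 seen earlier in
this chapter, a certificate can be computed by solving an auxiliary LP (a sketch assuming
scipy; one valid certificate is p = (2, −1), although the solver may return another):

import numpy as np
from scipy.optimize import linprog

A = np.array([[1.0, 1.0],
              [2.0, 2.0]])
b = np.array([1.0, 3.0])

# Find p with p'A >= 0' and p'b <= -1 (any strictly negative value can be rescaled to -1).
# This is a pure feasibility problem, so the objective passed to linprog is zero.
res = linprog(c=[0.0, 0.0],
              A_ub=np.vstack([-A.T, b.reshape(1, -1)]),   # -A'p <= 0   and   b'p <= -1
              b_ub=[0.0, 0.0, -1.0],
              bounds=[(None, None)] * 2)
p = res.x
print(p, p @ A, p @ b)   # p'A >= 0' componentwise and p'b < 0: a certificate of infeasibility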
Farkas’ lemma and linear inequalities
▶ Geometrically, such a vector p defines a hyperplane {z | p′ z = 0} that separates b
from the columns of A (and hence from the cone they generate): every column Aj
lies on the side where p′ z ≥ 0, while b lies strictly on the side where p′ z < 0.