
ISyE/Math/CS/Stat 525

Linear Optimization

4. Duality theory

Prof. Alberto Del Pia


University of Wisconsin-Madison

Based on the book Introduction to Linear Optimization by D. Bertsimas and J.N. Tsitsiklis
Outline

Sec. 4.1 We discuss how we can obtain lower bounds for LP problems.
Sec. 4.2 We start with a LP problem, called the primal, and introduce
another LP problem, called the dual.
Sec. 4.3 We study the relation between these two problems.
Sec. 4.4 We interpret the meaning of an optimal solution to the dual
problem.
Sec. 4.5 We develop the dual simplex method.
Sec. 4.6 We study Farkas’ lemma, which uncovers the deeper structure
of LP.
4.1 Motivation
Motivation

Suppose we want to find a lower bound B for the optimal cost in


our LP problem.
▶ Consider a simple example:

minimize x1 + 3x2
subject to x1 + 3x2 ≥ 2.

▶ What is a lower bound for the optimal cost?


▶ It is B = 2.
▶ In fact, any feasible point has to satisfy the constraint

x1 + 3x2 ≥ 2.
Motivation

▶ Let’s try a different example:

minimize x1 + 3x2
subject to x1 + x2 ≥ 2
x2 ≥ 1.

▶ What is a lower bound for the optimal cost?


▶ We can obtain a bound by summing inequalities in this way:

x1 + x2 ≥ 2
+ 2 · (x2 ≥ 1)
= x1 + 3x2 ≥ 4

▶ We obtain the lower bound B = 4.
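The mechanical check behind this combination is easy to script. A minimal sketch in plain Python, with the constraint data hard-coded from the example above:

```python
# Multiply each ">=" constraint by a nonnegative multiplier and add them up:
# here p = (1, 2) for the constraints x1 + x2 >= 2 and x2 >= 1.
constraints = [([1, 1], 2), ([0, 1], 1)]   # (coefficients, right-hand side)
p = [1, 2]                                  # nonnegative multipliers

lhs = [sum(pi * a[j] for pi, (a, _) in zip(p, constraints)) for j in range(2)]
B = sum(pi * b for pi, (_, b) in zip(p, constraints))

# The combined inequality is exactly x1 + 3x2 >= 4, so B = 4 is a lower bound.
print(lhs, B)
```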


Motivation
▶ Let’s see a more interesting example:

minimize x1 + 3x2
subject to x1 + x2 ≥ 2
x2 ≥ 1
x1 − x2 ≥ 3.
▶ How can we obtain a lower bound?
p1 · (x1 + x2 ≥ 2)
+ p2 · (x2 ≥ 1)
+ p3 · (x1 − x2 ≥ 3)
= x1 + 3x2 ≥ B,

provided the multipliers satisfy

p1 + p3 = 1
p1 + p2 − p3 = 3
p1 , p2 , p3 ≥ 0,

which gives B = 2p1 + p2 + 3p3 .
▶ There are many different ways to obtain lower bounds:
▶ p1 = 1, p2 = 2, p3 = 0 gives us the lower bound B = 4.
▶ p1 = 0, p2 = 4, p3 = 1 gives us the lower bound B = 7.
▶ A natural question is: What is the best lower bound for the
optimal cost that we can obtain in this way?
▶ The best lower bound B can be obtained by maximizing
2p1 + p2 + 3p3 over the constraints that we have derived.
Motivation

▶ We obtain:

maximize 2p1 + p2 + 3p3


subject to p1 + p3 = 1
p1 + p2 − p3 = 3
p1 , p2 , p3 ≥ 0.
▶ This newly derived optimization problem is a LP problem and
is called the dual LP problem.
Motivation

▶ The original problem is called the primal LP problem.

Primal Dual
minimize 1x1 + 3x2 maximize 2p1 + 1p2 + 3p3
subject to 1x1 +1x2 ≥ 2 subject to 1p1 +0p2 +1p3 = 1
0x1 +1x2 ≥ 1 1p1 +1p2 −1p3 = 3
1x1 −1x2 ≥ 3, p1 , p2 , p3 ≥ 0.
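We can verify this pair numerically. The dual point p = (0, 4, 1) seen earlier gives the bound B = 7, and the primal point x = (4, 1) (our own choice of certificate, not from the text) attains the same cost, so neither bound can be improved. A sketch with the data hard-coded:

```python
c = [1, 3]                     # primal objective
A = [[1, 1], [0, 1], [1, -1]]  # rows of the ">=" constraints
b = [2, 1, 3]

x = [4, 1]                     # a primal feasible point (our certificate)
p = [0, 4, 1]                  # the dual feasible point from the slides

# primal feasibility: Ax >= b
assert all(sum(ai * xi for ai, xi in zip(row, x)) >= bi for row, bi in zip(A, b))
# dual feasibility: p >= 0 and p'A = c'
assert all(pi >= 0 for pi in p)
assert [sum(p[i] * A[i][j] for i in range(3)) for j in range(2)] == c
# equal costs: c'x = 7 = p'b, so both points must be optimal
assert sum(ci * xi for ci, xi in zip(c, x)) == sum(pi * bi for pi, bi in zip(p, b)) == 7
```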
Motivation
▶ Consider a more general primal LP problem:

minimize c′ x
subject to Ax ≥ b.

▶ We denote by p the dual variables associated to the


constraints Ax ≥ b.
▶ Note that the variables p need to be nonnegative.
▶ Each nonnegative vector p yields the inequality

(p′ A)x ≥ p′ b.

▶ Thus to obtain lower bounds we just need to consider the dual


LP problem:
maximize p′ b
subject to p′ A = c′ ,
p ≥ 0.
Motivation: standard form
▶ Now consider a primal LP problem that is in standard form:

minimize c′ x
subject to Ax = b
x ≥ 0.
▶ We denote by p the dual variables associated to Ax = b and
by q the dual variables associated to x ≥ 0.
▶ Variables q must be nonnegative, but variables p can be free.
▶ Each pair of vectors p, q with q ≥ 0 yields the inequality

(p′ A + q′ I)x ≥ p′ b + q′ 0.
▶ Thus to obtain lower bounds we just need to consider the dual
LP problem:
maximize p′ b + q′ 0
subject to p′ A + q′ I = c′
q ≥ 0.
Motivation: standard form

maximize p′ b + q′ 0
subject to p′ A + q′ I = c′
q ≥ 0.
▶ We can now drop the q variables and obtain the equivalent
dual LP problem:

maximize p′ b
subject to p′ A ≤ c′ .

▶ We obtain the following pair of primal and dual problems:

Primal                          Dual

minimize   c′x                  maximize   p′b
subject to Ax = b               subject to p′A ≤ c′ .
           x ≥ 0,
Motivation
▶ We have seen two pairs of primal and dual problems.

Primal                          Dual

minimize   c′x                  maximize   p′b
subject to Ax ≥ b               subject to p ≥ 0
           x free,                         p′A = c′ .

Primal                          Dual

minimize   c′x                  maximize   p′b
subject to Ax = b               subject to p free
           x ≥ 0,                          p′A ≤ c′ .

▶ Next, we give the general definition that applies to a LP


problem in general form.
4.2 The dual problem
The dual problem
▶ Let A be a matrix with rows a′i and columns Aj .
▶ Given a primal problem, on the left, its dual is defined to be
the problem on the right:

Primal                               Dual

min  c′x                             max  p′b
s. t. a′i x ≥ bi ,  i ∈ M1 ,         s. t. pi ≥ 0,       i ∈ M1 ,
      a′i x ≤ bi ,  i ∈ M2 ,               pi ≤ 0,       i ∈ M2 ,
      a′i x = bi ,  i ∈ M3 ,               pi free,      i ∈ M3 ,
      xj ≥ 0,       j ∈ N1 ,               p′Aj ≤ cj ,   j ∈ N1 ,
      xj ≤ 0,       j ∈ N2 ,               p′Aj ≥ cj ,   j ∈ N2 ,
      xj free,      j ∈ N3 ,               p′Aj = cj ,   j ∈ N3 .

▶ For each constraint in the primal (other than the sign
constraints), we introduce a variable in the dual problem.
▶ For each variable in the primal, we introduce a constraint in
the dual.
▶ Depending on whether the primal constraint is an equality
or inequality constraint, the corresponding dual variable is
either free or sign-constrained, respectively.
▶ Depending on whether a variable in the primal problem is
free or sign-constrained, we have an equality or inequality
constraint, respectively, in the dual problem.
The dual problem

We summarize these relations:

PRIMAL (minimize)               DUAL (maximize)

constraints   ≥ bi              variables    ≥ 0
              ≤ bi                           ≤ 0
              = bi                           free

variables     ≥ 0               constraints  ≤ cj
              ≤ 0                            ≥ cj
              free                           = cj
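The table above can be encoded directly. A hedged sketch (the string labels are our own shorthand, not notation from the text):

```python
# Primal (minimize) ingredient -> dual (maximize) counterpart,
# transcribing the summary table.
constraint_to_variable = {
    ">= b_i": "p_i >= 0",
    "<= b_i": "p_i <= 0",
    "= b_i":  "p_i free",
}
variable_to_constraint = {
    "x_j >= 0": "p'A_j <= c_j",
    "x_j <= 0": "p'A_j >= c_j",
    "x_j free": "p'A_j = c_j",
}

# An equality constraint such as -x1 + 3x2 = 5 yields a free dual variable:
print(constraint_to_variable["= b_i"])   # p_i free
```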
The dual problem

▶ If we start with a maximization problem, we can always


convert it into an equivalent minimization problem, and then
form its dual according to the rules we have described.
▶ However, to avoid confusion, we will adhere to the convention
that the primal is a minimization problem, and its dual is a
maximization problem.
▶ Finally, we will refer to the objective function in the dual
problem as a “cost” that is being maximized.
Example 4.1
▶ Consider the primal problem:

Primal
min x1 + 2x2 + 3x3
s. t. − x1 + 3x2 = 5
2x1 − x2 + 3x3 ≥ 6
x3 ≤ 4
x1 ≥ 0
x2 ≤ 0
x3 free
Example 4.1
▶ Its dual is shown on the right:

Primal Dual
min x1 + 2x2 + 3x3 max 5p1 + 6p2 + 4p3
s. t. − x1 + 3x2 = 5 s. t. p1 free
2x1 − x2 + 3x3 ≥ 6 p2 ≥ 0
x3 ≤ 4 p3 ≤ 0
x1 ≥ 0 − p1 + 2p2 ≤ 1
x2 ≤ 0 3p1 − p2 ≥ 2
x3 free 3p2 + p3 = 3
Example 4.1
▶ We transform the dual into an equivalent minimization
problem (and we multiply the three last constraints by −1).

Dual
− min − 5p1 − 6p2 − 4p3
s. t. p1 free
p2 ≥ 0
p3 ≤ 0
p1 − 2p2 ≥ −1
− 3p1 + p2 ≤ −2
− 3p2 − p3 = −3
Example 4.1
▶ Then, on the left, we show its dual:

Dual of dual                         Dual

− max − x1 − 2x2 − 3x3               − min − 5p1 − 6p2 − 4p3
s. t.   x1 − 3x2 = −5                s. t.   p1 free
      − 2x1 + x2 − 3x3 ≤ −6                  p2 ≥ 0
      − x3 ≥ −4                              p3 ≤ 0
        x1 ≥ 0                               p1 − 2p2 ≥ −1
        x2 ≤ 0                             − 3p1 + p2 ≤ −2
        x3 free                            − 3p2 − p3 = −3
Example 4.1
▶ We transform it into an equivalent minimization problem
(and we multiply the three first constraints by −1).

Example 4.1
▶ We have obtained the primal problem we started with!

Dual of dual
min x1 + 2x2 + 3x3
s. t. − x1 + 3x2 = 5
2x1 − x2 + 3x3 ≥ 6
x3 ≤ 4
x1 ≥ 0
x2 ≤ 0
x3 free
The dual problem
▶ The first primal problem considered in Example 4.1 had all of
the ingredients of a general LP problem.
▶ This suggests that the conclusion reached at the end of the
example should hold in general.

Theorem 4.1
If we transform the dual into an equivalent minimization
problem and then form its dual, we obtain a problem
equivalent to the original problem.

Proof: Exercise.
Hint: Follow the steps performed in Example 4.1 with abstract
symbols replacing specific numbers.

▶ A compact statement that is often used to describe


Theorem 4.1 is: “The dual of the dual is the primal.”
The dual problem

▶ Any LP problem can be manipulated into one of several


equivalent forms, for example:
▶ By introducing a slack variable to turn an inequality constraint
into an equality constraint.
▶ By using the difference of two nonnegative variables to replace
a single free variable.
▶ Each equivalent form leads to a somewhat different form for
the dual problem.
▶ Nevertheless, the examples that follow indicate that the duals
of equivalent problems are equivalent.
Example 4.2
Consider the pair of primal and dual problems:

Primal                          Dual

minimize   c′x                  maximize   p′b
subject to Ax ≥ b               subject to p ≥ 0
           x free,                         p′A = c′ .

▶ We transform the primal problem by introducing surplus
variables and then obtain its dual:

Primal                          Dual

minimize   c′x + 0′s            maximize   p′b
subject to Ax − s = b           subject to p free
           x free                          p′A = c′
           s ≥ 0,                        − p ≤ 0.
Example 4.2
▶ Alternatively, we take the primal problem and replace x by
nonnegative variables, and then obtain its dual:

Primal                              Dual

minimize   c′x+ − c′x−              maximize   p′b
subject to Ax+ − Ax− ≥ b            subject to p ≥ 0
           x+ ≥ 0                              p′A ≤ c′
           x− ≥ 0,                           − p′A ≤ −c′ .
Example 4.2

▶ Note that we have three equivalent forms of the primal.


▶ The duals of the three variants of the primal problem are
also equivalent (exercise).
▶ The next example is in the same spirit and examines the
effect of removing redundant equality constraints in a
standard form problem.
Example 4.3
Consider a feasible standard form problem, and its dual:

Primal                          Dual

minimize   c′x                  maximize   p′b
subject to Ax = b               subject to p′A ≤ c′ .
           x ≥ 0,

▶ If the last equality is redundant, then it can be eliminated.
▶ This means that there exist scalars γ1 , . . . , γm−1 such that

    am = ∑_{i=1}^{m−1} γi ai ,        bm = ∑_{i=1}^{m−1} γi bi .

▶ Note that the dual cost p′b is equal to

    p′b = ∑_{i=1}^{m−1} pi bi + pm bm = ∑_{i=1}^{m−1} pi bi + pm ∑_{i=1}^{m−1} γi bi = ∑_{i=1}^{m−1} (pi + γi pm ) bi .

▶ Similarly, p′A in the dual constraints can be rewritten

    p′A = ∑_{i=1}^{m−1} pi a′i + pm a′m = ∑_{i=1}^{m−1} pi a′i + pm ∑_{i=1}^{m−1} γi a′i = ∑_{i=1}^{m−1} (pi + γi pm ) a′i .

Let qi = pi + γi pm , i = 1, . . . , m − 1. The dual problem becomes:

    maximize   ∑_{i=1}^{m−1} qi bi
    subject to ∑_{i=1}^{m−1} qi a′i ≤ c′ .

▶ If we had first eliminated the last (redundant) constraint


and then written the dual, we would have obtained exactly
this dual.
The dual problem
The conclusions of the preceding two examples are summarized
and generalized by the following result.

Theorem 4.2
Suppose that we have transformed a LP problem Π1 to
another LP problem Π2 , by a sequence of transformations of
the following types:
(a) Replace a free variable with the difference of two
nonnegative variables.
(b) Replace an inequality constraint by an equality constraint
involving a nonnegative slack variable.
(c) If some row of the matrix A in a feasible standard form
problem is a linear combination of the other rows,
eliminate the corresponding equality constraint.
Then, the duals of Π1 and Π2 are equivalent.

Proof: Exercise.
4.3 The duality theorem
The duality theorem

▶ We saw in Section 4.1 that for some LP problems, the cost of


any dual solution provides a lower bound for the optimal cost.
▶ We now show that this property is true in general.

Theorem 4.3 (Weak duality)


If x is a feasible solution to the primal problem and p is a
feasible solution to the dual problem, then

p′ b ≤ c′ x.

Let’s prove it!
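For the primal–dual pair in standard form, the core of the argument fits in one line; a sketch (the proof for the general form must also handle the other sign combinations):

```latex
% x primal feasible: Ax = b,\ x \ge 0; \quad p dual feasible: p'A \le c'.
p'b \;=\; p'(Ax) \;=\; (p'A)\,x \;\le\; c'x,
% where the last step uses x \ge 0 together with c' - p'A \ge 0'.
```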


The duality theorem: a geometric view
Primal                          Dual

minimize   c′x                  maximize   p′b
subject to Ax ≥ b               subject to p ≥ 0
           x free,                         p′A = c′ .

▶ Let p̄ be dual feasible.
▶ From weak duality,

    c′x ≥ p̄′b,

for every x feasible.
▶ In particular,

    c′x∗ ≥ p̄′b,

where x∗ is optimal for the primal.

[Figure: the feasible set P, the cost vector c, and the optimal vertex x∗.]

The duality theorem

▶ The weak duality theorem provides some useful information


about the relation between the primal and the dual.
▶ We have, for example, the following corollary.

Corollary 4.1
(a) If the optimal cost in the primal is −∞, then the dual
problem must be infeasible.
(b) If the optimal cost in the dual is +∞, then the primal
problem must be infeasible.

Let’s prove it!


Example

Primal                          Dual

minimize   x1                   maximize   p1
subject to x1 ≤ 1.              subject to p1 ≤ 0
                                           p1 = 1.

▶ The optimal cost of the primal is −∞.
▶ The dual is infeasible.
The duality theorem

▶ Another important corollary of the weak duality theorem is


the following.

Corollary 4.2
Let x and p be feasible solutions to the primal and the dual,
respectively, and suppose that p′ b = c′ x. Then, x and p are
optimal solutions to the primal and the dual, respectively.

Let’s prove it!


The duality theorem

▶ The next theorem is the central result on LP duality.

Theorem 4.4 (Strong duality)


If a LP problem has an optimal solution, so does its dual, and
the respective optimal costs are equal.

Let’s prove it!


The duality theorem: a geometric view
Primal                          Dual

minimize   c′x                  maximize   p′b
subject to Ax ≥ b               subject to p ≥ 0
           x free,                         p′A = c′ .

▶ Let p∗ be optimal for the dual.
▶ From weak duality,

    c′x ≥ p∗′b,

for every x feasible.
▶ From strong duality,

    p∗′b = c′x∗ ,

where x∗ is optimal for the primal.

[Figure: the feasible set P, the cost vector c, and the optimal vertex x∗.]
The duality theorem

Recall that in a LP problem, exactly one of the following three


possibilities can occur:

(a) There is an optimal solution.


(b) The problem is “unbounded”; that is,
▶ the optimal cost is −∞ (for minimization problems), or
▶ the optimal cost is +∞ (for maximization problems).
(c) The problem is infeasible.

This leads to nine possible combinations for the primal and the
dual.
The duality theorem

                   Finite optimum   Unbounded    Infeasible

Finite optimum     Possible         Impossible   Impossible
Unbounded          Impossible       Impossible   Possible
Infeasible         Impossible       Possible     Possible

▶ By the strong duality theorem, if one problem has an


optimal solution, so does the other.
▶ Furthermore, Corollary 4.1 implies that if one problem is
unbounded, the other must be infeasible.
▶ All the remaining cases can occur.
▶ We now see an example where both problems are infeasible.
Example 4.5

Primal Dual
minimize x1 + 2x2 maximize p1 + 3p2
subject to x1 + x2 = 1 subject to p1 + 2p2 = 1
2x1 + 2x2 = 3. p1 + 2p2 = 2.

▶ The primal is infeasible.


▶ The dual is also infeasible.
The duality theorem

▶ There is another interesting relation between the primal and


the dual which is known as Clark’s theorem.
▶ It asserts that unless both problems are infeasible, at least one
of them must have an unbounded feasible set (Exercise 4.21).
Complementary slackness
Complementary slackness

An important relation between primal and dual optimal solutions is


provided by the complementary slackness conditions.

Theorem 4.5 (Complementary slackness)


Let x and p be feasible solutions to the primal and the dual
problem, respectively. The vectors x and p are optimal
solutions for the two respective problems if and only if:

pi (a′i x − bi ) = 0, ∀i,

(cj − p′ Aj )xj = 0, ∀j.

Let’s prove it!


▶ If the primal problem is in standard form, the first


complementary slackness condition

pi (a′i x − bi ) = 0, ∀i,

is automatically satisfied by every feasible solution.


▶ If the primal problem is not in standard form and has a


constraint like
a′i x ≥ bi ,
the corresponding complementary slackness condition
asserts that if the constraint is not active, then the dual
variable pi is zero.
Intuitive explanation:
▶ A constraint a′i x ≥ bi which is not active at an optimal
solution can be removed from the problem without affecting
the optimal cost.
▶ There is no point in using such a constraint to obtain a
lower bound.
Complementary slackness

We now consider a primal problem in standard form.


▶ Assume that a nondegenerate optimal basic feasible solution is
known.
▶ Then, the complementary slackness conditions determine a
unique feasible solution to the dual problem.
▶ This dual solution is also optimal.
▶ Before showing it, we illustrate this fact in the next example.
Example 4.6
Consider a problem in standard form and its dual:
Primal Dual
minimize 13x1 + 10x2 + 6x3 maximize 8p1 + 3p2
subject to 5x1 + x2 + 3x3 = 8 subject to 5p1 + 3p2 ≤ 13
3x1 + x2 = 3 p1 + p2 ≤ 10
x1 , x2 , x3 ≥ 0, 3p1 ≤ 6.

Optimal solution: x∗ = (1, 0, 1).


▶ The vector x∗ = (1, 0, 1) is a nondegenerate optimal basic
feasible solution to the primal problem of cost 19.
▶ The condition pi (a′i x∗ − bi ) = 0 is automatically satisfied for
each i, since the primal is in standard form.
▶ The condition (cj − p′ Aj )x∗j = 0 is clearly satisfied for j = 2,
because x∗2 = 0.
▶ However, since x∗1 > 0 and x∗3 > 0, we obtain

5p1 + 3p2 = 13,


3p1 = 6,

which we can solve to obtain p1 = 2 and p2 = 1.


Optimal solutions: x∗ = (1, 0, 1) and p∗ = (2, 1).


▶ The solution (p1 , p2 ) = (2, 1) is dual feasible, and it has
cost 19, which is the same as the cost of x∗ .
▶ By Corollary 4.2 we have that:
▶ x∗ is an optimal solution of the primal, as claimed earlier,
▶ (p1 , p2 ) = (2, 1) is an optimal solution of the dual.
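The computation in this example can be replayed in a few lines of plain Python (all data hard-coded from Example 4.6):

```python
# Active dual constraints from complementary slackness (x1* > 0, x3* > 0):
# 5 p1 + 3 p2 = 13 and 3 p1 = 6.
p1 = 6 / 3
p2 = (13 - 5 * p1) / 3
assert (p1, p2) == (2.0, 1.0)

# Check dual feasibility of p = (2, 1) against all three constraints ...
assert 5 * p1 + 3 * p2 <= 13 and p1 + p2 <= 10 and 3 * p1 <= 6
# ... and equality of the primal and dual costs (Corollary 4.2).
primal_cost = 13 * 1 + 10 * 0 + 6 * 1
dual_cost = 8 * p1 + 3 * p2
assert primal_cost == dual_cost == 19
```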
Complementary slackness
We now generalize the above example.
▶ Suppose that x is a nondegenerate optimal basic feasible
solution to a primal problem in standard form.
▶ Suppose that xj is a basic variable. By nondegeneracy, we
have xj > 0.
▶ Then, the complementary slackness condition
(cj − p′ Aj )xj = 0 yields

p′ Aj = cj .

▶ By considering all basic variables we obtain the system of


equations
p′ B = c′B .
▶ Since B is invertible, this system has a unique solution:

p′ = c′B B−1 .
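Concretely, for the basis of Example 4.6 (basic columns A1 and A3), solving p′B = c′B, i.e. B′p = cB, recovers the dual solution found earlier. A sketch using NumPy:

```python
import numpy as np

# Basic columns A1 = (5, 3) and A3 = (3, 0) from Example 4.6.
B = np.array([[5.0, 3.0],
              [3.0, 0.0]])
c_B = np.array([13.0, 6.0])      # costs of the basic variables x1, x3

# p' = c_B' B^{-1}  is equivalent to solving  B' p = c_B.
p = np.linalg.solve(B.T, c_B)
print(p)   # [2. 1.]
```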
Complementary slackness

▶ The optimality and nondegeneracy of the primal solution


imply that the vector of reduced costs is nonnegative:

c̄′ = c′ − c′B B−1 A = c′ − p′ A ≥ 0.

▶ This implies that the solution p is dual feasible.


▶ Then p is an optimal dual solution, because we have been
enforcing complementary slackness.
Complementary slackness
This reasoning shows that the condition on the reduced costs
being non-negative is equivalent to dual feasibility.
▶ In Example 4.6:
Primal Dual
minimize 13x1 + 10x2 + 6x3 maximize 8p1 + 3p2
subject to 5x1 + x2 + 3x3 = 8 subject to 5p1 + 3p2 ≤ 13
3x1 + x2 = 3 p1 + p2 ≤ 10
x1 , x2 , x3 ≥ 0, 3p1 ≤ 6.
▶ The vector x = (0, 3, 5/3) is a nondegenerate non-optimal

basic feasible solution to the primal problem of cost 40.


▶ We use the complementary slackness conditions to
construct a basic solution to the dual.
▶ Since x2 > 0 and x3 > 0, we have p1 + p2 = 10 and
3p1 = 6, thus we obtain p = (2, 8).
▶ This is NOT dual feasible, since 5p1 + 3p2 = 34 > 13.
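The failed feasibility check can be replayed the same way (data hard-coded from Example 4.6):

```python
# Basic variables x2, x3 give the active constraints p1 + p2 = 10, 3 p1 = 6.
p1 = 6 / 3
p2 = 10 - p1
assert (p1, p2) == (2.0, 8.0)

# The remaining dual constraint 5 p1 + 3 p2 <= 13 is violated:
assert 5 * p1 + 3 * p2 == 34 > 13
# so this basic dual solution is infeasible, mirroring the fact that the
# primal basic feasible solution x = (0, 3, 5/3) is not optimal.
```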
4.4 Optimal dual variables as marginal costs
Optimal dual variables as marginal costs
We now see how dual variables can be interpreted as prices.
▶ Consider the standard form problem

minimize c′ x
subject to Ax = b
x ≥ 0.

▶ We assume that the rows of A are linearly independent and


that there is a nondegenerate optimal basic feasible solution
x∗ .
▶ Let B be the corresponding basis matrix and let

xB = B−1 b

be the vector of basic variables, which is positive, by


nondegeneracy.
Optimal dual variables as marginal costs

▶ We now replace b by b + d, where d is a small perturbation


vector:
minimize c′ x
subject to Ax = b + d
x ≥ 0.
▶ Since B−1 b > 0, we also have B−1 (b + d) > 0, as long as d is
small. How small?
▶ This implies that the same basis leads to a basic feasible
solution of the perturbed problem as well.
Optimal dual variables as marginal costs

▶ Perturbing the right-hand side vector b has no effect on the


reduced costs associated with this basis

c̄′ = c′ − c′B B−1 A.

▶ By the optimality and nondegeneracy of x∗ in the original


problem, the vector of reduced costs c̄ is nonnegative.
▶ This establishes that the same basis is optimal for the
perturbed problem as well.
Optimal dual variables as marginal costs

▶ The optimal cost in the perturbed problem is

c′B B−1 (b + d) = c′B B−1 b + c′B B−1 d = c′B B−1 b + p′ d,

where p′ = c′B B−1 is an optimal solution to the dual problem.


▶ Therefore, a small change of d in the right-hand side vector b
results in a change in the optimal cost of

p′ d = p1 d1 + p2 d2 + · · · + pm dm .

▶ We conclude that each component pi of the optimal dual


vector can be interpreted as the marginal cost per unit
increase of the ith requirement bi .
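We can sanity-check this interpretation on Example 4.6. A sketch (the perturbation d below is an arbitrary small vector of our own choosing):

```python
import numpy as np

# Basis of Example 4.6: basic columns A1 = (5, 3), A3 = (3, 0).
B = np.array([[5.0, 3.0],
              [3.0, 0.0]])
c_B = np.array([13.0, 6.0])
b = np.array([8.0, 3.0])

p = np.linalg.solve(B.T, c_B)     # optimal dual vector (2, 1)

d = np.array([0.1, -0.05])        # a small perturbation of b
x_B = np.linalg.solve(B, b + d)   # same basis, perturbed right-hand side
assert np.all(x_B > 0)            # the basis is still primal feasible

new_cost = c_B @ x_B
assert np.isclose(new_cost, 19 + p @ d)   # cost changes by exactly p'd
```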
Optimal dual variables as marginal costs
We conclude with yet another interpretation of duality, for
standard form problems.
Primal                                   Dual

minimize   c′x                           maximize   p′b
subject to ∑_{j=1}^{n} Aj xj = b         subject to p′Aj ≤ cj , j = 1, . . . , n.
           x ≥ 0,
▶ In order to develop some concrete intuition, we phrase our
discussion in terms of the diet problem (Example 1.3 in
Section 1.1).
▶ We interpret each vector Aj as the nutritional content of
the jth available food, and view bi as the content of
nutrient i in an ideal food that we wish to synthesize.
▶ Let us interpret pi as the “fair price” per unit of the ith
nutrient: how much we are willing to pay for it.
▶ Then the “fair price” of the jth food is p′ Aj .
▶ However, the jth food is priced cj , and we have

p′ Aj ≤ cj

for each food j by dual feasibility.


Complementary slackness asserts that:
▶ If xj > 0, then p′ Aj = cj :
Every food j which is used to synthesize the ideal food, is
“priced fairly”.
▶ If p′ Aj < cj , then xj = 0:
Every food j which is not “priced fairly”, is not used to
synthesize the ideal food.
▶ The price of the ideal food is c′ x∗ , where x∗ is an optimal
solution to the primal problem, while its “fair price” is p′ b.
▶ The duality relation
c′ x∗ = p′ b
states that the ideal food should also be “priced fairly”.
4.5 Standard form problems and the dual simplex
method
Standard form problems and the dual simplex method

▶ In the proof of the strong duality theorem, we considered a


primal problem in standard form and its dual:
Primal                          Dual

minimize   c′x                  maximize   p′b
subject to Ax = b               subject to p′A ≤ c′ .
           x ≥ 0,
▶ Then, we considered the simplex method applied to the
primal.
Standard form problems and the dual simplex method

▶ We defined a dual vector

p′ = c′B B−1 .

▶ We then noted that the primal optimality condition

c̄′ = c′ − c′B B−1 A ≥ 0′

is the same as the dual feasibility condition

p′ A ≤ c′ .
▶ We can think of the simplex method as an algorithm that:
▶ maintains primal feasibility and
▶ works towards dual feasibility.
▶ A method with this property is called a primal algorithm.
Standard form problems and the dual simplex method

▶ An alternative is to
▶ start with a dual feasible solution and
▶ work towards primal feasibility.
▶ A method of this type is called a dual algorithm.
▶ In this section, we present a dual simplex method,
implemented in terms of the full tableau.
▶ We argue that it solves the dual problem, and we show that it
moves from one basic feasible solution of the dual problem to
another.
▶ An alternative implementation that only keeps track of the
matrix B−1 , instead of the entire tableau, is called a revised
dual simplex method. (Exercise 4.23)
The dual simplex method
The dual simplex method

▶ We consider a problem in standard form, under the usual


assumption that A has linearly independent rows.

Primal                          Dual

minimize   c′x                  maximize   p′b
subject to Ax = b               subject to p′A ≤ c′ .
           x ≥ 0,
The dual simplex method

▶ Let B be a basis matrix, consisting of m linearly independent


columns of A, and consider the corresponding tableau

−c′B xB      c̄1      · · ·    c̄n

xB(1)          |                |
  ...       B−1 A1   · · ·   B−1 An
xB(m)          |                |
▶ The basis matrix B defines:
▶ A basic solution to the primal problem xB = B−1 b.
▶ A dual vector p′ = c′B B−1 .

▶ We do not require B−1 b to be nonnegative.


▶ This means that we have a basic, but not necessarily
feasible solution to the primal problem.
The dual simplex method
−c′B xB │   c̄1    ···    c̄n
────────┼─────────────────────
 xB(1)  │    |            |
   ⋮    │ B−1 A1  ···  B−1 An
 xB(m)  │    |            |

▶ However, we assume that

c̄ ≥ 0.

▶ Equivalently, the vector p′ satisfies

p′ A ≤ c′

and we have a feasible solution to the dual problem.


The dual simplex method
−c′B xB │   c̄1    ···    c̄n
────────┼─────────────────────
 xB(1)  │    |            |
   ⋮    │ B−1 A1  ···  B−1 An
 xB(m)  │    |            |

▶ The cost of this dual feasible solution is

p′ b = c′B B−1 b = c′B xB .

▶ It is the negative of the entry at the upper left corner of the


tableau.
The dual simplex method
−c′B xB │   c̄1    ···    c̄n
────────┼─────────────────────
 xB(1)  │    |            |
   ⋮    │ B−1 A1  ···  B−1 An
 xB(m)  │    |            |

▶ If B−1 b ≥ 0: We also have a primal feasible solution with


the same cost, and optimal solutions to both problems have
been found.
▶ If B−1 b ̸≥ 0: we perform a change of basis in a manner we
describe next.
The dual simplex method
−c′B xB │    c̄1      ···     c̄j      ···     c̄n
────────┼────────────────────────────────────────────
 xB(1)  │ (B−1 A1)1  ···  (B−1 Aj)1  ···  (B−1 An)1
   ⋮    │     ⋮              ⋮               ⋮
 xB(ℓ)  │    v1      ···     vj      ···     vn
   ⋮    │     ⋮              ⋮               ⋮
 xB(m)  │ (B−1 A1)m  ···  (B−1 Aj)m  ···  (B−1 An)m
▶ We find some ℓ such that xB(ℓ) < 0 and consider the ℓth
row of the tableau, called the pivot row.
▶ This row is of the form

(xB(ℓ) , v1 , . . . , vn ),

where vi is the ℓth component of B−1 Ai .


▶ For each i with vi < 0 (if such i exist), we form the ratio c̄i /|vi |.
The dual simplex method
−c′B xB │    c̄1      ···     c̄j      ···     c̄n
────────┼────────────────────────────────────────────
 xB(1)  │ (B−1 A1)1  ···  (B−1 Aj)1  ···  (B−1 An)1
   ⋮    │     ⋮              ⋮               ⋮
 xB(ℓ)  │    v1      ···     vj      ···     vn
   ⋮    │     ⋮              ⋮               ⋮
 xB(m)  │ (B−1 A1)m  ···  (B−1 Aj)m  ···  (B−1 An)m
▶ Let j be an index for which this ratio is smallest; that is,
vj < 0 and

c̄j /|vj | = min{ c̄i /|vi | : vi < 0 }.
▶ We call the corresponding entry vj the pivot element.
▶ The jth column of the tableau is called the pivot column.
▶ Note that xj must be a nonbasic variable, since the jth
column in the tableau contains the negative element vj .
The dual simplex method
−c′B xB │    c̄1      ···     c̄j      ···     c̄n
────────┼────────────────────────────────────────────
 xB(1)  │ (B−1 A1)1  ···  (B−1 Aj)1  ···  (B−1 An)1
   ⋮    │     ⋮              ⋮               ⋮
 xB(ℓ)  │    v1      ···     vj      ···     vn
   ⋮    │     ⋮              ⋮               ⋮
 xB(m)  │ (B−1 A1)m  ···  (B−1 Aj)m  ···  (B−1 An)m
▶ We then perform a change of basis: column Aj enters the
basis and column AB(ℓ) exits.
▶ This change of basis (or pivot) is effected exactly as in the
primal simplex method:
We add to each row of the tableau a multiple of the pivot
row so that all entries in the pivot column are set to zero,
with the exception of the pivot element which is set to 1.
The dual simplex method
−c′B xB │    c̄1      ···     c̄j      ···     c̄n
────────┼────────────────────────────────────────────
 xB(1)  │ (B−1 A1)1  ···  (B−1 Aj)1  ···  (B−1 An)1
   ⋮    │     ⋮              ⋮               ⋮
 xB(ℓ)  │    v1      ···     vj      ···     vn
   ⋮    │     ⋮              ⋮               ⋮
 xB(m)  │ (B−1 A1)m  ···  (B−1 Aj)m  ···  (B−1 An)m
▶ In particular, in order to set the reduced cost in the pivot
column to zero, we multiply the pivot row by c̄j /|vj | and add it
to the zeroth row.
▶ For every i, the new value of c̄i is equal to

c̄i + (c̄j /|vj |) vi ,

which is nonnegative because of the way that j was


selected. Show it!
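The "Show it!" step admits a one-line verification (our own derivation, using the ratio test that defined j):

```latex
% If v_i \ge 0, then \bar c_i + \tfrac{\bar c_j}{|v_j|} v_i \ge \bar c_i \ge 0,
% since \bar c_j \ge 0.  If v_i < 0, write v_i = -|v_i|; the ratio test gives
% \tfrac{\bar c_j}{|v_j|} \le \tfrac{\bar c_i}{|v_i|}, and therefore
\bar c_i + \frac{\bar c_j}{|v_j|}\, v_i
  \;=\; \bar c_i - \frac{\bar c_j}{|v_j|}\, |v_i|
  \;\ge\; \bar c_i - \frac{\bar c_i}{|v_i|}\, |v_i|
  \;=\; 0 .
```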
The dual simplex method
−c′B xB │    c̄1      ···     c̄j      ···     c̄n
────────┼────────────────────────────────────────────
 xB(1)  │ (B−1 A1)1  ···  (B−1 Aj)1  ···  (B−1 An)1
   ⋮    │     ⋮              ⋮               ⋮
 xB(ℓ)  │    v1      ···     vj      ···     vn
   ⋮    │     ⋮              ⋮               ⋮
 xB(m)  │ (B−1 A1)m  ···  (B−1 Aj)m  ···  (B−1 An)m
▶ We conclude that the reduced costs in the new tableau will
also be nonnegative and dual feasibility has been
maintained.
Example 4.7
x1 x2 x3 x4 x5
0 2 6 10 0 0
x4 = 2 −2 4 1 1 0
x5 = −1 4 −2 −3 0 1

▶ Since xB(2) < 0, we choose the second row to be the pivot


row.
Example 4.7
x1 x2 x3 x4 x5
0 2 6 10 0 0
x4 = 2 −2 4 1 1 0
x5 = −1 4 −2 −3 0 1

▶ Since xB(2) < 0, we choose the second row to be the pivot


row.
▶ Negative entries of the pivot row are found in the second
and third column.
▶ We compare the corresponding ratios c̄i /|vi |, i = 2, 3:
▶ c̄2 /|v2 | = 6/| − 2| = 3,
▶ c̄3 /|v3 | = 10/| − 3| = 3.3̄.
▶ The smallest ratio corresponds to i = 2 and, therefore, the
second column enters the basis.
Example 4.7
x1 x2 x3 x4 x5
0 2 6 10 0 0
x4 = 2 −2 4 1 1 0
x5 = −1 4 −2 −3 0 1

▶ We multiply the pivot row by 3 and add it to the zeroth


row.
▶ We multiply the pivot row by 2 and add it to the first row.
▶ We then divide the pivot row by −2.
▶ The new tableau is:
Example 4.7
x1 x2 x3 x4 x5
−3 14 0 1 0 3
x4 = 0 6 0 −5 1 2
x2 = 1/2 −2 1 3/2 0 −1/2

▶ The cost has increased to 3.


▶ Furthermore, we now have B−1 b ≥ 0, and an optimal
solution has been found.
The dual simplex method
▶ Note that in the pivot column we have

vj < 0, c̄j ≥ 0.

▶ Let us temporarily assume that

c̄j > 0.

▶ Then, in order to replace c̄j by zero, we need to add a positive


multiple of the pivot row to the zeroth row.
▶ Since xB(ℓ) < 0 this has the effect of adding a negative
quantity to the upper left corner.
▶ Equivalently, the dual cost increases.
The dual simplex method
▶ Thus, as long as the reduced cost of every nonbasic variable is
nonzero, the dual cost increases with each basis change.
▶ Therefore, no basis will ever be repeated in the course of the
algorithm.
The dual simplex method

It follows that the algorithm must eventually terminate and this


can happen in one of two ways:
(a) We have B−1 b ≥ 0 and an optimal solution.
(b) All of the entries v1 , . . . , vn in the pivot row are nonnegative
and we are therefore unable to locate a pivot element.
This implies that the optimal dual cost is equal to +∞ and
the primal problem is infeasible. (Exercise 4.22).
▶ We now provide a summary of the algorithm.
The dual simplex method

An iteration of the dual simplex method


1. A typical iteration starts with the tableau associated with
a basis matrix B and with all reduced costs nonnegative.
2. Examine the components of the vector B−1 b in the zeroth
column of the tableau. If they are all nonnegative, we
have an optimal basic feasible solution and the algorithm
terminates; else, choose some ℓ such that xB(ℓ) < 0.
3. Consider the ℓth row of the tableau, with elements
xB(ℓ) , v1 , . . . , vn (the pivot row). If vi ≥ 0 for all i, then the
optimal dual cost is +∞ and the algorithm terminates.
The dual simplex method

An iteration of the dual simplex method


4. For each i such that vi < 0, compute the ratio c̄i /|vi |,

and let j be the index of a column that corresponds to the


smallest ratio. The column AB(ℓ) exits the basis and the
column Aj takes its place.
5. Add to each row of the tableau a multiple of the ℓth row
(the pivot row) so that vj (the pivot element) becomes 1
and all other entries of the pivot column become 0.
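The iteration above can be sketched in code. This is our own minimal full-tableau implementation (the names `dual_simplex`, `tableau`, `basis` are ours; exact rational arithmetic and anticycling are omitted); as data we use the initial tableau of Example 4.7:

```python
def dual_simplex(tableau, basis):
    # tableau[0] = [-cB'xB, cbar_1, ..., cbar_n]  (all cbar_j assumed >= 0);
    # tableau[i] = [xB(i)] + (i-th row of B^-1 A), for i = 1, ..., m.
    m, n = len(tableau) - 1, len(tableau[0]) - 1
    while True:
        # Step 2: look for a negative basic variable (pivot row).
        l = next((i for i in range(1, m + 1) if tableau[i][0] < 0), None)
        if l is None:
            return tableau, basis                # B^-1 b >= 0: optimal
        # Step 3: no negative entry in the pivot row -> primal infeasible.
        candidates = [j for j in range(1, n + 1) if tableau[l][j] < 0]
        if not candidates:
            raise ValueError("optimal dual cost is +inf; primal infeasible")
        # Step 4: ratio test, smallest cbar_j / |v_j| over v_j < 0.
        j = min(candidates, key=lambda k: tableau[0][k] / -tableau[l][k])
        # Step 5: pivot so that v_j becomes 1 and the rest of column j becomes 0.
        piv = tableau[l][j]
        tableau[l] = [x / piv for x in tableau[l]]
        for i in range(m + 1):
            if i != l:
                f = tableau[i][j]
                tableau[i] = [a - f * p for a, p in zip(tableau[i], tableau[l])]
        basis[l - 1] = j

# Example 4.7: starting basis (x4, x5); one pivot reaches the optimum.
T = [[0.0, 2, 6, 10, 0, 0],
     [2.0, -2, 4, 1, 1, 0],
     [-1.0, 4, -2, -3, 0, 1]]
T, basis = dual_simplex(T, [4, 5])
print(-T[0][0], basis)   # optimal cost 3.0, final basis [4, 2]
```

A single pivot brings B−1 b ≥ 0 and reproduces the final tableau of Example 4.7, with cost 3.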
The dual simplex method

▶ Let us now consider the possibility that the reduced cost c̄j in
the pivot column is zero.
▶ In this case, the zeroth row of the tableau does not change
and the dual cost c′B B−1 b remains the same.
▶ The proof of termination given earlier does not apply and the
algorithm can cycle.
▶ This can be avoided by employing a suitable anticycling rule,
such as the following.
The dual simplex method

Lexicographic pivoting rule for the dual simplex method


1. Choose any row ℓ such that xB(ℓ) < 0, to be the pivot row.
2. Determine the index j of the entering column as follows.
For each column with vi < 0, divide all entries by |vi |
(including the entry in the zeroth row), and then choose
the lexicographically smallest column.
If there is a tie between several lexicographically smallest
columns, choose the one with the smallest index.
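Step 2 of this rule can be sketched as follows (the helper name is ours; Python's built-in list comparison is already lexicographic, and iterating j in increasing order with a strict comparison breaks ties in favor of the smallest index):

```python
def lex_entering_column(tableau, l):
    # Among columns j with v_j = tableau[l][j] < 0, scale the entire column
    # (including the zeroth-row entry) by 1/|v_j| and pick the
    # lexicographically smallest scaled column.
    m, n = len(tableau) - 1, len(tableau[0]) - 1
    best, best_j = None, None
    for j in range(1, n + 1):
        v = tableau[l][j]
        if v < 0:
            col = [tableau[i][j] / -v for i in range(m + 1)]
            if best is None or col < best:
                best, best_j = col, j
    return best_j

# Pivot row 2 of the Example 4.7 tableau: the rule also selects column 2,
# since (6/2, 4/2, -2/2) = (3, 2, -1) precedes (10/3, 1/3, -1).
T = [[0, 2, 6, 10, 0, 0],
     [2, -2, 4, 1, 1, 0],
     [-1, 4, -2, -3, 0, 1]]
print(lex_entering_column(T, 2))   # -> 2
```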
The dual simplex method

▶ If the dual simplex method is initialized so that every column


of the tableau is lexicographically positive, and if the above
lexicographic pivoting rule is used, the method terminates in a
finite number of steps.
▶ The proof is similar to the proof of the corresponding result
for the primal simplex method (Theorem 3.4) and is left as an
exercise (Exercise 4.24).
When should we use the dual simplex method
When should we use the dual simplex method
▶ At this point, it is natural to ask when the dual simplex
method should be used.
▶ One such case arises when a basic feasible solution of the
dual problem is readily available.
▶ Suppose, for example, that we already have an optimal
basis for some LP problem, and we wish to solve the same
problem for a different choice of the right-hand side vector b.
▶ The optimal basis for the original problem may be primal
infeasible under the new value of b, because the constraints
of the primal are Ax = b, x ≥ 0.
▶ On the other hand, a change in b does not affect the
reduced costs and we still have a dual feasible solution;
this can be seen directly since the constraints of the dual
are p′ A ≤ c′.
▶ Thus, instead of solving the new problem from scratch, it
may be preferable to apply the dual simplex algorithm
starting from the optimal basis for the original problem.
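A small numeric sketch of this observation, with hypothetical data (the matrix, costs, and both right-hand sides below are ours, chosen so that the basis {A1, A2} is dual feasible):

```python
import numpy as np

# Hypothetical standard-form data (not from the slides).
A = np.array([[1., 2., 1., 0.],
              [1., 0., 0., 1.]])
c = np.array([2., 3., 2., 1.])
basic = [0, 1]                       # columns A1, A2 (0-indexed)
Binv = np.linalg.inv(A[:, basic])
p = c[basic] @ Binv                  # simplex multipliers p' = cB' B^-1
cbar = c - p @ A                     # reduced costs: do not depend on b

b_old = np.array([4., 2.])
b_new = np.array([4., 5.])           # same problem, new right-hand side
print(cbar)                          # approx. [0, 0, 0.5, 0.5] >= 0 either way
print(Binv @ b_old)                  # approx. [2, 1] >= 0: optimal for b_old
print(Binv @ b_new)                  # approx. [5, -0.5]: primal infeasible for
                                     # b_new, but still dual feasible -> warm
                                     # start for the dual simplex method
```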
The geometry of the dual simplex method
The geometry of the dual simplex method

▶ Our development of the dual simplex method was based


entirely on tableau manipulations and algebraic arguments.
▶ We now present an alternative viewpoint based on geometric
considerations.
▶ We continue assuming that we are dealing with a problem in
standard form and that the matrix A has linearly independent
rows.
▶ Let B be a basis matrix with columns AB(1) , . . . , AB(m) .
▶ This basis matrix determines a basic solution to the primal
problem with xB = B−1 b.
The geometry of the dual simplex method

▶ The same basis can also be used to determine a dual vector p


by means of the equations

p′ AB(i) = cB(i) , i = 1, . . . , m.

▶ These are m equations in m unknowns; since the columns


AB(1) , . . . , AB(m) are linearly independent, there is a unique
solution p.
▶ For such a vector p, the number of linearly independent active
dual constraints is equal to the dimension of the dual vector.
▶ It follows that we have a basic solution to the dual problem

maximize p′ b
subject to pA ≤ c.
The geometry of the dual simplex method

▶ In matrix notation, the dual basic solution p satisfies

p′ B = c′B , or p′ = c′B B−1 ,

which was referred to as the vector of simplex multipliers in


Chapter 3.
▶ If p is also dual feasible, that is, if

p′ A ≤ c′ ,

then p is a basic feasible solution of the dual problem.


▶ To summarize, a basis matrix B is associated with
▶ a basic solution to the primal problem and also with
▶ a basic solution to the dual.
The geometry of the dual simplex method

▶ We now have a geometric interpretation of the dual simplex


method: at every iteration, we have a basic feasible solution
to the dual problem.
▶ The basic feasible solutions obtained at any two consecutive
iterations have m − 1 linearly independent active constraints
in common (the reduced costs of the m − 1 variables that are
common to both bases are zero).
▶ Thus, consecutive basic feasible solutions are either adjacent
or they coincide.
Example 4.8
Primal:  minimize   x1 + x2
         subject to x1 + 2x2 − x3 = 2
                    x1 − x4 = 1
                    x ≥ 0.

Dual:    maximize   2p1 + p2
         subject to p1 + p2 ≤ 1
                    2p1 ≤ 1
                    p ≥ 0.

▶ The feasible set of the primal problem is 4-dimensional.


▶ We eliminate the variables x3 and x4 , and obtain an
equivalent 2-dimensional problem.
Example 4.8
Primal’ Dual
minimize x1 + x2 maximize 2p1 + p2
subject to x1 + 2x2 ≥ 2 subject to p1 + p2 ≤ 1
x1 ≥ 1 2p1 ≤ 1
x ≥ 0, p ≥ 0.
Example 4.8
▶ There is a total of five different bases in the standard form
primal problem and five different basic solutions.
▶ These correspond to the points A, B, C, D, E.
Example 4.8
▶ There is a total of five different bases in the standard form
primal problem and five different basic solutions.
▶ These correspond to the points A, B, C, D, E.
▶ The same five bases also lead to five basic solutions to the
dual problem, which are points A, B, C, D, E.
Example 4.8
▶ For example, choose the columns A3 , A4 to be basic.
▶ We set x1 = 0 and x2 = 0 in the primal.
▶ We obtain the infeasible primal basic solution
x = (0, 0, −2, −1) (point A).
Example 4.8
▶ For example, choose the columns A3 , A4 to be basic.
▶ The corresponding dual basic solution is obtained by letting
p′ A3 = c3 and p′ A4 = c4 .
▶ We obtain p = (0, 0), (point A).
▶ This is a basic feasible solution of the dual problem and can
be used to start the dual simplex method.
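These by-hand computations can be checked numerically (a sketch in our own notation, with 0-indexed columns):

```python
import numpy as np

# Example 4.8 in standard form: min x1 + x2  s.t.  x1 + 2x2 - x3 = 2,
# x1 - x4 = 1, x >= 0; take columns A3, A4 as basic.
A = np.array([[1., 2., -1., 0.],
              [1., 0., 0., -1.]])
b = np.array([2., 1.])
c = np.array([1., 1., 0., 0.])
basic = [2, 3]
Binv = np.linalg.inv(A[:, basic])
xB = Binv @ b                        # primal basic solution: x3 = -2, x4 = -1
p = c[basic] @ Binv                  # dual basic solution p' = cB' B^-1
print(xB, p)                         # [-2. -1.] [0. 0.]  (point A in both)
print((p @ A <= c).all())            # True: p is dual feasible
```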
Example 4.8

Initial tableau:
x1 x2 x3 x4
0 1 1 0 0
x3 = −2 −1 −2 1 0
x4 = −1 −1 0 0 1
Example 4.8

Initial tableau:
x1 x2 x3 x4
0 1 1 0 0
x3 = −2 −1 −2 1 0
x4 = −1 −1 0 0 1

First pivot:

x1 x2 x3 x4
−1 1/2 0 1/2 0
x2 = 1 1/2 1 −1/2 0
x4 = −1 −1 0 0 1
Example 4.8

Second pivot:

x1 x2 x3 x4
−3/2 0 0 1/2 1/2
x2 = 1/2 0 1 −1/2 1/2
x1 = 1 1 0 0 −1

The geometry of the dual simplex method
▶ This sequence of tableaux corresponds to the path
A − B − C.
▶ In the primal space, the path traces a sequence of infeasible
basic solutions until, at optimality, it becomes feasible.
The geometry of the dual simplex method
▶ This sequence of tableaux corresponds to the path
A − B − C.
▶ In the dual space, the algorithm behaves exactly like the
primal simplex method: it moves through a sequence of
(dual) basic feasible solutions, while at each step improving
the cost function.
The geometry of the dual simplex method

▶ We have observed that the dual simplex method moves from


one basic feasible solution of the dual to an adjacent one.
▶ It may be tempting to say that the dual simplex method is
simply the primal simplex method applied to the dual.
▶ This is a somewhat ambiguous statement, however, because
the dual problem is not in standard form.
▶ If we were to convert it to standard form and then apply the
primal simplex method, the resulting method is not necessarily
identical to the dual simplex method (Exercise 4.25).
▶ A more accurate statement is to simply say that the dual
simplex method is a variant of the simplex method tailored to
problems defined exclusively in terms of linear inequality
constraints.
Duality and degeneracy
Duality and degeneracy
▶ Let us keep assuming that we are dealing with a standard
form problem in which the rows of the matrix A are linearly
independent.
▶ Any basis matrix B leads to an associated dual basic solution
given by p′ = c′B B−1 .
▶ At this basic solution, the dual constraint p′ Aj ≤ cj is active if
and only if c′B B−1 Aj = cj , that is, if and only if the reduced
cost c̄j is zero.
▶ Since p is m-dimensional, dual degeneracy amounts to having
more than m reduced costs that are zero.
▶ Given that the reduced costs of the m basic variables must be
zero, dual degeneracy is obtained whenever there exists a
nonbasic variable whose reduced cost is zero.
▶ The example that follows deals with the relation between basic
solutions to the primal and the dual in the face of degeneracy.
Example 4.9
Primal:  minimize   3x1 + x2
         subject to x1 + x2 − x3 = 2
                    2x1 − x2 − x4 = 0
                    x ≥ 0.

Dual:    maximize   2p1
         subject to p1 + 2p2 ≤ 3
                    p1 − p2 ≤ 1
                    p ≥ 0.

▶ The feasible set of the primal problem is 4-dimensional.


▶ We eliminate x3 and x4 to obtain an equivalent
2-dimensional primal problem.
Example 4.9
Primal’ Dual
minimize 3x1 + x2 maximize 2p1
subject to x1 + x2 ≥ 2 subject to p1 + 2p2 ≤ 3
2x1 − x2 ≥ 0 p1 − p2 ≤ 1
x ≥ 0, p ≥ 0.
Example 4.9
▶ There is a total of six different bases in the standard form
primal problem, but only four different basic solutions
(points A, B, C, D).
Example 4.9
▶ There is a total of six different bases in the standard form
primal problem, but only four different basic solutions
(points A, B, C, D).
▶ In the dual problem, however, the six bases lead to six
distinct basic solutions (points A, A′ , A′′ , B, C, D).
Example 4.9
▶ For example, we let columns A3 , A4 be basic
▶ The primal basic solution has x1 = x2 = 0, (point A).
▶ The corresponding dual basic solution is (p1 , p2 ) = (0, 0),
(point A), which is feasible.
Example 4.9
▶ Let columns A1 , A3 be basic.
▶ The primal basic solution has again x1 = x2 = 0, (point A).
▶ The corresponding dual basic solution is (p1 , p2 ) = (0, 3/2),
(point A’), which is feasible.
Example 4.9
▶ Finally, we let columns A2 , A3 be basic.
▶ We have the same primal solution x1 = x2 = 0, (point A).
▶ The corresponding dual basic solution is (p1 , p2 ) = (0, −1),
(point A′′ ), which is infeasible.
Duality and degeneracy

▶ Example 4.9 has established that different bases may lead to


the same basic solution for the primal problem, but to
different basic solutions for the dual.
▶ Furthermore, out of the different basic solutions to the dual
problem, it may be that some are feasible and some are
infeasible.
Duality and degeneracy

▶ We conclude with a summary of some properties of bases and


basic solutions, for standard form problems, that were
discussed in this section.
(a) Every basis determines a basic solution to the primal, but also
a corresponding basic solution to the dual, namely,
p′ = c′B B−1 .
(b) This dual basic solution is feasible if and only if all of the
reduced costs are nonnegative.
(c) Under this dual basic solution, the reduced costs that are
equal to zero correspond to active constraints in the dual
problem.
(d) This dual basic solution is degenerate if and only if some
nonbasic variable has zero reduced cost.
4.6 Farkas’ lemma and linear inequalities
Farkas’ lemma and linear inequalities

▶ Suppose that we wish to determine whether a given system of


linear inequalities is infeasible.
▶ In this section, we approach this question using duality theory,
and we show that infeasibility of a given system of linear
inequalities is equivalent to the feasibility of another, related,
system of linear inequalities.
▶ Intuitively, the latter system of linear inequalities can be
interpreted as a search for a certificate of infeasibility for the
former system.
Farkas’ lemma and linear inequalities
▶ Consider a set of standard form constraints
Ax = b
x ≥ 0.
▶ Suppose that there exists some vector p such that

p′ A ≥ 0′ & p′ b < 0.
▶ Then, for any x ≥ 0, we have

0 ≤ p′ Ax  and  p′ b < 0.
Farkas’ lemma and linear inequalities
▶ Consider a set of standard form constraints
Ax = b
x ≥ 0.
▶ Suppose that there exists some vector p such that

p′ A ≥ 0′ & p′ b < 0.
▶ Then, for any x ≥ 0, we have

0 ≤ p′ Ax ̸= p′ b < 0

▶ We conclude that Ax ̸= b, for all x ≥ 0.


Farkas’ lemma and linear inequalities
▶ Consider a set of standard form constraints
Ax = b
x ≥ 0.
▶ Suppose that there exists some vector p such that

p′ A ≥ 0′ & p′ b < 0.
▶ Then, for any x ≥ 0, we have

0 ≤ p′ Ax ̸= p′ b < 0

▶ We conclude that Ax ̸= b, for all x ≥ 0.


▶ This argument shows that if we can find a vector p satisfying

p′ A ≥ 0′ & p′ b < 0,
the standard form constraints have no feasible solution.
▶ Such a vector p is a certificate of infeasibility.
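For a hypothetical one-constraint system, the certificate can be checked directly (the data and the certificate p below are ours, not from the slides):

```python
# A tiny system: x1 + x2 = -1, x >= 0 has no solution, since the
# left-hand side is nonnegative for every x >= 0.
A = [[1.0, 1.0]]
b = [-1.0]

# The scalar p = 1 is a certificate of infeasibility: p'A >= 0' and p'b < 0.
p = [1.0]
m, n = len(A), len(A[0])
pA = [sum(p[i] * A[i][j] for i in range(m)) for j in range(n)]
pb = sum(p[i] * b[i] for i in range(m))
print(pA, pb)                               # [1.0, 1.0] -1.0
print(all(v >= 0 for v in pA) and pb < 0)   # True: the system is infeasible
```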
Farkas’ lemma and linear inequalities

Farkas’ lemma below states that whenever a standard form


problem is infeasible, such a certificate of infeasibility p is
guaranteed to exist.

Theorem 4.6 (Farkas’ lemma)


Let A be a matrix of dimensions m × n and let b be a vector in
Rm . Then, exactly one of the following two alternatives holds:
(a) There exists some x ≥ 0 such that Ax = b.
(b) There exists some vector p such that p′ A ≥ 0′ and
p′ b < 0.

Let’s prove it!


Farkas’ lemma and linear inequalities

We now provide a geometric illustration of Farkas’ lemma.

▶ Let A1 , . . . , An be the columns of the


matrix A and note that

Ax = A1 x1 + · · · + An xn .

▶ Therefore, the existence of a vector


x ≥ 0 satisfying Ax = b is the same as
requiring that b lies in the set of all
nonnegative linear combinations of the
vectors A1 , . . . , An .
Farkas’ lemma and linear inequalities

We now provide a geometric illustration of Farkas’ lemma.

▶ If b does not belong to the shaded


region, we expect that we can find a
vector p and an associated hyperplane

{z | p′ z = 0}

such that b lies on one side of the


hyperplane while the shaded region lies
on the other side.
▶ We then have p′ b < 0 and p′ Ai ≥ 0 for
all i, or, equivalently, p′ A ≥ 0′ , and the
second alternative holds.
Farkas’ lemma and linear inequalities

▶ Results such as Theorem 4.6 are often called theorems of the


alternative.
▶ There are several more results of this type.
▶ See, for example, Exercises 4.26, 4.27, and 4.28.
