Checking Optimality of A Basic Feasible Solution

For a basic feasible solution (BFS) of a linear program, the following condition is sufficient for optimality: 𝑧𝑗 ≥ 𝑐𝑗 for every variable 𝑗, where 𝑧𝑗 is the cost of the combination of basic variables that substitutes for one unit of 𝑥𝑗 and 𝑐𝑗 is the objective coefficient of 𝑥𝑗. If this condition holds for all variables, the BFS is optimal. If it fails for some 𝑗, a non-degenerate BFS is sub-optimal, while for a degenerate BFS no conclusion can be drawn.

Uploaded by Gaurav Gupta

In this note we identify the condition that tells whether a given basic feasible solution (BFS) can be declared optimal. The LP is represented in the standard form: Maximize 𝑐̅ ⋅ 𝑥̅ s.t. 𝐴𝑥̅ = 𝑏̅ and 𝑥̅ ≥ 0, where 𝐴 is an 𝑚 × 𝑛 matrix with 𝑚 ≤ 𝑛 and full rank, i.e., 𝑟𝑎𝑛𝑘(𝐴) = 𝑚. Let 𝑥̅
denote a BFS. Without loss of generality, let us assume that the first 𝑚 components of 𝑥̅ are
the basic variables, i.e., 𝑥̅ = (𝑥1 , 𝑥2 , … , 𝑥𝑚 , 0, … ,0)𝑇 with 𝐴1̅ , 𝐴̅2 , … , 𝐴̅𝑚 being linearly
independent. Then 𝑥1 𝐴̅1 + 𝑥2 𝐴̅2 + ⋯ + 𝑥𝑚 𝐴̅𝑚 = 𝑏̅ and 𝑥1 , 𝑥2 , … , 𝑥𝑚 ≥ 0. Let 𝑧0 denote the
objective function value at 𝑥̅ , i.e., 𝑧0 = 𝑐̅ ⋅ 𝑥̅ = 𝑐1 𝑥1 + 𝑐2 𝑥2 + ⋯ + 𝑐𝑚 𝑥𝑚 . Let us define
𝑥1,𝑗 , 𝑥2,𝑗 , … , 𝑥𝑚,𝑗 and 𝑧𝑗 for 𝑗 = 1,2, … , 𝑛 as follows:
(1) 𝑥1,𝑗 𝐴̅1 + 𝑥2,𝑗 𝐴̅2 + ⋯ + 𝑥𝑚,𝑗 𝐴̅𝑚 = 𝐴̅𝑗
(2) 𝑧𝑗 = 𝑐1 𝑥1,𝑗 + 𝑐2 𝑥2,𝑗 + ⋯ + 𝑐𝑚 𝑥𝑚,𝑗
Observe that (𝑥1 − 𝑥1,𝑗 )𝐴1̅ + (𝑥2 − 𝑥2,𝑗 )𝐴̅2 + ⋯ + (𝑥𝑚 − 𝑥𝑚,𝑗 )𝐴̅𝑚 + 𝐴𝑗̅ = (𝑥1 𝐴1̅ + 𝑥2 𝐴̅2 +
⋯ + 𝑥𝑚 𝐴̅𝑚 ) − (𝑥1,𝑗 𝐴1̅ + 𝑥2,𝑗 𝐴̅2 + ⋯ + 𝑥𝑚,𝑗 𝐴̅𝑚 ) + 𝐴𝑗̅ = 𝑏̅ for all 𝑗. Then 𝑥1,𝑗 , 𝑥2,𝑗 , … , 𝑥𝑚,𝑗
can be viewed as the amounts by which 𝑥1 , 𝑥2 , … , 𝑥𝑚 must be decreased in order to increase
𝑥𝑗 by one unit while satisfying the equations 𝐴𝑥̅ = 𝑏̅. The decrease in 𝑥1 , 𝑥2 , … , 𝑥𝑚 decreases
the objective function by 𝑧𝑗 , and the increase in 𝑥𝑗 increases it by 𝑐𝑗 . If the net effect 𝑐𝑗 − 𝑧𝑗 is
positive, then we can expect to improve the solution by increasing 𝑥𝑗 . However, if 𝑐𝑗 ≤ 𝑧𝑗 ∀𝑗,
then intuition suggests that the current BFS is optimal.
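The quantities in (1) and (2) are straightforward to compute once the basis matrix 𝐵 = [𝐴̅1 ⋯ 𝐴̅𝑚 ] is formed: the column (𝑥1,𝑗 , … , 𝑥𝑚,𝑗 ) solves 𝐵𝑥 = 𝐴̅𝑗 . A minimal sketch in Python follows; the LP data is a small hypothetical example, not taken from this note:

```python
import numpy as np

# Hypothetical LP: maximize 3*x1 + 2*x2 with slacks x3, x4 for the
# constraints x1 + x2 <= 4 and x1 <= 3, written as Ax = b, x >= 0.
A = np.array([[1.0, 1.0, 1.0, 0.0],
              [1.0, 0.0, 0.0, 1.0]])
b = np.array([4.0, 3.0])
c = np.array([3.0, 2.0, 0.0, 0.0])

basis = [2, 3]                 # start with the slack basis: x3, x4
B = A[:, basis]                # basis matrix [A_3 A_4]

# (1): the column (x_{1,j}, ..., x_{m,j}) solves B @ x = A_j for each j
X = np.linalg.solve(B, A)      # column j holds (x_{1,j}, ..., x_{m,j})

# (2): z_j = c_1 x_{1,j} + ... + c_m x_{m,j}, using the basic costs c_i
z = c[basis] @ X

print(c - z)                   # c_j - z_j > 0 flags candidates for improvement
print(bool(np.all(c <= z + 1e-9)))  # True iff the BFS passes the optimality test
```

Here the slack basis gives 𝑧𝑗 = 0 for all 𝑗, so 𝑐1 − 𝑧1 = 3 > 0 and the check fails, which is exactly the situation treated below.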

Result: For a given BFS, if 𝑐𝑗 ≤ 𝑧𝑗 for all 𝑗, then the BFS is optimal.
Proof: Let 𝑥̅ denote the BFS with all the above properties. Let 𝑦̅ = (𝑦1 , 𝑦2 , … , 𝑦𝑛 ) denote an
arbitrary feasible solution, i.e., 𝑦1 , 𝑦2 , … , 𝑦𝑛 ≥ 0 and 𝑦1 𝐴1̅ + 𝑦2 𝐴̅2 + ⋯ + 𝑦𝑛 𝐴̅𝑛 = 𝑏̅ . Then
𝑧 = 𝑐1 𝑦1 + 𝑐2 𝑦2 + ⋯ + 𝑐𝑛 𝑦𝑛 is the objective function value at 𝑦̅. If we show that 𝑧0 ≥ 𝑧, then
the proof is complete. Using (1), we can rewrite 𝑦1 𝐴1̅ + 𝑦2 𝐴̅2 + ⋯ + 𝑦𝑛 𝐴̅𝑛 = 𝑏̅ as:
𝑦1 (∑𝑚𝑖=1 𝑥𝑖,1 𝐴̅𝑖 ) + 𝑦2 (∑𝑚𝑖=1 𝑥𝑖,2 𝐴̅𝑖 ) + ⋯ + 𝑦𝑛 (∑𝑚𝑖=1 𝑥𝑖,𝑛 𝐴̅𝑖 ) = 𝑏̅
For 𝑖 = 1,2, … , 𝑚, if we take the 𝑖-th term out from each of the above sums and put them
together, we get (𝑦1 𝑥𝑖,1 + 𝑦2 𝑥𝑖,2 + ⋯ + 𝑦𝑛 𝑥𝑖,𝑛 )𝐴̅𝑖 . Then we have
(∑𝑛𝑗=1 𝑦𝑗 𝑥1,𝑗 ) 𝐴̅1 + (∑𝑛𝑗=1 𝑦𝑗 𝑥2,𝑗 ) 𝐴̅2 + ⋯ + (∑𝑛𝑗=1 𝑦𝑗 𝑥𝑚,𝑗 ) 𝐴̅𝑚 = 𝑏̅

Since 𝐴̅1 , 𝐴̅2 , … , 𝐴̅𝑚 are linearly independent, the system 𝑥1 𝐴̅1 + 𝑥2 𝐴̅2 + ⋯ + 𝑥𝑚 𝐴̅𝑚 = 𝑏̅ has a unique solution. Hence, 𝑥𝑖 = ∑𝑛𝑗=1 𝑦𝑗 𝑥𝑖,𝑗 for 𝑖 = 1,2, … , 𝑚. Then
𝑧0 = (∑𝑛𝑗=1 𝑦𝑗 𝑥1,𝑗 ) 𝑐1 + (∑𝑛𝑗=1 𝑦𝑗 𝑥2,𝑗 ) 𝑐2 + ⋯ + (∑𝑛𝑗=1 𝑦𝑗 𝑥𝑚,𝑗 ) 𝑐𝑚
We can rewrite the above expression by taking the 𝑗-th term, for 𝑗 = 1,2, … , 𝑛, out from each
of the above sums and then put them together to obtain
𝑧0 = 𝑦1 (∑𝑚𝑖=1 𝑥𝑖,1 𝑐𝑖 ) + 𝑦2 (∑𝑚𝑖=1 𝑥𝑖,2 𝑐𝑖 ) + ⋯ + 𝑦𝑛 (∑𝑚𝑖=1 𝑥𝑖,𝑛 𝑐𝑖 )
Then by (2), 𝑧0 = 𝑦1 𝑧1 + 𝑦2 𝑧2 + ⋯ + 𝑦𝑛 𝑧𝑛 . Since 𝑐𝑗 ≤ 𝑧𝑗 and 𝑦𝑗 ≥ 0 for all 𝑗, we get 𝑧0 ≥ 𝑦1 𝑐1 + 𝑦2 𝑐2 + ⋯ + 𝑦𝑛 𝑐𝑛 = 𝑧, as required. This completes the proof.

The above result gives a sufficient condition for optimality of a BFS. It does not tell what happens when 𝑐𝑗 > 𝑧𝑗 for at least one 𝑗 ∈ {1,2, … , 𝑛}. Intuition may suggest that the BFS is sub-optimal, but that need not be true. Let 𝑥̅ denote the BFS with all previously mentioned properties. If we can construct a feasible solution with a better objective function value, then 𝑥̅ is sub-optimal. First note that 𝑐𝑗 = 𝑧𝑗 for all 𝑗 ≤ 𝑚. This is because if 𝑗 ≤ 𝑚, then 𝑥𝑖,𝑗 = 0 for 𝑖 ≠ 𝑗 and 𝑥𝑗,𝑗 = 1 solve (1), and then 𝑧𝑗 = 𝑐𝑗 by (2). So any 𝑗 with 𝑐𝑗 > 𝑧𝑗 must be from {𝑚 + 1, 𝑚 + 2, … , 𝑛}. Without loss of generality, let us assume that 𝑐𝑚+1 > 𝑧𝑚+1 .
For an arbitrary 𝜃 ≥ 0, let us define 𝑥𝑖′ = 𝑥𝑖 − 𝜃𝑥𝑖,𝑚+1 for 𝑖 = 1,2, … , 𝑚. Then 𝑥1′ 𝐴̅1 + 𝑥2′ 𝐴̅2 + ⋯ + 𝑥𝑚′ 𝐴̅𝑚 + 𝜃𝐴̅𝑚+1 = (𝑥1 𝐴̅1 + 𝑥2 𝐴̅2 + ⋯ + 𝑥𝑚 𝐴̅𝑚 ) − 𝜃(𝑥1,𝑚+1 𝐴̅1 + 𝑥2,𝑚+1 𝐴̅2 + ⋯ + 𝑥𝑚,𝑚+1 𝐴̅𝑚 ) + 𝜃𝐴̅𝑚+1 = 𝑏̅ by (1). Then 𝑥̅′ = (𝑥1′ , 𝑥2′ , … , 𝑥𝑚′ , 𝜃, 0, … ,0)𝑇 satisfies 𝐴𝑥̅′ = 𝑏̅. Now
if we can ensure that 𝑥𝑖′ ≥ 0 for all 𝑖 ≤ 𝑚, then 𝑥̅ ′ is a feasible solution. Since 𝑥̅ is a BFS,
𝑥𝑖 ≥ 0 ∀𝑖 ≤ 𝑚. Then 𝑥𝑖,𝑚+1 ≤ 0 ⇒ 𝑥𝑖′ ≥ 0, as required. However, if 𝑥𝑖,𝑚+1 > 0, we need 𝜃 ≤ 𝑥𝑖 ⁄𝑥𝑖,𝑚+1 to ensure 𝑥𝑖′ ≥ 0. There can be multiple 𝑖 with positive 𝑥𝑖,𝑚+1 , and then there are multiple such restrictions on 𝜃. Overall, we need 𝜃 ≤ 𝜃0 = min{𝑥𝑖 ⁄𝑥𝑖,𝑚+1 ∶ 𝑥𝑖,𝑚+1 > 0, 𝑖 = 1,2, … , 𝑚} in order to ensure 𝑥𝑖′ ≥ 0 ∀𝑖 ≤ 𝑚. Note that if 𝑥𝑖,𝑚+1 ≤ 0 ∀𝑖 ≤ 𝑚, then 𝜃0 is undefined and there is no upper bound on 𝜃.
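The bound 𝜃0 is just a minimum-ratio computation over the positive entries of the column (𝑥1,𝑚+1 , … , 𝑥𝑚,𝑚+1 ). A small sketch; the function name and inputs are illustrative, not from the note:

```python
# theta_0 = min{ x_i / d_i : d_i > 0 }, where x_basic holds the current
# basic values x_1..x_m and d holds x_{1,m+1}, ..., x_{m,m+1}.
def theta_0(x_basic, d, tol=1e-12):
    """Return the largest feasible step, or None when every d_i <= 0
    (theta is unbounded above, as noted in the text)."""
    ratios = [xi / di for xi, di in zip(x_basic, d) if di > tol]
    return min(ratios) if ratios else None

print(theta_0([4.0, 3.0], [1.0, 1.0]))   # 3.0: x_2 / d_2 is the tightest bound
print(theta_0([4.0, 3.0], [-1.0, 0.0]))  # None: no positive d_i, theta unbounded
```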
Let us check the objective function value of the new feasible solution: 𝑧′ = 𝑐̅ ⋅ 𝑥̅′ = 𝑐1 𝑥1′ + 𝑐2 𝑥2′ + ⋯ + 𝑐𝑚 𝑥𝑚′ + 𝑐𝑚+1 𝜃 = (𝑐1 𝑥1 + 𝑐2 𝑥2 + ⋯ + 𝑐𝑚 𝑥𝑚 ) − 𝜃(𝑐1 𝑥1,𝑚+1 + 𝑐2 𝑥2,𝑚+1 + ⋯ + 𝑐𝑚 𝑥𝑚,𝑚+1 ) + 𝜃𝑐𝑚+1 = 𝑧0 + 𝜃(𝑐𝑚+1 − 𝑧𝑚+1 ) by (2). Then 𝜃 > 0 ⇒ 𝑧′ > 𝑧0 , i.e., if we can choose
a strictly positive 𝜃 for constructing 𝑥̅ ′ , then the current BFS is sub-optimal. If 𝜃0 > 0 or 𝜃0
is undefined, then we can choose a strictly positive 𝜃. If 𝜃0 = 0, which happens when 𝑥𝑖 = 0
and 𝑥𝑖,𝑚+1 > 0 for some 𝑖 = 1,2, … , 𝑚, then we cannot say that the BFS is sub-optimal. We
also cannot say that it is optimal. Then the nature of 𝑥̅ remains inconclusive.
If 𝑥𝑖 > 0 ∀𝑖 ≤ 𝑚, i.e., if all the basic variables are strictly positive, then 𝑥̅ is called a non-degenerate BFS; otherwise it is called a degenerate BFS. If 𝑥̅ is non-degenerate, then 𝜃0 = min{𝑥𝑖 ⁄𝑥𝑖,𝑚+1 ∶ 𝑥𝑖,𝑚+1 > 0, 𝑖 = 1,2, … , 𝑚} > 0 whenever 𝜃0 exists, so we can always find 𝑥̅′ with 𝑧′ > 𝑧0 . Hence, for a non-degenerate BFS, if 𝑐𝑗 > 𝑧𝑗 for some 𝑗, then the BFS is sub-optimal. In the case of a degenerate BFS, 𝜃0 may be zero, and we cannot conclude anything; the safe option is to proceed with further calculations. Hence, if we find a BFS with 𝑐𝑗 ≤ 𝑧𝑗 for all 𝑗, then we have found an optimal solution and we stop; otherwise we continue to explore new BFSs in the hope of stopping soon.
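Putting the pieces together, the stopping rule above can be sketched as a loop: compute 𝑧𝑗 from (1) and (2), stop when 𝑐𝑗 ≤ 𝑧𝑗 for all 𝑗, and otherwise increase a variable with 𝑐𝑗 > 𝑧𝑗 by 𝜃0 and move to the new BFS. The LP data below is a hypothetical example; the sketch has no anti-cycling safeguard for degenerate cases and does not handle unbounded problems:

```python
import numpy as np

# Hypothetical LP: maximize 3*x1 + 2*x2 s.t. x1 + x2 <= 4, x1 <= 3,
# with slacks x3, x4 so that Ax = b, x >= 0.
A = np.array([[1.0, 1.0, 1.0, 0.0],
              [1.0, 0.0, 0.0, 1.0]])
b = np.array([4.0, 3.0])
c = np.array([3.0, 2.0, 0.0, 0.0])
basis = [2, 3]                            # slack basis; BFS x = (0, 0, 4, 3)

while True:
    B = A[:, basis]
    x_basic = np.linalg.solve(B, b)       # values of the basic variables
    z0 = c[basis] @ x_basic               # current objective value
    X = np.linalg.solve(B, A)             # columns x_{.,j} from (1)
    reduced = c - c[basis] @ X            # c_j - z_j for every j
    j = int(np.argmax(reduced))
    if reduced[j] <= 1e-9:                # c_j <= z_j for all j: optimal, stop
        break
    d = X[:, j]                           # the column x_{1,j}, ..., x_{m,j}
    ratios = np.where(d > 1e-9, x_basic / np.where(d > 1e-9, d, 1.0), np.inf)
    i = int(np.argmin(ratios))            # ratio test: theta_0 = ratios[i]
    basis[i] = j                          # x_j enters, old basic variable leaves

print(z0, [round(v, 6) for v in x_basic], basis)
```

On this hypothetical example the loop stops with 𝑧0 = 11 at 𝑥1 = 3, 𝑥2 = 1, matching the discussion: each pivot increases the objective by 𝜃0 (𝑐𝑗 − 𝑧𝑗 ) until 𝑐𝑗 ≤ 𝑧𝑗 holds everywhere.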
