
Module 3

Topic 2: Case of unbounded LPP, the simplex algorithm, and illustration through examples


Before addressing the important questions raised in the previous lecture related to the simplex
algorithm, we shall first present an example to understand the process of the incoming-outgoing
vectors used in constructing a new BFS from an existing BFS.
Example 3.1: Consider the LPP

max 3x1 + 8x2
subject to 3x1 − 5x2 ≥ −10
           2x1 − x2 ≤ 20
           x1 + 2x2 ≤ 15
           x1, x2 ≥ 0.

The standard form of the LPP is

max 3x1 + 8x2 + 0x3 + 0x4 + 0x5
subject to 3x1 − 5x2 − x3 = −10
           2x1 − x2 + x4 = 20
           x1 + 2x2 + x5 = 15
           x1, x2, x3, x4, x5 ≥ 0.
 
Consider the submatrix B = [a1, a2, a3], that is,

    B = [ 3  −5  −1 ]
        [ 2  −1   0 ]
        [ 1   2   0 ]

with |B| = −5 ≠ 0. So B is a basis matrix, and the corresponding BFS of the given LPP is
(xB, 0), where

    xB = (x1, x2, x3)^T = B^{-1} b
       = (1/5) [  0    2    1 ] [ −10 ]
               [  0   −1    2 ] [  20 ]
               [ −5   11   −7 ] [  15 ]
       = (11, 2, 33)^T.

Copyright
c Reserved IIT Delhi
2

The two non-basic variables are x4 and x5, and for them

    z4 − c4 = cB^T B^{-1} a4 − c4
            = (3, 8, 0) (1/5) [  0    2    1 ] [ 0 ]  − 0
                              [  0   −1    2 ] [ 1 ]
                              [ −5   11   −7 ] [ 0 ]
            = (3, 8, 0) (2/5, −1/5, 11/5)^T
            = −2/5,

and z5 − c5 = cB^T B^{-1} a5 − c5 = 19/5. Since z4 − c4 < 0, x4 is the entering variable. Now

    y4 = B^{-1} a4 = (2/5, −1/5, 11/5)^T.

To find the outgoing or leaving variable we use the minimum ratio test, that is,

    xBr/yr4 = min { xBi/yi4 : yi4 > 0 }
            = min { 11/(2/5), 33/(11/5) }
            = min { 55/2, 15 }
            = 15
            = xB3/y34.

Hence xB3 (which in our case is x3) is the leaving variable, and the pivot element is y34 = 11/5.
The new basis matrix B̂ is [a1, a2, a4], with the associated BFS given by

    x̂Bi = xBi − (yi4/y34) xB3,   i = 1, 2,
    x̂B3 = xB3/y34,

yielding x̂B = (5, 5, 15). Thus the new BFS is (5, 5, 0, 15, 0). Graphically, one has moved from
the extreme point (11, 2) to the extreme point (5, 5) of the feasible set of the given LPP, shown
shaded in the figure below:

[Figure: shaded feasible region of the LPP in the (x1, x2)-plane, with extreme points (0, 2), (10, 0), (11, 2) and (5, 5) marked; the simplex step moves from (11, 2) to (5, 5).]
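
The computations in this example can be checked numerically. Below is a minimal sketch (not part of the original notes) using NumPy; the array names and the zero-based indexing are choices made here for illustration only.

```python
import numpy as np

# Standard-form data of Example 3.1: columns a1..a5, right-hand side b, costs c.
A = np.array([[3., -5., -1., 0., 0.],
              [2., -1.,  0., 1., 0.],
              [1.,  2.,  0., 0., 1.]])
b = np.array([-10., 20., 15.])
c = np.array([3., 8., 0., 0., 0.])

basis = [0, 1, 2]                  # B = [a1, a2, a3]
B_inv = np.linalg.inv(A[:, basis])
x_B = B_inv @ b
print(x_B)                         # -> [11.  2. 33.]

# Reduced costs of the non-basic variables x4 and x5.
c_B = c[basis]
for j in [3, 4]:
    print(j + 1, c_B @ (B_inv @ A[:, j]) - c[j])   # -> -0.4 (= -2/5) and 3.8 (= 19/5)

# Minimum-ratio test for the entering column a4.
y4 = B_inv @ A[:, 3]
ratios = [x_B[i] / y4[i] if y4[i] > 0 else np.inf for i in range(3)]
r = int(np.argmin(ratios))         # r = 2: the third basic variable, x3, leaves

# Updated basic values after x4 enters and x3 leaves.
x_new = x_B - y4 * (x_B[r] / y4[r])
x_new[r] = x_B[r] / y4[r]
print(x_new)                       # -> [ 5.  5. 15.]
```

The printed values reproduce xB = (11, 2, 33), the reduced costs −2/5 and 19/5, the minimum ratio 15, and the updated values (5, 5, 15) obtained above.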


By now we have realized that the simplex method is an iterative algorithm: in each iteration we
move from one BFS to an improved BFS. Thus, if a starting BFS is known, the method can be
applied iteratively until the optimal BFS (if it exists) is obtained. We now present a mechanism
by which an initial BFS of the given LPP can be generated with ease.
Note that if a basis matrix is I (the identity matrix of order m), then finding the associated BFS
is simple, since xB = B^{-1} b = b. Thus, if we can make sure that b ≥ 0, we have the BFS
xB = b. Ensuring that b ≥ 0 is an easy task: if some bi < 0 in a constraint of the LPP, multiply
that constraint by (−1) before converting the LPP to standard form. So, without loss of
generality, assume b ≥ 0. Next, if we can make sure that an identity submatrix is present in the
standard form of the LPP, we are done. Sometimes this situation is present by default; however,
it may happen that the coefficient matrix of the standard form of the LPP does not contain an
identity submatrix. In the latter case we will artificially create an identity submatrix in the
system. For the time being we postpone this case and discuss the other case, in which an
identity submatrix is present in the standard form of the LPP.
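
The bookkeeping just described (flip the sign of any constraint with a negative right-hand side, then look for unit columns) is easy to mechanize. The following is a minimal sketch, not part of the original notes; the function name find_initial_basis, its return convention, and the small system used to exercise it are all illustrative choices.

```python
import numpy as np

def find_initial_basis(A, b):
    """Make b >= 0 by sign flips, then try to read an identity submatrix off A.

    Returns (A, b, basis) where basis[i] is a column index carrying the unit
    vector e_i, or None if no such column exists for row i.
    """
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    for i in range(len(b)):        # multiply any constraint with b_i < 0 by -1
        if b[i] < 0:
            A[i, :] *= -1.0
            b[i] *= -1.0
    m, n = A.shape
    basis = []
    for i in range(m):
        cols = [j for j in range(n) if np.allclose(A[:, j], np.eye(m)[:, i])]
        basis.append(cols[0] if cols else None)
    return A, b, basis

# A small hypothetical system: the second right-hand side is negative, and
# after the sign flip only the first unit column is available.
A = np.array([[1., 2., 1., 0.],
              [3., -1., 0., 1.]])
b = np.array([4., -6.])
A, b, basis = find_initial_basis(A, b)
print(b)        # -> [4. 6.]
print(basis)    # -> [2, None]: row 2 has no unit column, so an artificial variable would be needed
```

A None entry signals exactly the situation postponed above: the identity has to be completed artificially, as is done later in Example 3.3.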

Example 3.2: Consider the LPP


max z = 3x1 + 5x2 + 4x3
subject to 2x1 + 3x2 ≤ 8
3x1 + 2x2 + 4x3 ≤ 15
2x2 + 5x3 ≤ 10
x1 , x2 , x3 ≥ 0.
 
Observe that b = (8, 15, 10)^T ≥ 0. Adding slack variables to the constraints and associating
cost zero with them in the objective function, we obtain

max z = 3x1 + 5x2 + 4x3 + 0x4 + 0x5 + 0x6
subject to 2x1 + 3x2 + x4 = 8
           3x1 + 2x2 + 4x3 + x5 = 15
           2x2 + 5x3 + x6 = 10
           x1, ..., x6 ≥ 0.
Again observe that, corresponding to the variables x4, x5, x6, a 3 × 3 identity submatrix is
present in the standard form. Thereby, taking B = [a4, a5, a6] and xB = (x4, x5, x6)^T, we have
a BFS

    xB = B^{-1} b = b = (8, 15, 10)^T.
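
As a quick numerical check of this starting point (a sketch, not part of the notes; the variable names are illustrative), the initial BFS and the zj − cj row of the first tableau below can be reproduced as follows.

```python
import numpy as np

# Standard form of Example 3.2.
A = np.array([[2., 3., 0., 1., 0., 0.],
              [3., 2., 4., 0., 1., 0.],
              [0., 2., 5., 0., 0., 1.]])
b = np.array([8., 15., 10.])
c = np.array([3., 5., 4., 0., 0., 0.])

basis = [3, 4, 5]                   # the slack columns a4, a5, a6 form the identity matrix
B_inv = np.linalg.inv(A[:, basis])  # = I
x_B = B_inv @ b                     # = b, since B = I
reduced = c[basis] @ B_inv @ A - c  # z_j - c_j; reduces to -c_j because c_B = 0

print(x_B)       # -> [ 8. 15. 10.]
print(reduced)   # -> [-3. -5. -4.  0.  0.  0.]
```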


Once we have an initial BFS, we can perform simplex iterations to keep generating new BFSs that
continuously improve the objective value. To illustrate the simplex iterations, we consider the
above example again. The steps of the simplex iterations are shown in tabular form below.

  cj →                      3      5      4      0      0      0
  cB    vB    xB           y1     y2     y3     y4     y5     y6     ratio
   0    x4     8            2      3      0      1      0      0      8/3   →
   0    x5    15            3      2      4      0      1      0     15/2
   0    x6    10            0      2      5      0      0      1     10/2
  zB = 0      zj − cj →    −3     −5     −4      0      0      0
                                   ↑

In the above table, note that B = I, yj = B^{-1} aj = aj for all j, and zj − cj = cB^T aj − cj = −cj since cB = 0.


Here the notations vB and xB are used to denote the vectors in the basis and their corresponding
values, respectively. The last column on the right contains the legitimate ratios
{ xBi/yij : yij > 0 } for the j that corresponds to the entering variable.
Doing the pivoting, updating the BFS using formula (3) and the other entries using (2), we
obtain the following tables.

  cj →                      3      5      4      0      0      0
  cB    vB    xB           y1     y2     y3     y4     y5     y6     ratio
   5    x2    8/3          2/3     1      0     1/3     0      0       −
   0    x5    29/3         5/3     0      4    −2/3     1      0     29/12
   0    x6    14/3        −4/3     0      5    −2/3     0      1     14/15  →
  zB = 40/3   zj − cj →    1/3     0     −4     5/3     0      0
                                          ↑


  cB    vB    xB           y1     y2     y3     y4     y5     y6     ratio
   5    x2    8/3          2/3     1      0     1/3     0      0       4
   0    x5    89/15       41/15    0      0    −2/15    1    −4/5    89/41  →
   4    x3    14/15       −4/15    0      1    −2/15    0     1/5      −
  zB = 256/15 zj − cj →  −11/15    0      0    17/15    0     4/5
                            ↑


  cB    vB    xB           y1     y2     y3     y4      y5      y6
   5    x2    50/41         0      1      0    15/41  −10/41    8/41
   3    x1    89/41         1      0      0    −2/41   15/41  −12/41
   4    x3    62/41         0      0      1    −6/41    4/41    5/41
  zB = 765/41 zj − cj →     0      0      0    45/41   11/41   24/41

Since zj − cj ≥ 0 for all j, the optimality criterion is satisfied. The optimal solution is read from
the last table (the optimal table) as x1 = 89/41, x2 = 50/41, x3 = 62/41, and the optimal value
of the LPP is z = 765/41.
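
The whole sequence of tableaus above can be reproduced with a small loop. The sketch below is not part of the original notes: the function name simplex_max, the entering rule (most negative zj − cj), and the numerical tolerances are choices made here for illustration, and no degeneracy handling is attempted.

```python
import numpy as np

def simplex_max(A, b, c, basis):
    """Simplex iterations for max c^T x, Ax = b, x >= 0, from a given starting basis."""
    A, b, c = (np.asarray(M, dtype=float) for M in (A, b, c))
    basis = list(basis)
    while True:
        B_inv = np.linalg.inv(A[:, basis])
        x_B = B_inv @ b
        reduced = c[basis] @ B_inv @ A - c       # the z_j - c_j row
        j = int(np.argmin(reduced))              # candidate entering column
        if reduced[j] >= -1e-9:                  # optimality: all z_j - c_j >= 0
            return basis, x_B, float(c[basis] @ x_B)
        y = B_inv @ A[:, j]
        if np.all(y <= 1e-9):                    # Theorem 3.1 below: the LPP is unbounded
            raise ValueError("LPP is unbounded")
        ratios = [x_B[i] / y[i] if y[i] > 1e-9 else np.inf for i in range(len(b))]
        r = int(np.argmin(ratios))               # leaving row by the minimum-ratio test
        basis[r] = j                             # pivot: column j replaces the r-th basic column

# Example 3.2 in standard form, starting from the slack basis [a4, a5, a6].
A = [[2, 3, 0, 1, 0, 0], [3, 2, 4, 0, 1, 0], [0, 2, 5, 0, 0, 1]]
b = [8, 15, 10]
c = [3, 5, 4, 0, 0, 0]
basis, x_B, z = simplex_max(A, b, c, [3, 4, 5])
print(basis)   # -> [1, 0, 2], i.e. the basic variables are x2, x1, x3
print(x_B)     # -> about [1.2195, 2.1707, 1.5122] = (50/41, 89/41, 62/41)
print(z)       # -> about 18.6585 = 765/41
```

With this entering rule the iterations visit exactly the bases of the three pivots shown above and stop at the optimal table.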
It is worth noting here that although the presence of an identity submatrix in the system eases
the process of finding an initial BFS, it is not absolutely essential to have the identity matrix in
the system. The main purpose is to obtain an initial BFS of the given LPP by whatever
mechanism is possible. For the sake of convenience, we always prefer to begin with an identity
matrix.
Unlike the previous example, it is possible that the legitimate variables (original, slack and
surplus) in the standard form of the LPP fail to provide an identity submatrix. In such cases we
forcefully, or artificially, create an identity submatrix by introducing additional variables
wherever they are absolutely required. These additional variables are assumed to be
non-negative and are called artificial variables.

Example 3.3: Consider the LPP with mixed constraints


max z = − 2x1 − x2
subject to 3x1 + x2 = 3
4x1 + 3x2 ≥ 6
x1 + 2x2 ≤ 4
x1 , x2 ≥ 0

The standard form of this LPP is given by

max z = −2x1 − x2 + 0x3 + 0x4
subject to 3x1 + x2 = 3
           4x1 + 3x2 − x3 = 6
           x1 + 2x2 + x4 = 4
           x1, x2, x3, x4 ≥ 0.

Now only the last column of the 3 × 3 identity matrix I is present, corresponding to x4. The
remaining two columns of I will be created using additional variables x5 ≥ 0 and x6 ≥ 0. These
two variables are called the artificial variables. Thus, we have the constraint system as follows:


3x1 + x2 + x5 = 3
4x1 + 3x2 − x3 + x6 = 6
x1 + 2x2 + x4 = 4
x1 , x2 , x3 , x4 , x5 , x6 ≥ 0

Now observe that, corresponding to the ordered set (x5, x6, x4), we have a 3 × 3 identity
submatrix in the above system. But one thing is clear: the fictitious/artificial variables have
been created only for convenience, and they should in no way contribute to the final optimal
solution (if it exists). It is therefore advisable to remove these variables from the basis, and from
the analysis, as soon as possible. The objective function is slightly modified to take care of this
aspect. However, before we work out the requisite modifications, we present the following
theorem, which indicates when the given LPP is unbounded.
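
To make the construction of the artificial columns concrete, here is a minimal sketch (not part of the notes; the variable names and the appending order are illustrative) that detects which unit columns are missing in the standard form of Example 3.3 and appends an artificial column for exactly those rows.

```python
import numpy as np

# Standard form of Example 3.3: columns of x1, x2, x3, x4.
A = np.array([[3., 1.,  0., 0.],
              [4., 3., -1., 0.],
              [1., 2.,  0., 1.]])
b = np.array([3., 6., 4.])

m, n = A.shape
basis = [None] * m
for i in range(m):
    for j in range(n):
        if np.allclose(A[:, j], np.eye(m)[:, i]):
            basis[i] = j
print(basis)        # -> [None, None, 3]: only e3 (the column of x4) is present

# Append an artificial column e_i for every row that still lacks a unit column.
artificial = []
for i in range(m):
    if basis[i] is None:
        A = np.hstack([A, np.eye(m)[:, [i]]])
        basis[i] = A.shape[1] - 1
        artificial.append(basis[i])

print(basis)        # -> [4, 5, 3]: identity basis (x5, x6, x4), so x_B = b = (3, 6, 4)
print(artificial)   # -> [4, 5]: the appended columns are the artificial variables x5 and x6
```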

Theorem 3.1: Suppose there exists a vector aj in A which is not in B (i.e., a nonbasic vector)
having zj − cj < 0, but the corresponding yj = B^{-1} aj has yij ≤ 0 for all i = 1, 2, ..., m. Then
the LPP is unbounded.

Proof: Let (xB, 0) be a BFS of the maximization LPP with basis matrix B = [b1, b2, ..., bm].
Then

    Σ_{i=1}^{m} xBi bi = b.

Let ξ > 0 be an arbitrary positive real number. Then

    Σ_{i=1}^{m} xBi bi − ξ aj + ξ aj = b.

Now aj = B yj = Σ_{i=1}^{m} yij bi. Thus

    Σ_{i=1}^{m} (xBi − ξ yij) bi + ξ aj = b,

implying

    Σ_{i=1}^{m} x̂Bi bi + ξ aj = b,

where x̂Bi = xBi − ξ yij ≥ 0 for all i, since yij ≤ 0 and ξ > 0.


This yields that the point with values x̂Bi in the basic positions, xj = ξ, and all other variables
zero is a feasible solution of the LPP. Writing this solution as (x̂B, 0) with
x̂B = (x̂B1, ..., x̂Bm, ξ) and the corresponding costs ĉB = (cB1, ..., cBm, cj), the objective
function value at (x̂B, 0) is

    ẑB = ĉB^T x̂B
       = Σ_{i=1}^{m+1} ĉBi x̂Bi
       = Σ_{i=1}^{m} cBi (xBi − ξ yij) + cj ξ
       = zB − ξ Σ_{i=1}^{m} cBi yij + cj ξ
       = zB − ξ (zj − cj).

Consequently, ẑB − zB = −ξ (zj − cj).


Now, it is given that zj − cj < 0, so the right-hand side of the above expression is positive, and
since ξ > 0 is arbitrary, ẑB − zB → +∞ as ξ → +∞. Thus, while remaining in the feasible set of
the given LPP (note that (x̂B, 0) is a feasible solution for every value of ξ), we can drive the
objective function value to infinity, so the LPP is unbounded.

The proof of the above theorem is fairly constructive. Not only does it identify the unbounded
case of the LPP, it also tells us how to construct a feasible solution at which any desired
objective function value is attained. In other words, if we know that the given LPP is
unbounded, we can make the objective function value as large as desired (for maximization)
while remaining in the feasible set of the LPP. The above proof helps us construct the feasible
point of the LPP at which any desired objective value is attainable. We will say more about this
in the next lectures.
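
The construction in the proof is easy to see on a toy problem. The LP below is a hypothetical example chosen here (it is not from the notes): max x1 + x2 subject to x1 − x2 ≤ 1, x1, x2 ≥ 0, written in standard form with slack x3 and starting basis B = [a3]. The nonbasic column a2 has z2 − c2 = −1 < 0 and y2 = (−1) ≤ 0, so Theorem 3.1 applies, and walking along the ray used in the proof keeps the point feasible while the objective grows like ξ.

```python
import numpy as np

# Toy standard form: max x1 + x2 subject to x1 - x2 + x3 = 1, x >= 0.
A = np.array([[1., -1., 1.]])
b = np.array([1.])
c = np.array([1., 1., 0.])

basis = [2]                              # B = [a3], x_B = (1,)
B_inv = np.linalg.inv(A[:, basis])
x_B = B_inv @ b

j = 1                                    # nonbasic column a2
y_j = B_inv @ A[:, j]                    # = (-1,)  <= 0
z_minus_c = c[basis] @ y_j - c[j]        # = -1 < 0, so Theorem 3.1 applies

# The ray of the proof: basic values x_B - xi * y_j, entering variable x_j = xi.
for xi in [1.0, 10.0, 1000.0]:
    x = np.zeros(3)
    x[basis] = x_B - xi * y_j
    x[j] = xi
    print(xi, bool(np.allclose(A @ x, b)), float(c @ x))
    # feasible for every xi, and the objective value equals xi -> +infinity
```

Each printed line confirms feasibility (Ax = b with x ≥ 0) while the objective value grows without bound, exactly as ẑB − zB = −ξ(zj − cj) predicts.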

Copyright © Reserved, IIT Delhi
