Lecture CB304 Midsem Part2
Sandip Khan
mountains and valleys (Fig. 14.1). For higher-dimensional problems, convenient images are not possible.
We have chosen to limit this chapter to the two-dimensional case. We have adopted
this approach because the essential features of multidimensional searches are often best
communicated visually.
Multidimensional unconstrained optimization
Techniques for multidimensional unconstrained optimization can be classified in a
number of ways. For purposes of the present discussion, we will divide them depending on
whether they require derivative evaluation. The approaches that do not require derivative
evaluation are called nongradient, or direct, methods. Those that require derivatives are
called gradient, or descent (or ascent), methods.

Nongradient, or direct, methods: those that do not require derivative evaluation.
FIGURE 14.1
The most tangible way to visualize two-dimensional searches is in the context of ascending a mountain (maximization) or descending into a valley (minimization). (a) A 2-D topographic map, with lines of constant f, that corresponds to the 3-D mountain in (b).
14.1 Random Search Method

As the name implies, this method repeatedly evaluates the function at randomly selected values of the independent variables. If a sufficient number of samples are conducted, the optimum will eventually be located.

EXAMPLE 14.1 Random Search Method
Problem Statement. Use a random number generator to locate the maximum of

f(x, y) = y − x − 2x² − 2xy − y²    (E14.1.1)

in the domain bounded by x = −2 to 2 and y = 1 to 3. The domain is depicted in Fig. 14.2. Notice that a single maximum of 1.25 occurs at x = −1 and y = 1.5.
Solution. Random number generators typically generate values between 0 and 1. If we designate such a number as r, the following formula can be used to generate x values randomly within a range between xl to xu:

x = xl + (xu − xl)r

For the present application, xl = −2 and xu = 2, and the formula is

x = −2 + (2 − (−2))r = −2 + 4r
This can be tested by substituting 0 and 1 to yield −2 and 2, respectively.
Similarly for y, a formula for the present example could be developed as

y = yl + (yu − yl)r = 1 + (3 − 1)r = 1 + 2r
FIGURE 14.2
Equation (E14.1.1) showing the maximum at x = −1 and y = 1.5.

The following Excel VBA macrocode uses the VBA random number function Rnd to generate (x, y) pairs. These are then substituted into Eq. (E14.1.1). The maximum value from among these random trials is stored in the variable maxf, and the corresponding x and y values in maxx and maxy, respectively.

maxf = -1E9
For j = 1 To n
  x = -2 + 4 * Rnd
  y = 1 + 2 * Rnd
  fn = y - x - 2 * x ^ 2 - 2 * x * y - y ^ 2
  If fn > maxf Then
    maxf = fn
    maxx = x
    maxy = y
  End If
Next j
A number of iterations yields progressively better estimates of the maximum; the original tabulates the best (x, y, f(x, y)) found after increasing numbers of iterations.
The results indicate that the technique homes in on the true maximum.
This simple brute force approach works even for discontinuous and nondifferentiable
functions. Furthermore, it always finds the global optimum rather than a local optimum. Its
major shortcoming is that as the number of independent variables grows, the implementation effort required can become onerous.
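A Python sketch equivalent to the VBA macro above (the sample count n and the fixed seed are arbitrary choices made here for reproducibility; they are not part of the original):

```python
import random

def f(x, y):
    # Objective function from Eq. (E14.1.1)
    return y - x - 2 * x**2 - 2 * x * y - y**2

def random_search(n=20000, seed=1):
    rng = random.Random(seed)
    maxf, maxx, maxy = -1e9, None, None
    for _ in range(n):
        x = -2 + 4 * rng.random()  # x drawn from [-2, 2]
        y = 1 + 2 * rng.random()   # y drawn from [1, 3]
        fn = f(x, y)
        if fn > maxf:              # keep the best sample seen so far
            maxf, maxx, maxy = fn, x, y
    return maxx, maxy, maxf

x, y, fmax = random_search()
print(x, y, fmax)  # homes in on the maximum f = 1.25 near (-1, 1.5)
```

With enough samples the estimate approaches the true optimum, but the number of samples needed grows quickly with the number of independent variables, which is the shortcoming noted above.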
Univariate Search Method

Change one variable at a time while the others are held constant, so that the problem reduces to a sequence of one-dimensional searches. Start at point 1 and move along the x axis with y constant to the maximum at point 2. Next, move along the y axis with x constant to point 3. Continue this process generating points 4, 5, 6, etc.

FIGURE 14.3
A graphical depiction of how a univariate search is conducted.
When the partial derivatives can be written down, each one-dimensional step can even be done analytically: set ∂f/∂x₁ = 0 and solve for x₁ with x₂ held constant, then set ∂f/∂x₂ = 0 and solve for x₂ with x₁ held constant, alternating until the iterates settle. A worked example of this kind converged as follows:

x₁      x₂      y
1       0.578   15.42
1.51    0.46    14.7
1.62    0.45    14.76
1.64    0.45    14.75
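The univariate pattern can be sketched generically in Python. Since the worked example's function did not survive the scan, the sketch applies the method, purely for illustration, to the Example 14.1 function; the one-dimensional maximizations use a golden-section search, which is an implementation choice rather than anything prescribed by the text:

```python
import math

def f(x, y):
    # Example 14.1 function, reused here for illustration
    return y - x - 2 * x**2 - 2 * x * y - y**2

def golden_max(g, lo, hi, tol=1e-6):
    # Golden-section search for the maximum of a 1-D function g on [lo, hi]
    invphi = (math.sqrt(5) - 1) / 2
    a, b = lo, hi
    while b - a > tol:
        c, d = b - invphi * (b - a), a + invphi * (b - a)
        if g(c) > g(d):
            b = d  # maximum lies in [a, d]
        else:
            a = c  # maximum lies in [c, b]
    return (a + b) / 2

def univariate_search(x, y, sweeps=20):
    for _ in range(sweeps):
        x = golden_max(lambda t: f(t, y), -2, 2)  # hold y, search along x
        y = golden_max(lambda t: f(x, t), 1, 3)   # hold x, search along y
    return x, y

x, y = univariate_search(0.0, 2.0)
print(x, y, f(x, y))  # converges toward (-1, 1.5)
```

Each sweep halves the remaining error for this function, so a modest number of sweeps suffices; for narrow diagonal ridges, however, the axis-aligned steps become inefficient, which motivates the gradient methods below.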
14.2 Gradient Methods

FIGURE 14.6
The directional gradient is defined along an axis h that forms an angle θ with the x axis.
The directional derivative of f along h is

g′(0) = (∂f/∂x) cos θ + (∂f/∂y) sin θ

where the partial derivatives are evaluated at x = a and y = b.

Assuming that your goal is to gain the most elevation with the next step, the next logical question would be: what direction is the steepest ascent? The answer to this question is provided very neatly by what is referred to mathematically as the gradient, which is defined as
∇f = (∂f/∂x) i + (∂f/∂y) j

This vector is also referred to as "del f." It represents the directional derivative of f(x, y) at the point x = a and y = b.

Vector notation provides a concise means to generalize the gradient to n dimensions:

∇f(x) = [ ∂f/∂x₁(x)   ∂f/∂x₂(x)   · · ·   ∂f/∂xₙ(x) ]ᵀ
How do we use the gradient? For the mountain-climbing problem, if we are interested in gaining elevation as quickly as possible, the gradient tells us locally what direction to move and how much we will gain by taking it. These questions are explored in more depth later in this chapter.
EXAMPLE 14.2 Using the Gradient to Evaluate the Path of Steepest Ascent
Problem Statement. Employ the gradient to evaluate the steepest ascent direction for the
function
f(x, y) = xy²
at the point (2, 2). Assume that positive x is pointed east and positive y is pointed north.
Solution. First, our elevation can be determined as

f(2, 2) = 2(2)² = 8

Next, the partial derivatives can be evaluated:

∂f/∂x = y² = 2² = 4

∂f/∂y = 2xy = 2(2)(2) = 8

which define the gradient ∇f = 4i + 8j, the direction of steepest ascent from (2, 2).
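These numbers are easy to verify in plain Python; the angle computation (how far north of east the steepest ascent points) is an added check, not part of the original text:

```python
import math

def f(x, y):
    return x * y**2

x0, y0 = 2.0, 2.0
elev = f(x0, y0)                # elevation at (2, 2)
dfdx = y0**2                    # ∂f/∂x = y²
dfdy = 2 * x0 * y0              # ∂f/∂y = 2xy
theta = math.atan2(dfdy, dfdx)  # angle of the gradient from the x axis

print(elev, dfdx, dfdy)         # 8.0 4.0 8.0
print(math.degrees(theta))      # about 63.4 degrees north of east
```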
When the partial derivatives are difficult or inconvenient to compute analytically, both the gradient and the determinant of the Hessian can be evaluated numerically. In most cases, the approach introduced in Sec. 6.3.3 for the modified secant method is employed. That is, the independent variables can be perturbed slightly to generate the required partial derivatives. For example, if a centered-difference approach is adopted, they can be computed as

∂f/∂x = [f(x + δx, y) − f(x − δx, y)] / (2δx)

∂f/∂y = [f(x, y + δy) − f(x, y − δy)] / (2δy)

∂²f/∂y² = [f(x, y + δy) − 2f(x, y) + f(x, y − δy)] / δy²    (14.8)

∂²f/∂x∂y = [f(x + δx, y + δy) − f(x + δx, y − δy) − f(x − δx, y + δy) + f(x − δx, y − δy)] / (4δxδy)    (14.9)

where δ is some small fractional value.

Note that the methods employed in commercial software packages also use forward differences.
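The centered-difference formulas translate directly into code. This sketch treats δ as a fractional perturbation, as in the text, and checks the results against the Example 14.3 function, whose exact partials at (−1, 1) are 6 and −6 and whose mixed second partial is 2 everywhere:

```python
def f(x, y):
    # Example 14.3 function, used only to exercise the formulas
    return 2 * x * y + 2 * x - x**2 - 2 * y**2

def num_gradient(f, x, y, delta=1e-5):
    # Centered first differences with fractional perturbations
    dx = delta * (abs(x) if x else 1.0)
    dy = delta * (abs(y) if y else 1.0)
    dfdx = (f(x + dx, y) - f(x - dx, y)) / (2 * dx)
    dfdy = (f(x, y + dy) - f(x, y - dy)) / (2 * dy)
    return dfdx, dfdy

def num_mixed(f, x, y, delta=1e-4):
    # Centered difference for the mixed partial, Eq. (14.9)
    dx = dy = delta
    return (f(x + dx, y + dy) - f(x + dx, y - dy)
            - f(x - dx, y + dy) + f(x - dx, y - dy)) / (4 * dx * dy)

print(num_gradient(f, -1.0, 1.0))  # close to (6, -6)
print(num_mixed(f, -1.0, 1.0))     # close to 2
```

Because this f is quadratic, the centered differences are exact up to roundoff; for general functions the error shrinks with δ² until roundoff takes over.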
Steepest Ascent Method
Ø Walk a short distance along the gradient direction.
Ø Re-evaluate the gradient and walk another short distance.
Ø By repeating the process you would eventually get to the top of the hill.
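The three steps above can be sketched directly, assuming an analytic gradient is available; the function is the one from Example 14.3 below, and the fixed step length is an arbitrary "short distance" chosen here for illustration:

```python
def grad_f(x, y):
    # Gradient of f(x, y) = 2xy + 2x - x^2 - 2y^2 (the Example 14.3 function)
    return 2 * y + 2 - 2 * x, 2 * x - 4 * y

def steepest_ascent(x, y, step=0.02, iters=500):
    for _ in range(iters):
        gx, gy = grad_f(x, y)                # re-evaluate the gradient
        x, y = x + step * gx, y + step * gy  # walk a short distance along it
    return x, y

print(steepest_ascent(-1.0, 1.0))  # approaches the maximum at (2, 1)
```

Repeatedly re-evaluating the gradient and taking a short step does reach the top of the hill, but choosing the step length is the catch; the alternate formulation that follows folds the step choice into a one-dimensional search along the gradient.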
Alternate Way

Starting at (x0, y0), the coordinates of any point in the gradient direction can be expressed as

x = x0 + (∂f/∂x)h    (14.10)
y = y0 + (∂f/∂y)h    (14.11)

where h is distance along the h axis. For example, suppose x0 = 1 and y0 = 2 and ∇f = 3i + 4j, as shown in Fig. 14.10. The coordinates of any point along the h axis are given by

x = 1 + 3h    (14.12)
y = 2 + 4h    (14.13)

FIGURE 14.10
The relationship between an arbitrary direction h and the x and y coordinates.

The following example illustrates how we can use these transformations to convert a two-dimensional function of x and y into a one-dimensional function in h.
EXAMPLE 14.3 Developing a 1-D Function Along the Gradient Direction

Problem Statement. Suppose we have the following two-dimensional function:

f(x, y) = 2xy + 2x − x² − 2y²

Develop a one-dimensional version of this equation along the gradient direction at the point x = −1 and y = 1.

Solution. The partial derivatives can be evaluated at (−1, 1):

∂f/∂x = 2y + 2 − 2x = 2(1) + 2 − 2(−1) = 6

∂f/∂y = 2x − 4y = 2(−1) − 4(1) = −6

Therefore, the gradient vector is

∇f = 6i − 6j

To find the maximum, we could search along the gradient direction, that is, along an h axis running along the direction of this vector. The function can be expressed along this axis as

f(x0 + (∂f/∂x)h, y0 + (∂f/∂y)h) = f(−1 + 6h, 1 − 6h)
  = 2(−1 + 6h)(1 − 6h) + 2(−1 + 6h) − (−1 + 6h)² − 2(1 − 6h)²

where the partial derivatives are evaluated at x = −1 and y = 1.

By combining terms, we develop a one-dimensional function g(h) that maps f(x, y) along the h axis:

g(h) = −180h² + 72h − 7

Now that we have developed a function along the path of steepest ascent, we can explore how to answer the second question. That is, how far along this path do we travel?
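The combination of terms can be spot-checked numerically: f evaluated along the gradient line and the collapsed one-dimensional g(h) should agree for any h:

```python
def f(x, y):
    return 2 * x * y + 2 * x - x**2 - 2 * y**2

def g(h):
    return -180 * h**2 + 72 * h - 7

# f along the gradient line through (-1, 1) must reproduce g(h)
for h in (0.0, 0.1, 0.2, 0.5):
    assert abs(f(-1 + 6 * h, 1 - 6 * h) - g(h)) < 1e-9
print("g(h) matches f along the gradient direction")
```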
FIGURE 14.11
The method of optimal steepest ascent; the table lists successive points of the search:

x       y
−1      1
0.2     −0.2
1.4     1

Because g(h) = −180h² + 72h − 7 is maximized where g′(h) = −360h + 72 = 0, the optimal step size is h* = 0.2. The (x, y) coordinates corresponding to this point are

x = −1 + 6(0.2) = 0.2
y = 1 − 6(0.2) = −0.2

This step is depicted in Fig. 14.11 as the move from point 0 to 1.

It can be shown that the method of steepest descent is linearly convergent. Further, it tends to move very slowly along long, narrow ridges. This is because the new gradient at each maximum point will be perpendicular to the original direction. Thus, the technique takes many small steps criss-crossing the direct route to the summit. Hence, although it is reliable, there are other approaches that converge much more rapidly, particularly in the vicinity of an optimum. The remainder of the section is devoted to such methods.
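Putting Example 14.3 together gives the method of optimal steepest ascent: at each point, maximize f along the gradient ray before stepping. The golden-section line search and the bracket h in [0, 2] below are implementation choices made here, not prescribed by the text (for this quadratic the optimal step always falls inside that bracket):

```python
import math

def f(x, y):
    return 2 * x * y + 2 * x - x**2 - 2 * y**2

def grad(x, y):
    return 2 * y + 2 - 2 * x, 2 * x - 4 * y

def golden_max(g, lo, hi, tol=1e-8):
    # Golden-section search for the maximum of g on [lo, hi]
    invphi = (math.sqrt(5) - 1) / 2
    a, b = lo, hi
    while b - a > tol:
        c, d = b - invphi * (b - a), a + invphi * (b - a)
        if g(c) > g(d):
            b = d
        else:
            a = c
    return (a + b) / 2

def optimal_steepest_ascent(x, y, iters=50):
    for _ in range(iters):
        gx, gy = grad(x, y)
        # best step along the gradient ray (bracket chosen for this problem)
        h = golden_max(lambda t: f(x + gx * t, y + gy * t), 0.0, 2.0)
        x, y = x + gx * h, y + gy * h
    return x, y

print(optimal_steepest_ascent(-1.0, 1.0, iters=1))  # first step near (0.2, -0.2)
print(optimal_steepest_ascent(-1.0, 1.0))           # converges to the maximum (2, 1)
```

The first iteration reproduces the hand calculation (h* = 0.2, landing at (0.2, −0.2)); later iterations show the criss-crossing pattern described above as the search zig-zags to (2, 1).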
The second step is merely implemented by repeating the procedure: the partial derivatives are evaluated at the new starting point (0.2, −0.2) to give the next search direction.

14.2.3 Advanced Gradient Approaches
PROBLEMS

Q1 Find the minimum value of

f(x, y) = (x − 3)² + (y − 2)²

starting at x = 1 and y = 1, using the steepest descent method with a stopping criterion of εs = 1%. Explain your results.

Q2 Perform one iteration of the steepest ascent method to locate the maximum of

f(x, y) = 3.5x + 2y + x² − x⁴ − 2xy − y²

using initial guesses x = 0 and y = 0. Employ bisection to find the optimal step size in the gradient search direction.

14.2 Repeat Example 14.2 for the following function at the point (0.8, 1.2):

f(x, y) = 2xy + 1.5y − 1.25x² − 2y² + 5

14.3 Given

f(x, y) = 2.25xy + 1.75y − 1.5x² − 2y²

construct and solve a system of linear algebraic equations that maximizes f(x). Note that this is done by setting the partial derivatives of f with respect to both x and y to zero.

14.4 (a) Start with an initial guess of x = 1 and y = 1 and apply two applications of the steepest ascent method to f(x, y) from Prob. 14.3. (b) Construct a plot from the results of (a) showing the path of the search.

14.5 Determine the gradient and Hessian for each of the following functions: (b) f(x, y, z) = x² + y² + 2z²; (c) f(x, y) = ln(x² + 2xy + 3y²)

14.6 Find the directional derivative of

f(x, y) = x² + 2y²

at x = 2 and y = 2 in the direction of h = 3i + 2j.

14.8 Perform one iteration of the optimal gradient steepest descent method to locate the minimum of

f(x, y) = −8x + x² + 12y + 4y² − 2xy

using initial guesses x = 0 and y = 0.

14.9 Develop a program using a programming or macro language to implement the random search method. Design the subprogram so that it is expressly designed to locate a maximum.
CONSTRAINED OPTIMIZATION

Linear Programming (Revelle et al. 1997)

15.1.1 Standard Form

The basic linear programming problem consists of two major parts: the objective function and a set of constraints. For a maximization problem, the objective function is generally expressed as

Maximize Z = c1x1 + c2x2 + · · · + cnxn    (15.1)

where cj = payoff of each unit of the jth activity that is undertaken and xj = magnitude of the jth activity. Thus, the value of the objective function, Z, is the total payoff due to the total number of activities, n.

The constraints can be represented generally as

ai1x1 + ai2x2 + · · · + ainxn ≤ bi    (15.2)

where aij = amount of the ith resource that is consumed for each unit of the jth activity and bi = amount of the ith resource that is available. That is, the resources are limited.
The second general type of constraint specifies that all activities must have a positive value,

xi ≥ 0    (15.3)
In the present context, this expresses the realistic notion that, for some problems, negative
activity is physically impossible (for example, we cannot produce negative goods).
Together, the objective function and the constraints specify the linear programming
problem. They say that we are trying to maximize the payoff for a number of activities
under the constraint that these activities utilize finite amounts of resources. Before show-
ing how this result can be obtained, we will first develop an example.
Ø The raw gas is processed into two grades of heating gas, regular and premium quality.
Ø Only one of the grades can be produced at a time.
Ø The facility is open for only 80 hr/week.
Ø There is limited on-site storage for each of the products (note that a metric ton, or tonne, is equal to 1000 kg).

Develop a linear programming formulation to maximize the profits for this operation.

Solution. The engineer operating this plant must decide how much of each gas to produce to maximize profits. If the amounts of regular and premium produced weekly are designated as x1 and x2, respectively, the total weekly profit can be calculated as

Total profit = 150x1 + 175x2

or written as a linear programming objective function,

Maximize Z = 150x1 + 175x2

The constraints can be developed in a similar fashion. For example, the total raw gas used can be computed as

Total gas used = 7x1 + 11x2
This total cannot exceed the available supply of 77 m³/week, so the constraint can be represented as

7x1 + 11x2 ≤ 77

The remaining constraints can be developed in a similar fashion, with the resulting total LP formulation given by
Maximize Z = 150x1 + 175x2    (maximize profit)

subject to

7x1 + 11x2 ≤ 77    (material constraint)
10x1 + 8x2 ≤ 80    (time constraint)
x1 ≤ 9    ("regular" storage constraint)
x2 ≤ 6    ("premium" storage constraint)
x1, x2 ≥ 0    (positivity constraints)

Note that the above set of equations constitutes the total LP formulation. The parenthetical explanations at the right have been appended to clarify the meaning of each term.

EXAMPLE 15.1 Setting Up the LP Problem

Problem Statement. The following problem is developed from the area of chemical or petroleum engineering. However, it is relevant to all areas of engineering that deal with producing products with limited resources.

Suppose that a gas-processing plant receives a fixed amount of raw gas each week. The raw gas is processed into two grades of heating gas, regular and premium quality. These grades of gas are in high demand (that is, they are guaranteed to sell) and yield different profits to the company. However, their production involves both time and on-site storage constraints. For example, only one of the grades can be produced at a time, and the facility is open for only 80 hr/week. Further, there is limited on-site storage for each of the products. All these factors are listed below (note that a metric ton, or tonne, is equal to 1000 kg):

Resource           Regular        Premium        Resource Availability
Raw gas            7 m³/tonne     11 m³/tonne    77 m³/week
Production time    10 hr/tonne    8 hr/tonne     80 hr/week
Storage            9 tonnes       6 tonnes
Profit             150/tonne      175/tonne
FIGURE 15.1
Graphical solution of the LP problem. (a) The constraints define a feasible solution space; note that one of the constraints is redundant. (b) The objective function is increased until it reaches the highest value that still touches the feasible space; graphically, the contour line of Z moves up and to the right until it touches the solution space at a single optimal point.
FIGURE 15.2
Aside from a single optimal solution (for example, Fig. 15.1b), there are three other possible outcomes of a linear programming problem: (a) alternative optima, (b) no feasible solution, and (c) an unbounded result.
3. No feasible solution. As in Fig. 15.2b, it is possible that the problem is set up so that there is no feasible solution. This can be due to dealing with an unsolvable problem or due to errors in setting up the problem. The latter can result if the problem is overconstrained to the point that no solution can satisfy all the constraints.
4. Unbounded problems. As in Fig. 15.2c, this usually means that the problem is underconstrained and therefore open-ended. As with the no-feasible-solution case, it can often arise from errors committed during problem specification.
Now let us suppose that our problem involves a unique solution. The graphical approach might suggest an enumerative strategy for hunting down the maximum: the optimum always occurs at one of the corner points where two constraints meet.
Unique solution: The maximum objective function intersects a single point.

Alternate solutions: Suppose that the objective function in the example had coefficients so that it was precisely parallel to one of the constraints. Then, rather than a single point, the problem would have an infinite number of optima corresponding to a line segment.
No feasible solution: It is possible that the problem is set up so that there is no feasible solution. This can be due to dealing with an unsolvable problem or due to errors in setting up the problem. The latter can result if the problem is overconstrained to the point that no solution can satisfy all the constraints.
Unbounded problems: This usually means that the problem is underconstrained and therefore open-ended. As with the no-feasible-solution case, it can often arise from errors committed during problem specification.
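For a problem with a unique solution, the enumerative strategy suggested above can be sketched for the gas-processing example: every candidate corner is the intersection of two constraint boundaries, and the best feasible corner is the optimum. This brute-force pure-Python sketch works well for two variables, though the work grows rapidly with problem size:

```python
from itertools import combinations

# Constraints a1*x1 + a2*x2 <= b for the gas-processing LP,
# with the positivity constraints written as -x <= 0.
constraints = [
    (7, 11, 77),   # raw gas
    (10, 8, 80),   # production time
    (1, 0, 9),     # regular storage
    (0, 1, 6),     # premium storage
    (-1, 0, 0),    # x1 >= 0
    (0, -1, 0),    # x2 >= 0
]

def feasible(x1, x2, tol=1e-9):
    return all(a1 * x1 + a2 * x2 <= b + tol for a1, a2, b in constraints)

best = None
# Each candidate corner is the intersection of two constraint boundaries.
for (a1, a2, b), (c1, c2, d) in combinations(constraints, 2):
    det = a1 * c2 - a2 * c1
    if abs(det) < 1e-12:
        continue  # parallel boundaries: no intersection point
    x1 = (b * c2 - a2 * d) / det
    x2 = (a1 * d - b * c1) / det
    if feasible(x1, x2):
        z = 150 * x1 + 175 * x2
        if best is None or z > best[0]:
            best = (z, x1, x2)

z, x1, x2 = best
print(round(x1, 3), round(x2, 3), round(z, 1))  # 4.889 3.889 1413.9
```

Running it recovers the corner where the raw-gas and production-time constraints intersect: x1 = 44/9 ≈ 4.889 tonnes of regular and x2 = 35/9 ≈ 3.889 tonnes of premium, for a weekly profit Z ≈ 1413.9.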