AIMMS Modeling Guide - Linear Programming Tricks

This file contains only one chapter of the book. For a free download of the complete book in PDF format, please visit www.aimms.com or order your hardcopy at www.lulu.com/aimms.

Aimms 3.12

Copyright © 1993–2011 by Paragon Decision Technology B.V. All rights reserved.

Paragon Decision Technology B.V., Schipholweg 1, 2034 LS Haarlem, The Netherlands. Tel.: +31 23 5511512, Fax: +31 23 5511517.
Paragon Decision Technology Inc., 500 108th Avenue NE, Ste. #1085, Bellevue, WA 98004, USA. Tel.: +1 425 458 4024, Fax: +1 425 458 4025.
Paragon Decision Technology Pte. Ltd., 80 Raffles Place, UOB Plaza 1, Level 36-01, Singapore 048624. Tel.: +65 9640 4182.

Email: info@aimms.com WWW: www.aimms.com

Aimms is a registered trademark of Paragon Decision Technology B.V. IBM ILOG CPLEX is a registered trademark of IBM Corporation. GUROBI is a registered trademark of Gurobi Optimization, Inc. KNITRO is a registered trademark of Ziena Optimization, Inc. XPRESS-MP is a registered trademark of FICO Fair Isaac Corporation. Mosek is a registered trademark of Mosek ApS. Windows and Excel are registered trademarks of Microsoft Corporation. TEX, LATEX, and AMS-LATEX are trademarks of the American Mathematical Society. Lucida is a registered trademark of Bigelow & Holmes Inc. Acrobat is a registered trademark of Adobe Systems Inc. Other brands and their products are trademarks of their respective holders.

Information in this document is subject to change without notice and does not represent a commitment on the part of Paragon Decision Technology B.V. The software described in this document is furnished under a license agreement and may only be used and copied in accordance with the terms of the agreement. The documentation may not, in whole or in part, be copied, photocopied, reproduced, translated, or reduced to any electronic medium or machine-readable form without prior consent, in writing, from Paragon Decision Technology B.V.

Paragon Decision Technology B.V. makes no representation or warranty with respect to the adequacy of this documentation or the programs which it describes for any particular purpose or with respect to its adequacy to produce any particular result. In no event shall Paragon Decision Technology B.V., its employees, its contractors or the authors of this documentation be liable for special, direct, indirect or consequential damages, losses, costs, charges, claims, demands, or claims for lost profits, fees or expenses of any nature or kind. In addition to the foregoing, users should recognize that all complex software systems and their documentation contain errors and omissions. The authors, Paragon Decision Technology B.V. and its employees, and its contractors shall not be responsible under any circumstances for providing information or corrections to errors and omissions discovered at any time in this book or the software it describes, whether or not they are aware of the errors or omissions. The authors, Paragon Decision Technology B.V. and its employees, and its contractors do not recommend the use of the software described in this book for applications in which errors or omissions could threaten life, injury or significant loss.

This documentation was typeset by Paragon Decision Technology B.V. using LATEX and the Lucida font family.

Part II

General Optimization Modeling Tricks

Chapter 6 Linear Programming Tricks

This chapter explains several tricks that help to transform some models with special, for instance nonlinear, features into conventional linear programming models. Since the fastest and most powerful solution methods are those for linear programming models, it is often advisable to use this format instead of solving a nonlinear or integer programming model where possible.

The linear programming tricks in this chapter are not discussed in any particular reference, but are scattered throughout the literature. Several tricks can be found in [Wi90]. Other tricks are referenced directly.

Throughout this chapter the following general statement of a linear programming model is used:

Minimize:

    Σ_{j∈J} c_j x_j

Subject to:

    Σ_{j∈J} a_ij x_j ⋛ b_i    ∀i ∈ I
    x_j ≥ 0                   ∀j ∈ J

In this statement, the c_j's are referred to as cost coefficients, the a_ij's are referred to as constraint coefficients, and the b_i's are referred to as requirements. The symbol ⋛ denotes any of ≤, =, or ≥ constraints. A maximization model can always be written as a minimization model by multiplying the objective by (−1) and minimizing it.

6.1 Absolute values


Consider the following model statement:

Minimize:

    Σ_{j∈J} c_j |x_j|,    c_j > 0

Subject to:

    Σ_{j∈J} a_ij x_j ⋛ b_i    ∀i ∈ I
    x_j free                  ∀j ∈ J

Instead of the standard cost function, a weighted sum of the absolute values of the variables is to be minimized. To begin with, a method to remove these absolute values is explained, and then an application of such a model is given.

The presence of absolute values in the objective function means it is not possible to directly apply linear programming. The absolute values can be avoided by replacing each x_j and |x_j| as follows:

    x_j   = x_j⁺ − x_j⁻
    |x_j| = x_j⁺ + x_j⁻
    x_j⁺, x_j⁻ ≥ 0

The linear program of the previous paragraph can then be rewritten as follows.

Minimize:

    Σ_{j∈J} c_j (x_j⁺ + x_j⁻),    c_j > 0

Subject to:

    Σ_{j∈J} a_ij (x_j⁺ − x_j⁻) ⋛ b_i    ∀i ∈ I
    x_j⁺, x_j⁻ ≥ 0                      ∀j ∈ J

The optimal solutions of both linear programs are the same if, for each j, at least one of the values x_j⁺ and x_j⁻ is zero. In that case, x_j = x_j⁺ when x_j ≥ 0, and x_j = −x_j⁻ when x_j ≤ 0. Assume for a moment that the optimal values of x_j⁺ and x_j⁻ are both positive for a particular j, and let δ = min{x_j⁺, x_j⁻}. Subtracting δ > 0 from both x_j⁺ and x_j⁻ leaves the value of x_j = x_j⁺ − x_j⁻ unchanged, but reduces the value of |x_j| = x_j⁺ + x_j⁻ by 2δ. This contradicts the optimality assumption, because the objective function value can be reduced by 2δc_j.

Sometimes x_j represents a deviation between the left- and the right-hand side of a constraint, such as in regression. Regression is a well-known statistical method of fitting a curve through observed data. One speaks of linear regression when a straight line is fitted.

Consider fitting a straight line through the points (v_j, w_j) in Figure 6.1. The coefficients a and b of the straight line w = av + b are to be determined. The coefficient a is the slope of the line, and b is the intercept with the w-axis. In general, these coefficients can be determined using a model of the following form:

Minimize:

    f(z)

Subject to:

    w_j = a v_j + b − z_j    ∀j ∈ J


[Figure 6.1: Linear regression. A line w = av + b with slope a and intercept (0, b) is fitted through the data points.]

In this model z_j denotes the difference between the value of a v_j + b proposed by the linear expression and the observed value, w_j. In other words, z_j is the error or deviation in the w direction. Note that in this case a, b, and z_j are the decision variables, whereas v_j and w_j are data. A function f(z) of the error variables must be minimized. There are different options for the objective function f(z).

Least-squares estimation is an often-used technique that fits a line such that the sum of the squared errors is minimized. The formula for the objective function is:

    f(z) = Σ_{j∈J} z_j²

It is apparent that quadratic programming must be used for least-squares estimation, since the objective is quadratic.

Least absolute deviations estimation is an alternative technique that minimizes the sum of the absolute errors. The objective function takes the form:

    f(z) = Σ_{j∈J} |z_j|

When the data contains a few extreme observations w_j, this objective is appropriate, because it is less influenced by extreme outliers than is least-squares estimation.

Least maximum deviation estimation is a third technique that minimizes the maximum error. This has an objective of the form:

    f(z) = max_{j∈J} |z_j|

This form can also be translated into a linear programming model, as explained in the next section.
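Of the three objectives, the least absolute deviations fit maps directly onto the variable-splitting trick from this section. The following sketch uses SciPy's `linprog`; the data points are invented for illustration and are not from the original text (four collinear points plus one outlier, so the fitted line should ignore the outlier):

```python
# Least absolute deviations (LAD) line fit as a linear program.
# Split each error z_j into z_j = zp_j - zm_j with zp_j, zm_j >= 0,
# and minimize sum(zp_j + zm_j), as described in Section 6.1.
import numpy as np
from scipy.optimize import linprog

v = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
w = np.array([1.0, 2.0, 3.0, 4.0, 10.0])   # last point is an outlier
n = len(v)

# Decision variables: [a, b, zp_0..zp_4, zm_0..zm_4]
c = np.concatenate([[0.0, 0.0], np.ones(2 * n)])

# Fit constraints: a*v_j + b + zp_j - zm_j = w_j for each j,
# so zp_j - zm_j equals the residual w_j - (a*v_j + b).
A_eq = np.zeros((n, 2 + 2 * n))
A_eq[:, 0] = v                      # coefficient of slope a
A_eq[:, 1] = 1.0                    # coefficient of intercept b
A_eq[:, 2:2 + n] = np.eye(n)        # zp_j
A_eq[:, 2 + n:] = -np.eye(n)        # -zm_j

bounds = [(None, None), (None, None)] + [(0, None)] * (2 * n)
res = linprog(c, A_eq=A_eq, b_eq=w, bounds=bounds)
a_opt, b_opt = res.x[0], res.x[1]
print(a_opt, b_opt, res.fun)        # fit w = v + 1; total absolute deviation 5
```

The optimal line passes exactly through the four collinear points, absorbing the outlier's deviation of 5 in the objective; a least-squares fit on the same data would be pulled toward the outlier.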


6.2 A minimax objective


Consider the model:

Minimize:

    max_{k∈K} Σ_{j∈J} c_kj x_j

Subject to:

    Σ_{j∈J} a_ij x_j ⋛ b_i    ∀i ∈ I
    x_j ≥ 0                   ∀j ∈ J

Such an objective, which requires a maximum to be minimized, is known as a minimax objective. For example, when K = {1, 2, 3} and J = {1, 2}, the objective is:

Minimize:

    max{ c_11 x_1 + c_12 x_2, c_21 x_1 + c_22 x_2, c_31 x_1 + c_32 x_2 }

An example of such a problem is least maximum deviation regression, explained in the previous section.

The minimax objective can be transformed by including an additional decision variable z, which represents the maximum cost:

    z = max_{k∈K} Σ_{j∈J} c_kj x_j

In order to establish this relationship, the following extra constraints must be imposed:

    Σ_{j∈J} c_kj x_j ≤ z    ∀k ∈ K

Now when z is minimized, these constraints ensure that z will be greater than, or equal to, Σ_{j∈J} c_kj x_j for all k. At the same time, the optimal value of z will be no greater than the maximum of all Σ_{j∈J} c_kj x_j, because z has been minimized. Therefore the optimal value of z will be both as small as possible and exactly equal to the maximum cost over K. The equivalent linear program is:

Minimize:

    z

Subject to:

    Σ_{j∈J} a_ij x_j ⋛ b_i    ∀i ∈ I
    Σ_{j∈J} c_kj x_j ≤ z     ∀k ∈ K
    x_j ≥ 0                  ∀j ∈ J

The problem of maximizing a minimum (a maximin objective) can be transformed in a similar fashion.
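The transformation above can be sketched with SciPy's `linprog`; the small cost matrix and the requirement x1 + x2 ≥ 4 below are invented for illustration:

```python
# Minimax objective: minimize max_k (c_k . x) subject to x1 + x2 >= 4, x >= 0.
# Introduce z, minimize z, and add c_k . x <= z for every k (Section 6.2).
import numpy as np
from scipy.optimize import linprog

C = np.array([[1.0, 0.0],    # cost row 1: x1
              [0.0, 1.0]])   # cost row 2: x2

# Variables: [x1, x2, z]; objective: minimize z
c = np.array([0.0, 0.0, 1.0])

# c_k . x - z <= 0 for each k, plus -(x1 + x2) <= -4 for the requirement
A_ub = np.vstack([np.hstack([C, -np.ones((2, 1))]),
                  [[-1.0, -1.0, 0.0]]])
b_ub = np.array([0.0, 0.0, -4.0])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 3)
print(res.fun)   # optimal z = 2, attained at x1 = x2 = 2
```

Minimizing max(x1, x2) subject to x1 + x2 ≥ 4 balances the two terms, so the optimum is z = 2 with x1 = x2 = 2, exactly as the extra constraints force.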


6.3 A fractional objective


Consider the following model:

Minimize:

    ( Σ_{j∈J} c_j x_j + α ) / ( Σ_{j∈J} d_j x_j + β )

Subject to:

    Σ_{j∈J} a_ij x_j ⋛ b_i    ∀i ∈ I
    x_j ≥ 0                   ∀j ∈ J

In this problem the objective is the ratio of two linear terms. It is assumed that the denominator (the expression Σ_{j∈J} d_j x_j + β) is either positive or negative over the entire feasible set of x_j. The constraints are linear, so that a linear program will be obtained if the objective can be transformed to a linear function. Such problems typically arise in financial planning models. Possible objectives include the rate of return, turnover ratios, accounting ratios and productivity ratios.

The following method for transforming the above model into a regular linear programming model is from Charnes and Cooper ([Ch62]). The main trick is to introduce variables y_j and t which satisfy y_j = t x_j. In the explanation below, it is assumed that the value of the denominator is positive. If it is negative, the directions in the inequalities must be reversed.

1. Rewrite the objective function in terms of t, where

       t = 1 / ( Σ_{j∈J} d_j x_j + β )

   and add this equality and the constraint t > 0 to the model. This gives:

   Minimize:   Σ_{j∈J} c_j x_j t + αt
   Subject to: Σ_{j∈J} a_ij x_j ⋛ b_i     ∀i ∈ I
               Σ_{j∈J} d_j x_j t + βt = 1
               t > 0
               x_j ≥ 0                    ∀j ∈ J

2. Multiply both sides of the original constraints by t (t > 0), and rewrite the model in terms of y_j and t, where y_j = x_j t. This yields the model:

   Minimize:   Σ_{j∈J} c_j y_j + αt
   Subject to: Σ_{j∈J} a_ij y_j ⋛ b_i t   ∀i ∈ I
               Σ_{j∈J} d_j y_j + βt = 1
               t > 0
               y_j ≥ 0                    ∀j ∈ J

3. Finally, temporarily allow t ≥ 0 instead of t > 0 in order to get a linear programming model. This linear programming model is equivalent to the fractional objective model stated above, provided t > 0 at the optimal solution. The values of the variables x_j in the optimal solution of the fractional objective model are obtained by dividing the optimal y_j by the optimal t.
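The three steps can be sketched on a tiny one-variable instance, invented for illustration: minimize (2x + 1)/(x + 1) subject to 0 ≤ x ≤ 3. The ratio is increasing in x (its derivative is 1/(x + 1)² > 0), so the optimum is x = 0 with value 1, which the transformed linear program reproduces:

```python
# Charnes-Cooper transformation of: minimize (2x + 1)/(x + 1), 0 <= x <= 3.
# Substitute t = 1/(x + 1) and y = t*x, giving the linear program:
#   minimize 2y + t  subject to  y + t = 1,  y - 3t <= 0,  y, t >= 0.
from scipy.optimize import linprog

res = linprog(c=[2.0, 1.0],              # minimize 2y + t
              A_ub=[[1.0, -3.0]],        # y <= 3t  (from x <= 3, multiplied by t)
              b_ub=[0.0],
              A_eq=[[1.0, 1.0]],         # y + t = 1 (normalized denominator)
              b_eq=[1.0],
              bounds=[(0, None), (0, None)])
y_opt, t_opt = res.x
x_opt = y_opt / t_opt                    # recover x = y/t (valid since t > 0)
print(x_opt, res.fun)                    # x = 0, ratio value 1
```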

6.4 A range constraint


Consider the following model:

Minimize:

    Σ_{j∈J} c_j x_j

Subject to:

    d_i ≤ Σ_{j∈J} a_ij x_j ≤ e_i    ∀i ∈ I
    x_j ≥ 0                         ∀j ∈ J

When one of the constraints has both an upper and a lower bound, it is called a range constraint. Such a constraint occurs, for instance, when a minimum amount of a nutrient is required in a blend and, at the same time, there is a limited amount available.

The most obvious way to model such a range constraint is to replace it by two constraints:

    Σ_{j∈J} a_ij x_j ≥ d_i    and    Σ_{j∈J} a_ij x_j ≤ e_i    ∀i ∈ I

However, as each constraint is now stated twice, both must be modified when changes occur. A more elegant way is to introduce extra variables. By introducing new variables u_i, one can rewrite the constraints as follows:

    u_i + Σ_{j∈J} a_ij x_j = e_i    ∀i ∈ I

The following bound is then imposed on u_i:

    0 ≤ u_i ≤ e_i − d_i

It is clear that u_i = 0 results in

    Σ_{j∈J} a_ij x_j = e_i

while u_i = e_i − d_i results in

    Σ_{j∈J} a_ij x_j = d_i

A summary of the formulation is:

Minimize:

    Σ_{j∈J} c_j x_j

Subject to:

    u_i + Σ_{j∈J} a_ij x_j = e_i    ∀i ∈ I
    x_j ≥ 0                         ∀j ∈ J
    0 ≤ u_i ≤ e_i − d_i             ∀i ∈ I
6.5 A constraint with unknown-but-bounded coefficients


This section considers the situation in which the coefficients of a linear inequality constraint are unknown-but-bounded. Such an inequality in terms of uncertainty intervals is not a deterministic linear programming constraint. Any particular selection of values for these uncertain coefficients results in an unreliable formulation. In this section it will be shown how to transform the original nondeterministic inequality into a set of deterministic linear programming constraints.

Consider the constraint with unknown-but-bounded coefficients ã_j:

    Σ_{j∈J} ã_j x_j ≤ b

where ã_j assumes an unknown value in the interval [L_j, U_j], b is the fixed right-hand side, and x_j refers to the solution variables to be determined. Without loss of generality, the corresponding bounded uncertainty intervals can be written as [a_j − δ_j, a_j + δ_j], where a_j is the midpoint of [L_j, U_j].

Replacing the unknown coefficients by their midpoints results in a deterministic linear programming constraint that is not necessarily a reliable representation of the original nondeterministic inequality. Consider the simple linear program

Maximize:   x
Subject to: ãx ≤ 8

with the uncertainty interval ã ∈ [1, 3]. Using the midpoint a = 2 gives the optimal solution x = 4. However, if the true value of ã had been 3 instead of the midpoint value 2, then for x = 4 the constraint would have been violated by 50%.

Consider a set of arbitrary but fixed x_j values. The requirement that the constraint with unknown-but-bounded coefficients must hold for the unknown values of ã_j is certainly satisfied when the constraint holds for all possible values of ã_j in the interval [a_j − δ_j, a_j + δ_j]. In that case it suffices to consider only those values of ã_j for which the term ã_j x_j attains its maximum value. Note that this situation occurs when ã_j is at one of its bounds. The sign of x_j determines which bound needs to be selected:

    ã_j x_j ≤ a_j x_j + δ_j x_j    for x_j ≥ 0
    ã_j x_j ≤ a_j x_j − δ_j x_j    for x_j ≤ 0

Note that both inequalities can be combined into a single inequality in terms of |x_j|:

    ã_j x_j ≤ a_j x_j + δ_j |x_j|    ∀x_j

As a result of the above worst-case analysis, solutions to the previous formulation of the original constraint with unknown-but-bounded coefficients ã_j can now be guaranteed by writing the following inequality without reference to ã_j:

    Σ_{j∈J} a_j x_j + Σ_{j∈J} δ_j |x_j| ≤ b

In the above absolute value formulation it is usually too conservative to require that the original deterministic value of b cannot be loosened. Typically, a tolerance ε > 0 is introduced to allow solutions x_j to violate the original right-hand side b by an amount of at most ε·max(1, |b|). The term max(1, |b|) guarantees a positive increment of at least ε, even in case the right-hand side b is equal to zero. This modified right-hand side leads to the following ε-tolerance formulation, where a solution x_j is feasible whenever it satisfies the following inequality:

    Σ_{j∈J} a_j x_j + Σ_{j∈J} δ_j |x_j| ≤ b + ε·max(1, |b|)

This ε-tolerance formulation can be rewritten as a deterministic linear programming constraint by replacing the |x_j| terms with nonnegative variables y_j, and requiring that −y_j ≤ x_j ≤ y_j. It is straightforward to verify that these last two inequalities imply that y_j ≥ |x_j|. These two terms are likely to be equal when the underlying inequality becomes binding for optimal x_j values in a linear program. The final result is the following set of deterministic linear programming constraints, which captures the uncertainty reflected in the original constraint with unknown-but-bounded coefficients as presented at the beginning of this section:

    Σ_{j∈J} a_j x_j + Σ_{j∈J} δ_j y_j ≤ b + ε·max(1, |b|)
    −y_j ≤ x_j ≤ y_j
    y_j ≥ 0
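The section's own example (maximize x subject to ãx ≤ 8 with ã ∈ [1, 3], so midpoint a = 2 and half-width δ = 1) can be written in this final form. The sketch below uses SciPy's `linprog` with ε = 0, i.e. no loosening of the right-hand side:

```python
# Robust version of: maximize x subject to a~*x <= 8, a~ in [1, 3].
# Midpoint a = 2, half-width delta = 1, tolerance eps = 0 (Section 6.5):
#   2x + 1*y <= 8,  x - y <= 0,  -x - y <= 0,  y >= 0.
from scipy.optimize import linprog

# Variables: [x, y]; maximize x means minimize -x
res = linprog(c=[-1.0, 0.0],
              A_ub=[[2.0, 1.0],     # a*x + delta*y <= b
                    [1.0, -1.0],    # x - y <= 0
                    [-1.0, -1.0]],  # -x - y <= 0
              b_ub=[8.0, 0.0, 0.0],
              bounds=[(None, None), (0, None)])
print(res.x[0])   # x = 8/3, feasible even for the worst case a~ = 3
```

The robust optimum x = 8/3 is smaller than the unreliable midpoint solution x = 4, but it satisfies ãx ≤ 8 for every ã in [1, 3].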

6.6 A probabilistic constraint


This section considers the situation that occurs when the right-hand side of a linear constraint is a random variable. As will be shown, such a constraint can be rewritten as a purely deterministic constraint. Results pertaining to probabilistic constraints (also referred to as chance-constraints) were first published by Charnes and Cooper ([Ch59]).

Consider the following linear constraint:

    Σ_{j∈J} a_j x_j ≤ B

where J = {1, 2, ..., n} and B is a random variable. A solution x_j, j ∈ J, is feasible when the constraint is satisfied for all possible values of B. For open-ended distributions the right-hand side B can take on any value between −∞ and +∞, which means that there cannot be a feasible solution. If the distribution is not open-ended, suppose for instance that B_min ≤ B ≤ B_max, then the substitution of B_min for B results in a deterministic model. In most practical applications, however, it does not make sense for the above constraint to hold for all values of B.

Specifying that the constraint Σ_{j∈J} a_j x_j ≤ B must hold for all values of B is equivalent to stating that this constraint must hold with probability 1. In practical applications it is natural to allow for a small margin of failure. Such failure can be reflected by replacing the above constraint by an inequality of the form

    Pr[ Σ_{j∈J} a_j x_j ≤ B ] ≥ 1 − α

which is called a linear probabilistic constraint or a linear chance-constraint. Here Pr denotes the phrase "probability of", and α is a specified constant fraction (α ∈ [0, 1]), typically denoting the maximum error that is allowed.

Consider the density function f_B and a particular value of α as displayed in Figure 6.2.

[Figure 6.2: A density function f_B. The point B̂ on the B-axis cuts off a left-tail area of α, leaving an area of 1 − α to its right.]

A solution x_j, j ∈ J, is considered feasible for the above probabilistic constraint if and only if the term Σ_{j∈J} a_j x_j takes a value beneath the point B̂. In this case a fraction (1 − α) or more of all values of B will be larger than the value of the term Σ_{j∈J} a_j x_j. For this reason B̂ is called the critical value. The probabilistic constraint of the previous paragraph therefore has the following deterministic equivalent:

    Σ_{j∈J} a_j x_j ≤ B̂

The critical value B̂ can be determined by integrating the density function from −∞ until a point where the area underneath the curve becomes equal to α. This point is then the value of B̂. Note that the determination of B̂ as described in this paragraph is equivalent to using the inverse cumulative distribution function of f_B evaluated at α. From probability theory, the cumulative distribution function F_B is defined by F_B(x) = Pr[B ≤ x]. The value of F_B is the corresponding area underneath the density curve (a probability). Its inverse specifies, for each particular level of probability, the point B̂ for which the integral equals that probability level. The cumulative distribution function F_B and its inverse are illustrated in Figure 6.3.

[Figure 6.3: The cumulative distribution function F_B, mapping points on the B-axis to probabilities in [0, 1], and its inverse.]

As the previous paragraph indicated, the critical value B̂ can be determined through the inverse of the cumulative distribution function. Aimms supplies this function for a large number of distributions. For instance, when the underlying distribution is normal with mean 0 and standard deviation 1, the value of B̂ can be found as follows:

    B̂ = InverseCumulativeDistribution( Normal(0,1), α )

As an example, consider the constraint Σ_j a_j x_j ≤ B with a stochastic right-hand side. Let B = N(0, 1) and α = 0.05. Then the value of B̂ based on the inverse cumulative distribution is −1.645. By requiring that Σ_j a_j x_j ≤ −1.645, you make sure that the solution x_j is feasible for 95% of all instances of the random variable B.

Table 6.1 presents an overview of the four linear probabilistic constraints with stochastic right-hand sides, together with their deterministic equivalents, written here in terms of the inverse cumulative distribution function F_B⁻¹:

    Probabilistic constraint                  Deterministic equivalent
    Pr[ Σ_{j∈J} a_j x_j ≤ B ] ≥ 1 − α         Σ_{j∈J} a_j x_j ≤ F_B⁻¹(α)
    Pr[ Σ_{j∈J} a_j x_j ≤ B ] ≤ α             Σ_{j∈J} a_j x_j ≥ F_B⁻¹(1 − α)
    Pr[ Σ_{j∈J} a_j x_j ≥ B ] ≥ 1 − α         Σ_{j∈J} a_j x_j ≥ F_B⁻¹(1 − α)
    Pr[ Σ_{j∈J} a_j x_j ≥ B ] ≤ α             Σ_{j∈J} a_j x_j ≤ F_B⁻¹(α)

Table 6.1: Overview of linear probabilistic constraints
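The critical value from the N(0, 1), α = 0.05 example above can be reproduced with SciPy's inverse cumulative distribution function (`ppf`), used here as a stand-in for the Aimms `InverseCumulativeDistribution` call:

```python
# Critical value B^ for Pr[sum_j a_j*x_j <= B] >= 0.95 with B ~ N(0, 1):
# integrate the density from -infinity until the left-tail area equals
# alpha = 0.05; equivalently, evaluate the inverse CDF at alpha.
from scipy.stats import norm

alpha = 0.05
B_hat = norm.ppf(alpha)      # inverse cumulative distribution at alpha
print(round(B_hat, 3))       # -1.645, matching the value in the text
```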

6.7 Summary
This chapter presented a number of techniques to transform some special models into conventional linear programming models. It was shown that some curve fitting procedures can be modeled, and solved, as linear programming models by reformulating the objective. A method to reformulate objectives which incorporate absolute values was given. In addition, a trick was shown that makes it possible to incorporate a minimax objective in a linear programming model. For the case of a fractional objective with linear constraints, such as those that occur in financial applications, it was shown that these can be transformed into linear programming models. A method was demonstrated to specify a range constraint in a linear programming model. At the end of this chapter, it was shown how to reformulate constraints with a stochastic right-hand side as deterministic linear constraints.

Bibliography

[Ch59] A. Charnes and W.W. Cooper, Chance-constrained programming, Management Science 6 (1959), 73–80.

[Ch62] A. Charnes and W.W. Cooper, Programming with linear fractional functionals, Naval Research Logistics Quarterly 9 (1962), 181–186.

[Wi90] H.P. Williams, Model Building in Mathematical Programming, 3rd ed., John Wiley & Sons, Chichester, 1990.
