
Analysis of water resources systems using linear programming and the transportation method.

A report submitted to the

Department of Water Resources Engineering,

College of Engineering,

University of Duhok

Student name:
Moodle Email:
Year: 4th
Course: Analysis of Water Resources Systems
Course code: Ew4206

Instructor: Wa'el Abdul-Bari

Date: 6/7/2020
Contents

CHAPTER 1 | LINEAR PROGRAMMING (GRAPHICAL METHOD)
    INTRODUCTION TO LINEAR PROGRAMMING
    ABSTRACT
    PROCEDURE FOR SOLVING PROBLEMS BY THE GRAPHICAL METHOD
    EXAMPLE 1
    EXAMPLE 2

CHAPTER 2 | LINEAR PROGRAMMING (SIMPLEX METHOD)
    INTRODUCTION
    ABSTRACT
    EXAMPLE 1

CHAPTER 3 | MATLAB FOR SOLVING LINEAR PROGRAMMING PROBLEMS
    EXAMPLE

CHAPTER 4 | TRANSPORTATION METHOD (VOGEL APPROXIMATION METHOD)
    INTRODUCTION
    ABSTRACT
    APPLYING THE VOGEL APPROXIMATION METHOD
    EXAMPLE

REFERENCES
CHAPTER 1 | LINEAR PROGRAMMING (GRAPHICAL METHOD)

INTRODUCTION TO LINEAR PROGRAMMING


Problems that require maximizing or minimizing an objective function involving a large number of variables, which are restricted by a certain number of constraint equations or inequalities, come under the heading of mathematical programming problems. If the constraints are all linear equations and the number of variables is the same as the number of constraints, then there is the possibility of a unique solution, which can be obtained by solving a set of simultaneous equations. But if the number of constraints is more or less than the number of variables, then the methods of linear programming are most helpful. Such problems, with the objective function and the constraints all linear, appear widely in the fields of engineering and the applied sciences. All other problems, where the constraints are not all linear, fall in the category of non-linear programming.

Linear programming (LP) is the most commonly applied form of constrained optimization.
Constrained optimization is much harder than unconstrained optimization: you still have to
find the best point of the function, but now you also have to respect various constraints
while doing so. For example, you must guarantee that the optimum point does not have a
value above or below a prespecified limit when substituted into a given constraint function.
The constraints usually relate to limited resources. The simple methods you used in high
school to find peaks and valleys won’t work anymore: now the best solution (the optimum
point) may not occur at the top of a peak or at the bottom of a valley. The best solution
might occur halfway up a peak when a constraint prohibits movement farther up.
The main elements of any constrained optimization problem are:

 Variables (also called decision variables). The values of the variables are not
known when you start the problem. The variables usually represent things that
you can adjust or control, for example the rate at which to manufacture items.
The goal is to find values of the variables that provide the best value of the
objective function.

 Objective function. This is a mathematical expression that combines the


variables to express your goal. It may represent profit, for example. You will be
required to either maximize or minimize the objective function.

 Constraints. These are mathematical expressions that combine the variables to


express limits on the possible solutions. For example, they may express the idea
that the number of workers available to operate a particular machine is limited, or
that only a certain amount of steel is available per day.

 Variable bounds. Only rarely are the variables in an optimization problem


permitted to take on any value from minus infinity to plus infinity. Instead, the
variables usually have bounds. For example, zero and 1000 might bound the
production rate of widgets on a particular machine.

ABSTRACT

Graphical method of solving a linear programming problem (LPP): if the objective function in an LPP involves only two variables, the problem can be solved graphically. Each constraint of the problem is treated as an equation, even if it is of inequality type, and is then drawn on the graph; each will be a straight line. A straight line divides the plane into two parts. The region corresponding to an inequality constraint is the part, determined by the line, that can be identified by testing one suitable point of the region, for example (0, 0).

PROCEDURE FOR SOLVING PROBLEMS BY THE GRAPHICAL METHOD

If the line already passes through (0, 0), we choose some other suitable point and substitute it into the constraint to see whether it is satisfied. If so, the region of the inequality constraint is the part determined by the line that contains that suitable point. The common region satisfying all the constraints is the feasible region. We then draw the line representing the objective function for two suitable values of the objective function, which shows the direction in which the objective function increases or decreases. If the LPP is a maximization problem, we move the line representing the objective function in the direction in which it increases over the feasible region and obtain a point, or an entire edge, where the objective function is maximum. Similarly, we move in the decreasing direction for a minimization problem and obtain a point, or an entire edge, where the objective function value is minimum. The graphical method also shows whether the LPP has an unbounded solution or no solution.
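Since an optimum of a two-variable LPP, when one exists, is attained at a vertex (or along an edge) of the feasible region, the graphical procedure can be mimicked numerically by intersecting the constraint boundary lines and keeping the feasible intersection points. The sketch below is a minimal illustration in Python; the constraint data are borrowed from Example 1 of Chapter 2, since the figures for this chapter's own examples are not reproduced here.

```python
from itertools import combinations

# Constraints stored as (a, b, c), meaning a*x1 + b*x2 <= c.  The
# non-negativity conditions x1, x2 >= 0 are the last two entries.
constraints = [(-1, 2, 4), (3, 2, 14), (1, -1, 3), (-1, 0, 0), (0, -1, 0)]

def intersection(c1, c2):
    """Intersection of the two boundary lines, or None if parallel."""
    (a1, b1, r1), (a2, b2, r2) = c1, c2
    det = a1 * b2 - a2 * b1
    if det == 0:
        return None
    return ((r1 * b2 - r2 * b1) / det, (a1 * r2 - a2 * r1) / det)

def feasible(p, eps=1e-9):
    """A point is feasible when it satisfies every constraint."""
    return all(a * p[0] + b * p[1] <= c + eps for a, b, c in constraints)

# Candidate vertices: feasible intersections of pairs of boundary lines.
vertices = [p for c1, c2 in combinations(constraints, 2)
            if (p := intersection(c1, c2)) is not None and feasible(p)]

z_max = max(3 * x1 + 2 * x2 for x1, x2 in vertices)
print(z_max)  # 14.0 -- here the optimum is attained along an entire edge
```

Two distinct vertices attain z = 14 in this data, which is exactly the "entire edge" case described above.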

EXAMPLE 1

(The accompanying figure is not reproduced in this copy.) The feasible region would have to contain points lying simultaneously in the two regions indicated in the figure. This is impossible; hence the feasible region is empty, and in this case the LPP has no solution.
EXAMPLE 2
(The figure and worked solution for this example are not reproduced in this copy.)
CHAPTER 2 | LINEAR PROGRAMMING (SIMPLEX METHOD)

INTRODUCTION

The simplex method is the standard technique in linear programming for solving an optimization problem, typically one involving a function and several constraints expressed as inequalities. The inequalities define a polygonal region, and the solution is typically at one of its vertices. The simplex method is a systematic procedure for testing the vertices as possible solutions.

Many different methods have been proposed for solving linear programming problems, but the simplex method has proved to be the most effective. This technique nurtures the insight needed for a sound understanding of several approaches to other programming models, which are studied in subsequent chapters. The simplex method is applicable to any problem that can be formulated in terms of a linear objective function subject to a set of linear constraints. The method is often termed Dantzig's simplex method.

ABSTRACT

We distinguish between the simplex method, which starts with a linear program in standard form, and the simplex algorithm, which starts with a canonical form, consists of a sequence of
pivot operations, and forms the main subroutine of the simplex method. The first step of the
simplex method is the introduction into the standard form of certain artificial variables. The
resulting auxiliary problem is in canonical form. At this point the simplex algorithm is
employed. It consists of a sequence of pivot operations referred to as Phase I that produces a
succession of different canonical forms. The objective is to find a feasible solution if one
exists. If the final canonical form yields such a solution, the simplex algorithm is again
applied in a second succession of pivot operations referred to as Phase II. The objective is to
find an optimal feasible solution if one exists.

The following two conditions must be met before applying the simplex method:

1. The right-hand side of each constraint inequality should be non-negative. If a linear programming problem has a negative resource value, it should be converted into a positive value by multiplying both sides of the constraint inequality by –1 (which reverses the direction of the inequality).
2. The decision variables in the linear programming problem should be non-negative.

Thus, the simplex algorithm is efficient, since it considers only the few feasible solutions provided by the corner points in determining the optimal solution to the linear programming problem.

The actual task of solving large optimization problems is done by software implementations of the simplex method. The solution steps are:

1. Convert all constraints into equations:

a) (≤) constraints: g(xi) ≤ b becomes g(xi) + s = b, where s is a slack variable.

b) (≥) constraints: g(xi) ≥ b becomes g(xi) – s + a = b, where s is a surplus variable and a is an artificial variable.

c) (=) constraints: g(xi) = b becomes g(xi) + a = b, where a is an artificial variable.

Important: the right-hand side of the constraints must be non-negative (b ≥ 0); for example, 2x1 + x2 ≥ –1 is rewritten as –2x1 – x2 ≤ 1.

The objective function will take the following forms:

a) Maximize Z = ∑ ci xi + ∑ 0 sj – ∑ M ak

b) Minimize Z = ∑ ci xi + ∑ 0 sj + ∑ M ak

(i = 1, 2, … n) ; (j = 1, 2, … m) ; (k = 1, 2, … p) and (M → ∞).

2. Obtain a first solution by setting all decision variables equal to zero, xi = 0.

3. Check for optimality. The solution is optimal if the coefficients in the zj – cj row are all positive or zero (for Maximize Z) or all negative or zero (for Minimize Z).

EXAMPLE 1

Maximize z = 3x1 + 2x2

subject to

-x1 + 2x2 ≤ 4
3x1 + 2x2 ≤ 14
x1 – x2 ≤ 3

x1, x2 ≥ 0

Solution.

First, convert every inequality constraint in the LPP into an equality constraint, so that the problem can be written in standard form. This can be accomplished by adding a slack variable to each constraint. Slack variables are always added to less-than-type constraints.

Converting inequalities to equalities

-x1 + 2x2 + x3 = 4
3x1 + 2x2 + x4 = 14
x1 – x2 + x5 = 3
x1, x2, x3, x4, x5 ≥ 0

Where x3, x4 and x5 are slack variables.

Since slack variables represent unused resources, their contribution in the objective function is
zero. Including these slack variables in the objective function, we get

Maximize z = 3x1 + 2x2 + 0x3 + 0x4 + 0x5


Initial basic feasible solution

Now we assume that nothing can be produced. Therefore, the values of the decision variables
are zero.
x1 = 0, x2 = 0, z = 0

When we are not producing anything, obviously we are left with unused capacity
x3 = 4, x4 = 14, x5 = 3

We note that the current solution has three variables (slack variables x3, x4 and x5) with non-
zero solution values and two variables (decision variables x1 and x2) with zero values.
Variables with non-zero values are called basic variables. Variables with zero values are
called non-basic variables.

a11 = -1, a12 = 2, a13 = 1, a14 = 0, a15 = 0, b1 = 4


a21 = 3, a22 = 2, a23 = 0, a24 = 1, a25 = 0, b2 = 14
a31= 1, a32 = -1, a33 = 0, a34 = 0, a35 = 1, b3 = 3

Calculating values for the index row (zj – cj)

z1 – c1 = (0 X (-1) + 0 X 3 + 0 X 1) - 3 = -3
z2 – c2 = (0 X 2 + 0 X 2 + 0 X (-1)) - 2 = -2
z3 – c3 = (0 X 1 + 0 X 0 + 0 X 0) - 0 = 0
z4 – c4 = (0 X 0 + 0 X 1 + 0 X 0) - 0 = 0
z5 – c5 = (0 X 0 + 0 X 0 + 0 X 1) – 0 = 0
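Each zj – cj entry above is the dot product of the basic variables' objective coefficients (all zero here, since the initial basis consists of the slacks x3, x4, x5) with the corresponding column of the table, minus cj. A short check of this arithmetic in Python:

```python
# Columns a_j of the first table and the objective coefficients c_j.
columns = [[-1, 3, 1], [2, 2, -1], [1, 0, 0], [0, 1, 0], [0, 0, 1]]
c = [3, 2, 0, 0, 0]
cb = [0, 0, 0]  # objective coefficients of the basic variables x3, x4, x5

# z_j - c_j = (cb . a_j) - c_j for each column j.
index_row = [sum(cb_i * a for cb_i, a in zip(cb, col)) - cj
             for col, cj in zip(columns, c)]
print(index_row)  # [-3, -2, 0, 0, 0], matching the values above
```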


Choose the most negative value of zj – cj (i.e., –3). So the column under x1 is the key column.
Now find the minimum positive ratio of bi to the corresponding key-column element (the first row is excluded because its key-column element, –1, is negative):
Minimum (14/3, 3/1) = 3
So the x5 row is the key row.
Here, the pivot (key) element = 1 (the value at the point of intersection).
Therefore, x5 departs and x1 enters.

We obtain the elements of the next table using the following rules:

1. If all values of zj – cj are non-negative, the inclusion of any non-basic variable will not increase the value of the objective function; hence, the present solution maximizes the objective function. If there is more than one negative value, we choose as the entering basic variable the one for which zj – cj is least (most negative), as this gives the greatest rate of increase in profit.

2. The numbers in the replacing row may be obtained by dividing the key-row elements by the pivot element, and the numbers in the other two rows may be calculated by using the formula:

new number = old number – (corresponding no. of key row × corresponding no. of key column) / pivot element
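These rules can be checked in code: the single pivot identified above (key row = x5 row, key column = x1 column, pivot element = 1) should carry the first table into table 2. A minimal sketch with plain Python lists, not a full simplex implementation:

```python
# Rows of the first table, written as [a_i1, a_i2, a_i3, a_i4, a_i5, b_i].
table = [
    [-1, 2, 1, 0, 0, 4],    # x3 row
    [3, 2, 0, 1, 0, 14],    # x4 row
    [1, -1, 0, 0, 1, 3],    # x5 row (key row)
]

key_row, key_col = 2, 0          # x5 departs, x1 enters
pivot = table[key_row][key_col]  # = 1

new_table = []
for i, row in enumerate(table):
    if i == key_row:
        # Replacing row: key-row elements divided by the pivot element.
        new_table.append([v / pivot for v in row])
    else:
        # Other rows: new = old - (key-row no. * key-column no.) / pivot.
        factor = row[key_col] / pivot
        new_table.append([old - key * factor
                          for old, key in zip(row, table[key_row])])

for row in new_table:
    print(row)
# [0.0, 1.0, 1.0, 0.0, 1.0, 7.0]    (x3 row of table 2)
# [0.0, 5.0, 0.0, 1.0, -3.0, 5.0]   (x4 row of table 2)
# [1.0, -1.0, 0.0, 0.0, 1.0, 3.0]   (x1 row of table 2)
```

The printed rows reproduce exactly the table-2 values computed by hand below.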

Calculating values for table 2

x3 row

a11 = -1 – 1 X ((-1)/1) = 0
a12 = 2 – (-1) X ((-1)/1) = 1
a13 = 1 – 0 X ((-1)/1) = 1
a14 = 0 – 0 X ((-1)/1) = 0
a15 = 0 – 1 X ((-1)/1) = 1
b1 = 4 – 3 X ((-1)/1) = 7

x4 row


a21 = 3 – 1 X (3/1) = 0
a22 = 2 – (-1) X (3/1) = 5
a23 = 0 – 0 X (3/1) = 0
a24 = 1 – 0 X (3/1) = 1
a25 = 0 – 1 X (3/1) = -3
b2 = 14 – 3 X (3/1) = 5

x1 row

a31 = 1/1 = 1
a32 = -1/1 = -1
a33 = 0/1 = 0
a34 = 0/1 = 0
a35 = 1/1 = 1
b3 = 3/1 = 3

Calculating values for the index row (zj – cj)

z1 – c1 = (0 X 0 + 0 X 0 + 3 X 1) - 3 = 0
z2 – c2 = (0 X 1 + 0 X 5 + 3 X (-1)) – 2 = -5
z3 – c3 = (0 X 1 + 0 X 0 + 3 X 0) - 0 = 0
z4 – c4 = (0 X 0 + 0 X 1 + 3 X 0) - 0 = 0
z5 – c5 = (0 X 1 + 0 X (-3) + 3 X 1) – 0 = 3


Key column = x2 column


Minimum (7/1, 5/5) = 1
Key row = x4 row
Pivot element = 5
x4 departs and x2 enters.

Calculating values for table 3

x3 row

a11 = 0 – 0 X (1/5) = 0
a12 = 1 – 5 X (1/5) = 0
a13 = 1 – 0 X (1/5) = 1
a14 = 0 – 1 X (1/5) = -1/5
a15 = 1 – (-3) X (1/5) = 8/5
b1 = 7 – 5 X (1/5) = 6

x2 row

a21 = 0/5 = 0
a22 = 5/5 = 1
a23 = 0/5 = 0
a24 = 1/5
a25 = -3/5
b2 = 5/5 = 1

x1 row

a31 = 1 – 0 X (-1/5) = 1
a32 = -1 – 5 X (-1/5) = 0
a33 = 0 – 0 X (-1/5) = 0
a34 = 0 – 1 X (-1/5) = 1/5
a35 = 1 – (-3) X (-1/5) = 2/5
b3 = 3 – 5 X (-1/5) = 4

All zj – cj values in this table are non-negative, so the solution is optimal: x1 = 4, x2 = 1 (with slack x3 = 6), and Maximum z = 3 X 4 + 2 X 1 = 14.
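As a quick self-contained check (not part of the hand computation), the point read from the final table, x1 = 4 and x2 = 1, can be substituted back into the original constraints:

```python
# Solution read from the final simplex table.
x1, x2 = 4, 1

# All original constraints of the example are satisfied.
assert -x1 + 2 * x2 <= 4
assert 3 * x1 + 2 * x2 <= 14
assert x1 - x2 <= 3
assert x1 >= 0 and x2 >= 0

z = 3 * x1 + 2 * x2
print(z)                   # 14, the maximum value of the objective
print(4 - (-x1 + 2 * x2))  # 6, the unused slack x3 in the final table
```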


CHAPTER 3 | MATLAB FOR SOLVING LINEAR PROGRAMMING PROBLEMS

EXAMPLE

The winemaker example led us to the following problem:

Maximize z = 12x1 + 7x2

subject to

2x1 + x2 ≤ 10,000

3x1 + 2x2 ≤ 16,000

x1 ≥ 0, x2 ≥ 0.

If we define

f = (12, 7)T, A = [2 1; 3 2; –1 0; 0 –1], b = (10000, 16000, 0, 0)T

(the last two rows of A and b encode x1 ≥ 0 and x2 ≥ 0), this problem can be identified with the linear programming maximum problem associated with f, A, b. Likewise it can be identified with the linear programming minimum problem associated with −f, A, b.

Solution of linear programming minimum problems with MATLAB

MATLAB provides the command linprog to find the minimizer (solution point) x of a linear
programming minimum problem. Without equality constraint the syntax is

x=linprog(f,A,b)

If you also want to retrieve the minimal value fmin = min_x (fT x), type

[x,fmin]=linprog(f,A,b)

If both inequality and equality constraints are given, use the commands

x=linprog(f,A,b,Aeq,beq)

or

[x,fmin]=linprog(f,A,b,Aeq,beq)

Let’s solve our winemaker problem:

>> f=[-12;-7]; b=[10000;16000;0;0]; A=[2 1;3 2;-1 0;0 -1];
>> [x,fopt]=linprog(f,A,b)


Optimization terminated successfully.

x=

1.0e+003 *

3.99999999989665
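The printed solution is truncated above; only the first component, x1 ≈ 4000, is visible. As an independent cross-check (a stdlib-only Python sketch, not the MATLAB run itself), enumerating the corner points of the winemaker's feasible region recovers the full optimum x = (4000, 2000), with objective value 12·4000 + 7·2000 = 62000:

```python
from itertools import combinations

# Constraints a*x1 + b*x2 <= c, including x1, x2 >= 0.
cons = [(2, 1, 10000), (3, 2, 16000), (-1, 0, 0), (0, -1, 0)]

def corner(c1, c2):
    """Intersection of two constraint boundary lines (None if parallel)."""
    (a1, b1, r1), (a2, b2, r2) = c1, c2
    det = a1 * b2 - a2 * b1
    if det == 0:
        return None
    return ((r1 * b2 - r2 * b1) / det, (a1 * r2 - a2 * r1) / det)

# Keep the feasible corner points and take the best one.
points = [p for c1, c2 in combinations(cons, 2)
          if (p := corner(c1, c2)) is not None
          and all(a * p[0] + b * p[1] <= c + 1e-6 for a, b, c in cons)]

best = max(points, key=lambda p: 12 * p[0] + 7 * p[1])
print(best)  # (4000.0, 2000.0)
```

This agrees with the visible MATLAB output, x1 ≈ 4000 (printed as 1.0e+003 * 3.9999...).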


CHAPTER 4 | TRANSPORTATION METHOD (VOGEL APPROXIMATION METHOD)

INTRODUCTION

A special type of LPP, arising from the transport of goods from various origins to various destinations, is discussed in this section. Any other model conforming to the transportation type, seeking minimization of cost subject to availability constraints and demand constraints, is also termed a transportation-type problem. The method, based on the simplex method, takes an easy form with the help of loops in the transportation table.

Methods for an Initial BFS of a Transportation Problem

There are the following methods for obtaining an initial basic feasible solution:

1. North-West Corner Rule

2. Matrix Minima Method

3. Row Minima Method

4. Column Minima Method

5. Vogel's Approximation Method (VAM)

ABSTRACT

The most common method used to determine efficient initial solutions for solving the transportation problem (using a modified version of the simplex method) is Vogel’s Approximation Method (VAM). The method involves calculating the penalty (the difference between the lowest cost and the second-lowest cost) for each row and column of the cost matrix, and then assigning the maximum number of units possible to the least-cost cell in the row or column with the largest penalty. In the case of unbalanced transportation problems (i.e., problems where the total supply does not equal the total demand), the transportation simplex method necessitates the creation of a dummy row or column to make the problem balanced. The traditional solution approach assigns zero values to the costs of transporting goods to or from these dummies. The drawback of this approach is that VAM will usually allocate items to the dummy cells before the other cells in the table. This initial solution may therefore not be very efficient for unbalanced problems.


APPLYING THE VOGEL APPROXIMATION METHOD

Applying the Vogel Approximation Method requires the following steps:

Step 1: Determine a penalty cost for each row (column) by subtracting the lowest unit cell
cost in the row (column) from the next lowest unit cell cost in the same row (column).
Step 2: Identify the row or column with the greatest penalty cost. Break the ties arbitrarily (if
there are any). Allocate as much as possible to the variable with the lowest unit cost in the
selected row or column. Adjust the supply and demand and cross out the row or column that
is already satisfied. If a row and column are satisfied simultaneously, only cross out one of the
two and allocate a supply or demand of zero to the one that remains.
Step 3:
 If there is exactly one row or column left with a supply or demand of zero, stop.

 If there is one row (column) left with a positive supply (demand), determine the basic
variables in the row (column) using the Minimum Cell Cost Method. Stop.
 If all of the rows and columns that were not crossed out have zero supply and demand
(remaining), determine the basic zero variables using the Minimum Cell Cost Method. Stop.
In any other case, continue with Step 1.
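The steps above can be sketched in code. The data below are hypothetical (a small balanced problem invented for illustration, since the worked example's own cost table is not reproduced in this copy); the loop follows Steps 1 and 2, with ties broken in favour of rows.

```python
def vam(cost, supply, demand):
    """Initial BFS of a balanced transportation problem by the
    Vogel Approximation Method. Returns (row, col, quantity) triples."""
    supply, demand = supply[:], demand[:]
    rows, cols = set(range(len(supply))), set(range(len(demand)))
    alloc = []

    def penalty(costs):
        # Difference between the two lowest costs in a line;
        # for a single remaining cell, use that cost itself.
        s = sorted(costs)
        return s[1] - s[0] if len(s) > 1 else s[0]

    while rows and cols:
        # Step 1: penalty for every remaining row and column.
        row_pen = {r: penalty([cost[r][c] for c in cols]) for r in rows}
        col_pen = {c: penalty([cost[r][c] for r in rows]) for c in cols}
        # Step 2: the line with the greatest penalty (row wins ties) ...
        r_best = max(row_pen, key=row_pen.get)
        c_best = max(col_pen, key=col_pen.get)
        if row_pen[r_best] >= col_pen[c_best]:
            i = r_best
            j = min(cols, key=lambda c: cost[i][c])
        else:
            j = c_best
            i = min(rows, key=lambda r: cost[r][j])
        # ... gets the largest possible allocation in its cheapest cell.
        q = min(supply[i], demand[j])
        alloc.append((i, j, q))
        supply[i] -= q
        demand[j] -= q
        # Cross out the satisfied line (only one if both are satisfied).
        if supply[i] == 0:
            rows.discard(i)
        else:
            cols.discard(j)
    return alloc

# Hypothetical data, invented for illustration.
cost = [[2, 3, 11], [1, 0, 6], [5, 8, 15]]
supply = [6, 1, 10]
demand = [7, 5, 5]

alloc = vam(cost, supply, demand)
total = sum(q * cost[i][j] for i, j, q in alloc)
print(alloc, total)  # total cost of the VAM starting solution: 112
```

Note that the sketch produces m + n − 1 allocations, the number of basic variables a transportation BFS requires.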

EXAMPLE

Consider the transportation problem presented in the following table. (The table is not reproduced in this copy.)

Solution


The highest penalty occurs in the first row. The minimum cij in this row is c11 (i.e., 2). Hence, x11 = 5 and the first row is eliminated.

Now again calculate the penalty. The following table shows the computation of penalty for
various rows and columns.

Initial basic feasible solution

5 X 2 + 2 X 3 + 6 X 1 + 7 X 4 + 2 X 1 + 12 X 2 = 76.
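The total cost is the sum of quantity × unit cost over the allocated cells. The pairs below are read from the expression above (the solution tables themselves are not reproduced in this copy):

```python
# (quantity, unit cost) pairs of the allocated cells.
allocations = [(5, 2), (2, 3), (6, 1), (7, 4), (2, 1), (12, 2)]
total = sum(q * c for q, c in allocations)
print(total)  # 76, the cost of the initial basic feasible solution
```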


REFERENCES

1. Topics in Linear Programming and Games Theory, Lakshmisree Bandopadhyaya.
2. http://www.sce.carleton.ca/faculty/chinneck/po/Chapter2.pdf
3. https://www.britannica.com/science/linear-programming-mathematics
4. https://universalteacherpublications.com/univ/ebooks/or/Ch3/simplex.htm
5. https://www.math.colostate.edu/~gerhard/MATH331/lab/linprog.pdf
6. https://www.sciencedirect.com/science/article/pii/089396599090003T
7. https://www.linearprogramming.info/vogel-approximation-method-transportation-algorithm-in-linear-programming/
