
DISCRETE MATHEMATICS

DISCRETE MATHEMATICS

3
CONTENTS

PART 1: COMBINATORIAL THEORY

PART 2: GRAPH THEORY

4
Contents of part 1

Chapter 0: Sets, Functions


Chapter 1: Counting problem
Chapter 2: Existence problem
Chapter 3: Enumeration problem
Chapter 4: Combinatorial optimization problem

5
Chapter 4. Combinatorial optimization problem

1. Introduction to problem
2. Brute force
3. Branch and bound

6
1. Introduction to problem

1.1. General problem


1.2. Traveling salesman problem
1.3. Knapsack problem
1.4. Bin packing problem

7
1.1. General problem

• In many practical applications of combinatorics, each configuration is assigned a value that rates how useful the configuration is for a particular purpose.
• This leads to the following problem: among all feasible combinatorial configurations, determine the one whose value is best. Problems of this kind are called combinatorial optimization problems.

8
1.1. General problem

The combinatorial optimization problem in general could be stated as follows:


Find the min (or max) of a function
f(x) → min (max),
subject to the condition
x ∈ D,
where D is a finite set.
Terminologies:
• f(x) – the objective function of the problem,
• x ∈ D – a solution,
• D – the set of solutions of the problem.
• The set D is often described as a set of combinatorial configurations satisfying given properties.
• A solution x* ∈ D with minimum (maximum) value of the objective function is called an optimal solution, and the value f* = f(x*) is called the optimal value of the problem.

9
1. Introduction to problem

1.1. General problem


1.2. Traveling salesman problem
1.3. Knapsack problem
1.4. Bin packing problem

10
1.2. Traveling salesman problem (TSP)

• A salesman wants to travel n cities: 1, 2, 3,…, n.


• An itinerary is a route that starts from a city, goes through all the remaining cities, each exactly once, and then returns to the starting city.
• Let cij be the cost of going from city i to city j (i, j = 1, 2, ..., n).
• Find the itinerary with minimum total cost.

11
1.2. Traveling salesman problem

We have a 1-1 correspondence between an itinerary

π(1) → π(2) → ... → π(n) → π(1)
and a permutation π = (π(1), π(2), ..., π(n)) of the n natural numbers 1, 2, ..., n.

Set the cost of an itinerary:

f(π) = c[π(1),π(2)] + ... + c[π(n-1),π(n)] + c[π(n),π(1)].

Denote:
Π – the set of all permutations of the n natural numbers 1, 2, ..., n.

12
1.2. Traveling salesman problem

• Then, the TSP can be stated as the following combinatorial optimization problem:
min { f(π) : π ∈ Π }.
• One can see that the number of possible itineraries is n!, but there are only (n-1)! itineraries if the starting
city is fixed.
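To make the objective concrete, here is a small C sketch (the function name, array sizes, and 1-based storage of the permutation are assumptions for illustration, not from the slides) that evaluates the cost f(π) of one itinerary; a brute-force solver would simply call it for all (n-1)! permutations with city 1 fixed.

int ItineraryCost(int n, int c[][21], int pi[])
{
    /* pi[1..n] is a permutation of 1..n; c[i][j] is the cost of going from city i to city j */
    int cost = 0;
    for (int i = 1; i < n; i++)
        cost += c[pi[i]][pi[i+1]];
    cost += c[pi[n]][pi[1]];      /* return to the starting city */
    return cost;
}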

13
1. Introduction to problem

1.1. General problem


1.2. Traveling salesman problem
1.3. Knapsack problem
1.4. Bin packing problem

14
1.3. Knapsack problem

• Problem Definition
• Want to carry essential items in one bag
• Given a set of items, each has
• A weight (e.g., 12 kg)
• A value (e.g., $4)

• Goal
• To determine the # of each item to include in a collection so that
• The total weight does not exceed the given weight that the bag can carry
• And the total value is as large as possible

15
1.3. Knapsack problem

• Three Types:
• 0/1 Knapsack Problem
• restricts the number of each kind of item to zero or one
• Bounded Knapsack Problem
• restricts the number of each item to a specific value
• Unbounded Knapsack Problem
• places no bounds on the number of each item

• Complexity Analysis
• The general knapsack problem is known to be NP-hard
• No polynomial-time algorithm is known for this problem

16
1. Introduction to problem

1.1. General problem


1.2. Traveling salesman problem
1.3. Knapsack problem
1.4. Bin packing problem

18
1.4. Bin packing problem

• Given n items with weights w1, w2, ..., wn. We need to find a way to place these n items into bins of the same capacity b such that the number of bins used is minimal.

• We make the assumption:

wi ≤ b, i = 1, 2, ..., n.
• Therefore, the number of bins needed to hold all n items is not more than n. The problem is to find the minimum possible number of bins:
• We give the user n bins, and the problem is to decide, for each of the n items, which of the n bins it is placed in, so that the number of bins containing items is minimum.

19
1.4. Bin packing problem

• Introduce the boolean variables

xij = 1, if item i is placed in bin j,
xij = 0, otherwise.
Then the bin packing problem can be stated in the form:

∑j=1..n sgn( ∑i=1..n xij ) → min,
∑j=1..n xij = 1,  i = 1, 2, ..., n,
∑i=1..n wi·xij ≤ b,  j = 1, 2, ..., n,
xij ∈ {0, 1},  i, j = 1, 2, ..., n.

Note: the signum function of a real number x is defined as sgn(x) = 1 if x > 0, sgn(x) = 0 if x = 0, and sgn(x) = −1 if x < 0. Here sgn(∑i xij) = 1 exactly when bin j contains at least one item, so the objective counts the number of bins used.
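For concreteness, a small C sketch (the function name, the MAXN constant, and the array layout are assumptions added here for illustration) that checks the two constraint groups above and evaluates the objective, i.e., the number of bins actually used, for a given 0/1 assignment x:

#define MAXN 21   /* items and bins are indexed 1..n, n < MAXN */

/* returns the number of nonempty bins, or -1 if the assignment x violates a constraint;
   w[i] is the weight of item i, b is the bin capacity */
int UsedBins(int n, int b, int w[], int x[][MAXN])
{
    int used = 0;
    for (int i = 1; i <= n; i++) {               /* each item must lie in exactly one bin */
        int cnt = 0;
        for (int j = 1; j <= n; j++) cnt += x[i][j];
        if (cnt != 1) return -1;
    }
    for (int j = 1; j <= n; j++) {               /* capacity constraint of each bin */
        int load = 0;
        for (int i = 1; i <= n; i++) load += w[i] * x[i][j];
        if (load > b) return -1;
        if (load > 0) used++;                    /* sgn(sum_i xij) = 1 iff bin j is nonempty */
    }
    return used;
}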

20
Chapter 4. Combinatorial optimization problem

1. Introduction to problem
2. Brute force
3. Branch and bound

21
2. Brute force

• One of the most obvious methods for solving a combinatorial optimization problem is the following: using combinatorial enumeration algorithms, we go through every solution of the problem and compute its objective function value; we then compare the values over all solutions to find the optimal solution, i.e., the one whose objective function value is minimal (maximal).
• An approach based on this principle is called brute force.

22
2. Brute force

Example: 0/1 knapsack problem


max { f(x) = v1x1 + v2x2 + ... + vnxn : x ∈ D },

where D = { x = (x1, x2, ..., xn) ∈ {0, 1}^n : w1x1 + w2x2 + ... + wnxn ≤ b },

• vj, wj, b are positive integers, j = 1, 2, ..., n.

• We need an algorithm to enumerate all elements of the set D.

23
2. Brute force

Backtracking: enumerate all possible solutions:


• Construct the set Sk of candidate values for xk:
• S1 = { 0, t1 }, where t1 = 1 if b ≥ w1; t1 = 0 otherwise.
• Assume the current partial solution is (x1, ..., xk-1). Then:
• The remaining capacity of the bag is:
bk-1 = b - w1x1 - ... - wk-1xk-1
• The value of the items already in the bag is:
fk-1 = v1x1 + ... + vk-1xk-1
Therefore: Sk = { 0, tk }, where tk = 1 if bk-1 ≥ wk; tk = 0 otherwise.
• How to implement Sk?
for (y = 0; y <= tk; y++)

24
2. Brute force

int x[20], xopt[20], v[20], w[20];
int n, b, bk, fk, fopt;
void BackTrack(int k);   /* defined on the next slide */
void InputData( ){
  <Enter values of n, v, w, b>;
}
void PrintSolution( ){
  <Optimal solution: xopt; optimal value of the objective function: fopt>;
}
int main( ){
  InputData( );
  bk = b;
  fk = 0;
  fopt = 0;
  BackTrack(1);
  PrintSolution( );
  return 0;
}

25
2. Brute force
void BackTrack(int k)
{
  int j, t;
  if (bk >= w[k]) t = 1; else t = 0;   /* Sk = {0, t} */
  for (j = t; j >= 0; j--)
  {
    x[k] = j;
    bk = bk - w[k]*x[k];
    fk = fk + v[k]*x[k];
    if (k == n)
    {
      if (fk > fopt) {                 /* update record */
        for (int i = 1; i <= n; i++) xopt[i] = x[i];
        fopt = fk;
      }
    }
    else BackTrack(k+1);
    bk = bk + w[k]*x[k];               /* undo before trying the next value */
    fk = fk - v[k]*x[k];
  }
}

26
2. Brute force

• Brute force is difficult to carry out even on the most modern supercomputer. For example, to enumerate all
15! = 1 307 674 368 000
permutations on a machine with a speed of 1 billion operations per second, if enumerating one permutation requires 100 operations, we need:
130 767 seconds > 36 hours!
For 20!, the same estimate gives about 7 645 years.

27
2. Brute force

• However, it must be emphasized that in many cases (for example, the traveling salesman problem, the knapsack problem, the bin packing problem), no methods essentially more effective than brute force have been found yet.
• A natural idea then arises: in the process of enumerating solutions, we should use the information found so far to eliminate solutions that are certainly not optimal.
• In the next section, we will look at such a search approach for solving combinatorial optimization problems. In the literature, it is called the branch and bound algorithm.

28
Chapter 4. Combinatorial optimization problem

1. Introduction to problem
2. Brute force
3. Branch and bound

29
3. Branch and bound

3.1. General diagram


3.2. Example
3.2.1. Traveling salesman problem
3.2.2. Knapsack problem

30
3.1. General diagram

• Branch and bound algorithm consists of 2 procedures:


• Branching Procedure
• Bounding Procedure
• Branching procedure: the process of partitioning the set of solutions into smaller and smaller subsets, until each subset contains only one element.
• Bounding procedure: a way to compute a bound on the value of the objective function over each subset A appearing in the partition of the set of solutions.

31
3.1. General diagram

• We will describe the idea of the algorithm on the model of the following general combinatorial optimization problem:
min { f(x) : x ∈ D },
where D is a finite set.
• Assume the set D is described as follows:
D = { x = (x1, x2, ..., xn) ∈ A1 × A2 × ... × An :
x satisfies property P },
where A1, A2, ..., An are finite sets, and P is a property on the Cartesian product A1 × A2 × ... × An.

32
3.1. General diagram

• The point of describing the set D in this form is that the backtracking algorithm can then be used to enumerate all solutions of the problem.
• The problem
max { f(x) : x ∈ D }
is equivalent to the problem
min { g(x) : x ∈ D }, where g(x) = -f(x).
Therefore, we can restrict ourselves to the minimization problem.

33
3.1. General diagram
The branching procedure can be implemented by backtracking. At the root we have the empty partial solution ( ), which corresponds to the whole set D. Branching on the possible values a1^1, a1^2, ..., a1^n1 of the first component x1 produces the subsets

D(a1^i) = { x ∈ D : x1 = a1^i },  i = 1, 2, ..., n1,

i.e., D(a1^i) is the set of solutions that can be obtained from the partial solution (a1^i).
We have the partition:
D = D(a1^1) ∪ D(a1^2) ∪ ... ∪ D(a1^n1)

34
3.1. General diagram
• Branching at deeper levels can be described in the same way. The node of the partial solution (a1, ..., ak) corresponds to the set

D(a1, ..., ak) = { x ∈ D : xi = ai, i = 1, ..., k },

i.e., the subset of solutions whose first k components are already fixed: x1 = a1, x2 = a2, ..., xk = ak.
Branching on the candidate values ak+1^1, ak+1^2, ..., ak+1^p for xk+1 gives the partition:

D(a1, ..., ak) = D(a1, ..., ak, ak+1^1) ∪ D(a1, ..., ak, ak+1^2) ∪ ... ∪ D(a1, ..., ak, ak+1^p)

35
Bounding

• We need to determine a function g, defined on the set of all partial solutions, that satisfies the following inequality:

g(a1, ..., ak) ≤ min { f(x) : x ∈ D(a1, ..., ak) }     (*)

for each k-level partial solution (a1, a2, ..., ak), k = 1, 2, ...
• The inequality (*) means that the value of g at the partial solution (a1, a2, ..., ak) is not greater than the minimum value of the objective function over the solution set
D(a1, ..., ak) = { x ∈ D : xi = ai, i = 1, ..., k },
i.e., over all solutions whose first k elements are (a1, a2, ..., ak).

In other words, g(a1, a2, ..., ak) is a lower bound on the objective function values over the solution set D(a1, a2, ..., ak).

36
Cut branch by using lower bound

• Assume we already have a function g defined as above. We will use this function to reduce the amount of search when the backtracking algorithm enumerates all possible solutions.
• In the process of enumerating solutions, assume that some solutions have already been obtained. Denote by x* the solution with the smallest objective function value among all solutions obtained so far, and denote f* = f(x*).
• We call
• x* the current best solution (the record solution),
• f* the current best value of the objective function (the record value).

37
Cut branch by using lower bound
Suppose we are at the node of the partial solution (a1, ..., ak-1), with the current record value f*, and the candidate values for xk are ak^1, ak^2, ..., ak^p. For each candidate ak^i we compute the lower bound g(a1, ..., ak-1, ak^i) of the partial solution (a1, ..., ak-1, ak^i).

If g(a1, ..., ak-1, ak^i) > f*, then all solutions whose first k elements are (a1, ..., ak-1, ak^i) certainly have objective value greater than f*, so we do not need to browse this branch: it is cut.

38
3.1. General diagram
void Branch(int k) {
  //Construct xk from the partial solution (x1, x2, ..., xk-1)
  for each ak ∈ Ak
    if (ak ∈ Sk)
    {
      xk = ak;
      if (k == n) <Update Record>;   //i.e., if f(x1,...,xn) < f*, set f* = f(x1,...,xn) and x* = (x1,...,xn)
      else if (g(x1,..., xk) ≤ f*) Branch(k+1);
    }
}
void BranchAndBound ( ) {
  f* = +∞;
  //if you already know some solution x*, then set f* = f(x*)
  Branch(1);
  if (f* < +∞)
    <f* is the optimal objective value, x* is an optimal solution>
  else <the problem does not have any solutions>;
}

39
3.1. General diagram

g(a1, ..., ak) ≤ min { f(x) : x ∈ D(a1, ..., ak) }     (*)

The construction of the function g depends on each specific combinatorial optimization problem. Usually we try to build it so that:
• Calculating the value of g must be simpler than solving the combinatorial optimization problem on the right
side of (*).
• The value of g(a1, ..., ak) must be close to the value of the right side of (*).
Unfortunately, these two requirements are often contradictory in practice.

40
3. Branch and bound

3.1. General diagram


3.2. Example
3.2.1. Traveling salesman problem
3.2.2. Knapsack problem

41
3.2.1. Traveling salesman problem

• A salesman wants to travel n cities: 1, 2, 3,…, n.


• An itinerary is a route that starts from a city, goes through all the remaining cities, each exactly once, and then returns to the starting city.
• Let cij be the cost of going from city i to city j (i, j = 1, 2, ..., n).
• Find the itinerary with minimum total cost.

42
3.2.1. Traveling salesman problem

Fixing the starting city as city 1, the TSP leads to the problem:
• Determine the minimum value of
f(1,x2,..., xn) = c[1,x2]+c[x2,x3]+...+c[xn-1,xn] + c[xn,1]
where
(x2, x3, ..., xn) is a permutation of the natural numbers 2, 3, ..., n.

43
3.2.1. Traveling salesman problem

Lower bound function:


• Denote
cmin = min { c[i, j] : i, j = 1, 2, ..., n, i ≠ j },
the smallest cost between any pair of distinct cities.
• We need to evaluate a lower bound for the partial solution (1, u2, ..., uk) corresponding to the partial journey that has already passed through k cities:
1 → u2 → ... → uk-1 → uk

44
3.2.1. Traveling salesman problem

Lower bound function:


• The cost already paid for this partial solution is
σ = c[1,u2] + c[u2,u3] + ... + c[uk-1,uk].
• To extend it to a complete journey
1 → u2 → ... → uk-1 → uk → uk+1 → uk+2 → ... → un → 1,
we still have to travel n-k+1 more segments (from uk through the n-k remaining cities and back to city 1).
The cost already paid is σ, and each remaining segment costs at least cmin. Thus the lower bound of the partial solution (1, u2, ..., uk) can be calculated by the formula:
g(1, u2, ..., uk) = σ + (n-k+1)·cmin
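For example, in the 5-city instance on the next slides we have cmin = 3, so the partial solution (1, 2), with σ = c[1,2] = 3, k = 2 and n = 5, gets the bound g(1, 2) = 3 + (5 - 2 + 1)·3 = 15, which is exactly the value shown at node (2) of the search tree.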

45
3.2.1. Traveling salesman problem

Given 5 cities {1, 2, 3, 4, 5}, solve the TSP in which the salesman starts from city 1, with the cost matrix:

0 3 14 18 15
3 0 4 22 20
C= 17 9 0 16 4
9 20 7 0 18
9 15 11 5 0

46
3.2.1. Traveling salesman problem

• We have cmin = 3. The execution of the algorithm is described by the solution search tree on the next slide.
• The information written in each box is, in order:
• the elements of the partial solution,
• σ – the cost of the partial solution,
• g – the lower bound of the partial solution.

47
3.2.1. Traveling salesman problem
Root: f* = +∞ (cmin = 3, cost matrix as on slide 46).

Level 1 (choice of the second city):
(2): σ = 3,  g = 3 + 4·3 = 15
(3): σ = 14, g = 14 + 4·3 = 26
(4): σ = 18, g = 18 + 4·3 = 30
(5): σ = 15, g = 15 + 4·3 = 27

Level 2, branching from (2):
(2,3): σ = 3 + 4 = 7,   g = 7 + 3·3 = 16
(2,4): σ = 3 + 22 = 25, g = 25 + 3·3 = 34
(2,5): σ = 3 + 20 = 23, g = 23 + 3·3 = 32

Level 3, branching from (2,3):
(2,3,4): σ = 7 + 16 = 23, g = 23 + 2·3 = 29
(2,3,5): σ = 7 + 4 = 11,  g = 11 + 2·3 = 17

Level 4 (complete itineraries):
(2,3,4,5): σ = 23 + 18 = 41 → journey (1,2,3,4,5,1), cost = 50, update record f* = 50.
(2,3,5,4): σ = 11 + 5 = 16  → journey (1,2,3,5,4,1), cost = 25, update record f* = 25.

The remaining branches (2,3,4), (2,4), (2,5), (3), (4), (5) are eliminated because their lower bounds g exceed f* = 25.
48
3.2.1. Traveling salesman problem

When the algorithm terminates, we obtain:


- the optimal solution (1, 2, 3, 5, 4, 1), corresponding to the journey
1 → 2 → 3 → 5 → 4 → 1,
- the minimum cost, which is 25.

49
3.2.1. Traveling salesman problem
void Branch(int k) {
  //fopt plays the role of f*, the record value
  for (int v = 2; v <= n; v++) {
    if (visited[v] == FALSE) {
      x[k] = v; visited[v] = TRUE;
      f = f + c[x[k-1]][x[k]];               //cost of the partial itinerary so far
      if (k == n)                            //complete itinerary: update record
      { if (f + c[x[n]][x[1]] < fopt) fopt = f + c[x[n]][x[1]]; }
      else {
        int g = f + (n - k + 1)*cmin;        //calculate the lower bound
        if (g < fopt) Branch(k + 1);         //branch only if the bound can beat the record
      }
      f = f - c[x[k-1]][x[k]];               //undo before trying the next city
      visited[v] = FALSE;
    }
  }
}

50
3.2.1. Traveling salesman problem
void BranchAndBound()
{
  fopt = INT_MAX;                            //f* = +∞ (requires <limits.h>)
  for (int v = 1; v <= n; v++) visited[v] = FALSE;
  f = 0; x[1] = 1; visited[1] = TRUE;        //fix the starting city as city 1
  Branch(2);                                 //on return, fopt holds the minimum cost
}
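A possible way to drive the two routines above on the 5-city instance from slide 46 (the global declarations and the main function below are assumptions added to make the fragment self-contained; in an actual source file they would be placed before the two routines):

#include <stdio.h>
#include <limits.h>
#define FALSE 0
#define TRUE  1

int n = 5, cmin = 3, f, fopt;     /* cmin = smallest off-diagonal entry of c */
int x[6], visited[6];
int c[6][6] = { {0, 0, 0, 0, 0, 0},
                {0, 0, 3,14,18,15},
                {0, 3, 0, 4,22,20},
                {0,17, 9, 0,16, 4},
                {0, 9,20, 7, 0,18},
                {0, 9,15,11, 5, 0} };   /* row/column 0 unused: cities are numbered 1..5 */

int main() {
    BranchAndBound();
    printf("Minimum cost = %d\n", fopt);   /* expected output: 25 (journey 1-2-3-5-4-1) */
    return 0;
}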

51
3. Branch and bound

3.1. General diagram


3.2. Example
3.2.1. Traveling salesman problem
3.2.2. Knapsack problem

52
3.2.2. Knapsack problem

• There are n types of items.


• Item type j has
• weight wj and
• profit pj (j = 1, 2,..., n) .
• We need to select a collection of these items to put into a bag of capacity c such that the total profit of the items loaded in the bag is maximum.

53
3.2.2. Knapsack problem
• We introduce the variables
xj – the number of items of type j loaded in the bag, j = 1, 2, ..., n.
• Mathematical model of the problem: find

f* = max { f(x) = p1x1 + p2x2 + ... + pnxn : w1x1 + w2x2 + ... + wnxn ≤ c, xj ∈ Z+, j = 1, 2, ..., n },

where Z+ is the set of nonnegative integers

(the knapsack problem with integer variables).

• Denote by D the set of solutions of the problem:

D = { x = (x1, ..., xn) : w1x1 + w2x2 + ... + wnxn ≤ c, xj ∈ Z+, j = 1, 2, ..., n }
54
3.2.2. Knapsack problem
• Assume we index the items in an order such that the following inequality is satisfied:
p1/w1 ≥ p2/w2 ≥ ... ≥ pn/wn
(it means the items are ordered by decreasing profit per unit of weight).
• To construct the upper bound function, we consider the following knapsack problem with continuous variables (KPC): find

g* = max { f(x) = p1x1 + p2x2 + ... + pnxn : w1x1 + w2x2 + ... + wnxn ≤ c, xj ≥ 0, j = 1, 2, ..., n }

55
3.2.2. Knapsack problem
Proposition. The optimal solution of the KPC is the vector x* = (x1*, x2*, ..., xn*) whose components are given by the formula:
x1* = c/w1,  x2* = x3* = ... = xn* = 0,
and the optimal value is g* = p1·c/w1.
Proof. Let x = (x1, ..., xn) be any feasible solution of the KPC. Then
pj ≤ (p1/w1)·wj,  j = 1, 2, ..., n,
and, since xj ≥ 0, we have
pj·xj ≤ (p1/w1)·wj·xj,  j = 1, 2, ..., n.
• Therefore
p1x1 + p2x2 + ... + pnxn ≤ (p1/w1)·(w1x1 + w2x2 + ... + wnxn)
                         ≤ (p1/w1)·c = g*.
The proposition is proved.
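As a quick numerical check, using the data of the example solved later in this section (f(x) = 10x1 + 5x2 + 3x3 + 6x4, constraint 5x1 + 3x2 + 2x3 + 4x4 ≤ 8): the items are already ordered by pj/wj, the KPC optimum is x* = (8/5, 0, 0, 0) with g* = 10·8/5 = 16, and 16 is indeed an upper bound on the integer optimum f* = 15 found by the branch and bound algorithm.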

56
3.2.2. Knapsack problem
• Now suppose we have a k-level partial solution (u1, u2, ..., uk). The profit of the items currently loaded in the bag is
σk = p1u1 + p2u2 + ... + pkuk,
and the remaining capacity of the bag is
ck = c - (w1u1 + w2u2 + ... + wkuk).
• We have:
max { f(x) : x ∈ D, xj = uj, j = 1, 2, ..., k }
= σk + max { pk+1xk+1 + ... + pnxn : wk+1xk+1 + ... + wnxn ≤ ck, xj ∈ Z+, j = k+1, k+2, ..., n }
≤ σk + max { pk+1xk+1 + ... + pnxn : wk+1xk+1 + ... + wnxn ≤ ck, xj ≥ 0, j = k+1, k+2, ..., n }
= σk + pk+1·ck / wk+1  (by the proposition, applied to the remaining items).
• Thus, we can calculate the upper bound for the partial solution (u1, u2, ..., uk) by the formula:
g(u1, u2, ..., uk) = σk + pk+1·ck / wk+1

57
3.2.2. Knapsack problem

• Note: when building the (k+1)-th component of the solution, the candidates for xk+1 are 0, 1, ..., ⌊ck / wk+1⌋.
• Using the result of the proposition, when selecting a value for xk+1 we browse the candidates in descending order: ⌊ck / wk+1⌋, ⌊ck / wk+1⌋ - 1, ..., 1, 0 (see the code sketch below).
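The following C sketch puts these pieces together in the same style as the TSP code above (the identifiers p, w, cap, u, uopt, sigma, ck and fopt are assumptions introduced here for illustration; the slides do not give a program for this variant):

int n, cap;                       /* number of item types and the bag capacity c */
int p[20], w[20];                 /* profits and weights, sorted so that p[1]/w[1] >= ... >= p[n]/w[n] */
int u[20], uopt[20];              /* current partial solution and best solution found so far */
int sigma = 0, ck, fopt = -1;     /* current profit, remaining capacity, record value f*
                                     (-1 acts as -infinity because all profits are nonnegative) */

void BranchKP(int k) {
    int t = ck / w[k];                                 /* largest possible value of u[k] */
    for (int j = t; j >= 0; j--) {                     /* browse candidates in descending order */
        u[k] = j;
        sigma += p[k]*j;  ck -= w[k]*j;
        if (k == n) {                                  /* complete solution: update record */
            if (sigma > fopt) {
                fopt = sigma;
                for (int i = 1; i <= n; i++) uopt[i] = u[i];
            }
        } else {
            double g = sigma + (double)p[k+1]*ck / w[k+1];   /* upper bound g(u1,...,uk) */
            if (g > fopt) BranchKP(k + 1);             /* cut the branch if g cannot beat the record */
        }
        sigma -= p[k]*j;  ck += w[k]*j;                /* undo before trying the next value */
    }
}

/* usage: set n, cap, p[], w[]; then ck = cap; BranchKP(1);
   on the example of the next slides (p = 10,5,3,6; w = 5,3,2,4; cap = 8)
   this ends with fopt = 15 and uopt = (1,1,0,0) */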

58
3.2.2. Knapsack problem
Example: solve the following knapsack problem using the branch and bound algorithm:

f(x) = 10x1 + 5x2 + 3x3 + 6x4 → max,
5x1 + 3x2 + 2x3 + 4x4 ≤ 8,
xj ∈ Z+, j = 1, 2, 3, 4.

• Note that in this example, all four items are already sorted in descending order of profit per unit weight.
59
3.2.2. Knapsack problem

• The execution of the algorithm is described by the solution search tree on the next slide.
• The information written in each box is, in order:
• the elements of the partial solution,
• σ – the cost of the partial solution (the profit of the items currently loaded in the bag),
• w – the remaining capacity of the bag,
• g – the upper bound of the partial solution.

60
The problem: f(x) = 10x1 + 5x2 + 3x3 + 6x4 → max, subject to 5x1 + 3x2 + 2x3 + 4x4 ≤ 8, xj ∈ Z+, j = 1, 2, 3, 4.

Root: f* = -∞.

Select item 1: ⌊8/5⌋ = 1, so the candidates for x1 are 1, 0 (browsed in descending order):
(1): σ = 10, w = 8 - 5 = 3, g = 10 + 5·3/3 = 15
(0): σ = 0,  w = 8,         g = 0 + 5·8/3 = 40/3

Select item 2, branching from (1): ⌊3/3⌋ = 1, candidates x2 = 1, 0:
(1,1): σ = 10 + 5 = 15, w = 3 - 3 = 0, g = 15
(1,0): σ = 10,          w = 3,         g = 10 + 3·3/2 = 14.5

Select item 3, branching from (1,1): ⌊0/2⌋ = 0, only x3 = 0:
(1,1,0): σ = 15, w = 0, g = 15

Select item 4, branching from (1,1,0): ⌊0/4⌋ = 0, only x4 = 0:
(1,1,0,0): a new complete solution with f = 15 → update record f* = 15.

The branches (1,0) and (0) are then eliminated because their upper bounds g < f* = 15.

• Finishing the algorithm, we obtain:
– Optimal solution: x* = (1, 1, 0, 0),
– Optimal objective value: f* = 15.
61
THANK YOU !

62
