Unit 3 Dynamic Programming

Dynamic Programming (DP) is an algorithm design method for solving optimization problems defined by recurrences with overlapping subproblems, introduced by Richard Bellman in the 1950s. The main approach involves setting up a recurrence relation, solving smaller instances once, and recording solutions in a table to extract the final solution. Examples of DP applications include the Knapsack Problem and Matrix Chain Multiplication, which involve optimizing selections and orders to minimize costs.


Unit 3 Dynamic Programming

Dynamic Programming
• Dynamic Programming is an algorithm design method that can be used when the solution to a problem may be viewed as the result of a sequence of decisions

• Dynamic Programming is a general algorithm design technique for solving problems defined by recurrences with overlapping subproblems
• Invented by American mathematician Richard Bellman in the 1950s to solve optimization problems and later assimilated by CS
• “Programming” here means “planning”
• Main idea:
- set up a recurrence relating a solution to a larger instance to solutions of some smaller instances
- solve smaller instances once
- record solutions in a table
- extract solution to the initial instance from that table
Examples of DP algorithms
Some difficult discrete optimization problems:
1. Knapsack
2. Chain matrix multiplication
3. Constructing an optimal binary search tree
4. Floyd’s algorithm for all-pairs shortest paths
5. Bellman-Ford algorithm
6. Traveling Salesman Problem
Knapsack Problem by DP
Given n items of
integer weights: w1 w2 … wn
values: v1 v2 … vn
and a knapsack of integer capacity W,
find the most valuable subset of the items that fits into the knapsack.

Recursive solution?
What is the smaller problem?
How is the solution to a smaller instance used in the solution to a larger one?
In what order should the table be filled?
Initial conditions?
Knapsack Problem by DP
Given n items of
integer weights: w1 w2 … wn
values: v1 v2 … vn
and a knapsack of integer capacity W,
find the most valuable subset of the items that fits into the knapsack.

Consider the instance defined by the first i items and capacity j (j ≤ W).
Let V[i,j] be the optimal value of such an instance. Then

V[i,j] = max {V[i-1,j], vi + V[i-1,j-wi]}   if j - wi ≥ 0
V[i,j] = V[i-1,j]                           if j - wi < 0

Initial conditions: V[0,j] = 0 and V[i,0] = 0
Knapsack Problem by DP (example)
Example: Knapsack of capacity W = 5

item  weight  value
 1      2     $12
 2      1     $10
 3      3     $20
 4      2     $15

Table V[i,j]: rows i = 0..4 (items), columns j = 0..5 (capacity j).
Row 0 and column 0 are all zeros; the goal is V[4,5].
Knapsack Problem by DP (example)
Example: Knapsack of capacity W = 5

Filling row 1 (w1 = 2, v1 = 12): for cell V[1,2], j - w1 = 2 - 2 = 0 ≥ 0, so
V[1,2] = max {V[0,2], v1 + V[0,0]} = max {0, 12 + 0} = 12
Knapsack Problem by DP (example)
Example: Knapsack of capacity W = 5

item  weight  value
 1      2     $12
 2      1     $10
 3      3     $20
 4      2     $15

Completed table V[i,j] (capacity j):
                  j=0   1   2   3   4   5
               0    0   0   0   0   0   0
w1 = 2, v1=12  1    0   0  12  12  12  12
w2 = 1, v2=10  2    0  10  12  22  22  22
w3 = 3, v3=20  3    0  10  12  22  30  32
w4 = 2, v4=15  4    0  10  15  25  30  37
Knapsack Problem by DP (example)
Maximal value is V[4,5] = $37.

Find the optimal solution by tracing back through the completed table.
Compare V[4,5] with V[3,5]: V[4,5] ≠ V[3,5], so item 4 is included in the knapsack in an optimal solution, along with an optimal subset for filling the remaining capacity 5 - 2 = 3. So the remaining units of knapsack capacity = 3.
Knapsack Problem by DP (example)
Next consider the element V[3,3]:
V[3,3] = V[2,3], so item 3 is not part of an optimal solution.
Next consider the element V[2,3]:
V[2,3] ≠ V[1,3], so item 2 is part of an optimal solution.
Item 2 is included in the knapsack, along with an optimal subset for filling the remaining capacity 3 - 1 = 2. So the remaining units of knapsack capacity = 2.
Knapsack Problem by DP (example)
Finally consider the element V[1,2]:
V[1,2] ≠ V[0,2], so item 1 is part of an optimal solution.
Item 1 is included in the knapsack, filling the remaining capacity 2 - 2 = 0; item 1 is the final part of an optimal solution.
Optimal solution = {item 1, item 2, item 4}
Knapsack Problem by DP (pseudocode)
Algorithm DPKnapsack(w[1..n], v[1..n], W)
  var V[0..n,0..W], P[1..n,1..W]: int
  for j := 0 to W do
    V[0,j] := 0
  for i := 0 to n do
    V[i,0] := 0
  for i := 1 to n do
    for j := 1 to W do
      if w[i] ≤ j and v[i] + V[i-1,j-w[i]] > V[i-1,j] then
        V[i,j] := v[i] + V[i-1,j-w[i]]; P[i,j] := j - w[i]
      else
        V[i,j] := V[i-1,j]; P[i,j] := j
  return V[n,W] and the optimal subset by backtracing

Running time and space: O(nW).
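The pseudocode can be turned into a runnable sketch; this is a minimal Python version (not the slides’ exact program), checked against the W = 5 example:

```python
def dp_knapsack(weights, values, W):
    """Bottom-up 0/1 knapsack; V[i][j] = best value using first i items, capacity j."""
    n = len(weights)
    V = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, W + 1):
            V[i][j] = V[i - 1][j]                       # item i not taken
            if weights[i - 1] <= j:                     # item i fits: try taking it
                V[i][j] = max(V[i][j],
                              values[i - 1] + V[i - 1][j - weights[i - 1]])
    # Trace back: item i is in the solution iff V[i][j] != V[i-1][j]
    items, j = [], W
    for i in range(n, 0, -1):
        if V[i][j] != V[i - 1][j]:
            items.append(i)                             # 1-based item number
            j -= weights[i - 1]
    return V[n][W], sorted(items)

# Slide example: W = 5, items (w, v) = (2, $12), (1, $10), (3, $20), (2, $15)
best, chosen = dp_knapsack([2, 1, 3, 2], [12, 10, 20, 15], 5)
print(best, chosen)  # → 37 [1, 2, 4]
```

The traceback uses exactly the rule from the slides: an item is in the optimal subset iff its row changed the table entry.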
Matrix Chain Multiplication
• Given some matrices to multiply, determine the best order to multiply them so you minimize the number of single-element multiplications.
  – i.e. determine the way the matrices are parenthesized.
• First off, it should be noted that matrix multiplication is associative, but not commutative. Since it is associative, we always have:
• ((AB)(CD)) = (A(B(CD))), or any other grouping, as long as the matrices are in the same consecutive order.
• BUT NOT: ((AB)(CD)) = ((BA)(DC))
Matrix Chain Multiplication
Matrix multiplication
What is matrix chain multiplication
Formula for MCM using DP
How to use the formula for solving an example

A = [a11 a12 a13]   (dimension 2*3)     B = [b11 b12]   (dimension 3*2)
    [a21 a22 a23]                           [b21 b22]
                                            [b31 b32]

C = A*B = [a11*b11+a12*b21+a13*b31   a11*b12+a12*b22+a13*b32]
          [a21*b11+a22*b21+a23*b31   a21*b12+a22*b22+a23*b32]

Total multiplications = 2*3*2 = 12
Dimension of resultant matrix C: no. of rows = no. of rows in A = 2; no. of cols = no. of cols in B = 2.
Matrix Chain Multiplication
A1 (2*3), A2 (3*4), A3 (4*2), i.e. dimensions d0*d1, d1*d2, d2*d3

A1*A2*A3: 2 possibilities
First: (A1*A2)*A3     Second: A1*(A2*A3)

First possibility: A1*A2 costs 2*3*4 = 24 multiplications and yields a 2*4 matrix; multiplying that by A3 costs 2*4*2 = 16.
Total no. of multiplications = 24 + 16 = 40.
Matrix Chain Multiplication
A1 (2*3), A2 (3*4), A3 (4*2), i.e. dimensions d0*d1, d1*d2, d2*d3

(A1*A2)*A3:
c[1,2] = 2*3*4 = 24 (cost of A1*A2);  c[3,3] = 0 (cost of A3 alone)
Total = c[1,2] + c[3,3] + d0*d2*d3 = 24 + 0 + 2*4*2 = 40
Matrix Chain Multiplication
A1 (2*3), A2 (3*4), A3 (4*2), i.e. dimensions d0*d1, d1*d2, d2*d3

A1*(A2*A3):
Total = c[1,1] + c[2,3] + d0*d1*d3 = 0 + 24 + 2*3*2 = 0 + 24 + 12 = 36

In general, for a split point k:
C[i,j] = min { C[i,k] + C[k+1,j] + d(i-1)*dk*dj },  i ≤ k < j

For A1*A2*A3*A4 the dimensions are d0*d1, d1*d2, d2*d3, d3*d4.
Matrix Chain Multiplication
A1*A2*A3*A4 with dimensions d0*d1, d1*d2, d2*d3, d3*d4

Possibilities:
A1*(A2*(A3*A4))
A1*((A2*A3)*A4)
(A1*A2)*(A3*A4)
((A1*A2)*A3)*A4
(A1*(A2*A3))*A4

The count of parenthesizations is a Catalan number, C(2n,n)/(n+1). With n matrices there are n-1 multiplications, so the modified formula is
C(2(n-1), n-1) / n

For n = 4: C(6,3)/4 = 20/4 = 5 possible parenthesizations.
For n = 5: C(8,4)/5 = 70/5 = 14.
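The modified Catalan formula above can be checked with a short sketch (using Python’s math.comb):

```python
from math import comb

def parenthesizations(n):
    """Number of ways to fully parenthesize a chain of n matrices: C(2(n-1), n-1) / n."""
    return comb(2 * (n - 1), n - 1) // n

print([parenthesizations(n) for n in range(2, 7)])  # → [1, 2, 5, 14, 42]
```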
Matrix Chain Multiplication
Apply the formula with n = 4:
C[i,j] = min { C[i,k] + C[k+1,j] + d(i-1)*dk*dj },  i ≤ k < j
A1*A2*A3*A4 with dimensions d0*d1, d1*d2, d2*d3, d3*d4

i = 1, j = 4, so k = 1, 2, 3:
C[1,4] = min { k=1: C[1,1]+C[2,4]+d0*d1*d4,
               k=2: C[1,2]+C[3,4]+d0*d2*d4,
               k=3: C[1,3]+C[4,4]+d0*d3*d4 }

This needs C[2,4], C[3,4], C[4,4]. For C[2,4]: i = 2, j = 4, so k = 2, 3:
C[2,4] = min { k=2: C[2,2]+C[3,4]+d1*d2*d4,
               k=3: C[2,3]+C[4,4]+d1*d3*d4 }
Matrix
2 3
Chain
4
Multiplication
1 2 3 4
1 0 24 1 0 1
2 0 2 0
3 0 3 0
4 0 4 0

C[1,2] c[1,3] c[1,4] c[2,3] c[2,4]


C[3,4]

i=1 j=2 i<=k<j 1<=k<2


C[i,j]=min{ [c[i,k]+ c[k+1,j]]+ di-1*dk*dj} 1<=k<j
A1* A2* A3* A4
d0 d1 d1 d2 d2 d3 d3 d4
3 2 2 4 4 2 2 5

c[1,2] =min k=1 { c[1,1]+c[2,2]+d0*d1*d2}


{ 0+0+3*2*4}={0+0+24}=24
Matrix Chain Multiplication
i = 2, j = 3, k = 2:
C[2,3] = C[2,2] + C[3,3] + d1*d2*d3 = 0 + 0 + 2*4*2 = 16;  k[2,3] = 2
i = 3, j = 4, k = 3:
C[3,4] = C[3,3] + C[4,4] + d2*d3*d4 = 0 + 0 + 4*2*5 = 40;  k[3,4] = 3

Cost table so far: C[1,2] = 24, C[2,3] = 16, C[3,4] = 40.
K table for parenthesization so far: k[1,2] = 1, k[2,3] = 2, k[3,4] = 3.
Matrix Chain Multiplication
i = 1, j = 3, k = 1, 2:
C[1,3] = min { k=1: C[1,1] + C[2,3] + d0*d1*d3 = 0 + 16 + 3*2*2 = 16 + 12 = 28,
               k=2: C[1,2] + C[3,3] + d0*d2*d3 = 24 + 0 + 3*4*2 = 24 + 24 = 48 }
       = 28;  k[1,3] = 1
Matrix Chain Multiplication
Similarly:
C[2,4] = min { k=2: 0 + 40 + 2*4*5 = 80,  k=3: 16 + 0 + 2*2*5 = 36 } = 36;  k[2,4] = 3
C[1,4] = min { k=1: 0 + 36 + 3*2*5 = 66,  k=2: 24 + 40 + 3*4*5 = 124,  k=3: 28 + 0 + 3*2*5 = 58 } = 58;  k[1,4] = 3

Cost table:              K table for parenthesization:
     1    2    3    4         1   2   3   4
1    0   24   28   58    1    -   1   1   3
2         0   16   36    2        -   2   3
3              0   40    3            -   3
4                   0    4                -

Optimal parenthesization: ((A1 (A2 A3)) A4
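The same table-filling can be sketched in Python; `dims` is the dimension list [d0, …, dn], and the example is the slides’ chain with d = (3, 2, 4, 2, 5):

```python
def matrix_chain(dims):
    """dims = [d0, d1, ..., dn] for matrices A1 (d0 x d1) ... An (d(n-1) x dn).
    Returns (minimum number of multiplications, optimal parenthesization)."""
    n = len(dims) - 1
    C = [[0] * (n + 1) for _ in range(n + 1)]   # cost table, 1-based
    K = [[0] * (n + 1) for _ in range(n + 1)]   # best split point k
    for length in range(2, n + 1):              # chain length j - i + 1
        for i in range(1, n - length + 2):
            j = i + length - 1
            C[i][j], K[i][j] = min(
                (C[i][k] + C[k + 1][j] + dims[i - 1] * dims[k] * dims[j], k)
                for k in range(i, j))
    def paren(i, j):                            # rebuild parenthesization from K
        if i == j:
            return f"A{i}"
        k = K[i][j]
        return f"({paren(i, k)}*{paren(k + 1, j)})"
    return C[1][n], paren(1, n)

cost, order = matrix_chain([3, 2, 4, 2, 5])
print(cost, order)  # → 58 ((A1*(A2*A3))*A4)
```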
Optimal Binary Search Tree
• We now focus on the construction of binary search trees for a static set of identifiers, on which only searches are performed.
• To find an optimal binary search tree for a given static file, a cost measure must be determined for search trees.
• It’s reasonable to use the level number of a node as the cost.
Binary Search Tree Example
[Figure: two binary search trees over the identifiers do, for, if, return, while. In the first the worst-case search takes 4 comparisons; in the second, a more balanced tree, it takes 3.]
Binary Search Tree Containing A Symbol Table
• Let’s look at the problem of representing a symbol table as a binary search tree. If a binary search tree contains the identifiers a1, a2, …, an with a1 < a2 < … < an, and the probability of searching for each ai is pi, then the total cost of any binary search tree is
  Σ (1 ≤ i ≤ n)  pi * level(ai)
when only successful searches are made.
Binary Search Tree Containing A Symbol Table
• For unsuccessful searches, let’s partition the identifiers not in the binary search tree into n+1 classes Ei, 0 ≤ i ≤ n. If qi is the probability that the identifier being sought is in Ei, then the cost of the failure nodes is
  Σ (0 ≤ i ≤ n)  qi * (level(Ei) - 1)
• Therefore, the total cost of a binary search tree is
  Σ pi * level(ai)  +  Σ qi * (level(Ei) - 1)
• An optimal binary search tree for the identifier set a1, …, an is one that minimizes the above equation over all possible binary search trees for this identifier set. Since all searches must terminate either successfully or unsuccessfully, we have
  Σ pi + Σ qi = 1
Binary Search Tree With Three Identifiers Example
[Figure: the five possible binary search trees (a)–(e) over the three identifiers do, if, while.]
Cost of Binary Search Tree In The Example
• With equal probabilities, pi = qj = 1/7 for all i and j, we have
  cost(tree a) = 15/7; cost(tree b) = 13/7
  cost(tree c) = 15/7; cost(tree d) = 15/7
  cost(tree e) = 15/7
  Tree b is optimal.
• With p1=0.5, p2=0.1, p3=0.05, q0=0.15, q1=0.1, q2=0.05, and q3=0.05 we have
  cost(tree a) = 2.65; cost(tree b) = 1.9
  cost(tree c) = 1.5; cost(tree d) = 2.05
  cost(tree e) = 1.6
  Tree c is optimal.
Determine Optimal Binary Search Tree
• To determine the optimal binary search tree it is not practical to follow the above brute-force approach, since its complexity is O(n * 4^n / n^(3/2)).
• Now let’s take another approach. Let Tij denote an optimal binary search tree for ai+1, …, aj, i < j. Let cij be the cost of the search tree Tij, let rij be the root of Tij, and let wij be the weight of Tij, where
  w[i,j] = w[i,j-1] + pj + qj
• By definition rii = 0 and wii = qi, 0 ≤ i ≤ n. T0n is an optimal binary search tree for a1, …, an. Its cost is c0n, its weight w0n, and its root r0n.
Determine Optimal Binary Search Tree (Cont.)
• If Tij is an optimal binary search tree for ai+1, …, aj, and rij = k, then i < k ≤ j. Tij has two subtrees L and R. L contains ai+1, …, ak-1, and R contains ak+1, …, aj. So the cost cij of Tij is

  c(i,j) = min { c(i,k-1) + c(k,j) } + w(i,j),  where i < k ≤ j
  r(i,j) = the k achieving the minimum
Example
• Let n = 4, (a1, a2, a3, a4) = (do, if, return, while).
• Let (p1, p2, p3, p4) = (3, 3, 1, 1)
• and (q0, q1, q2, q3, q4) = (2, 3, 1, 1, 1).
• Initially wii = qi, cii = 0, and rii = 0, 0 ≤ i ≤ 4.
• w01 = w[0,0] + p1 + q1
Example
Diagonal entries (j - i = 0): wii = qi, cii = 0, rii = 0, so
w00=2, w11=3, w22=1, w33=1, w44=1 (all cii = 0, rii = 0).

Entries with j - i = 1, using w[i,j] = w[i,j-1] + pj + qj and c[i,j] = min{c[i,k-1] + c[k,j]} + w[i,j], i < k ≤ j:

w01 = w00 + p1 + q1 = 2 + 3 + 3 = 8;  c01 = {c00 + c11} + w01 = 0 + 0 + 8 = 8;  r01 = 1
w12 = w11 + p2 + q2 = 3 + 3 + 1 = 7;  c12 = {c11 + c22} + w12 = 0 + 0 + 7 = 7;  r12 = 2
w23 = w22 + p3 + q3 = 1 + 1 + 1 = 3;  c23 = {c22 + c33} + w23 = 0 + 0 + 3 = 3;  r23 = 3
w34 = w33 + p4 + q4 = 1 + 1 + 1 = 3;  c34 = {c33 + c44} + w34 = 0 + 0 + 3 = 3;  r34 = 4
Example
Entries with j - i = 2:
w02 = w01 + p2 + q2 = 8 + 3 + 1 = 12
c02 = min{ k=1: c00 + c12 + w02 = 0 + 7 + 12 = 19,  k=2: c01 + c22 + w02 = 8 + 0 + 12 = 20 } = 19;  r02 = 1
w13 = w12 + p3 + q3 = 7 + 1 + 1 = 9
c13 = min{ k=2: c11 + c23 + w13 = 0 + 3 + 9 = 12,  k=3: c12 + c33 + w13 = 7 + 0 + 9 = 16 } = 12;  r13 = 2
w24 = w23 + p4 + q4 = 3 + 1 + 1 = 5
c24 = min{ k=3: c22 + c34 + w24 = 0 + 3 + 5 = 8,  k=4: c23 + c44 + w24 = 3 + 0 + 5 = 8 } = 8;  r24 = 3

Entries with j - i = 3:
w03 = w02 + p3 + q3 = 12 + 1 + 1 = 14
c03 = min{ k=1: 0 + 12 + 14 = 26,  k=2: 8 + 3 + 14 = 25,  k=3: 19 + 0 + 14 = 33 } = 25;  r03 = 2
w14 = w13 + p4 + q4 = 9 + 1 + 1 = 11
c14 = min{ k=2: 0 + 8 + 11 = 19,  k=3: 7 + 3 + 11 = 21,  k=4: 12 + 0 + 11 = 23 } = 19;  r14 = 2

Entry with j - i = 4:
w04 = w03 + p4 + q4 = 14 + 1 + 1 = 16
c04 = min{ k=1: 0 + 19 + 16 = 35,  k=2: 8 + 8 + 16 = 32,  k=3: 19 + 3 + 16 = 38,  k=4: 25 + 0 + 16 = 41 } = 32;  r04 = 2
Example Computation
• Let n = 4, (a1, a2, a3, a4) = (do, if, return, while), (p1..p4) = (3, 3, 1, 1), (q0..q4) = (2, 3, 1, 1, 1).

Completed table (row m holds entries with j - i = m):

m=0: w00=2  c00=0  r00=0 | w11=3 c11=0 r11=0 | w22=1 c22=0 r22=0 | w33=1 c33=0 r33=0 | w44=1 c44=0 r44=0
m=1: w01=8  c01=8  r01=1 | w12=7 c12=7 r12=2 | w23=3 c23=3 r23=3 | w34=3 c34=3 r34=4
m=2: w02=12 c02=19 r02=1 | w13=9 c13=12 r13=2 | w24=5 c24=8 r24=3
m=3: w03=14 c03=25 r03=2 | w14=11 c14=19 r14=2
m=4: w04=16 c04=32 r04=2
Let n = 4, (a1, a2, a3, a4) = (do, if, return, while).

From the completed table, the minimum cost is c04 = 32 and the weight of the tree is w04 = 16.
Build the tree from the root table: r04 = 2, so the root is a2 (if). Its left subtree is T01 with root r01 = 1 (a1 = do), and its right subtree is T24 with root r24 = 3 (a3 = return), whose right subtree has root r34 = 4 (a4 = while).
TSP BY DP
For a subset of cities S ⊆ {1, 2, 3, …, n} that includes 1, and j ∈ S,
let C(S, j) be the length of the shortest path visiting each node in S exactly once, starting at 1 and ending at j.
When |S| > 1, we define C(S, 1) = ∞ since the path cannot start and end at 1.
Now let’s express C(S, j) in terms of smaller subproblems.
We need to start at 1 and end at j, so we should select the next-to-last city i in such a way that

C(S, j) = min { C(S - {j}, i) + d(i, j) },  where i ∈ S and i ≠ j
TSP BY DP
[Figure sequence: the recurrence applied step by step to a four-city instance, growing the subsets S from size 1 up to {2, 3, 4}.]
The minimum cost path is 35.
Starting from cost {1, {2, 3, 4}, 1}, we get the minimum value for d[1, 2]. When |S| = 3, select the path from 1 to 2 (cost 10), then go backwards. When |S| = 2, we get the minimum value for d[4, 2]; select the path from 2 to 4 (cost 10), then go backwards.
When |S| = 1, we get the minimum value for d[4, 3]; select the path from 4 to 3 (cost 9). Then we go to the S = Φ step, and get the minimum value for d[3, 1] (cost 6).
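The C(S, j) recurrence is the Held-Karp algorithm, which can be sketched in Python. The slides’ distance matrix is not shown; the one below is reconstructed from the traced tour (1→2 cost 10, 2→4 cost 10, 4→3 cost 9, 3→1 cost 6), so treat it as an assumption:

```python
from itertools import combinations

def held_karp(dist):
    """TSP by DP: C[(S, j)] = shortest path from city 0 through set S, ending at j."""
    n = len(dist)
    C = {}
    for j in range(1, n):                          # base case: S = {j}
        C[(frozenset([j]), j)] = dist[0][j]
    for size in range(2, n):
        for S in combinations(range(1, n), size):
            S = frozenset(S)
            for j in S:
                C[(S, j)] = min(C[(S - {j}, i)] + dist[i][j]
                                for i in S if i != j)
    full = frozenset(range(1, n))
    return min(C[(full, j)] + dist[j][0] for j in full)   # close the tour at city 1

# Assumed 4-city instance (optimal tour 1 -> 2 -> 4 -> 3 -> 1, cost 35)
dist = [[0, 10, 15, 20],
        [5,  0,  9, 10],
        [6, 13,  0, 12],
        [8,  8,  9,  0]]
print(held_karp(dist))  # → 35
```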
Computation Complexity of Optimal Binary Search Tree
• To evaluate the optimal binary tree we need to compute cij for (j-i) = 1, 2, …, n in that order. When j-i = m, there are n-m+1 cij’s to compute.
• Each cij can be computed in time O(m).
• The total time for all cij’s with j-i = m is therefore O(nm - m²). The total time to evaluate all the cij’s and rij’s is
  Σ (1 ≤ m ≤ n)  O(nm - m²) = O(n³)
• The computing complexity can be reduced to O(n²) by limiting the search for the optimal k to the range ri,j-1 ≤ k ≤ ri+1,j, according to D. E. Knuth.
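The c/w/r tables can be filled by a short Python sketch of the recurrence (a minimal version; ties in the min are broken toward the smaller k, and indices match the slides):

```python
def optimal_bst(p, q):
    """p[1..n]: success weights (p[0] unused); q[0..n]: failure weights.
    Fills w, c, r per c[i][j] = min over i < k <= j of (c[i][k-1] + c[k][j]) + w[i][j]."""
    n = len(p) - 1
    w = [[0] * (n + 1) for _ in range(n + 1)]
    c = [[0] * (n + 1) for _ in range(n + 1)]
    r = [[0] * (n + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        w[i][i] = q[i]                  # wii = qi, cii = 0, rii = 0
    for m in range(1, n + 1):           # m = j - i
        for i in range(n - m + 1):
            j = i + m
            w[i][j] = w[i][j - 1] + p[j] + q[j]
            best, k = min((c[i][k - 1] + c[k][j], k) for k in range(i + 1, j + 1))
            c[i][j] = best + w[i][j]
            r[i][j] = k
    return w, c, r

# Slide example: (a1..a4) = (do, if, return, while)
w, c, r = optimal_bst([0, 3, 3, 1, 1], [2, 3, 1, 1, 1])
print(c[0][4], r[0][4])  # → 32 2  (cost 32, root a2 = "if")
```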
Optimal Binary Search Trees
Problem: Given n keys a1 < … < an and probabilities p1, …, pn of searching for them, find a BST with a minimum average number of comparisons in successful search.

Since the total number of BSTs with n nodes is given by C(2n,n)/(n+1), which grows exponentially, brute force is hopeless.

Example: What is an optimal BST for keys A, B, C, and D with search probabilities 0.1, 0.2, 0.4, and 0.3, respectively?

The optimal tree has C at the root, with B (whose left child is A) on the left and D on the right:
Average # of comparisons = 1*0.4 + 2*(0.2+0.3) + 3*0.1 = 1.7
Analysis DP for Optimal BST Problem
Time efficiency: Θ(n³), but can be reduced to Θ(n²) by taking advantage of monotonicity of entries in the root table, i.e., R[i,j] is always in the range between R[i,j-1] and R[i+1,j]
Space efficiency: Θ(n²)

Method can be expanded to include unsuccessful searches

Warshall’s Algorithm: Transitive Closure
• Computes the transitive closure of a relation
• Alternatively: existence of all nontrivial paths in a digraph
• Example of transitive closure: digraph on vertices 1, 2, 3, 4 with edges 1→3, 2→1, 2→4, 4→2.

Adjacency matrix A:   Transitive closure T:
0 0 1 0               0 0 1 0
1 0 0 1               1 1 1 1
0 0 0 0               0 0 0 0
0 1 0 0               1 1 1 1
Warshall’s Algorithm
Constructs transitive closure T as the last matrix in the sequence of n-by-n matrices R(0), …, R(k), …, R(n) where
R(k)[i,j] = 1 iff there is a nontrivial path from i to j with only the first k vertices allowed as intermediate.
Note that R(0) = A (adjacency matrix) and R(n) = T (transitive closure).

R(0)      R(1)      R(2)      R(3)      R(4)
0 0 1 0   0 0 1 0   0 0 1 0   0 0 1 0   0 0 1 0
1 0 0 1   1 0 1 1   1 0 1 1   1 0 1 1   1 1 1 1
0 0 0 0   0 0 0 0   0 0 0 0   0 0 0 0   0 0 0 0
0 1 0 0   0 1 0 0   1 1 1 1   1 1 1 1   1 1 1 1
Warshall’s Algorithm (recurrence)
On the k-th iteration, the algorithm determines for every pair of vertices i, j whether a path exists from i to j with just vertices 1, …, k allowed as intermediate:

R(k)[i,j] = R(k-1)[i,j]                        (path using just 1, …, k-1)
            or
            R(k-1)[i,k] and R(k-1)[k,j]        (path from i to k and from k to j, using just 1, …, k-1)
Warshall’s Algorithm (matrix generation)
The recurrence relating elements of R(k) to elements of R(k-1) is:

R(k)[i,j] = R(k-1)[i,j] or (R(k-1)[i,k] and R(k-1)[k,j])

It implies the following rules for generating R(k) from R(k-1):

Rule 1: If an element in row i and column j is 1 in R(k-1), it remains 1 in R(k).
Rule 2: If an element in row i and column j is 0 in R(k-1), it has to be changed to 1 in R(k) if and only if the element in its row i and column k and the element in its column j and row k are both 1’s in R(k-1).
Warshall’s Algorithm (example)
Digraph: vertices 1, 2, 3, 4 with edges 1→3, 2→1, 2→4, 4→2.

R(0) = 0 0 1 0    R(1) = 0 0 1 0
       1 0 0 1           1 0 1 1
       0 0 0 0           0 0 0 0
       0 1 0 0           0 1 0 0

R(2) = 0 0 1 0    R(3) = 0 0 1 0    R(4) = 0 0 1 0
       1 0 1 1           1 0 1 1           1 1 1 1
       0 0 0 0           0 0 0 0           0 0 0 0
       1 1 1 1           1 1 1 1           1 1 1 1
Application of Warshall’s algorithm
Warshall’s Algorithm (pseudocode and analysis)

Algorithm Warshall(A[1..n,1..n])
  R(0) := A
  for k := 1 to n do
    for i := 1 to n do
      for j := 1 to n do
        R(k)[i,j] := R(k-1)[i,j] or (R(k-1)[i,k] and R(k-1)[k,j])
  return R(n)

Time efficiency: Θ(n³)
Space efficiency: Matrices can be written over their predecessors
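A runnable version of the recurrence (a minimal sketch, updating the matrix in place as the space note allows), checked against the example digraph:

```python
def warshall(A):
    """Transitive closure: R[i][j] = 1 iff a nontrivial path i -> j exists."""
    n = len(A)
    R = [row[:] for row in A]       # copy R(0) = adjacency matrix
    for k in range(n):              # allow vertex k as an intermediate
        for i in range(n):
            for j in range(n):
                R[i][j] = R[i][j] or (R[i][k] and R[k][j])
    return R

# Digraph from the example: edges 1->3, 2->1, 2->4, 4->2 (0-based here)
A = [[0, 0, 1, 0],
     [1, 0, 0, 1],
     [0, 0, 0, 0],
     [0, 1, 0, 0]]
for row in warshall(A):
    print(row)
# → [0, 0, 1, 0], [1, 1, 1, 1], [0, 0, 0, 0], [1, 1, 1, 1]
```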
Floyd’s Algorithm: All pairs shortest paths
Problem: In a weighted (di)graph, find shortest paths between every pair of vertices

Same idea: construct the solution through a series of matrices D(0), …, D(n) using increasing subsets of the vertices allowed as intermediate

Example: [Figure: a weighted digraph on four vertices; its weight matrix is the D(0) of the worked example that follows.]
Floyd’s Algorithm (matrix generation)
On the k-th iteration, the algorithm determines shortest paths between every pair of vertices i, j that use only vertices among 1, …, k as intermediate:

D(k)[i,j] = min { D(k-1)[i,j], D(k-1)[i,k] + D(k-1)[k,j] }

[Figure: path from i to j either directly, length D(k-1)[i,j], or through vertex k, length D(k-1)[i,k] + D(k-1)[k,j].]
Floyd’s Algorithm (example)

D(0) = 0 ∞ 3 ∞    D(1) = 0 ∞ 3 ∞
       2 0 5 ∞           2 0 5 ∞
       ∞ 7 0 1           ∞ 7 0 1
       6 ∞ ∞ 0           6 ∞ 9 0

D(2) = 0 ∞ 3 ∞    D(3) = 0 10 3 4    D(4) = 0 10 3 4
       2 0 5 ∞           2  0 5 6           2  0 5 6
       9 7 0 1           9  7 0 1           7  7 0 1
       6 ∞ 9 0           6 16 9 0           6 16 9 0

D(3): 3 to 1 not allowing 4 = 9.  D(4): 3 to 1 allowing 4 = 7.
Floyd’s Algorithm (pseudocode and analysis)

Algorithm Floyd(W[1..n,1..n])
  D := W
  for k := 1 to n do
    for i := 1 to n do
      for j := 1 to n do
        D[i,j] := min { D[i,j], D[i,k] + D[k,j] }
  return D

Time efficiency: Θ(n³)
Space efficiency: Matrices can be written over their predecessors
Note: Shortest paths themselves can be found, too (Problem 10)
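A runnable form of the D(k) recurrence (a minimal sketch, checked against the example matrices above):

```python
INF = float("inf")

def floyd(W):
    """All-pairs shortest path lengths; D is overwritten in place, D(0) .. D(n)."""
    n = len(W)
    D = [row[:] for row in W]
    for k in range(n):              # allow vertex k as an intermediate
        for i in range(n):
            for j in range(n):
                D[i][j] = min(D[i][j], D[i][k] + D[k][j])
    return D

# D(0) from the example
W = [[0,   INF, 3,   INF],
     [2,   0,   5,   INF],
     [INF, 7,   0,   1],
     [6,   INF, INF, 0]]
for row in floyd(W):
    print(row)
# → [0, 10, 3, 4], [2, 0, 5, 6], [7, 7, 0, 1], [6, 16, 9, 0]
```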
Application of Floyd’s algorithm
Dijkstra’s algorithm
prev keeps track of the shortest path
Single source shortest paths
• All of the shortest path algorithms we’ll look at today are called “single source shortest paths” algorithms
• Why?

[Figure: a small weighted graph on vertices A, B, C, D, E; the same graph is traced through Dijkstra’s algorithm on the following slides.]
[Figure sequence: a slide-by-slide trace of Dijkstra’s algorithm on the five-vertex graph A–E. Each slide shows the heap contents and the tentative distances: A starts at 0 and every other vertex at ∞; the minimum entry is extracted, its distance is finalized, and the distances of its neighbours are decreased whenever a shorter path through it is found. The “frontier” at any point is the set of all nodes reachable from the starting node within a given distance. The trace ends when the heap is empty.]
Is Dijkstra’s algorithm correct?
• Invariant: For every vertex removed from the heap, dist[v] is the actual shortest distance from s to v
  – The only time a vertex gets visited is when the distance from s to that vertex is smaller than the distance to any remaining vertex
  – Therefore, there cannot be any other path that hasn’t been visited already that would result in a shorter path
Running time?
• 1 call to MakeHeap
• |V| iterations of the main loop
• |V| calls to ExtractMin
• O(|E|) calls to DecreaseKey
Running time?
• Depends on the heap implementation

           1 MakeHeap   |V| ExtractMin   |E| DecreaseKey   Total
Array      O(|V|)       O(|V|²)          O(|E|)            O(|V|²)
Bin heap   O(|V|)       O(|V| log |V|)   O(|E| log |V|)    O((|V|+|E|) log |V|)
                                                           = O(|E| log |V|)
Fib heap   O(|V|)       O(|V| log |V|)   O(|E|)            O(|V| log |V| + |E|)

Is the binary heap an improvement over the array? Yes, if |E| < |V|² / log |V|.
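Dijkstra with a binary heap can be sketched in Python using heapq. Since heapq has no DecreaseKey, the sketch uses the common lazy-deletion substitute (stale heap entries are skipped); the graph is a small hypothetical example, not the slides’ figure:

```python
import heapq

def dijkstra(graph, s):
    """graph: {u: [(v, w), ...]} with non-negative weights.
    Returns (dist, prev); prev keeps track of the shortest path."""
    dist = {u: float("inf") for u in graph}
    prev = {u: None for u in graph}
    dist[s] = 0
    heap = [(0, s)]
    done = set()
    while heap:
        d, u = heapq.heappop(heap)          # ExtractMin
        if u in done:                       # stale entry (lazy deletion)
            continue
        done.add(u)
        for v, w in graph[u]:
            if d + w < dist[v]:             # relax edge (u, v)
                dist[v] = d + w
                prev[v] = u
                heapq.heappush(heap, (dist[v], v))
    return dist, prev

# Hypothetical example graph (undirected edges listed both ways)
graph = {"A": [("B", 3), ("C", 1)],
         "B": [("A", 3), ("C", 1), ("D", 3)],
         "C": [("A", 1), ("B", 1), ("E", 4)],
         "D": [("B", 3)],
         "E": [("C", 4)]}
print(dijkstra(graph, "A")[0])  # → {'A': 0, 'B': 2, 'C': 1, 'D': 5, 'E': 5}
```

Note that B is reached more cheaply through C (1 + 1 = 2) than directly (3), which is exactly the DecreaseKey step of the trace.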
What about Dijkstra’s on…?

[Figure: a graph containing an edge of weight -10.]

Dijkstra’s algorithm only works for positive edge weights.
Bounding the distance
• Another invariant: For each vertex v, dist[v] is
an upper bound on the actual shortest distance
– start of at 
– only update the value if we find a shorter distance
• An update procedure

dist [v] min{dist [v], dist [u ]  w(u , v)}


dist [v] min{dist [v], dist [u ]  w(u , v)}

• Can we ever go wrong applying this update


rule?
– We can apply this rule as many times as we want
and will never underestimate dist[v]
• When will dist[v] be right?
– If u is along the shortest path to v and dist[u] is
correct
dist [v] min{dist [v], dist [u ]  w(u , v)}

• dist[v] will be right if u is along the shortest


path to v and dist[u] is correct
• Consider the shortest path from s to v

s p1 p2 p3 pk v
dist [v] min{dist [v], dist [u ]  w(u , v)}

• dist[v] will be right if u is along the shortest


path to v and dist[u] is correct
• What happens if we update all of the vertices
with the above update?

s p1 p2 p3 pk v
dist [v] min{dist [v], dist [u ]  w(u , v)}

• dist[v] will be right if u is along the shortest


path to v and dist[u] is correct
• What happens if we update all of the vertices
with the above update?

s p1 p2 p3 pk v

correct
dist [v] min{dist [v], dist [u ]  w(u , v)}

• dist[v] will be right if u is along the shortest


path to v and dist[u] is correct
• What happens if we update all of the vertices
with the above update?

s p1 p2 p3 pk v

correct correct
dist [v] min{dist [v], dist [u ]  w(u , v)}

• dist[v] will be right if u is along the shortest


path to v and dist[u] is correct
• Does the order that we update the vertices
matter?

s p1 p2 p3 pk v

correct correct
dist [v] min{dist [v], dist [u ]  w(u , v)}

• dist[v] will be right if u is along the shortest path to v and


dist[u] is correct
• How many times do we have to do this for vertex pi to
have the correct shortest path from s?
– i times

s p1 p2 p3 pk v
dist [v] min{dist [v], dist [u ]  w(u , v)}

• dist[v] will be right if u is along the shortest path to v and


dist[u] is correct
• How many times do we have to do this for vertex pi to
have the correct shortest path from s?
– i times

s p1 p2 p3 pk v

correct correct
dist [v] min{dist [v], dist [u ]  w(u , v)}

• dist[v] will be right if u is along the shortest path to v and


dist[u] is correct
• How many times do we have to do this for vertex pi to
have the correct shortest path from s?
– i times

s p1 p2 p3 pk v

correct correct correct


dist [v] min{dist [v], dist [u ]  w(u , v)}

• dist[v] will be right if u is along the shortest path to v and


dist[u] is correct
• How many times do we have to do this for vertex pi to
have the correct shortest path from s?
– i times

s p1 p2 p3 pk v

correct correct correct correct


dist [v] min{dist [v], dist [u ]  w(u , v)}

• dist[v] will be right if u is along the shortest path to v and


dist[u] is correct
• How many times do we have to do this for vertex pi to
have the correct shortest path from s?
– i times

s p1 p2 p3 pk v

correct correct correct correct …


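The pass-counting argument above can be sketched directly (the unit-weight path here is an illustrative example): even in the worst update order, pass i fixes vertex pi, and no intermediate value ever underestimates the true distance.

```python
def relax(dist, u, v, w):
    """The update rule: dist[v] <- min(dist[v], dist[u] + w(u, v))."""
    dist[v] = min(dist[v], dist[u] + w)

# Path s -> p1 -> p2 -> v with unit weights; true distances are 0, 1, 2, 3.
edges = [('s', 'p1', 1), ('p1', 'p2', 1), ('p2', 'v', 1)]
dist = {'s': 0, 'p1': float('inf'), 'p2': float('inf'), 'v': float('inf')}

# Deliberately relax edges in the worst order for this path: each full
# pass fixes exactly one more vertex, so 3 passes are needed.
for _ in range(3):
    for u, v, w in reversed(edges):
        relax(dist, u, v, w)
print(dist)   # {'s': 0, 'p1': 1, 'p2': 2, 'v': 3}
```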
dist [v] min{dist [v], dist [u ]  w(u , v)}

• dist[v] will be right if u is along the shortest


path to v and dist[u] is correct
• What is the longest (vetex-wise) the path from
s to any node v can be?
– |V| - 1 edges/vertices

s p1 p2 p3 pk v

correct correct correct correct …


Bellman-Ford algorithm
• Initialize all the distances
• Iterate |V| − 1 times over all the edges and apply the update rule
• Check for negative cycles
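The three steps above can be sketched as follows (a minimal implementation, not the slides' exact pseudocode):

```python
def bellman_ford(vertices, edges, source):
    """Bellman-Ford: shortest distances from source, or ValueError if a
    negative cycle is reachable.

    edges: list of (u, v, weight) triples.
    """
    dist = {v: float('inf') for v in vertices}     # initialize all distances
    dist[source] = 0

    # |V| - 1 passes: apply the update rule to every edge.
    for _ in range(len(vertices) - 1):
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w

    # One more pass: any further improvement means a negative cycle.
    for u, v, w in edges:
        if dist[u] + w < dist[v]:
            raise ValueError("negative cycle reachable from source")
    return dist

edges = [('s', 'a', 4), ('s', 'b', 5), ('b', 'a', -3), ('a', 'c', 2)]
print(bellman_ford('sabc', edges, 's'))   # {'s': 0, 'a': 2, 'b': 5, 'c': 4}
```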
Negative cycles

What is the shortest path from A to E?
There isn’t one: going around a negative-weight cycle keeps decreasing the total.

[Example graph: vertices A–E; edge weights 1, 1, 5, 10, −10, 3, forming a cycle of negative total weight]
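This is exactly what the extra pass in Bellman-Ford detects; a sketch (the graph below is a made-up example with a cycle of total weight −1):

```python
def has_negative_cycle(vertices, edges, source):
    # Run |V| - 1 rounds of updates, then test whether any edge can
    # still be relaxed -- if so, a negative cycle is reachable.
    dist = {v: float('inf') for v in vertices}
    dist[source] = 0
    for _ in range(len(vertices) - 1):
        for u, v, w in edges:
            dist[v] = min(dist[v], dist[u] + w)
    return any(dist[u] + w < dist[v] for u, v, w in edges)

# Cycle b -> c -> d -> b has total weight 1 + 1 - 3 = -1 < 0, so the
# "shortest path" from a to e is unbounded below.
edges = [('a', 'b', 1), ('b', 'c', 1), ('c', 'd', 1),
         ('d', 'b', -3), ('d', 'e', 1)]
print(has_negative_cycle('abcde', edges, 'a'))   # True
```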
Bellman-Ford algorithm

[Example graph: source S and vertices A–G; edge weights 10, 8, 1, −4, 2, 1, 1, −2, 3, −1, −1]

How many edges is the shortest path from s to:
A: 3
B: 5
D: 7
Bellman-Ford algorithm

Distances from S after each iteration (∞ = not yet reached):

     Iteration
      0    1    2    3    4    5    6    7
S     0    0    0    0    0    0    0    0
A     ∞   10   10    5    5    5    5    5
B     ∞    ∞    ∞   10    6    5    5    5
C     ∞    ∞    ∞    ∞   11    7    6    6
D     ∞    ∞    ∞    ∞    ∞   14   10    9
E     ∞    ∞   12    8    7    7    7    7
F     ∞    9    9    9    9    9    9    9
G     ∞    8    8    8    8    8    8    8

After iteration 3, A has the correct distance and path; after
iteration 5, so does B; after iteration 7, D (and all other nodes)
have the correct distance and path.
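This run can be reproduced in a few lines. The edge list below is reconstructed from the slide figure (which did not survive extraction), so treat it as an assumption; the final distances it produces do match iteration 7 above.

```python
def bellman_ford(vertices, edges, source):
    # |V| - 1 = 7 passes of the update rule (no negative cycles here).
    dist = {v: float('inf') for v in vertices}
    dist[source] = 0
    for _ in range(len(vertices) - 1):
        for u, v, w in edges:
            dist[v] = min(dist[v], dist[u] + w)
    return dist

# Reconstructed edge list (an assumption):
edges = [('S', 'A', 10), ('S', 'G', 8), ('G', 'F', 1), ('F', 'A', -4),
         ('F', 'E', -1), ('A', 'E', 2), ('E', 'B', -2), ('B', 'A', 1),
         ('B', 'C', 1), ('C', 'D', 3), ('D', 'E', -1)]
print(bellman_ford('SABCDEFG', edges, 'S'))
# {'S': 0, 'A': 5, 'B': 5, 'C': 6, 'D': 9, 'E': 7, 'F': 9, 'G': 8}
```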
Correctness of Bellman-Ford
• Loop invariant: After iteration i, all vertices with shortest
paths from s of at most i edges have correct distances
Runtime of Bellman-Ford

O(|V| |E|): |V| − 1 passes, each relaxing all |E| edges

Can you modify the algorithm to run
faster (in some circumstances)?
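One standard answer: stop as soon as a full pass makes no changes, since every later pass would be identical. A sketch:

```python
def bellman_ford_early_exit(vertices, edges, source):
    """Bellman-Ford with early termination: still O(|V| |E|) worst case,
    but often far fewer passes in practice."""
    dist = {v: float('inf') for v in vertices}
    dist[source] = 0
    for _ in range(len(vertices) - 1):
        changed = False
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
                changed = True
        if not changed:        # already converged; later passes change nothing
            break
    return dist

# In this edge order one pass finds every shortest path, so the loop
# exits after the second, no-change pass instead of running |V| - 1 times.
edges = [('s', 'a', 2), ('a', 'b', 3), ('b', 'c', 1)]
print(bellman_ford_early_exit('sabc', edges, 's'))  # {'s': 0, 'a': 2, 'b': 5, 'c': 6}
```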