Chapter 3: Greedy Algorithms

Introduction
• A greedy algorithm always makes the choice that looks best at the moment. That is, it makes a locally optimal choice in the hope that this choice will lead to a globally optimal solution.
• Greedy algorithms do not always yield optimal solutions, but for many problems they do.

1 An activity-selection problem
Set of activities S = {a1, a2, …, an}. Each activity ai has a start time si and a finish time fi, where 0 ≤ si < fi < ∞. If selected, activity ai takes place during the half-open time interval [si, fi). Activities ai and aj are compatible if the intervals [si, fi) and [sj, fj) do not overlap; that is, ai and aj are compatible if si ≥ fj or sj ≥ fi.

In the activity-selection problem, we wish to select a maximum-size subset of mutually compatible activities. We assume that the activities are sorted in monotonically increasing order of finish time.

Goal: Select the largest possible set of nonoverlapping (mutually compatible) activities.
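The compatibility test is a direct interval check. A minimal Python sketch (the (start, finish) pair representation of an activity is an illustrative assumption, not from the text):

```python
def compatible(a, b):
    """Activities are (start, finish) pairs over half-open intervals [s, f)."""
    s_a, f_a = a
    s_b, f_b = b
    # a_i and a_j are compatible iff s_i >= f_j or s_j >= f_i
    return s_a >= f_b or s_b >= f_a

print(compatible((1, 4), (5, 7)))  # True: [1, 4) and [5, 7) do not overlap
print(compatible((1, 4), (3, 5)))  # False: the intervals overlap
```

Because the intervals are half-open, two activities that merely touch at an endpoint (fi = sj) count as compatible.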
i 1 2 3 4 5 6 7 8 9 10 11
si 1 3 0 5 3 5 6 8 8 2 12
fi 4 5 6 7 9 9 10 11 12 14 16
For this example:
• The subset {a3, a9, a11} consists of mutually compatible activities. It is not a maximum subset, however.
• {a1, a4, a8, a11} is a largest subset of mutually compatible activities.
• Another largest subset is {a2, a4, a9, a11}.
For the activity-selection problem, the greedy choice can be: the activity that leaves the resource available
for as many other activities as possible.
Question: Which activity leaves the resource available for most other activities?
Answer: The first activity to finish. (If more than one activity has the earliest finish time, can choose any
such activity.)
Since activities are sorted by finish time, just choose activity 𝑎1.
After choosing a1, we have only one subproblem to solve: finding a maximum-size set of mutually compatible activities that start after a1 finishes. Let Sk denote the set of activities that start after ak finishes:
Sk = {ai ∈ S : si ≥ fk}
If we make the greedy choice of activity a1, then S1 remains the only subproblem to solve. (We don't have to worry about activities that finish before a1 starts, because s1 < f1 and f1 is the earliest finish time of any activity, so no activity ai has fi ≤ s1.)
Solution:
• Repeatedly choose the activity that finishes first, keep only the activities compatible with
this activity, and repeat until no activities remain.
• Because we always choose the activity with the earliest finish time, the finish times of the
activities we choose must strictly increase. We can consider each activity just once
overall, in a monotonically increasing order of finish times.
The algorithm works top-down, choosing an activity to put into the optimal solution and then solving the subproblem of choosing activities from those that are compatible with those already chosen. Greedy algorithms typically have this top-down design: make a choice and then solve a subproblem, rather than the bottom-up technique of solving subproblems before making a choice.
Recursive greedy algorithm
Start and finish times are represented by arrays s and f, where f is assumed to be already sorted in monotonically increasing order.
To start, add a fictitious activity a0 with f0 = 0, so that S0 = S, the entire set of activities.
Procedure RECURSIVE-ACTIVITY-SELECTOR takes as parameters the arrays s and f, the index k of the current subproblem, and the number n of activities in the original problem.
Initial call: RECURSIVE-ACTIVITY-SELECTOR(s, f, 0, n).
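The recursion just described can be sketched in Python (a transliteration of RECURSIVE-ACTIVITY-SELECTOR; the dummy slot 0 in s and f plays the role of the fictitious activity a0):

```python
def rec_activity_selector(s, f, k, n):
    """s, f are 1-indexed via a dummy slot 0; f must be sorted ascending.
    Returns the indices of a maximum-size set of compatible activities in S_k."""
    m = k + 1
    # find the first activity in S_k that starts after a_k finishes
    while m <= n and s[m] < f[k]:
        m += 1
    if m <= n:
        return [m] + rec_activity_selector(s, f, m, n)
    return []

# the 11 activities from the table, with the fictitious a0 (f0 = 0) in slot 0
s = [0, 1, 3, 0, 5, 3, 5, 6, 8, 8, 2, 12]
f = [0, 4, 5, 6, 7, 9, 9, 10, 11, 12, 14, 16]
print(rec_activity_selector(s, f, 0, 11))  # [1, 4, 8, 11]
```

On the example data this selects exactly the largest compatible subset {a1, a4, a8, a11} mentioned above.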
Figure 16.1 The operation of RECURSIVE-ACTIVITY-SELECTOR on the 11 activities given earlier. Activities considered in each recursive call appear between horizontal lines. The fictitious activity a0 finishes at time 0, and the initial call RECURSIVE-ACTIVITY-SELECTOR(s, f, 0, 11) selects activity a1. In each recursive call, the activities that have already been selected are shaded, and the activity shown in white is being considered. If the starting time of an activity occurs before the finish time of the most recently added activity (the arrow between them points left), it is rejected. Otherwise (the arrow points directly up or to the right), it is selected. Successive recursive calls select activities a1, a4, a8, and a11; the last recursive call, RECURSIVE-ACTIVITY-SELECTOR(s, f, 11, 11), returns the empty set.
Time: Θ(n), since each activity is examined exactly once, assuming that the activities are already sorted by finish time.
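An iterative Python sketch of GREEDY-ACTIVITY-SELECTOR (the line numbers cited in the discussion refer to the book's pseudocode; the comments map them onto this sketch):

```python
def greedy_activity_selector(s, f):
    """s, f are 1-indexed via a dummy slot 0; f must be sorted ascending."""
    n = len(s) - 1
    A = [1]                       # lines 2-3: select a1 ...
    k = 1                         # ... and let k index the most recent addition to A
    for m in range(2, n + 1):     # lines 4-7: scan the remaining activities
        if s[m] >= f[k]:          # line 5: a_m is compatible with everything in A
            A.append(m)           # line 6: add a_m to A
            k = m                 # line 7: a_m is now the most recent addition
    return A

s = [0, 1, 3, 0, 5, 3, 5, 6, 8, 8, 2, 12]
f = [0, 4, 5, 6, 7, 9, 9, 10, 11, 12, 14, 16]
print(greedy_activity_selector(s, f))  # [1, 4, 8, 11]
```

As expected, it returns the same set as the recursive version on the running example.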
The procedure works as follows. The variable k indexes the most recent addition to A, corresponding to the activity ak in the recursive version. Since we consider the activities in order of monotonically increasing finish time, fk is always the maximum finish time of any activity in A. That is,

fk = max{fi : ai ∈ A}.     (16.3)

Lines 2–3 select activity a1, initialize A to contain just this activity, and initialize k to index this activity. The for loop of lines 4–7 finds the earliest activity in Sk to finish. The loop considers each activity am in turn and adds am to A if it is compatible with all previously selected activities; such an activity is the earliest in Sk to finish. To see whether activity am is compatible with every activity currently in A, it suffices by equation (16.3) to check (in line 5) that its start time sm is not earlier than the finish time fk of the activity most recently added to A. If activity am is compatible, then lines 6–7 add activity am to A and set k to m. The set A returned by the call GREEDY-ACTIVITY-SELECTOR(s, f) is precisely the set returned by the call RECURSIVE-ACTIVITY-SELECTOR(s, f, 0, n).

Like the recursive version, GREEDY-ACTIVITY-SELECTOR schedules a set of n activities in Θ(n) time, assuming that the activities were already sorted initially by their finish times.

The knapsack problem
• n items.
• Item i is worth $vi and weighs wi pounds.
• Find a most valuable subset of items with total weight ≤ W.
• Have to either take an item or not take it; can't take part of it.

Fractional knapsack problem
• Like the 0-1 knapsack problem, but can take a fraction of an item.
• The fractional knapsack problem has the greedy-choice property, and the 0-1 knapsack problem does not.
• To solve the fractional problem:
  o Rank items by value/weight vi/wi, so that vi/wi ≥ vi+1/wi+1 for all i.
  o Take items in decreasing order of value/weight.
  o This takes all of the items with the greatest value/weight, and possibly a fraction of the next item.
Exercises
16.1-1
Give a dynamic-programming algorithm for the activity-selection problem, based on recurrence (16.2). Have your algorithm compute the sizes c[i, j] as defined above and also produce the maximum-size subset of mutually compatible activities.
Figure 16.2 An example showing that the greedy strategy does not work for the 0-1 knapsack problem. (a) The thief must select a subset of the three items shown whose weight must not exceed 50 pounds (item 1: 10 pounds, $60; item 2: 20 pounds, $100; item 3: 30 pounds, $120). (b) The optimal subset includes items 2 and 3, worth $220 together. Any solution with item 1 is suboptimal ($160 or $180), even though item 1 has the greatest value per pound. (c) For the fractional knapsack problem, taking the items in order of greatest value per pound yields an optimal solution, worth $240.

Both problems have optimal substructure, but the fractional knapsack problem has the greedy-choice property, and the 0-1 knapsack problem does not.

The 0-1 problem gives rise to many overlapping subproblems, a hallmark of dynamic programming, and indeed, as Exercise 16.2-2 asks you to show, we can use dynamic programming to solve the 0-1 problem.
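The fractional greedy rule can be sketched as follows, using the Figure 16.2 data (the (value, weight) tuple representation is illustrative):

```python
def fractional_knapsack(items, W):
    """items: list of (value, weight). Returns the maximum total value
    achievable with capacity W when fractions of items may be taken."""
    # rank items by value per pound, greatest first
    items = sorted(items, key=lambda vw: vw[0] / vw[1], reverse=True)
    total = 0.0
    for value, weight in items:
        if W <= 0:
            break
        take = min(weight, W)          # the whole item, or the fraction that fits
        total += value * take / weight
        W -= take
    return total

# items worth $60/10 lb, $100/20 lb, $120/30 lb; knapsack holds 50 lb
print(fractional_knapsack([(60, 10), (100, 20), (120, 30)], 50))  # 240.0
```

The sketch takes items 1 and 2 whole and 20/30 of item 3, matching the $240 optimum of Figure 16.2(c). The same greedy rule applied to the 0-1 problem would stop at $160, which is why it fails there.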
Huffman Codes
Suppose we have a 100,000-character data file that we wish to store compactly. We observe that the characters in the file occur with the frequencies given in Figure 1. That is, only 6 different characters appear, and the character a occurs 45,000 times.
Figure 1: A character-coding problem. A data file of 100,000 characters contains only the characters a–f, with the frequencies indicated.
Fixed-length code: if we assign each character a 3-bit codeword, we can encode the file in 300,000 bits.
Variable-length code: give frequent characters short codewords and infrequent characters long codewords. Here the 1-bit string 0 represents a, and the 4-bit string 1100 represents f. Only 224,000 bits are needed.
Prefix Codes
Codes in which no codeword is also a prefix of some other codeword. Prefix codes are desirable
because they simplify decoding.
Encoding is always simple for any binary character code; we just concatenate the codewords
representing each character of the file.
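Decoding is where the prefix property pays off: scan the bits and emit a character as soon as the accumulated bits match a codeword; no lookahead is ever needed. A sketch using the codewords stated in the text (a = 0, b = 101, f = 1100); the function name and dict representation are illustrative:

```python
def decode(bits, codebook):
    """Decode a bit string under a prefix code by emitting a character
    whenever the accumulated bits match a codeword."""
    inverse = {cw: ch for ch, cw in codebook.items()}
    out, buf = [], ""
    for bit in bits:
        buf += bit
        if buf in inverse:            # no codeword is a prefix of another,
            out.append(inverse[buf])  # so the first match is unambiguous
            buf = ""
    return "".join(out)

# codewords stated in the text: a = 0, b = 101, f = 1100
code = {"a": "0", "b": "101", "f": "1100"}
print(decode("010111000", code))  # abfa
```

Without the prefix property, the greedy "first match wins" rule here would be ambiguous; with it, decoding is a single left-to-right pass.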
Trees corresponding to the coding schemes in Figure 1. Each leaf is labeled with a character and
its frequency of occurrence. Each internal node is labeled with the sum of the frequencies of the
leaves in its subtree.
(a) The tree corresponding to the fixed-length code a = 000, . . . , f = 101.
(b) The tree corresponding to the optimal prefix code a = 0, b = 101, . . . , f = 1100.
An optimal code for a file is always represented by a full binary tree, in which every nonleaf node has two children. The fixed-length code in our example is not optimal since its tree is not a full binary tree: it contains codewords beginning 10…, but none beginning 11….
Huffman invented a greedy algorithm that constructs an optimal prefix code called a Huffman
code.
Huffman coding is a lossless data compression algorithm. It builds a binary tree by repeatedly merging the lowest-frequency symbols, yielding a code that encodes any message sequence into a shorter encoded message, together with a method to reassemble the original message without losing any data.
Huffman coding is based on the frequency of occurrence of each data item. The principle is to use a lower number of bits to encode the data that occurs more frequently.
• The algorithm builds the tree T corresponding to the optimal code in a bottom-up
manner.
• It begins with a set of |𝐶| leaves and performs a sequence of |𝐶| − 1 “merging”
operations to create the final tree.
• The algorithm uses a min-priority queue Q, keyed on the freq attribute, to identify the
two least-frequent objects to merge.
• When we merge two objects, the result is a new object whose frequency is the sum of the
frequencies of the two objects that were merged.
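The merging loop maps naturally onto a binary min-heap. A Python sketch using heapq (the node representation and the tie-breaking counter are implementation choices, not from the text):

```python
import heapq
import itertools

def huffman(freqs):
    """freqs: dict mapping character -> frequency. Returns dict char -> codeword."""
    counter = itertools.count()   # tie-breaker so heap entries never compare trees
    # each queue entry is (freq, tiebreak, tree); a leaf is just a character
    q = [(f, next(counter), ch) for ch, f in freqs.items()]
    heapq.heapify(q)                     # O(n) initialization of Q
    for _ in range(len(freqs) - 1):      # |C| - 1 merging operations
        f1, _, left = heapq.heappop(q)   # extract the two least-frequent objects
        f2, _, right = heapq.heappop(q)
        # the merged object's frequency is the sum of the two frequencies
        heapq.heappush(q, (f1 + f2, next(counter), (left, right)))
    codes = {}
    def walk(node, prefix):
        if isinstance(node, str):
            codes[node] = prefix or "0"  # handle the 1-character corner case
        else:
            walk(node[0], prefix + "0")
            walk(node[1], prefix + "1")
    walk(q[0][2], "")
    return codes

# frequencies (in thousands) from the running example
freqs = {"a": 45, "b": 13, "c": 12, "d": 16, "e": 9, "f": 5}
codes = huffman(freqs)
print(sum(len(codes[ch]) * f for ch, f in freqs.items()))  # 224
```

The weighted codeword length comes out to 224 (thousand bits), matching the 224,000-bit figure for the variable-length code above.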
Running time: 𝑶(𝒏 𝒍𝒈 𝒏).
To analyze the running time of Huffman's algorithm, we assume that Q is implemented as a binary min-heap (a complete binary tree in which the key at every node is less than or equal to the keys of its children, so the minimum key is at the root).
For a set C of n characters, we can initialize Q in O(n) time using the BUILD-MIN-HEAP procedure.
Thus, the total running time of HUFFMAN on a set of n characters is 𝑂(𝑛 𝑙𝑔 𝑛).
Minimum Spanning Trees
In circuit design we often need to interconnect a set of pins with runs of wire; of the possible arrangements, the one that uses the least amount of wire is usually the most desirable. We can model this wiring problem with a connected, undirected graph G = (V, E), where V is the set of pins, E is the set of possible interconnections between pairs of pins, and each edge (u, v) ∈ E has a weight w(u, v) specifying the cost (amount of wire needed) to connect u and v. We then wish to find an acyclic subset T ⊆ E that connects all of the vertices and whose total weight

w(T) = Σ(u,v)∈T w(u, v)

is minimized. Since T is acyclic and connects all of the vertices, it must form a tree, which we call a spanning tree since it "spans" the graph G. We call the problem of determining the tree T the minimum-spanning-tree problem.

Two greedy algorithms are considered for solving the minimum-spanning-tree problem: Kruskal's algorithm and Prim's algorithm. Each of them can easily be made to run in time O(E lg V) using ordinary binary heaps.
We call such an edge a safe edge for A, since we can add it to A safely while maintaining the invariant.

GENERIC-MST(G, w)
1  A = ∅
2  while A does not form a spanning tree
3      find an edge (u, v) that is safe for A      (search for a safe edge
4      A = A ∪ {(u, v)}                             and add it to the spanning tree)
5  return A
Kruskal's Algorithm
• The loop checks, for each edge (u, v), whether the endpoints u and v belong to the same tree (check whether FIND-SET(u) equals FIND-SET(v)).
• If they do, then the edge (u, v) cannot be added to the forest without creating a cycle, and the edge is discarded.
• Otherwise, the two vertices belong to different trees. In this case, line 7 adds the edge (u, v) to A, and
• line 8 merges the vertices in the two trees (the UNION procedure).
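These steps can be sketched with a simple disjoint-set structure standing in for FIND-SET and UNION (path compression and union by size are common implementation choices; the edge list follows the weights shown in the figures):

```python
def mst_kruskal(vertices, edges):
    """edges: list of (weight, u, v). Returns the list A of MST edges."""
    parent = {v: v for v in vertices}
    size = {v: 1 for v in vertices}

    def find_set(x):                      # FIND-SET with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    def union(x, y):                      # UNION by size
        x, y = find_set(x), find_set(y)
        if size[x] < size[y]:
            x, y = y, x
        parent[y] = x
        size[x] += size[y]

    A = []
    for w, u, v in sorted(edges):         # consider edges in sorted order by weight
        if find_set(u) != find_set(v):    # would (u, v) create a cycle?
            A.append((u, v))              # line 7: add the edge to A
            union(u, v)                   # line 8: merge the two trees
    return A

# the 9-vertex graph from the figures
edges = [(4,'a','b'), (8,'a','h'), (8,'b','c'), (11,'b','h'), (7,'c','d'),
         (4,'c','f'), (2,'c','i'), (9,'d','e'), (14,'d','f'), (10,'e','f'),
         (2,'f','g'), (1,'g','h'), (6,'g','i'), (7,'h','i')]
A = mst_kruskal('abcdefghi', edges)
print(sum(w for w, u, v in edges if (u, v) in A))  # total MST weight: 37
```

With sorting dominating, the running time is O(E lg E) = O(E lg V) when UNION/FIND-SET use union by rank and path compression.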
Figure 23.4 The execution of Kruskal's algorithm on the graph from Figure 23.1. Shaded edges belong to the forest A being grown. The algorithm considers each edge in sorted order by weight. An arrow points to the edge under consideration at each step of the algorithm. If the edge joins two distinct trees in the forest, it is added to the forest, thereby merging the two trees.
Prim's Algorithm
Like Kruskal's algorithm, Prim's algorithm is a special case of the generic minimum-spanning-tree method from Section 23.1. Prim's algorithm operates much like Dijkstra's algorithm for finding shortest paths in a graph, which we shall see in Section 24.3. Prim's algorithm has the property that the edges in the set A always form a single tree. As Figure 23.5 shows, the tree starts from an arbitrary root vertex r and grows until the tree spans all the vertices in V. Each step adds to the tree A a light edge that connects A to an isolated vertex: one on which no edge of A is incident. By Corollary 23.2, this rule adds only edges that are safe for A; therefore, when the algorithm terminates, the edges in A form a minimum spanning tree. This strategy qualifies as greedy since at each step it adds to the tree an edge that contributes the minimum amount possible to the tree's weight.

In order to implement Prim's algorithm efficiently, we need a fast way to select a new edge to add to the tree formed by the edges in A. In the pseudocode below, the connected graph G and the root r of the minimum spanning tree to be grown are inputs to the algorithm. During execution of the algorithm, all vertices that are not in the tree reside in a min-priority queue Q based on a key attribute. For each vertex v, the attribute v.key is the minimum weight of any edge connecting v to a vertex in the tree; by convention, v.key = ∞ if there is no such edge. The attribute v.π names the parent of v in the tree. The algorithm implicitly maintains the set A from GENERIC-MST as

A = {(v, v.π) : v ∈ V − {r} − Q}.

When the algorithm terminates, the min-priority queue Q is empty; the minimum spanning tree A for G is thus

A = {(v, v.π) : v ∈ V − {r}}.

MST-PRIM(G, w, r)
 1  for each u ∈ G.V
 2      u.key = ∞
 3      u.π = NIL
 4  r.key = 0
 5  Q = G.V
 6  while Q ≠ ∅
 7      u = EXTRACT-MIN(Q)
 8      for each v ∈ G.Adj[u]
 9          if v ∈ Q and w(u, v) < v.key
10              v.π = u
11              v.key = w(u, v)

Line 7 identifies a vertex u ∈ Q incident on a light edge that crosses the cut (V − Q, Q); EXTRACT-MIN removes u from the set Q and adds it to the set V − Q of vertices in the tree. Lines 8–11 update the key and π attributes of every vertex v adjacent to u but not in the tree.
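A Python sketch of MST-PRIM using heapq; since heapq provides no DECREASE-KEY operation, the sketch uses the common lazy-deletion substitute of pushing updated keys and skipping stale entries:

```python
import heapq

def mst_prim(adj, r):
    """adj: dict u -> list of (v, weight). Returns (pi, key) dicts,
    where pi[v] is v's parent in the MST and key[v] the connecting weight."""
    key = {u: float('inf') for u in adj}   # u.key = INF for every vertex
    pi = {u: None for u in adj}            # u.pi = NIL
    key[r] = 0                             # r.key = 0
    q = [(0, r)]
    in_tree = set()
    while q:
        _, u = heapq.heappop(q)            # EXTRACT-MIN
        if u in in_tree:
            continue                       # stale entry: u already extracted
        in_tree.add(u)                     # move u from Q into the tree, V - Q
        for v, w in adj[u]:
            if v not in in_tree and w < key[v]:
                pi[v] = u                  # v.pi = u
                key[v] = w                 # v.key = w(u, v); the "decrease-key"
                heapq.heappush(q, (w, v))  # push instead of decreasing in place
    return pi, key

# same 9-vertex graph as in the Kruskal example, rooted at a
edges = [(4,'a','b'), (8,'a','h'), (8,'b','c'), (11,'b','h'), (7,'c','d'),
         (4,'c','f'), (2,'c','i'), (9,'d','e'), (14,'d','f'), (10,'e','f'),
         (2,'f','g'), (1,'g','h'), (6,'g','i'), (7,'h','i')]
adj = {v: [] for v in 'abcdefghi'}
for w, u, v in edges:
    adj[u].append((v, w))
    adj[v].append((u, w))
pi, key = mst_prim(adj, 'a')
print(sum(key.values()))  # total MST weight: 37
```

Each final key[v] is the weight of the MST edge (v, pi[v]), so their sum is the tree weight, agreeing with the Kruskal result on the same graph. The lazy-deletion variant keeps the O(E lg V) bound of the binary-heap implementation.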
Figure 23.5 The execution of Prim's algorithm on the graph from Figure 23.1. The root vertex is a. Shaded edges are in the tree being grown, and black vertices are in the tree. At each step of the algorithm, the vertices in the tree determine a cut of the graph, and a light edge crossing the cut is added to the tree. In the second step, for example, the algorithm has a choice of adding either edge (b, c) or edge (a, h) to the tree since both are light edges crossing the cut.