Data Structures
(Fourth edition)
© 2016
Contents

List of Figures
List of Tables
Preface
2 Efficiency of Sorting
2.1 The problem of sorting
2.2 Insertion sort
2.3 Mergesort
2.4 Quicksort
2.5 Heapsort
2.6 Data selection
2.7 Lower complexity bound for sorting
2.8 Notes
3 Efficiency of Searching
3.1 The problem of searching
3.2 Sorted lists and binary search
3.3 Binary search trees
3.4 Self-balancing binary and multiway search trees
3.5 Hash tables
3.6 Notes
Bibliography
Index
Introduction to the Fourth Edition
The fourth edition follows the third edition, incorporating fixes for the errata
discovered in the third edition.
Michael J. Dinneen
Georgy Gimel’farb
Mark C. Wilson
February 2016
(Minor revisions December 2017)
Introduction to the Third Edition
The focus for this third edition has been to make an electronic version that students
can read on the tablets and laptops that they bring to lectures. The main changes
from the second edition are:
Michael J. Dinneen
Georgy Gimel’farb
Mark C. Wilson
March 2013
Introduction to the Second Edition
Writing a second edition is a thankless task, as is well known to authors. Much of the
time is spent on small improvements that are not obvious to readers. We have taken
considerable efforts to correct a large number of errors found in the first edition, and
to improve explanation and presentation throughout the book, while retaining the
philosophy behind the original. As far as material goes, the main changes are:
• more exercises and solutions to many of them;
• a new section on maximum matching (Section 5.9);
• a new section on string searching (Part III);
• a Java graph library updated to Java 1.6 and freely available for download.
The web site http://www.cs.auckland.ac.nz/textbookCS220/ for the book pro-
vides additional material including source code. Readers finding errors are encour-
aged to contact us after viewing the errata page at this web site.
In addition to the acknowledgments in the first edition, we thank Sonny Datt for
help with updating the Java graph library, Andrew Hay for help with exercise solu-
tions and Cris Calude for comments. Rob Randtoul (PlasmaDesign.co.uk) kindly
allowed us to use his cube artwork for the book’s cover. Finally, we thank
MJD all students who have struggled to learn from the first edition and have given
us feedback, either positive or negative;
GLG my wife Natasha and all the family for their permanent help and support;
MCW my wife Golbon and sons Yusef and Yahya, for their sacrifices during the writ-
ing of this book, and the joy they bring to my life even in the toughest times.
31 October 2008
Introduction to the First Edition
This book is an expanded, and, we hope, improved version of the coursebook for
the course COMPSCI 220 which we have taught several times in recent years at the
University of Auckland.
We have taken the step of producing this book because there is no single text
available that covers the syllabus of the above course at the level required. Indeed,
we are not aware of any other book that covers all the topics presented here. Our
aim has been to produce a book that is straightforward, concise, and inexpensive,
and suitable for self-study (although a teacher will definitely add value, particularly
where the exercises are concerned). It is an introduction to some key areas at the
theoretical end of computer science, which nevertheless have many practical appli-
cations and are an essential part of any computer science student’s education.
The material in the book is all rather standard. The novelty is in the combina-
tion of topics and some of the presentation. Part I deals with the basics of algorithm
analysis, tools that predict the performance of programs without wasting time im-
plementing them. Part II covers many of the standard fast graph algorithms that
have applications in many different areas of computer science and science in gen-
eral. Part III introduces the theory of formal languages, shifting the focus from what
can be computed quickly to what families of strings can be recognized easily by a
particular type of machine.
The book is designed to be read cover-to-cover. In particular Part I should come
first. However, one can read Part III before Part II with little chance of confusion.
To make best use of the book, one must do the exercises. They vary in difficulty
from routine to tricky. No solutions are provided. This policy may be changed in a
later edition.
The prerequisites for this book are similar to those of the above course, namely
two semesters of programming in a structured language such as Java (currently used
at Auckland). The book contains several appendices which may fill in any gaps in
the reader’s background.
A limited bibliography is given. There are so many texts covering some of the
topics here that to list all of them is pointless. Since we are not claiming novelty
of material, references to research literature are mostly unnecessary and we have
omitted them. More advanced books (some listed in our bibliography) can provide
more references as a student’s knowledge increases.
A few explanatory notes to the reader about this textbook are in order.
We describe algorithms using a pseudocode similar to, but not exactly like, many
structured languages such as Java or C++. Loops and control structures are indented
in fairly traditional fashion. We do not formally define our pseudocode or comment
style (this might make an interesting exercise for a reader who has mastered Part III).
We make considerable use of the idea of ADT (abstract data type). An abstract
data type is a mathematically specified collection of objects together with opera-
tions that can be performed on them, subject to certain rules. An ADT is completely
independent of any computer programming implementation and is a mathematical
structure similar to those studied in pure mathematics. Examples in this book in-
clude digraphs and graphs, along with queues, priority queues, stacks, and lists. A
data structure is simply a higher level entity composed of the elementary memory
addresses related in some way. Examples include arrays, arrays of arrays (matrices),
linked lists, doubly linked lists, etc.
The difference between a data structure and an abstract data type is exemplified
by the difference between a standard linear array and what we call a list. An array is
a basic data structure common to most programming languages, consisting of con-
tiguous memory addresses. To find an element in an array, or insert an element, or
delete an element, we directly use the address of the element. There are no secrets
in an array. By contrast, a list is an ADT. A list is specified by a set S of elements
from some universal set U, together with operations insert, delete, size, isEmpty
and so on (the exact definition depends on who is doing the defining). We denote
the result of the operation as S.isEmpty(), for example. The operations must sat-
isfy certain rules, for example: S.isEmpty() returns a boolean value TRUE or FALSE;
S.insert(x, r) requires that x belong to U and r be an integer between 0 and S.size(),
and returns a list; for any admissible x and r we have S.isEmpty(S.insert(x, r)) =
FALSE, etc. We are not interested in how the operations are to be carried out, only
in what they do. Readers familiar with languages that facilitate object-based and
object-oriented programming will recognize ADTs as, essentially, what are called
classes in Java or C++.
A list can be implemented using an array (to be more efficient, we would also
have an extra integer variable recording the array size). The insert operation, for ex-
ample, can be achieved by accessing the correct memory address of the r-th element
of the array, allocating more space at the end of the array, shifting along some ele-
ments by one, and assigning the element to be inserted to the address vacated by the
shifting. We would also update the size variable by 1. These details are unimportant
in many programming applications. However they are somewhat important when
discussing complexity as we do in Part I. While ADTs allow us to concentrate on algo-
rithms without worrying about details of programming implementation, we cannot
ignore data structures forever, simply because some implementations of ADT oper-
ations are more efficient than others.
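To make this concrete, the following is a minimal Java sketch of such a list ADT together with an array-based implementation of insert and delete; the interface and class names are our own illustrations and are not taken from the book's Java library.

interface ListADT<E> {
    boolean isEmpty();
    int size();
    void insert(E x, int r);   // insert x at position r, where 0 <= r <= size()
    E delete(int r);           // remove and return the element at position r
}

class ArrayBackedList<E> implements ListADT<E> {     // hypothetical class name
    private Object[] data = new Object[8];
    private int n = 0;                               // the extra size variable

    public boolean isEmpty() { return n == 0; }
    public int size()        { return n; }

    public void insert(E x, int r) {
        if (r < 0 || r > n) throw new IndexOutOfBoundsException();
        if (n == data.length)                        // allocate more space if full
            data = java.util.Arrays.copyOf(data, 2 * data.length);
        for (int i = n; i > r; i--)                  // shift elements one place right
            data[i] = data[i - 1];
        data[r] = x;                                 // place x in the vacated address
        n++;                                         // update the size variable
    }

    @SuppressWarnings("unchecked")
    public E delete(int r) {
        if (r < 0 || r >= n) throw new IndexOutOfBoundsException();
        E removed = (E) data[r];
        for (int i = r; i < n - 1; i++)              // shift elements one place left
            data[i] = data[i + 1];
        data[--n] = null;
        return removed;
    }
}

The insert method carries out exactly the shifting and size update described above; a linked-list implementation could satisfy the same interface with different costs per operation.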
In summary, we use ADTs to sweep programming details under the carpet as long
as we can, but we must face them eventually.
A book of this type, written by three authors with different writing styles under
some time pressure, will inevitably contain mistakes. We have been helped to mini-
mize the number of errors by the student participants in the COMPSCI 220 course-
book error-finding competition, and our colleagues Joshua Arulanandham and An-
dre Nies, to whom we are very grateful.
Our presentation has benefitted from the input of our colleagues who have taught
COMPSCI 220 in the recent and past years, with special acknowledgement due to
John Hamer and the late Michael Lennon.
10 February 2004
Part I
Definition 1.2 (informal). The running time (or computing time) of an algorithm is
the number of its elementary operations.
Example 1.4 (Sums of subarrays). The problem is to compute, for each subarray
a[j.. j + m − 1] of size m in an array a of size n, the partial sum of its elements
s[j] = ∑_{k=0}^{m−1} a[j + k]; j = 0, . . . , n − m. The total number of these subarrays is n − m + 1. At first
glance, we need to compute n − m + 1 sums, each of m items, so that the running time
is proportional to m(n − m + 1). If m is fixed, the time still depends linearly on n.
But if m grows with n as a fraction of n, such as m = n/2, then T(n) = c (n/2) (n/2 + 1)
= 0.25cn^2 + 0.5cn. The relative weight of the linear part, 0.5cn, decreases quickly with
respect to the quadratic one as n increases. For example, if T(n) = 0.25n^2 + 0.5n, we
see in the last column of Table 1.1 the rapid decrease of the ratio of the two terms.
Table 1.1: Relative growth of linear and quadratic terms in an expression.
Thus, for large n only the quadratic term becomes important and the running
time is roughly proportional to n^2, or quadratic in n. Such algorithms are some-
times called quadratic algorithms in terms of relative changes of running time with
respect to changes of the data size: if T(n) ≈ cn^2 then T(10) ≈ 100T(1), T(100) ≈
10000T(1), and T(100) ≈ 100T(10).
algorithm slowSums
Input: array a[0..2m − 1]
begin
array s[0..m]
for i ← 0 to m do
s[i] ← 0
for j ← 0 to m − 1 do
s[i] ← s[i] + a[i + j]
end for
end for
return s
end
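A direct Java translation of this pseudocode might look as follows; this is only a sketch, with the method name chosen by us and the pseudocode's convention that the input array has length 2m.

// Brute-force computation of the m+1 sums of all contiguous subarrays of size m
// in an array a of length n = 2m; the nested loops take time proportional to m(m+1).
static long[] slowSums(int[] a) {
    int m = a.length / 2;
    long[] s = new long[m + 1];
    for (int i = 0; i <= m; i++) {
        s[i] = 0;
        for (int j = 0; j < m; j++)
            s[i] += a[i + j];
    }
    return s;
}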
The “brute-force” quadratic algorithm has two nested loops (see Figure 1.2). Let
us analyse it to find out whether it can be simplified. It is easily seen that repeated
computations in the innermost loop are unnecessary. Two successive sums s[i] and
s[i − 1] differ only by two elements: s[i] = s[i − 1] + a[i + m − 1] − a[i − 1]. Thus we need
not repeatedly add m items together after getting the very first sum s[0]. Each next
sum is formed from the current one by using only two elementary operations (ad-
dition and subtraction). Thus T (n) = c(m + 2(n − m)) = c(2n − m). In the first paren-
theses, the first term m relates to computing the first sum s[0], and the second term
2(n − m) reflects that n − m other sums are computed with only two operations per
sum. Therefore, the running time for this better organized computation is always
linear in n for each value m, either fixed or growing with n. The time for comput-
ing all the sums of the contiguous subsequences is less than twice that taken for the
single sum of all n items in Example 1.3
The linear algorithm in Figure 1.3 excludes the innermost loop of the quadratic
algorithm. Now two simple loops, doing m and 2(n − m) elementary operations, re-
spectively, replace the previous nested loop performing m(n − m + 1) operations.
algorithm fastSums
Input: array a[0..2m − 1]
begin
array s[0..m]
s[0] ← 0
for j ← 0 to m − 1 do
s[0] ← s[0] + a[ j]
end for
for i ← 1 to m do
s[i] ← s[i − 1] + a[i + m − 1] − a[i − 1]
end for
return s;
end
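The linear-time version translates to Java in the same way (again only a sketch under the same assumption that the input array has length 2m).

// Linear-time computation of the same sums: after the first sum, each further sum
// is obtained from the previous one by one addition and one subtraction.
static long[] fastSums(int[] a) {
    int m = a.length / 2;
    long[] s = new long[m + 1];
    s[0] = 0;
    for (int j = 0; j < m; j++)        // first sum: add the first m elements
        s[0] += a[j];
    for (int i = 1; i <= m; i++)       // remaining sums: two operations each
        s[i] = s[i - 1] + a[i + m - 1] - a[i - 1];
    return s;
}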
Such an outcome is typical for algorithm analysis. In many cases, a careful analy-
sis of the problem allows us to replace a straightforward “brute-force” solution with
much more effective one. But there are no “standard” ways to reach this goal. To ex-
clude unnecessary computation, we have to perform a thorough investigation of the
problem and find hidden relationships between the input data and desired outputs.
In so doing, we should exploit all the tools we have learnt. This book presents many
examples where analysis tools are indeed useful, but knowing how to analyse and
solve each particular problem is still close to an art. The more examples and tools
are mastered, the more the art is learnt.
Exercises
Exercise 1.1.1. A quadratic algorithm with processing time T(n) = cn^2 uses 500 ele-
mentary operations for processing 10 data items. How many will it use for processing
1000 data items?
Exercise 1.1.2. Algorithms A and B use exactly T_A(n) = c_A n lg n and T_B(n) = c_B n^2 ele-
mentary operations, respectively, for a problem of size n. Find the fastest algorithm
for processing n = 2^20 data items if A and B spend 10 and 1 operations, respectively,
to process 2^10 ≡ 1024 items.
Additional conditions for executing inner loops only for special values of the
outer variables also decrease running time.
Example 1.6. Let us roughly estimate the running time of the following nested loops:
m←2
for j ← 1 to n do
if j = m then
m ← 2m
for i ← 1 to n do
. . . constant number of elementary operations
end for
end if
end for
m←1
for j ← 1 step j ← j + 1 until n do
if j = m then m ← m · (n − 1)
for i ← 0 step i ← i + 1 until n − 1 do
. . . constant number of elementary operations
end for
end if
end for
Exercise 1.2.2. What is the running time for the following code fragment as a func-
tion of n?
for i ← 1 step i ← 2 ∗ i while i < n do
for j ← 1 step j ← 2 ∗ j while j < n do
if j = 2 ∗ i then
for k ← 0 step k ← k + 1 while k < n do
. . . constant number of elementary operations
end for
else
for k ← 1 step k ← 3 ∗ k while k < n do
. . . constant number of elementary operations
end for
end if
end for
end for
• The input data size, or the number n of individual data items in a single data
instance to be processed when solving a given problem. Obviously, how to
measure the data size depends on the problem: n means the number of items
to sort (in sorting applications), number of nodes (vertices) or arcs (edges) in
graph algorithms, number of picture elements (pixels) in image processing,
length of a character string in text processing, and so on.
The running time of a program which implements the algorithm is c f (n) where
c is a constant factor depending on a computer, language, operating system, and
compiler. Even if we don't know the value of the factor c, we are able to answer
the important question: if the input size increases from n = n_1 to n = n_2, how does
the relative running time of the program change, all other things being equal? The
answer is obvious: the running time increases by the factor
T(n_2)/T(n_1) = (c f(n_2))/(c f(n_1)) = f(n_2)/f(n_1).
As we have already seen, the approximate running time for large input sizes gives
enough information to distinguish between a good and a bad algorithm. Also, the
constant c above can rarely be determined. We need some mathematical notation to
avoid having to say “of the order of . . .” or “roughly proportional to . . .”, and to make
this intuition precise.
The standard mathematical tools “Big Oh” (O), “Big Theta” (Θ), and “Big Omega”
(Ω) do precisely this.
Note. Actually, the above letter O is a capital “omicron” (all letters in this notation
are Greek letters). However, since the Greek omicron and the English “O” are indis-
tinguishable in most fonts, we read O() as “Big Oh” rather than “Big Omicron”.
The algorithms are analysed under the following assumption: if the running time
of an algorithm as a function of n differs only by a constant factor from the running
time for another algorithm, then the two algorithms have essentially the same time
complexity. Functions that measure running time, T (n), have nonnegative values
because time is nonnegative, T (n) ≥ 0. The integer argument n (data size) is also
nonnegative.
Definition 1.7 (Big Oh). Let f (n) and g(n) be nonnegative-valued functions defined
on nonnegative integers n. Then g(n) is O( f (n)) (read “g(n) is Big Oh of f (n)”) iff there
exists a positive real constant c and a positive integer n0 such that g(n) ≤ c f (n) for all
n > n0 .
Note. We use the notation “iff ” as an abbreviation of “if and only if”.
In other words, if g(n) is O( f (n)) then an algorithm with running time g(n) runs for
large n at least as fast, to within a constant factor, as an algorithm with running time
f (n). Usually the term “asymptotically” is used in this context to describe behaviour
of functions for sufficiently large values of n. This term means that g(n) for large n
may approach closer and closer to c · f (n). Thus, O( f (n)) specifies an asymptotic
upper bound.
Note. Sometimes the “Big Oh” property is denoted g(n) = O( f (n)), but we should not
assume that the function g(n) is equal to something called “Big Oh” of f (n). This
notation really means g(n) ∈ O( f (n)), that is, g(n) is a member of the set O( f (n)) of
functions which are increasing, in essence, with the same or lesser rate as n tends to
infinity (n → ∞). In terms of graphs of these functions, g(n) is O( f (n)) iff there exists
a constant c such that the graph of g(n) is always below or at the graph of c f (n) after
a certain point, n0 .
Example 1.8. The function g(n) = 100 log_10 n in Figure 1.4 is O(n) because the graph
of g(n) is always below the graph of f(n) = n if n > 238, or of f(n) = 0.3n if n > 1000, etc.
Figure 1.4: Graphs of g(n) = 100 log_10 n, f(n) = n, and f(n) = 0.3n; g(n) stays below each f(n) beyond the corresponding crossover point n_0.
Definition 1.9 (Big Omega). The function g(n) is Ω( f (n)) iff there exists a positive
real constant c and a positive integer n0 such that g(n) ≥ c f (n) for all n > n0 .
“Big Omega” is complementary to “Big Oh” and generalises the concept of “lower
bound” (≥) in the same way as “Big Oh” generalises the concept of “upper bound”
(≤): if g(n) is O( f (n)) then f (n) is Ω(g(n)), and vice versa.
Definition 1.10 (Big Theta). The function g(n) is Θ( f (n)) iff there exist two positive
real constants c1 and c2 and a positive integer n0 such that c1 f (n) ≤ g(n) ≤ c2 f (n) for
all n > n0 .
Whenever two functions, f (n) and g(n), are actually of the same order, g(n) is
Θ( f (n)), they are each “Big Oh” of the other: f (n) is O(g(n)) and g(n) is O( f (n)). In
other words, f (n) is both an asymptotic upper and lower bound for g(n). The “Big
Theta” property means f (n) and g(n) have asymptotically tight bounds and are in
some sense equivalent for our purposes.
In line with the above definitions, g(n) is O( f (n)) iff g(n) grows at most as fast as
f (n) to within a constant factor, g(n) is Ω( f (n)) iff g(n) grows at least as fast as f (n) to
within a constant factor, and g(n) is Θ( f (n)) iff g(n) and f (n) grow at the same rate to
within a constant factor.
“Big Oh”, “Big Theta”, and “Big Omega” notation formally capture two crucial
ideas in comparing algorithms: the exact function, g, is not very important because
it can be multiplied by any arbitrary positive constant, c, and the relative behaviour
of two functions is compared only asymptotically, for large n, but not near the origin
where it may make no sense. Of course, if the constants involved are very large, the
asymptotic behaviour loses practical interest. In most cases, however, the constants
remain fairly small.
In analysing running time, “Big Oh” g(n) ∈ O( f (n)), “Big Omega” g(n) ∈ Ω( f (n)),
and “Big Theta” g(n) ∈ Θ( f (n)) definitions are mostly used with g(n) equal to “exact”
running time on inputs of size n and f (n) equal to a rough approximation to running
time (like log n, n, n^2, and so on).
To prove that some function g(n) is O( f (n)), Ω( f (n)), or Θ( f (n)) using the defi-
nitions we need to find the constants c, n0 or c1 , c2 , n0 specified in Definitions 1.7,
1.9, 1.10. Sometimes the proof is given only by a chain of inequalities, starting with
f (n). In other cases it may involve more intricate techniques, such as mathemati-
cal induction. Usually the manipulations are quite simple. To prove that g(n) is not
O( f (n)), Ω( f (n)), or Θ( f (n)) we have to show the desired constants do not exist, that
is, their assumed existence leads to a contradiction.
Example 1.11. To prove that linear function g(n) = an + b; a > 0, is O(n), we form
the following chain of inequalities: g(n) ≤ an + |b| ≤ (a + |b|)n for all n ≥ 1. Thus,
Definition 1.7 with c = a + |b| and n0 = 1 shows that an + b is O(n).
“Big Oh” hides constant factors, so that both 10^{−10} n and 10^{10} n are O(n). It is point-
less to write something like O(2n) or O(an + b) because this still means O(n). Also,
only the dominant terms as n → ∞ need be shown as the argument of “Big Oh”, “Big
Omega”, or “Big Theta”.
Example 1.13. The exponential function g(n) = 2^{n+k}, where k is a constant, is O(2^n)
because 2^{n+k} = 2^k 2^n for all n. Generally, m^{n+k} is O(l^n) for l ≥ m > 1, because
m^{n+k} ≤ l^{n+k} = l^k l^n for any constant k.
Example 1.14. For each m > 1, the logarithmic function g(n) = log_m(n) has the same
rate of increase as lg(n) because log_m(n) = log_m(2) lg(n) for all n > 0. Therefore we may
omit the logarithm base when using the “Big Oh” and “Big Theta” notation: log_m n is
Θ(log n).
Constant factors are ignored, and only the powers and functions are taken into
account. It is this ignoring of constant factors that motivates such a notation.
Lemma 1.19 (Limit Rule). Suppose lim_{n→∞} f(n)/g(n) exists (it may be ∞), and denote
this limit by L. Then
• if L = 0, then f(n) is O(g(n)) but f(n) is not Ω(g(n));
• if 0 < L < ∞, then f(n) is Θ(g(n));
• if L = ∞, then f(n) is Ω(g(n)) but f(n) is not O(g(n)).
Proof. If L = 0 then from the definition of limit, in particular there is some n0 such
that f (n)/g(n) ≤ 1 for all n ≥ n0 . Thus f (n) ≤ g(n) for all such n, and f (n) is O(g(n)) by
definition. On the other hand, for each c > 0, it is not the case that f (n) ≥ cg(n) for
all n past some threshold value n1 , so that f (n) is not Ω(g(n)). The other two parts are
proved in the analogous way.
To compute the limit if it exists, the standard L’Hôpital’s rule of calculus is useful
(see Section D.5).
More specific relations follow directly from the basic ones.
Example 1.20. Higher powers of n grow more quickly than lower powers: n^k is O(n^l)
if 0 ≤ k ≤ l. This follows directly from the limit rule, since n^k/n^l = n^{k−l} has limit 1 if
k = l and 0 if k < l.
Example 1.21. The growth rate of a polynomial is given by the growth rate of its
leading term (ignoring the leading coefficient, by the scaling feature): if P_k(n) is a
polynomial of exact degree k, then P_k(n) is Θ(n^k). This follows easily from the limit
rule as in the preceding example.
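For instance, with the illustrative polynomial P(n) = 3n^3 + 5n − 7 (an example of our own choosing), lim_{n→∞} P(n)/n^3 = lim_{n→∞} (3 + 5/n^2 − 7/n^3) = 3, a finite nonzero limit, so P(n) is Θ(n^3) by the limit rule.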
Example 1.22. Exponential functions grow more quickly than powers: n^k is O(b^n)
for all b > 1, n > 1, and k ≥ 0. The restrictions on b, n, and k merely ensure that
both functions are increasing. This result can be proved by induction or by using the
limit-L'Hôpital approach above.
Example 1.23. Logarithmic functions grow more slowly than powers: log_b n is O(n^k)
for all b > 1, k > 0. This is the inverse of the preceding feature. Thus, as a result, log n
is O(n) and n log n is O(n^2).
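As a sketch of the underlying limit computation, using L'Hôpital's rule from Section D.5: lim_{n→∞} (log_b n)/n^k = lim_{n→∞} (1/(n ln b))/(k n^{k−1}) = lim_{n→∞} 1/(k n^k ln b) = 0, so log_b n is O(n^k) but not Ω(n^k).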
Exercises
Exercise 1.3.1. Prove that 10n^3 − 5n + 15 is not O(n^2).
Exercise 1.3.4. Prove that f(n) is Θ(g(n)) if and only if both f(n) is O(g(n)) and f(n) is
Ω(g(n)).
Exercise 1.3.5. Using the definition, show that each function f(n) in Table 1.3 stands
in “Big Oh” relation to the preceding one, that is, n is O(n log n), n log n is O(n^{1.5}), and
so forth.
Exercise 1.3.7. Decide on how to reformulate the Rule of Sums (Lemma 1.17) for
“Big Omega” and “Big Theta” notation.
Exercise 1.3.8. Reformulate and prove Lemmas 1.15–1.18 for “Big Omega” notation.
An algorithm is called polynomial time if its running time T(n) is O(n^k), where k is
some fixed positive integer. A computational problem is considered intractable iff
no deterministic algorithm with polynomial time complexity exists for it. But many
problems are classed as intractable only because a polynomial solution is unknown,
and it is a very challenging task to find such a solution for one of them.
Table 1.2: Relative growth of running time T (n) when the input size increases from n = 8 to
n = 1024 provided that T (8) = 1.
Table 1.3: The largest data sizes n that can be processed by an algorithm with time complexity
f (n) provided that T (10) = 1 minute.
Table 1.3 is even more expressive in showing how the time complexity of an algo-
rithm affects the size of problems the algorithm can solve (we again write lg for log_2). A
linear algorithm solving a problem of size n = 10 in exactly one minute will process
about 5.26 million data items per year and 10 times more if we can wait a decade. But
an exponential algorithm with T(10) = 1 minute will deal only with 29 data items af-
ter a year of running and add only 3 more items after a decade. Suppose we have
computers 10,000 times faster (this is approximately the ratio of a week to a minute).
Then we can solve a problem 10,000 times, 100 times, or 21.5 times larger than before
if our algorithm is linear, quadratic, or cubic, respectively. But for exponential algo-
rithms, our progress is much worse: we can add only 13 more input values if T(n) is
Θ(2^n).
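These factors can be checked directly. If a machine performs s = 10,000 times more elementary operations in the same time, the largest solvable size grows from n to n′, where c n′ = s c n for a linear algorithm (n′ = 10,000n), c n′^2 = s c n^2 for a quadratic one (n′ = √s · n = 100n), c n′^3 = s c n^3 for a cubic one (n′ = s^{1/3} n ≈ 21.5n), and c 2^{n′} = s c 2^n for an exponential one (n′ = n + lg s ≈ n + 13.3).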
Therefore, if our algorithm has a constant, logarithmic, log-square, linear, or even
“n log n” time complexity we may be happy and start writing a program with no doubt
that it will meet at least some practical demands. Of course, before taking the plunge,
it is better to check whether the hidden constant c, giving the computation volume
per data item, is sufficiently small in our case. Unfortunately, order relations can be
drastically misleading: for instance, two linear functions 10^{−4} n and 10^{10} n are of the
same order O(n), but we should not claim an algorithm with the latter time complex-
ity as a big success.
Therefore, we should follow a simple rule: roughly estimate the computation vol-
ume per data item for the algorithms after comparing their time complexities in a
“Big-Oh” sense! We may estimate the computation volume simply by counting the
number of elementary operations per data item.
In any case we should be very careful even with simple quadratic or cubic algo-
rithms, and especially with exponential algorithms. If the running time is speeded
up in Table 1.3 so that it takes one second per ten data items in all the cases, then we
will still wait about 12 days (2^20 ≡ 1,048,576 seconds) for processing only 30 items by
the exponential algorithm. Estimate yourself whether it is practical to wait until 40
items are processed.
In practice, quadratic and cubic algorithms cannot be used if the input size ex-
ceeds tens of thousands or thousands of items, respectively, and exponential algo-
rithms should be avoided whenever possible unless we always have to process data
of very small size. Because even the most ingenious programming cannot make an
inefficient algorithm fast (we would merely change the value of the hidden constant
c slightly, but not the asymptotic order of the running time), it is better to spend more
time to search for efficient algorithms, even at the expense of a less elegant software
implementation, than to spend time writing a very elegant implementation of an
inefficient algorithm.
Worst-case and average-case performance
We have introduced asymptotic notation in order to measure the running time
of an algorithm. This is expressed in terms of elementary operations. “Big Oh”, “Big
Omega” and “Big Theta” notations allow us to state upper, lower and tight asymp-
totic bounds on running time that are independent of inputs and implementation
details. Thus we can classify algorithms by performance, and search for the “best”
algorithms for solving a particular problem.
However, we have so far neglected one important point. In general, the running
time varies not only according to the size of the input, but the input itself. The ex-
amples in Section 1.4 were unusual in that this was not the case. But later we shall
see many examples where it does occur. For example, some sorting algorithms take
almost no time if the input is already sorted in the desired order, but much longer if
it is not.
If we wish to compare two different algorithms for the same problem, it will be
very complicated to consider their performance on all possible inputs. We need a
simple measure of running time.
The two most common measures of an algorithm are the worst-case running
time, and the average-case running time.
The worst-case running time has several advantages. If we can show, for example,
that our algorithm runs in time O(n log n) no matter what input of size n we consider,
we can be confident that even if we have an “unlucky” input given to our program,
it will not fail to run fairly quickly. For so-called “mission-critical” applications this
is an essential requirement. In addition, an upper bound on the worst-case running
time is usually fairly easy to find.
The main drawback of the worst-case running time as a measure is that it may be
too pessimistic. The real running time might be much lower than an “upper bound”,
the input data causing the worst case may be unlikely to be met in practice, and the
constants c and n0 of the asymptotic notation are unknown and may not be small.
There are many algorithms for which it is difficult to specify the worst-case input.
But even if it is known, the inputs actually encountered in practice may lead to much
lower running times. We shall see later that the most widely used fast sorting algo-
rithm, quicksort, has worst-case quadratic running time, Θ(n^2), but its running time
for “random” inputs encountered in practice is Θ(n log n).
By contrast, the average-case running time is not as easy to define. The use of
the word “average” shows us that probability is involved. We need to specify a prob-
ability distribution on the inputs. Sometimes this is not too difficult. Often we can
assume that every input of size n is equally likely, and this makes the mathematical
analysis easier. But sometimes an assumption of this sort may not reflect the inputs
encountered in practice. Even if it does, the average-case analysis may be a rather
difficult mathematical challenge requiring intricate and detailed arguments. And of
course the worst-case complexity may be very bad even if the average case complex-
ity is good, so there may be considerable risk involved in using the algorithm.
Whichever measure we adopt for a given algorithm, our goal is to show that its
running time is Θ( f ) for some function f and there is no algorithm with running
time Θ(g) for any function g that grows more slowly than f when n → ∞. In this case
our algorithm is asymptotically optimal for the given problem.
Proving that no other algorithm can be asymptotically better than ours is usually
a difficult matter: we must carefully construct a formal mathematical model of a
computer and derive a lower bound on the complexity of every algorithm to solve
the given problem. In this book we will not pursue this topic much. If our analysis
does show that an upper bound for our algorithm matches the lower one for the
problem, then we need not try to invent a faster one.
Exercises
Exercise 1.4.1. Add columns to Table 1.3 corresponding to one century (10 decades)
and one millennium (10 centuries).
Exercise 1.4.2. Add rows to Table 1.2 for algorithms with time complexity f(n) =
lg lg n and f(n) = n^2 lg n.
Example 1.25 (Fibonacci numbers). These are defined by one of the most famous
recurrence relations: F(n) = F(n − 1) + F(n − 2); F(1) = 1, and F(2) = 1. The last two
equations are called the base of the recurrence or initial condition. The recurrence
relation uniquely defines the function F(n) at any number n because any particular
value of the function is easily obtained by generating all the preceding values until
the desired term is produced, for example, F(3) = F(2) + F(1) = 2; F(4) = F(3) +
F(2) = 3, and so forth. Unfortunately, to compute F(10000), we need to perform
9998 additions.
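The following Java sketch performs this bottom-up computation directly, using exactly n − 2 additions for F(n) when n ≥ 3; BigInteger is used because Fibonacci numbers quickly exceed the range of built-in integer types.

// Iterative computation of F(n) for n >= 1 by generating all preceding values.
static java.math.BigInteger fibonacci(int n) {
    java.math.BigInteger prev = java.math.BigInteger.ONE;    // F(1)
    java.math.BigInteger curr = java.math.BigInteger.ONE;    // F(2)
    for (int i = 3; i <= n; i++) {
        java.math.BigInteger next = prev.add(curr);          // F(i) = F(i-1) + F(i-2)
        prev = curr;
        curr = next;
    }
    return curr;
}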
Example 1.26. One more recurrence relation is T (n) = 2T (n − 1) + 1 with the base
condition T (0) = 0. Here, T (1) = 2 · 0 + 1 = 1, T (2) = 2 · 1 + 1 = 3, T (3) = 2 · 3 + 1 = 7,
T (4) = 2 · 7 + 1 = 15, and so on.
T(n) = 2^2 (2T(n − 3) + 1) + 2 + 1 = 2^3 T(n − 3) + 2^2 + 2 + 1
Step 3 substitute T(n − 3) = 2T(n − 4) + 1:
T(n) = 2^3 (2T(n − 4) + 1) + 2^2 + 2 + 1
     = 2^4 T(n − 4) + 2^3 + 2^2 + 2 + 1
Step . . .
Step n − 2 . . .
As shown in Figure 1.5, rather than successively substituting the terms T(n − 1),
T(n − 2), . . . , T(2), T(1), it is more convenient to write down a sequence of the scaled
relationships for T(n), 2T(n − 1), 2^2 T(n − 2), . . . , 2^{n−1} T(1), respectively, then sum the
left and right columns separately, and eliminate the common terms in both sums (the
terms are scaled to facilitate their direct elimination). Such a solution is called tele-
scoping because the recurrence unfolds like a telescopic tube.
Although telescoping is not a powerful technique, it returns the desired explicit
forms of most of the basic recurrences that we need in this book (see Examples 1.29–
1.32 below). But it is helpless in the case of the Fibonacci recurrence because after
proper scaling of terms and reducing similar terms in the left and right sums, tele-
scoping returns just the same initial recurrence.
Example 1.29. T (n) = T (n − 1) + n; T (0) = 1.
This relation arises when a recursive algorithm loops through the input to elimi-
nate one item and is easily solved by telescoping:
T (n) = T (n − 1) + n
T (n − 1) = T (n − 2) + (n − 1)
...
T (1) = T (0) + 1
Figure 1.5: Telescoping of the basic recurrence T(n) = 2T(n − 1) + 1 with base condition T(0) = 0: summing the scaled relationships T(n) = 2T(n − 1) + 1, 2T(n − 1) = 2^2 T(n − 2) + 2, . . . , 2^{n−1} T(1) = 2^n T(0) + 2^{n−1}, and cancelling the common left- and right-side terms, gives the explicit relationship T(n) = 2^n T(0) + 2^{n−1} + · · · + 4 + 2 + 1 = 2^n − 1.
By summing the left and right columns and eliminating the common terms, we obtain
T(n) = T(0) + 1 + 2 + · · · + (n − 1) + n = n(n + 1)/2 + 1, so that T(n) is Θ(n^2).
T(2^m) = T(2^{m−1}) + 1
T(2^{m−1}) = T(2^{m−2}) + 1
. . .
T(2^1) = T(2^0) + 1
T(2^m) = T(2^{m−1}) + n
T(2^{m−1}) = T(2^{m−2}) + n/2
T(2^{m−2}) = T(2^{m−3}) + n/4
. . .
T(2) = T(1) + 2
T(1) = T(0) + 1
T(2^m) = 2T(2^{m−1}) + 2^m
T(2^{m−1}) = 2T(2^{m−2}) + 2^{m−1}
. . .
T(2) = 2T(1) + 2

so that

T(2^m)/2^m = T(2^{m−1})/2^{m−1} + 1
T(2^{m−1})/2^{m−1} = T(2^{m−2})/2^{m−2} + 1
. . .
T(2)/2 = T(1)/1 + 1.
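Summing each column of these telescoped relationships and cancelling the common terms gives the corresponding explicit solutions (writing n = 2^m, so that m = lg n): T(2^m) = T(1) + m, that is, T(n) = T(1) + lg n, which is Θ(log n); T(2^m) = T(0) + (1 + 2 + 4 + · · · + n/2 + n) = T(0) + 2n − 1, which is Θ(n); and T(2^m)/2^m = T(1)/1 + m, that is, T(n) = n T(1) + n lg n, which is Θ(n log n).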
There exist very helpful parallels between the differentiation / integration in cal-
culus and recurrence analysis by telescoping.
• The difference equation T(n) − 2T(n − 1) = c, rewritten as T(n) − T(n − 1) = T(n − 1) + c,
resembles the differential equation dT(x)/dx = T(x). Telescoping of the difference
equation results in the formula T(n) = c(2^n − 1), whereas integration of the
differential equation produces the analogous exponential T(x) = c e^x.
• The difference equation T(n) − T(n − 1) = cn has the differential analogue dT(x)/dx =
cx, and both equations have similar solutions, T(n) = c n(n + 1)/2 and T(x) = (c/2) x^2,
respectively.
The parallels between difference and differential equations may help us in deriving
the desired closed-form solutions of complicated recurrences.
Exercise 1.5.1. Show that the solution in Example 1.31 is also in Ω(n) for general n.
Exercise 1.5.2. Show that the solution T (n) to Example 1.32 is no more than n lg n +
n − 1 for every n ≥ 1. Hint: try induction on n.
Exercise 1.5.4. The running time T(n) of a slightly different algorithm is given by the
recurrence T(n) = kT(n/k) + ckn; T(1) = 0. Derive the explicit expression for T(n) in
terms of c, n, and k under the same assumption n = k^m, and find the time complexity of
this algorithm in the “Big Oh” sense.
Large constants have to be taken into account when an algorithm is very com-
plex, or when we must discriminate between cheap or expensive access to input
data items, or when there may be lack of sufficient memory for storing large data
sets, etc. But even when constants and lower-order terms are considered, the per-
formance predicted by our analysis may differ from the empirical results. Recall that
for very large inputs, even the asymptotic analysis may break down, because some
operations (like addition of large numbers) can no longer be considered as elemen-
tary.
In order to analyse algorithm performance we have used a simplified mathemat-
ical model involving elementary operations. In the past, this allowed for fairly accu-
rate analysis of the actual running time of program implementing a given algorithm.
Unfortunately, the situation has become more complicated in recent years. Sophis-
ticated behaviour of computer hardware such as pipelining and caching means that
the time for elementary operations can vary wildly, making these models less useful
for detailed prediction. Nevertheless, the basic distinction between linear, quadratic,
cubic and exponential time is still as relevant as ever. In other words, the crude
differences captured by the Big-Oh notation give us a very good way of comparing
algorithms; comparing two linear time algorithms, for example, will require more
experimentation.
We can use worst-case and average-case analysis to obtain some meaningful es-
timates of possible algorithm performance. But we must remember that both re-
currences and asymptotic “Big-Oh”, “Big-Omega”, and “Big-Theta” notation are just
mathematical tools used to model certain aspects of algorithms. Like all models,
they are not universally valid and so the mathematical model and the real algorithm
may behave quite differently.
Exercises
Exercise 1.6.1. Algorithms A and B use T_A(n) = 5n log_10 n and T_B(n) = 40n elementary
operations, respectively, for a problem of size n. Which algorithm has better per-
formance in the “Big Oh” sense? Work out exact conditions when each algorithm
outperforms the other.
Exercise 1.6.2. We have to choose one of two algorithms, A and B, to process a
database containing 10^9 records. The average running time of the algorithms is
T_A(n) = 0.001n and T_B(n) = 500√n, respectively. Which algorithm should be used,
assuming the application is such that we can tolerate the risk of an occasional long
running time?
1.7 Notes
The word algorithm relates to the surname of the great mathematician Muham-
mad ibn Musa al-Khwarizmi, whose life spanned approximately the period 780–850.
His works, translated from Arabic into Latin, for the first time exposed Europeans
to new mathematical ideas such as the Hindu positional decimal notation and step-
by-step rules for addition, subtraction, multiplication, and division of decimal num-
bers. The translation converted his surname into “Algorismus”, and the computa-
tional rules took on this name. Of course, mathematical algorithms existed well
before the term itself. For instance, Euclid’s algorithm for computing the greatest
common divisor of two positive integers was devised over 1000 years before.
The Big-Oh notation was used as long ago as 1894 by Paul Bachmann and then
Edmund Landau for use in number theory. However the other asymptotic notations
Big-Omega and Big-Theta were introduced in 1976 by Donald Knuth (at time of writ-
ing, perhaps the world’s greatest living computer scientist).
Algorithms running in Θ(n log n) time are sometimes called linearithmic, to match
“logarithmic”, “linear”, “quadratic”, etc.
The quadratic equation for φ in Example 1.28 is called the characteristic equa-
tion of the recurrence. A similar technique can be used for solving any constant-
coefficient linear recurrence of the form F(n) = ∑_{k=1}^{K} a_k F(n − k), where K is a fixed
positive integer and the a_k are constants.
Chapter 2
Efficiency of Sorting
Sorting rearranges input data according to a particular linear order (see Section D.3
for definitions of order and ordering relations). The most common examples are the
usual dictionary (lexicographic) order on strings, and the usual order on integers.
Once data is sorted, many other problems become easier to solve. Some of these
include: finding an item, finding whether any duplicate items exist, finding the fre-
quency of each distinct item, finding order statistics such as the maximum, mini-
mum, median and quartiles. There are many other interesting applications of sort-
ing, and many different sorting algorithms, each with their own strengths and weak-
nesses. In this chapter we describe and analyse some popular sorting algorithms.
In choosing a sorting algorithm for a particular application, several questions about
the data need to be considered, for example:
• are the items only related by the order relation, or do they have other restric-
tions (for example, are they all integers from the range 1 to 1000);
• can they be placed into an internal (fast) computer memory or must they be
sorted in external (slow) memory, such as on disk (so called external sorting ).
No one algorithm is the best for all possible situations, and so it is important to
understand the strengths and weaknesses of several algorithms.
As far as computer implementation is concerned, sorting makes sense only for
linear data structures. We will consider lists (see Section C.1 for a review of ba-
sic concepts) which have a first element (the head), a last element (the tail) and a
method of accessing the next element in constant time (an iterator). This includes
array-based lists, and singly- and doubly-linked lists. For some applications we will
need a method of accessing the previous element quickly; singly-linked lists do not
provide this. Also, array-based lists allow fast random access. The element at any
given position may be retrieved in constant time, whereas linked list structures do
not allow this.
Exercises
Exercise 2.1.1. The well-known and obvious selection sort algorithm proceeds as
follows. We split the input list into a head and tail sublist. The head (“sorted”) sublist
is initially empty, and the tail (“unsorted”) sublist is the whole list. The algorithm
successively scans through the tail sublist to find the minimum element and moves
it to the end of the head sublist. It terminates when the tail sublist becomes empty.
(Java code for an array implementation is found in Section A.1).
How many comparisons are required by selection sort in order to sort the input
list (6, 4, 2, 5, 3, 1) ?
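For reference, here is a Java sketch of the selection sort just described, written for arrays in the spirit of (but not copied from) the code in Section A.1.

// Selection sort: repeatedly find the minimum of the unsorted tail a[i..n-1]
// and swap it to position i, the end of the sorted head.
static void selectionSort(int[] a) {
    int n = a.length;
    for (int i = 0; i < n - 1; i++) {
        int min = i;
        for (int j = i + 1; j < n; j++)      // scan the tail for its minimum
            if (a[j] < a[min]) min = j;
        int tmp = a[i]; a[i] = a[min]; a[min] = tmp;
    }
}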
Exercise 2.1.2. Show that selection sort uses the same number of comparisons on
every input of a fixed size. How many does it use, exactly, for an input of size n?
Before each step i = 1, 2, . . . , n − 1, the sorted and unsorted parts have i and n −
i elements, respectively. The first element of the unsorted sublist is moved to the
correct position in the sorted sublist by exhaustive backward search, by comparing
it to each element in turn until the right place is reached.
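A Java sketch of this procedure for arrays (again in the spirit of, but not copied from, the implementation in Section A.1):

// Insertion sort: at step i, move a[i] backward past every larger element
// of the already sorted prefix a[0..i-1] and insert it in the vacated place.
static void insertionSort(int[] a) {
    for (int i = 1; i < a.length; i++) {
        int x = a[i];
        int j = i - 1;
        while (j >= 0 && a[j] > x) {        // exhaustive backward search
            a[j + 1] = a[j];                // shift the larger element right
            j--;
        }
        a[j + 1] = x;
    }
}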
Example 2.2. Table 2.1 shows the execution of insertion sort. Variables Ci and Mi
denote the number of comparisons and number of positions to move backward, re-
spectively, at the ith iteration. Elements in the sorted part are italicized, the currently
sorted element is underlined, and the element to sort next is boldfaced.
Table 2.1: Sample execution of insertion sort.
i Ci Mi Data to sort
25 8 2 91 70 50 20 31 15 65
1 1 1 8 25 2 91 70 50 20 31 15 65
2 2 2 2 8 25 91 70 50 20 31 15 65
3 1 0 2 8 25 91 70 50 20 31 15 65
4 2 1 2 8 25 70 91 50 20 31 15 65
5 3 2 2 8 25 50 70 91 20 31 15 65
6 5 4 2 8 20 25 50 70 91 31 15 65
7 4 3 2 8 20 25 31 50 70 91 15 65
8 7 6 2 8 15 20 25 31 50 70 91 65
9 3 2 2 8 15 20 25 31 50 65 70 91
Since the best case is so much better than the worst, we might hope that on aver-
age, for random input, insertion sort would perform well. Unfortunately, this is not
true.
Lemma 2.4. The average-case time complexity of insertion sort is Θ(n^2).

Proof. We first calculate the average number Ci of comparisons at the ith step. At the
beginning of this step, i elements of the head sublist are already sorted and the next
element has to be inserted into the sorted part. This element will move backward j
steps, for some j with 0 ≤ j ≤ i. If 0 ≤ j ≤ i − 1, the number of comparisons used will
be j + 1. But if j = i (it ends up at the head of the list), there will be only i comparisons
(since no final comparison is needed).
Assuming all possible inputs are equally likely, the value of j will be equally likely
to take any value 0, . . . , i. Thus the expected number of comparisons will be
Ci = (1/(i + 1)) (1 + 2 + · · · + (i − 1) + i + i) = (1/(i + 1)) (i(i + 1)/2 + i) = i/2 + i/(i + 1).
The running time of insertion sort is strongly related to inversions. The number
of inversions of a list is one measure of how far it is from being sorted.
Definition 2.5. An inversion in a list a is an ordered pair of positions (i, j) such that
i < j but a[i] > a[ j].
Example 2.6. The list (3, 2, 5) has only one inversion corresponding to the pair (3, 2),
the list (5, 2, 3) has two inversions, namely, (5, 2) and (5, 3), the list (3, 2, 5, 1) has four
inversions (3, 2), (3, 1), (2, 1), and (5, 1), and so on.
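A straightforward Θ(n^2) Java sketch for counting inversions, convenient for checking small examples such as these (the method name is ours):

// Count the inversions of a: ordered pairs (i, j) with i < j and a[i] > a[j].
static long countInversions(int[] a) {
    long count = 0;
    for (int i = 0; i < a.length; i++)
        for (int j = i + 1; j < a.length; j++)
            if (a[i] > a[j]) count++;
    return count;
}

For the list (3, 2, 5, 1) it returns 4, in agreement with Example 2.6.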
Example 2.7. Table 2.2 shows the number of inversions, Ii , for each element a[i] of
the list in Table 2.1 with respect to all preceding elements a[0], . . . , a[i − 1] (Ci and Mi
are the same as in Table 2.1).
Note that Ii = Mi in Table 2.1. This is not merely a coincidence—it is always true.
See Exercise 2.2.4.
The total number of inversions I = ∑_{i=1}^{n−1} Ii in a list to be sorted by insertion sort
is equal to the total number of positions an element moves backward during the
sort. The total number of data comparisons C = ∑_{i=1}^{n−1} Ci is also equal to the total
number of inversions plus at most n − 1. For the initial list in Tables 2.1 and 2.2,
I = 21 and C = 28 = I + 7.
Table 2.2: Number of inversions Ii , comparisons Ci and data moves Mi for each element a[i] in
sample list.
Index i             0   1   2   3   4   5   6   7   8   9
List element a[i]  25   8   2  91  70  50  20  31  15  65
Ii                  –   1   2   0   1   2   4   3   6   2
Ci                  –   1   2   1   2   3   5   4   7   3
Mi                  –   1   2   0   1   2   4   3   6   2
Exercise 2.2.3. Prove that the worst-case time complexity of insertion sort is Θ(n^2)
and the best case is Θ(n).
Exercise 2.2.4. Prove that the number of inversions, Ii , of an element a[i] with respect
to the preceding elements, a[0], . . . , a[i − 1], in the initial list is equal to the number of
positions moved backward by a[i] in the execution of insertion sort.
Exercise 2.2.5. Suppose a sorting algorithm swaps elements a[i] and a[i + gap] of a
list a which were originally out of order. Prove that the number of inversions in the
list is reduced by at least 1 and at most 2 gap − 1.
Exercise 2.2.6. Bubble sort works as follows to sort an array. There is a sorted left
subarray and unsorted right subarray; the left subarray is initially empty. At each
iteration we step through the right subarray, comparing each pair of neighbours in
turn, and swapping them if they are out of order. At the end of each such pass, the
sorted subarray has increased in size by 1, and we repeat the entire procedure from
the beginning of the unsorted subarray. (Java code is found in Section A.1.)
Prove that the average time complexity of bubble sort is Θ(n^2), and that bubble
sort never makes fewer comparisons than insertion sort.
2.3 Mergesort
This algorithm exploits a recursive divide-and-conquer approach resulting in a
worst-case running time of Θ(n log n), the best asymptotic behaviour that we have
seen so far. Its best, worst, and average cases are very similar, making it a very good
choice if predictable runtime is important. Versions of mergesort are particularly
good for sorting data with slow access times, such as data that cannot be held in
internal memory or are stored in linked lists.
Mergesort is based on the following basic idea.
• If the list is of size 0 or 1, it is already sorted and nothing needs to be done.
• Otherwise, separate the list into two lists of equal or nearly equal size and re-
cursively sort the first and second halves separately.
• Finally, merge the two sorted halves into one sorted list.
Clearly, almost all the work is in the merge step, which we should make as effi-
cient as possible. Obviously any merge must take at least time that is linear in the
total size of the two lists in the worst case, since every element must be looked at in
order to determine the correct ordering. We can in fact achieve a linear time merge,
as we see in the next section.
Analysis of mergesort
Lemma 2.8. Mergesort is correct.
Proof. We use induction on the size n of the list. If n = 0 or 1, the result is obviously
correct. Otherwise, mergesort calls itself recursively on two sublists each of which
has size less than n. By induction, these lists are correctly sorted. Provided that the
merge step is correct, the top level call of mergesort then returns the correct answer.
Almost all the work occurs in the merge steps, so we need to perform those effi-
ciently.
Theorem 2.9. Two input sorted lists A and B of size nA and nB , respectively, can be
merged into an output sorted list C of size nC = nA + nB in linear time.
Proof. We first show that the number of comparisons needed is linear in n. Let i,
j, and k be pointers to current positions in the lists A, B, and C, respectively. Ini-
tially, the pointers are at the first positions, i = 0, j = 0, and k = 0. Each time the
smaller of the two elements A[i] and B[ j] is copied to the current entry C[k], and the
corresponding pointers k and either i or j are incremented by 1. After one of the
input lists is exhausted, the rest of the other list is directly copied to list C. Each
comparison advances the pointer k so that the maximum number of comparisons is
nA + nB − 1.
All other operations also take linear time.
The above proof can be visualized easily if we think of the lists as piles of playing
cards placed face up. At each step, we choose the smaller of the two top cards and
move it to the temporary pile.
Example 2.10. If a = (2, 8, 25, 70, 91) and b = (15, 20, 31, 50, 65), then merge into c =
(2, 8, 15, 20, 25, 31, 50, 65, 70, 91) as follows.
Step 1 a[0] = 2 and b[0] = 15 are compared, 2 < 15, and 2 is copied to c, that is, c[0] ← 2,
i ← 0 + 1, and k ← 0 + 1.
Step 2 a[1] = 8 and b[0] = 15 are compared to copy 8 to c, that is, c[1] ← 8, i ← 1 + 1,
and k ← 1 + 1.
Step 3 a[2] = 25 and b[0] = 15 are compared and 15 is copied to c so that c[2] ← 15,
j ← 0 + 1, and k ← 2 + 1.
Step 4 a[2] = 25 and b[1] = 20 are compared and 20 is copied to c: c[3] ← 20, j ← 1 + 1,
and k ← 3 + 1.
Step 5 a[2] = 25 and b[2] = 31 are compared, and 25 is copied to c: c[4] ← 25, i ← 2 + 1,
and k ← 4 + 1.
The process continues as follows: comparing a[3] = 70 and b[2] = 31, a[3] = 70 and
b[3] = 50, and a[3] = 70 and b[4] = 65 results in c[5] ← (b[2] = 31), c[6] ← (b[3] = 50),
and c[7] ← (b[4] = 65), respectively. Because the list b is exhausted, the rest of the list
a is then copied to c, c[8] ← (a[3] = 70) and c[9] ← (a[4] = 91).
We can now see that the running time of mergesort is much better asymptotically
than the naive algorithms that we have previously seen.
Theorem 2.11. The running time of mergesort on an input list of size n is Θ(n log n)
in the best, worst, and average case.
algorithm mergeSort
Input: array a[0..n − 1]; array indices l, r; array t[0..n − 1]
sorts the subarray a[l..r]
begin
if l < r then
m ← ⌊(l + r)/2⌋
mergeSort(a, l, m,t)
mergeSort(a, m + 1, r,t)
merge(a, l, m + 1, r,t)
end if
end
It is easy to see that the recursive version simply divides the list until it reaches
lists of size 1, then merges these repeatedly. We can eliminate the recursion in a
straightforward manner. We first merge lists of size 1 into lists of size 2, then lists of
size 2 into lists of size 4, and so on. This is often called straight mergesort .
Example 2.12. Starting with the input list (1, 5, 7, 3, 6, 4, 2) we merge repeatedly; the
merged sublists are shown with parentheses:
(1, 5) (3, 7) (4, 6) (2)
(1, 3, 5, 7) (2, 4, 6)
(1, 2, 3, 4, 5, 6, 7)
This method works particularly well for linked lists, because the merge steps can
be implemented simply by redefining pointers, without using the extra space re-
quired when using arrays (see Exercise 2.3.4).
Exercises
Exercise 2.3.1. What is the minimum number of comparisons needed when merg-
ing two nonempty sorted lists of total size n into a single list?
Exercise 2.3.2. Give two sorted lists of size 8 whose merging requires the maximum
number of comparisons.
algorithm merge
Input: array a[0..n − 1]; array indices l, r; array index s; array t[0..n − 1]
merges the two sorted subarrays a[l..s − 1] and a[s..r] into a[l..r]
begin
i ← l; j ← s; k ← l
while i ≤ s − 1 and j ≤ r do
if a[i] ≤ a[ j] then t[k] ← a[i]; k ← k + 1; i ← i + 1
else t[k] ← a[ j]; k ← k + 1; j ← j + 1
end if
end while
while i ≤ s − 1 do copy the rest of the first half
t[k] ← a[i]; k ← k + 1; i ← i + 1
end while
while j ≤ r do copy the rest of the second half
t[k] ← a[ j]; k ← k + 1; j ← j + 1
end while
return a ← t
end
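For comparison, here is a compact Java sketch corresponding to the two pseudocode routines above, with the same index conventions; the temporary array t is assumed to have the same length as a.

// Recursive mergesort of the subarray a[l..r], using t as temporary storage.
static void mergeSort(int[] a, int l, int r, int[] t) {
    if (l < r) {
        int m = (l + r) / 2;
        mergeSort(a, l, m, t);
        mergeSort(a, m + 1, r, t);
        merge(a, l, m + 1, r, t);
    }
}

// Merge the sorted subarrays a[l..s-1] and a[s..r] into a[l..r] in linear time.
static void merge(int[] a, int l, int s, int r, int[] t) {
    int i = l, j = s, k = l;
    while (i <= s - 1 && j <= r)
        t[k++] = (a[i] <= a[j]) ? a[i++] : a[j++];
    while (i <= s - 1) t[k++] = a[i++];       // copy the rest of the first half
    while (j <= r) t[k++] = a[j++];           // copy the rest of the second half
    for (k = l; k <= r; k++) a[k] = t[k];     // copy the merged run back into a
}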
Exercise 2.3.3. The 2-way merge in this section can be generalized easily to a k-
way merge for any positive integer k. The running time of such a merge is c(k − 1)n.
Assuming that the running time of insertion sort is cn^2 with the same scaling factor
c, analyse the asymptotic running time of the following sorting algorithm (you may
assume that n is an exact power of k).
Find the optimum value of k to get the fastest sort and compare its worst/average
case asymptotic running time with that of insertion sort and mergesort.
Exercise 2.3.4. Explain how to merge two sorted linked lists in linear time into a
bigger sorted linked list, using only a constant amount of extra space.
2.4 Quicksort
This algorithm is also based on the divide-and-conquer paradigm. Unlike merge-
sort, quicksort dynamically forms subarrays depending on the input, rather than
sorting and merging predetermined subarrays. Almost all the work of mergesort was
in the combining of solutions to subproblems, whereas with quicksort, almost all the
work is in the division into subproblems.
Quicksort is very fast in practice on “random” data and is widely used in software
libraries. Unfortunately it is not suitable for mission-critical applications, because it
has very bad worst case behaviour, and that behaviour can sometimes be triggered
more often than an analysis based on random input would suggest.
Basic quicksort is recursive and consists of the following four steps.
• If the list is empty or has only one item, return it unchanged.
• Otherwise, choose one of the items in the list as the pivot.
• Next, partition the remaining items into two disjoint sublists: the “head” sublist
of items not exceeding the pivot and the “tail” sublist of items not less than the pivot.
• Finally, return the result of quicksort of the “head” sublist, followed by the
pivot, followed by the result of quicksort of the “tail” sublist.
The first step takes into account that recursive dynamic partitioning may pro-
duce empty or single-item sublists. The choice of a pivot at the next step is most
critical because the wrong choice may lead to quadratic time complexity while a
good choice of pivot equalizes both sublists in size (and leads to “n log n” time com-
plexity). Note that we must specify in any implementation what to do with items
equal to the pivot. The third step is where the main work of the algorithm is done,
and obviously we need to specify exactly how to achieve the partitioning step (we do
this below). The final step involves two recursive calls to the same algorithm, with
smaller input.
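As an illustration only, here is a Java sketch of one simple implementation, using the naive rule of taking the first element as the pivot together with an in-place partition; many other variants are possible, as discussed below.

// Quicksort of a[l..r] with the first element as pivot (naive pivot rule).
static void quickSort(int[] a, int l, int r) {
    if (l >= r) return;                        // sublists of size 0 or 1 are sorted
    int pivot = a[l];
    int i = l;                                 // a[l+1..i] holds elements < pivot
    for (int j = l + 1; j <= r; j++)
        if (a[j] < pivot) {                    // items equal to the pivot stay in the tail
            i++;
            int tmp = a[i]; a[i] = a[j]; a[j] = tmp;
        }
    int tmp = a[l]; a[l] = a[i]; a[i] = tmp;   // move the pivot to its final position i
    quickSort(a, l, i - 1);                    // sort the "head" sublist
    quickSort(a, i + 1, r);                    // sort the "tail" sublist
}

With this pivot rule, an already sorted input produces the most unbalanced partitions and hence the quadratic behaviour analysed below.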
Analysis of quicksort
All analysis depends on assumptions about the pivot selection and partitioning
methods used. In particular, in order to partition a list about a pivot element as
described above, we must compare each element of the list to the pivot, so at least
n − 1 comparisons are required. This is the right order: it turns out that there are
several methods for partitioning that use Θ(n) comparisons (we shall see some of
them below).
Lemma 2.13. Quicksort is correct.

Proof. We use mathematical induction on the size of the list. If the size is 1, the al-
gorithm is clearly correct. Suppose then that n ≥ 2 and the algorithm works correctly
on lists of size smaller than n. Suppose that a is a list of size n, p is the pivot ele-
ment, and i is the position of p after partitioning. Due to the partitioning principle
of quicksort, all elements of the head sublist are no greater than p, and all elements
of the tail sublist are no smaller than p. By the induction hypothesis, the left and
right sublists are sorted correctly, so that the whole list a is sorted correctly.
Unlike mergesort, quicksort does not perform well in the worst case.
Lemma 2.14. The worst-case time complexity of quicksort is Ω(n²).
Proof. The worst case for the number of comparisons is when the pivot is always the
smallest or the largest element, one of two sublists is empty, and the second sublist
contains all the elements other than the pivot. Then quicksort is recursively called
only on this second group. We show that a quadratic number of comparisons are
needed; considering data moves or swaps only increases the running time.
Let T (n) denote the worst-case running time for sorting a list containing n ele-
ments. The partitioning step needs (at least) n − 1 comparisons. At each next step
for n ≥ 1, the number of comparisons is one less, so that T (n) satisfies the easy recur-
rence, T (n) = T (n − 1) + n − 1; T (0) = 0, similar to the basic one, T (n) = T (n − 1) + n,
in Example 1.29. This yields that T (n) is Ω(n²).
Lemma 2.15. The average-case time complexity of quicksort is Θ(n log n).
Proof. Let T (n) denote the average-case running time for sorting a list containing n
elements. In the first step, the time taken to compare all the elements with the pivot
is linear, cn. If i is the final position of the pivot, then two sublists of size i and n− 1− i,
respectively, are quicksorted, and in this particular case T (n) = T (i) + T (n − 1 − i) + cn.
Therefore, the average running time to sort n elements is equal to the partitioning
time plus the average time to sort i and n − 1 − i elements, where i varies from 0 to
n − 1.
Each final pivot position i in the sorted array is equally likely, occurring with the same
chance 1/n. Hence, the average time of each recursive call is equal to the average, over
all possible sublist sizes, of the average running time of the recursive calls on sorting
both sublists:
T (n) = (1/n) ∑_{i=0}^{n−1} (T (i) + T (n − 1 − i)) + cn = (2/n) ∑_{i=0}^{n−1} T (i) + cn.
Multiplying both sides by n, subtracting the analogous equality for n − 1, and dividing
by n(n + 1) (the same manipulation is carried out in detail for quickselect later in this
chapter) turns this into
T (n)/(n + 1) = T (n − 1)/n + 3c/(n + 1) − c/n.
Telescoping of this recurrence results in the following relationship.
T (n)/(n + 1) = T (0)/1 + 3c (1/2 + 1/3 + 1/4 + · · · + 1/(n + 1)) − c (1/1 + 1/2 + 1/3 + 1/4 + · · · + 1/n)
             = c (3H_{n+1} − 3 − H_n),
where H_n is the n-th harmonic number. Thus (see Section D.6) T (n) is Θ(n log n).
This is the first example we have seen of an algorithm whose worst-case perfor-
mance and average-case performance differ dramatically.
The finer details of the performance of quicksort depend on several implemen-
tation issues, which we discuss next.
Implementation of quicksort
There are many variants of the basic quicksort algorithm, and it makes sense to
speak of “a quicksort”. There are several choices to be made:
CHOOSING A PIVOT A passive pivot strategy of choosing a fixed (for example the
first, last, or middle) position in each sublist as the pivot seems reasonable under
the assumption that all inputs are equiprobable. The simplest version of quicksort
just chooses the first element of the list. We call this the naive pivot selection rule.
For such a choice, the likelihood of a random input resulting in quadratic running
time (see Lemma 2.14 above) is very small. However, such a simple strategy is a bad
idea. There are two main reasons:
• (nearly) sorted lists occur in practice rather frequently (think of a huge database
with a relatively small number of updates daily);
• a malicious adversary may exploit the pivot choice method by deliberately
feeding the algorithm an input designed to cause worst case behaviour (a so-
called “algorithmic complexity attack”).
If the input list is already sorted or reverse sorted, quadratic running time is ob-
tained when actually quicksort “should” do almost no work. We should look for a
better method of pivot selection.
A more reasonable choice for the pivot is the middle element of each sublist. In
this case an already sorted input list produces the perfect pivot at each recursion.
Of course, it is still possible to construct input sequences that result in quadratic
running time for this strategy. They are very unlikely to occur at random, but this
still leaves the door open for a malicious adversary.
As an alternative to passive strategies, an active pivot strategy makes good use of
the input data to choose the pivot. The best active pivot is the exact median of the list
because it partitions the list into (almost) equal sized sublists. But it turns out that
the median cannot be computed quickly enough, and such a choice considerably
slows quicksort down rather than improving it. Thus, we need a reasonably good yet
computationally efficient estimate of the median. (See Section 2.6 for more on the
problem of exact computation of the median of a list.)
A reasonable approximation to the true median is obtained by choosing the me-
dian of the first, middle, and last elements as the pivot (the so-called median-of-
three method). Note that this strategy is also the best for an already-sorted list be-
cause the median of each subarray is recursively used as the pivot.
Example 2.16. Consider the input list of integers (25, 8, 2, 91, 70, 50, 20, 31, 15, 65). Us-
ing the median-of-three rule (with the middle of a size n list defined as the element
at position ⌊n/2⌋), the first pivot chosen is the median of 25, 70 and 65, namely 65.
Thus the left and right sublists after partitioning have sizes 7 and 2 respectively.
The standard choice for pivot would be the left element, namely 25. In this case
the left and right sublists after partitioning have sizes 4 and 5 respectively.
The median-of-three strategy does not completely avoid bad performance, but
the chances of such a case occurring are much less than for a passive strategy.
Finally, another simple method is to choose the pivot randomly. We can show
(by the same calculation as for the average-case running time) that the expected
running time on any given input is Θ(n log n). It is still possible to encounter bad
cases, but these now occur by bad luck, independent of the input. This makes it
impossible for an adversary to force worst-case behaviour by choosing a nasty input
in advance. Of course, in practice an extra overhead is incurred because of the work
needed to generate a “random” number.
A similar idea is to first randomly shuffle the input (which can be done in linear
time), and then use the naive pivot selection method. Again, bad cases still occur,
but by bad luck only.
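For illustration, two of the pivot rules just discussed can be sketched in Java as follows; the helper names are our own, and each method returns the index of the chosen pivot, as assumed by the quickSort pseudocode later in this section.

import java.util.Random;

// Median-of-three rule: return the index of the median of a[l], a[m], a[r],
// where m is the middle position of the subarray a[l..r].
static int medianOfThreePivot(int[] a, int l, int r) {
    int m = (l + r) / 2;
    if (a[l] <= a[m]) {
        if (a[m] <= a[r]) return m;
        return (a[l] <= a[r]) ? r : l;
    } else {
        if (a[l] <= a[r]) return l;
        return (a[m] <= a[r]) ? r : m;
    }
}

static final Random RNG = new Random();

// Random rule: choose a position of a[l..r] uniformly at random.
static int randomPivot(int[] a, int l, int r) {
    return l + RNG.nextInt(r - l + 1);
}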
PARTITIONING There are several ways to partition with respect to the pivot element
in linear time. We present one of them here.
Our partitioning method uses a pointer L starting at the head of the list and an-
other pointer R starting at the end plus one. We first swap the pivot element to the
head of the list. Then, while L < R, we loop through the following procedure:
Finally, once L = R, we swap the pivot element with the element pointed to by L.
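One possible Java realization of this L/R partitioning scheme is sketched below: L is repeatedly advanced over keys smaller than the pivot, R is moved back over keys larger than the pivot, and the two out-of-place keys are swapped. The helper names are ours, and details such as the final swap using R rather than L may differ slightly from the exact scheme traced in Table 2.3.

static void swap(int[] a, int i, int j) { int tmp = a[i]; a[i] = a[j]; a[j] = tmp; }

// Partition a[l..r] around the pivot a[pivotIndex]; returns the pivot's final position.
static int partition(int[] a, int l, int r, int pivotIndex) {
    swap(a, l, pivotIndex);                      // move the pivot to the head of the subarray
    int p = a[l];
    int L = l, R = r + 1;                        // R starts at "the end plus one"
    while (true) {
        do { L++; } while (L <= r && a[L] < p);  // advance L over keys smaller than the pivot
        do { R--; } while (a[R] > p);            // move R left over keys larger than the pivot
        if (L >= R) break;
        swap(a, L, R);                           // exchange a misplaced pair and continue
    }
    swap(a, l, R);                               // put the pivot into its final position
    return R;
}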
Example 2.17. Table 2.3 exemplifies the partitioning of a 10-element list. We choose
the pivot p = 31. The bold element is the pivot; elements in italics are those pointed
to by the pointers L and R.
algorithm quickSort
Input: array a[0..n − 1]; array indices l, r
sorts the subarray a[l..r]
begin
if l < r then
i ← pivot(a, l, r) return position of pivot element
j ← partition(a, l, r, i) return final position of pivot
quickSort(a, l, j − 1) sort left subarray
quickSort(a, j + 1, r) sort right subarray
end if
return a
end
Exercises
Exercise 2.4.1. Analyse in the same way as in Example 2.17 the next partitioning of
the left sublist (20, 8, 2, 25, 15) obtained in Table 2.3.
Exercise 2.4.2. Show in detail what happens in the partitioning step if all keys are
equal. What would happen if we change the partition subroutine so that L skipped
over keys equal to the pivot? What if R did? What if both did? (hint: the algorithm
would still be correct, but its performance would differ).
Exercise 2.4.3. Analyse partitioning of the list (25, 8, 8, 8, 8, 8, 2) if the pointers L and
R are advanced while L ≤ p and p ≤ R, respectively, rather than stopping on equality
as in Table 2.3.
Exercise 2.4.4. Suppose that we implement quicksort with the naive pivot choice
rule and the partitioning method of the text. Find an array of size 8, containing each
integer 1, . . . 8 exactly once, that makes quicksort do the maximum number of com-
parisons. Find one that makes it do the minimum number of comparisons.
2.5 Heapsort
This algorithm is an improvement over selection sort that uses a more sophisti-
cated data structure to allow faster selection of the minimum element. It, like merge-
sort, achieves Θ(n log n) in the worst case. Nothing better can be achieved by an al-
gorithm based on pairwise comparisons, as we shall see later, in Section 2.7.
A heap is a special type of tree. See Section D.7 if necessary for general facts about
trees.
Definition 2.19. A complete binary tree is a binary tree which is completely filled at
all levels except, possibly, the bottom level, which is filled from left to right with no
missing nodes.
In such a tree, each leaf is at depth h or h − 1, where h is the tree height, and each
leaf at depth h lies to the left of each leaf at depth h − 1.
Example 2.20. Figure 2.5 demonstrates a complete binary tree with ten nodes. If the
node J had been the right child of the node E, the tree would have not been complete
because of the left child node missed at the bottom level.
Figure 2.5: A complete binary tree with ten nodes, stored in level order: positions 1–10
(array indices 0–9) hold the nodes A, B, C, D, E, F, G, H, I, J.
Example 2.21. In Figure 2.5, the node in position p = 1 is the root with no parent
node. The nodes in positions from 6 to 10 are the leaves. The root has its left child in
position 2 and its right child in position 3. The nodes in positions 2 and 3 have their
left child in position 4 and 6 and their right child in position 5 and 7, respectively.
The node in position 4 has a left child in position 8 and a right child in position 9,
and the node in position 5 has only a left child, in position 10.
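In the array representation, these parent and child links are pure index arithmetic. A minimal Java sketch, using 0-based indices i (so that position p corresponds to index i = p − 1):

// Navigation in an array-stored complete binary tree, using 0-based indices.
static int parent(int i)     { return (i - 1) / 2; } // position ⌊p/2⌋
static int leftChild(int i)  { return 2 * i + 1; }   // position 2p
static int rightChild(int i) { return 2 * i + 2; }   // position 2p + 1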
Definition 2.22. A (maximum) heap is a complete binary tree having a key associ-
ated with each node, the key of each parent node being greater than or equal to the
keys of its child nodes.
The heap order provides easy access to the maximum key associated with the
root.
Example 2.23. Figure 2.6 illustrates a maximum heap. Of course, we could just as
easily have a minimum heap where the key of the parent node is less than or equal
to the keys of its child nodes. Then the minimum key is associated with the root.
The heapsort algorithm now works as follows. Given an input list, build a heap
by successively inserting the elements. Then delete the maximum repeatedly (ar-
ranging the elements in the output list in reverse order of deletion) until the heap is
empty. Clearly, this is a variant of selection sort that uses a different data structure.
Analysis of heapsort
Heapsort is clearly correct for the same reason as selection sort. To analyse its
performance, we need to analyse the running time of the insertion and deletion op-
erations.
Lemma 2.24. The height of a complete binary tree with n nodes is at most ⌊lg n⌋.
Proof. Depending on the number of nodes at the bottom level, a complete tree of
height h contains between 2^h and 2^(h+1) − 1 nodes, so that 2^h ≤ n < 2^(h+1), or h ≤ lg n <
h + 1.
Figure 2.6: A maximum heap: positions 1–10 of the array hold the keys 91, 65, 70, 31, 8,
50, 25, 20, 15, 2.
Lemma 2.25. Insertion of a new node into a heap takes logarithmic time.
Proof. To add one more node to a heap of n elements, a new, (n + 1)-st, leaf position
has to be created. The new node with its associated key is placed first in this leaf. If
the inserted key preserves the heap order, the insertion is completed. Otherwise, the
new key has to swap with its parent, and this process of bubbling up, (or percolating
up) the key is repeated toward the root until the heap order is restored. Therefore,
there are at most h swaps where h is the heap height, so that the running time is
O(log n).
Example 2.26. To insert an 11th element, 75, into the heap in Figure 2.6 takes three
steps.
• The new key, 75, is placed in the new leaf position 11 (array index 10).
• The new key is swapped with its parent key, 8, in position 5 = ⌊11/2⌋ to restore
the heap order.
• The same type of swap is repeated for the parent key, 65, in position 2 = ⌊5/2⌋.
Because the heap order condition is now satisfied, the process terminates.
This is shown in Table 2.4. The elements moved to restore the heap order are
italicized.
Table 2.4: Inserting a new node with the key 75 in the heap in Figure 2.6.
Position 1 2 3 4 5 6 7 8 9 10 11
Index 0 1 2 3 4 5 6 7 8 9 10
Array at step 1 91 65 70 31 8 50 25 20 15 2 75
Array at step 2 91 65 70 31 75 50 25 20 15 2 8
Array at step 3 91 75 70 31 65 50 25 20 15 2 8
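The insertion (percolate-up) procedure of Lemma 2.25 can be sketched in Java as follows; the heap occupies a[0..n − 1] in level order with 0-based indices, and the method name and the assumption that the array has spare room are our own.

// Insert a new key into a maximum heap stored in a[0..n-1]; a must have room at index n.
// Returns the new heap size n + 1.
static int insert(int[] a, int n, int key) {
    int i = n;                                   // the new leaf position
    a[i] = key;
    while (i > 0 && a[(i - 1) / 2] < a[i]) {     // parent smaller: heap order violated
        int tmp = a[i]; a[i] = a[(i - 1) / 2]; a[(i - 1) / 2] = tmp; // bubble up one level
        i = (i - 1) / 2;
    }
    return n + 1;
}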
Lemma 2.27. Deletion of the maximum key from a heap takes logarithmic time in
the worst case.
Proof. The maximum key occupies the tree’s root, that is, position 1 of the array. The
deletion reduces the heap size by 1 so that its last leaf node has to be eliminated.
The key associated with this leaf replaces the deleted key in the root and then is
percolated down the tree. First, the new root key is compared to each child and
swapped with the larger child if at least one child is greater than the parent. This
process is repeated until the order is restored. Therefore, there are h moves in the
worst case where h is the heap height, and the running time is O(log n).
Because we percolate down the previous leaf key, the process usually terminates
at or near the leaves.
Example 2.28. To delete the maximum key, 91, from the heap in Figure 2.6, takes
three steps, as follows.
• The last leaf key, 2, replaces the deleted root key, 91, and the heap shrinks to 9 nodes.
• The new root key, 2, is swapped with its larger child, 70, in position 3.
• The key 2 is swapped with its larger child, 50, in position 6, which restores the heap order.
Table 2.5: Deletion of the maximum key from the heap in Figure 2.6.
Position 1 2 3 4 5 6 7 8 9
Index 0 1 2 3 4 5 6 7 8
Array at step 1 2 65 70 31 8 50 25 20 15
Array at step 2 70 65 2 31 8 50 25 20 15
Array at step 3 70 65 50 31 8 2 25 20 15
See also Table 2.5. The leaf key replacing the root key is boldfaced, and the moves
to restore the heap are italicized.
Lemma 2.29. Heapsort runs in time in Θ(n log n) in the best, worst, and average case.
Proof. The heap can be constructed in time O(n log n) (in fact it can be done more
quickly as seen in Lemma 2.31 but this does not affect the result). Heapsort then re-
peats n times the deletion of the maximum key and restoration of the heap property.
In the best, worst, and average case, each restoration is logarithmic, so the total time
is log(n) + log(n − 1) + ... + log(1) = log n! which is Θ(n log n).
Implementation of heapsort
There are several improvements that can be made to the basic idea above. First,
the heap construction phase can be simplified. There is no need to maintain the
heap property as we add each element, since we only require the heap property once
the heap is fully built. A nice recursive approach is shown below. Second, we can
eliminate the recursion. Third, everything can be done in-place starting with an
input array.
We consider each of these in turn.
A heap can be considered as a recursive structure of the form left subheap ← root
→ right subheap, built by a recursive “heapifying” process. The latter assumes that
the heap order exists everywhere except at the root and percolates the root down
to restore the total heap order. Then it is recursively applied to the left and right
subheaps.
Lemma 2.30. A complete binary tree satisfies the heap property if and only if the
maximum key is at the root, and the left and right subtrees of the root also satisfy the
heap property with respect to the same total order.
Proof. Suppose that T is a complete binary tree that satisfies the heap condition.
Then the maximum key is at the root. The left and right subtrees at the root are also
complete binary trees, and they inherit the heap property from T .
Conversely, suppose that T is a complete binary tree with the maximum at the
root and such that the left and right subtrees are themselves heaps. Then the value
at the root is at least as great as that of the keys of the children of the root. For each
other node of T , the same property holds by our hypotheses. Thus the heap property
holds for all nodes in the tree.
Lemma 2.31. A heap can be constructed from a list of size n in Θ(n) time.
Proof. Let T (h) denote the worst-case time to build a heap of height at most h. To
construct the heap, each of the two subtrees attached to the root are first trans-
formed into heaps of height at most h − 1 (the left subtree is always of height h − 1,
whereas the right subtree could be of lesser height, h − 2). Then in the worst case
the root percolates down the tree for a distance of at most h steps that takes time
O(h). Thus heap construction is asymptotically described by the recurrence similar
to Example 1.27, T (h) = 2T (h − 1) + ch, and so T (h) is O(2^h). Because a heap of size n
is of height h = ⌊lg n⌋, we have 2^h ≤ n and thus T (h) is O(n). But since every element
of the input must be inspected, we clearly have a lower bound of Ω(n), which yields
the result.
Now we observe that the recursion above can be eliminated. The key at each po-
sition p percolates down only after all its descendants have been already processed
by the same percolate-down procedure. Therefore, if this procedure is applied to the
nodes in reverse level order, the recursion becomes unnecessary. In this case, when
the node p has to be processed, all its descendants have been already processed.
Because leaves need not percolate down, a non-recursive heapifying process by per-
colating nodes down can start at the non-leaf node with the highest number. This
leads to an extremely simple algorithm for converting an array into a heap (see the
first for loop in Figure 2.7).
Figure 2.7 presents the basic pseudocode for heapsort (for details of the pro-
cedure percolateDown, see the Java code in Section A.1). After each deletion, the
heap size decreases by 1, and the emptied last array position is used to place the just
deleted maximum element. After the last deletion the array contains the keys in as-
cending sorted order. To get them in descending order, we have to build a minimum
heap instead of the above maximum heap.
The first for-loop converts an input array a into a heap by percolating elements
down. The second for-loop swaps each current maximum element to be deleted
with the current last position excluded from the heap and restores the heap by per-
colating each new key from the root down.
algorithm heapSort
Input: array a[0..n − 1]
begin
for i ← ⌊n/2⌋ − 1 while i ≥ 0 step i ← i − 1 do
percolateDown(a, i, n ) build a heap
end for
for i ← n − 1 while i ≥ 1 step i ← i − 1 do
swap(a[0], a[i]) delete the maximum
percolateDown(a, 0, i ) restore the heap
end for
end
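One possible Java rendering of Figure 2.7 is sketched below; the body of percolateDown is our own version of the procedure (the book's actual Java code is in Section A.1) for a maximum heap stored in a[0..size − 1] with 0-based indices.

// Restore the heap order in a[0..size-1] by percolating the key at index i down.
static void percolateDown(int[] a, int i, int size) {
    while (2 * i + 1 < size) {                   // while the node has at least a left child
        int child = 2 * i + 1;
        if (child + 1 < size && a[child + 1] > a[child]) child++; // pick the larger child
        if (a[i] >= a[child]) break;             // heap order already holds
        int tmp = a[i]; a[i] = a[child]; a[child] = tmp;
        i = child;                               // continue from the child's position
    }
}

static void heapSort(int[] a) {
    int n = a.length;
    for (int i = n / 2 - 1; i >= 0; i--)         // build a maximum heap
        percolateDown(a, i, n);
    for (int i = n - 1; i >= 1; i--) {           // repeatedly delete the maximum
        int tmp = a[0]; a[0] = a[i]; a[i] = tmp; // move the maximum to the freed slot
        percolateDown(a, 0, i);                  // restore the heap on a[0..i-1]
    }
}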
Position 1 2 3 4 5 6 7 8 9 10
Index 0 1 2 3 4 5 6 7 8 9
Initial array 70 65 50 20 2 91 25 31 15 8
Building max heap 70 65 50 20 8 91 25 31 15 2
70 65 50 31 8 91 25 20 15 2
70 65 91 31 8 50 25 20 15 2
70 65 91 31 8 50 25 20 15 2
91 65 70 31 8 50 25 20 15 2
Max heap 91 65 70 31 8 50 25 20 15 2
Deleting max 1 2 65 70 31 8 50 25 20 15 91
Restoring heap 1-9 70 65 50 31 8 2 25 20 15 91
Deleting max 2 15 65 50 31 8 2 25 20 70 91
Restoring heap 1-8 65 31 50 20 8 2 25 15 70 91
Deleting max 3 15 31 50 20 8 2 25 65 70 91
Restoring heap 1-7 50 31 25 20 8 2 15 65 70 91
Deleting max 4 15 31 25 20 8 2 50 65 70 91
Restoring heap 1-6 31 20 25 15 8 2 50 65 70 91
Deleting max 5 2 20 25 15 8 31 50 65 70 91
Restoring heap 1-5 25 20 2 15 8 31 50 65 70 91
Deleting max 6 8 20 2 15 25 31 50 65 70 91
Restoring heap 1-4 20 15 2 8 25 31 50 65 70 91
Deleting max 7 8 15 2 20 25 31 50 65 70 91
Restoring heap 1-3 15 8 2 20 25 31 50 65 70 91
Deleting max 8 2 8 15 20 25 31 50 65 70 91
Restoring heap 1-2 8 2 15 20 25 31 50 65 70 91
Deleting max 9 2 8 15 20 25 31 50 65 70 91
• If the size of the list is 0, return “not found”; if the size is 1, return the element
of that list. Otherwise:
• Choose one of the items in the list as a pivot p.
• Partition the remaining items into two disjoint sublists: reorder the list by plac-
ing all items greater than the pivot to follow it, and all elements less than the
pivot to precede it. Let j be the index of the pivot after partitioning.
• If k < j, then return the result of quickselect on the “head” sublist; otherwise
if k = j, return the element p; otherwise return the result of quickselect on the
“tail” sublist.
Analysis of quickselect
Correctness of quickselect is established just as for quicksort (see Exercise 2.6.2).
In the worst case, the running time can be quadratic; for example, if the input list is
already sorted and we use the naive pivot selection rule, then to find the maximum
element takes quadratic time.
However, the average-case running time of quickselect is only Θ(n).
Proof. Let T (n) denote the average time to select the k-th smallest element among
n elements, for fixed k where the average is taken over all possible input sequences.
Partitioning uses no more than cn operations and forms two subarrays, of size i and
n − 1 − i, respectively, where 0 ≤ i < n.
As in quicksort, the final pivot position in the sorted array has equal probability,
1/n, of taking each value of i. Then T (n) averages the average running time for all the
above pairs of the subarrays over all possible sizes. Because only one subarray from
each pair is recursively chosen, the average running time for the pair of size i and
n − 1 − i is (T (i) + T (n − 1 − i))/2 so that
T (n) = (1/(2n)) ∑_{i=0}^{n−1} (T (i) + T (n − 1 − i)) + cn = (1/n) ∑_{i=0}^{n−1} T (i) + cn.
As for quicksort, the above recurrence can be rewritten as nT (n) = ∑_{i=0}^{n−1} T (i) + cn²,
and subtracting the analogous equation (n − 1)T (n − 1) = ∑_{i=0}^{n−2} T (i) + c · (n − 1)² and
rearranging, we are eventually led to the familiar recurrence T (n) = T (n − 1) + c and
can see that T (n) is Θ(n).
Implementation of quickselect
The only change from quicksort is that instead of making two recursive calls on
the left and right subarrays determined by the pivot, quickselect chooses just one of
these subarrays.
algorithm quickSelect
Input: array a[0..n − 1]; array indices l, r; integer k
finds kth smallest element in the subarray a[l..r]
begin
if l ≤ r then
i ← pivot(a, l, r) return position of pivot element
j ← partition(a, l, r, i) return final position of pivot
q ← j − l + 1 the rank of the pivot in a[l..r]
if k = q then return a[ j]
else if k < q then return quickSelect(a, l, j − 1, k)
else return quickSelect(a, j + 1, r, k − q)
end if
else return “not found”
end
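A direct Java transcription of this pseudocode might look as follows; it assumes the medianOfThreePivot and partition sketches from Section 2.4 (our own helpers, not code from this book), and signals “not found” with an exception.

// Return the kth smallest element (k counted from 1) of a[l..r].
// Assumes the medianOfThreePivot and partition helpers sketched earlier.
static int quickSelect(int[] a, int l, int r, int k) {
    if (l > r) throw new IllegalArgumentException("not found");
    int i = medianOfThreePivot(a, l, r);         // position of the chosen pivot
    int j = partition(a, l, r, i);               // final position of the pivot
    int q = j - l + 1;                           // rank of the pivot within a[l..r]
    if (k == q) return a[j];
    else if (k < q) return quickSelect(a, l, j - 1, k);
    else return quickSelect(a, j + 1, r, k - q);
}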
(Figure: a decision tree for sorting three elements a1, a2, a3 by pairwise comparisons;
each leaf corresponds to one of the 3! = 6 possible orderings.)
Theorem 2.35. Every comparison-based sorting algorithm takes Ω(n log n) time in the
worst case.
Proof. We first claim that each binary tree of height h has at most 2^h leaves. Once this
claim is established, we proceed as follows. The least value h such that 2^h ≥ n! has
the lower bound h ≥ lg(n!), which is in Ω(n log n) (the asymptotic result follows from
Section D.6). This will prove the theorem.
To prove the above claim about tree height, we use mathematical induction on
h. A tree of height 0 obviously has at most 2^0 = 1 leaf. Now suppose that h ≥ 1 and
that each tree of height h − 1 has at most 2^(h−1) leaves. The root of a decision tree of
height h is linked to two subtrees, each of height at most h − 1. By the induction
hypothesis, each subtree has at most 2^(h−1) leaves. The number of leaves in the whole
decision tree is equal to the total number of leaves in its subtrees, that is, at most
2^(h−1) + 2^(h−1) = 2^h.
This result shows that heapsort and mergesort have asymptotically optimal worst-
case time complexity for comparison-based sorting.
As for average-case complexity, one can also prove the following theorem by us-
ing the decision tree idea. Since we are now at the end of our introductory analysis
of sorting, we omit the proof and refer the reader to the exercises, and to more ad-
vanced books.
Theorem 2.36. Every comparison-based sorting algorithm takes Ω(n log n) time in
the average case.
Exercises
Exercise 2.7.1. Prove Theorem 2.36. The following hints may be useful. First show
that the sum of all depths of leaves in a binary decision tree with k leaves is at least
k lg k. Do this by induction on k, using the recursive structure of these trees. Then
apply the above inequality with k = n!.
Exercise 2.7.2. Consider the following sorting method (often called counting sort )
applied to an array a[n] all of whose entries are integers in the range 1..1000. Intro-
duce a new array t[1000] all of whose entries are initially zero. Scan through the array
a[n] and each time an integer i is found, increment the counter t[i − 1] by 1. Once this
is complete, loop through 0 ≤ i ≤ 999 and print out t[i] copies of integer i + 1 at each
step.
What is the worst-case time complexity of this algorithm? How do we reconcile
this with Theorem 2.35?
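To make the counting method concrete, here is one possible Java sketch (our own, and not an answer to the question above).

// Print the entries of a (all assumed to lie in 1..1000) in non-decreasing order.
static void countingSort(int[] a) {
    int[] t = new int[1000];                     // t[i] counts occurrences of the value i + 1
    for (int x : a) t[x - 1]++;
    for (int i = 0; i <= 999; i++)
        for (int c = 0; c < t[i]; c++)
            System.out.println(i + 1);
}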
2.8 Notes
It was once said that sorting consumes 25% of all CPU time worldwide. What-
ever the true proportion today, sorting clearly remains a fundamental problem to be
solved in a wide variety of applications.
Given the rise of object-oriented programming languages, comparison-based sort-
ing algorithms are perhaps even more important than in the past. In practice the
time taken to perform a basic comparison operation is often much more than that
taken to swap two objects: this differs from the case of, say, 32-bit integers, for which
most analysis was done in the past.
Shellsort was proposed by D. Shell in 1959, quicksort by C. A. R. Hoare in 1960,
mergesort in 1945 by J. von Neumann, and heapsort in 1964 by J. W. J. Williams.
Insertion sort and the other quadratic algorithms are very much older.
At the time of writing, versions of mergesort are used in the standard libraries for
the languages Python, C++ and Java, and a hybrid of quicksort and heapsort is used
by C++.
We have not discussed whether there is an algorithm that will find the median
(or any other given order statistic) in worst-case linear time. For a long time this
was unknown, but the answer was shown to be yes in 1973 by Blum, Floyd, Pratt,
Rivest and Tarjan. The algorithm is covered in more advanced books and is fairly
complicated.
Chapter 3
Efficiency of Searching
As can be seen from this example, we may map the keys to integers.
We deal with both static (where the database is fixed in advance and no inser-
tions, deletions or updates are done) and dynamic (where insertions, deletions or
updates are allowed) implementations of the table ADT.
In all our implementations of the table ADT, we may simplify the analysis as fol-
lows. We use lists and trees as our basic containers. We treat each query or update
of a list element or tree node, or comparison of two of them, as an elementary oper-
ation. The following lemma summarizes some obvious relationships.
Lemma 3.3. Suppose that a table is built up from empty by successive insertions,
and we then search for a key k uniformly at random. Let Tss (k) (respectively Tus (k)) be
the time to perform successful (respectively unsuccessful) search for k. Then
• the time taken to retrieve, delete, or update an element with key k is at least
Tss (k);
• the time taken to insert an element with key k is at least Tus (k);
In addition
• the worst case value for Tss (k) equals the worst case value for Tus (k);
• the average value of Tss (k) equals one plus the average of the times for the un-
successful searches undertaken while building the table.
Proof. To insert a new element, we first try to find where it would be if it were con-
tained in the data structure, and then perform a single insert operation into the con-
tainer. To delete an element, we first find it, and then perform a delete operation
on the container. Analogous statements hold for updating and retrieval. Thus for a
given state of the table formed by insertions from an empty table, the time for suc-
cessful search for a given element is the time that it took for unsuccessful search for
that element, as we built the table, plus one. This means that the time for unsuc-
cessful search is always at least the time for successful search for a given element
(the same in the worst case), and the average time for successful search for an ele-
ment in a table is the average of all the times for unsuccessful searches plus one.
If the data structure used to implement a table arranges the records in a list, the
efficiency of searching depends on whether the list is sorted. In the case of the tele-
phone book, we quickly find the desired phone number (data record) by name (key).
But it is almost hopeless to search directly for a phone number unless we have a spe-
cial reverse directory where the phone number serves as a key. We discuss unsorted
lists in the Exercises below, and sorted lists in the next section.
Exercises
Exercise 3.1.1. The sequential search algorithm simply starts at the head of a list
and examines elements in order until it finds the desired key or reaches the end of
the list. An array-based version is shown in Figure 3.1.
algorithm sequentialSearch
Input: array a[0..n − 1]; key k
begin
for i ← 0 while i < n step i ← i + 1 do
if a[i] = k then return i
end for
return not found
end
Show that both successful and unsuccessful sequential search in a list of size n
have worst-case and average-case time complexity Θ(n).
Exercise 3.1.2. Show that sequential search is slightly more efficient for sorted lists
than unsorted ones. What is the time complexity of successful and unsuccessful
search?
Figure 3.2: Binary search for the key 42 in a sorted array of 16 elements: the search range
shrinks from l = 0, r = 15 (first probe at m = 7) to l = 4, r = 6 (probe at m = 5), and
finally to the single element 42.
Proof. Unsuccessful binary search takes ⌈lg(n+1)⌉ comparisons in every case, which
is Θ(log n). By Lemma 3.3, successful search also takes time in Θ(log n) on average
and in the worst case. Insertion and deletion in arrays takes Θ(n) time on average
and in the worst case.
For linked lists, the searches take time in Θ(n) and the list insertion and deletion
take constant time.
Binary search performs a predetermined sequence of comparisons depending
on the data size n and the search key k. This sequence is better analysed when a
sorted list is represented as a binary search tree. For simplicity of presentation, we
suppress the data records and make all the keys integers. Background information
on trees is found in Section D.7.
Definition 3.6. A binary search tree (BST) is a binary tree that satisfies the follow-
ing ordering relation: for every node ν in the tree, the values of all the keys in the
left subtree are smaller than (or equal to, if duplicates are allowed) the key in ν, and the
values of all the keys in the right subtree are greater than the key in ν.
In line with the ordering relation, all the keys can be placed in sorted order by
traversing the tree in the following way: recursively visit, for each node, its left sub-
tree, the node itself, then its right subtree (this is the so-called inorder traversal).
The relation is not very desirable for duplicate keys; for exact data duplicates, we
should have one key and attach to it the number of duplicates.
The element in the middle position, m0 = ⌊(n − 1)/2⌋, of a sorted array is the
root of the tree representation. The lower subrange, [0, . . . , m0 − 1], and the upper
subrange, [m0 + 1, . . . , n − 1], of indices are related to the left and right arcs from the
root. The elements in their middle positions, mleft,1 = ⌊(m0 − 1)/2⌋ and mright,1 = ⌊(n +
m0 )/2⌋, become the left and right child of the root, respectively. This process is re-
peated until all the array elements are related to the nodes of the tree. The middle
element of each subarray is the left child if its key is less than the parent key or the
right child otherwise.
Example 3.7. Figure 3.3 shows a binary search tree for the 16 sorted keys in Fig-
ure 3.2. The key a[7] = 53 is the root of the tree. The lower [0..6] and the upper [8..15]
subranges of the search produce the two children of the root: the left child a[3] = 33
and the right child a[11] = 81. All other nodes are built similarly.
The tree representation interprets binary search as a pass through the tree from
the root to a desired key. If a leaf is reached but the key is not found, then the search
is unsuccessful. The number of comparisons to find a key is equal to the number of
nodes along the unique path from the root to the key (the depth of the node, plus
one).
A static binary search always yields a tree that is well-balanced: for each node
in the tree, the heights of the left and right subtrees differ by at most 1. Thus all
leaves are on at most two levels. This property is used to define AVL trees in Sec-
tion 3.4.
Implementation of binary search
Algorithm binarySearch in Figure 3.4 searches for key k in a sorted array a.
The search starts from the whole array from l = 0 to r = n − 1. If l and r ever cross,
l > r, then the desired key is absent, indicating an unsuccessful search. Otherwise,
the middle position, m = ⌊(l + r)/2⌋, of the current range is probed:
• If k = a[m], the search returns the position m.
• If k > a[m], then the key may only be in the range m + 1 to r, so that l ← m + 1 at the
next step.
• If k < a[m], then the key may only be in the range l to m − 1, so that r ← m − 1 at the
next step.
Figure 3.3: The binary search tree corresponding to binary search in the 16 sorted keys of
Figure 3.2: the root is a[7] = 53, its children are a[3] = 33 and a[11] = 81, and so on.
Binary search is slightly accelerated if the test for a successful search is removed
from the inner loop in Figure 3.4 and the search range is reduced by one in all cases.
To determine whether the key k is present or absent, a single test is performed out-
side the loop (see Figure 3.5). If the search key k is not larger than the key a[m] in the
middle position m, then it may be in the range from l to m. The algorithm breaks the
while-loop when the range is 1, that is, l = r, and then tests whether there is a match.
algorithm binarySearch
Input: array a[0..n − 1]; key k
begin
l ← 0; r ← n − 1
while l ≤ r do
m ← ⌊(l + r)/2⌋
if a[m] < k then l ← m + 1
else if a[m] > k then r ← m − 1 else return m
end if
end while
return not found
end
algorithm binarySearch2
Input: array a[0..n − 1]; key k
begin
l ← 0; r ← n − 1
while l < r do
m ← ⌊(l + r)/2⌋
if a[m] < k then l ← m + 1
else r ← m
end while
if a[l] = k then return l
else return not found
end
Exercises
Exercise 3.2.1. Perform a search for the key 41 in the situation of Example 3.4.
Exercise 3.2.2. How many comparisons will binary search make in the worst case to
find a key in a sorted array of size n = 106 ?
Exercise 3.2.3. Prove that both given array implementations of binary search cor-
rectly find an element or report that it is absent.
Exercise 3.2.4. If we have more information about a sorted list, interpolation search
can be (but is not always) much faster than binary search. It is the method used
when searching a telephone book: to find a name starting with “C” we open the
book not in the middle, but closer to the beginning.
To search for an element between positions l and r with l < r, we choose ρ = (k − a[l])/(a[r] − a[l])
and the next guess is m = l + ⌈ρ(r − l)⌉.
Give an example of a sorted input array of 8 distinct integers for which interpola-
tion search performs no better than sequential search.
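For reference, one possible Java sketch of interpolation search built around the formula above (our own rendering; the handling of keys outside the range a[l]..a[r] and of equal end keys is an ad hoc choice).

// Interpolation search for key k in the sorted array a; returns an index of k or -1.
static int interpolationSearch(int[] a, int k) {
    int l = 0, r = a.length - 1;
    while (l < r && k >= a[l] && k <= a[r]) {
        if (a[r] == a[l]) break;                        // all keys in the range are equal
        double rho = (double) (k - a[l]) / (a[r] - a[l]);
        int m = l + (int) Math.ceil(rho * (r - l));     // the next probe position
        if (a[m] < k) l = m + 1;
        else if (a[m] > k) r = m - 1;
        else return m;
    }
    return (l <= r && l < a.length && a[l] == k) ? l : -1;
}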
Exercise 3.2.5. Determine how many positions interpolation search will examine to
find the key 85 among the ten keys (10, 20, 35, 45, 55, 60, 75, 80, 85, 100).
Figure 3.6: Binary trees: only the leftmost tree is a binary search tree.
Example 3.8. In Figure 3.6, two trees are not binary search trees because the key 2
in the middle tree is in the right subtree of the key 3, and the keys 11 and 12 in the
rightmost tree are in the left subtree of the key 10.
Binary search trees implement efficiently the basic search, insert, and remove
operations of the table ADT. In addition a BST allows for more operations such as
sorting records by their keys, finding the minimum key, etc (see Exercises).
Binary search trees are more complex than heaps. Only a root or leaves are added
to or removed from a heap, whereas any node of a binary search tree may be re-
moved.
The search operation resembles usual binary search in that it starts at the root
and moves either left or right along the tree, depending on the result of comparing
the search key to the key in the node.
Figure 3.7: Successful search for the key 8 and unsuccessful search for the key 7 in the
leftmost tree of Figure 3.6; the new node 7 is inserted where the unsuccessful search stops.
Example 3.9. To find key 8 in the leftmost tree in Figure 3.6, we start at the root 10
and go left because 8 < 10. This move takes us to 3, so we turn right (because 8 > 3)
to 5. Then we go right again (8 > 5) and encounter 8.
Example 3.10. To find key 7, we repeat the same path. But, when we are at node 8,
we cannot turn left because there is no such link. Hence, 7 is not found.
Figure 3.7 illustrates both Examples 3.9 and 3.10. It shows that the node with key
7 can be inserted just at the point at which the unsuccessful search terminates.
The removal operation is the most complex because removing a node may dis-
connect the tree. Reattachment must retain the ordering condition but should not
needlessly increase the tree height. The standard method of removing a node is as
follows.
• A leaf node is simply removed, together with the link to it from its parent.
• An internal node with only one child is deleted after its child is linked to its
parent node.
• If the node has two children, then it should be swapped with the node having
the smallest key in its right subtree. The latter node is easily found (see Exer-
cise 3.3.1). After swapping, the node can be removed as in the previous cases,
since it is now in a position where it has at most one child.
This approach appears asymmetric but various modifications do not really im-
prove it. The operation is illustrated in Figure 3.8.
Figure 3.8: Removal of the node with key 10 from the binary search tree.
Proof. The running time of these operations is proportional to the number of nodes
visited. For the find and insert operations, this equals 1 plus the depth of the node,
but for the remove operation it equals the depth of the node plus at most the height
of the node. In each case this is O(h).
For a well-balanced tree, all operations take logarithmic time. The problem is
that insertions and deletions may destroy the balance, and in practice BSTs may be
heavily unbalanced as in Figure 3.11. So in the worst case the search time is linear,
Θ(n), and we have an inferior form of sequential search (because of the extra over-
head in creating the tree arcs).
Figure 3.9 presents all variants of inserting the four keys 1, 2, 3, and 4 into an
empty binary search tree. Because only relative ordering is important, this corre-
sponds to any four keys (a[i] : i = 1, . . . , 4) such that a[1] < a[2] < a[3] < a[4].
Figure 3.9: The binary search trees produced by all 4! = 24 insertion orders of the keys
1, 2, 3, 4.
There are in total 4! = 24 possible insertion orders given in Figure 3.9. Some trees
result from several different insertion sequences, and more balanced trees appear
more frequently than unbalanced ones.
Definition 3.12. The total internal path length, Sτ (n), of a binary tree τ is the sum
of the depths of all its nodes.
The average node depth, (1/n) Sτ(n), gives the average time complexity of a success-
ful search in a particular tree τ. The average-case time complexity of searching is
obtained by averaging the total internal path lengths for all the trees of size n, that
is, for all possible n! insertion orders, assuming that these latter occur with equal
probability, 1/n!.
Lemma 3.13. Suppose that a BST is created by n random insertions starting from
an empty tree. Then the expected time for successful and unsuccessful search is
Θ(log n). The same is true for update, retrieval, insertion and deletion.
Proof. Let S(n) denote the average of Sτ over all insertion sequences, each sequence
considered as equally likely. We need to prove that S(n) is Θ(n log n).
It is obvious that S(1) = 0. Furthermore, any n-node tree, n > 1, contains a left
subtree with i nodes, a root at height 0, and a right subtree with (n − i − 1) nodes
where 0 ≤ i ≤ n − 1, each value of i being by assumption equiprobable.
For a fixed i, S(i) is the average total internal path length in the left subtree with re-
spect to its own root and S(n−i−1) is the analogous total path length in the right sub-
tree. The root of the tree adds 1 to the path length of each other node. Because there
are n − 1 such nodes, the following recurrence holds: S(n) = (n − 1) + S(i) + S(n − i − 1)
for 0 ≤ i ≤ n − 1. After summing these recurrences for all i = 0, . . . , n − 1 and averaging,
just the same recurrence as for the average-case quicksort analysis (see Lemma 2.15)
is obtained: S(n) = (n − 1) + (2/n) ∑_{i=0}^{n−1} S(i). Therefore, the average total internal path
length is Θ(n log n). The expected depth of a node is therefore in Θ(log n). This means
that search, update, retrieval and insertion take time in Θ(log n). The same result is
true for deletion (this requires the result that the expected height of a random BST is
Θ(log n), which is harder to prove—see Notes).
Thus in practice, for random input, all BST operations take time about Θ(log n).
However the worst-case linear time complexity, Θ(n), is totally unsuitable in many
applications, and deletions can also destroy balance.
We tried to eliminate the worst-case degradation of quicksort by choosing a pivot
that performs well on random input data and relying on the very low probability of
the worst case. Fortunately, binary search trees allow for a special general technique,
called balancing , that guarantees that the worst cases simply cannot occur. Balanc-
ing restructures each tree after inserting new nodes in such a way as to prevent the
tree height from growing too much. We discuss this more in Section 3.4.
Figure 3.11: Heavily unbalanced, chain-like binary search trees built from the same keys
as a well-balanced tree.
Implementation of BST
A concrete implementation of a BST requires explicit links between nodes, each
of which contains a data record, and is more complicated than the other implemen-
tations so far. A programming language that supports easy manipulation of objects
makes the coding easier. We do not present any (pseudo)code in this book.
Exercises
Exercise 3.3.1. Show how to find the maximum (minimum) key in a binary search
tree. What is the running time of your algorithm? How could you find the median,
or an arbitrary order statistic?
Exercise 3.3.2. Suppose that we insert the elements 1, 2, 3, 4, 5 into an initially empty
BST. If we do this in all 120 possible orders, which trees occur most often? Which
tree shapes (in other words, ignore the keys) occur most often?
Exercise 3.3.3. Suppose that we insert the elements 1, 2, 3, 4 in some order into an
initially empty BST. Which insertion orders yield a tree of maximum height? Which
yield a tree of minimum height?
Exercise 3.3.4. Show how to output all records in ascending order of their keys, given
a BST. What is the running time of your algorithm?
This balance condition ensures height Θ(log n) for an AVL tree despite being less
restrictive than requiring the tree to be complete. Complete binary trees have too
rigid a balance condition which is difficult to maintain when new nodes are inserted.
Lemma 3.15. The height of an AVL tree with n nodes is Θ(log n).
Proof. Due to the possibly different heights of its subtrees, an AVL tree of height h
may contain fewer than 2^(h+1) − 1 nodes. We will show that it contains at least c^h nodes
for some constant c > 1, so that the maximum height of a tree with n items is log_c n.
Let Sh be the size of the smallest AVL tree of height h. It is obvious that S0 = 1 (the
root only) and S1 = 2 (the root and one child). The smallest AVL tree of height h has
subtrees of height h − 1 and h − 2, because at least one subtree is of height h − 1 and
the second height can differ at most by 1 due to the AVL balance condition. These
subtrees must also be the smallest AVL trees for their height, so that Sh = Sh−1 +Sh−2 +
1.
By mathematical induction, we show easily that Sh = Fh+3 − 1 where Fi is the i-
th Fibonacci number (see Example 1.28) because S0 = F3 − 1, S1 = F4 − 1, and Si+1 =
(Fi+3 − 1) + (Fi+2 − 1) + 1 ≡ Fi+4 − 1. Therefore, for each AVL tree with n nodes n ≥ Sh ≈
φ^(h+3)/√5 − 1, where φ ≈ 1.618. Thus, the height of an AVL tree with n nodes satisfies the
following condition: h ≤ 1.44 · lg(n + 1) − 1.33, and the worst-case height is at most
44% more than the minimum height for binary trees.
Note that AVL trees need in the worst case only about 44% more comparisons
than complete binary trees. They behave even better in the average case. Although
theoretical estimates are unknown, the average-case height of a randomly constructed
AVL tree is close to lg n.
All basic operations in an AVL tree have logarithmic worst-case running time.
The difficulty is, of course, that simple BST insertions and deletions can destroy the
AVL balance condition. We need an efficient way to restore the balance condition
when necessary. All self-balancing binary search trees use the idea of rotation to do
this.
Example 3.16. Figure 3.12 shows a left rotation at the node labelled p and a right
rotation at the node labelled q (the labels are not related to the keys stored in the
BST, which are not shown here). These rotations are mutually inverse operations.
Each rotation involves only local changes to the tree, and only a constant number
of pointers must be updated, independent of the tree size. If, for example, there is a
subtree of large height below a, then the right rotation will decrease the overall tree
height.
Figure 3.12: A right rotation at the node q and the inverse left rotation at the node p;
the subtrees a, b, and c keep their left-to-right order.
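On a bare node type, a rotation indeed changes only a constant number of links. A minimal Java sketch of the right rotation of Figure 3.12 (our own illustrative types, not code from this book); the left rotation is symmetric.

class Node {
    int key;
    Node left, right;
    Node(int key) { this.key = key; }
}

// Right rotation at q: the left child p becomes the new subtree root.
// Returns the new root of this subtree; only two links are redirected.
static Node rotateRight(Node q) {
    Node p = q.left;
    q.left = p.right;     // subtree b moves from p to q
    p.right = q;          // q becomes the right child of p
    return p;
}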
The precise details of exactly when a rotation is required, and which kind, differ
depending on the type of balanced BST. In each case the programming of the inser-
tion and removal operations is quite complex, as is the analysis. We will not go into
more details here—the reader should consult the recommended references.
Balancing of AVL trees requires extra memory and heavy computations. This is
why more relaxed, but still efficient, balanced search trees such as red-black trees are
more often used in practice.
Definition 3.17. A red-black tree is a binary search tree such that every node is
coloured either red or black, and every non-leaf node has two children. In addition,
it satisfies the following properties:
• the root is coloured black;
• every red node has only black children (so that no path contains two consecutive
red nodes);
• every path from the root node to a leaf must contain the same number of black
nodes.
Theorem 3.18. If every path from the root to a leaf contains b black nodes, then the
tree contains at least 2^b − 1 black nodes.
Proof. The statement holds for b = 1 (in this case the tree contains either the black
root only or the black root and one or two red children). In line with the induction
hypothesis, let the statement hold for all red-black trees with b black nodes in every
path. If a tree contains b + 1 black nodes in every path and has two black children
of the root, then the tree contains two subtrees with b black nodes just under the
root and has in total at least 1 + 2 · (2^b − 1) = 2^(b+1) − 1 black nodes. If the root has a red
child, the latter has only black children, so that the total number of the black nodes
can become even larger.
Because no path contains two consecutive red nodes, no path can more than double
in length once the red nodes are counted along with the black ones. Therefore, the
height of a red-black tree is at most 2⌈lg n⌉, and searching in it takes logarithmic time,
O(log n).
Red-black trees allow for very fast search. There is still no precise analysis of their
average-case performance; their properties are found either experimentally or by
analysing red-black trees built from n random keys. There are about lg n comparisons
per search on average and fewer than 2 lg n + 2 comparisons in the worst case. Restoring
the tree after insertion or deletion of a single node requires O(1) rotations and O(log n)
colour changes in the worst case.
Another variety of balanced tree, the AA-tree, becomes more efficient than a red-
black tree when node deletions are frequent. An AA-tree has only one extra condi-
tion with respect to a red-black tree, namely that the left child may not be red. This
property simplifies the removal operation considerably.
Balanced B-trees: efficiency of external search
The B-tree is a popular structure for ordered databases in external memory such
as magnetic or optical disks. The previous “Big-Oh” analysis is invalid here because
it assumes all elementary operations have equal time complexity. This does not hold
for disk input / output where one disk access corresponds to hundreds of thousands
of computer instructions and the number of accesses dominates running time. For
a large database of many millions of records, even logarithmic worst-case perfor-
mance of red-black or AA-trees is unacceptable. Each search should involve a very
small number of disk accesses, say, 3–4, even at the expense of reasonably complex
computations (which will still take only a small fraction of a disk access time).
Binary tree search cannot solve the problem because even an optimal tree has
height lg n. To decrease the height, each node must have more branches. The height
of an optimal m-ary search tree (m-way branching) is roughly logm n (see Table 3.2),
or lg m times smaller than with an optimal binary tree (for example, 6.6 times for
m = 100).
Table 3.2: Height of the optimal m-ary search tree with n nodes.
Figure 3.13: A multiway search tree over the keys 0, 1, 3, 4, 6, 8, 10, 14, 17, 20, with the
keys 4 and 10 guiding the search at the root.
Figure 3.13 shows that the search and the traversal of a multiway search tree gen-
eralize the binary search tree operations in a straightforward way. Whereas in the latter
case the search key is compared with a single key in a node in order to choose one of
two branches or stop at the node, in an m-ary tree the search key is compared with at
most m − 1 keys in a node to choose one of m branches. The major difference is that
multiple data records are now associated only with leaves although some multiway
trees do not strictly follow this condition. Thus the worst-case and the average-case
search involve the tree height and the average leaf height, respectively.
Example 3.19. Search for a desired key k in Figure 3.13 is guided by thresholds, for
example at the root it goes left if k < 4, down if 4 ≤ k < 10, and right if k ≥ 10. The
analogous comparisons are repeated at every node until the record with the key k
is found at a leaf or its absence is detected. Let k = 17. First, the search goes right
from the root (as 17 > 10), then goes to the third child of the right internal node (as
17 ≤ 17 < 20), and finally finds the desired record in that leaf.
• each nonleaf node (except possibly the root) has between ⌈m/2⌉ and m children
inclusive;
• each nonleaf node with µ children has µ − 1 keys, (θ[i] : i = 1, . . . , µ − 1), to guide
the search where θ[i] is the smallest key in subtree i + 1;
• data items are stored in leaves, each storing between ⌈l/2⌉ and l items, for some
l.
Other definitions of B-trees (mostly with minor changes) also exist, but the above
one is most popular. The first three conditions specify the memory space each node
needs (first of all, for m links and m − 1 keys) and ensure that more than half of it, ex-
cept possibly in the root, will be used. The last two conditions form a well-balanced
tree.
Note. B-trees are usually named by their branching limits, that is, ⌈m/2⌉–m, so that
2–3 and 6–11 trees are B-trees with m = 3 and m = 11, respectively.
Example 3.21. In a 2−4 B-tree in Figure 3.14 all nonleaf nodes have between ⌈4/2⌉ =
2 and 4 children and thus from 1 to 3 keys. The number l of data records associated
with a leaf depends on the capacity of external memory and the record size. In Fig-
ure 3.14, l = 7 and each leaf stores between ⌈7/2⌉ = 4 and 7 data items.
Because the nodes are at least half full, a B-tree with m ≥ 8 cannot be a simple
binary or ternary tree. Simple ternary 2–3 B-trees with only two or three children per
node are sometimes in use for storing ordered symbol tables in internal computer
RAM. But branching limits for B-trees on external disks are considerably greater to
make one node fit in a unit data block on the disk. Then the number of nodes ex-
amined (and hence the number of disk accesses) decreases lg m times or more com-
pared with a binary tree search.
Figure 3.14: 2–4 B-tree with the leaf storage size 7 (2..4 children per node and 4..7 data items
per leaf ).
In each particular case, the tree order m and the leaf capacity l depend on the disk
block size and the size of records to store. Let one disk block hold d bytes, each key
be of κ bytes, each branch address be of b bytes, and the database contain s records,
each of size r bytes. In a B-tree of order m, each nonleaf node stores at most m − 1
keys and m branch addresses, that is, in total, κ(m − 1) + bm = (κ + b)m − κ bytes. The
largest order m such that one node fits in one disk block, (κ + b)m − κ ≤ d, is m = ⌊(d + κ)/(b + κ)⌋.
Each internal node, except the root, has at least ⌈m/2⌉ branches. At most l = ⌊d/r⌋ records
fit in one block, and each leaf addresses from ⌈l/2⌉ to l records. Assuming each leaf is
full, the total number of leaves is n = s/l, so that in the worst case the leaves are
at level ⌈log_{m/2} n⌉ + 1.
Example 3.22. Suppose the disk block is d = 2^15 = 32768 bytes, the key size is κ =
2^6 = 64 bytes, the branch address has b = 8 bytes, and the database contains s =
2^30 ≈ 1.07 · 10^9 records of size r = 2^10 = 1024 bytes each. Then the B-tree order is
m = ⌊(32768 + 64)/(8 + 64)⌋ = ⌊32832/72⌋ = 456, so that each internal node, except the
root, has at least 228 branches. One block contains at most l = 32768/1024 = 32 records,
and the number of
leaves is at least n = 2^30/32 = 2^25. The worst-case level of the leaves in this B-tree is
therefore ⌈log_{228} 2^25⌉ + 1 = 5.
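The arithmetic of Example 3.22 is easy to check mechanically; the following Java fragment (entirely our own illustration) recomputes the order, the leaf capacity, the number of leaves and the worst-case leaf level from the given sizes.

// Recompute the B-tree parameters of Example 3.22 from the raw sizes.
public class BTreeSizing {
    public static void main(String[] args) {
        long d = 1L << 15;      // disk block size in bytes
        long kappa = 1L << 6;   // key size in bytes
        long b = 8;             // branch address size in bytes
        long s = 1L << 30;      // number of records
        long r = 1L << 10;      // record size in bytes

        long m = (d + kappa) / (b + kappa);      // tree order: 456
        long l = d / r;                          // records per leaf block: 32
        long leaves = s / l;                     // number of full leaves: 2^25
        // worst-case leaf level: ceil(log_{m/2} leaves) + 1
        long level = (long) Math.ceil(Math.log(leaves) / Math.log(m / 2.0)) + 1;
        System.out.println("m = " + m + ", l = " + l + ", leaves = " + leaves
                + ", worst-case leaf level = " + level);
    }
}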
Exercises
Exercise 3.4.1. Draw two different red-black trees containing at most two black nodes
along every path from the root to a leaf.
Exercise 3.4.2. Draw two different AVL trees of size n = 7 and compare them to the
complete binary tree of the same size. Is the latter also an AVL tree?
Exercise 3.4.3. Draw an AA-tree containing at most 2 black nodes along every path
from a node to a leaf and differing from the complete binary tree of order n = 7.
Exercise 3.4.4. Draw a binary search tree of minimum size such that a left rotation
reduces the height of the tree.
CHAINING In separate chaining synonyms with the same hash address are stored
in a linked list connected to that address. We still hash the key of each item to obtain
an array index. But if there is a collision, the new item is simply placed in this hash
address, along with all other synonyms. Each array element is a head reference for
the associated linked list, and each node of this list stores not only the key and data
values for a particular table entry but also a link to the next node. The head node of
the list referenced by the array element always contains the last inserted item.
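A minimal Java sketch of separate chaining for integer keys and String values (our own illustration; the hash function is a placeholder, and a full table would also support deletion and resizing). New items are inserted at the head of their chain, as described above.

// A tiny separate-chaining hash table mapping int keys to String values.
class ChainedHashTable {
    private static class Node {
        int key; String value; Node next;
        Node(int key, String value, Node next) { this.key = key; this.value = value; this.next = next; }
    }

    private final Node[] table;

    ChainedHashTable(int m) { table = new Node[m]; }

    private int hash(int key) { return Math.floorMod(key, table.length); } // placeholder hash

    // Insert at the head of the chain, so the last inserted synonym is found first.
    void insert(int key, String value) {
        int h = hash(key);
        table[h] = new Node(key, value, table[h]);
    }

    // Return the value stored under key, or null if the key is absent.
    String find(int key) {
        for (Node n = table[hash(key)]; n != null; n = n.next)
            if (n.key == key) return n.value;
        return null;
    }
}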
OPEN ADDRESSING Open addressing uses no extra space for collision resolution. In-
stead, we move one of the colliding elements to another slot in the array. We may
use LIFO (last-in, first out — the new element must move), FIFO (first in, first out —
the old element must move), or more complicated methods such as Robin Hood or
cuckoo hashing (see Notes). For our purposes here, we use LIFO.
Each collision resolution policy probes another array slot, and if empty inserts
the currently homeless element. If the probed slot is not empty, we probe again to
find a slot in which to insert the currently homeless element, and so on until we
finish insertion. The probe sequence used can be a simple fixed sequence, or given
by a more complicated rule (but is always deterministic). They all have the property
that they “wrap around” the array when they reach the end. The two most common
probing methods are:
• (Linear probing) always probe the element to the left;
• (Double hashing) probe to the left by an amount determined by the value of a
secondary hash function.
Note. The choice of probing to the left versus probing to the right is clearly a matter
of convention; the reader should note that other books may use rightward probing
in their definitions.
Example 3.25. Table 3.3 shows how OALP fills the hash table of size 10 using the two-
digit keys and the hash function of Example 3.24. The first five insertions have found
empty addresses. However, the key–value pair [31, F] has a collision because the
address h(31) = 3 is already occupied by the pair [39, E] with the same hash address,
h(39) = 3. Thus, the next lower table address, location 2, is probed to see if it is empty,
and in the same way the next locations 1 and 0 are checked. The address 0 is empty
so that the pair [31, F] can be placed there.
A similar collision occurs when we try to insert the next pair, [24, G], because the
hash address h(24) = 2 for the key 24 is already occupied by the previous pair [20, A].
Consequently, we probe successive lower locations 1 and 0, and since they both are
already occupied, we wrap around and continue the search at the highest location
9. Because it is empty, the pair [24, G] is inserted in this location yielding the final
configuration given in Table 3.3.
OALP is simple to implement but the hash table may degenerate due to clustering. A cluster is a sequence of adjacent occupied table entries. OALP tends to
form clusters around the locations where one or more collisions have occurred. Each
collision is resolved using the next empty location available for sequential probing.
Therefore, other collisions become more probable in that neighbourhood, and the
larger the clusters, the faster they grow. As a result, a search for an empty address to
place a collided key may turn into a very long sequential search.
Another probing scheme, double hashing, reduces the likelihood of clustering.
In double hashing, when a collision occurs, the key is moved by an amount deter-
mined by a secondary hash function ∆. Let h denote the primary hash function.
Then for each key k we have the starting probe address i_0 = h(k) and the probe decrement ∆(k). Each successive probe position is i_t = (i_{t−1} − ∆(k)) mod m for t = 1, 2, . . . , where m is the table size.
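The probe sequences for both schemes are easy to generate. The following Java sketch is our own illustration, not library code; the primary hash function is assumed to be the leading digit of a two-digit key, since that choice reproduces the addresses quoted in Examples 3.25 and 3.26.

import java.util.function.IntUnaryOperator;

// Sketch: generate the first few probe addresses for a key,
// probing to the left as in the text: i_t = (i_{t-1} - delta) mod m.
public class ProbeSequence {
    public static int[] probes(int key, int m, IntUnaryOperator h, IntUnaryOperator delta, int count) {
        int[] seq = new int[count];
        int i = Math.floorMod(h.applyAsInt(key), m);   // starting address i_0 = h(k)
        int d = delta.applyAsInt(key);                  // probe decrement (1 for linear probing)
        for (int t = 0; t < count; t++) {
            seq[t] = i;
            i = Math.floorMod(i - d, m);                // wrap around the table
        }
        return seq;
    }

    public static void main(String[] args) {
        int m = 10;
        IntUnaryOperator h = k -> (k / 10) % m;                 // assumed hash: leading digit of a two-digit key
        IntUnaryOperator linear = k -> 1;                       // OALP: always step one slot to the left
        IntUnaryOperator dbl = k -> (h.applyAsInt(k) + k) % m;  // OADH decrement of Example 3.26
        System.out.println(java.util.Arrays.toString(probes(31, m, h, linear, 4))); // [3, 2, 1, 0]
        System.out.println(java.util.Arrays.toString(probes(31, m, h, dbl, 4)));    // [3, 9, 5, 1]
    }
}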
Example 3.26. Table 3.4 shows how OADH fills the same hash table as in Exam-
ple 3.25 if the hash function is given by ∆(k) = (h(k) + k) mod 10.
Now when we try to place the key–value pair [31, F] into position h(31) = 3, the
collision is resolved by probing the table locations with decrement ∆(31) = 4. The
first position, (3 − 4) mod 10 = 9 is empty so that the pair [31, F] can be placed there.
For the collision of the pair, [24, G], at location 2 the decrement ∆(24) = 6 immedi-
ately leads to the empty location 6. The final configuration in Table 3.4 contains three small clusters instead of the single large cluster in Table 3.3.
Generally, OADH results in more uniform hashing: it forms more clusters than OALP, but they are smaller. Linear probing extends each cluster at its lower-address end, and nearby clusters merge into larger clusters that grow even faster. Double hashing neither extends clusters systematically at one end nor tends to join nearby clusters.
Analysis of hash tables
The time complexity of searching in and inserting items in a hash table of size m with n already occupied entries is determined by the load factor λ := n/m. In open addressing, 0 ≤ λ < 1: λ equals the fraction of occupied slots in the array, and cannot
be exactly equal to 1 because a hash table should have at least one empty entry in
order to efficiently terminate the search for a key or the insertion of a new key.
Open addressing and separate chaining require n probes in the worst case, since
all elements of the hash table may be synonyms. However the basic intuition is that
provided the table is not too full, collisions should be rare enough that searching for
an element requires only a constant number of probes on average.
Thus we want a result such as: “Provided the load factor is kept bounded (and
away from 1 in the case of open addressing), all operations in a hash table take Θ(1)
time in the average case.”
In order to have confidence in this result, we need to describe our mathematical
model of hashing. Since a good hash function should scatter keys randomly, and we
have no knowledge of the input data, it is natural to use the “random balls in bins”
model. We assume that we have thrown n balls one at a time into m bins, each ball
independently and uniformly at random.
For our analysis, it will be useful to use the function Q defined below.
Q(m, n) = m!/((m − n)! mⁿ) = (m/m) · ((m − 1)/m) · · · ((m − n + 1)/m).
BASIC ANALYSIS OF COLLISIONS It is obvious that if we have more items than the size
of the hash table, at least one collision must occur. But the distinctive feature of
collisions is that they are relatively frequent even in almost empty hash tables.
The birthday paradox refers to the following surprising fact: if there are 23 or
more people in a room, the chance is greater than 50% that two or more of them have
the same birthday. Note: this is not a paradox in the sense of a logical contradiction,
but just a “counter-intuitive” fact that violates “common sense”.
More precisely, if each of 365 bins is selected with the same chance 1/365, then after 23 entries have been inserted, the probability that at least one collision has occurred (at least one bin has at least two balls) is more than 50%. Although the table is only 23/365 (about 6.3%) full, more often than not at least one collision has already happened among these few insertions!
Let us see how the birthday paradox occurs. Let m and n denote the size of a table
and the number of items to insert, respectively. Let Prm (n) be the probability of at
least one collision when n balls are randomly placed into m bins.
Lemma 3.28. The probability of no collisions when n balls are thrown indepen-
dently into m boxes uniformly at random is Q(m, n). Thus Prm (n) = 1 − Q(m, n) and
the expected number of balls thrown until the first collision is ∑n≤m Q(m, n).
Proof. Let πm(n) be the probability of no collisions. The "no collision" event after inserting ν items, ν = 2, . . . , n, is the joint event of "no collision" after inserting the preceding ν − 1 items and "no collision" after inserting one more item, given that ν − 1 positions are already occupied. Thus πm(ν) = πm(ν − 1) Pm(no collision | ν − 1), where Pm(no collision | ν) denotes the conditional probability of no collision for a single item inserted into a table with m − ν unoccupied positions. This latter probability is simply (m − ν)/m.
This then yields immediately
πm(n) = (m/m) · ((m − 1)/m) · · · ((m − n + 1)/m) = m(m − 1) · · · (m − n + 1)/mⁿ = m!/(mⁿ (m − n)!).
Therefore, Prm(n) = 1 − m!/(mⁿ (m − n)!) = 1 − Q(m, n), which gives the first result.
The number of balls is at least n + 1 with probability Q(m, n). Since the expected value of a random variable T taking on nonnegative integer values can always be computed by E[T] = ∑_{i≥1} i Pr(T = i) = ∑_{j≥0} Pr(T > j), and these latter probabilities are zero when j > m, the second result follows.
Table 3.5 presents (to 4 decimal places) some values of Prm (n) for m = 365 and
n = 5 . . . 100. As soon as n = 47 (the table with 365 positions is only 12.9% full),
the probability of collision is greater than 0.95. Thus collisions are frequent even
in sparsely occupied tables.
n 5 10 15 20 22
Pr365 (n) 0.0271 0.1169 0.2529 0.4114 0.4757
n 23 25 30 35 40
Pr365 (n) 0.5073 0.5687 0.7063 0.8144 0.8912
n 45 50 55 60 65
Pr365 (n) 0.9410 0.9704 0.9863 0.9941 0.9977
n 70 75 80 90 100
Pr365 (n) 0.9992 0.9997 0.9999 1.0000 1.0000
Figure 3.15: The probability Pr365(n) of at least one collision, as a function of n.
Figure 3.15 shows the graph of Pr365 (n) as a function of n. The median of this
distribution occurs around n = 23, as we have said above, and so 23 or more balls
suffice for the probability of a collision to exceed 1/2. Also, the expected number of balls thrown until the first collision is easily computed to be about 25 (more precisely, 24.6).
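These numbers are easy to verify directly. The short Java sketch below is our own illustration; it computes Q(m, n) and Prm(n) from their definitions and reproduces the median 23 and the expected value of about 25.

// Sketch: compute Q(m, n) = (m/m)((m-1)/m)...((m-n+1)/m) and Pr_m(n) = 1 - Q(m, n).
public class Birthday {
    static double q(int m, int n) {
        double prod = 1.0;
        for (int i = 0; i < n; i++) prod *= (double) (m - i) / m;
        return prod;
    }

    public static void main(String[] args) {
        int m = 365;
        // First n with collision probability above 1/2.
        int n = 1;
        while (1 - q(m, n) <= 0.5) n++;
        System.out.println("median n = " + n);             // 23
        // Expected number of balls thrown until the first collision: E[T] = sum_{n <= m} Q(m, n).
        double expected = 0.0;
        for (int k = 0; k <= m; k++) expected += q(m, k);
        System.out.println("expected = " + expected);       // about 24.6
        System.out.printf("Pr_365(23) = %.4f%n", 1 - q(m, 23)); // 0.5073
    }
}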
When the load factor is much less than 1, the average number of balls per bin is
small. If the load factor exceeds 1 then the average number is large. In each case
analysis is not too difficult. For hashing to be practical, we need to be able to fill
a hash table as much as possible before we spend valuable time rehashing — that
is, allocating more space for the table and reassigning hash codes via a new hash
function. Thus we need to analyse what happens when the load factor is comparable
to 1, that is, when the number of balls is comparable to the number of bins. This also
turns out to be the most interesting case mathematically.
THEORETICAL ANALYSIS OF HASHING In addition to the basic operations for arrays,
we also consider the computation of the hash address of an item to be an elementary
operation.
Chaining is relatively simple to analyse.
Lemma 3.29. The expected running time for unsuccessful search in a hash table
with load factor λ using separate chaining is given by
Tus (λ) = 1 + λ.
To analyse open addressing, we must make some extra assumptions. We use the
uniform hashing hypothesis: each configuration of n keys in a hash table of size m
is equally likely to occur. This is what we would expect of a truly “random” hash
function, and it seems experimentally to be a good model for double hashing. Note
that this is stronger than just requiring that each key is independently and uniformly
likely to hash initially to each slot before collision resolution (“random balls in bins”).
It also implies that all probe sequences are equally likely.
Lemma 3.30. Assuming the uniform hashing hypothesis holds, the expected numbers of probes for unsuccessful and successful search in a hash table with load factor λ satisfy
Tus(λ) ≤ 1/(1 − λ)
and
Tss(λ) ≤ (1/λ) ln(1/(1 − λ)).
Proof. The average number of probes for an unsuccessful search is Tus(λ) = ∑_{i≥1} i pm,n(i), where pm,n(i) denotes the probability of exactly i probes during the search. Obviously, pm,n(i) = Pr(m, n, i) − Pr(m, n, i + 1), where Pr(m, n, i) is the probability of i or more probes in the search. By a similar argument to that used in the birthday problem analysis we have, for i ≥ 2,
Pr(m, n, i) = (n/m) · ((n − 1)/(m − 1)) · · · ((n − i + 2)/(m − i + 2)).
Each factor is at most n/m = λ, so Pr(m, n, i) ≤ λ^(i−1); and since ∑_{i≥1} i pm,n(i) = ∑_{i≥1} Pr(m, n, i), we obtain
Tus(λ) ≤ ∑_{i≥1} λ^(i−1) = 1/(1 − λ).
For a successful search, note that under the uniform hashing hypothesis the search for a key follows exactly the same probe sequence as its insertion did, and the (i + 1)-st inserted key was inserted into a table with load factor i/m. Averaging the unsuccessful-search bound over the n insertions gives
Tss(λ) ≤ (1/n) ∑_{i=0}^{n−1} m/(m − i) = (1/λ) ∑_{j=m−n+1}^{m} 1/j ≤ (1/λ) ∫_{m−n}^{m} dx/x = (1/λ) ln(m/(m − n)) = (1/λ) ln(1/(1 − λ)).
Proof (of Lemma 3.31). The proof is beyond the scope of this book (see Notes).
The relationships in Lemma 3.31 and Lemma 3.30 completely fail when λ = 1.
But the latter situation indicates a full hash table, and we should avoid getting close
to it anyway.
Unlike OALP and OADH, the time estimates for separate chaining (SC) remain
valid with data removals. Because each chain may keep several table elements, the
load factor may be more than 1.
Table 3.6 presents the above theoretical estimates of the search time in the OALP, OADH, and SC hash tables under different load factors.
Table 3.6: Average search time bounds in hash tables with load factor λ.
Average time measurements for actual hash tables [12] are close to the estimates for SC tables in the whole range
λ ≤ 0.99 and seem to be valid for larger values of λ, too. The measurements for OADH
tables remain also close to the estimates up to λ = 0.99. But for OALP tables, the
measured time is considerably less than the estimates if λ > 0.90 for a successful
search and λ > 0.75 for an unsuccessful search.
Example 3.32. The expected performance of hashing depends only on the load factor. If λ = 0.9, OADH takes on average 2.56 probes for a successful search and 10 probes for an unsuccessful search. But if λ = 0.5, that is, if the same keys are stored in a table roughly twice as large, these numbers decrease to 1.39 and 2 probes, respectively.
Implementation of hashing
RESIZING One problem with open addressing is that successive insertions may cause
the table to become almost full, which degrades performance. Eventually we will
need to increase the table size. Doing this each time an element is inserted is very
inefficient. It is better to use an upper bound, say 0.75, on the load factor, and to
double the array size when this threshold is exceeded. This will then require recom-
puting the addresses of each element using a new hash function.
The total time required to resize, when growing a table from 0 to m = 2k elements,
is of order 1 + 2 + 4 + 8 + · · · + 2k−1 = 2k − 1 = m − 1. Since the m insertions take time
of order m (recall the table always has load factor bounded away from 1), the average
insertion time is still Θ(1).
DELETION It is quite easy to delete a table entry from a hash table with separate
chaining (by mere node deletion from a linked list). However, open addressing en-
counters difficulties. If a particular table entry is physically removed from a OA-hash
table leaving an empty entry in that place, the search for subsequent keys becomes
invalid. This is because the OA-search terminates when the probe sequence en-
counters an empty table entry. Thus if a previously occupied entry is emptied, all
probe sequences that previously travelled through that entry will now terminate be-
fore reaching the right location.
To avoid this problem, the deleted entry is normally marked in such a way that
insertion and search operations can treat it as an empty and nonempty location,
respectively. Unfortunately, such a policy results in hash tables packed with entries
which are marked as deleted. But in this case the table entries can be rehashed to
preserve only actual data and really delete all marked entries. In any case, the time
to delete a table entry remains O(1) both for SC and OA hash tables.
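A sketch of the marking policy described above, in Java (our own illustrative code, not the book's library): search treats a marked slot as occupied, so probe sequences continue past it, while insertion treats it as free. The sentinel key and the use of linear probing are assumptions of this sketch.

// Sketch: open addressing with linear probing and "deleted" markers (tombstones).
// Assumes the table always keeps at least one empty (null) slot, as discussed above,
// and that duplicate keys are not inserted.
public class OpenAddressingTable {
    private static final Integer DELETED = Integer.MIN_VALUE; // sentinel key, assumed unused by clients
    private final Integer[] keys;

    public OpenAddressingTable(int m) { keys = new Integer[m]; }

    private int hash(int key) { return Math.floorMod(key, keys.length); }

    public void insert(int key) {
        int i = hash(key);
        // Insertion may reuse a deleted slot: treat DELETED as empty.
        while (keys[i] != null && !keys[i].equals(DELETED)) i = Math.floorMod(i - 1, keys.length);
        keys[i] = key;
    }

    public boolean contains(int key) {
        int i = hash(key);
        // Search must step over deleted slots: treat DELETED as occupied.
        while (keys[i] != null) {
            if (keys[i] == key) return true;
            i = Math.floorMod(i - 1, keys.length);
        }
        return false;
    }

    public void delete(int key) {
        int i = hash(key);
        while (keys[i] != null) {
            if (keys[i] == key) { keys[i] = DELETED; return; } // mark the slot, do not empty it
            i = Math.floorMod(i - 1, keys.length);
        }
    }
}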
CHOOSING A HASH FUNCTION Ideally, the hash function, h(k), has to map keys uni-
formly and randomly onto the entire range of hash table addresses. Therefore, the
choice of this function has much in common with the choice of a generator of uni-
formly distributed pseudorandom numbers. A randomly chosen key k has to equiprob-
ably hash to each address so that uniformly distributed keys produce uniformly dis-
tributed indices h(k). A poorly designed hash function distributes table addresses
nonuniformly and tends to cluster indices for contiguous clusters of keys. A well
designed function scatters the keys so as to avoid their clustering as much as possible.
If a set of keys is fixed, there always exists a perfect hash function that maps
the set one-to-one onto a set of table indices and thus entirely excludes collisions.
However, the problem is how to design such a function as it should be computed
quickly but without using large tables. There exist techniques to design perfect hash
functions for given sets of keys. But perfect hashing is of very limited interest be-
cause in most applications data sets are not static and the sets of keys cannot be
pre-determined.
Four basic methods for choosing a hash function are division, folding , middle-
squaring , and truncation.
Division assuming the table size is a prime number m and the keys k are integers, the quotient q(k, m) = ⌊k/m⌋ and the remainder r(k, m) = k mod m of the integer division of k by m specify the probe decrement for double hashing and the value of the hash function h(k), respectively:
h(k) = r(k, m) and ∆(k) = max {1, q(k, m) mod m} .
The probe decrement is restricted to the range [1, . . . , m − 1] because all decrements should be nonzero and point to the indices [0, 1, . . . , m − 1] of the table. The
reason that m should be prime is that otherwise some slots may be unreachable
by a probe sequence: for example if m = 12 and ∆(k) = 16, only 3 slots will be
probed before the sequence returns to the starting position.
Folding an integer key k is divided into sections and the value h(k) combines sums,
differences, and products of the sections (for example, a 9-digit decimal key,
such as k = 013402122, can be split into three sections: 013, 402, and 122, to be
added together to get the value h(k) = 537 in the range [0, . . . , 2997]).
Middle-squaring a middle section of an integer key k, is selected and squared, then
a middle section of the result is the value h(k) (for example, the squared middle
section, 402, of the above 9-digit key, k = 013402122, results in 161604, and the
middle four digits give the value h(k) = 6160 in the range [0, . . . , 9999]).
Truncation parts of a key are simply cut out and the remaining digits, or bits, or
characters are used as the value h(k) (for example, deleting all but the last three digits of the above 9-digit key, k = 013402122, gives the value h(k) = 122 in the
range [0, . . . , 999]). While truncation is extremely fast, the keys do not scatter
randomly and uniformly over the hash table indices. This is why truncation is
used together with other methods, but rarely by itself.
Many real-world hash functions combine some of the above methods.
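For illustration, the four constructions might be coded as below. This is our own toy Java sketch for decimal integer keys, using the worked 9-digit key k = 013402122 from the examples above; it is not a recommended production hash function.

// Toy versions of the four hash-function constructions, for decimal integer keys.
public class HashConstructions {
    // Division: h(k) = k mod m, probe decrement max{1, floor(k/m) mod m}.
    static int division(long k, int m) { return (int) (k % m); }
    static int divisionDecrement(long k, int m) { return (int) Math.max(1, (k / m) % m); }

    // Folding: split a 9-digit key into three 3-digit sections and add them.
    static int folding(long k) {
        return (int) (k % 1000 + (k / 1000) % 1000 + (k / 1000000) % 1000); // range [0, 2997]
    }

    // Middle-squaring: square the middle 3-digit section and take the middle four digits.
    static int middleSquaring(long k) {
        long mid = (k / 1000) % 1000;        // e.g. 402 for k = 013402122
        long sq = mid * mid;                 // 161604
        return (int) ((sq / 10) % 10000);    // middle four digits: 6160
    }

    // Truncation: keep only the last three digits.
    static int truncation(long k) { return (int) (k % 1000); }

    public static void main(String[] args) {
        long k = 13402122L;                    // the key 013402122 from the text
        System.out.println(folding(k));        // 537
        System.out.println(middleSquaring(k)); // 6160
        System.out.println(truncation(k));     // 122
    }
}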
We conclude by discussing the idea of universal hashing. We have seen (in the
section on quicksort) the idea of using randomization to protect against bad worst-
case behaviour. An analogous idea works for hashing.
If a hash table is dynamically changing and its elements are not known in ad-
vance, any fixed hash function can result in very poor performance on certain in-
puts, because of collisions. Universal hashing allows us to reduce the probability of
this occurrence by randomly selecting the hash function at run time from a large
set of such functions. Each selected function still may be bad for a particular input,
but with a low probability which remains equally low if the same input is met once
again. Due to its internal randomisation, universal hashing behaves well even for
totally nonrandom inputs.
Definition 3.33. Let K, m, and F denote a set of keys, the size of a hash table (the range of indices), and a set of hash functions mapping K to {0, . . . , m − 1}, respectively. Then F is a universal class if any pair of distinct keys k, κ ∈ K collides under at most a fraction 1/m of the functions in the class F, that is,
(1/|F|) · |{h ∈ F | h(k) = h(κ)}| ≤ 1/m.
Thus in a universal class all key pairs behave well, and the random selection of a function from the class results in a probability of at most 1/m that any given pair of distinct keys collides.
One popular universal class of hash functions is produced by a simple division method. It assumes that the keys are integers and that the cardinality of the key set K is a prime number larger than the largest actual key. The size m of the hash table can be arbitrary. This universal class is described by the next theorem.
Theorem 3.34 (Universal Class of Hash Functions). Let K = {0, . . . , p − 1} and |K| = p
be a prime number. For any pair of integers a ∈ {1, . . . , p − 1} and b ∈ {0, . . . , p − 1}, let
ha,b (k) = ((ak + b) mod p) mod m. Then the class F = {ha,b | a ∈ {1, . . . , p − 1}, b ∈ {0, . . . , p − 1}}
is a universal class.
Proof. It is easily shown that, for any fixed pair of distinct keys k, κ ∈ K, the number of collisions in the class F,
|{h ∈ F | h(k) = h(κ)}|,
is the number of pairs of distinct numbers (x, y), 0 ≤ x, y < p, such that x mod m = y mod m. Let us denote the latter property x ≡ y (mod m). Indeed, it is evident that ha,b(k) = ha,b(κ) if and only if x ≡ y (mod m), where x = (ak + b) mod p and y = (aκ + b) mod p.
Then for any fixed distinct k, κ < p, there is a one-to-one correspondence between the pairs (a, b) such that 0 < a < p, 0 ≤ b < p, and ha,b(k) = ha,b(κ), and the pairs of distinct numbers (x, y) with the property that 0 ≤ x, y < p and x ≡ y (mod m). The correspondence is given in one direction by
x = (ak + b) mod p and y = (aκ + b) mod p,
where x ≠ y since
{az + b | z = 0, . . . , p − 1} = {0, . . . , p − 1}
when p is prime and a ≠ 0. In the other direction the correspondence is given by the condition that a and b are the unique integers in {0, . . . , p − 1} such that
(ak + b) mod p = x and (aκ + b) mod p = y.
These equations have a unique solution for a and b since p is prime, and a ≠ 0 since x ≠ y.
Clearly |F| = p(p − 1). Now let us find out how many pairs of distinct numbers (x, y) exist such that 0 ≤ x, y < p and x ≡ y (mod m). For any fixed s < m there are at most ⌈p/m⌉ numbers x < p such that x ≡ s (mod m). Since p and m are integers, ⌈p/m⌉ ≤ (p − 1)/m + 1. Therefore for each x < p there are no more than ⌈p/m⌉ − 1 ≤ (p − 1)/m numbers y < p distinct from x such that x ≡ y (mod m), and the total number of such pairs (x, y) is at most p(p − 1)/m. Hence for any fixed pair of distinct keys (k, κ) the fraction of F that causes k and κ to collide is at most 1/m, so the class F is universal.
This suggests the following strategy for choosing a hash function at run time: (i)
find the current size of the set of keys to hash; (ii) select the next prime number p
larger than the size of the key set found; (iii) randomly choose integers a and b such
that 0 < a < p and 0 ≤ b < p, and (iv) use the function ha,b defined in Theorem 3.34.
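A possible Java rendering of this strategy is sketched below (our own code, not the book's library). Instead of computing the next prime at run time, it assumes the fixed prime p = 2³¹ − 1, which exceeds every non-negative int key; that simplification, and all names here, are ours.

import java.util.Random;

// Sketch: randomly select h_{a,b}(k) = ((a*k + b) mod p) mod m from the class of Theorem 3.34.
public class UniversalHash {
    private static final long P = 2147483647L;  // the prime 2^31 - 1, larger than any non-negative int key
    private final long a, b;
    private final int m;

    public UniversalHash(int m, Random rnd) {
        this.m = m;
        this.a = 1 + rnd.nextInt((int) (P - 1)); // a in {1, ..., p-1}
        this.b = rnd.nextInt((int) P);           // b in {0, ..., p-1}
    }

    public int hash(int key) {
        long k = key;                            // 0 <= k < p, so a*k + b fits in a long
        return (int) (((a * k + b) % P) % m);
    }

    public static void main(String[] args) {
        UniversalHash h = new UniversalHash(1000, new Random());
        System.out.println(h.hash(1234567));     // some address in [0, 999]
    }
}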
Exercises
Exercise 3.5.1. The Java programming language (as of time of writing) uses the fol-
lowing hash function h for character strings. Each character has a Unicode value
represented by an integer (for example, the upper case letters A, B, . . . , Z correspond
to 65, 66, . . . , 90 and the lower case a, b, . . . , z correspond to 97, 98, . . . , 122). Then h is computed for a string s0 s1 . . . sn−1 of length n, using 32-bit integer arithmetic, via
h(s0 s1 . . . sn−1) = s0 · 31^(n−1) + s1 · 31^(n−2) + · · · + sn−2 · 31 + sn−1.
Find two 2-letter strings that have the same hash value. How could you use this
to make 2100 different strings all of which have the same hash code?
Exercise 3.5.2. Place the sequence of keys k = 10, 26, 52, 76, 13, 8, 3, 33, 60, 42 into a
hash table of size 13 using the modulo-based hash address i = k mod 13 and linear
probing to resolve collisions.
Exercise 3.5.3. Place the sequence of keys k = 10, 26, 52, 76, 13, 8, 3, 33, 60, 42 into a
hash table of size 13 using the modulo-based hash address i = k mod 13 and double
hashing with the secondary hash function ∆(k) = max{1, k/13} to resolve collisions.
Exercise 3.5.4. Place the sequence of keys k = 10, 26, 52, 76, 13, 8, 3, 33, 60, 42 into a
hash table of size 13 using separate chaining to resolve collisions.
3.6 Notes
Binary search, while apparently simple, is notoriously hard to program correctly
even for professional programmers: see [2] for details.
The expected height of a randomly grown BST was shown to be Θ(log n) by J. M. Robson in 1979. After much work by many authors it is now known that the average value is tightly concentrated around α ln n, where α ≈ 4.311 is the root of x ln(2e/x) = 1.
The historically first balanced binary search tree was proposed in 1962 by G. M.
Adelson-Velskii and E. M. Landis, hence the name AVL tree. Red-black trees were
developed in 1972 by R. Bayer under the name “symmetric binary B-trees” and re-
ceived their present name and definition from L. Guibas and R. Sedgewick in 1978.
AA-trees were proposed by A. Anderson in 1993.
Multiway B-trees were proposed in 1972 by R. Bayer and E. McCreight.
According to D. Knuth, hashing was invented at IBM in the early 1950s simultaneously and independently by H. P. Luhn (hash tables with SC) and G. M. Amdahl (OALP).
The analysis of OALP hashing was first performed by D. Knuth in 1962. This was
the beginning of the modern research field of analysis of algorithms.
The random balls in bins model can be analysed in detail by more advanced methods than we present in this book (see for example [6]). Some natural questions, phrased in terms of our analysis of chaining, are: when are all chains expected to be nonempty? how many chains are empty when the average chain length is Θ(1)? and what is the maximum chain length when the average chain length is Θ(1)? The answers are known to be, respectively: when n ≈ m ln m; a fraction of about e^(−λ); and Θ(log n/ log log n). The last result is much harder to derive than the other two.
Part II
Note. In the mathematical language of relations, the definition says that E is a rela-
tion on V . If (u, v) ∈ E, we say that v is adjacent to u, that v is an out-neighbour of u,
and that u is an in-neighbour of v.
We can think of a node as being a point and an arc as an arrow from one node
to another. This allows us to draw pictures that suggest ideas. The pictures cannot
prove anything, however.
Figure 4.1: A graph G1 and a digraph G2, each with nodes 0, 1, 2, 3, 4.
Very often the adjacency relation is symmetric (all streets are two-way). There
are two ways to deal with this. We can use a digraph that happens to be symmetric
(in other words, (u, v) is an arc if and only if (v, u) is an arc). However, it is sometimes
simpler to reduce this pair of arcs into a single undirected edge that can be traversed
in either direction.
Definition 4.2. A graph G = (V, E) is a finite nonempty set V of vertices together with
a (possibly empty) set E of unordered pairs of vertices of G called edges.
Note. Since we defined E to be a set, there are no multiple arcs/edges between a
given pair of nodes/vertices.
Non-fluent speakers of English please note: the singular of “vertices” is not “ver-
tice”, but “vertex”.
For a given digraph G we may also denote the set of nodes by V (G) and the set of
arcs by E(G) to lessen any ambiguity.
Example 4.3. We display a graph G1 and a digraph G2 in Figure 4.1. The nodes/vertices
are labelled 0, 1, . . . as in the picture. The arcs and edges are as follows.
E(G1 ) = {{0, 1}, {0, 2}, {1, 2}, {2, 3}, {2, 4}, {3, 4}}
E(G2 ) = {(0, 2), (1, 0), (1, 2), (1, 3), (3, 1), (3, 4), (4, 2)}
Note. Some people like to view a graph as a special type of digraph where every
unordered edge {u, v} is replaced by two directed arcs (u, v) and (v, u). This has the
advantage of allowing us to consider only digraphs, and we shall use this approach
in our Java implementation in Appendix B. It works in most instances.
However, there are disadvantages; for some purposes we must know whether our
object is really a graph or just a symmetric digraph. Whenever there is (in our opin-
ion) a potential ambiguity, we shall point it out.
Example 4.4. Every rooted tree (see Section D.7) can be interpreted as a digraph:
there is an arc from each node to each of its children.
Every free tree is a graph of a very special type (see Appendix D.7).
Note. (Graph terminology) The terminology in this subject is unfortunately not com-
pletely standard. Some authors call a graph by the longer term “undirected graph”
and use the term “graph” to mean what we call a directed graph. However when us-
ing our definition of a graph, it is standard practice to abbreviate the phrase “directed
graph” with the word digraph.
We shall be dealing with both graphs and digraphs throughout these notes. In
order to save writing “(di)graph” too many times, we make the following conven-
tion. We treat the digraph as the fundamental concept. In other words, we shall use
the terminology of digraphs, nodes and arcs, with the understanding that if this is
changed to graphs, edges, and vertices, the resulting statement is still true. However,
if we talk about graphs, edges, and vertices, our statement is not necessarily true for
digraphs. Whenever a result is true for digraphs but not for graphs, we shall say this
explicitly (this happens very rarely).
There is another convention to discuss. An arc that begins and ends at the same
node is called a loop. We make the convention that loops are not allowed in our di-
graphs. Again, other authors may differ. If our conventions are relaxed to allow mul-
tiple arcs and/or loops, many of the algorithms below work with no modification or
with only very minor modification required. However dealing with loops frequently
requires special cases to be considered, and would distract us from our main goal
of introducing the field of graph algorithms. As an example of the problems caused
by loops, suppose that we represent a graph as a symmetric digraph as described
above. How do we represent a loop in the graph?
Definition 4.5. The order of a digraph G = (V, E) is |V |, the number of nodes. The
size of G is |E|, the number of arcs.
Example 4.7. For the graph G1 of Figure 4.1 the following sequences of vertices are
classified as being walks, paths, or cycles.
Example 4.8. For the digraph G2 of Figure 4.1 the following sequences of nodes are
classified as being walks, paths, or cycles.
Definition 4.9. In a graph, the degree of a vertex v is the number of edges meeting
v. In a digraph, the outdegree of a node v is the number of out-neighbours of v, and
the indegree of v is the number of in-neighbours of v.
A node of indegree 0 is called a source and a node of outdegree 0 is called a sink.
If the nodes have a natural order, we may simply list the indegrees or outdegrees
in a sequence.
Example 4.10. For our graph G1 , the degree sequence is (2, 2, 4, 2, 2). The in-degree
sequence and out-degree sequence of the digraph G2 are (1, 1, 3, 1, 1) and (1, 3, 0, 2, 1),
respectively. Node 2 is a sink.
Definition 4.11. The distance from u to v in G, denoted by d(u, v), is the minimum
length of a path from u to v. If no path exists, the distance is undefined (or +∞).
There are several ways to create new digraphs from old ones.
One way is to delete (possibly zero) nodes and arcs in such a way that the result-
ing object is still a digraph (there are no arcs missing any endpoints!).
Example 4.14. Figure 4.2 shows (on the left) a subdigraph and (on the right) a span-
ning subdigraph of the digraph G2 of Figure 4.1.
Figure 4.2: A subdigraph (left) and a spanning subdigraph (right) of the digraph G2.
Example 4.16. Figure 4.3 shows the subdigraph of the digraph G2 of Figure 4.1 in-
duced by {1, 2, 3}.
Definition 4.17. The reverse digraph of the digraph G = (V, E), is the digraph Gr =
(V, E ′ ) where (u, v) ∈ E ′ if and only if (v, u) ∈ E.
Example 4.18. Figure 4.4 shows the reverse of the digraph G2 of Figure 4.1.
Figure 4.3: The subdigraph of G2 induced by {1, 2, 3}.
Figure 4.4: The reverse digraph of the digraph G2.
It is sometimes useful to replace all one-way streets with two-way streets. The
formal definition must take care not to introduce multiple edges. Note below that if
(u, v) and (v, u) belong to E, then only one edge joins u and v in G′ . This is because
{u, v} and {v, u} are equal as sets, so appear only once in the set E ′ .
Example 4.20. Figure 4.5 shows the underlying graph of the digraph G2 of Figure 4.1.
Figure 4.5: The underlying graph of the digraph G2.
Exercise 4.1.2. Let G be a digraph of order n and u, v nodes of G. Show that d(u, v) ≤
n − 1 if there is a walk from u to v.
Exercise 4.1.3. Prove that in a sparse digraph, the average indegree of a node is O(1),
while in a dense digraph, the average indegree of a node is Ω(n).
The sequence Li may or may not be sorted in order of increasing node number.
Our convention is to sort them whenever it is convenient. (However, many imple-
mentations, such as the one given in Appendix B, do not enforce that their adjacency
lists be sorted.)
We can see the structure of these representations more clearly with examples.
Example 4.23. For the graph G1 and digraph G2 of Example 4.3, the adjacency ma-
trices are given below.
G1:
0 1 1 0 0
1 0 1 0 0
1 1 0 1 1
0 0 1 0 1
0 0 1 1 0

G2:
0 0 1 0 0
1 0 1 1 0
0 0 0 0 0
0 1 0 0 1
0 0 1 0 0
Notice that the number of 1’s in a row (column) is the outdegree (indegree) of the
corresponding node. The corresponding adjacency lists are now given.
G1:
0: 1 2
1: 0 2
2: 0 1 3 4
3: 2 4
4: 2 3

G2:
0: 2
1: 0 2 3
2:
3: 1 4
4: 2
Note. Only the out-neighbours are listed in the adjacency lists representation. An empty sequence can occur (for example, sequence 2 of the digraph G2). If the nodes are not numbered in the usual way 0, . . . , n − 1 (for example, if they are numbered 1, . . . , n or labelled A, B, C, . . . ), we may include these labels if necessary.
It is often useful to input several digraphs from a single file. Our standard format
is as follows. The file consists of several digraphs one after the other. To distinguish
the beginning of one and the end of the other we have a single line giving the order at
the beginning of each graph. If the order is n then the next n lines give the adjacency
matrix or adjacency lists representation of the digraph. The end of the file is marked
with a line denoting a digraph of order 0.
For example, the following file describes a single digraph of order 3 (with arcs (0, 2), (1, 0), and (2, 1)) in adjacency lists format, followed by the line 0 that marks the end of the file:
3
2
0
1
0
There are also other specialized (di)graph representations besides the two men-
tioned in this section. These data structures take advantage of special structure for
improved storage or access time, often for families of graphs sharing a common
property. For such specialized purposes they may be better than either the adja-
cency matrix or lists representations.
For example, trees can be stored more efficiently. We have already seen in Sec-
tion 2.5 how a complete binary tree can be stored in an array. A general rooted tree
of n nodes can be stored in an array pred of size n. The value pred[i] gives the parent
of node i. The root is a special case and can be given value −1 (representing a NULL
pointer), for example, if we number nodes from 0 to n − 1 in the usual way. This of
course is a form of adjacency lists representation, where we use in-neighbours in-
stead of out-neighbours.
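For example, a small rooted tree stored this way might look as follows; the tree and its shape in this Java sketch are made up purely for illustration.

// Sketch: a rooted tree on nodes 0..5 stored as a parent ("pred") array.
// Node 0 is the root (pred = -1); nodes 1 and 2 are children of 0, and so on.
public class TreeAsPredArray {
    public static void main(String[] args) {
        int[] pred = {-1, 0, 0, 1, 1, 2};

        // Walk from a node up to the root by following parent links.
        for (int v = 4; v != -1; v = pred[v]) {
            System.out.print(v + " ");           // prints: 4 1 0
        }
        System.out.println();
    }
}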
We will sometimes need to represent ∞ when processing graphs. For example,
it may be more convenient to define d(u, v) = ∞ than to say it is undefined. From a
programming point of view, we can use any positive integer that can not be confused
with any other that might legitimately arise. For example, the distance between 2
nodes in a digraph on n nodes cannot be more than n − 1 (see Exercise 4.1.2). Thus
in this case we may use n to represent the fact that there is no path between a given
pair of nodes. We shall return to this subject in Chapter 6.
Exercises
Exercise 4.2.1. Write down the adjacency matrix of the digraph of order 7 whose ad-
jacency lists representation is given below.
2
0
0 1
4 5 6
5
3 4 6
1 2
Exercise 4.2.2. Consider the digraph G of order 7 whose adjacency matrix represen-
tation is given below.
0 1 0 0 1 1 0
1 0 0 1 0 0 0
1 0 0 0 0 0 1
1 0 0 0 0 1 0
0 0 0 0 0 1 0
0 0 0 0 0 0 0
0 0 0 0 0 1 0
Write down the adjacency lists representation of G.
Exercise 4.2.3. Consider the digraph G of order 7 given by the following adjacency
lists representation.
2
0
0 1
4 5 6
5
3 4 6
1 2
Exercise 4.2.4. Consider the digraph G whose nodes are the integers from 1 to 12
inclusive and such that (i, j) is an arc if and only if i is a proper divisor of j (that is, i
divides j and i ≠ j).
Write down the adjacency matrix representation of G and of Gr .
Exercise 4.2.5. Write the adjacency lists and adjacency matrix representation for a
complete binary tree with 7 vertices, assuming they are ordered 1, . . . , 7 as in Sec-
tion 2.5.
Table 4.1: Digraph operations in terms of data structures.
size of list i, which equals the outdegree of i. In the worst case this might be Θ(m)
since all nodes but i might be sinks. On the other hand, for a sparse digraph, the
average outdegree is Θ(1), so arc lookup can be done on average in constant time.
Note that if we want to print out all arcs of a digraph, this will take time Θ(n + m) in the lists case and Θ(n²) in the matrix case.
Finding outdegree with the lists representation merely requires accessing the
correct list (constant time) plus finding the size of that list (constant time). Find-
ing indegree with the lists representation requires scanning all lists except one, and
this requires us to look at every arc in the worst case, taking time Θ(n + m) (the n is
because we must consider every node’s list even if it is empty). If we wish to compute
just one indegree, this might be acceptable, but if all indegrees are required, this will
be inefficient. It is better to compute the reverse digraph once and then read off the
outdegrees, this last step taking time Θ(n) (see Exercise 4.3.1).
One way around all this work is to use in our definition of adjacency lists repre-
sentation, instead of just the out-neighbours, a list of in-neighbours also. This may
be useful in some contexts but in general requires more space than is needed.
We conclude by discussing space requirements. The adjacency matrix representation requires Θ(n²) storage: we simply need n² bits. It appears that an adjacency lists representation requires Θ(n + m) storage, since we must store an endpoint of each arc, and we need to allocate space for each node's list. However this is not strictly true for large graphs. Each node number requires some storage; the number k requires on average Θ(log k) bits. If, for example, we have a digraph on n nodes where every possible arc occurs, then the total storage required is of order n² log n,
worse than with a matrix representation. For small, sparse digraphs, it is true that
lists use less space than a matrix, whereas for small dense digraphs the space re-
quirements are comparable. For large sparse digraphs, a matrix can still be more
efficient, but this happens rarely.
The remarks above show that it is not immediately clear which representation to
use. We will mostly use adjacency lists, which are clearly superior for many common
tasks (such as graph traversals, covered in Chapter 5) and generally better for sparse
digraphs.
Any implementation of an abstract data type (for example as a Java class) must
include objects and “methods”. While most people would include methods for adding
nodes, deleting arcs, and so on, it is not clear where to draw the line. In Appendix B,
one way of writing Java classes to deal with graphs is presented in detail. There are
obviously a lot of different choices one can make. In particular for our Java lists rep-
resentation we use ArrayList<ArrayList<Integer>>.
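As a rough sketch of that representation (our own minimal class, not the Appendix B library), a digraph can be held as a list of out-neighbour lists; the example builds the digraph G2 of Example 4.3.

import java.util.ArrayList;

// Sketch of an adjacency-lists digraph stored as ArrayList<ArrayList<Integer>>.
public class SimpleDigraph {
    private final ArrayList<ArrayList<Integer>> adj;

    public SimpleDigraph(int n) {
        adj = new ArrayList<>();
        for (int i = 0; i < n; i++) adj.add(new ArrayList<>());
    }

    public int order() { return adj.size(); }

    public void addArc(int u, int v) { adj.get(u).add(v); }

    public Iterable<Integer> neighbours(int u) { return adj.get(u); }

    public static void main(String[] args) {
        SimpleDigraph g2 = new SimpleDigraph(5);           // the digraph G2 of Example 4.3
        int[][] arcs = {{0,2},{1,0},{1,2},{1,3},{3,1},{3,4},{4,2}};
        for (int[] a : arcs) g2.addArc(a[0], a[1]);
        System.out.println(g2.neighbours(1));               // [0, 2, 3]
    }
}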
Exercises
Exercise 4.3.1. Show how to compute the (sorted) adjacency lists representation of
the reverse digraph of G from the (sorted) adjacency lists representation of G itself.
It should take time Θ(n + m).
4.4 Notes
This chapter shows how to represent and process graphs with a computer. Al-
though Appendix B uses the Java programming language, the ideas and algorithms
are applicable to other industrial programming languages. For example, C++ has
several standard graph algorithms libraries such as the Boost Graph Library [11],
LEDA (Library of Efficient Data structures and Algorithms) [9] and GTL (Graph Tem-
plate Library) [3]. All algorithms discussed here are provided in these libraries and
in interpreted mathematical systems such as Mathematica and Maple.
Chapter 5
Many graph problems require us to visit each node of a digraph in a systematic way.
For example, we may want to print out the labels of the nodes in some order, or per-
haps we are in a maze and we have no idea where to find the door. More interesting
examples will be described below. The important requirements are that we must
be systematic (otherwise an algorithm is hard to implement), we must be complete
(visit each node at least once), and we must be efficient (visit each node at most
once).
There are several ways to perform such a traversal. Here we present the two
most common, namely breadth-first search and depth-first search. We also discuss a
more general but also more complicated and slower algorithm, priority-first search.
First we start with general remarks applicable to all graph traversals.
algorithm visit
Input: node s of digraph G
begin
colour[s] ← GREY; pred[s] ← NULL
while there is a grey node do
choose a grey node u
if there is a white neighbour of u then
choose such a neighbour v
colour[v] ← GREY; pred[v] ← u
else colour[u] ← BLACK
end if
end while
end
In the first three cases in Definition 5.1, u and v must belong to the same tree of
F. However, a cross arc may join two nodes in the same tree or point from one tree
to another.
The following theorem collects all the basic facts we need for proofs in later sec-
tions. Figure 5.3 illustrates the first part.
Theorem 5.2. Suppose that we have carried out traverse on G, resulting in a search
forest F. Let v, w ∈ V (G).
• Let T1 and T2 be different trees in F and suppose that T1 was explored before T2 .
Then there are no arcs from T1 to T2 .
• Suppose that G is a graph. Then there can be no edges joining different trees
of F.
Proof. If the first part were not true, then since w is reachable from v, and w has not
been visited before T1 is started, w must be reached in the generation of T1 , contra-
dicting w ∈ T2 . The second part follows immediately for symmetric digraphs and
hence for graphs. Now suppose that v is seen before w. Let r be the root of the tree T
containing v. Then w is reachable from r and so since it has not already been visited
when r is chosen, it belongs to T . Finally, if v and w are in the same tree, then any
path from v to w in G going outside the tree must re-enter it via some arc; either the
leaving or the entering arc will contradict the first part.
Figure 5.3: A search forest with trees T1, T2, . . . , Tk.
We now turn to the analysis of traverse. The generality of our traversal proce-
dure makes its complexity hard to determine. Its running time is very dependent on
how one chooses the next grey node u and its white neighbour v. It also apparently
depends on how long it takes to determine whether there exist any grey nodes or
whether u has any white neighbours. However, any sensible rule for checking exis-
tence of either type of node should simply return false if there is no such node, and
take no more time in this case than if it does find one. Thus we do not need to take
account of the checking in our analysis.
Since the initialization of the array colour takes time Θ(n), the amount of time
taken by traverse is clearly Θ(n + t), where t is the total time taken by all the calls to
visit.
Each time through the while-loop of visit a grey node is chosen, and either a
white node is turned grey or a grey node is turned black. Note that the same grey
node can be chosen many times. Thus we execute the while-loop in total Θ(n) times
since every node must eventually move from white through grey to black. Let a, A be
lower and upper bounds on the time taken to choose a grey node (note that they may
depend on n and be quite large if the rule used is not very simple). Then the time
taken in choosing grey nodes is O(An) and Ω(an). Now consider the time taken to
find a white neighbour. This will involve examining each neighbour of u and check-
ing whether it is white, then applying a selection rule. If the time taken to apply the
rule is at least b and at most B (which may depend on n), then the total time in choos-
ing white neighbours is O(Bm) and Ω(bm) if adjacency lists are used and O(Bn²) and Ω(bn²) if an adjacency matrix is used.
In summary, then, the running time of traverse is O(An + Bm) and Ω(an + bm) if adjacency lists are used, and O(An + Bn²) and Ω(an + bn²) if adjacency matrix format
is used.
A more detailed analysis would depend on the rule used. We shall see in Sec-
tion 5.2 that BFS and DFS have a, b, A, B all constant, and so each yields a linear-time
traversal algorithm. In this case, assuming a sparse input digraph, the adjacency list
format seems preferable. On the other hand, for example, suppose that a is at least
of order n (a rather complex rule for grey nodes is being used) and b, B are constant
(for example, the first white node found is chosen). Then asymptotically both repre-
sentations take time Ω(n²), so using the adjacency matrix is not clearly ruled out (it
may even be preferable if it makes programming easier).
Exercises
Exercise 5.1.1. Draw a moderately complicated graph representing a maze (corri-
dors are edges and intersections are nodes). Label one node as the start and another
as the end. One rule for getting through a maze is to try to go forward, always make
a right turn when faced with a choice of direction, and back up as little as possible
when faced with a dead end. Apply this method to your example. Interpret what you
do in terms of the procedure traverse.
Exercise 5.1.2. Suppose that in traverse, the grey node is chosen at random and so
is the white node. Find your way through your maze of the previous exercise using
this method.
• A search forest in which a cross arc points from one tree to another.
• A search forest in which a cross arc joins two nodes in the same tree.
In the examples above, all nodes were reachable from the root, and so there is a
single search tree in the forest. For general digraphs, this may not be true. We should
distinguish between BFS or DFS originating from a given node, or just BFS/DFS run
on a digraph. The former we call BFSvisit/DFSvisit (a special case of visit) and
the latter just BFS/DFS (a special case of traverse). The algorithm DFS, for example,
repeatedly chooses a root and runs DFSvisit from that root, until all nodes have
been visited. Pseudocode for these procedures is given in the next two sections.
Figure 5.4: An example graph G1 (vertices 0–8) and digraph G2.
Figure 5.5: Search trees for G1 and G2, with nodes numbered in the order in which they are first visited.
Exercises
Exercise 5.2.1. Draw a graph for which DFS and BFS can visit nodes in the same
order. Then draw one for which they must visit nodes in the same order. Make your
examples as large as possible (maximize n + m).
Theorem 5.4. The call to recursiveDFSvisit with input s terminates only when all
nodes reachable from s via a path of white nodes have been visited. The descendants
of s in the DFS forest are precisely these nodes.
There are not as many possibilities for interleaving of the timestamps as there ap-
pear at first sight. In particular, we cannot have seen[v] < seen[w] < done[v] < done[w].
The following theorem explains why.
• If v is an ancestor of w in F, then seen[v] < seen[w] < done[w] < done[v].
Proof. The first part is clear from the recursive formulation of DFS. Now suppose
that v is not an ancestor of w. Note that w is obviously also not an ancestor of v. Thus
v lives in a subtree that is completely explored before the subtree of w is visited by
recursiveDFSvisit.
algorithm DFS
Input: digraph G
begin
stack S
array colour[0..n − 1], pred[0..n − 1], seen[0..n − 1], done[0..n − 1]
for u ∈ V (G) do
colour[u] ← WHITE; pred[u] ← NULL
end for
time ← 0
for s ∈ V (G) do
if colour[s] = WHITE then
DFSvisit(s)
end if
end for
return pred, seen, done
end
algorithm DFSvisit
Input: node s
begin
colour[s] ← GREY
seen[s] ← time; time ← time + 1
S.insert(s)
while not S.isEmpty() do
u ← S.peek()
if there is a neighbour v with colour[v] = WHITE then
colour[v] ← GREY; pred[v] ← u
seen[v] ← time; time ← time + 1
S.insert(v)
else
S.delete()
colour[u] ← BLACK
done[u] ← time; time ← time + 1
end if
end while
end
All four types of arcs in our search forest classification can arise with DFS. The
different types of non-tree arcs can be easily distinguished while the algorithm is
running. For example, if an arc (u, v) is explored and v is found to be white, then the
arc is a tree arc; if v is grey then the arc is a back arc, and so on (see Exercise 5.3.3).
We can also perform the classification after the algorithm has terminated, just by
looking at the timestamps seen and done (see Exercise 5.3.4).
Exercises
Exercise 5.3.1. Give examples to show that all four types of arcs can arise when DFS
is run on a digraph.
Exercise 5.3.2. Execute depth-first search on the digraph with adjacency lists repre-
sentation given below. Classify each arc as tree, forward, back or cross.
0: 2
1: 0
2: 0 1
3: 4 5 6
4: 5
5: 3 4 6
6: 1 2
Exercise 5.3.3. Explain how to determine, at the time when an arc is first explored
by DFS, whether it is a cross arc or a forward arc.
Exercise 5.3.4. Suppose that we have performed DFS on a digraph G. Let (v, w) ∈
E(G). Show that the following statements are true.
Exercise 5.3.5.
Suppose that DFS is run on a digraph G and the following timestamps obtained.
v 0 1 2 3 4 5 6
seen[v] 0 1 2 11 4 3 6
done[v] 13 10 9 12 5 8 7
• Suppose that (6, 1) is an arc of G. Which type of arc (tree, forward, back or cross)
is it?
• Is it possible that G contains an arc (5, 3)? If so, what type of arc must it be?
• Is it possible that G contains an arc (1, 5)? If so, what type of arc must it be?
Exercise 5.3.6. Is there a way to distinguish tree arcs from non-tree arcs just by look-
ing at timestamps after DFS has finished running?
Exercise 5.3.7. Suppose that DFS is run on a graph G. Prove that cross edges do not
occur.
Exercise 5.3.8. Give an example to show that the following conjecture is not true:
if w is reachable from v and seen[v] < seen[w] then w is a descendant of v in the DFS
forest.
Exercise 5.3.9. DFS allows us to give a so-called pre-order and post-order labelling
to a digraph. The pre-order label indicates the order in which the nodes were turned
grey. The post-order label indicates the order in which the nodes were turned black.
For example, each node of the following tree is labelled with a pair of integers
indicating the pre- and post- orders, respectively, of the layout.
The tree: the root is labelled 1,8 and has children 2,3 and 5,7; node 2,3 has the single child 3,2, which in turn has the single child 4,1; node 5,7 has children 6,4 and 7,6; and node 7,6 has the single child 8,5.
This is obviously strongly related to the values in the arrays seen and done. What
is the exact relationship between the two?
Exercise 5.3.10. Prove Theorem 5.4 by using induction.
Proof. Note that since d[v] is the length of a path of tree arcs from r to v, we have
d[v] ≥ d(r, v). We prove the result by induction on the distance. Denote the BFS search
algorithm BFS
Input: digraph G
begin
queue Q
array colour[0..n − 1], pred[0..n − 1], d[0..n − 1]
for u ∈ V (G) do
colour[u] ← WHITE; pred[u] ← NULL
end for
for s ∈ V (G) do
if colour[s] = WHITE then
BFSvisit(s)
end if
end for
return pred, d
end
algorithm BFSvisit
Input: node s
begin
colour[s] ← GREY; d[s] ← 0
Q.insert(s)
while not Q.isEmpty() do
u ← Q.peek()
for each v adjacent to u do
if colour[v] = WHITE then
colour[v] ← GREY; pred[v] ← u; d[v] ← d[u] + 1
Q.insert(v)
end if
end for
Q.delete()
colour[u] ← BLACK
end while
end
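A compact Java rendering of this pseudocode is sketched below. It is our own illustration, using plain int arrays and java.util.ArrayDeque rather than the book's classes, and it returns the distance array d.

import java.util.ArrayDeque;
import java.util.List;
import java.util.Queue;

// Sketch: BFS over an adjacency-lists digraph, following the pseudocode above.
public class BFS {
    static final int WHITE = 0, GREY = 1, BLACK = 2;

    // adj.get(u) is the list of out-neighbours of u.
    public static int[] bfs(List<List<Integer>> adj) {
        int n = adj.size();
        int[] colour = new int[n], pred = new int[n], d = new int[n];
        java.util.Arrays.fill(pred, -1);               // -1 plays the role of NULL
        for (int s = 0; s < n; s++)
            if (colour[s] == WHITE) bfsVisit(adj, s, colour, pred, d);
        return d;
    }

    private static void bfsVisit(List<List<Integer>> adj, int s,
                                 int[] colour, int[] pred, int[] d) {
        Queue<Integer> q = new ArrayDeque<>();
        colour[s] = GREY; d[s] = 0;
        q.add(s);
        while (!q.isEmpty()) {
            int u = q.peek();
            for (int v : adj.get(u)) {
                if (colour[v] == WHITE) {
                    colour[v] = GREY; pred[v] = u; d[v] = d[u] + 1;
                    q.add(v);
                }
            }
            q.remove();                                 // u has no more white neighbours
            colour[u] = BLACK;
        }
    }
}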
We can classify arcs, but the answer is not as nice as with DFS.
Theorem 5.7. Suppose that we are performing BFS on a digraph G. Let (v, w) ∈ E(G)
and suppose that we have just chosen the grey node v. Then
Proof. The arc is added to the tree if and only if w is white. If the arc is a back arc, then
w is an ancestor of v; the FIFO queue structure means w is black before the adjacency
list of v is scanned.
Now suppose that (x, u) is a forward arc. Then since u is a descendant of x but not
a child in the search forest, Theorem 5.6 yields d[u] ≥ d[x] + 2. But by the last theorem
we have d[u] = d(s, u) ≤ d(s, x) + 1 = d[x] + 1, a contradiction. Hence no such arc exists.
A cross arc may join two nodes on the same level, jump up one level, or jump
up more than one level. In the last case, w is already black before v is seen. In the
second case, w may be seen before v, in which case it is black before v is seen (recall
w is not the parent of v), or it may be seen after v, in which case it is grey when (v, w)
is explored. In the first case, w may be seen before v (in which case it is black before v
is seen), or w may be seen after v (in which case it is grey when (v, w) is explored).
Proof. By Theorem 5.7 there can be no forward edges, hence no back edges. A cross
edge may not jump up more than one level, else it would also jump down more than
one level, which is impossible by Theorem 5.6.
For a given BFS tree, we can uniquely label the vertices of a digraph based on
the time they were first seen. For the graph G1 of Figure 5.4, we label vertex 0 with
1, vertices {1, 2} with labels {2, 3}, vertices {3, 4, 8} with labels {4, 5, 6}, and the last
vertex level {5, 6, 7} with labels {7, 8, 9}. These are indicated in Figure 5.5.
Exercises
Exercise 5.4.1. Carry out BFS on the digraph with adjacency list given below. Show
the state of the queue after each change in its state.
0: 2
1: 0
2: 0 1
3: 4 5 6
4: 5
5: 3 4 6
6: 1 2
Exercise 5.4.2. How can we distinguish between a back and a cross arc while BFS is
running on a digraph?
Exercise 5.4.3. Explain how to determine whether the root of a BFS tree is contained
in a cycle, while the algorithm is running. You should find a cycle of minimum length
if it exists.
algorithm PFSvisit
Input: node s
begin
colour[s] ← GREY
Q.insert(s, setKey (s))
while not Q.isEmpty() do
u ← Q.peek()
if u has a neighbour v with colour[v] = WHITE then
colour[v] ← GREY
Q.insert(v, setKey (v))
else
Q.delete()
colour[u] ← BLACK
end if
end while
end
Figure 5.11: Digraph describing the structure of the arithmetic expression (a+b)*(c-(a+b))*(-c+d).
possible to draw a picture of G with all nodes in a straight line, and the arcs “pointing
the same way”.
A digraph without cycles is commonly called a DAG, an abbreviation for directed
acyclic graph. It is much easier for a digraph to be a DAG than for its underlying
graph to be acyclic.
For our arithmetic expression example above, a linear order of the sub-expression
DAG gives us an order (actually the reverse of the order) where we can safely evaluate
the expression.
Clearly if the digraph contains a cycle, it is not possible to find such a linear order-
ing. This corresponds to inconsistencies in the precedences given, and no schedul-
ing of the tasks is possible.
Example 5.10. In Figure 5.12 we list three DAGs and possible topological orders for
each. Note that adding more arcs to a DAG reduces the number of topological orders
it has. This is because each arc (u, v) forces u to occur before v, which restricts the
number of valid permutations of the vertices.
The algorithms for computing all topological orders are more advanced than
what we have time or space for here. We show how to compute one such order,
however.
First we note that if a topological sort of a DAG G is possible, then there must be
a source node in G. The source node can be first in a topological order, and no node
that is not a source can be first (because it has an in-neighbour that must precede it
in the topological order).
Figure 5.12: Three DAGs and some of the possible topological orders of each.
Proof. First show that every DAG has a source (see Exercise 5.6.2). Given this, we
proceed as follows. Deleting a source node creates a digraph that is still a DAG, be-
cause deleting a node and some arcs cannot create a cycle where there was none
previously. Repeatedly doing this gives a topological order.
Theorem 5.12. Suppose that DFS is run on a digraph G. Then G is acyclic if and only
if there are no back arcs.
Proof. Suppose that we run DFS on G. Note that if there is a back arc (v, u), then u
and v belong to the same tree T , with root s say. Then there is a path from s to u, and
there is a path from u to v by definition of back arc. Adding the arc (v, u) gives a cycle.
Conversely, if there is a cycle v0 v1 . . . vn v0 , we may suppose without loss of generality that v0 is the first node of the cycle visited by the DFS algorithm. We claim that (vn , v0 ) is a back arc. To see why this is true, note that when v0 is first visited, the other nodes of the cycle are still white, so v0 is joined to vn by a path of white (unvisited) nodes, possibly of length shorter than n. By Theorem 5.4, vn therefore becomes a descendant of v0 in the DFS tree, and so (vn , v0 ) is a back arc.
One valid topological order is simply the reverse of the DFS finishing times.
Theorem 5.13. Let G be a DAG. Then listing the nodes in reverse order of DFS fin-
ishing times yields a topological order of G.
Proof. Consider any arc (u, v) ∈ E(G). Since G is a DAG, the arc is not a back arc by
Theorem 5.12. In the other three cases, Exercise 5.3.4 shows that done[u] > done[v],
which means u comes before v in the alleged topological order.
We can therefore just run DFS on G, and stop if we find a back arc. Otherwise
printing the nodes in reverse order of finishing time gives a topological order. Note
that printing the nodes in order of finishing time gives a topological order of the
reverse digraph Gr .
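A sketch of this recipe in Java (our own code, not the book's library): run a recursive DFS, detect back arcs by meeting a grey out-neighbour, and prepend each node to the output list as it finishes, so that the list ends up in reverse finishing order.

import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;

// Sketch: topological sort by reverse DFS finishing times; returns null if a back arc (cycle) is found.
public class TopologicalSort {
    static final int WHITE = 0, GREY = 1, BLACK = 2;

    public static Deque<Integer> sort(List<List<Integer>> adj) {
        int n = adj.size();
        int[] colour = new int[n];
        Deque<Integer> order = new ArrayDeque<>();      // finished nodes are pushed on the front
        for (int s = 0; s < n; s++)
            if (colour[s] == WHITE && !visit(adj, s, colour, order)) return null;
        return order;                                    // read front-to-back for a topological order
    }

    // Recursive DFS; returns false as soon as a back arc is found.
    private static boolean visit(List<List<Integer>> adj, int u, int[] colour, Deque<Integer> order) {
        colour[u] = GREY;
        for (int v : adj.get(u)) {
            if (colour[v] == GREY) return false;         // back arc: the digraph has a cycle
            if (colour[v] == WHITE && !visit(adj, v, colour, order)) return false;
        }
        colour[u] = BLACK;
        order.addFirst(u);                               // later finishing time = earlier in the order
        return true;
    }
}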
Exercises
Exercise 5.6.1. Give an example of a DAG whose underlying graph contains a cycle.
Make your example as small as possible.
Exercise 5.6.2. Prove that every DAG must have at least one source and at least one
sink.
Exercise 5.6.3. Show that the following method for topologically sorting a DAG does
not work in general: print the nodes in order of visiting time.
Exercise 5.6.4. Professor P has the following information taped to his mirror, to help
him to get dressed in the morning.
Socks before shoes; underwear before trousers; trousers before belt; trousers be-
fore shoes; shirt before glasses; shirt before tie; tie before jacket; shirt before hat;
shirt before belt.
Find an acceptable order of dressing for Professor P.
Exercise 5.6.6. Let G be a graph. There is an easy way to show that G is acyclic. It is
not hard to show (see Section D.7) that a graph G is acyclic if and only if G is a forest,
that is, a union of (free) trees.
Give a simple way to check whether a graph G is acyclic. Does the method for
finding a DAG given above work for acyclic graphs also?
5.7 Connectivity
For many purposes it is useful to know whether a digraph is “all in one piece”, and
if not, to decompose it into pieces. We now formalize these notions. The situation
for graphs is easier than that for digraphs.
Definition 5.14. A graph is connected if for each pair of vertices u, v ∈ V (G), there is
a path between them.
In Example 4.3 the graph G1 is connected, as is the underlying graph of G2 .
If a graph is not connected, then it must have more than one “piece”. More for-
mally, we have the following.
Theorem 5.15. Let G be a graph. Then G can be uniquely written as a union of
subgraphs Gi with the following properties:
• each Gi is connected
• if i ≠ j, there are no edges from any vertices in Gi to any vertices in Gj
Proof. Consider the relation ∼ defined on V (G), given by u ∼ v if and only if there is
a path joining u and v (in other words, u and v are each reachable from the other).
Then ∼ is an equivalence relation and so induces a partition of V (G) into disjoint
subsets. The subgraphs Gi induced by these subsets have no edges joining them by
definition of ∼, and each is connected by definition of ∼.
The subgraphs Gi above are called the connected components of the graph G.
Clearly, a graph is connected if and only if it has exactly one connected component.
Example 5.16. The graph obtained by deleting two edges from a triangle has 2 con-
nected components.
We can determine the connected components of a graph easily by using a traver-
sal algorithm. The following obvious theorem explains why.
Theorem 5.17. Let G be a graph and suppose that DFS or BFS is run on G. Then the
connected components of G are precisely the subgraphs spanned by the trees in the
search forest.
Proof. The result is true for any traversal procedure, as we have already observed in
Theorem 5.2. The trees of the search forest have no edges joining them, and together
they span G.
So we need only run BFS or DFS on the graph, and keep count of the number of
times we choose a root—this is the number of components. We can store or print
the vertices and edges in each component as we explore them. Clearly, this gives a
linear time algorithm for determining the components of a graph.
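As an illustration, here is a minimal Java sketch (ours, assuming adjacency lists) that labels the connected components: every time a white root is chosen the component counter is incremented, and a DFS from that root marks the rest of its tree with the same label.

import java.util.*;

public class Components
{
    // comp[v] = index of the connected component containing v;
    // the number of components is the largest label plus one.
    public static int[] connectedComponents(List<List<Integer>> adj)
    {
        int n = adj.size();
        int[] comp = new int[n];
        Arrays.fill(comp, -1);                       // -1 means "white"
        int count = 0;
        for (int root = 0; root < n; root++) {
            if (comp[root] != -1) continue;          // already in some search tree
            Deque<Integer> stack = new ArrayDeque<>();
            stack.push(root);
            comp[root] = count;
            while (!stack.isEmpty()) {               // iterative DFS from this root
                int u = stack.pop();
                for (int v : adj.get(u))
                    if (comp[v] == -1) { comp[v] = count; stack.push(v); }
            }
            count++;                                 // one more tree in the search forest
        }
        return comp;
    }
}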
So far it may seem that we have been too detailed in our treatment of connectedness. After all, the above results are rather obvious. However, now consider the situation for digraphs. The intuition of “being all in one piece” is not as useful here.
In Example 4.3 the graph G1 is connected, as is the underlying graph of G2 . They
are “all in one piece”, but not the same from the point of view of reachability. For
example, in digraph G2 , node 2 is a sink. This motivates the following definition.
Definition 5.18. A digraph G is strongly connected if for each pair of nodes u, v of G,
there is a path in G from u to v.
Note. In other words, u and v are each reachable from the other.
Suppose that the underlying graph of G is connected (some authors call this be-
ing weakly connected), but G is not strongly connected. Then if G represents a road
network, it is possible to get from any place to any other one, but at least one such
route will be illegal: one must go the wrong way down a one-way street.
A strongly connected digraph must contain many cycles: indeed, if v and w are
different nodes, then there is a path from v to w and a path from w to v, so v and w are
contained in a cycle. Conversely, if each pair of nodes is contained in a cycle, then
the digraph is clearly strongly connected.
Again, we can define strongly connected components in a way that is entirely analogous to connected components for graphs. The proof above for connected components generalizes to this situation.
Theorem 5.19. Let G = (V, E) be a digraph. Then V can be uniquely written as a
union of disjoint subsets Vi , with each corresponding induced subdigraph Gi being
a strongly connected component of G.
Example 5.20. A digraph and its three (uniquely determined) strongly connected
components are displayed in Figure 5.13. Note that there are arcs of the digraph not
included in the strongly connected components.
Note that if the underlying graph of G is connected but G is not strongly con-
nected, then there are strong components C1 and C2 such that it is possible to get
from C1 to C2 but not from C2 to C1 . If C1 and C2 are different strong components,
then any arcs between them must either all point from C1 to C2 or from C2 to C1 . Sup-
pose that we imagine each strong component shrunk to a single node (so we ignore
the internal structure of each component, but keep the arcs between components).
[Figure 5.13: a digraph on nodes 0–5 and its three strongly connected components (see Example 5.20).]
Then in the resulting digraph, if v ≠ w and we can get from v to w, then we cannot get from w to v. In other words, no pair of nodes can belong to a cycle, and hence
the digraph is acyclic. See Figure 5.14. Note that the converse is also true: if we have
an acyclic digraph G and replace each node by a strongly connected digraph, the
strongly connected components of the resulting digraph are exactly those digraphs
that we inserted.
[Figure 5.14: the acyclic digraph obtained by shrinking each strong component C1 , C2 , . . . , Cm to a single node.]
Note the similarity between this and the search forest decomposition in Figure 5.3.
In that case, if we shrink each search tree to a point, the resulting digraph is also
acyclic.
How do we determine the strongly connected components? First we observe that the previous method for graphs definitely fails (see Exercise 5.7.1). To decide whether a digraph is strongly connected, we could run BFSvisit or DFSvisit originating from each node in turn and see whether each of the n trees so generated spans the digraph. However, the running time of such an algorithm is Θ(n² + nm).
We can do better by using DFS more cleverly.
Consider the reverse Gr . The strong components of Gr are the same as those of G.
Shrinking each strong component to a point, we obtain acyclic digraphs H and Hr .
Consider a sink S1 in Hr . If we run DFS on Gr starting in the strong component S1 , we
will reach every node in that component and no other nodes of Gr . The DFS tree will
exactly span S1 . Now choose the next root to lie in the strong component S2 , a node of Hr whose only possible out-neighbour is S1 (this is possible by the same reasoning used for zero-indegree sort, except here we deal with outdegree). The DFS will visit
all of S2 and no other nodes of Gr because all other possible nodes have already been
visited. Proceed in this way until we have visited all strong components.
We have shown that if we can choose the roots correctly, then we can find all
strong components. Now of course we don’t know these components a priori, so
how do we identify the roots to choose?
Whenever there is a choice for the root of a new search tree, it must correspond to a new node of the DAG Hr . We want to choose the roots in reverse topological order of Hr ; this is simply a topological order for H. Note that in the case where H = G (each strong
component has just one point), then G is a DAG. The method above will work if and
only if we choose the roots so that each tree in the DFS for Gr has only one point. We
just need a topological order for G, so run DFS on G and print the nodes in reverse
order of finishing time. Then choose the roots for the DFS on Gr in the printed order.
It therefore seems reasonable to begin with a DFS of G; call its search forest F. Then, in the DFS of Gr , the obvious choice is to take as each new root the white node that finished latest in F.
Then each DFS tree in the search of Gr definitely contains the strong component
S of the root r. To see this, note that no node in that strong component could have
been visited before in Gr , otherwise r would have already been visited. By Theo-
rem 5.4, every node in the strong component of r is a descendant of r.
The only thing that could go wrong is that a search tree in Gr might contain more
than one strong component. This cannot happen, as we now prove.
Theorem 5.21. If the following rule for choosing roots is used in the algorithm de-
scribed above, then each tree in the second search forest spans a strong component
of G, and all strong components arise this way.
Rule: use the white node whose finishing time in F was largest.
Proof. Suppose that a search tree in Gr does contain more than one strong compo-
nent. Let S1 be the first strong component seen in Gr and let S2 be another, and let
the roots be r, s respectively. Note that by the rule for choosing nodes r was the first
node of S1 seen in F (by Theorem 5.4, every node of S1 is a descendant of the first
one seen, which therefore has latest finishing time). The analogous statement holds
for s and S2 .
By the rule for choosing roots, we have done[r] > done[s] in F. We cannot have
seen[s] > seen[r] in F, for then s would be a descendant of r in F and in Gr , so they
would belong to the same strong component. Thus seen[r] > seen[s] in F. Hence S2
was explored in F before r (and hence any node of S1 ) was seen. But then r would
have been reachable from s in G via a path of white nodes, so Theorem 5.4 shows
that r and s are in the same strong component, another contradiction.
The above algorithm runs in linear time with adjacency lists, since each DFS and
the creation of the reverse digraph take linear time. We only need to remember while
performing the first DFS to store the nodes in an array in order of finishing time.
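Putting the pieces together, the following Java sketch (ours, not the book's code, assuming adjacency lists) performs the first DFS on G recording nodes in order of finishing time, builds the reverse digraph Gr , and then runs the second DFS on Gr choosing each new root as the unvisited node that finished latest; each tree of the second search forest is labelled as one strong component.

import java.util.*;

public class StrongComponents
{
    // scc[v] = index of the strong component containing v.
    public static int[] strongComponents(List<List<Integer>> adj)
    {
        int n = adj.size();
        boolean[] seen = new boolean[n];
        Deque<Integer> byFinish = new ArrayDeque<>();    // nodes pushed at finishing time
        for (int s = 0; s < n; s++)                      // first DFS, on G
            if (!seen[s]) dfs(s, adj, seen, byFinish, null, 0);

        List<List<Integer>> rev = new ArrayList<>();     // build the reverse digraph Gr
        for (int v = 0; v < n; v++) rev.add(new ArrayList<>());
        for (int u = 0; u < n; u++)
            for (int v : adj.get(u)) rev.get(v).add(u);

        int[] scc = new int[n];
        Arrays.fill(seen, false);
        int count = 0;
        while (!byFinish.isEmpty()) {                    // second DFS, on Gr:
            int r = byFinish.pop();                      // latest finisher still unvisited is the root
            if (!seen[r]) dfs(r, rev, seen, null, scc, count++);
        }
        return scc;
    }

    private static void dfs(int u, List<List<Integer>> adj, boolean[] seen,
                            Deque<Integer> byFinish, int[] scc, int label)
    {
        seen[u] = true;
        if (scc != null) scc[u] = label;                 // second pass: label the component
        for (int v : adj.get(u))
            if (!seen[v]) dfs(v, adj, seen, byFinish, scc, label);
        if (byFinish != null) byFinish.push(u);          // first pass: record finishing order
    }
}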
Exercises
Exercise 5.7.1. Give an example to show that a single use of DFS does not in general
find the strongly connected components of a digraph.
Exercise 5.7.2. Carry out the above algorithm by hand on the digraph of Exam-
ple 5.20 and verify that the components given there are correct. Then run it again
on the reverse digraph and verify that the answers are the same.
5.8 Cycles
In this section, we cover three varied topics concerning cycles.
The girth of a graph
The length of the smallest cycle in a graph is an important quantity. For exam-
ple, in a communication network, short cycles are often something to be avoided
because they can slow down message propagation.
Definition 5.22. For a graph (with a cycle), the length of the shortest cycle is called
the girth of the graph. If the graph has no cycles then the girth is undefined but may
be viewed as +∞.
Note. For a digraph we use the term girth for its underlying graph and the (maybe
non-standard) term directed girth for the length of the smallest directed cycle.
Example 5.23. In Figure 5.15 are three (di)graphs. The first has no cycles (it is a free
tree), the second is a DAG of girth 3, and the third has girth 4.
How to compute the girth of a graph? Here is an algorithm for finding the length
of a shortest cycle containing a given vertex v in a graph G. Perform BFSvisit. If we
meet a grey neighbour (that is, we are exploring edge {x, y} from x and we find that y
is already grey), continue only to the end of the current level and then stop. For each
edge {x, y} as above on this level, if v is the lowest common ancestor of x and y in the
BFS tree, then there is a cycle containing x, y, v of length l = d(x) + d(y) + 1. Report the
minimum value of l obtained along the current level.
[Figure 5.15: three (di)graphs — a free tree, a DAG of girth 3, and a graph of girth 4. Figure 5.16: a bipartite graph.]
Example 5.26. The graph in Figure 5.16 is bipartite. The isolated vertex could be
placed on either side.
Showing that a graph is bipartite can be done by exhibiting a bipartition (a par-
tition into two subsets as in the definition). Of course finding such a bipartition may
not be easy. Showing a graph is not bipartite seems even harder. In each case, we
certainly do not want to have to test all possible partitions of V into two subsets!
There is a better way.
Definition 5.27. Let k be a positive integer. A graph G has a k-colouring if V (G) can
be partitioned into k nonempty disjoint subsets such that each edge of G joins two
vertices in different subsets.
Example 5.28. The graph in Figure 5.16 has a 2-colouring as indicated.
It is not just a coincidence that our example of a bipartite graph has a 2-colouring.
Theorem 5.29. The following conditions on a graph G are equivalent.
• G is bipartite;
• G has a 2-colouring;
• G does not contain an odd length cycle.
Proof. Given a bipartition, use the same subsets to get a 2-colouring, and vice versa.
This shows the equivalence of the first two conditions. Now suppose G is bipartite.
A cycle must have even length, since the start and end vertices must have the same
colour. Finally suppose that G has no odd length cycle. A 2-colouring is obtained as
follows. Perform BFS and assign each vertex at level i the “colour” i mod 2. If we can
complete this procedure, then by definition each edge goes from a vertex of one colour to one of another colour. The only problem could be if we tried to assign a colour to a node v that was adjacent to a node w of the same colour at the same level k. But then the tree paths from the root to v and to w, together with the edge {v, w}, form a closed walk of odd length 2k + 1, and any closed walk of odd length contains an odd length cycle.
It is now easy to see that we may use the method in the proof above to detect an
odd length cycle if it exists, and otherwise produce a 2-colouring of G. This of course
runs in linear time.
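A minimal Java sketch of this procedure (ours, assuming adjacency lists) is shown below: BFS assigns each vertex the colour level mod 2, and an edge joining two vertices of the same colour causes the graph to be reported as not bipartite.

import java.util.*;

public class TwoColouring
{
    // Returns a 2-colouring (values 0 and 1) if the graph is bipartite, otherwise null.
    public static int[] twoColour(List<List<Integer>> adj)
    {
        int n = adj.size();
        int[] colour = new int[n];
        Arrays.fill(colour, -1);                     // -1 = not yet coloured
        Deque<Integer> queue = new ArrayDeque<>();
        for (int s = 0; s < n; s++) {                // one BFS per component
            if (colour[s] != -1) continue;
            colour[s] = 0;
            queue.add(s);
            while (!queue.isEmpty()) {
                int u = queue.remove();
                for (int v : adj.get(u)) {
                    if (colour[v] == -1) {
                        colour[v] = 1 - colour[u];   // the next level gets the other colour
                        queue.add(v);
                    } else if (colour[v] == colour[u]) {
                        return null;                 // odd length cycle: not bipartite
                    }
                }
            }
        }
        return colour;
    }
}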
Exercises
Exercise 5.8.1. Give an example to show that in the shortest cycle algorithm, if we
do not continue to the end of the level, but terminate when the first cycle is found,
we may find a cycle whose length is one more than the shortest possible.
Exercise 5.8.2. What is the time complexity of the shortest cycle algorithm?
Example 5.31. Suppose we have a set of workers V0 and a set of tasks V1 that need to
be assigned. A given worker of V0 is able to perform a subset of the tasks in V1 . Now
with each worker capable of doing at most one task at a time, the boss would like to
assign (match) as many workers as possible to as many of the tasks.
Example 5.32. Consider the marriage problem where we have a set of men and
women (as vertices) and edges representing compatible relationships. The goal is to
marry as many couples as possible, which is the same as finding a maximum match-
ing in the relationship graph. If there are no homosexual interests then we have a
bipartite graph problem.
[Figure 5.17: a maximal matching (left) and a maximum matching (right) in the same bipartite graph.]
In Figure 5.17 we illustrate the difference between a maximal and maximum match-
ing in the setting of Example 5.32. The matchings consist of bold-dashed edges (be-
tween females on the left and males on the right) in the drawings.
It is easy to find a maximal matching M in a graph. For example, a simple greedy
approach of iterating over all edges and adding each edge to M if it is non-adjacent
to anything already in M will work. As illustrated in Figure 5.17, a maximal matching
may have fewer edges than a more desirable maximum matching.
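For example, the greedy procedure can be sketched in a few lines of Java (ours; the graph is assumed to be given as an edge list):

import java.util.*;

public class GreedyMatching
{
    // edges[k] = {u, v}. Returns match[v] = partner of v, or -1 if v is unmatched.
    public static int[] maximalMatching(int n, int[][] edges)
    {
        int[] match = new int[n];
        Arrays.fill(match, -1);
        for (int[] e : edges) {
            int u = e[0], v = e[1];
            if (match[u] == -1 && match[v] == -1) {  // edge not adjacent to anything already chosen
                match[u] = v;
                match[v] = u;
            }
        }
        return match;                                // maximal, but possibly not maximum
    }
}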
Our algorithm to compute a maximum matching will be based on trying to im-
prove an existing matching M by finding certain types of paths in a graph.
Figure 5.18: An algorithm to find an augmenting path, given a matching and an unmatched
starting vertex.
Proof. We first need to show that the algorithm findAugmentingPath, given in Fig-
ure 5.18, does find an augmenting path if one exists. It is sufficient to show that if
there exists at least one augmenting path from vertex x to some other unmatched
vertex, then we find any one of them (in our case, by imitating BFS, it will be one of
shortest length). findAugmentingPath builds a traversal tree starting at x using the following constraints: from a vertex at an even distance from x, any edge not in the matching may be explored, while from a vertex at an odd distance, only the matching edge incident to it is followed.
This process produces a tree with alternating paths rooted at x as illustrated in Fig-
ure 5.19. The status of a node is ODD or EVEN depending on whether it is at an odd or even distance from x. If a vertex (first seen) is at an odd distance then we have seen
an alternating path where the last edge is not in the matching. If this last vertex is
unmatched then we have found an augmenting path, otherwise we extend the path
by using the matched edge. If a vertex is at an even distance then we have seen an
alternating path where the last edge is in the matching. Since the graph is bipartite
the status of being ODD or EVEN is unambiguous.
Suppose the algorithm findAugmentingPath terminates without finding an aug-
menting path when one does exist. Let x = v0 , v1 , . . . , vk be a counterexample. Con-
sider the first index 0 < i ≤ k such that status[vi ] = WHITE. We know vi−1 was in-
serted in the queue Q. Consider the two cases. If i − 1 is even then since vi is a
neighbour of vi−1 its status would have changed to ODD. If i − 1 is odd then either
(vi−1 , vi ) is in the matching or not. If so, the status of vi would have changed; if not,
a prefix of the counterexample is not an augmenting path. Thus, by contradiction,
findAugmentingPath will find an augmenting path if one exists.
The running time of one invocation of findAugmentingPath is the same as the
running time of BFS since each vertex is added to the queue Q at most once. For
the adjacency list representation of graphs, this can be carried out in time O(m). If we
find an augmenting path then our best matching increases by one. Since a maxi-
mum matching is bounded by ⌊n/2⌋ we only need to find at most O(n) augmenting
paths. We potentially need to call findAugmentingPath once for each unmatched
vertex, which is bounded by O(n), and repeat the process for each modified match-
ing. Therefore, the total running time to find a maximum matching is at most O(n²m). The algorithm presented here can easily be improved to O(nm) by noting that it is only required to traverse and compute an “alternating path forest” in order to find an augmenting path.
Figure 5.19: Structure of the graph traversal tree for finding augmenting paths.
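As a concrete illustration of the augmenting-path idea (a sketch of ours for bipartite graphs stored as adjacency lists, not the findAugmentingPath of Figure 5.18), the following Java code searches from an unmatched vertex x by BFS, alternately crossing non-matching and matching edges and recording predecessors; when an unmatched vertex at odd distance is reached, the matching is flipped along the recorded path. The driver repeats the search from unmatched vertices until no augmenting path remains, in line with the O(n²m) bound discussed above.

import java.util.*;

public class BipartiteMatching
{
    private static final int WHITE = 0, EVEN = 1, ODD = 2;

    // Maximum matching of a bipartite graph given by adjacency lists:
    // match[v] is the partner of v, or -1 if v is unmatched.
    public static int[] maximumMatching(List<List<Integer>> adj)
    {
        int n = adj.size();
        int[] match = new int[n];
        Arrays.fill(match, -1);
        boolean progress = true;
        while (progress) {                           // repeat until no augmenting path exists
            progress = false;
            for (int x = 0; x < n; x++)
                if (match[x] == -1 && augmentFrom(x, adj, match))
                    progress = true;
        }
        return match;
    }

    // BFS for an augmenting path from the unmatched vertex x; if one is found,
    // the matching is flipped along it and true is returned.
    private static boolean augmentFrom(int x, List<List<Integer>> adj, int[] match)
    {
        int n = adj.size();
        int[] status = new int[n];                   // WHITE everywhere initially
        int[] pred = new int[n];
        Arrays.fill(pred, -1);
        Deque<Integer> queue = new ArrayDeque<>();
        status[x] = EVEN;
        queue.add(x);
        while (!queue.isEmpty()) {
            int u = queue.remove();                  // u is EVEN
            for (int v : adj.get(u)) {
                if (status[v] != WHITE) continue;
                status[v] = ODD;                     // reached by a non-matching edge
                pred[v] = u;
                if (match[v] == -1) {                // unmatched ODD vertex: augment and stop
                    for (int cur = v; cur != -1; ) {
                        int even = pred[cur];        // joined to cur by a non-matching edge
                        int next = pred[even];       // old partner of even (-1 once we reach x)
                        match[cur] = even;
                        match[even] = cur;
                        cur = next;
                    }
                    return true;
                }
                int w = match[v];                    // otherwise extend along the matched edge
                status[w] = EVEN;
                pred[w] = v;
                queue.add(w);
            }
        }
        return false;                                // no augmenting path from x
    }
}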
5.10 Notes
The linear-time algorithm for finding strong components of a digraph was intro-
duced by R. E. Tarjan in 1971.
One of the early polynomial-time algorithms for finding maximum matchings
in bipartite graphs is based on the Ford–Fulkerson network flow algorithm [7] from
1956. The first polynomial-time algorithm for finding a maximum matching in an
arbitrary graph was presented by J. Edmonds [5] in 1965.
Chapter 6
So far our digraphs have only encoded information about connection relations be-
tween nodes. For many applications it is important to study not only whether one
can get from A to B, but how much it will cost to do so.
For example, the weight could represent how much it costs to use a link in a com-
munication network, or distance between nodes in a transportation network. We
shall use the terminology of cost and distance interchangeably (so, for example, we
talk about finding a minimum weight path by choosing a shortest edge).
We need a different ADT for this purpose.
We interpret c(u, v) as the cost of using arc (u, v). An ordinary digraph can be
thought of as a special type of weighted digraph where the cost of each arc is 1. A
weighted graph may be represented as a symmetric digraph where each of a pair of
antiparallel arcs has the same weight.
In Figure 6.1 we display a classic unweighted graph (called the 3-cube) of diame-
ter 3, a digraph with arc weights, and a graph with edge weights.
There are two obvious ways to represent a weighted digraph on a computer. One
is via a matrix. The adjacency matrix is modified so that each entry of 1 (signifying
that an arc exists) is replaced by the cost of that arc. Another is a double adjacency
list. In this case, the list associated to a node v contains, alternately, an adjacent node
w and then the cost c(v, w).
If there is no arc between u and v, then in an ordinary adjacency matrix the cor-
responding entry is 0. However, in a weighted adjacency matrix, the “cost” of a non-
existent arc should be set consistently for most applications. We adopt the following convention. An entry of null or 0 in a weighted adjacency matrix means that the arc does not exist, and vice versa. In many of our algorithms below, such entries should be replaced by the programming equivalent of null for class objects, or ∞ for primitive data types. In the latter case, we might use some positive integer greater than
any expected value that might occur during an execution of the program.
Example 6.2. The two weighted (di)graphs of Figure 6.1 are stored as weighted ad-
jacency matrices below.
For the weighted digraph:

0 1 4 0
0 0 0 2
0 2 0 5
2 0 0 0

For the weighted graph:

0 4 1 0 4 0
4 0 0 2 3 4
1 0 0 0 3 0
0 2 0 0 0 1
4 3 3 0 0 2
0 4 0 1 2 0
The corresponding weighted adjacency lists representations are
1 1 2 4
3 2
1 2 3 5
0 2
and
1 4 2 1 4 4
0 4 3 2 4 3 5 4
0 1 4 3
1 2 5 1
0 4 1 3 2 3 5 2
1 4 3 1 4 2
See Appendix B.4 for sample Java code for representing the abstract data type of
edge-weighted digraphs.
Note. If the digraph is not strongly connected then the diameter is not defined; the
only “reasonable” thing it could be defined to be would be +∞, or perhaps n (since
no path in G can have length more than n − 1).
Example 6.4. The diameter of the 3-cube in Figure 6.1 is easily seen to be 3. Since
the digraph G2 in Figure 4.1 is not strongly connected, the diameter is undefined.
Example 6.5. An adjacency matrix and a distance matrix for the 3-cube shown in
Figure 6.1 is given below. The maximum entries of value 3 indicate the diameter. The
reader should check these entries by performing a breadth-first search from each
vertex.
The adjacency matrix:

0 1 1 0 0 0 1 0
1 0 0 1 0 0 0 1
1 0 0 1 1 0 0 0
0 1 1 0 0 1 0 0
0 0 1 0 0 1 1 0
0 0 0 1 1 0 0 1
1 0 0 0 1 0 0 1
0 1 0 0 0 1 1 0

The distance matrix:

0 1 1 2 2 3 1 2
1 0 2 1 3 2 2 1
1 2 0 1 1 2 2 3
2 1 1 0 2 1 3 2
2 3 1 2 0 1 1 2
3 2 2 1 1 0 2 1
1 2 2 3 1 2 0 1
2 1 3 2 2 1 1 0
Example 6.6. In the weighted digraph of Figure 6.1, we can see by considering all
possibilities that the unique minimum weight path from 0 to 3 is 0 1 3, of weight 3.
s to v, passing through u, that is shorter than the direct path from s). Now we choose
the node (at “level” 1 or 2) whose current best distance to s is smallest, and update
again. We continue in this way until all nodes belong to S.
The basic structure of the algorithm is presented in Figure 6.2.
Example 6.7. An application of Dijkstra’s algorithm on the second digraph of Fig-
ure 6.1 is given in Table 6.1 for each starting vertex s.
The table illustrates that the distance vector is updated at most n − 1 times (only
before a new vertex is selected and added to S). Thus we could have omitted the lines
with S = {0, 1, 2, 3} in Table 6.1.
Why does Dijkstra’s algorithm work? The proof of correctness is a little longer
than for previous algorithms. The key observation is the following result. By an S-
path from s to w we mean a path all of whose intermediate nodes belong to S. In
other words, w need not belong to S, but all other nodes in the path do belong to S.
Theorem 6.8. Suppose that all arc weights are nonnegative. Then at the top of the
while loop, we have the following properties:
P1: if x ∈ V (G), then dist[x] is the minimum cost of an S-path from s to x;
P2: if w ∈ S, then dist[w] is the minimum cost of a path from s to w.
Table 6.1: Illustrating Dijkstra’s algorithm.
Note. Assuming the result to be true for a moment, we can see that once a node u
is added to S and dist[u] is updated, dist[u] never changes in subsequent iterations.
When the algorithm terminates, all nodes belong to S and hence dist holds the cor-
rect distance information.
Proof. Note that at every step, dist[x] does contain the length of some path from s to
x; that path is an S-path if x ∈ S. Also, the update formula ensures that dist[x] never
increases.
To prove P1 and P2, we use induction on the number of times k we have been
through the while-loop. Let Sk denote the value of S at this stage. When k = 0, S0 =
{s}, and since dist[s] = 0, P1 and P2 obviously hold. Now suppose that they hold after
k times through the while-loop and let u be the next special node chosen during that
loop. Thus Sk+1 = Sk ∪ {u}.
We first show that P2 holds after k + 1 iterations. Suppose that w ∈ Sk+1 . If w ≠ u
then w ∈ S and so P2 trivially holds for w by the inductive hypothesis. On the other
hand, if w = u, consider any Sk+1 -path γ from s to u. We shall show that dist[u] ≤ |γ|
where |γ| denotes the weight of γ. The last node before u is some y ∈ Sk . Let γ1 be the
subpath of γ ending at y. Then dist[u] ≤ dist[y] + c(y, u) by the update formula. Fur-
thermore dist[y] ≤ |γ1 | by the inductive hypothesis applied to y ∈ Sk . Thus, combining
Figure 6.3: Picture for proof of Dijkstra’s algorithm.
these inequalities, we obtain dist[u] ≤ |γ1 | + c(y, u) = |γ| as required. Hence P2 holds
for every iteration.
Now suppose x ∈ V (G). Let γ be any Sk+1 -path to x. If u is not involved then γ is
an Sk -path and so dist[x] ≤ |γ| by the inductive hypothesis. Now suppose that γ does
include u. If γ goes straight from u to x, we let γ1 denote the subpath of γ ending at
u. Then |γ| = |γ1 | + c(u, x) ≥ dist[x] by the update formula. Otherwise, after reaching
u, the path returns into Sk directly, emerging from Sk again, at some node y before
going straight to x (see Figure 6.3). Let γ1 be the subpath of γ ending at y. Since P2 holds for Sk , there is a minimum weight Sk -path β from s to y of length dist[y], and dist[y] ≤ |γ1 | because dist[y] is the minimum cost of any path from s to y. Thus by the update formula, dist[x] ≤ dist[y] + c(y, x) ≤ |γ1 | + c(y, x) = |γ|, so P1 also holds after k + 1 iterations.
The study of the time complexity of Dijkstra’s algorithm leads to many interesting
topics.
Note that the value of dist[x] will change only if x is adjacent to u. Thus if we use a
weighted adjacency list, the block inside the second for-loop need only be executed
m times. However, if using the adjacency matrix representation, the block inside the
for-loop must still be executed n² times.
The time complexity is of order an + m if adjacency lists are used, and an + n²
with an adjacency matrix, where a represents the time taken to find the node with
algorithm Dijkstra2
Input: weighted digraph (G, c); node s ∈ V (G)
begin
priority queue Q
array colour[0..n − 1], dist[0..n − 1]
for u ∈ V (G) do
colour[u] ← WHITE
end for
colour[s] ← GREY
Q.insert(s, 0)
while not Q.isEmpty() do
u ← Q.peek(); t1 ← Q.getKey(u)
for each x adjacent to u do
t2 ← t1 + c(u, x)
if colour[x] = WHITE then
colour[x] ← GREY
Q.insert(x,t2 )
else if colour[x] = GREY and Q.getKey(x) > t2 then
Q.decreaseKey(x, t2 )
end if
end for
Q.delete()
colour[u] ← BLACK
dist[u] ← t1
end while
return dist
end
minimum value of dist. The obvious method of finding the minimum is simply to
scan through array dist sequentially, so that a is of order n, and the running time of
Dijkstra is therefore Θ(n²). Dijkstra himself originally used an adjacency matrix and
scanning of the dist array.
The above analysis is strongly reminiscent of our analysis of graph traversals in
Section 5.1, and in fact Dijkstra’s algorithm fits into the priority-first search frame-
work discussed in Section 5.5. The key value associated to a node u is simply the
value dist[u], the current best distance to that node from the root s. In Figure 6.4 we
present Dijkstra’s algorithm in this way.
It is now clear from this formulation that we need to perform n delete-min oper-
ations and at most m decrease-key operations, and that these dominate the running
algorithm BellmanFord
Input: weighted digraph (G, c); node s
begin
array dist[0..n − 1]
for u ∈ V (G) do
dist[u] ← ∞
end for
dist[s] ← 0
for i from 0 to n − 1 do
for x ∈ V (G) do
for v ∈ V (G) do
dist[v] ← min(dist[v], dist[x] + c(x, v))
end for
end for
end for
return dist
end
time. Hence using a binary heap (see Section 2.5), we can make Dijkstra’s algorithm
run in time O((n + m) log n). Thus if every node is reachable from the source, it runs
in time O(m log n).
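For illustration, here is how the priority-first formulation might look in Java using java.util.PriorityQueue (a sketch of ours; the weighted digraph is assumed to be stored as adjacency lists of (target, cost) pairs). The library queue has no decreaseKey operation, so this sketch uses the common lazy-deletion variant: a node may be inserted more than once and stale entries are skipped when removed, which still gives O((n + m) log n) time.

import java.util.*;

public class DijkstraPQ
{
    // adj.get(u) holds arcs {v, c(u,v)} with nonnegative costs.
    // Returns dist[v] = minimum cost of a path from s to v (Long.MAX_VALUE if unreachable).
    public static long[] shortestPaths(List<List<int[]>> adj, int s)
    {
        int n = adj.size();
        long[] dist = new long[n];
        Arrays.fill(dist, Long.MAX_VALUE);
        dist[s] = 0;
        boolean[] done = new boolean[n];             // nodes already moved into S
        PriorityQueue<long[]> pq =                   // queue entries are {distance, node}
            new PriorityQueue<>(Comparator.comparingLong((long[] e) -> e[0]));
        pq.add(new long[] { 0, s });
        while (!pq.isEmpty()) {
            long[] top = pq.remove();
            int u = (int) top[1];
            if (done[u]) continue;                   // stale entry left by lazy deletion
            done[u] = true;
            for (int[] arc : adj.get(u)) {
                int v = arc[0];
                long t = dist[u] + arc[1];
                if (t < dist[v]) {                   // the update formula
                    dist[v] = t;
                    pq.add(new long[] { t, v });     // re-insert instead of decreaseKey
                }
            }
        }
        return dist;
    }
}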
The quest to improve the complexity of algorithms like Dijkstra’s has led to some
very sophisticated data structures that can implement the priority queue in such a
way that the decrease-key operation is faster than in a heap, without sacrificing the
delete-min or other operations. Many such data structures have been found, mostly
complicated variations on heaps; some of them are called Fibonacci heaps and 2–3
heaps. The best complexity bound for Dijkstra’s algorithm, using a Fibonacci heap,
is O(m + n log n).
Bellman–Ford algorithm
This algorithm, unlike Dijkstra’s, handles negative weight arcs, but runs more slowly than Dijkstra’s when all arcs are nonnegative. The basic idea, as with Dijkstra’s al-
gorithm, is to solve the SSSP under restrictions that become progressively more re-
laxed. Dijkstra’s algorithm solves the problem one node at a time based on their
current distance estimate.
In contrast, the Bellman–Ford algorithm solves the problem for all nodes at “level”
0, 1, . . . , n − 1 in turn. By level we mean the minimum possible number of arcs in a
minimum weight path to that node from the source.
Theorem 6.9. Suppose that G contains no negative weight cycles. Then after the i-th
iteration of the outer for-loop, dist[v] contains the minimum weight of a path to v for
all nodes v with level at most i.
Proof. Note that as for Dijkstra, the update formula is such that dist values never
increase.
We use induction on i. When i = 0 the result is true because of our initialization.
Suppose it is true for i − 1. Let v be a node at level i, and let γ be a minimum weight
path from s to v. Since there are no negative weight cycles, γ has i arcs. If y is the
last node of γ before v, and γ1 the subpath to y, then by the inductive hypothesis
we have dist[y] ≤ |γ1 |. Thus by the update formula we have dist[v] ≤ dist[y] + c(y, v) ≤
|γ1 | + c(y, v) ≤ |γ| as required.
The Bellman–Ford algorithm runs in time Θ(nm) using adjacency lists, since the
statement in the inner for-loop need only be executed if v is adjacent to x, and the
outer loop runs n times. Using an adjacency matrix it runs in time Θ(n³).
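A direct Java rendering of the Bellman–Ford pseudocode (a sketch of ours, using adjacency lists of (target, cost) pairs and a large sentinel instead of ∞) is given below; the inner work is only done for arcs that actually exist, giving the Θ(nm) behaviour just described.

import java.util.*;

public class BellmanFord
{
    // adj.get(u) holds arcs {v, c(u,v)}; negative costs are allowed, but the
    // digraph is assumed to contain no negative weight cycle.
    public static long[] shortestPaths(List<List<int[]>> adj, int s)
    {
        int n = adj.size();
        final long INF = Long.MAX_VALUE / 2;         // "infinity", safe against overflow
        long[] dist = new long[n];
        Arrays.fill(dist, INF);
        dist[s] = 0;
        for (int i = 1; i < n; i++) {                // a minimum weight path has at most n-1 arcs
            for (int x = 0; x < n; x++) {
                if (dist[x] >= INF) continue;        // nothing useful to relax from x yet
                for (int[] arc : adj.get(x)) {
                    int v = arc[0];
                    long t = dist[x] + arc[1];
                    if (t < dist[v]) dist[v] = t;    // the update formula
                }
            }
        }
        return dist;
    }
}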
Exercises
Exercise 6.3.1. Run the Bellman–Ford algorithm on the digraph with weighted adja-
cency matrix given below. Choose each node as the source in turn as in Example 6.7.
0 6 0 0 7
0 0 5 −4 8
0 −2 0 0 0
2 0 7 0 0
0 0 −3 9 0
Exercise 6.3.2. Explain why the SSSP problem makes no sense if we allow digraphs
with cycles of negative total weight.
Exercise 6.3.3. The graph shows minimum legal driving times (in multiples of 5
minutes) between various South Island towns. What is the shortest time to drive
legally from Picton to (a) Wanaka, (b) Queenstown and (c) Invercargill? Explain
which algorithm you use and show your work.
[Figure: a road map of the South Island towns PICTON, BLENHEIM, NELSON, MURCHISON, GREYMOUTH, CHRISTCHURCH, OMARAMA, WANAKA, CROMWELL, QUEENSTOWN, DUNEDIN and INVERCARGILL, with the minimum legal driving time (in multiples of 5 minutes) on each road.]
Exercise 6.3.4. Suppose the input to the Bellman–Ford algorithm is a digraph with
a negative weight cycle. How does the algorithm detect this, so it can exit gracefully
with an error message?
Exercise 6.3.5. Give an example to show that Dijkstra’s algorithm may fail to give
the correct answer if some weights are negative. Make your example as small as
possible. Then run the Bellman–Ford algorithm on the example and verify that it
gives the correct answer.
Exercise 6.3.6. Where in the proof of Dijkstra’s algorithm do we use the fact that all
the arc weights are nonnegative?
Example 6.10. For the digraph of Figure 6.1, we have already calculated the all-pairs
distance matrix in Example 6.7:
0 1 4 3
4 0 8 2
6 2 0 4 .
2 3 6 0
algorithm Floyd
Input: weighted digraph (G, c)
begin
array d[0..n − 1, 0..n − 1]
for u ∈ V (G) do
for v ∈ V (G) do
d[u, v] ← c(u, v)
end for
end for
for x ∈ V (G) do
for u ∈ V (G) do
for v ∈ V (G) do
d[u, v] ← min(d[u, v], d[u, x] + d[x, v])
end for
end for
end for
return d
end
Clearly we may compute this matrix as above by solving the single-source short-
est path problem with each node taken as the root in turn. The time complexity is
of course Θ(nA) where A is the complexity of our single-source algorithm. Thus run-
ning the adjacency matrix version of Dijkstra n times gives a Θ(n³) algorithm, and the Bellman–Ford algorithm Θ(n²m).
There is a simpler method discovered by R. W. Floyd. Like the Bellman–Ford algorithm, it is an example of an algorithm design technique called dynamic programming. This is where smaller, less-difficult subproblems are first solved, and the solutions recorded, before the full problem is solved. Floyd’s algorithm computes a distance matrix from a cost matrix in time Θ(n³). It is faster than repeated Bellman–
Ford for dense digraphs and unlike Dijkstra’s algorithm, it can handle negative costs.
For sparse graphs with positive costs repeated Dijkstra is competitive with Floyd, but
for dense graphs they have the same asymptotic complexity. A key point in favour
of Floyd’s algorithm is its simplicity, as can be seen from the algorithm of Figure 6.6.
Floyd’s algorithm is basically a simple triple for-loop.
Note. Observe that we are altering the value of d[u, v] in the update formula. If we
already have a weighted adjacency matrix d, there is no need for the first double
loop. We simply overwrite entries in d via the update formula, and everything works.
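For example, a minimal in-place Java version (ours) is just the triple for-loop; the matrix is assumed to use 0 on the diagonal and a large sentinel INF for missing arcs, chosen small enough that INF + INF does not overflow.

public class Floyd
{
    // d[u][v] is the cost of arc (u,v), INF if the arc is absent, and d[u][u] = 0.
    // On return, d[u][v] is the minimum weight of a path from u to v.
    public static void allPairs(long[][] d)
    {
        int n = d.length;
        for (int x = 0; x < n; x++)                  // intermediate node allowed from now on
            for (int u = 0; u < n; u++)
                for (int v = 0; v < n; v++)
                    if (d[u][x] + d[x][v] < d[u][v]) // the update formula, applied in place
                        d[u][v] = d[u][x] + d[x][v];
    }
}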
Example 6.11. An application of Floyd’s algorithm on the third graph of Figure 6.1
is given below. The initial cost matrix is as follows.
0 4 1 ∞ 4 ∞
4 0 ∞ 2 3 4
1 ∞ 0 ∞ 3 ∞
∞ 2 ∞ 0 ∞ 1
4 3 3 ∞ 0 2
∞ 4 ∞ 1 2 0
In the matrices below, the index k refers to the number of times we have been
through the outer for-loop.
k = 1:
0 4 1 ∞ 4 ∞
4 0 5 2 3 4
1 5 0 ∞ 3 ∞
∞ 2 ∞ 0 ∞ 1
4 3 3 ∞ 0 2
∞ 4 ∞ 1 2 0

k = 2:
0 4 1 6 4 8
4 0 5 2 3 4
1 5 0 7 3 9
6 2 7 0 5 1
4 3 3 5 0 2
8 4 9 1 2 0

k = 3 (no entries change):
0 4 1 6 4 8
4 0 5 2 3 4
1 5 0 7 3 9
6 2 7 0 5 1
4 3 3 5 0 2
8 4 9 1 2 0

k = 4:
0 4 1 6 4 7
4 0 5 2 3 3
1 5 0 7 3 8
6 2 7 0 5 1
4 3 3 5 0 2
7 3 8 1 2 0

k = 5:
0 4 1 6 4 6
4 0 5 2 3 3
1 5 0 7 3 5
6 2 7 0 5 1
4 3 3 5 0 2
6 3 5 1 2 0

k = 6:
0 4 1 6 4 6
4 0 5 2 3 3
1 5 0 6 3 5
6 2 6 0 3 1
4 3 3 3 0 2
6 3 5 1 2 0
Only a few entries of the matrix change after each increment of k. Notice that undirected graphs, as expected, have symmetric distance matrices.
Why does Floyd’s algorithm work? The proof is again by induction.
Theorem 6.12. At the bottom of the outer for loop, for all nodes u and v, d[u, v] con-
tains the minimum length of all paths from u to v that are restricted to using only
intermediate nodes that have been seen in the outer for loop.
Note. Given this fact, when the algorithm terminates, all nodes have been seen in
the outer for loop and so d[u, v] is the length of a shortest path from u to v.
Proof. To establish the above property, we use induction on the outer for-loop. Let
Sk be the set of nodes seen after k times through the outer loop, and define an Sk -path
to be one all of whose intermediate nodes belong to Sk . The corresponding value of d
is denoted dk . We need to show that for all k, after k times through the outer for-loop,
dk [u, v] is the minimum length of an Sk -path from u to v.
When k = 0, S0 = ∅ and the result holds. Suppose it is true after k times through the
outer loop and consider what happens at the end of the (k + 1)-st time through the
outer loop. Suppose that x was the last node seen in the outer loop, so Sk+1 = Sk ∪ {x}.
Fix u, v ∈ V (G) and let L be the minimum length of an Sk+1 -path from u to v. Obviously
L ≤ dk+1 [u, v]; we show that dk+1 [u, v] ≤ L.
Choose an Sk+1 -path γ from u to v of length L. If x is not involved then the result
follows by inductive hypothesis. If x is involved, let γ1 , γ2 be the subpaths from u to x
and x to v respectively. Then γ1 and γ2 are Sk -paths and by the inductive hypothesis,
L ≥ |γ1 | + |γ2 | ≥ dk [u, x] + dk [x, v] ≥ dk+1 [u, v].
The proof does not use the fact that weights are nonnegative—in fact Floyd’s al-
gorithm works for negative weights (provided of course that a negative weight cycle
is not present).
Exercises
Exercise 6.4.1. Run Floyd’s algorithm on the matrix of Exercise 6.3.1 and check your
answer against what was obtained there.
Exercise 6.4.2. Suppose the input to Floyd’s algorithm is a digraph with a negative
weight cycle. How does the algorithm detect this, so it can exit gracefully with an
error message?
Exercise 6.4.3.
The matrix M shows costs of direct flights between towns A, B, C, D, E, F (where ∞,
as usual, means that no direct flight exists). You are given the job of finding the
cheapest route between each pair of towns. Solve this problem. Hint: save your
working.
M =
0 1 2 6 4 ∞
1 0 7 4 2 11
2 7 0 ∞ 6 4
6 4 ∞ 0 ∞ 1
4 2 6 ∞ 0 3
∞ 11 4 1 3 0
The next day, you are told that in towns D, E, F, political troubles mean that no
passenger is allowed to both take off and land there. Solve the problem with this
additional constraint.
for Kruskal O(m log n). The disjoint sets ADT can be implemented in such a way that
the union and find operations in Kruskal’s algorithm runs in almost linear time (the
exact bound is very complicated). So if the edge weights are presorted, or can be
sorted in linear time (for example, if they are known to be integers in a fixed range),
then Kruskal’s algorithm runs for practical purposes in linear time.
Exercises
Exercise 6.5.1. Carry out each of these algorithms on the weighted graph of Fig-
ure 6.1. Do the two algorithms give the same spanning tree?
Exercise 6.5.2. Prove the assertion made above that when Kruskal’s or Prim’s algo-
rithm terminates, the current set of edges forms a spanning tree.
algorithm Kruskal
Input: weighted graph (G, c)
begin
disjoint sets ADT A
initialize A with each vertex in its own set
sort the edges in increasing order of cost
for each edge {u, v} in increasing cost order do
if not A.set(u) = A.set(v) then
add this edge
A.union(A.set(u), A.set(v))
end if
end for
return A
end
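To make the disjoint sets ADT concrete, here is a compact Java sketch of Kruskal's algorithm (ours, not the book's library code) with union by size and path compression; edges are given as {u, v, cost} triples and sorted by increasing cost, mirroring the pseudocode above.

import java.util.*;

public class KruskalMST
{
    private static int find(int[] parent, int v)     // find with path compression (halving)
    {
        while (parent[v] != v) {
            parent[v] = parent[parent[v]];
            v = parent[v];
        }
        return v;
    }

    // edges[k] = {u, v, cost}. Returns the edges of a minimum spanning forest.
    public static List<int[]> kruskal(int n, int[][] edges)
    {
        int[] parent = new int[n];
        int[] size = new int[n];
        for (int v = 0; v < n; v++) { parent[v] = v; size[v] = 1; }
        int[][] sorted = edges.clone();
        Arrays.sort(sorted, Comparator.comparingInt((int[] e) -> e[2]));  // increasing cost
        List<int[]> tree = new ArrayList<>();
        for (int[] e : sorted) {
            int ru = find(parent, e[0]), rv = find(parent, e[1]);
            if (ru == rv) continue;                  // both ends already in the same set
            if (size[ru] < size[rv]) { int t = ru; ru = rv; rv = t; }
            parent[rv] = ru;                         // union by size
            size[ru] += size[rv];
            tree.add(e);
        }
        return tree;
    }
}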
Exercise 6.5.3. Consider the following algorithm for the MST problem. Repeatedly
delete edges from a connected graph G, at each step choosing the most expensive
edge we can, subject to maintaining connectedness. Does it solve the MST problem
sometimes? always?
Exercise 6.6.2. What is the exact relation between the independent set and vertex
cover problems?
6.7 Notes
Dijkstra’s algorithm was proposed by E. W. Dijkstra in 1959. The Bellman–Ford al-
gorithm was proposed independently by R. Bellman (1958) and L. R. Ford, Jr (1956).
Floyd’s algorithm was developed in 1962 by R. W. Floyd. Prim’s algorithm was pre-
sented by R. C. Prim in 1957 and reinvented by E. W. Dijkstra in 1959, but had been
previously introduced by V. Jarnik in 1930. Kruskal’s algorithm was introduced by J.
Kruskal in 1956.
Part III
Appendices
Appendix A
This appendix contains Java implementations for many of the common search and
sorting algorithms presented in the book.
// Main loop
while( left <= lend && right <= rend )
if ( a[ left ] < a[ right ] )
tmp[ tpos++ ] = a[ left++ ];
else
tmp[ tpos++ ] = a[ right++ ];
// Begin partitioning
int i, j;
for ( i = lo, j = hi - 1; ; ) {
while( a[ ++i ] < p );
while( p < a[ --j ] );
if ( i < j ) swap( a, i, j );
else break;
}
// Restore pivot
swap( a, i, hi - 1 );
// Sort small elements
quickSort( a, lo, i - 1 );
// Sort large elements
quickSort( a, i + 1, hi );
}
}
private static void quickSelect( int[] a, int lo, int hi, int k )
{
if ( lo + CUTOFF > hi ) {
insertionSort( a, lo, hi );
} else {
// Sort low, middle, high
int mi = ( lo + hi ) / 2;
if ( a[ mi ] < a[ lo ] ) swap( a, lo, mi );
if ( a[ hi ] < a[ lo ] ) swap( a, lo, hi );
if ( a[ hi ] < a[ mi ] ) swap( a, mi, hi );
// Begin partitioning
int i, j;
for( i = lo, j = hi - 1; ; ) {
while( a[ ++i ] < p );
while( p < a[ --j ] );
if ( i < j ) swap( a, i, j ); else break;
}
// Restore pivot
swap( a, i, hi - 1 );
while( lo <= hi ) {
mi = ( lo + hi ) / 2;
if ( a[ mi ] < key ) lo = mi + 1;
else if( a[ mi ] > key ) hi = mi - 1;
else return mi;
}
throw new ItemNotFound( "BinarySearch fails" );
}
int lo = 0;
int hi = a.length - 1;
int mi;
while( lo < hi ) {
mi = ( lo + hi ) / 2;
if ( a[ mi ] < key ) lo = mi + 1;
else hi = mi;
}
if ( a[ lo ] == key ) return lo;
throw new ItemNotFound( "BinarySearch fails" );
}
Appendix B
This appendix presents a simplified abstract class for representing a graph abstract
data type (ADT). Although it is fully functional, it purposely omits most exception
handling and other niceties that should be in any commercial level package. These
details would distract from our overall (introductory) goal of showing how to imple-
ment a basic graph class in Java.
Our plan is to have a common data structure that represents both graphs and
digraphs. A graph will be a digraph with anti-parallel arcs; that is, if (u, v) ∈ E then
(v, u) ∈ E also. The initial abstract class presented below requires a core set of meth-
ods needed for the realized graph ADT. It will be extended with the actual internal
data structure representation in the form of adjacency matrix or adjacency lists (or
whatever the designer picks).
package graphADT;
import java.util.ArrayList;
import java.io.BufferedReader;
/*
* Current Abstract Data Type interface for (di)graph classes.
*/
public interface Graph
{
// Need default, copy and BufferedReader constructors
// (commented since Java doesn’t allow abstract constructors!)
//
// public GraphADT();
// public GraphADT(GraphADT);
// public GraphADT(BufferedReader in);
Right from the beginning we get in trouble since Java does not allow abstract con-
structors. We will leave these as comments and hope the graph class designer will
abide by them. We want to create graphs from an empty graph, copy an existing
graph, or read in one from some external source. In the case of a BufferedReader
constructor the user has to attach one to a string, file or keyboard. We will see exam-
ples later.
We now proceed by presenting the alteration methods required for our graph
class interface.
// data structure modifiers
//
void addVertices(int i); // Add some vertices
void removeVertex(int i); // Remove vertex
This small set of methods allows one to build the graph. We will soon explicitly
define the methods for adding or deleting edges in terms of the two arc methods. An
extended class can override these to improve efficiency if it wants. We now list a few
methods for extracting information from a graph object.
// data structure queries
//
boolean isArc(int i, int j); // Check for arcs
boolean isEdge(int i, int j); // Check for edges
We have the toString method as an interface requirement for the derived classes
to define. We want a BufferedReader constructor for a graph class to accept its own
toString output. Two common external graph representations are handled by the
methods given below.
public String toStringAdjMatrix()
{
StringBuffer o = new StringBuffer();
o.append(order()+"\n");
To make things convenient for ourselves we require that the first line of our (two)
external graph representations contain the number of vertices. Strictly speaking,
this is not needed for a 0/1 adjacency matrix. This makes our parsing job easier
and this format allows us to store more than one graph per input stream. (We can
terminate a stream of graphs with a sentinel graph of order zero.)
import java.io.*;
import java.util.*;
The default constructor simply creates an empty graph and thus there is no need
to allocate any space. The two copy constructors simply copy onto a new n-by-n ma-
trix the boolean adjacency values of the old graph. Notice that we want new storage
and not an object reference for the copy.
An alternative implementation (as given in the first edition of this textbook) would
also keep an integer variable space to represent the total space allocated. Whenever
we delete vertices we do not want to reallocate a new matrix but to reshuffle the en-
tries into the upper sub-matrix. Then whenever adding more vertices we just extend
the dimension of the sub-matrix.
Our last input constructor for GraphAdjMatrix is now given.
public GraphAdjMatrix(BufferedReader buffer)
{
try
{
String line = buffer.readLine().trim();
String[] tokens = line.split("\\s+");
if (tokens.length != 1)
{
throw new Error("bad format: number of vertices");
}
int n = order = Integer.parseInt(tokens[0]);
We next define several methods for altering this graph data structure. The first
two methods allow the user to add or delete vertices from a graph.
// Mutator Methods
//
public void addVertices(int n)
{
assert(0 <= n );
boolean matrix[][] = new boolean[order+n][order+n];
Next, we have four relatively trivial methods for adding and deleting arcs (and
edges). As with the other mutator methods, we add some important assert statements for checking valid vertex indices; these can be turned on with an option to the java compiler when debugging graph algorithms.
// Mutator Methods (cont.)
The methods to access properties of the graph are also pretty straightforward.
// Access Methods
//
public boolean isArc(int i, int j)
{
assert(0 <= i && i < order);
assert(0 <= j && j < order);
return adj[i][j];
}
public boolean isEdge(int i, int j)
{
return isArc(i,j) && isArc(j,i);
}
return nbrs;
}
The order of the graph is stored in an integer variable _order. However, we have
to count all true entries in the boolean adjacency matrix to return the size. Notice
that if we are working with an undirected graph this returns twice the expected num-
ber (since we store each edge as two arcs). If we specialize this class we may want to
uncomment the indicated statements to autodetect undirected graphs (whenever
the matrix is symmetric). It is probably safer to leave it as it is written, with the
understanding that the user knows how size is defined for this implementation of
Graph.
// default output is readable by constructor
//
public String toString() { return toStringAdjMatrix(); }
import java.io.*;
import java.util.*;
public GraphAdjLists()
{
adj = new ArrayList<ArrayList<Integer>>();
}
public GraphAdjLists(Graph G)
{
int n = G.order();
adj = new ArrayList<ArrayList<Integer>>();
for (int i = 0; i < n; i++)
{
adj.add(G.neighbours(i));
}
}
if (tokens.length != 1)
{
throw new Error("bad format: number of vertices");
}
Our stream constructor reads in an integer denoting the order n of the graph and
then reads in n lines denoting the adjacency lists. Notice that we do not check for
correctness of the data. For example, a graph of 5 vertices could have erroneous
adjacency lists with numbers outside the range 0 to 4. We leave these robustness
considerations for an extended class to fulfil, if desired. Also note that we do not list
the vertex index in front of the individual lists and we use white space to separate
items. A blank line indicates an empty list (that is, no neighbours) for a vertex.
// Mutator Methods
//
public void addVertices(int n)
{
assert(0 <= n);
if ( n > 0 )
{
for (int i = 0; i < n; i++)
{
adj.add(new ArrayList<Integer>());
}
}
}
Adding vertices is easy for our adjacency lists representation. Here we just ex-
pand the internal _adj list by appending new empty lists. The removeVertex method
is a little complicated in that we have to scan each list to remove arcs pointing to the
vertex being deleted. We also have chosen to relabel vertices so that there are no
gaps (that is, we want vertex indexed by i to be labeled Integer(i)). A good ques-
tion would be to find a more efficient removeVertex method. One way would be
to also keep an in-neighbour list for each vertex. However, the extra data structure
overhead is not desirable for our simple implementation.
public void addArc(int i, int j)
{
assert(0 <= i && i < order());
assert(0 <= j && j < order());
if (isArc(i,j)) return;
(adj.get(i)).add(j);
}
Adding and removing arcs is easy since the methods to do this exist in the ArrayList class. All we have to do is access the appropriate adjacency list. We have decided
to place a safeguard in the addArc method to prevent parallel arcs from being added
between two vertices.
// Access Methods
//
public boolean isArc(int i, int j)
{
assert(0 <= i && i < order());
assert(0 <= j && j < order());
return (adj.get(i)).contains(new Integer(j));
}
Note how we assume that the contains method of an ArrayList object does a data equality check and not just a reference check. The outDegree method probably
runs in constant time since we just return the list’s size. However, the inDegree
method has to check all adjacency lists and could have to inspect all arcs of the
graph/digraph.
public ArrayList<Integer> neighbours(int i)
{
assert(0 <= i && i < order());
ArrayList<Integer> nei = new ArrayList<Integer>();
for (Integer vert : adj.get(i))
{
nei.add(vert);
}
return nei;
//return (ArrayList<Integer>)(adj.get(i)).clone();
}
We do not want to have any internal references to the graph data structure being
available to non-class members. Thus, we elected to return a clone of the adjacency
list for our neighbours method. We did not want to keep redundant data so the order
of our graph is simply the size of the adj list.
// default output readable by constructor
//
public String toString() { return toStringAdjLists(); }
Again, we have the default output format for this class be compatible with the
constructor BufferedReader. (The method toStringAdjLists is defined on page 180.)
We next present a simple test program for how one would use our graph imple-
mentations. We encourage the reader to trace through the steps and to try to obtain
the same output.
import java.io.*; import graphADT.*;
G1.addVertices(5);
G1.addArc(0,2); G1.addArc(0,3); G1.addEdge(1,2);
G1.addArc(2,3); G1.addArc(2,0); G1.addArc(2,4);
G1.addArc(3,2); G1.addEdge(4,1); G1.addArc(4,2);
System.out.println(G1);
Graph G2 = new GraphAdjMatrix(G1);
System.out.println(G2);
G3.addVertices(2);
G3.addArc(5,4); G3.addArc(5,2); G3.addArc(5,6);
G3.addArc(2,6); G3.addArc(0,6); G3.addArc(6,0);
System.out.println(G3);
System.out.println(G4);
}
} // test
The expected output, using JDK version 1.6, is given in Figure B.1. Note that the
last version of the digraph G has a vertex of out-degree zero in the adjacency lists.
(To compile our program we type ‘javac test.java’ and to execute it we type ‘java test’
at our command-line prompt ‘$’.)
5
0 0 1 1 0
0 0 1 0 1
0 1 0 0 1
0 0 1 0 0
0 0 1 0 0
7
2 3 6
2 4
1 4 6
2
2
4 2 6
0
7
2 3
2 6
1 5
2
2 5
public wGraphMatrix()
{
order = 0;
}
public wGraphMatrix(wGraphMatrix G)
{
int n = order = G.order();
if ( n > 0 )
{
adjW = new Weight[n][n];
}
if (tokens.length != 1)
{
throw new Error("bad format: number of vertices");
}
int n = order = Integer.parseInt(tokens[0]);
if ( n > 0 )
{
adjW = new Weight[n][n];
}
// mutator methods
// accessor methods
One thing to note is that if one wants to output the underlying graph represen-
tation (that is, without weights), one can simply call the toString method through a Graph reference.
We assume that the reader is familiar with basic data structures such as arrays and
with the basic data types built in to most programming languages (such as integer,
floating point, string, etc). Many programming applications require the program-
mer to create complicated combinations of the built-in structures. Some languages
make this easy by allowing the user to define new data types (for example Java or C++
classes), and others do not (for example C, Fortran). These new data types are con-
crete implementations in the given language of abstract data types (ADT s), which
are mathematically specified.
• isEmpty(ε) = 1
• pop(push(S, x)) = S
• peek(push(S, x)) = x
Appendix D
Mathematical Background
We collect here some basic useful facts, all of which can be found in standard text-
books on calculus and discrete mathematics, to which the reader should refer for
proofs.
D.1 Sets
A set is a collection of distinguishable objects (its elements) whose order is unim-
portant. Two sets are the same if and only if they have the same elements. We denote
the statement that x is an element of the set X by x ∈ X and the negation of this state-
ment by x 6∈ X. We can list finite sets using the braces notation: for example, the set S
consisting of all integers between 1 and 10 that are divisible by 3 is denoted {3, 6, 9}.
A subset of a set X is a set all of whose elements are also elements of X. Each set has
precisely one subset with zero elements, the empty set, which is denoted ∅. A subset
can be described by set-builder notation; for example, the subset of S consisting of
all multiples of 3 between 1 and 7 can be written {s | s ∈ S and s ≤ 7}.
For sets X and Y , the union and intersection of X and Y are, respectively, the sets
defined as follows (note that the “or” is inclusive, so “P or Q” is true if and only if P is
true, Q is true, or both P and Q are true):
X ∪Y = {x | x ∈ X or x ∈ Y }
X ∩Y = {x | x ∈ X and x ∈ Y }.
(n + 1)⁴ ≤ 16n⁴
• for each n ≥ n0 , if P(i) is true for each i with n0 ≤ i ≤ n, then P(n + 1) is true
D.3 Relations
A relation on a set S is a set R of ordered pairs of elements of S, that is, a subset
of S × S. If (x, y) is such a pair, we say that x is related to y and sometimes write x R y.
An example is the relation of divisibility on the positive integers; x R y if and only if y
is a multiple of x. Here 2 R 12, 1 R x for every x, and x R 1 only if x = 1.
There are some special types of relations that are important for our purposes.
An equivalence relation is a relation that is reflexive, symmetric and transitive.
That is, we have for every x, y, z ∈ S
• xRx
• if xRy then yRx
• if xRy and yRz then xRz
An equivalence relation amounts to the same thing as a partition: a decomposi-
tion of S as a union of disjoint subsets. Each subset consists of all elements that are
related to any one of them, and no elements in different subsets of the partition are
related.
Examples of equivalence relations: “having the same mother” on the set of all
humans; “being divisible by 7” on the set of all positive integers; “being mutually
reachable via a path in a given graph”.
Another important type of relation is a partial order. This is a relation that is
reflexive, antisymmetric and transitive. Antisymmetry means that if x R y and y R x
then x = y. Examples are: x R y if and only if x is a factor of y, where x and y are
positive integers.
A linear order or total order is a partial order where every pair of elements is
related. For example, the usual relation ≤ on the real numbers. The elements of a
finite set with a total order can be arranged in a line so that each is related to the next
and none is related to any preceding element.
The last rule is easily proven by taking logarithm to base x of each side of the equality.
The notation logₑ = ln is commonly used, as also is log₂ = lg.
Often we want to convert the real values returned from functions like logarithms
to integers. Let x be a real number. The floor ⌊x⌋ of x is the greatest integer not greater
than x, and the ceiling ⌈x⌉ of x is the least integer not less than x. If x is an integer,
then ⌊x⌋ = x = ⌈x⌉.
which can be proved by induction. Similar explicit formulae hold for the sum ∑_{i=1}^{n} i^p
where p is a fixed positive integer, but they become rather complicated. More useful
for our purposes is the asymptotic formula for large n

∑_{i=1}^{n} i^p ∈ Θ(n^{p+1})

which also holds for negative integers p provided p ≠ −1. When p = −1 we have an
asymptotic statement about the harmonic numbers Hn = ∑_{i=1}^{n} i⁻¹,

∑_{i=1}^{n} 1/i ∈ Θ(log n).
and so we have
(n⁴ − 1)/4 ≤ ∑_{i=1}^{n} i³ ≤ ((n + 1)⁴ − 1)/4.

This easily yields that ∑_{i=1}^{n} i³ is Θ(n⁴) since (n + 1)⁴/n⁴ ≤ 16 for n ≥ 1 and n⁴ − 1 ≥ 15n⁴/16 for n ≥ 2.
D.7 Trees
A rooted ordered tree is what computer scientists call simply a “tree”. These trees
are defined recursively. An ordered rooted tree is either a single node or a distin-
guished node (the root ) attached to some ordered rooted trees given in a fixed order
(hence such a tree is defined recursively). In a picture, these subtrees are usually
drawn from left to right below the parent node. The parent of a node is defined as
follows. The root has no parent. Otherwise the node was attached in some recursive
call, and the root it was attached to at that time is its parent. The roots of the subtrees
in the definition are the children of the root. A rooted ordered tree can be thought
of as a digraph in which there is an arc from each node to each of its children.
A node with no children is called a leaf . The depth of a node is the distance from
the root to that node (the length of the unique path between them). The height of a
node is the length of a longest path from the node to a leaf. The height of a tree is the
height of the root. Note that a tree with a single node has height zero. Some other
books use a definition of height whose value equals the value given by our definition,
plus one.
A binary tree is an ordered rooted tree where the number of children of each
node is always 0, 1, or 2.
A free tree (what mathematicians call a tree) has no order (so a mirror image of
a picture of a tree is a picture of the same tree) and no distinguished root. Every
free tree can be given a root arbitrarily (in n ways, if the number of nodes is n), and
ordered in many different ways.
A free tree can be thought of as the underlying graph of an ordered rooted tree.
A free tree is a very special type of graph. First, if n is the number of nodes and e the
number of edges, then e = n − 1. To see this, note that in the underlying graph of an
ordered rooted tree, each edge connects a node with its parent. Each node except
one has a parent. Thus there is a one-to-one correspondence between nodes other
than the root and edges, yielding the result.
One can easily show that the following are equivalent for a graph G:
• G is a free tree.
Note that Lemma 1.19, the Limit Rule, can be used instead. In this case f (n) = 10n³ − 5n + 15; g(n) = n³, and f (n) ∈ Θ(g(n)) because lim_{n→∞} f (n)/g(n) = 10.
SOLUTION TO EXERCISE 1.3.3 ON PAGE 27:
As above, 10n³ − 5n + 15 ∈ Ω(n⁴) iff there exist a positive real constant c and a positive integer n0 such that the inequality 10n³ − 5n + 15 ≥ cn⁴ holds for all n > n0. We need to show that for any value of c and n0 this inequality, or what is the same, the reduced one, 10n⁻¹ − 5n⁻³ + 15n⁻⁴ ≥ c, does not hold true for all n > n0. We know lim_{n→∞}(10n⁻¹ − 5n⁻³ + 15n⁻⁴) = 0, so no matter which values c and n0 are picked, the inequality cannot be true for all n > n0. Therefore, 10n³ − 5n + 15 ∉ Ω(n⁴).
SOLUTION TO EXERCISE 1.3.5 ON PAGE 27:
To show that each function f (n) in Table 1.2 stands in “Big Oh” relation to the preceding one, g(n), that is, f (n) ∈ O(g(n)), it is sufficient to use the Limit Rule (Lemma 1.19) and show that lim_{n→∞} f (n)/g(n) = 0:
n ∈ O(n log n) because n/(n log n) = (log n)⁻¹ and lim_{n→∞}(log n)⁻¹ = 0;
n log n ∈ O(n^1.5) because n log n/n^1.5 = log n/n^0.5 and any positive power of n grows faster than any logarithm (Example 1.14): lim_{n→∞} log n/n^0.5 = 0;
n^1.5 ∈ O(n²) and n² ∈ O(n³) because higher powers of n grow faster than lower powers (Example 1.20);
n³ ∈ O(2ⁿ) because exponential functions with base greater than 1 grow faster than any positive power of n (Example 1.13): so lim_{n→∞} n³/2ⁿ = 0.
Lemma 1.17
Proof. It follows from g1 (n) ∈ O( f1 (n)) and g2 (n) ∈ O( f2 (n)) that g1 (n) ≤ c1 f1 (n) for all
n > n1 and g2 (n) ≤ c2 f2 (n) for all n > n2 , respectively, with positive real constants c1
and c2 and positive integers n1 and n2. Then for all n ≥ max{n1, n2} the following is also true:
g1(n) + g2(n) ≤ c1 f1(n) + c2 f2(n) ≤ max{c1, c2}( f1(n) + f2(n)).
But f1(n) + f2(n) ≤ 2 max{ f1(n), f2(n)}, so g1(n) + g2(n) ≤ c max{ f1(n), f2(n)} where c =
2 max{c1, c2}. Therefore, g1(n) + g2(n) ∈ O(max{ f1(n), f2(n)}), and the rule of sums for
“Big Oh” is true.
Lemma 1.18
Proof. It follows from g1 (n) ∈ O( f1 (n)) and g2 (n) ∈ O( f2 (n)) that g1 (n) ≤ c1 f1 (n) for all
n > n1 and g2 (n) ≤ c2 f2 (n) for all n > n2 , respectively, with positive real constants c1
and c2 and positive integers n1 and n2 . Then for all n ≥ max{n1 , n2 } this is also true:
g1 (n)g2 (n) ≤ c f1 (n) f2 (n) where c = c1 c2 . Therefore the rule of products for “Big Oh” is
true.
Proof. For any constant c > 0, the relationship c f(n) ≥ c f(n) holds (with equality) for all
n > 0, so c f(n) belongs to both O( f(n)) and Ω( f(n)) with the same constant c. Thus, constant
factors are ignored.
Proof. It follows from g1 (n) ∈ Ω( f1 (n)) and g2 (n) ∈ Ω( f2 (n)) that g1 (n) ≥ c1 f1 (n) for all
n > n1 and g2 (n) ≥ c2 f2 (n) for all n > n2 , respectively, with positive real constants c1
and c2 and positive integers n1 and n2. Then for all n ≥ max{n1, n2} the following is also true:
g1(n) + g2(n) ≥ c1 f1(n) + c2 f2(n) ≥ min{c1, c2}( f1(n) + f2(n)).
But f1(n) + f2(n) ≥ max{ f1(n), f2(n)}, so g1(n) + g2(n) ≥ c max{ f1(n), f2(n)} where c =
min{c1, c2}. Therefore, g1(n) + g2(n) ∈ Ω(max{ f1(n), f2(n)}), and the rule of sums for
“Big Omega” is true.
T(n) ≤ 2((n/2) lg(n/2)) + n − 1 = n lg n − 1 ≤ n lg n + n − 1;  n ≥ 4 even,
T(n) ≤ ((n + 1)/2) lg(n + 1) + ((n − 1)/2) lg(n − 1) − 1
     ≤ ((n + 1)/2)(lg n + 1) + ((n − 1)/2)(lg n + 1) − 1 = n lg n + n − 1;  n ≥ 3 odd.
or, what is the same, T(n) = cn log_k n. Therefore, T(n) ∈ O(n log n).
SOLUTION TO EXERCISE 1.5.4 ON PAGE 36:
Just as in the previous solution, substituting n = k^m into the recurrence T(n) = kT(n/k) +
ckn; T(1) = 0, produces T(k^m) = kT(k^{m−1}) + ck^{m+1}; T(1) = 0. Telescoping the latter
recurrence gives T(k^m) = ckm · k^m,
or, what is the same, T(n) = ckn log_k n. Therefore, T(n) ∈ O(n log n).
SOLUTION TO EXERCISE 1.6.1 ON PAGE 37:
Because n ∈ O(n log n), in the “Big Oh” sense the linear algorithm B has better perfor-
mance than the “n log n” algorithm A. But for small enough n, the latter algorithm is
faster, e.g. TA(10) = 50 and TB(10) = 400 elementary operations. The cutoff point is
where TA(n) = TB(n), that is, 5n log10 n = 40n, or log10 n = 8, or n = 10^8. Therefore, even
though algorithm B is faster in the “Big Oh” sense, this happens only when more than
100 million data items have to be processed.
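The following small Java program, an illustrative sketch only, tabulates the two cost functions from the exercise, TA(n) = 5n log10 n and TB(n) = 40n, near the cutoff n = 10^8; the class and method names are arbitrary.

// Numerically confirms the cutoff derived above, using the cost functions
// TA(n) = 5 n log10(n) and TB(n) = 40 n from the exercise.
public class Cutoff {
    static double tA(double n) { return 5 * n * Math.log10(n); }
    static double tB(double n) { return 40 * n; }

    public static void main(String[] args) {
        for (double n : new double[] {10, 1e4, 1e8, 1e9}) {
            System.out.printf("n = %.0e: TA = %.3e, TB = %.3e%n", n, tA(n), tB(n));
        }
        // TA(n) <= TB(n) exactly when log10(n) <= 8, that is, n <= 1e8.
    }
}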
SOLUTION TO EXERCISE 1.6.2 ON PAGE 38:
In the “Big Oh” sense, the average-case time complexity of the linear algorithm A is
larger than that of the “√n” algorithm B. But for a database of the given size, TA(10^9) = 10^6
and TB(10^9) = 1.58 × 10^7 elementary operations, so in this case algorithm A is, on
average, more than ten times faster than algorithm B. Because we can tolerate the
risk of an occasional long running time, which is more likely with the more
complex algorithm, algorithm A should be used.
SOLUTION TO EXERCISE 2.1.2 ON PAGE 41:
Regardless of the initial ordering of the list, selection sort searches at each iteration
i through the entire unsorted part, of size n − i + 1, and makes n − i comparisons
to find the minimum element, so in total it performs ∑_{i=1}^{n−1} i = n(n − 1)/2 ∈ Θ(n^2) comparisons in the
worst, average, and best case. The maximum number of data moves is n, because
each iteration moves at most one element into its correct position, and their average
number is n/2. Thus, in selection sort both the maximum and the average number of
data moves is Θ(n), while the number of comparisons is Θ(n^2).
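For illustration, here is a sketch of selection sort instrumented with counters; it is not the book's listing, and it counts a data move as one swap, which is slightly coarser than the per-element count used above. On the ten-element decreasing list from the next table it reports 45 = n(n − 1)/2 comparisons.

// A sketch of selection sort with comparison and swap counters, matching the
// counts discussed above (Theta(n^2) comparisons, O(n) moves). Illustrative only.
public class SelectionSortCount {
    public static void main(String[] args) {
        int[] a = {91, 70, 65, 50, 31, 25, 20, 15, 8, 2};
        long comparisons = 0, swaps = 0;
        for (int i = 0; i < a.length - 1; i++) {
            int min = i;
            for (int j = i + 1; j < a.length; j++) {
                comparisons++;
                if (a[j] < a[min]) min = j;
            }
            if (min != i) {               // move the minimum into position i
                int t = a[i]; a[i] = a[min]; a[min] = t;
                swaps++;
            }
        }
        System.out.println("comparisons = " + comparisons + ", swaps = " + swaps);
        // For n = 10 this prints comparisons = 45 = n(n-1)/2.
    }
}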
SOLUTION TO EXERCISE 2.2.1 ON PAGE 44:
Adding up the Ci and Mi columns in the next table gives, in total, 90 comparisons plus data
moves; the code sketch after the table recomputes these counts.
i  Ci  Mi  Data to sort
           91 70 65 50 31 25 20 15  8  2
1   1   1  70 91 65 50 31 25 20 15  8  2
2   2   2  65 70 91 50 31 25 20 15  8  2
3   3   3  50 65 70 91 31 25 20 15  8  2
4   4   4  31 50 65 70 91 25 20 15  8  2
5   5   5  25 31 50 65 70 91 20 15  8  2
6   6   6  20 25 31 50 65 70 91 15  8  2
7   7   7  15 20 25 31 50 65 70 91  8  2
8   8   8   8 15 20 25 31 50 65 70 91  2
9   9   9   2  8 15 20 25 31 50 65 70 91
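The following sketch (not the book's code) runs insertion sort on the same input and tallies comparisons and data moves; it reports 45 of each, 90 in total, in agreement with the table.

// Insertion sort instrumented to reproduce the totals in the table above:
// on this decreasing input the i-th insertion costs i comparisons and i moves.
public class InsertionSortCount {
    public static void main(String[] args) {
        int[] a = {91, 70, 65, 50, 31, 25, 20, 15, 8, 2};
        long comparisons = 0, moves = 0;
        for (int i = 1; i < a.length; i++) {
            int key = a[i];
            int j = i - 1;
            while (j >= 0) {
                comparisons++;
                if (a[j] <= key) break;
                a[j + 1] = a[j];   // shift one element to the right
                moves++;
                j--;
            }
            a[j + 1] = key;
        }
        System.out.println("comparisons = " + comparisons + ", moves = " + moves);
        // Prints comparisons = 45, moves = 45, i.e. 90 in total.
    }
}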
Position 1 2 3 4 5 6 7 8 9 10 11 12
Index 0 1 2 3 4 5 6 7 8 9 10 11
Array at step 1 91 75 70 31 65 50 25 20 15 2 8 85
Array at step 2 91 75 70 31 65 85 25 20 15 2 8 50
Array at step 3 91 75 85 31 65 70 25 20 15 2 8 50
Position 1 2 3 4 5 6 7 8 9
Index 0 1 2 3 4 5 6 7 8
Initial array 10 20 30 40 50 60 70 80 90
i=3 10 20 30 90 50 60 70 80 40
i=2 10 20 70 90 50 60 30 80 40
i=1 10 90 70 20 50 60 30 80 40
10 90 70 80 50 60 30 20 40
i=0 90 10 70 80 50 60 30 20 40
90 80 70 10 50 60 30 20 40
90 80 70 40 50 60 30 20 10
Max heap 90 80 70 40 50 60 30 20 10
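The trace above can be reproduced with the following bottom-up heap construction sketch, which calls siftDown for i = 3, 2, 1, 0 and prints the array after each call (so it shows the final row for each i, not the intermediate swaps). This is illustrative code, not the textbook's heap class.

import java.util.Arrays;

// Bottom-up max-heap construction consistent with the trace above.
public class BuildHeap {
    static void siftDown(int[] a, int i, int n) {
        while (2 * i + 1 < n) {
            int child = 2 * i + 1;                                  // left child
            if (child + 1 < n && a[child + 1] > a[child]) child++;  // pick the larger child
            if (a[i] >= a[child]) return;                           // heap order restored
            int t = a[i]; a[i] = a[child]; a[child] = t;            // swap with larger child
            i = child;
        }
    }

    public static void main(String[] args) {
        int[] a = {10, 20, 30, 40, 50, 60, 70, 80, 90};
        for (int i = a.length / 2 - 1; i >= 0; i--) {   // i = 3, 2, 1, 0
            siftDown(a, i, a.length);
            System.out.println("i = " + i + ": " + Arrays.toString(a));
        }
    }
}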
Havg(n!) = H(n!)/n! = log n! ≈ n log n − 1.44n
This means that the lower bound of the average-case complexity of sorting n
items by pairwise comparisons is Ω(n log n).
SOLUTION TO EXERCISE 2.7.2 ON PAGE 67:
The time complexity is linear, Θ(n), as it takes n steps to scan through the array a and
then a further n steps to print out the contents of t. Theorem 2.35 says that any
algorithm that sorts by comparing only pairs of elements must use at least ⌈log n!⌉
comparisons in the worst case. This algorithm uses the specific knowledge that the
contents of the array a are integers in the range 1..1000; it would not work if the keys
could only be compared with one another, since then their actual values would be
unknown.
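A sketch of the counting idea just described follows; the array names a and t are taken from the exercise, while the sample data and class name are arbitrary.

// Keys are known to be integers in the range 1..1000, so they can be tallied
// and output in sorted order in linear time without any pairwise comparisons.
public class CountingSort {
    public static void main(String[] args) {
        int[] a = {731, 15, 1000, 15, 2, 999};   // arbitrary sample input
        int[] t = new int[1001];                 // t[k] = number of occurrences of key k
        for (int key : a) t[key]++;              // one pass over a
        StringBuilder out = new StringBuilder();
        for (int k = 1; k <= 1000; k++)          // one pass over t
            for (int c = 0; c < t[k]; c++) out.append(k).append(' ');
        System.out.println(out.toString().trim());
    }
}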
SOLUTION TO EXERCISE 3.2.1 ON PAGE 76:
It will be identical to Figure 3.3 except that the very last step will not return 4; instead
it finds that a[m] > k, so r ← m − 1 and l > r, the loop terminates, and
“not found” is returned.
SOLUTION TO EXERCISE 3.2.2 ON PAGE 76:
Binary search halves the array at each step, so the worst case is when it does not
find the key until only one element is left in the range. Using the improved
binary search that makes only one comparison to split the array, we are looking for
the smallest integer k such that 2^k ≥ 10^6, namely k = ⌈lg 10^6⌉ = 20. Thus 20 comparisons
are needed to reduce the range to a single element, and in total there are 21 comparisons,
since one final comparison with the key is made at the end.
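One possible form of a binary search that spends a single comparison per halving step plus one final equality test, consistent with the 20 + 1 count above, is sketched below; it is not necessarily identical to the book's listing.

// Binary search with one comparison per halving step and one final equality test.
public class BinarySearchOneCompare {
    // Returns an index of k in the sorted array a, or -1 if k is absent.
    static int find(int[] a, int k) {
        if (a.length == 0) return -1;
        int l = 0, r = a.length - 1;
        while (l < r) {
            int m = (l + r) / 2;
            if (a[m] < k) l = m + 1;   // the single comparison per step
            else r = m;
        }
        return (a[l] == k) ? l : -1;   // final comparison with the key
    }

    public static void main(String[] args) {
        int[] a = {2, 8, 15, 20, 25, 31, 50, 65, 70, 91};
        System.out.println(find(a, 31));  // prints 5
        System.out.println(find(a, 7));   // prints -1
    }
}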
SOLUTION TO EXERCISE 3.2.5 ON PAGE 77:
1 3 4 1 8 2 15 3
0 2 6 12 18 1
1 1 1 1
h(s) = 31^{n−2}(31s[0] + s[1]) + 31^{n−4}(31s[2] + s[3]) + ... + (31s[n − 2] + s[n − 1])
v 0 1 2 3 4 5 6
seen[v] 0 2 1 6 7 8 9
done[v] 5 3 4 13 12 11 10
u\v   0     1      2        3      4        5        6
 0    -     Tree   Forward  Tree   Forward  Forward  Forward
 1    Back  -      Tree     Cross  Forward  Forward  Forward
 2    Back  Back   -        Cross  Forward  Tree     Forward
 3    Back  Cross  Cross    -      Cross    Cross    Cross
 4    Back  Back   Back     Cross  -        Back     Cross
 5    Back  Back   Back     Cross  Tree     -        Tree
 6    Back  Back   Back     Cross  Cross    Back     -
Of course, if some of these arcs existed, and DFS was run on the graph, some of
the timestamps would change.
(iii) No, because they are on different branches of the tree; hence if (3, 2) were an arc
in the graph, it would be a cross arc.
(iv) No, because then, when DFS is at time 4, it would expand node 3 instead of
node 4, and the DFS tree would be entirely different.
(v) Yes: because it is DFS, the tree would be the same (though not if it were BFS), and
the arc would be a forward arc.
SOLUTION TO EXERCISE 5.3.9 ON PAGE 129:
The order of the nodes of the digraph in the seen array equals the preorder labelling
of the nodes, and the order of the nodes in the done array equals
the postorder labelling of the nodes.
SOLUTION TO EXERCISE 5.3.10 ON PAGE 129:
To prove the statement by induction, we need both a base case and an inductive step.
The base case is when node s has no white neighbours; then the algorithm does no
recursion and returns, so recursiveDFSvisit visits only node s, as intended.
For the inductive step, assume the theorem holds for every white neighbour of node s;
we show that it then holds for s as well. Every node reachable from s via a path of
white nodes is reached through a path whose first node is one of the white neighbours
of s. The call to recursiveDFSvisit with input s terminates only after the recursive
calls on each of the white neighbours of s have finished, and together these recursive
calls cover every path of white nodes leading out of s, so every node reachable from s
by such a path is visited. This establishes the inductive step.
Finally, because each path cannot have a loop in it, there is a finite number of
recursions and recursiveDFSvisit is guaranteed to terminate.
Therefore, by mathematical induction, Theorem 5.4 is true.
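A minimal sketch of recursiveDFSvisit in Java, using the usual white/grey/black colouring, is given below; the adjacency-list representation and names are assumptions made for illustration and do not come from the book's graph classes.

import java.util.Arrays;
import java.util.List;

// Illustrative sketch of recursiveDFSvisit with white/grey/black colouring.
public class RecursiveDFS {
    static final int WHITE = 0, GREY = 1, BLACK = 2;

    // adj.get(v) is the list of neighbours of v; colour[v] starts as WHITE.
    static void recursiveDFSvisit(List<List<Integer>> adj, int[] colour, int s) {
        colour[s] = GREY;                       // s has now been seen
        for (int w : adj.get(s))
            if (colour[w] == WHITE)             // recurse only into white neighbours
                recursiveDFSvisit(adj, colour, w);
        colour[s] = BLACK;                      // s is done: every node reachable from s
                                                // by a path of white nodes has been visited
    }

    public static void main(String[] args) {
        // A small digraph: 0 -> 1, 0 -> 2, 1 -> 2
        List<List<Integer>> adj = List.of(List.of(1, 2), List.of(2), List.of());
        int[] colour = new int[3];              // all WHITE initially
        recursiveDFSvisit(adj, colour, 0);
        System.out.println(Arrays.toString(colour));   // [2, 2, 2]: all reachable nodes are BLACK
    }
}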
SOLUTION TO EXERCISE 5.6.2 ON PAGE 137:
By Theorem 5.11, every DAG has a topological ordering (v1, v2, . . . , vn), that is, an
ordering with no arcs (vi, vj) ∈ E(G) such that i > j. This means that no arc goes from
right to left in the topological ordering, so node v1 has no arcs coming
into it and node vn has no arcs going out of it; they are, respectively, a source
and a sink. Therefore every DAG has at least one source and at least one sink.
SOLUTION TO EXERCISE 5.6.4 ON PAGE 137:
Shirt, hat, tie, jacket, glasses, underwear, trousers, socks, shoes, belt.
SOLUTION TO EXERCISE 5.6.5 ON PAGE 137:
The standard implementation uses an array of indegrees. This can be computed in
time O(n + m) from adjacency lists (or in time Θ(n^2) from an adjacency matrix). The
algorithm can find a node v of indegree 0 in time O(n) and, with adjacency lists, can
decrement the indegree of each neighbour of v in constant time per neighbour. Since
there are at most m decrements of entries of the indegree array in total, the running
time is at most O(n^2 + m). If a priority queue is used to extract nodes of indegree 0,
the running time improves slightly; a sketch along these lines follows.
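Below is an illustrative sketch of the indegree-based algorithm; instead of a priority queue it keeps a plain FIFO queue of indegree-0 nodes, which already avoids the O(n) scan and handles each node and arc a constant number of times, giving O(n + m) time. The adjacency-list representation and names are assumptions.

import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// Indegree-based topological sort using a queue of indegree-0 nodes.
public class TopologicalSort {
    static List<Integer> topoOrder(List<List<Integer>> adj) {
        int n = adj.size();
        int[] indegree = new int[n];
        for (List<Integer> out : adj)
            for (int w : out) indegree[w]++;          // O(n + m) to compute indegrees

        Deque<Integer> zero = new ArrayDeque<>();
        for (int v = 0; v < n; v++) if (indegree[v] == 0) zero.add(v);

        List<Integer> order = new ArrayList<>();
        while (!zero.isEmpty()) {
            int v = zero.remove();
            order.add(v);
            for (int w : adj.get(v))                  // each arc decremented exactly once
                if (--indegree[w] == 0) zero.add(w);
        }
        return order;                                  // shorter than n if a cycle exists
    }

    public static void main(String[] args) {
        // DAG: 0 -> 1, 0 -> 2, 1 -> 3, 2 -> 3
        List<List<Integer>> adj = List.of(List.of(1, 2), List.of(3), List.of(3), List.of());
        System.out.println(topoOrder(adj));            // prints [0, 1, 2, 3]
    }
}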
SOLUTION TO EXERCISE 5.6.6 ON PAGE 137:
Repeatedly delete from the graph vertices with 0 or 1 incident edges (together with
those edges). If at any point no vertex with fewer than 2 incident edges remains while
the graph is still nonempty, then the graph has a cycle. Otherwise, if the entire graph
can be deleted by removing only vertices with 0 or 1 incident edges, then the graph is
acyclic. A sketch of this procedure follows.
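The sketch below is illustrative only; the adjacency-list representation and class name are assumptions. It peels off vertices of degree at most 1 with a work queue and reports the graph acyclic exactly when every vertex gets removed.

import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;

// Acyclicity test for an undirected graph by repeatedly removing vertices of degree <= 1.
public class AcyclicByPeeling {
    static boolean isAcyclic(List<List<Integer>> adj) {
        int n = adj.size();
        int[] degree = new int[n];
        for (int v = 0; v < n; v++) degree[v] = adj.get(v).size();

        Deque<Integer> lowDegree = new ArrayDeque<>();
        for (int v = 0; v < n; v++) if (degree[v] <= 1) lowDegree.add(v);

        boolean[] removed = new boolean[n];
        int remaining = n;
        while (!lowDegree.isEmpty()) {
            int v = lowDegree.remove();
            if (removed[v]) continue;
            removed[v] = true;
            remaining--;
            for (int w : adj.get(v))                 // delete v's incident edges
                if (!removed[w] && --degree[w] <= 1) lowDegree.add(w);
        }
        return remaining == 0;                        // any leftover vertices lie on cycles
    }

    public static void main(String[] args) {
        // A path 0 - 1 - 2 (acyclic) and a triangle 0 - 1 - 2 (cyclic)
        List<List<Integer>> path = List.of(List.of(1), List.of(0, 2), List.of(1));
        List<List<Integer>> triangle = List.of(List.of(1, 2), List.of(0, 2), List.of(0, 1));
        System.out.println(isAcyclic(path));      // true
        System.out.println(isAcyclic(triangle));  // false
    }
}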
SOLUTION TO EXERCISE 5.7.1 ON PAGE 142:
Adjacency list: 0
4
1, 2
0 1 2
3
2
3
There are two strongly connected components in this graph, but DFS finds only one
tree.
SOLUTION TO EXERCISE 5.8.1 ON PAGE 145:
Adjacency list: 0
5
1, 2, 3
0, 4
1 2 3
0, 4
0, 2
4
If the algorithm does not check to the end of the level, it will return that the short-
est cycle is {0,1,4,2} instead of {0,2,4}.
SOLUTION TO EXERCISE 5.8.3 ON PAGE 145:
We need to split the vertices into two disjoint subsets. Let k be the number of 1's in a
bit vector of length n (the number of 0's works just as well). An edge can join this vector
only to a bit vector with k − 1 or k + 1 ones: if the number of 1's differed by more than
one, the two vectors would differ in more than one bit. Moreover, two distinct bit
vectors with the same number of 1's cannot be joined by an edge, because they differ
in at least two positions rather than the required one.
One way of satisfying this condition is to put all bit vectors with an odd number of 1's
on one side and all those with an even number of 1's on the other. Hence, for any
n-cube, the vectors with an odd number of 1's and the vectors with an even number
of 1's form a bipartition.
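The parity argument can be checked mechanically; the short program below (illustrative only, with an arbitrarily chosen dimension n = 4) colours each vertex of the n-cube by the parity of its number of 1's and verifies that every edge joins the two colour classes.

// Verifies that the n-cube is bipartite under the parity-of-ones colouring.
public class HypercubeBipartite {
    public static void main(String[] args) {
        int n = 4;                                    // dimension of the cube
        for (int v = 0; v < (1 << n); v++) {
            for (int i = 0; i < n; i++) {
                int w = v ^ (1 << i);                 // neighbour: flip one bit
                if (Integer.bitCount(v) % 2 == Integer.bitCount(w) % 2)
                    throw new AssertionError("edge inside one side of the bipartition");
            }
        }
        System.out.println("Every edge of the " + n + "-cube joins the two parity classes.");
    }
}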
SOLUTION TO EXERCISE 6.2.2 ON PAGE 154:
The running time is the same as the time needed to compute the distance matrix. The
eccentricity of a node v is simply the maximum entry in row v of the distance matrix,
and the radius is the minimum of these row maxima. Given the distance matrix, this
can be computed in time Θ(n^2).
SOLUTION TO EXERCISE 6.3.2 ON PAGE 160:
If a cycle v1, v2, . . . , vk exists whose arc weights sum to less than zero, then we can
find a walk from v1 to v2 of total weight as small as we like by repeating the cycle
as many times as we want before stopping at v2.
SOLUTION TO EXERCISE 6.3.6 ON PAGE 161:
Property P2 fails if we allow arcs of negative weight. Suppose u is the next vertex
added to S. If arc (u, w) is of negative weight for some other vertex w that is currently
in S, then the previous distance from s to w, dist[w], may no longer be the smallest.
SOLUTION TO EXERCISE 6.4.2 ON PAGE 164:
If a diagonal entry of the distance matrix ever becomes negative while running
Floyd's algorithm, then we know that a negative-weight cycle has been found.
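An illustrative sketch of Floyd's algorithm with this diagonal test is shown below; the matrix representation and the use of INF for missing arcs are assumptions, and the test is applied after the main loops, which works equally well because diagonal entries can only decrease.

// Floyd's algorithm with a negative-cycle test on the diagonal of the distance matrix.
public class FloydNegativeCycle {
    static final double INF = Double.POSITIVE_INFINITY;

    // d is the arc-weight matrix with d[i][i] = 0 and d[i][j] = INF for missing arcs.
    static boolean hasNegativeCycle(double[][] d) {
        int n = d.length;
        for (int k = 0; k < n; k++)
            for (int i = 0; i < n; i++)
                for (int j = 0; j < n; j++)
                    if (d[i][k] + d[k][j] < d[i][j])
                        d[i][j] = d[i][k] + d[k][j];
        for (int i = 0; i < n; i++)
            if (d[i][i] < 0) return true;             // negative cycle through node i
        return false;
    }

    public static void main(String[] args) {
        double[][] d = {
            {0,   1, INF},
            {INF, 0,  -3},
            {1, INF,   0}
        };   // the cycle 0 -> 1 -> 2 -> 0 has weight 1 - 3 + 1 = -1
        System.out.println(hasNegativeCycle(d));      // prints true
    }
}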
SOLUTION TO EXERCISE 6.5.1 ON PAGE 167:
For this weighted graph, both Prim’s and Kruskal’s algorithms will find the unique
minimum spanning tree of weight 9.
Bibliography
[2] J. Bentley. Programming Pearls, Second Edition. Addison-Wesley, 2000. 101
[7] L. R. Ford and D. R. Fulkerson. “Maximal flow through a network,” Canadian Jour-
nal of Mathematics 8 (1956), pages 399–404. 150
[8] M. T. Goodrich and R. Tamassia. Data Structures and Algorithms in Java. John
Wiley and Sons, 2001.
[9] K. Mehlhorn and S. Näher. The LEDA Platform of Combinatorial and Geomet-
ric Computing. Cambridge University Press, 1999.
(see http://www.mpi-sb.mpg.de/LEDA/leda.html) 116
[11] J. G. Siek, L-Q. Lee and A. Lumsdaine. The Boost Graph Library: User Guide and
Reference Manual. Addison-Wesley, 2001. (see http://www.boost.org) 116