Randomized Speedup of The Bellman-Ford Algorithm
do
    relax(u, v)
C ← {vertices v for which D[v] changed}
Suppose that, at the start of an iteration of the outer loop of the algorithm, vertex u is
accurate, and that π is a path in the shortest path tree rooted at s that starts at u, with all
subsequent vertices of π inaccurate. Suppose also that all of the edges of π that belong to G+
appear earlier in the path than all of the edges of π that belong to G−. Because numerical
order is a topological ordering of G+ and reverse numerical order is a topological ordering
of G−, every vertex of π becomes accurate by the end of the iteration.
at least two; once a vertex becomes accurate, it can be the first argument of a relaxation
operation in only a single additional iteration of the algorithm. Therefore, iteration i relaxes
at most n(n − 2i) edges. Summing over all iterations yields a total number of relaxations that
is less than n³/4. Experiments conducted by Yen have demonstrated the practicality of these
speedups in spite of the extra time needed to maintain the set of recently changed vertices [21].
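As a quick sanity check on this counting argument, the per-iteration bound n(n − 2i) does sum to less than n³/4. The sketch below (the helper name and the choice of summing over iterations 1 through n/2 − 1, after which all vertices are accurate, are our own) verifies the closed form n²(n − 2)/4 for even n:

```python
def yen_total_relaxation_bound(n):
    """Sum the per-iteration relaxation bound n * (n - 2i) over
    iterations i = 1 .. n/2 - 1 of Yen's algorithm."""
    return sum(n * (n - 2 * i) for i in range(1, n // 2))

# The sum telescopes to n^2 * (n - 2) / 4, strictly below n^3 / 4.
for n in range(4, 200, 2):
    assert yen_total_relaxation_bound(n) == n * n * (n - 2) // 4
    assert yen_total_relaxation_bound(n) < n**3 / 4
```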
3 The Randomized Algorithm
Our randomized algorithm makes only a very small change to Yen's algorithm: it chooses
the numbering of the vertices randomly rather than arbitrarily. In this way, it makes the
worst case of Yen's algorithm (in which a shortest path alternates between edges in G+ and
G−) very unlikely.
Algorithm 5 Randomized variant of the Bellman–Ford algorithm
number the vertices randomly such that all permutations with s first are equally likely
C ← {s}
while C ≠ ∅ do
    for each vertex u in numerical order do
        if u ∈ C or D[u] has changed since start of iteration then
            for each edge uv in graph G+ do
                relax(u, v)
    for each vertex u in reverse numerical order do
        if u ∈ C or D[u] has changed since start of iteration then
            for each edge uv in graph G− do
                relax(u, v)
    C ← {vertices v for which D[v] changed}
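To make the pseudocode concrete, here is a sketch of Algorithm 5 in Python. The adjacency-dict representation, the function name, and the inner relax helper are our own illustrative choices; the sketch assumes every vertex appears as a key of the dictionary and that no negative cycle is reachable from s:

```python
import random

def randomized_bellman_ford(graph, s):
    """Sketch of Algorithm 5. `graph` maps each vertex to a dict
    {neighbor: edge weight}; returns the distance table D."""
    # Random numbering with s first: all such permutations equally likely.
    rest = [v for v in graph if v != s]
    random.shuffle(rest)
    order = [s] + rest
    index = {v: i for i, v in enumerate(order)}

    # G+ holds edges uv with index[u] < index[v]; G- holds the rest.
    g_plus = {u: [v for v in graph[u] if index[u] < index[v]] for u in order}
    g_minus = {u: [v for v in graph[u] if index[u] > index[v]] for u in order}

    D = {v: float('inf') for v in order}
    D[s] = 0
    changed = {s}  # the set C of recently changed vertices
    while changed:
        changed_now = set()  # vertices whose D[] changed this iteration

        def relax(u, v):
            if D[u] + graph[u][v] < D[v]:
                D[v] = D[u] + graph[u][v]
                changed_now.add(v)

        # Forward pass: numerical order is a topological order of G+.
        for u in order:
            if u in changed or u in changed_now:
                for v in g_plus[u]:
                    relax(u, v)
        # Backward pass: reverse numerical order is topological for G-.
        for u in reversed(order):
            if u in changed or u in changed_now:
                for v in g_minus[u]:
                    relax(u, v)
        changed = changed_now
    return D
```

The final distances are independent of the random permutation; only the number of iterations varies.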
To analyze the algorithm, we first consider the structure of its worst-case instances.
Lemma 1. Let G and s define an input to Algorithm 5. Then the number of iterations of
the outer loop of the algorithm depends only on the combinatorial structure of the subgraph
S of G formed by the set of edges belonging to shortest paths of G; it does not depend in any
other way on the weights of the edges in G.
Proof. In each iteration, a vertex v becomes accurate if there is a path π in S from an
accurate vertex u to v with the property that π is the concatenation of a path
π+ ⊆ S ∩ G+ with a path π− ⊆ S ∩ G−.
Pr[f < E[f] − t] ≤ exp(−2t²/d)  and  Pr[f > E[f] + t] ≤ exp(−2t²/d),
where d = Σᵢ dᵢ².
From our previous analysis of Yen's algorithm we see that each iteration processes the
vertices on a shortest path up to the first local minimum in the sequence of vertex labels.
For this reason we will be interested in the distribution of local minima in random sequences.
The problem of counting local minima is closely related to the problem of determining the
length of the longest alternating subsequence [11, 18].
Lemma 4. If X₁, . . . , Xₙ is a sequence of random variables for which ties have probability
zero and each permutation is equally likely (e.g. i.i.d. real random variables), then
(1) the expected number of local minima, not counting endpoints, is (n − 2)/3;
(2) the probability that there are more than

    (n − 2)/3 + √(2cn log n) ≈ ((n − 2)/3)(1 + 3√2 · √(c log n / n))

local minima is at most 1/n^c.
Proof. For (1) notice that there are six ways that X_{j−1}, X_j, X_{j+1} may be ordered when
1 < j < n, and two of these orderings make X_j a local minimum. For (2) let f(X₁, . . . , Xₙ)
equal the number of local minima in the sequence. Changing any one of the Xᵢ changes the
value of f(X₁, . . . , Xₙ) by at most 2. Hence by Lemma 3 with t = √(2cn log n), the number
of local minima is at most (n − 2)/3 + √(2cn log n) with probability at least 1 − 1/n^c.
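Part (1) of the lemma is easy to check empirically. The following simulation is our own illustration (the helper name and the parameter choices n = 99 and 2000 trials are arbitrary); it counts interior local minima of random permutations and compares the sample mean with (n − 2)/3:

```python
import random

def count_interior_local_minima(xs):
    """Count indices 0 < j < len(xs)-1 where xs[j] is below both neighbors."""
    return sum(1 for j in range(1, len(xs) - 1)
               if xs[j] < xs[j - 1] and xs[j] < xs[j + 1])

random.seed(1)
n, trials = 99, 2000
mean = sum(count_interior_local_minima(random.sample(range(n), n))
           for _ in range(trials)) / trials
# Lemma 4(1) predicts (n - 2)/3 interior local minima on average.
print(mean, (n - 2) / 3)
```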
Proof. Let G be a worst-case instance of the algorithm, as given by Lemma 2. In each
iteration of the algorithm other than the first and last, let v be the last accurate vertex on
the single maximal shortest path in G. Since this is neither the first nor the last iteration, v
must be neither the first nor the last vertex on the path; let u be its predecessor and let w be
its successor. Then, in order for v to have become accurate in the previous iteration without
letting w become accurate as well, it must be the case that v is the first of the three vertices
{u, v, w} in the ordering given by the random permutation selected by the algorithm: if u
Figure 3: Longest shortest path from Figure 1 (left) with height used to represent vertex
label (right).
were first, then edge uv would belong to G+, and no matter whether edge vw belonged to G+
or G−, it would be relaxed later than uv in the same iteration. And if w were first, then vw
would belong to G−.
Thus, we may bound the expected number of iterations of Algorithm 5 on this input
by bounding the number of vertices v that occur earlier in the random permutation than
both their predecessor and their successor on the shortest path, i.e., the local minima in
the sequence of labels. The start vertex s is already assumed accurate, so applying Lemma 4 to
the remaining n − 1 vertices yields (n − 3)/3 expected iterations for the interior vertices. Therefore,
the expected number of iterations is 2 + (n − 3)/3 = (n + 3)/3. Each iteration relaxes at most
m edges, so the total expected number of relaxations is at most mn/3 + m. An application
of the second part of Lemma 4 finishes the proof.
Lemma 2 does not directly apply to the dense case, because we need to bound the number
of relaxations within each iteration and not just the number of iterations. Nevertheless, the
same reasoning shows that the same graph (a graph with a unique shortest path tree in the
form of a single path) forms the worst case of the algorithm.
Theorem 2. For dense graphs the expected number of relaxations performed by Algorithm 5
is at most n³/6, and the number of relaxations is less than

    n³/6 + n^{5/2}√(2c log n) ≈ (n³/6)(1 + 6√2 · √(c log n / n))

with probability 1 − 1/n^{c−1}.
Proof. Let v be a vertex in the input graph whose path from s in the shortest path tree has
length k. Then the expected number of iterations needed to correct v is k/3. Assuming the
worst case, in which v is processed in each of these iterations, we will relax at most

    Σ_{k=1}^{n} n · k/3 ≈ n³/6

edges. Also, Theorem 1 implies that v will be corrected after at most

    k/3 + √(2ck log n)

iterations with probability at least 1 − 1/n^c. Again assuming the worst case, in which the
edges out of v are relaxed in each of these iterations, we will relax at most

    Σ_{k=1}^{n} n(k/3 + √(2ck log n)) ≈ n³/6 + n^{5/2}√(2c log n)

edges with probability at least 1 − 1/n^{c−1}.
4 Negative Cycle Detection
If G is a directed graph with a negative cycle reachable from the source, then the
distance to some vertices is effectively −∞. If we insist on finding shortest simple paths,
then the problem is NP-hard [10].
Because of this difficulty, rather than seeking the shortest simple paths we settle for a
timely notification of the existence of a negative cycle. There are several ways in which
single-source shortest path algorithms can be modified to detect the presence of negative
cycles [4]. We will use what is commonly referred to as subtree traversal. After some
number of iterations of the Bellman–Ford algorithm, define Gp to be the parent graph of
G; this is a graph with the same vertex set as G and with an edge from v to u whenever
the tentative distance D[v] was set by relaxing the edge in G from u to v. That is, for each
v other than the start vertex, there is an edge from v to P[v]. Cycles in Gp correspond
to negative cycles in G [19]. Moreover, if G contains a negative cycle, then after n − 1
iterations Gp will contain a cycle after each additional iteration [4]. We would like to lower
this requirement from n − 1 to something more in line with the runtime of Algorithm 5.
For each vertex v in any input graph G there exists a shortest simple path from the source
s to v; denote the length of this path by D
√(2cn log n). Since Gp
has only one outgoing edge per vertex, cycles in it may be detected in time O(n). With
probability at least 1 − 1/n^{c−1} we will only perform one round of cycle detection, and in the
worst case Yen's analysis guarantees that a cycle will be found after at most n/2 iterations.
Therefore, this version of the algorithm has high-probability time performance similar to our
analysis for sparse graphs that do not have negative cycles.
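The O(n) cycle check on the parent graph can be sketched as follows. This is our own illustration: the function name and the parent-pointer dictionary representation (P[v] stored as `parent[v]`, with None for the source) are assumptions, not the paper's notation:

```python
def parent_graph_has_cycle(parent):
    """Detect a cycle in the parent graph Gp in O(n) total time.

    `parent` maps each vertex to its parent P[v], or None for the source.
    Each vertex has at most one outgoing edge, so we walk parent pointers
    from each vertex, marking the chain as we go; every vertex is visited
    only a constant number of times.
    """
    WHITE, GRAY, BLACK = 0, 1, 2  # unvisited / on current walk / finished
    color = {v: WHITE for v in parent}
    for start in parent:
        v = start
        while v is not None and color[v] == WHITE:
            color[v] = GRAY
            v = parent[v]
        if v is not None and color[v] == GRAY:
            return True  # the walk ran back into its own chain: a cycle
        # Mark the walked chain as finished before the next start vertex.
        v = start
        while v is not None and color[v] == GRAY:
            color[v] = BLACK
            v = parent[v]
    return False
```

In this scheme, the check would be run once the iteration bound from the analysis above has been exceeded, declaring a negative cycle if it returns True.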
5 Conclusion
We have shown that randomizing the vertex numbering of a graph before applying Yen's improvement
of the Bellman–Ford algorithm causes the algorithm to use 2/3 of the number of relaxations
(either in expectation or with high probability) compared to its performance in the worst
case without this optimization. This is the first constant factor improvement in this basic
graph algorithm since Yen's original improvements in the early 1970s. Further, we can expect
practical improvements in runtime in line with Yen's observations [21], as we have only added
a single linear-time step for randomization.
Our improvement for negative cycle detection works only for our sparse graph analysis.
For dense graphs, we get the same bound on the number of iterations until a negative cycle
can be detected with high probability using subtree traversal, but (if a negative cycle exists)
we may not be able to control the number of relaxation steps per iteration of the algorithm,
leading to a worse bound on the total number of relaxations than in the case when a negative
cycle does not exist. However, our high probability bounds also allow us to turn the dense
graph shortest path algorithm into a Monte Carlo algorithm for negative cycle detection.
We simply run the algorithm for dense graphs without negative cycles, and if the algorithm
runs for more than the n³/6 + o(n³) relaxations given by our high probability bound, we
declare that the graph has a negative cycle, with only a small probability of an erroneous
result. We leave as an open question the possibility of obtaining an equally fast Las Vegas
algorithm for this case.
Acknowledgments
This research was supported in part by the National Science Foundation under grant 0830403,
and by the Office of Naval Research under MURI grant N00014-08-1-1015.
References
[1] R. K. Ahuja, T. L. Magnanti, and J. B. Orlin. Shortest paths: label-correcting
algorithms. Network Flows: Theory, Algorithms, and Applications, pp. 133–165.
Prentice Hall, 1993.
[2] R. Bellman. On a routing problem. Quarterly of Applied Mathematics 16:87–90, 1958.
[3] W.-K. Chen. Theory of Nets. John Wiley & Sons Inc., New York, 1990.
[4] B. V. Cherkassky and A. V. Goldberg. Negative-cycle detection algorithms.
Mathematical Programming 85(2):277–311, 1999, doi:10.1007/s101070050058.
[5] B. V. Cherkassky, A. V. Goldberg, and T. Radzik. Shortest paths algorithms: theory
and experimental evaluation. Mathematical Programming 73(2):129–174, 1996,
doi:10.1016/0025-5610(95)00021-6.
[6] T. H. Cormen, C. E. Leiserson, R. L. Rivest, and C. Stein. Problem 24-1: Yen's
improvement to Bellman–Ford. Introduction to Algorithms, 2nd edition, pp. 614–615.
MIT Press, 2001.
[7] S. E. Dreyfus. An appraisal of some shortest-path algorithms. Operations Research
17(3):395–412, 1969, doi:10.1287/opre.17.3.395.
[8] D. P. Dubhashi and A. Panconesi. Concentration of Measure for the Analysis of
Randomized Algorithms. Cambridge University Press, Cambridge, 2009.
[9] L. R. Ford, Jr. and D. R. Fulkerson. Flows in Networks. Princeton University Press,
1962.
[10] M. R. Garey and D. S. Johnson. Computers and Intractability: A Guide to the Theory
of NP-Completeness. W. H. Freeman and Co., San Francisco, Calif., 1979.
[11] C. Houdré and R. Restrepo. A probabilistic approach to the asymptotics of the length
of the longest alternating subsequence. Electronic Journal of Combinatorics
17(1):Research Paper 168, 2010,
http://www.combinatorics.org/Volume_17/Abstracts/v17i1r168.html.
[12] T. C. Hu and M. T. Shing. Shortest paths in a general network. Combinatorial
Algorithms (Enlarged Second Edition), pp. 24–27. Dover, 2002.
[13] E. L. Lawler. Improvements in Efficiency: Yen's Modifications. Combinatorial
Optimization: Networks and Matroids, pp. 76–77. Dover, 2001.
[14] C. McDiarmid. On the method of bounded differences. Surveys in Combinatorics,
1989 (Norwich, 1989), pp. 148–188. Cambridge Univ. Press, London Math. Soc.
Lecture Note Ser. 141, 1989.
[15] E. F. Moore. The shortest path through a maze. Proc. Internat. Sympos. Switching
Theory 1957, Part II, pp. 285–292. Harvard Univ. Press, 1959.
[16] S. V. Pemmaraju and S. S. Skiena. Computational Discrete Mathematics:
Combinatorics and Graph Theory with Mathematica. Cambridge University Press,
Cambridge, 2003, p. 328.
[17] R. Sedgewick. Algorithms. Addison-Wesley Professional, 4th edition, 2011, p. 976.
[18] R. P. Stanley. Longest alternating subsequences of permutations. Michigan
Mathematical Journal 57:675–687, 2008, doi:10.1307/mmj/1220879431.
[19] R. E. Tarjan. Data Structures and Network Algorithms. CBMS-NSF Regional
Conference Series in Applied Mathematics 44. Society for Industrial and Applied
Mathematics (SIAM), Philadelphia, PA, 1983.
[20] J. Y. Yen. An algorithm for finding shortest routes from all source nodes to a given
destination in general networks. Quarterly of Applied Mathematics 27:526–530, 1970.
[21] J. Y. Yen. Shortest Path Network Problems. Verlag Anton Hain, Meisenheim am
Glan, 1975. Mathematical Systems in Economics, Heft 18.