Asymptotic Analysis
Analyzing Algorithms
The process of comparing two algorithms' rates of growth with respect to time, space, number of registers, network bandwidth, etc., is called analysis of algorithms. This can be done in two ways:
1. Priori analysis: This analysis is done before execution; the main principle behind it is the frequency count of the fundamental instruction. It is independent of the CPU, OS and system architecture, and it provides uniform estimated values.
2. Posteriori analysis: This analysis is done after execution; it measures the actual time and space consumed on a particular system, so its results depend on the CPU, OS and architecture used.

Figure 1 Towers of Hanoi (towers A, B and C)
Assume that the number of disks is n. To get the largest disk to the bottom of tower B, we move the remaining (n − 1) disks to tower C and then move the largest disk to tower B; the (n − 1) disks are then moved from tower C onto tower B. Moving 3 disks from tower A to tower C in this way requires 7 disk movements; in general, moving n disks requires 2ⁿ − 1 movements (2³ − 1 = 7).

Example: [Figure: the seven single-disk moves that transfer disks 1, 2 and 3 from tower A to tower C]

Time complexity
T(n) = 1 + 2T(n − 1)
     = 1 + 2(1 + 2T(n − 2))
     = 1 + 2 + 2²T(n − 2)
     = 1 + 2 + 2²(1 + 2T(n − 3))
     = 1 + 2 + 2² + 2³T(n − 3)
     ⋮
     = 1 + 2 + 2² + ⋯ + 2ⁱ⁻¹ + 2ⁱT(n − i)

Setting i = n − 1 and using T(1) = 1 (one disk needs one move), the expansion telescopes to

T(n) = ∑_{i=0}^{n−1} 2ⁱ = 2ⁿ − 1

The time complexity is exponential; it grows as a power of 2.
∴ T(n) = O(2ⁿ)
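The move count can be checked directly by running the procedure. Below is a minimal C sketch (the function name hanoi and the printed trace are illustrative, not from the text) that performs the moves and returns their number, which equals 2ⁿ − 1:

    #include <stdio.h>

    /* Move n disks from 'from' to 'to' using 'via' as the spare tower.
       Returns the number of single-disk moves performed. */
    unsigned long hanoi(int n, char from, char to, char via) {
        if (n == 0)
            return 0;
        unsigned long moves = hanoi(n - 1, from, via, to); /* clear the top n-1 disks */
        printf("move disk %d: %c -> %c\n", n, from, to);   /* move the largest disk */
        moves += 1;
        moves += hanoi(n - 1, via, to, from);              /* put the n-1 disks back on top */
        return moves;
    }

    int main(void) {
        printf("total moves = %lu\n", hanoi(3, 'A', 'C', 'B')); /* prints 7 = 2^3 - 1 */
        return 0;
    }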
Space complexity
The space complexity of an algorithm is the amount of memory it needs to run to completion. The measure of the quantity of input data is called the size of the problem. For example, the size of a matrix multiplication problem might be the largest dimension of the matrices to be multiplied, and the size of a graph problem might be the number of edges. The limiting behavior of the complexity as size increases is called the asymptotic time complexity.

•• It is the asymptotic complexity of an algorithm which ultimately determines the size of problems that can be solved by the algorithm.
•• If an algorithm processes inputs of size n in time cn² for some constant c, then we say that the time complexity of that algorithm is O(n²). More precisely, a function g(n) is said to be O(f(n)) if there exists a constant c such that g(n) ≤ c·f(n) for all but some finite set of non-negative values of n.
•• As computers become faster and we can handle larger problems, it is the complexity of an algorithm that determines the increase in problem size that can be achieved with an increase in computer speed.
•• Suppose we have five algorithms, Algorithm 1 to Algorithm 5, with the following time complexities.
The following table gives the sizes of problems that can be solved in one second, one minute, and one hour by each of these five algorithms.

                                Maximum Problem Size
Algorithm     Time Complexity   1 sec    1 min      1 hour
Algorithm 1   n                 1000     6 × 10⁴    3.6 × 10⁶
Algorithm 2   n log n           140      4893       2.0 × 10⁵
Algorithm 3   n²                31       244        1897
Algorithm 4   n³                10       39         153
Algorithm 5   2ⁿ                9        15         21

From the above table, we can say that different algorithms give different results depending on the input size. Algorithm 5 would be best for problems of size 2 ≤ n ≤ 9, Algorithm 3 would be best for 10 ≤ n ≤ 58, Algorithm 2 would be best for 59 ≤ n ≤ 1024, and Algorithm 1 is best for problems of size greater than 1024.

Set Representation
A common use of a list is to represent a set. With this representation, the amount of memory required to represent a set is proportional to the number of elements in the set. The amount of time required to perform a set operation depends on the nature of the operation.

Graph Representation
•• The main drawback of using an adjacency matrix is that it requires |V|² storage even if the graph has only O(|V|) edges.
•• Another representation for a graph is by means of lists. The adjacency list for a vertex v is a list of all vertices adjacent to v. A graph can be represented by |V| adjacency lists, one for each vertex.

Example:

[Figure 2 Directed graph on vertices 1, 2, 3 and 4, with edges 1→2, 1→4, 2→3, 4→2 and 4→3, as read from the adjacency matrix below]

      1  2  3  4
  1   0  1  0  1
  2   0  0  1  0
  3   0  0  0  0
  4   0  1  1  0

Figure 3 Adjacency matrix

In the adjacency list representation of the same graph, the list for vertex 1 is 2 → 4.
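To make the two representations concrete, here is a small C sketch (hedged: it hard-codes the matrix of Figure 3, and the struct and variable names are illustrative) that builds the |V| adjacency lists from the |V|² matrix:

    #include <stdio.h>
    #include <stdlib.h>

    #define V 4                      /* vertices of the graph in Figure 2 */

    struct node { int vertex; struct node *next; };

    int main(void) {
        /* Adjacency matrix from Figure 3: m[i][j] = 1 iff there is an edge i+1 -> j+1. */
        int m[V][V] = { {0,1,0,1}, {0,0,1,0}, {0,0,0,0}, {0,1,1,0} };

        /* Build |V| adjacency lists, one per vertex, from the matrix. */
        struct node *adj[V] = { NULL };
        for (int i = 0; i < V; i++)
            for (int j = V - 1; j >= 0; j--)   /* reverse order keeps each list ascending */
                if (m[i][j]) {
                    struct node *p = malloc(sizeof *p);
                    p->vertex = j + 1;
                    p->next = adj[i];
                    adj[i] = p;
                }

        /* Print each list; vertex 1 prints "2 4", matching the text. */
        for (int i = 0; i < V; i++) {
            printf("vertex %d:", i + 1);
            for (struct node *p = adj[i]; p; p = p->next)
                printf(" %d", p->vertex);
            printf("\n");
        }
        return 0;
    }

Note the storage trade-off described above: the matrix always occupies V² cells, while the lists use one node per edge.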
Vertex   Left child   Right child
1        2            6
2        3            4
3        0            0
4        0            5
5        0            0
6        7            8
7        0            0
8        0            9
9        0            10
10       0            0

Figure 5 A binary tree and its representation

•• Vertex 3 is of depth 2 and height 0, and its level is 2 (height of tree − depth of vertex 3 = 4 − 2 = 2).
•• A binary tree is represented by two arrays: left child and right child.
•• A binary tree is said to be complete if, for some integer k, every vertex of depth less than k has both a left child and a right child, and every vertex of depth k is a leaf. A complete binary tree of height k has exactly 2ᵏ⁺¹ − 1 vertices.
•• A complete binary tree of height k is often represented by a single array: position 1 in the array contains the root, the left child of the vertex in position i is located at position 2i, and the right child at position 2i + 1.

Tree Traversals
Many algorithms which make use of trees traverse the tree in some order. Three commonly used traversals are pre-order, post-order and in-order; a C sketch of all three follows the worked example below.

Pre-order Traversal
A pre-order traversal of T is defined recursively as follows:
1. Visit the root.
2. Visit in pre-order the sub trees with roots v₁, v₂, …, vₖ in that order.

Post-order Traversal
A post-order traversal of T is defined recursively as follows:
1. Visit in post-order the sub trees with roots v₁, v₂, …, vₖ in that order.
2. Visit the root r.

In-order Traversal
An in-order traversal is defined recursively as follows:
1. Visit in in-order the left sub tree of the root r.
2. Visit r.
3. Visit in in-order the right sub tree of r.

[Figure 6 (a) Pre-order, (b) Post-order, (c) In-order: the same binary tree with its vertices numbered in each of the three traversal orders]

Example: Consider the given tree: root C, whose left child is B (with left child A) and whose right child is D (with right child E). What are the pre-order, post-order and in-order traversals of the above tree?

Solution:
Pre-order – CBADE
Post-order – ABEDC
In-order – ABCDE
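A minimal C sketch of the three traversals, run on the example tree (the node type and the helper mk are illustrative, not from the text):

    #include <stdio.h>
    #include <stdlib.h>

    struct tnode { char label; struct tnode *left, *right; };

    /* Allocate a node with the given label and children. */
    static struct tnode *mk(char label, struct tnode *left, struct tnode *right) {
        struct tnode *t = malloc(sizeof *t);
        t->label = label; t->left = left; t->right = right;
        return t;
    }

    static void preorder(struct tnode *t)  { if (t) { printf("%c", t->label); preorder(t->left); preorder(t->right); } }
    static void postorder(struct tnode *t) { if (t) { postorder(t->left); postorder(t->right); printf("%c", t->label); } }
    static void inorder(struct tnode *t)   { if (t) { inorder(t->left); printf("%c", t->label); inorder(t->right); } }

    int main(void) {
        /* The example tree: root C, left child B (with left child A),
           right child D (with right child E). */
        struct tnode *root = mk('C', mk('B', mk('A', NULL, NULL), NULL),
                                     mk('D', NULL, mk('E', NULL, NULL)));
        preorder(root);  printf("\n");   /* CBADE */
        postorder(root); printf("\n");   /* ABEDC */
        inorder(root);   printf("\n");   /* ABCDE */
        return 0;
    }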
Data Structure
A data structure is a way to store and organize data in order to facilitate access and modifications. No single data structure works well for all purposes, so it is important to know the strengths and limitations of several data structures.

Efficiency
Algorithms devised to solve the same problem often differ dramatically in their efficiency. Let us compare the efficiencies of insertion sort and merge sort. Insertion sort takes time equal to C₁n² to sort n elements, where C₁ is a constant that does not depend on n; that is, its running time is proportional to n². Merge sort takes time equal to C₂n log n, where C₂ is another constant that also does not depend on n. Insertion sort has a smaller constant factor than merge sort (C₁ < C₂), but constant factors are far less significant in the running time than the rate of growth.
Merge sort has a factor of log n in its running time, while insertion sort has a factor of n, which is much larger. Insertion sort is therefore faster than merge sort for small input sizes, but once the input size n becomes large enough, merge sort will perform better, no matter how much smaller C₁ is than C₂. There will always be a crossover point beyond which merge sort is faster.

Example: Consider two computers: computer A (the faster computer) and computer B (the slower computer). Computer A runs insertion sort and computer B runs merge sort, and each computer is given 2 million numbers to sort. Suppose that computer A executes one billion (10⁹) instructions per second and computer B executes only 10 million (10⁷) instructions per second, so computer A is 100 times faster than computer B (take C₁ = 4 and C₂ = 50). How much time is taken by both the computers?

Solution: Insertion sort takes C₁ · n² instructions and merge sort takes C₂ · n · log n instructions, with C₁ = 4 and C₂ = 50.
Computer A takes
4 × (2 × 10⁶)² instructions / 10⁹ instructions per second = 16,000 seconds,
while computer B takes
50 × (2 × 10⁶) × log₂(2 × 10⁶) instructions / 10⁷ instructions per second ≈ 210 seconds.
So computer B, despite being the slower machine, finishes first: for large n the n log n algorithm is better than the n² one, just as an n² (quadratic) algorithm is better than an n³ (cubic) one, since n² ∈ O(n³).
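The crossover point itself can be located numerically. Here is a minimal C sketch using the example's constants (C1 = 4, C2 = 50; the loop bound of 1000 is an arbitrary search limit):

    #include <stdio.h>
    #include <math.h>

    /* Find the first n where C2*n*log2(n) becomes cheaper than C1*n*n,
       with the constants from the example above. */
    int main(void) {
        const double C1 = 4.0, C2 = 50.0;
        for (long n = 2; n <= 1000; n++) {
            double insertion = C1 * (double)n * (double)n;
            double merge     = C2 * (double)n * log2((double)n);
            if (merge < insertion) {
                /* With these constants the crossover is at n = 79. */
                printf("merge sort is cheaper from n = %ld onwards\n", n);
                break;
            }
        }
        return 0;
    }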
Asymptotic Notations
Asymptotic notations are mostly used in computer science to describe the asymptotic running time of an algorithm. As an example, an algorithm that takes an array of size n as input and runs for time proportional to n² is said to take O(n²) time. There are five asymptotic notations:
•• O (Big-oh)
•• Θ (Theta)
•• Ω (Omega)
•• o (Small-oh)
•• ω (Small-omega)

How to Use Asymptotic Notation for Algorithm Analysis?
Asymptotic notation is used to determine rough estimates of the relative running time of algorithms. A worst-case analysis of any algorithm will always yield such an estimate, because it gives an upper bound on the running time T(n) of the algorithm, that is, T(n) = O(g(n)).

Example:
•• The lower-order terms of an asymptotically positive function can be ignored in determining asymptotically tight bounds, because they are insignificant for large n.
•• A small fraction of the highest-order term is enough to dominate the lower-order terms. Thus setting C₁ to a value that is slightly smaller than the coefficient of the highest-order term, and setting C₂ to a value that is slightly larger, permits the inequalities in the definition of Θ-notation to be satisfied. If we take a quadratic function f(n) = an² + bn + c, where a, b and c are constants and a > 0, throwing away the lower-order terms and ignoring the constant yields f(n) = Θ(n²).
•• We can express any constant function as Θ(n⁰), or Θ(1); we shall often use the notation Θ(1) to mean either a constant or a constant function with respect to some variable.

Order of Growth
In the rate of growth or order of growth, we consider only the leading term of a formula. Suppose the worst-case running time of an algorithm is an² + bn + c for some constants a, b and c. The leading term is an². We ignore the leading term's constant coefficient, since constant factors are less significant than the rate of growth in determining computational efficiency for large inputs. Thus we can write that the worst-case running time is Θ(n²).
We usually consider one algorithm to be more efficient than another if its worst-case running time has a lower order of growth. Due to constant factors and lower-order terms, this evaluation may be in error for small inputs; but for large inputs, a Θ(n²) algorithm will run more quickly in the worst case than a Θ(n³) algorithm.
Θ-Notation
A function f(n) belongs to the set Θ(g(n)) if there exist positive constants C₁ and C₂ such that it can be "sandwiched" between C₁g(n) and C₂g(n) for sufficiently large n. We write f(n) ∈ Θ(g(n)) to indicate that f(n) is a member of Θ(g(n)), or f(n) = Θ(g(n)) to express the same notion. For any two functions f(n) and g(n), we have f(n) = Θ(g(n)) if and only if f(n) = O(g(n)) and f(n) = Ω(g(n)). From this statement, the fact that an² + bn + c = Θ(n²) for any constants a, b and c with a > 0 immediately implies that
∴ an² + bn + c = Ω(n²)
∴ an² + bn + c = O(n²)

O-Notation
We use O-notation to give an upper bound on a function, within a constant factor: f(n) = O(g(n)) if there exist positive constants C and n₀ such that 0 ≤ f(n) ≤ Cg(n), ∀ n ≥ n₀.

Example 2: Let f(n) = 5.5n² − 7n. Verify whether f(n) is O(n²).
Solution: Let C be a constant such that
5.5n² − 7n ≤ Cn², or n ≥ 7/(C − 5.5)
Fix C = 9 to get n ≥ 2, so n₀ = 2 and C = 9. This shows that there exist positive constants C = 9 and n₀ = 2 such that 0 ≤ f(n) ≤ Cn², ∀ n ≥ n₀.

Example 3:
h(n) = 3n³ + 10n + 1000 log n ∈ O(n³)
h(n) = 3n³ + 10n + 1000 log n ∈ O(n⁴)

•• Using O-notation, we can often describe the running time of an algorithm merely by inspecting the algorithm's overall structure. For example, the doubly nested loop structure of the insertion sort algorithm yields an O(n²) upper bound on the worst-case running time: the cost of each iteration of the inner loop is bounded from above by O(1) (a constant), and the inner loop is executed at most once for each of the n² pairs.
•• The O(n²) bound on the worst-case running time of insertion sort also applies to its running time on every input.
•• The Θ(n²) bound on the worst-case running time of insertion sort, however, does not imply a Θ(n²) bound on the running time of insertion sort on every input: when the input is already sorted, insertion sort runs in Θ(n) time.

Ω (Omega)-Notation
The Ω-notation is used for asymptotically lower-bounding a function: we use Ω (big-omega) notation to represent a set of functions that lower-bounds a particular function.

Definition: We say that a function f(n) is big-omega of g(n), written f(n) = Ω(g(n)), if there exist positive constants C and n₀ such that
0 ≤ Cg(n) ≤ f(n), ∀ n ≥ n₀
For all values of n to the right of n₀, the value of f(n) is on or above Cg(n).

Example 4: Let f(n) = 5.5n² − 7n. Verify whether f(n) is Ω(n²).
Solution: Let C be a constant such that
5.5n² − 7n ≥ Cn², or n ≥ 7/(5.5 − C)
Fix C = 3 to get n ≥ 2.8, so n₀ = 2.8 and C = 3. This shows that there exist positive constants C = 3 and n₀ = 2.8 such that 0 ≤ Cn² ≤ f(n), ∀ n ≥ n₀.
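The constants from Examples 2 and 4 can be checked numerically. A small C sketch (the sampling points are arbitrary) verifies the sandwich 0 ≤ 3n² ≤ f(n) ≤ 9n² for n ≥ 3:

    #include <stdio.h>

    /* Check the sandwich from Examples 2 and 4: with C = 3 below and C = 9 above,
       3n^2 <= 5.5n^2 - 7n <= 9n^2 at every tested n >= 3, i.e. f(n) = Theta(n^2). */
    int main(void) {
        for (long n = 3; n <= 1000000; n *= 10) {
            double f = 5.5 * n * n - 7.0 * n;
            printf("n = %7ld   3n^2 = %.0f <= f = %.1f <= 9n^2 = %.0f\n",
                   n, 3.0 * n * n, f, 9.0 * n * n);
        }
        return 0;
    }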
[Figure 7 A diagrammatic representation of the asymptotic notations O, Ω and Θ: (a) f(n) = O(g(n)), with 0 ≤ f(n) ≤ Cg(n), ∀ n ≥ n₀; (b) f(n) = Ω(g(n)), with 0 ≤ Cg(n) ≤ f(n), ∀ n ≥ n₀; (c) f(n) = Θ(g(n)), with 0 ≤ C₂g(n) ≤ f(n) ≤ C₁g(n), ∀ n ≥ n₀]

•• Ω-notation describes a lower bound; it is used to bound the best-case running time of an algorithm. The best-case running time of insertion sort is Ω(n). The running time of insertion sort falls between Ω(n) and O(n²), since it falls anywhere between a linear function of n and a quadratic function of n.
•• When we say that the running time of an algorithm is Ω(g(n)), we mean that no matter what particular input of size n is chosen for each value of n, the running time on that input is at least a constant times g(n), for sufficiently large n.

o-Notation
The asymptotic upper bound provided by O-notation may or may not be asymptotically tight. The bound 2n³ = O(n³) is asymptotically tight, but the bound 2n = O(n²) is not. We use o-notation to denote an upper bound that is not asymptotically tight.

ω-Notation
By analogy, ω-notation is to Ω-notation as o-notation is to O-notation. We use ω-notation to denote a lower bound that is not asymptotically tight. It is defined as
f(n) ∈ ω(g(n)) if and only if g(n) ∈ o(f(n))

Floors and Ceilings
For any real number x,
x − 1 < ⌊x⌋ ≤ x ≤ ⌈x⌉ < x + 1
For any integer n,
⌈n/2⌉ + ⌊n/2⌋ = n
For any real number n ≥ 0 and integers a, b > 0,
⌈⌈n/a⌉/b⌉ = ⌈n/(ab)⌉ and ⌊⌊n/a⌋/b⌋ = ⌊n/(ab)⌋
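These identities can be spot-checked mechanically. A small C sketch (integer ceiling division written as (n + d − 1)/d is an assumed idiom, not from the text):

    #include <stdio.h>

    /* Integer ceiling division: ceil(n/d) for n >= 0, d > 0. */
    static long cdiv(long n, long d) { return (n + d - 1) / d; }

    int main(void) {
        /* Spot-check the nested floor/ceiling identities for small n, a, b. */
        for (long n = 0; n <= 1000; n++)
            for (long a = 1; a <= 7; a++)
                for (long b = 1; b <= 7; b++) {
                    if (cdiv(cdiv(n, a), b) != cdiv(n, a * b)) printf("ceil identity fails\n");
                    if ((n / a) / b != n / (a * b))            printf("floor identity fails\n");
                }
        printf("identities hold on all tested values\n");
        return 0;
    }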
Polynomials
Given a non-negative integer k, a polynomial in n of degree k is a function p(n) of the form
p(n) = ∑_{i=0}^{k} aᵢnⁱ
where the constants a₀, a₁, …, aₖ are the coefficients of the polynomial and aₖ ≠ 0. For an asymptotically positive polynomial p(n) of degree k, we have p(n) = Θ(nᵏ).

Exponentials
For all real a > 0, m and n, we have the following identities:
a⁰ = 1
a¹ = a
a⁻¹ = 1/a
(aᵐ)ⁿ = aᵐⁿ
(aᵐ)ⁿ = (aⁿ)ᵐ
aᵐaⁿ = aᵐ⁺ⁿ
eˣ = 1 + x + x²/2! + x³/3! + ⋯ = ∑_{i=0}^{∞} xⁱ/i!

Logarithms
log_b aⁿ = n log_b a
log_b a = log_c a / log_c b

Comparison of Functions
Transitivity
1. f(n) = Θ(g(n)) and g(n) = Θ(h(n)) ⇒ f(n) = Θ(h(n))
2. f(n) = O(g(n)) and g(n) = O(h(n)) ⇒ f(n) = O(h(n))
3. f(n) = Ω(g(n)) and g(n) = Ω(h(n)) ⇒ f(n) = Ω(h(n))
4. f(n) = o(g(n)) and g(n) = o(h(n)) ⇒ f(n) = o(h(n))
5. f(n) = ω(g(n)) and g(n) = ω(h(n)) ⇒ f(n) = ω(h(n))

Reflexivity
1. f(n) = Θ(f(n))
2. f(n) = O(f(n))
3. f(n) = Ω(f(n))

Symmetry
f(n) = Θ(g(n)) if and only if g(n) = Θ(f(n))

Substitution Method
In this method one has to guess the form of the solution; it can be applied only in cases when it is easy to guess the form of the answer. A recursion tree is one systematic way to generate such a guess. Consider the recurrence relation T(n) = 3T(n/4) + cn².
[Figure: recursion tree for T(n) = 3T(n/4) + cn², with cost cn² at the root, three children of size n/4 at each level, and T(1) at each leaf]

The sub-problem size for a node at depth i is n/4ⁱ; the sub-problem size reaches n = 1 when n/4ⁱ = 1, that is, when i = log₄n, so the tree has log₄n + 1 levels.

•• We have to determine the cost at each level of the tree. Each level has 3 times more nodes than the level above, so the number of nodes at depth i is 3ⁱ.
•• Sub-problem sizes reduce by a factor of 4 for each level we go down from the root, so each node at depth i, for i = 0, 1, 2, …, log₄n − 1, has a cost of c(n/4ⁱ)².

The total cost over all nodes at depth i, for i = 0, 1, …, log₄n − 1, is therefore
3ⁱ · c(n/4ⁱ)² = (3/16)ⁱ cn²
The last level, at depth log₄n, has 3^{log₄n} = n^{log₄3} nodes, each contributing cost T(1), for a total cost of n^{log₄3}T(1), which is Θ(n^{log₄3}). The cost of the entire tree equals the sum of the costs over all levels:

T(n) = cn² + (3/16)cn² + (3/16)²cn² + ⋯ + (3/16)^{log₄n−1}cn² + Θ(n^{log₄3})
     = ∑_{i=0}^{log₄n−1} (3/16)ⁱ cn² + Θ(n^{log₄3})
     < ∑_{i=0}^{∞} (3/16)ⁱ cn² + Θ(n^{log₄3})
     = cn² · 1/(1 − 3/16) + Θ(n^{log₄3})
     = (16/13)cn² + Θ(n^{log₄3}) = O(n²)

Master Method
Let a ≥ 1 and b > 1 be constants, let f(n) be a function, and let T(n) be defined on the non-negative integers by the recurrence
T(n) = aT(n/b) + f(n)
Then T(n) can be bounded asymptotically as follows:
1. If f(n) = O(n^{log_b a − ε}) for some constant ε > 0, then T(n) = Θ(n^{log_b a}).
2. If f(n) = Θ(n^{log_b a}), then T(n) = Θ(n^{log_b a} · log n).
3. If f(n) = Ω(n^{log_b a + ε}) for some constant ε > 0, and if af(n/b) ≤ cf(n) for some constant c < 1 and all sufficiently large n, then T(n) = Θ(f(n)).

Note: In the first case, not only must f(n) be smaller than n^{log_b a}, it must be polynomially smaller; that is, f(n) must be asymptotically smaller than n^{log_b a} by a factor of n^ε for some constant ε > 0. In the third case, not only must f(n) be larger than n^{log_b a}, it must be polynomially larger and, in addition, satisfy the regularity condition af(n/b) ≤ cf(n).

Example: Consider the recurrence relation T(n) = 9T(n/3) + n. To apply the master theorem, the recurrence must be in the form T(n) = aT(n/b) + f(n); here a = 9, b = 3 and f(n) = n, so n^{log_b a} = n^{log₃9} = n². Since f(n) = O(n^{log₃9 − ε}) with ε = 1, we can apply case 1 of the master theorem, and the solution is T(n) = Θ(n²).
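The prediction can be sanity-checked numerically. A minimal C sketch (the base case T(1) = 1 is an assumption made for the experiment):

    #include <stdio.h>

    /* Evaluate T(n) = 9T(n/3) + n with T(1) = 1, to check the
       master-theorem prediction T(n) = Theta(n^2). */
    static double T(long n) {
        if (n <= 1) return 1.0;
        return 9.0 * T(n / 3) + (double)n;
    }

    int main(void) {
        for (long n = 3; n <= 59049; n *= 3)   /* powers of 3 keep n/3 exact */
            printf("n = %6ld   T(n)/n^2 = %.3f\n", n, T(n) / ((double)n * n));
        /* The ratio approaches a constant, consistent with T(n) = Theta(n^2). */
        return 0;
    }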
Exercises

Practice Problems 1
Directions for questions 1 to 15: Select the correct alternative from the given choices.

1. What is the time complexity of the recurrence relation T(n) = 2T(n/2) + n²?
(A) Θ(n²) (B) Θ(n) (C) Θ(n³) (D) Θ(n log n)

2. What is the time complexity, by using the master theorem, of the recurrence relation T(n) = 2T(n/2) + n?
(A) Θ(n²) (B) Θ(n) (C) Θ(n³) (D) Θ(n log n)

3. What is the time complexity, by using the master theorem, of the recurrence relation T(n) = 2T(n/4) + n^0.51?
(A) Θ(n²) (B) Θ(n) (C) Θ(n³) (D) Θ(n^0.51)

4. What is the time complexity, using the master theorem, of the recurrence relation T(n) = 7T(n/3) + n²?
(A) Θ(n²) (B) Θ(n) (C) Θ(n³) (D) Θ(log n)

5. Time complexity of f(x) = 4x² − 5x + 3 is
(A) O(x) (B) O(x²) (C) O(x^{3/2}) (D) O(x^{0.5})

6. Time complexity of f(x) = (x² + 5 log₂ x)/(2x + 1) is
(A) O(x) (B) O(x²) (C) O(x^{3/2}) (D) O(x^{0.5})

7. For the recurrence relation T(n) = 2T(√n) + lg n, which is the tightest upper bound?
(A) T(n) = O(n²) (B) T(n) = O(n³)
(C) T(n) = O(log n) (D) T(n) = O(lg n lg lg n)

8. Consider T(n) = 9T(n/3) + n. Which of the following is TRUE?
(A) T(n) = Θ(n²) (B) T(n) = Θ(n³)
(C) T(n) = Ω(n³) (D) T(n) = O(n)

9. If f(n) is 100 · n seconds and g(n) is 0.5 · n seconds, then
(A) f(n) = Θ(g(n)) (B) f(n) = Ω(g(n))
(C) f(n) = ω(g(n)) (D) None of these

10. Solve the recurrence relation using the master method: T(n) = 4T(n/2) + n²
(A) Θ(n log n) (B) Θ(n² log n) (C) Θ(n²) (D) Θ(n³)

11. Arrange the following functions according to their order of growth (from low to high):
(A) ∛n, 0.001n⁴ + 3n³ + 1, 3ⁿ, 2²ⁿ
(B) 3ⁿ, 2²ⁿ, ∛n, 0.001n⁴ + 3n³ + 1
(C) 2²ⁿ, ∛n, 3ⁿ, 0.001n⁴ + 3n³ + 1
(D) ∛n, 2²ⁿ, 3ⁿ, 0.001n⁴ + 3n³ + 1

12. The following algorithm checks whether all the elements in a given array are distinct:
Input: array A[0 … n − 1]
Output: true (or) false
For i ← 0 to n − 2 do
    For j ← i + 1 to n − 1 do
        if A[i] = A[j] return false
return true
The time complexity in the worst case is
(A) Θ(n²) (B) Θ(n) (C) Θ(log n) (D) Θ(n log n)

13. The order of growth for the recurrence relation T(n) = 4T(n/2) + n³, T(1) = 1 is
(A) Θ(n) (B) Θ(n³) (C) Θ(n²) (D) Θ(log n)

14. Time complexity of T(n) = 2T(n/4) + √n is
(A) Θ(√n log n) (B) Θ(√n log √n) (C) Θ(√n) (D) Θ(n²)

15. Consider the following three claims:
(I) (n + k)ᵐ = Θ(nᵐ), where k and m are constants
(II) 2ⁿ⁺¹ = O(2ⁿ)
(III) 2²ⁿ⁺¹ = O(2ⁿ)
Which one of the following is correct?
(A) I and III (B) I and II (C) II and III (D) I, II and III
Practice Problems 2
(A) (i) is true, (ii) is false (B) Both are true
(C) Both are false (D) (ii) is true, (i) is false

4. 2n² = x(n³); x is which notation?
(A) Big-oh (B) Small-oh (C) Ω-notation (D) Θ-notation

5. The master method applies to recurrences of the form T(n) = aT(n/b) + f(n), where
(A) a ≥ 1, b > 1 (B) a = 1, b > 1 (C) a > 1, b = 1 (D) a ≥ 1, b ≥ 1

6. What is the time complexity of the recurrence relation T(n) = 4T(n/2) + n, using the master method?
(A) Θ(n²) (B) Θ(n) (C) Θ(log n) (D) Θ(n log n)

7. Use the informal definitions of O, Θ and Ω to determine which of the following assertions are true.
(A) n(n + 1)/2 ∈ O(n³) (B) n(n + 1)/2 ∈ O(n²)
(C) n(n + 1)/2 ∈ Ω(n) (D) All the above

8. Match the following:
(i) Big-oh     (A) ≥
(ii) Small-o   (B) ≤
(iii) Ω        (C) =
(iv) Θ         (D) <
(v) ω          (E) >
(A) (i)–D, (ii)–A, (iii)–C, (iv)–B, (v)–E
(B) (i)–B, (ii)–D, (iii)–A, (iv)–C, (v)–E
(C) (i)–C, (ii)–A, (iii)–B, (iv)–E, (v)–D
(D) (i)–A, (ii)–B, (iii)–C, (iv)–D, (v)–E

9. Which one of the following statements is true?
(A) Both time and space efficiencies are measured as functions of the algorithm input size.
(B) Only time efficiencies are measured as a function of the algorithm input size.
(C) Only space efficiencies are measured as a function of the algorithm input size.
(D) Neither space nor time efficiencies are measured as a function of the algorithm input size.

10. Which of the following is true?
(A) Investigation of the average-case efficiency is considerably more difficult than investigation of the worst-case and best-case efficiencies.
(B) Investigation of the best case is more complex than the average case.
(C) Investigation of the worst case is more complex than the average case.
(D) None of these

11. Time complexity of T(n) = T(n/3) + T(2n/3) + O(n) is
(A) O(1) (B) O(n log n) (C) O(log n) (D) O(n²)

12. Solve the recurrence relation to find T(n): T(n) = 4T(n/2) + n
(A) Θ(n²) (B) Θ(log₂ n) (C) Θ(n² log₂ n) (D) Θ(n³)

13. What is the worst-case analysis for the given code?
int search(int a[], int x, int n)
{
    int i;
    for (i = 0; i < n; i++)
        if (a[i] == x)
            return i;
    return -1;
}
(A) O(n) (B) O(n log n) (C) O(log n) (D) O(n²)

14. Find the time complexity of the given code.
void f(int n)
{
    if (n > 0)
    {
        f(n/2);
        f(n/2);
    }
}
(A) Θ(n²) (B) Θ(n) (C) Θ(n log n) (D) Θ(2ⁿ)

15. The running time of the following algorithm
procedure A(n)
    if n ≤ 2
        return (1)
    else
        return (A(√n))
is described by
(A) O(n log n) (B) O(log n) (C) O(log log n) (D) O(n)
Previous Years' Questions
11. (A) Θ(log n) (B) Θ(n) (C) Θ(n log n) (D) Θ(n²)

12. The running time of an algorithm is represented by the following recurrence relation: [2009]
T(n) = n, for n ≤ 3
T(n) = T(n/3) + cn, otherwise
Which one of the following represents the time complexity of the algorithm?
(A) Θ(n) (B) Θ(n log n) (C) Θ(n²) (D) Θ(n² log n)

13. Two alternative packages A and B are available for processing a database having 10ᵏ records. Package A requires 0.0001n² time units and package B requires 10n log₁₀ n time units to process n records. What is the smallest value of k for which package B will be preferred over A? [2010]
(A) 12 (B) 10 (C) 6 (D) 5

14. An algorithm to find the length of the longest monotonically increasing sequence of numbers in an array A[0 : n − 1] is given below. Let Lᵢ denote the length of the longest monotonically increasing sequence starting at index i in the array.
Initialize Lₙ₋₁ = 1.
For all i such that 0 ≤ i ≤ n − 2,
Lᵢ = 1 + Lᵢ₊₁, if A[i] < A[i + 1]
Lᵢ = 1, otherwise
Finally, the length of the longest monotonically increasing sequence is max(L₀, L₁, …, Lₙ₋₁).
Which of the following statements is TRUE? [2011]
(A) The algorithm uses dynamic programming paradigm.
(B) The algorithm has a linear complexity and uses branch and bound paradigm.
(C) The algorithm has a non-linear polynomial complexity and uses branch and bound paradigm.
(D) The algorithm uses divide and conquer paradigm.

15. Which of the given options provides the increasing order of asymptotic complexity of functions f₁, f₂, f₃ and f₄? [2011]
f₁(n) = 2ⁿ
f₂(n) = n^{3/2}
f₃(n) = n log₂ n
f₄(n) = n^{log₂ n}
(A) f₃, f₂, f₄, f₁ (B) f₃, f₂, f₁, f₄ (C) f₂, f₃, f₁, f₄ (D) f₂, f₃, f₄, f₁

16. Let W(n) and A(n) denote, respectively, the worst-case and average-case running time of an algorithm executed on input of size n. Which of the following is ALWAYS TRUE? [2012]
(A) A(n) = Ω(W(n)) (B) A(n) = Θ(W(n))
(C) A(n) = O(W(n)) (D) A(n) = o(W(n))

17. The recurrence relation capturing the optimal execution time of the Towers of Hanoi problem with n discs is [2012]
(A) T(n) = 2T(n − 2) + 2
(B) T(n) = 2T(n − 1) + n
(C) T(n) = 2T(n/2) + 1
(D) T(n) = 2T(n − 1) + 1

18. A list of n strings, each of length n, is sorted into lexicographic order using the merge sort algorithm. The worst-case running time of this computation is [2012]
(A) O(n log n) (B) O(n² log n) (C) O(n² + log n) (D) O(n²)

19. Consider the following function:
int unknown (int n) {
    int i, j, k = 0;
    for (i = n/2; i <= n; i++)
        for (j = 2; j <= n; j = j*2)
            k = k + n/2;
    return (k);
}
The return value of the function is [2013]
(A) Θ(n²) (B) Θ(n² log n) (C) Θ(n³) (D) Θ(n³ log n)

20. The number of elements that can be sorted in Θ(log n) time using heap sort is [2013]
(A) Θ(1)
(B) Θ(√log n)
(C) Θ(log n / log log n)
(D) Θ(log n)

21. Which one of the following correctly determines the solution of the recurrence relation with T(1) = 1, T(n) = 2T(n/2) + log n? [2014]
(A) Θ(n) (B) Θ(n log n) (C) Θ(n²) (D) Θ(log n)

22. An algorithm performs (log N)^{1/2} find operations, N insert operations, (log N)^{1/2} delete operations, and (log N)^{1/2} decrease-key operations on a set of data items with keys drawn from a linearly ordered set. For a delete operation, a pointer is provided to the record that must be deleted. For the decrease-key operation, a pointer is provided to the record that has its key decreased. Which one of the following data structures is the most suited for the algorithm to use, if the
34. Consider the recurrence function [2017]
T(n) = 2T(√n) + 1, for n > 2
T(n) = 2, for 0 < n ≤ 2
Then T(n) in terms of Θ notation is
(A) Θ(log log n) (B) Θ(log n)
(C) Θ(√n) (D) Θ(n)

35. Consider the following C function. [2017]
int fun (int n) {
    int i, j;
    for (i = 1; i <= n; i++) {
        for (j = 1; j < n; j += i) {
            printf("%d %d", i, j);
        }
    }
}

Which one of the following is the time complexity of the most time-efficient implementation of enqueue and dequeue, respectively, for this data structure? [2018]
(A) Θ(1), Θ(1) (B) Θ(1), Θ(n)
(C) Θ(n), Θ(1) (D) Θ(n), Θ(n)

37. Consider the following C code. Assume that the unsigned long int type is 64 bits long. [2018]
unsigned long int fun (unsigned long int n) {
    unsigned long int i, j = 0, sum = 0;
    for (i = n; i > 1; i = i/2) j++;
    for (; j > 1; j = j/2) sum++;
    return (sum);
}
The value returned when we call fun with the input 2⁴⁰ is
(A) 4 (B) 5 (C) 6 (D) 40
Answer Keys
Exercises
Practice Problems 1
1. A 2. D 3. D 4. A 5. B 6. A 7. D 8. A 9. A 10. B
11. A 12. A 13. B 14. A 15. B
Practice Problems 2
1. A 2. A 3. A 4. B 5. A 6. A 7. D 8. B 9. A 10. A
11. B 12. A 13. A 14. B 15. C