
Chapter 1

Asymptotic Analysis
LEARNING OBJECTIVES

 Algorithm  In order traversal


 Recursive algorithms  Data structure
 Towers of Hanoi  Worst-case and average-case analysis
 Time complexity  Asymptotic notations
 Space complexity  Notations and functions
 SET representation  Floor and ceil
 TREE representation  Recurrence
 Preorder traversal  Recursion-tree method
 Post-order traversal  Master method

Algorithm
An algorithm is a finite set of instructions that, if followed, accomplishes a particular task. All algorithms must satisfy the following:
• Input: Zero or more quantities are externally supplied.
• Output: At least one quantity is produced.
• Definiteness: Each instruction should be clear and unambiguous.
• Finiteness: The algorithm should terminate after a finite number of steps.
• Effectiveness: Every instruction must be very basic.

Once an algorithm is devised, it is necessary to show that it computes the correct answer for all possible inputs. This process is called algorithm validation. Analysis of algorithms refers to the task of determining how much computing time and storage an algorithm requires.

Analyzing Algorithms
The process of comparing the rates of growth of two algorithms with respect to time, space, number of registers, network bandwidth, etc. is called analysis of algorithms. This can be done in two ways:
1. Priori analysis: This analysis is done before execution. The main principle behind it is the frequency count of the fundamental instructions. It is independent of CPU, OS and system architecture, and it provides uniform estimated values.
2. Posterior analysis: This analysis is done after execution. It is dependent on system architecture, CPU, OS, etc., and it provides non-uniform exact values.

Recursive Algorithms
A recursive function is a function that is defined in terms of itself. An algorithm is said to be recursive if the same algorithm is invoked in its body.

Towers of Hanoi
There was a diamond tower (labeled A) with 64 golden disks. The disks were of decreasing size and were stacked on the tower in decreasing order of size from bottom to top. Besides this tower there were two other diamond towers (labeled B and C). We have to move the disks from tower A to tower B, using tower C for intermediate storage. As the disks are very heavy, they can be moved only one at a time, and no disk may ever rest on top of a smaller disk.

Figure 1 Towers of Hanoi (towers A, B and C, with all disks initially stacked on tower A)

Assume that the number of disks is 'n'. To get the largest disk to the bottom of tower B, we move the remaining (n – 1) disks to tower C and then move the largest disk to tower B. Now move the (n – 1) disks from tower C to tower B.

Example: (Figure: for n = 3, the seven single-disk moves that transfer disks 1–3 from tower A to tower B, using tower C for intermediate storage.)

To move 3 disks from tower A to tower B requires 7 disk movements: for 'n' disks, the number of disk movements required is 2^n – 1, and 2^3 – 1 = 7.

Time complexity
T(n) = 1 + 2T(n – 1)
     = 1 + 2(1 + 2T(n – 2)) = 1 + 2 + 2^2 T(n – 2)
     = 1 + 2 + 2^2 (1 + 2T(n – 3)) = 1 + 2 + 2^2 + 2^3 T(n – 3)
     …
     = 1 + 2 + 2^2 + … + 2^(i–1) + 2^i T(n – i)

Unrolling all the way down gives

T(n) = ∑ (i = 0 to n – 1) 2^i = 2^n – 1

The time complexity is exponential; it grows as a power of 2.
∴ T(n) = O(2^n)
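The recursive strategy above maps directly onto code. The following is a minimal C sketch (the function and parameter names are our own, not from the text); moving n disks prints the 2^n – 1 individual disk movements.

#include <stdio.h>

/* Move n disks from tower 'from' to tower 'to', using 'via' as
   intermediate storage; prints one line per disk movement. */
void hanoi(int n, char from, char to, char via) {
    if (n == 0) return;            /* no disks left: nothing to do */
    hanoi(n - 1, from, via, to);   /* park the top n - 1 disks on 'via' */
    printf("move disk %d: %c -> %c\n", n, from, to);
    hanoi(n - 1, via, to, from);   /* bring the n - 1 disks onto disk n */
}

int main(void) {
    hanoi(3, 'A', 'B', 'C');       /* prints 2^3 - 1 = 7 moves */
    return 0;
}

The call structure mirrors the recurrence T(n) = 1 + 2T(n – 1) exactly: one move plus two recursive calls on n – 1 disks.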

Space complexity
The space complexity of an algorithm is the amount of memory it needs to run to completion. The measure of the quantity of input data is called the size of the problem. For example, the size of a matrix multiplication problem might be the largest dimension of the matrices to be multiplied; the size of a graph problem might be the number of edges. The limiting behavior of the complexity as size increases is called the asymptotic time complexity.
• It is the asymptotic complexity of an algorithm which ultimately determines the size of problems that can be solved by the algorithm.
• If an algorithm processes inputs of size 'n' in time cn^2 for some constant c, then we say that the time complexity of that algorithm is O(n^2). More precisely, a function g(n) is said to be O(f(n)) if there exists a constant c such that g(n) ≤ c·f(n) for all but some finite set of non-negative values of n.
• As computers become faster and we can handle larger problems, it is the complexity of an algorithm that determines the increase in problem size that can be achieved with an increase in computer speed.
• Suppose we have five algorithms, Algorithm 1 to Algorithm 5, with the following time complexities:

Algorithm      Time Complexity
Algorithm 1    n
Algorithm 2    n log n
Algorithm 3    n^2
Algorithm 4    n^3
Algorithm 5    2^n

The time complexity is the number of time units required to process an input of size 'n'. Assume that the input size 'n' is 1000 and that one unit of time equals 1 millisecond.

The following table gives the sizes of problems that can be solved in one second, one minute, and one hour by each of these five algorithms.

                               Maximum Problem Size
Algorithm     Time Complexity    1 sec    1 min      1 hour
Algorithm 1   n                  1000     6 × 10^4   3.6 × 10^6
Algorithm 2   n log n            140      4893       2.0 × 10^5
Algorithm 3   n^2                31       244        1897
Algorithm 4   n^3                10       39         153
Algorithm 5   2^n                9        15         21

From the above table, we can say that different algorithms will give different results depending on the input size: Algorithm 5 would be best for problems of size 2 ≤ n ≤ 9, Algorithm 3 would be best for 10 ≤ n ≤ 58, Algorithm 2 would be best for 59 ≤ n ≤ 1024, and Algorithm 1 is best for problems of size greater than 1024.

Set Representation
A common use of a list is to represent a set. With this representation, the amount of memory required to represent a set is proportional to the number of elements in the set. The amount of time required to perform a set operation depends on the nature of the operation.
• Suppose A and B are two sets. An operation such as A ∩ B requires time at least proportional to the sum of the sizes of the two sets, since the list representing A and the list representing B must each be scanned at least once.
• The operation A ∪ B also requires time at least proportional to the sum of the set sizes, since we need to check for the same element appearing in both sets and delete one instance of each such element.
• If A and B are disjoint, we can find A ∪ B in time independent of the sizes of A and B by simply concatenating the two lists representing A and B.

Graph Representation
A graph G = (V, E) consists of a finite, non-empty set of vertices V and a set of edges E. If the edges are ordered pairs (v, w) of vertices, then the graph is said to be directed; v is called the tail and w the head of the edge (v, w). There are several common representations for a graph G = (V, E). One such representation is the adjacency matrix, a |V| × |V| matrix M of 0's and 1's, where the ijth element, m[i, j], is 1 if and only if there is an edge from vertex i to vertex j.
• The adjacency matrix representation is convenient for graph algorithms which frequently require knowledge of whether certain edges are present.
• The time needed to determine whether an edge is present is fixed and independent of |V| and |E|.
• The main drawback of using an adjacency matrix is that it requires |V|^2 storage even if the graph has only O(|V|) edges.
• Another representation for a graph is by means of lists. The adjacency list for a vertex v is a list of all vertices w adjacent to v. A graph can be represented by |V| adjacency lists, one for each vertex.

Example:

Figure 2 Directed graph (vertices 1–4, with edges 1→2, 1→4, 2→3, 4→2 and 4→3)

Figure 3 Adjacency matrix
     1  2  3  4
1    0  1  0  1
2    0  0  1  0
3    0  0  0  0
4    0  1  1  0

Figure 4 Adjacency lists
Vertex 1: 2 → 4 → 0
Vertex 2: 3 → 0
Vertex 3: 0
Vertex 4: 2 → 3 → 0

There are edges from vertex 1 to vertices 2 and 4, so the adjacency list for 1 has items 2 and 4 linked together in the format given above.
• The adjacency list representation of a graph requires storage proportional to |V| + |E|; this representation is used when |E| ≪ |V|^2.
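As an illustration of the adjacency-list representation just described, here is a minimal C sketch (the struct layout, array bound and function name are assumptions for illustration, not from the text):

#include <stdlib.h>

#define MAX_V 100

/* One entry of an adjacency list: a destination vertex plus a link
   to the next entry. */
struct edge_node {
    int dest;
    struct edge_node *next;
};

/* adj[v] points to the head of vertex v's adjacency list. */
struct edge_node *adj[MAX_V];

/* Insert edge u -> v at the front of u's list in O(1); total storage
   over all lists is proportional to |V| + |E|. */
void add_edge(int u, int v) {
    struct edge_node *n = malloc(sizeof *n);
    n->dest = v;
    n->next = adj[u];
    adj[u] = n;
}

The directed graph of Figure 2 would be built with add_edge(1, 2), add_edge(1, 4), add_edge(2, 3), add_edge(4, 2) and add_edge(4, 3).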
Tree Representation
A directed graph with no cycles is called a directed acyclic graph. A directed graph consisting of a collection of trees is called a forest. Suppose the vertex 'v' is the root of a subtree; then the depth of a vertex 'v' in a tree is the length of the path from the root to 'v'.
• The height of a vertex 'v' in a tree is the length of a longest path from 'v' to a leaf.
• The height of a tree is the height of the root.
• The level of a vertex 'v' in a tree is the height of the tree minus the depth of 'v'.

Figure 5 A binary tree and its representation (vertices 1–10; the left child and right child arrays are shown below, with 0 marking a missing child)

Vertex    Left child    Right child
1         2             6
2         3             4
3         0             0
4         0             5
5         0             0
6         7             8
7         0             0
8         0             9
9         0             10
10        0             0

• Vertex 3 is of depth 2, height 0, and level 2 (height of tree – depth of 3 = 4 – 2 = 2).
• A binary tree is represented by two arrays: left child and right child.
• A binary tree is said to be complete if, for some integer k, every vertex of depth less than k has both a left child and a right child and every vertex of depth k is a leaf. A complete binary tree of height k has exactly 2^(k+1) – 1 vertices.
• A complete binary tree of height k is often represented by a single array. Position 1 in the array contains the root; the left child of the vertex in position 'i' is located at position '2i' and the right child at position '2i + 1'.

Tree Traversals
Many algorithms which make use of trees traverse the tree in some order. Three commonly used traversals are pre-order, post-order and in-order.

Pre-order Traversal
A pre-order traversal of T is defined recursively as follows:
1. Visit the root.
2. Visit in pre-order the subtrees with roots v1, v2, …, vk in that order.

Figure 6 (a) Pre-order, (b) Post-order, (c) In-order (the same tree drawn three times, with vertices numbered 11–18 in the order each traversal visits them)

Post-order Traversal
A post-order traversal of T is defined recursively as follows:
1. Visit in post-order the subtrees with roots v1, v2, v3, …, vk in that order.
2. Visit the root r.

In-order Traversal
An in-order traversal is defined recursively as follows:
1. Visit in in-order the left subtree of the root 'r'.
2. Visit 'r'.
3. Visit in in-order the right subtree of r.

Example: Consider the given tree (root C, whose left child B has left child A, and whose right child D has right child E). What are the pre-order, post-order and in-order traversals of the above tree?

Solution: Pre-order – CBADE
          Post-order – ABEDC
          In-order – ABCDE
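The three traversals can also be written directly over the left-child/right-child arrays of Figure 5. A short C sketch (the array and function names are ours, not from the text):

#include <stdio.h>

/* Left and right child arrays for the binary tree of Figure 5;
   index 0 is unused and the value 0 means "no child". */
int lc[] = {0, 2, 3, 0, 0, 0, 7, 0, 0, 0, 0};
int rc[] = {0, 6, 4, 0, 5, 0, 8, 0, 9, 10, 0};

void preorder(int v) {             /* root, then left, then right */
    if (v == 0) return;
    printf("%d ", v);
    preorder(lc[v]);
    preorder(rc[v]);
}

void inorder(int v) {              /* left, then root, then right */
    if (v == 0) return;
    inorder(lc[v]);
    printf("%d ", v);
    inorder(rc[v]);
}

void postorder(int v) {            /* left, then right, then root */
    if (v == 0) return;
    postorder(lc[v]);
    postorder(rc[v]);
    printf("%d ", v);
}

int main(void) {
    preorder(1);   printf("\n");   /* 1 2 3 4 5 6 7 8 9 10 */
    inorder(1);    printf("\n");   /* 3 2 4 5 1 7 6 8 9 10 */
    postorder(1);  printf("\n");   /* 3 5 4 2 7 10 9 8 6 1 */
    return 0;
}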
Data Structure
A data structure is a way to store and organize data in order to facilitate access and modifications. No single data structure works well for all purposes, so it is important to know the strengths and limitations of several data structures.

Efficiency
Algorithms devised to solve the same problem often differ dramatically in their efficiency. Let us compare the efficiencies of insertion sort and merge sort. Insertion sort takes time equal to C1·n^2 to sort 'n' elements, where C1 is a constant that does not depend on 'n'; that is, it takes time proportional to n^2. Merge sort takes time equal to C2·n log n, where C2 is another constant that also does not depend on 'n'. Insertion sort has a smaller constant factor than merge sort (C1 < C2), but constant factors are far less significant in the running time than the rate of growth.

Merge sort has a factor of 'log n' in its running time, while insertion sort has a factor of 'n', which is much larger. Insertion sort is faster than merge sort for small input sizes, but once the input size 'n' becomes large enough, merge sort will perform better, no matter how much smaller C1 is than C2: there will always be a crossover point beyond which merge sort is faster.

Example: Consider two computers: computer A (the faster computer) runs insertion sort and computer B (the slower computer) runs merge sort. Each computer is given 2 million numbers to sort. Suppose that computer A executes one billion (10^9) instructions per second and computer B executes only 10 million (10^7) instructions per second, so that computer A is 100 times faster than computer B (C1 = 1, C2 = 50). How much time is taken by each computer?

Solution: Insertion sort takes C1 · n^2 time and merge sort takes C2 · n · log n time.

Computer A takes
    (2 × 10^6)^2 instructions / 10^9 instructions per second = 4000 seconds

Computer B takes
    50 × 2 × 10^6 × log(2 × 10^6) instructions / 10^7 instructions per second ≅ 209 seconds

By using an algorithm whose running time grows more slowly, even with an average compiler, computer B runs about 20 times faster than computer A. The advantage of merge sort is even more pronounced when we sort ten million numbers: as the problem size increases, so does the relative advantage of merge sort.
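The crossover point between the two running-time functions can also be found numerically. A small C sketch (the constants C1 = 1 and C2 = 50 are the ones used in the example above):

#include <stdio.h>
#include <math.h>

int main(void) {
    /* Find the first n where 50*n*log2(n) (merge sort) drops below
       1*n*n (insertion sort). */
    for (long n = 2; n <= 1000; n++) {
        double insertion = 1.0 * n * n;
        double merge = 50.0 * n * log2((double)n);
        if (merge < insertion) {
            printf("crossover near n = %ld\n", n);   /* around n = 439 */
            break;
        }
    }
    return 0;
}

Below the crossover, insertion sort's smaller constant wins; beyond it, the slower growth of n log n dominates, exactly as the discussion above argues.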
Worst-case and average-case analysis
In the analysis of insertion sort, the best case occurs when the array is already sorted, and the worst case occurs when the input array is reverse sorted. We concentrate on finding the worst-case running time, that is, the longest running time for any input of size 'n'.
• The worst-case running time of an algorithm is an upper bound on the running time for any input. It gives us a guarantee that the algorithm will never take any longer.
• The average case is often roughly as bad as the worst case. Suppose that we randomly choose 'n' numbers and apply insertion sort. To insert an element A[j], we need to determine where to insert it in the sub-array A[1 … j – 1]. On average, half the elements in A[1 … j – 1] are less than A[j] and half the elements are greater, so tj = j/2. The average-case running time turns out to be a quadratic function of the input size.

Asymptotic Notations
Asymptotic notations are mostly used in computer science to describe the asymptotic running time of an algorithm. As an example, an algorithm that takes an array of size n as input and runs for time proportional to n^2 is said to take O(n^2) time.
The five asymptotic notations are:
• O (Big-oh)
• θ (Theta)
• Ω (Omega)
• o (Small-oh)
• ω (Small-omega)

How to Use Asymptotic Notation for Algorithm Analysis?
Asymptotic notation is used to determine rough estimates of the relative running time of algorithms. A worst-case analysis of any algorithm will always yield such an estimate, because it gives an upper bound g(n) on the running time T(n) of the algorithm, that is, T(n) ≤ g(n).

Example:

a ← 0                       1 unit    1 time
for i ← 1 to n do {         1 unit    n times
    for j ← 1 to i do {     1 unit    n(n + 1)/2 times
        a ← a + 1           1 unit    n(n + 1)/2 times

The times for the inner loop have been computed as follows: for each i from 1 to n, the loop is executed i times, so the total number of times is 1 + 2 + 3 + … + n = ∑ (i = 1 to n) i = n(n + 1)/2. Hence in this case

T(n) = 1 + n + 2 · n(n + 1)/2 = n^2 + 2n + 1

If we write g(n) = n^2 + 2n + 1, then T(n) ∈ θ(g(n)), that is, T(n) ∈ θ(n^2 + 2n + 1). We actually write T(n) ∈ θ(n^2), as recommended by the following rules:
• Although the definitions of asymptotic notation allow one to write, for example, T(n) ∈ O(3n^2 + 2), we simplify the function between the parentheses as much as possible (in terms of rate of growth) and write instead T(n) ∈ O(n^2). For example, for T(n) ∈ θ(4n^3 – n^2 + 3) we write T(n) ∈ θ(n^3); for an instance such as O(∑ (i = 1 to n) i), write O(n^2) after computing the sum.
• In the spirit of the simplicity rule above, when we are to compare, for instance, two candidate algorithms A and B having running times TA(n) = n^2 – 3n + 4 and TB(n) = 5n^3 + 3, rather than writing TA(n) ∈ O(TB(n)), we write TA(n) ∈ θ(n^2) and TB(n) ∈ θ(n^3), and then we conclude that A is better than B, using the fact that n^2 (quadratic) is better than n^3 (cubic) time, since n^2 ∈ O(n^3).
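To make the unit counting concrete, the loop example above can be run in C with an explicit counter (a sketch; the counter t is ours, not from the text):

#include <stdio.h>

/* Counts the executed units of the loop example above:
   1 (a <- 0) + n (outer loop) + 2 * n(n + 1)/2 (inner loop plus
   increment) = n^2 + 2n + 1. */
long count_units(int n) {
    long t = 0;
    int a = 0;  t++;                 /* a <- 0 */
    for (int i = 1; i <= n; i++) {
        t++;                         /* one unit per outer iteration */
        for (int j = 1; j <= i; j++) {
            t++;                     /* one unit for the inner loop */
            a = a + 1;  t++;         /* a <- a + 1 */
        }
    }
    return t;
}

int main(void) {
    int n = 10;
    printf("%ld vs %d\n", count_units(n), n * n + 2 * n + 1);  /* both 121 */
    return 0;
}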

Order of Growth
In the rate of growth, or order of growth, we consider only the leading term of a formula. Suppose the worst-case running time of an algorithm is an^2 + bn + c for some constants a, b and c. The leading term is an^2. We ignore the leading term's constant coefficient, since constant factors are less significant than the rate of growth in determining computational efficiency for large inputs. Thus we can write that the worst-case running time is θ(n^2).
We usually consider one algorithm to be more efficient than another if its worst-case running time has a lower order of growth. Due to constant factors and lower-order terms, this evaluation may be in error for small inputs; but for large inputs, a θ(n^2) algorithm will run more quickly in the worst case than a θ(n^3) algorithm.

θ-Notation
A function f(n) belongs to the set θ(g(n)) if there exist positive constants C1 and C2 such that it can be "sandwiched" between C1·g(n) and C2·g(n) for sufficiently large n. We write f(n) ∈ θ(g(n)) to indicate that f(n) is a member of θ(g(n)), or we can write f(n) = θ(g(n)) to express the same notion.

(Figure: f(n) lying at or above C1·g(n) and at or below C2·g(n) for all n ≥ n0.)

The above figure gives an intuitive picture of functions f(n) and g(n), where f(n) = θ(g(n)): for all values of 'n' to the right of n0, the value of f(n) lies at or above C1·g(n) and at or below C2·g(n); g(n) is an asymptotically tight bound for f(n). The definition of θ(g(n)) requires that every member f(n) ∈ θ(g(n)) be asymptotically non-negative, that is, f(n) must be non-negative whenever 'n' is sufficiently large.
The θ-notation is used for asymptotically bounding a function from both above and below. We use θ (theta) notation to represent a set of functions that bounds a particular function from above and below.

Definition: We say that a function f(n) is theta of g(n), written as f(n) = θ(g(n)), if there exist positive constants C1, C2 and n0 such that 0 ≤ C1·g(n) ≤ f(n) ≤ C2·g(n), ∀ n ≥ n0.

Example: Let f(n) = 5.5n^2 – 7n; verify whether f(n) is θ(n^2). From Example 2 below we have constants C1 = 9 and n0 = 2 such that 0 ≤ f(n) ≤ C1·n^2, ∀ n ≥ n0; from Example 4 we have constants C2 = 3 and n0 = 2.8 such that 0 ≤ C2·n^2 ≤ f(n), ∀ n ≥ n0. To show f(n) is θ(n^2), we have got hold of the two constants C1 and C2, and we fix the n0 for θ as the maximum of {2, 2.8} = 2.8.

• The lower-order terms of an asymptotically positive function can be ignored in determining asymptotically tight bounds because they are insignificant for large n.
• A small fraction of the highest-order term is enough to dominate the lower-order terms. Thus setting C1 to a value slightly smaller than the coefficient of the highest-order term and setting C2 to a value slightly larger permits the inequalities in the definition of θ-notation to be satisfied. If we take a quadratic function f(n) = an^2 + bn + c, where a, b and c are constants and a > 0, then throwing away the lower-order terms and ignoring the constant yields f(n) = θ(n^2).
• We can express any constant function as θ(n^0), or θ(1). We shall often use the notation θ(1) to mean either a constant or a constant function with respect to some variable.

O-Notation
We use O-notation to give an upper bound on a function, within a constant factor.

(Figure: f(n) lying on or below C·g(n) for all n ≥ n0.)

The above figure shows the intuition behind O-notation: for all values 'n' to the right of n0, the value of the function f(n) is on or below C·g(n). We write f(n) = O(g(n)) to indicate that a function f(n) is a member of the set O(g(n)).
f(n) = θ(g(n)) implies f(n) = O(g(n)), since θ-notation is a stronger notion than O-notation; set-theoretically, we have θ(g(n)) ⊆ O(g(n)). Thus the fact that any quadratic function an^2 + bn + c, where a > 0, is in θ(n^2) also shows that any quadratic function is in O(n^2). When we write f(n) = O(g(n)), we are claiming that some constant multiple of g(n) is an asymptotic upper bound on f(n), with no claim about how tight an upper bound it is.
The O-notation is used for asymptotically upper-bounding a function. We use O (big-oh) notation to represent a set of functions that upper-bounds a particular function.

Definition: We say that a function f(n) is big-oh of g(n), written as f(n) = O(g(n)), if there exist positive constants C and n0 such that
0 ≤ f(n) ≤ C·g(n), ∀ n ≥ n0

Solved Examples
Example 1: Let f(n) = n^2. Then
f(n) = O(n^2)
f(n) = O(n^2 log n)
f(n) = O(n^2.5)
f(n) = O(n^3)
f(n) = O(n^4) … and so on.

Example 2: Let f(n) = 5.5n^2 – 7n; verify whether f(n) is O(n^2).

Solution: Let C be a constant such that 5.5n^2 – 7n ≤ C·n^2, i.e., n ≥ 7/(C – 5.5). Fix C = 9 to get n ≥ 2; so our n0 = 2 and C = 9. This shows that there exist positive constants C = 9 and n0 = 2 such that 0 ≤ f(n) ≤ C·n^2, ∀ n ≥ n0.

Example 3:
h(n) = 3n^3 + 10n + 1000 log n ∈ O(n^3)
h(n) = 3n^3 + 10n + 1000 log n ∈ O(n^4)

• Using O-notation, we can often describe the running time of an algorithm merely by inspecting the algorithm's overall structure. For example, the doubly nested loop structure of the insertion sort algorithm yields an O(n^2) upper bound on the worst-case running time: the cost of each iteration of the inner loop is bounded from above by O(1) (constant), and the inner loop is executed at most once for each of the n^2 pairs.
• The O(n^2) bound on the worst-case running time of insertion sort also applies to its running time on every input.
• The θ(n^2) bound on the worst-case running time of insertion sort, however, does not imply a θ(n^2) bound on the running time of insertion sort on every input: when the input is already sorted, insertion sort runs in θ(n) time.

Ω (omega)-notation
The Ω-notation is used for asymptotically lower-bounding a function. We use Ω (big-omega) notation to represent a set of functions that lower-bounds a particular function.

Definition: We say that a function f(n) is big-omega of g(n), written as f(n) = Ω(g(n)), if there exist positive constants C and n0 such that
0 ≤ C·g(n) ≤ f(n), ∀ n ≥ n0

(Figure: f(n) lying on or above C·g(n) for all n ≥ n0.)

The intuition behind Ω-notation is shown in the above figure: for all values 'n' to the right of n0, the value of f(n) is on or above C·g(n). For any two functions f(n) and g(n), we have f(n) = θ(g(n)) if and only if f(n) = O(g(n)) and f(n) = Ω(g(n)). From this statement we can say that an^2 + bn + c = θ(n^2), for any constants a, b and c where a > 0, immediately implies that
∴ an^2 + bn + c = Ω(n^2)
∴ an^2 + bn + c = O(n^2)

Example 4: Let f(n) = 5.5n^2 – 7n; verify whether f(n) is Ω(n^2).

Solution: Let C be a constant such that 5.5n^2 – 7n ≥ C·n^2, i.e., n ≥ 7/(5.5 – C). Fix C = 3 to get n ≥ 2.8; so our n0 = 2.8 and C = 3. This shows that there exist positive constants C = 3 and n0 = 2.8 such that 0 ≤ C·n^2 ≤ f(n), ∀ n ≥ n0.

Figure 7 A diagrammatic representation of the asymptotic notations O, Ω and θ:
(a) f(n) = O(g(n)): 0 ≤ f(n) ≤ C·g(n), ∀ n ≥ n0
(b) f(n) = Ω(g(n)): 0 ≤ C·g(n) ≤ f(n), ∀ n ≥ n0
(c) f(n) = θ(g(n)): 0 ≤ C2·g(n) ≤ f(n) ≤ C1·g(n), ∀ n ≥ n0

• Ω-notation describes a lower bound; it is used to bound the best-case running time of an algorithm. The best-case running time of insertion sort is Ω(n). The running time of insertion sort falls between Ω(n) and O(n^2), since it falls anywhere between a linear function of 'n' and a quadratic function of 'n'.

• When we say that the running time of an algorithm is Ω(g(n)), we mean that no matter what particular input of size 'n' is chosen for each value of n, the running time on that input is at least a constant times g(n), for sufficiently large 'n'.

o-notation
The asymptotic upper bound provided by O-notation may or may not be asymptotically tight. The bound 2n^3 = O(n^3) is asymptotically tight, but the bound 2n = O(n^2) is not. We use o-notation (small-oh) to denote an upper bound that is not asymptotically tight.

ω-notation
By analogy, ω-notation is to Ω-notation as o-notation is to O-notation. We use ω-notation to denote a lower bound that is not asymptotically tight. It is defined as:
f(n) ∈ ω(g(n)) if and only if g(n) ∈ o(f(n))

Comparison of Functions
Transitivity
1. f(n) = θ(g(n)) and g(n) = θ(h(n)) ⇒ f(n) = θ(h(n))
2. f(n) = O(g(n)) and g(n) = O(h(n)) ⇒ f(n) = O(h(n))
3. f(n) = Ω(g(n)) and g(n) = Ω(h(n)) ⇒ f(n) = Ω(h(n))
4. f(n) = o(g(n)) and g(n) = o(h(n)) ⇒ f(n) = o(h(n))
5. f(n) = ω(g(n)) and g(n) = ω(h(n)) ⇒ f(n) = ω(h(n))

Reflexivity
1. f(n) = θ(f(n))
2. f(n) = O(f(n))
3. f(n) = Ω(f(n))

Symmetry
f(n) = θ(g(n)) if and only if g(n) = θ(f(n))

Transpose symmetry
1. f(n) = O(g(n)) if and only if g(n) = Ω(f(n))
2. f(n) = o(g(n)) if and only if g(n) = ω(f(n))

Notations and Functions
Floor and Ceil
For any real number 'x', we denote the greatest integer less than or equal to x by ⌊x⌋, called the floor of x, and the least integer greater than or equal to x by ⌈x⌉, called the ceiling of x. For any real x,
x – 1 < ⌊x⌋ ≤ x ≤ ⌈x⌉ < x + 1
For any integer n,
⌈n/2⌉ + ⌊n/2⌋ = n
For any real number n ≥ 0 and integers a, b > 0,
⌈⌈n/a⌉ / b⌉ = ⌈n/(ab)⌉
⌊⌊n/a⌋ / b⌋ = ⌊n/(ab)⌋

Polynomials
Given a non-negative integer k, a polynomial in n of degree 'k' is a function p(n) of the form p(n) = ∑ (i = 0 to k) a_i n^i, where the constants a0, a1, …, ak are the coefficients of the polynomial and ak ≠ 0. For an asymptotically positive polynomial p(n) of degree k, we have p(n) = θ(n^k).

Exponentials
For all real a > 0, m and n, we have the following identities:
a^0 = 1
a^1 = a
a^(–1) = 1/a
(a^m)^n = a^(mn)
(a^m)^n = (a^n)^m
a^m a^n = a^(m+n)
e^x = 1 + x + x^2/2! + x^3/3! + … = ∑ (i = 0 to ∞) x^i/i!
• For all real x, we have the inequality e^x ≥ 1 + x.
• If |x| ≤ 1, we have 1 + x ≤ e^x ≤ 1 + x + x^2.

Logarithms
lg n = log_2 n (binary logarithm)
ln n = log_e n (natural logarithm)
lg^k n = (lg n)^k (exponentiation)
lg lg n = lg(lg n) (composition)
For all real a > 0, b > 0, c > 0 and n:
log_c(ab) = log_c a + log_c b

log_b(a^n) = n log_b a
log_b a = log_c a / log_c b
log_b(1/a) = –log_b a
log_b a = 1 / (log_a b)
a^(log_b c) = c^(log_b a)

Factorials
n! is defined for integers n ≥ 0 as
n! = 1 if n = 0, and n! = (n – 1)! · n if n > 0.
A weak upper bound on the factorial function is n! ≤ n^n, since each of the n terms in the factorial product is at most n. Furthermore,
n! = o(n^n)
n! = ω(2^n)
lg(n!) = θ(n log n)

Iterated Logarithm
The notation lg* n is used to denote the iterated logarithm. Let lg^(i) n denote the logarithm function applied i times in succession, with f(n) = lg n. The logarithm of a non-positive number is undefined, so lg^(i) n is defined only if lg^(i–1) n > 0. The iterated logarithm function is defined as lg* n = min{i ≥ 0 : lg^(i) n ≤ 1}. This is a very slowly growing function:
lg* 2 = 1
lg* 4 = 2
lg* 16 = 3
lg* 65536 = 4
lg* (2^65536) = 5
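Because lg* n just counts how many times lg must be applied before the value drops to 1 or below, it is straightforward to compute. A small C sketch (the function name is ours, not from the text):

#include <stdio.h>
#include <math.h>

/* Iterated logarithm: the number of times log2 must be applied
   to n before the result is <= 1. */
int lg_star(double n) {
    int i = 0;
    while (n > 1.0) {
        n = log2(n);
        i++;
    }
    return i;
}

int main(void) {
    printf("%d %d %d %d\n",
           lg_star(2), lg_star(4), lg_star(16), lg_star(65536));
    /* prints 1 2 3 4, matching the values listed above */
    return 0;
}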

Recurrences
When an algorithm contains a recursive call to itself, its running time can often be described by a recurrence. A recurrence is an equation that describes a function in terms of its value on smaller inputs. For example, the worst-case running time T(n) of merge sort can be described as
T(n) = θ(1) if n = 1
T(n) = 2T(n/2) + θ(n) if n > 1
and the time complexity of the merge sort algorithm in the worst case is T(n) = θ(n log n).
There are three methods to solve recurrence relations:
1. Substitution method
2. Recursion-tree method
3. Master method

Substitution Method
In this method one has to guess the form of the solution; it can be applied only in cases when it is easy to guess the form of the answer. Consider the recurrence relation
T(n) = 2T(n/2) + n
We guess that the solution is T(n) = O(n log n), so we have to prove that T(n) ≤ cn log n for some constant c > 0. Assume that this bound holds for ⌊n/2⌋, that is, T(n/2) ≤ c(n/2) log(n/2). Then
T(n) ≤ 2(c(n/2) log(n/2)) + n
     = cn log(n/2) + n
     = cn log n – cn log 2 + n
     = cn log n – cn + n
     ≤ cn log n (for c ≥ 1)

Recursion-tree Method
In a recursion tree, each node represents the cost of a single sub-problem somewhere in the set of recursive function invocations. We sum the costs within each level of the tree to obtain a set of per-level costs, and then we sum all the per-level costs to determine the total cost of all levels of the recursion. Recursion trees are useful when the recurrence describes the running time of a divide-and-conquer algorithm.

Example: Consider the given recurrence relation T(n) = 3T(n/4) + θ(n^2). We create a recursion tree for the recurrence T(n) = 3T(n/4) + cn^2. The cn^2 term at the root represents the cost at the top level of recursion, and the three subtrees of the root represent the costs incurred by the sub-problems of size n/4; expanding each of these in turn gives three children of cost c(n/4)^2, each with three sub-problems of size n/16, and so on.

Figure 8 Recursion tree for T(n) = 3T(n/4) + cn^2 (a root of cost cn^2 with three children of cost c(n/4)^2, each with three children T(n/16))

Figure 9 Expanded recursion tree with height log_4 n (∴ log_4 n + 1 levels); each leaf contributes cost T(1)

The sub-problem size for a node at depth 'i' is n/4^i. At the bottom of the tree the sub-problem size reaches n = 1, which happens when n/4^i = 1, i.e., i = log_4 n; hence the tree has log_4 n + 1 levels.
• We have to determine the cost at each level of the tree. Each level has three times more nodes than the level above, so the number of nodes at depth 'i' is 3^i.
• Sub-problem sizes reduce by a factor of 4 for each level we go down from the root, so each node at depth i, for i = 0, 1, 2, …, log_4 n – 1, has a cost of c(n/4^i)^2.
The total cost over all nodes at depth i, for i = 0, 1, …, log_4 n – 1, is therefore
3^i · c(n/4^i)^2 = (3/16)^i cn^2
The last level, at depth log_4 n, has 3^(log_4 n) = n^(log_4 3) nodes, each contributing cost T(1), for a total cost of n^(log_4 3) T(1), which is θ(n^(log_4 3)). The cost of the entire tree is equal to the sum of the costs over all levels:
T(n) = cn^2 + (3/16) cn^2 + (3/16)^2 cn^2 + … + (3/16)^(log_4 n – 1) cn^2 + θ(n^(log_4 3))
     = ∑ (i = 0 to log_4 n – 1) (3/16)^i cn^2 + θ(n^(log_4 3))
     < ∑ (i = 0 to ∞) (3/16)^i cn^2 + θ(n^(log_4 3))
     = cn^2 / (1 – 3/16) + θ(n^(log_4 3))
     = (16/13) cn^2 + θ(n^(log_4 3)) = O(n^2)

Master Method
Let a ≥ 1 and b > 1 be constants, let f(n) be a function, and let T(n) be defined on the non-negative integers by the recurrence
T(n) = aT(n/b) + f(n)
Then T(n) can be bounded asymptotically as follows:
1. If f(n) = O(n^(log_b a – ε)) for some constant ε > 0, then T(n) = θ(n^(log_b a)).
2. If f(n) = θ(n^(log_b a)), then T(n) = θ(n^(log_b a) · log n).
3. If f(n) = Ω(n^(log_b a + ε)) for some constant ε > 0, and if a·f(n/b) ≤ c·f(n) for some constant c < 1 and all sufficiently large n, then T(n) = θ(f(n)).

Note: In the first case, not only must f(n) be smaller than n^(log_b a), it must be polynomially smaller; that is, f(n) must be asymptotically smaller than n^(log_b a) by a factor of n^ε for some constant ε > 0. In the third case, not only must f(n) be larger than n^(log_b a), it must be polynomially larger and must in addition satisfy the regularity condition a·f(n/b) ≤ c·f(n).

Example: Consider the given recurrence relation T(n) = 9T(n/3) + n. To apply the master theorem, the recurrence relation must be in the form T(n) = aT(n/b) + f(n). Here a = 9, b = 3, f(n) = n, and n^(log_b a) = n^(log_3 9) = n^2. Since f(n) = O(n^(log_3 9 – ε)), where ε = 1, we can apply case 1 of the master theorem, and the solution is T(n) = θ(n^2).
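For a case-2 illustration of the master method (our own example, not from the text), consider merge sort: its recurrence is T(n) = 2T(n/2) + θ(n), with a = 2, b = 2 and n^(log_2 2) = n = θ(f(n)), so case 2 gives T(n) = θ(n log n). A compact C sketch:

#include <stdlib.h>

/* Merge sort: two recursive calls on halves (2T(n/2)) plus a
   linear-time merge (theta(n)); case 2 gives theta(n log n). */
void merge_sort(int *a, int n) {
    if (n <= 1) return;
    int mid = n / 2;
    merge_sort(a, mid);                  /* T(n/2) on the left half  */
    merge_sort(a + mid, n - mid);        /* T(n/2) on the right half */

    int *tmp = malloc(n * sizeof *tmp);  /* theta(n) merge step */
    int i = 0, j = mid, k = 0;
    while (i < mid && j < n)
        tmp[k++] = (a[i] <= a[j]) ? a[i++] : a[j++];
    while (i < mid) tmp[k++] = a[i++];
    while (j < n)   tmp[k++] = a[j++];
    for (k = 0; k < n; k++) a[k] = tmp[k];
    free(tmp);
}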

Exercises
Practice Problems 1
Directions for questions 1 to 15: Select the correct alternative from the given choices.
1. What is the time complexity of the recurrence relation T(n) = 2T(n/2) + n^2?
   (A) θ(n^2) (B) θ(n)
   (C) θ(n^3) (D) θ(n log n)
2. What is the time complexity of the recurrence relation, by using the master theorem, T(n) = 2T(n/2) + n?
   (A) θ(n^2) (B) θ(n)
   (C) θ(n^3) (D) θ(n log n)
3. What is the time complexity of the recurrence relation, by using the master theorem, T(n) = 2T(n/4) + n^0.51?
   (A) θ(n^2) (B) θ(n)
   (C) θ(n^3) (D) θ(n^0.51)
4. What is the time complexity of the recurrence relation, using the master theorem, T(n) = 7T(n/3) + n^2?
   (A) θ(n^2) (B) θ(n)
   (C) θ(n^3) (D) θ(log n)
5. Time complexity of f(x) = 4x^2 – 5x + 3 is
   (A) O(x) (B) O(x^2)
   (C) O(x^(3/2)) (D) O(x^0.5)
6. Time complexity of f(x) = (x^2 + 5 log_2 x)/(2x + 1) is
   (A) O(x) (B) O(x^2)
   (C) O(x^(3/2)) (D) O(x^0.5)
7. For the recurrence relation T(n) = 2T(⌊√n⌋) + lg n, which is the tightest upper bound?
   (A) T(n) = O(n^2) (B) T(n) = O(n^3)
   (C) T(n) = O(log n) (D) T(n) = O(lg n lg lg n)
8. Consider T(n) = 9T(n/3) + n. Which of the following is TRUE?
   (A) T(n) = θ(n^2) (B) T(n) = θ(n^3)
   (C) T(n) = Ω(n^3) (D) T(n) = O(n)
9. If f(n) is 100 * n seconds and g(n) is 0.5 * n seconds, then
   (A) f(n) = θ(g(n)) (B) f(n) = Ω(g(n))
   (C) f(n) = ω(g(n)) (D) None of these
10. Solve the recurrence relation using the master method: T(n) = 4T(n/2) + n^2
   (A) θ(n log n) (B) θ(n^2 log n)
   (C) θ(n^2) (D) θ(n^3)
11. Arrange the following functions according to their order of growth (from low to high):
   (A) ∛n, 0.001n^4 + 3n^3 + 1, 3^n, 2^(2n)
   (B) 3^n, 2^(2n), ∛n, 0.001n^4 + 3n^3 + 1
   (C) 2^(2n), ∛n, 3^n, 0.001n^4 + 3n^3 + 1
   (D) ∛n, 2^(2n), 3^n, 0.001n^4 + 3n^3 + 1
12. The following algorithm checks whether all the elements in a given array are distinct:
   Input: array A[0 … n – 1]
   Output: true (or) false
   for i ← 0 to n – 2 do
       for j ← i + 1 to n – 1 do
           if A[i] = A[j] return false
   return true
   The time complexity in the worst case is
   (A) θ(n^2) (B) θ(n)
   (C) θ(log n) (D) θ(n log n)
13. The order of growth for the following recurrence relation is T(n) = 4T(n/2) + n^3, T(1) = 1
   (A) θ(n) (B) θ(n^3)
   (C) θ(n^2) (D) θ(log n)
14. Time complexity of T(n) = 2T(n/4) + 3 is
   (A) θ(√n log n) (B) θ(√n log √n)
   (C) θ(√n) (D) θ(n^2)
15. Consider the following three claims:
   (I) (n + k)^m = θ(n^m), where k and m are constants
   (II) 2^(n+1) = O(2^n)
   (III) 2^(2n+1) = O(2^n)
   Which one of the following is correct?
   (A) I and III (B) I and II
   (C) II and III (D) I, II and III

Practice Problems 2
Directions for questions 1 to 15: Select the correct alternative from the given choices.
1. Arrange the order of growth in ascending order:
   (A) O(1) < O(log n) < O(n) < O(n^2)
   (B) O(n) < O(1) < O(log n) < O(n^2)
   (C) O(log n) < O(n) < O(1) < O(n^2)
   (D) O(n^2) < O(n) < O(log n) < O(1)
2. n = Ω(log n) means
   (A) n is at least log n (B) n is log n always
   (C) n is at most log n (D) None of these
3. Which of the following is correct?
   (i) θ(g(n)) = O(g(n)) ∩ Ω(g(n))
   (ii) θ(g(n)) = O(g(n)) ∪ Ω(g(n))
   (A) (i) is true, (ii) is false (B) Both are true
   (C) Both are false (D) (ii) is true, (i) is false
4. 2n^2 = x(n^3); x is which notation?
   (A) Big-oh (B) Small-oh
   (C) Ω-notation (D) θ-notation
5. The master method applies to recurrences of the form T(n) = aT(n/b) + f(n), where
   (A) a ≥ 1, b > 1 (B) a = 1, b > 1
   (C) a > 1, b = 1 (D) a ≥ 1, b ≥ 1
6. What is the time complexity of the recurrence relation, using the master method, T(n) = 4T(n/2) + n?
   (A) θ(n^2) (B) θ(n)
   (C) θ(log n) (D) θ(n log n)
7. Use the informal definitions of O, θ and Ω to determine which of the following assertions are true.
   (A) n(n + 1)/2 ∈ O(n^3) (B) n(n + 1)/2 ∈ O(n^2)
   (C) n(n + 1)/2 ∈ Ω(n) (D) All the above
8. Match the following:
   (i) Big-oh     (A) ≥
   (ii) Small-o   (B) ≤
   (iii) Ω        (C) =
   (iv) θ         (D) <
   (v) ω          (E) >
   (A) (i) – D, (ii) – A, (iii) – C, (iv) – B, (v) – E
   (B) (i) – B, (ii) – D, (iii) – A, (iv) – C, (v) – E
   (C) (i) – C, (ii) – A, (iii) – B, (iv) – E, (v) – D
   (D) (i) – A, (ii) – B, (iii) – C, (iv) – D, (v) – E
9. Which one of the following statements is true?
   (A) Both time and space efficiencies are measured as functions of the algorithm input size.
   (B) Only time efficiencies are measured as a function of the algorithm input size.
   (C) Only space efficiencies are measured as a function of the algorithm input size.
   (D) Neither space nor time efficiencies are measured as a function of the algorithm input size.
10. Which of the following is true?
   (A) Investigation of the average-case efficiency is considerably more difficult than investigation of the worst-case and best-case efficiencies.
   (B) Investigation of the best case is more complex than the average case.
   (C) Investigation of the worst case is more complex than the average case.
   (D) None of these
11. Time complexity of T(n) = T(n/3) + T(2n/3) + O(n) is
   (A) O(1) (B) O(n log n)
   (C) O(log n) (D) O(n^2)
12. Solve the recurrence relation to find T(n): T(n) = 4T(n/2) + n
   (A) θ(n^2) (B) θ(log_2 n)
   (C) θ(n^2 log_2 n) (D) θ(n^3)
13. What is the worst-case analysis for the given code?
   int search(int a[ ], int x, int n)
   {
       int i;
       for (i = 0; i < n; i++)
           if (a[i] == x)
               return i;
       return -1;
   }
   (A) O(n) (B) O(n log n)
   (C) O(log n) (D) O(n^2)
14. Find the time complexity of the given code.
   void f(int n)
   {
       if (n > 0)
       {
           f(n/2);
           f(n/2);
       }
   }
   (A) θ(n^2) (B) θ(n)
   (C) θ(n log n) (D) θ(2^n)
15. The running time of the following algorithm
   procedure A(n)
       if n ≤ 2
           return (1)
       else
           return (A(⌈√n⌉))
   is described by
   (A) O(√n log n) (B) O(log n)
   (C) O(log log n) (D) O(n)

Previous Years’ Questions


1. The median of n elements can be found in O(n) time. Which one of the following is correct about the complexity of quick sort, in which the median is selected as pivot? [2006]
   (A) θ(n) (B) θ(n log n)
   (C) θ(n^2) (D) θ(n^3)
2. Given two arrays of numbers a1 … an and b1 … bn, where each number is 0 or 1, the fastest algorithm to find the largest span (i, j) such that ai + ai+1 + … + aj = bi + bi+1 + … + bj, or report that there is no such span, [2006]
   (A) takes O(3^n) and Ω(2^n) time if hashing is permitted
   (B) takes O(n^3) and Ω(n^2.5) time in the key comparison model
   (C) takes θ(n) time and space
   (D) takes O(√n) time only if the sum of the 2n elements is an even number
3. Consider the following segment of C-code:
   int j, n;
   j = 1;
   while (j <= n)
       j = j*2;
   The number of comparisons made in the execution of the loop for any n > 0 is: [2007]
   (A) ⌈log_2 n⌉ + 1 (B) n
   (C) ⌈log_2 n⌉ (D) ⌊log_2 n⌋ + 1
4. In the following C function, let n ≥ m.
   int gcd(n, m)
   {
       if (n%m == 0) return m;
       n = n%m;
       return gcd(m, n);
   }
   How many recursive calls are made by this function? [2007]
   (A) Θ(log_2 n) (B) Ω(n)
   (C) Θ(log_2 log_2 n) (D) Θ(√n)
5. What is the time complexity of the following recursive function: [2007]
   int DoSomething(int n) {
       if (n <= 2)
           return 1;
       else
           return (DoSomething(floor(sqrt(n))) + n);
   }
   (A) Θ(n^2) (B) Θ(n log_2 n)
   (C) Θ(log_2 n) (D) Θ(log_2 log_2 n)
6. An array of n numbers is given, where n is an even number. The maximum as well as the minimum of these n numbers needs to be determined. Which of the following is TRUE about the number of comparisons needed? [2007]
   (A) At least 2n – c comparisons, for some constant c, are needed.
   (B) At most 1.5n – 2 comparisons are needed.
   (C) At least n log_2 n comparisons are needed.
   (D) None of the above.
7. Consider the following C code segment:
   int IsPrime(n)
   {
       int i, n;
       for (i = 2; i <= sqrt(n); i++)
           if (n%i == 0)
           { printf("Not Prime\n"); return 0; }
       return 1;
   }
   Let T(n) denote the number of times the for loop is executed by the program on input n. Which of the following is TRUE? [2007]
   (A) T(n) = O(√n) and T(n) = Ω(√n)
   (B) T(n) = O(√n) and T(n) = Ω(1)
   (C) T(n) = O(n) and T(n) = Ω(√n)
   (D) None of the above
8. The most efficient algorithm for finding the number of connected components in an undirected graph on n vertices and m edges has time complexity [2008]
   (A) Θ(n) (B) Θ(m)
   (C) Θ(m + n) (D) Θ(mn)
9. Consider the following functions:
   f(n) = 2^n
   g(n) = n!
   h(n) = n^(log n)
   Which of the following statements about the asymptotic behavior of f(n), g(n) and h(n) is true? [2008]
   (A) f(n) = O(g(n)); g(n) = O(h(n))
   (B) f(n) = Ω(g(n)); g(n) = O(h(n))
   (C) g(n) = O(f(n)); h(n) = O(f(n))
   (D) h(n) = O(f(n)); g(n) = Ω(f(n))
10. The minimum number of comparisons required to determine if an integer appears more than n/2 times in a sorted array of n integers is [2008]
   (A) Θ(n) (B) Θ(log n)
   (C) Θ(log* n) (D) Θ(1)
11. We have a binary heap on n elements and wish to insert n more elements (not necessarily one after another) into this heap. The total time required for this is [2008]
   (A) Θ(log n) (B) Θ(n)
   (C) Θ(n log n) (D) Θ(n^2)

12. The running time of an algorithm is represented by the following recurrence relation: [2009]
   T(n) = n, if n ≤ 3
   T(n) = T(⌈n/3⌉) + cn, otherwise
   Which one of the following represents the time complexity of the algorithm?
   (A) θ(n) (B) θ(n log n)
   (C) θ(n^2) (D) θ(n^2 log n)
13. Two alternative packages A and B are available for processing a database having 10^k records. Package A requires 0.0001 n^2 time units and package B requires 10 n log_10 n time units to process n records. What is the smallest value of k for which package B will be preferred over A? [2010]
   (A) 12 (B) 10
   (C) 6 (D) 5
14. An algorithm to find the length of the longest monotonically increasing sequence of numbers in an array A[0 : n – 1] is given below. Let L_i denote the length of the longest monotonically increasing sequence starting at index i in the array.
   Initialize L_(n–1) = 1.
   For all i such that 0 ≤ i ≤ n – 2,
   L_i = 1 + L_(i+1), if A[i] < A[i + 1]; 1, otherwise.
   Finally, the length of the longest monotonically increasing sequence is Max(L_0, L_1, …, L_(n–1)).
   Which of the following statements is TRUE? [2011]
   (A) The algorithm uses dynamic programming paradigm.
   (B) The algorithm has a linear complexity and uses branch and bound paradigm.
   (C) The algorithm has a non-linear polynomial complexity and uses branch and bound paradigm.
   (D) The algorithm uses divide and conquer paradigm.
15. Which of the given options provides the increasing order of asymptotic complexity of functions f1, f2, f3 and f4? [2011]
   f1(n) = 2^n
   f2(n) = n^(3/2)
   f3(n) = n log_2 n
   f4(n) = n^(log_2 n)
   (A) f3, f2, f4, f1 (B) f3, f2, f1, f4
   (C) f2, f3, f1, f4 (D) f2, f3, f4, f1
16. Let W(n) and A(n) denote, respectively, the worst-case and average-case running time of an algorithm executed on input of size n. Which of the following is ALWAYS TRUE? [2012]
   (A) A(n) = Ω(W(n)) (B) A(n) = Θ(W(n))
   (C) A(n) = O(W(n)) (D) A(n) = o(W(n))
17. The recurrence relation capturing the optimal execution time of the Towers of Hanoi problem with n discs is [2012]
   (A) T(n) = 2T(n – 2) + 2
   (B) T(n) = 2T(n – 1) + n
   (C) T(n) = 2T(n/2) + 1
   (D) T(n) = 2T(n – 1) + 1
18. A list of n strings, each of length n, is sorted into lexicographic order using the merge sort algorithm. The worst-case running time of this computation is [2012]
   (A) O(n log n) (B) O(n^2 log n)
   (C) O(n^2 + log n) (D) O(n^2)
19. Consider the following function:
   int unknown(int n) {
       int i, j, k = 0;
       for (i = n/2; i <= n; i++)
           for (j = 2; j <= n; j = j*2)
               k = k + n/2;
       return (k);
   }
   The return value of the function is [2013]
   (A) Θ(n^2) (B) Θ(n^2 log n)
   (C) Θ(n^3) (D) Θ(n^3 log n)
20. The number of elements that can be sorted in Θ(log n) time using heap sort is [2013]
   (A) Θ(1) (B) Θ(√(log n))
   (C) Θ(log n / log log n) (D) Θ(log n)
21. Which one of the following correctly determines the solution of the recurrence relation with T(1) = 1, T(n) = 2T(n/2) + log n? [2014]
   (A) θ(n) (B) θ(n log n)
   (C) θ(n^2) (D) θ(log n)
22. An algorithm performs (log N)^(1/2) find operations, N insert operations, (log N)^(1/2) delete operations, and (log N)^(1/2) decrease-key operations on a set of data items with keys drawn from a linearly ordered set. For a delete operation, a pointer is provided to the record that must be deleted. For the decrease-key operation, a pointer is provided to the record that has its key decreased. Which one of the following data structures is the most suited for the algorithm to use, if the goal is to achieve the best total asymptotic complexity considering all the operations? [2015]
   (A) Unsorted array (B) Min-heap
   (C) Sorted array (D) Sorted doubly linked list
23. Consider the following C function.
   int fun1(int n) {
       int i, j, k, p, q = 0;
       for (i = 1; i < n; ++i) {
           p = 0;
           for (j = n; j > 1; j = j/2)
               ++p;
           for (k = 1; k < p; k = k*2)
               ++q;
       }
       return q;
   }
   Which one of the following most closely approximates the return value of the function fun1? [2015]
   (A) n^3 (B) n(log n)^2
   (C) n log n (D) n log(log n)
24. An unordered list contains n distinct elements. The number of comparisons to find an element in this list that is neither maximum nor minimum is [2015]
   (A) θ(n log n) (B) θ(n)
   (C) θ(log n) (D) θ(1)
25. Consider a complete binary tree where the left and the right subtrees of the root are max-heaps. The lower bound for the number of operations to convert the tree to a heap is [2015]
   (A) Ω(log n) (B) Ω(n)
   (C) Ω(n log n) (D) Ω(n^2)
26. Consider the equality ∑ (i = 0 to n) i^3 = X and the following choices for X: [2015]
   1. θ(n^4)
   2. θ(n^5)
   3. O(n^5)
   4. Ω(n^3)
   The equality above remains correct if X is replaced by
   (A) Only 1
   (B) Only 2
   (C) 1 or 3 or 4 but not 2
   (D) 2 or 3 or 4 but not 1
27. Consider the following array of elements:
   <89, 19, 50, 17, 12, 15, 2, 5, 7, 11, 6, 9, 100>
   The minimum number of interchanges needed to convert it into a max-heap is [2015]
   (A) 4 (B) 5
   (C) 2 (D) 3
28. Let f(n) = n and g(n) = n^(1 + sin n), where n is a positive integer. Which of the following statements is/are correct? [2015]
   I. f(n) = O(g(n))
   II. f(n) = Ω(g(n))
   (A) Only I (B) Only II
   (C) Both I and II (D) Neither I nor II
29. A queue is implemented using an array such that ENQUEUE and DEQUEUE operations are performed efficiently. Which one of the following statements is CORRECT (n refers to the number of items in the queue)? [2016]
   (A) Both operations can be performed in O(1) time.
   (B) At most one operation can be performed in O(1) time, but the worst-case time for the other operation will be Ω(n).
   (C) The worst-case time complexity for both operations will be Ω(n).
   (D) The worst-case time complexity for both operations will be Ω(log n).
30. Consider a carry look-ahead adder for adding two n-bit integers, built using gates of fan-in at most two. The time to perform addition using this adder is [2016]
   (A) Θ(1) (B) Θ(log n)
   (C) Θ(√n) (D) Θ(n)
31. N items are stored in a sorted doubly linked list. For a delete operation, a pointer is provided to the record to be deleted. For a decrease-key operation, a pointer is provided to the record on which the operation is to be performed. An algorithm performs the following operations on the list, in this order: Θ(N) delete, O(log N) insert, O(log N) find, and Θ(N) decrease-key. What is the time complexity of all these operations put together? [2016]
   (A) O(log^2 N) (B) O(N)
   (C) O(N^2) (D) Θ(N^2 log N)
32. In an adjacency list representation of an undirected simple graph G = (V, E), each edge (u, v) has two adjacency list entries: [v] in the adjacency list of u, and [u] in the adjacency list of v. These are called twins of each other. A twin pointer is a pointer from an adjacency list entry to its twin. If |E| = m and |V| = n, and the memory size is not a constraint, what is the time complexity of the most efficient algorithm to set the twin pointer in each entry in each adjacency list? [2016]
   (A) Θ(n^2) (B) Θ(n + m)
   (C) Θ(m^2) (D) Θ(n^4)

33. Consider the following functions from positive integers to real numbers:
   10, √n, n, log_2 n, 100/n.
   The CORRECT arrangement of the above functions in increasing order of asymptotic complexity is: [2017]
   (A) log_2 n, 100/n, 10, √n, n
   (B) 100/n, 10, log_2 n, √n, n
   (C) 10, 100/n, √n, log_2 n, n
   (D) 100/n, log_2 n, 10, √n, n
34. Consider the recurrence function [2017]
   T(n) = 2T(√n) + 1, if n > 2
   T(n) = 2, if 0 < n ≤ 2
   Then T(n) in terms of Θ notation is
   (A) Θ(log log n) (B) Θ(log n)
   (C) Θ(√n) (D) Θ(n)
35. Consider the following C function.
   int fun(int n) {
       int i, j;
       for (i = 1; i <= n; i++) {
           for (j = 1; j < n; j += i) {
               printf("%d %d", i, j);
           }
       }
   }
   Time complexity of fun in terms of Θ notation is [2017]
   (A) Θ(n√n) (B) Θ(n^2)
   (C) Θ(n log n) (D) Θ(n^2 log n)
36. A queue is implemented using a non-circular singly linked list. The queue has a head pointer and a tail pointer, as shown in the figure. Let n denote the number of nodes in the queue. Let enqueue be implemented by inserting a new node at the head, and dequeue be implemented by deletion of a node from the tail.
   (Figure: a singly linked list with the head pointer at the first node and the tail pointer at the last node.)
   Which one of the following is the time complexity of the most time-efficient implementation of enqueue and dequeue, respectively, for this data structure? [2018]
   (A) θ(1), θ(1) (B) θ(1), θ(n)
   (C) θ(n), θ(1) (D) θ(n), θ(n)
37. Consider the following C code. Assume that the unsigned long int type is 64 bits.
   unsigned long int fun(unsigned long int n) {
       unsigned long int i, j = 0, sum = 0;
       for (i = n; i > 1; i = i/2) j++;
       for (; j > 1; j = j/2) sum++;
       return (sum);
   }
   The value returned when we call fun with the input 2^40 is: [2018]
   (A) 4 (B) 5
   (C) 6 (D) 40
Chapter 1 • Asymptotic Analysis | 3.97

Answer Keys
Exercises
Practice Problems 1
1. A 2. D 3. D 4. A 5. B 6. A 7. D 8. A 9. A 10. B
11. A 12. A 13. B 14. A 15. B

Practice Problems 2
1. A 2. A 3. A 4. B 5. A 6. A 7. D 8. B 9. A 10. A
11. B 12. A 13. A 14. B 15. C

Previous Years’ Questions


1. B 2. C 3. A 4. C 5.  6. B 7. B 8. C 9. D 10. A
11. B 12. A 13. C 14. A 15. A 16. C 17. D 18. B 19. B 20. C
21. A 22. A 23. D 24. D 25. A 26. C 27. D 28. D 29. A 30. B
31. C 32. B 33. B 34. B 35. C 36. B 37. B
