33 - BD - Data Structures and Algorithms - Narasimha Karumanchi

The document discusses various topics related to analyzing algorithms, including: 1. Arithmetic, geometric, and harmonic series formulas. 2. The master theorem for analyzing divide-and-conquer algorithms. It presents the three cases of the theorem and examples of applying it. 3. A method of guessing and confirming to solve recurrences that don't fit the master theorem cases. 4. Amortized analysis for determining the time-averaged running time of a sequence of operations, as opposed to worst-case analysis of individual operations. It then provides examples and solutions for analyzing various recurrence relations and determining their time complexities.

Arithmetic series: 1 + 2 + 3 + ··· + n = n(n + 1)/2

Geometric series: 1 + x + x^2 + ··· + x^n = (x^(n+1) − 1)/(x − 1), for x ≠ 1

Harmonic series: 1 + 1/2 + 1/3 + ··· + 1/n = logn + O(1)

Other important formulae: Σ(i = 1..n) i^k ≈ n^(k+1)/(k + 1); Σ(i = 1..n) logi ≈ nlogn
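These closed forms are easy to sanity-check numerically. The short script below (an illustration, not part of the original text) compares each sum against its formula:

```python
import math

# Numeric sanity check of the series formulas above.
n = 1000
arithmetic = sum(range(1, n + 1))
assert arithmetic == n * (n + 1) // 2

x, m = 2.0, 50
geometric = sum(x**i for i in range(m + 1))
assert math.isclose(geometric, (x**(m + 1) - 1) / (x - 1))

# The harmonic sum equals ln(n) + gamma + o(1), gamma ~ 0.5772,
# which is logn + O(1) as stated above.
harmonic = sum(1.0 / i for i in range(1, n + 1))
assert 0.5 < harmonic - math.log(n) < 0.7
```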

1.22 Master Theorem for Divide and Conquer Recurrences

All divide and conquer algorithms (also discussed in detail in the Divide and Conquer chapter)
divide the problem into sub-problems, each of which is part of the original problem, and then
perform some additional work to compute the final answer. As an example, merge sort [for
details, refer to the Sorting chapter] operates on two sub-problems, each of which is half the
size of the original, and then performs O(n) additional work for merging. This gives the
running time equation:

T(n) = 2T(n/2) + O(n)

The following theorem can be used to determine the running time of divide and conquer
algorithms. For a given program (algorithm), first we try to find the recurrence relation for the
problem. If the recurrence is of the below form then we can directly give the answer without fully
solving it. If the recurrence is of the form T(n) = aT(n/b) + Θ(n^k log^p n), where a ≥ 1, b >
1, k ≥ 0 and p is a real number, then:
1) If a > b^k, then T(n) = Θ(n^(log_b a))
2) If a = b^k
a. If p > −1, then T(n) = Θ(n^(log_b a) log^(p+1) n)
b. If p = −1, then T(n) = Θ(n^(log_b a) loglogn)
c. If p < −1, then T(n) = Θ(n^(log_b a))
3) If a < b^k
a. If p ≥ 0, then T(n) = Θ(n^k log^p n)
b. If p < 0, then T(n) = O(n^k)
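The case analysis above is mechanical, so it can be encoded directly. The helper below (a sketch, not from the original text) takes the parameters a, b, k, p and returns the asymptotic class as a string:

```python
import math

def master_theorem(a, b, k, p):
    """Return the asymptotic class of T(n) = a*T(n/b) + Theta(n^k log^p n),
    per the three cases of the divide-and-conquer Master Theorem above."""
    assert a >= 1 and b > 1 and k >= 0
    e = math.log(a, b)                    # the exponent log_b(a)
    if a > b**k:                          # Case 1
        return f"Theta(n^{e:g})"
    if math.isclose(a, b**k):             # Case 2
        if p > -1:
            return f"Theta(n^{e:g} log^{p + 1:g} n)"
        if p == -1:
            return f"Theta(n^{e:g} loglog n)"
        return f"Theta(n^{e:g})"
    if p > 0:                             # Case 3: a < b^k
        return f"Theta(n^{k:g} log^{p:g} n)"
    if p == 0:
        return f"Theta(n^{k:g})"
    return f"O(n^{k:g})"
```

For instance, Problem-5 below (T(n) = 16T(n/4) + n) gives `master_theorem(16, 4, 1, 0)` = `"Theta(n^2)"`.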

1.23 Divide and Conquer Master Theorem: Problems & Solutions

For each of the following recurrences, give an expression for the runtime T(n) if the recurrence
can be solved with the Master Theorem. Otherwise, indicate that the Master Theorem does not
apply.

Problem-1  T(n) = 3T(n/2) + n^2
Solution: T(n) = 3T(n/2) + n^2 => T(n) = Θ(n^2) (Master Theorem Case 3.a)

Problem-2  T(n) = 4T(n/2) + n^2
Solution: T(n) = 4T(n/2) + n^2 => T(n) = Θ(n^2 logn) (Master Theorem Case 2.a)

Problem-3  T(n) = T(n/2) + n^2
Solution: T(n) = T(n/2) + n^2 => T(n) = Θ(n^2) (Master Theorem Case 3.a)

Problem-4  T(n) = 2^n T(n/2) + n^n
Solution: T(n) = 2^n T(n/2) + n^n => Does not apply (a is not constant)

Problem-5  T(n) = 16T(n/4) + n
Solution: T(n) = 16T(n/4) + n => T(n) = Θ(n^2) (Master Theorem Case 1)

Problem-6  T(n) = 2T(n/2) + nlogn
Solution: T(n) = 2T(n/2) + nlogn => T(n) = Θ(nlog^2 n) (Master Theorem Case 2.a)

Problem-7  T(n) = 2T(n/2) + n/logn
Solution: T(n) = 2T(n/2) + n/logn => T(n) = Θ(nloglogn) (Master Theorem Case 2.b)

Problem-8  T(n) = 2T(n/4) + n^0.51
Solution: T(n) = 2T(n/4) + n^0.51 => T(n) = Θ(n^0.51) (Master Theorem Case 3.a, since a = 2 < 4^0.51 and p = 0)

Problem-9  T(n) = 0.5T(n/2) + 1/n
Solution: T(n) = 0.5T(n/2) + 1/n => Does not apply (a < 1)

Problem-10  T(n) = 6T(n/3) + n^2 logn
Solution: T(n) = 6T(n/3) + n^2 logn => T(n) = Θ(n^2 logn) (Master Theorem Case 3.a)

Problem-11  T(n) = 64T(n/8) − n^2 logn
Solution: T(n) = 64T(n/8) − n^2 logn => Does not apply (function is not positive)

Problem-12  T(n) = 7T(n/3) + n^2
Solution: T(n) = 7T(n/3) + n^2 => T(n) = Θ(n^2) (Master Theorem Case 3.a)

Problem-13  T(n) = 4T(n/2) + logn
Solution: T(n) = 4T(n/2) + logn => T(n) = Θ(n^2) (Master Theorem Case 1)

Problem-14  T(n) = 16T(n/4) + n!
Solution: T(n) = 16T(n/4) + n! => T(n) = Θ(n!) (Master Theorem Case 3.a)

Problem-15  T(n) = √2 T(n/2) + logn
Solution: T(n) = √2 T(n/2) + logn => T(n) = Θ(√n) (Master Theorem Case 1)

Problem-16  T(n) = 3T(n/2) + n
Solution: T(n) = 3T(n/2) + n => T(n) = Θ(n^(log_2 3)) (Master Theorem Case 1)

Problem-17  T(n) = 3T(n/3) + √n
Solution: T(n) = 3T(n/3) + √n => T(n) = Θ(n) (Master Theorem Case 1)

Problem-18  T(n) = 4T(n/2) + cn
Solution: T(n) = 4T(n/2) + cn => T(n) = Θ(n^2) (Master Theorem Case 1)

Problem-19  T(n) = 3T(n/4) + nlogn
Solution: T(n) = 3T(n/4) + nlogn => T(n) = Θ(nlogn) (Master Theorem Case 3.a)

Problem-20  T(n) = 3T(n/3) + n/2
Solution: T(n) = 3T(n/3) + n/2 => T(n) = Θ(nlogn) (Master Theorem Case 2.a)

1.24 Master Theorem for Subtract and Conquer Recurrences


Let T(n) be a function defined on positive n, and having the property

T(n) = c, if n ≤ 1
T(n) = aT(n − b) + f(n), if n > 1

for some constants c, a > 0, b > 0, k ≥ 0, and function f(n). If f(n) is in O(n^k), then

T(n) = O(n^k), if a < 1
T(n) = O(n^(k+1)), if a = 1
T(n) = O(n^k a^(n/b)), if a > 1

1.25 Variant of Subtraction and Conquer Master Theorem

The solution to the equation T(n) = T(α n) + T((1 – α)n) + βn, where 0 < α < 1 and β > 0 are
constants, is O(nlogn).

1.26 Method of Guessing and Confirming

Now, let us discuss a method which can be used to solve any recurrence. The basic idea behind
this method is:

guess the answer; and then prove it correct by induction.

In other words, it addresses the question: What if the given recurrence doesn’t seem to match with
any of these (master theorem) methods? If we guess a solution and then try to verify our guess
inductively, usually either the proof will succeed (in which case we are done), or the proof will
fail (in which case the failure will help us refine our guess).

As an example, consider the recurrence T(n) = √n T(√n) + n. This doesn’t fit into the form
required by the Master Theorems. Carefully observing the recurrence gives us the impression that
it is similar to the divide and conquer method (dividing the problem into √n subproblems each
with size √n). As we can see, the total size of the subproblems at the first level of recursion is n.
So, let us guess that T(n) = O(nlogn), and then try to prove that our guess is correct.

Let’s start by trying to prove an upper bound T(n) ≤ cnlogn:

T(n) = √n T(√n) + n
     ≤ √n · c√n log√n + n
     = (c/2)nlogn + n
     ≤ cnlogn

The last inequality assumes only that 1 ≤ (c/2)logn. This is correct if n is sufficiently large and for
any constant c, no matter how small. From the above proof, we can see that our guess is correct
for the upper bound. Now, let us prove the lower bound for this recurrence.

Assume T(n) ≥ knlogn for some constant k:

T(n) = √n T(√n) + n
     ≥ √n · k√n log√n + n
     = (k/2)nlogn + n

For the induction to give T(n) ≥ knlogn we would need n ≥ (k/2)nlogn, i.e., 1 ≥ (k/2)logn.
This is incorrect if n is sufficiently large, for any constant k. From the above proof, we can see
that our guess is incorrect for the lower bound.

From the above discussion, we understood that Θ(nlogn) is too big. How about Θ(n)? The lower
bound is easy to prove directly:

T(n) = √n T(√n) + n ≥ n

Now, let us prove the upper bound for this Θ(n). Assume T(n) ≤ cn:

T(n) = √n T(√n) + n
     ≤ √n · c√n + n
     = cn + n
     = (c + 1)n

No matter what constant c we pick, (c + 1)n ≤ cn is false, so the induction does not go through.

From the above induction, we understood that Θ(n) is too small and Θ(nlogn) is too big. So, we
need something bigger than n and smaller than nlogn. How about n√(logn)?

Proving the upper bound for n√(logn), assume T(n) ≤ cn√(logn):

T(n) = √n T(√n) + n
     ≤ √n · c√n √(log√n) + n
     = (c/√2)n√(logn) + n
     ≤ cn√(logn)

The last step holds whenever 1 ≤ c(1 − 1/√2)√(logn), so the upper bound works for sufficiently
large n.

Proving the lower bound for n√(logn), assume T(n) ≥ kn√(logn):

T(n) = √n T(√n) + n
     ≥ (k/√2)n√(logn) + n

To conclude T(n) ≥ kn√(logn) we would need 1 ≥ k(1 − 1/√2)√(logn), which fails for
sufficiently large n. The last step doesn’t work. So, Θ(n√(logn)) doesn’t work. What else is
between n and nlogn? How about nloglogn?

Proving the upper bound for nloglogn, assume T(n) ≤ cnloglogn (with logs base 2, log√n =
(logn)/2, so loglog√n = loglogn − 1):

T(n) = √n T(√n) + n
     ≤ √n · c√n loglog√n + n
     = cn(loglogn − 1) + n
     = cnloglogn − (c − 1)n
     ≤ cnloglogn, if c ≥ 1

Proving the lower bound for nloglogn, assume T(n) ≥ knloglogn:

T(n) = √n T(√n) + n
     ≥ kn(loglogn − 1) + n
     = knloglogn + (1 − k)n
     ≥ knloglogn, if k ≤ 1
From the above proofs, we can see that T(n) ≤ cnloglogn, if c ≥ 1 and T(n) ≥ knloglogn, if k ≤ 1.
Technically, we’re still missing the base cases in both proofs, but we can be fairly confident at
this point that T(n) = Θ(nloglogn).
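The guess can also be checked numerically. The script below (an illustration, not from the original text) evaluates the recurrence with the base case T(n) = n for n ≤ 2; dividing both sides by n gives T(n)/n = T(√n)/√n + 1, so T(n)/n grows by exactly 1 per square-root step, i.e., like loglogn:

```python
import math

def T(n):
    """T(n) = sqrt(n)*T(sqrt(n)) + n, with base case T(n) = n for n <= 2."""
    if n <= 2:
        return n
    r = math.sqrt(n)
    return r * T(r) + n

# For n = 2^(2^k), there are exactly k square-root steps down to 2,
# so T(n)/n = k + 1 = loglog(n) + 1.
for k in range(1, 6):
    n = 2.0 ** (2 ** k)
    assert math.isclose(T(n) / n, k + 1)
```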

1.27 Amortized Analysis


Amortized analysis refers to determining the time-averaged running time for a sequence of
operations. It is different from average case analysis, because amortized analysis does not make
any assumption about the distribution of the data values, whereas average case analysis assumes
the data are not “bad” (e.g., some sorting algorithms do well on average over all input orderings
but very badly on certain input orderings). That is, amortized analysis is a worst-case analysis,
but for a sequence of operations rather than for individual operations.

The motivation for amortized analysis is to better understand the running time of certain
techniques, where standard worst-case analysis provides an overly pessimistic bound. Amortized
analysis generally applies to a method that consists of a sequence of operations, where the vast
majority of the operations are cheap but some are expensive. If we can show that the expensive
operations are particularly rare, we can charge their cost to the cheap operations, and only
bound the cheap operations.

The general approach is to assign an artificial cost to each operation in the sequence, such that the
total of the artificial costs for the sequence of operations bounds the total of the real costs for the
sequence. This artificial cost is called the amortized cost of an operation. To analyze the running
time, the amortized cost thus is a correct way of understanding the overall running time – but note
that particular operations can still take longer so it is not a way of bounding the running time of
any individual operation in the sequence.

When one event in a sequence affects the cost of later events:


• One particular task may be expensive.
• But it may leave the data structure in a state in which the next few operations become easier.

Example: Let us consider an array of elements from which we want to find the kth smallest
element. We can solve this problem using sorting. After sorting the given array, we just need to
return the kth element from it. The cost of performing the sort (assuming a comparison-based
sorting algorithm) is O(nlogn). If we perform n such selections then the average cost of each
selection is O(nlogn/n) = O(logn). This clearly indicates that sorting once reduces the cost of
subsequent operations.
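Another standard illustration of amortized cost (not from this section's text) is appending to a dynamically resized array: an append that triggers a resize costs O(n) copies, but doubling the capacity makes resizes rare enough that n appends cost O(n) in total, i.e., O(1) amortized per append. The simulation below counts the real cost:

```python
def append_cost_total(n):
    """Simulate n appends to a doubling array; return the total real cost
    (element writes plus copies made during resizes)."""
    capacity, size, cost = 1, 0, 0
    for _ in range(n):
        if size == capacity:      # expensive operation: copy everything over
            cost += size
            capacity *= 2
        size += 1                 # cheap operation: write the new element
        cost += 1
    return cost

# Copies form the series 1 + 2 + 4 + ... < 2n, so the total stays below 3n.
```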

1.28 Algorithms Analysis: Problems & Solutions

Note: From the following problems, try to understand the cases which have different
complexities (O(n), O(logn), O(loglogn) etc.).
Problem-21  Find the complexity of the below recurrence: T(n) = 3T(n − 1) for n > 0, with T(0) = 1.
Solution: Let us try solving this function with substitution.
T(n) = 3T(n − 1)

T(n) = 3(3T(n − 2)) = 3^2 T(n − 2)

T(n) = 3^2(3T(n − 3)) = 3^3 T(n − 3)

.
.

T(n) = 3^n T(n − n) = 3^n T(0) = 3^n

This clearly shows that the complexity of this function is O(3^n).

Note: We can use the Subtraction and Conquer master theorem for this problem.
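A two-line check (illustrative, not from the text) confirms the closed form:

```python
def T(n):
    # T(n) = 3*T(n-1) with T(0) = 1  ->  closed form 3^n
    return 1 if n == 0 else 3 * T(n - 1)

assert all(T(n) == 3**n for n in range(10))
```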
Problem-22  Find the complexity of the below recurrence: T(n) = 2T(n − 1) − 1 for n > 0, with T(0) = 1.

Solution: Let us try solving this function with substitution.


T(n) = 2T(n − 1) − 1

T(n) = 2(2T(n − 2) − 1) − 1 = 2^2 T(n − 2) − 2 − 1

T(n) = 2^2(2T(n − 3) − 1) − 2 − 1 = 2^3 T(n − 3) − 2^2 − 2^1 − 2^0

T(n) = 2^n T(n − n) − 2^(n−1) − 2^(n−2) − ··· − 2^1 − 2^0

T(n) = 2^n − 2^(n−1) − 2^(n−2) − ··· − 2^1 − 2^0

T(n) = 2^n − (2^n − 1) [note: 2^(n−1) + 2^(n−2) + ··· + 2^0 = 2^n − 1]


T(n) = 1

∴ Time Complexity is O(1). Note that while the recurrence relation looks exponential, the
solution to the recurrence relation here gives a different result.
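The surprising constant answer is easy to verify directly (illustrative check, not from the text):

```python
def T(n):
    # T(n) = 2*T(n-1) - 1 with T(0) = 1: looks exponential, but T(n) = 1 always.
    return 1 if n == 0 else 2 * T(n - 1) - 1

assert all(T(n) == 1 for n in range(20))
```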
Problem-23  What is the running time of the following function?
Solution: Consider the comments in the below function:

We can define the ‘s’ terms according to the relation s_i = s_(i−1) + i. The value of ‘i’ increases
by 1 for each iteration. The value contained in ‘s’ at the ith iteration is the sum of the first ‘i’
positive integers. If k is the total number of iterations taken by the program, then the while loop
terminates when 1 + 2 + ··· + k = k(k + 1)/2 > n, i.e., k = O(√n). The running time is
therefore O(√n).
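The original listing did not survive extraction; a loop with the behavior the solution describes (s accumulating 1 + 2 + ··· + i, variable names hypothetical) would look like:

```python
import math

def function(n):
    """Count iterations of the loop described above: s accumulates
    1 + 2 + ... + i until it exceeds n."""
    i, s, iterations = 1, 1, 0
    while s <= n:
        i += 1
        s += i            # s is now the sum of the first i positive integers
        iterations += 1
    return iterations

# The loop stops once i*(i+1)/2 > n, i.e., after about sqrt(2n) iterations.
```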

Problem-24  Find the complexity of the function given below.


Solution:

In the above-mentioned function the loop will end if i^2 > n ⇒ T(n) = O(√n). This is similar to
Problem-23.
Problem-25  What is the complexity of the program given below:

Solution: Consider the comments in the following function.

The complexity of the above function is O(n^2 logn).
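The function itself is not reproduced here; a hypothetical loop nest with exactly this cost (two linear loops around a doubling inner loop) is:

```python
def function(n):
    count = 0
    for i in range(n):           # outer loop: n iterations
        for j in range(n):       # middle loop: n iterations
            k = 1
            while k < n:         # inner loop: ~log2(n) iterations
                k *= 2
                count += 1       # count the total inner-loop work
    return count

# Total work is n * n * ceil(log2 n), i.e., O(n^2 logn).
```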


Problem-26  What is the complexity of the program given below:
Solution: Consider the comments in the following function.

The complexity of the above function is O(nlog^2 n).
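Again the listing is missing; a hypothetical nest giving O(nlog^2 n) (two doubling loops around O(n) inner work) is:

```python
def function(n):
    count = 0
    i = 1
    while i < n:                 # ~log2(n) iterations
        j = 1
        while j < n:             # ~log2(n) iterations
            count += n           # O(n) work at the innermost step
            j *= 2
        i *= 2
    return count

# Total work is about n * log2(n)^2, i.e., O(nlog^2 n).
```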


Problem-27  Find the complexity of the program below.

Solution: Consider the comments in the function below.


The complexity of the above function is O(n). Even though the inner loop is bounded by n, the
break statement makes it execute only once per iteration of the outer loop.
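A minimal sketch of such a function (the original listing is missing; names are hypothetical):

```python
def function(n):
    count = 0
    for i in range(n):
        for j in range(n):
            count += 1
            break                # inner loop body runs only once
    return count

# Despite the nested loops, the total work is n, i.e., O(n).
```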
Problem-28  Write a recursive function for the running time T(n) of the function given below.
Prove using the iterative method that T(n) = Θ(n^3).

Solution: Consider the comments in the function below:
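The listing is missing here as well; a hypothetical recursive function matching the claim (Θ(n^2) work per call, recursing on an input smaller by a constant, so T(n) = T(n − 3) + Θ(n^2), which unrolls to Θ(n^3)) is:

```python
def work(n):
    """Count basic operations: n^2 work at this level, then recurse on n - 3.
    Recurrence: T(n) = T(n - 3) + n^2."""
    if n <= 0:
        return 0
    count = 0
    for i in range(n):
        for j in range(n):
            count += 1           # Theta(n^2) work per recursion level
    return count + work(n - 3)

# Unrolling: T(n) = n^2 + (n-3)^2 + (n-6)^2 + ... ~ n^3/9 = Theta(n^3).
```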
