
CSCI 303 Homework 2

Problem 1 (4.3-2):
The recurrence T(n) = 7T(n/2) + n² describes the running time of an algorithm A. A competing
algorithm A′ has a running time of T′(n) = aT′(n/4) + n². What is the largest integer value for a
such that A′ is asymptotically faster than A?
Solution 1:
By the master theorem, the worst-case asymptotic complexity of A is Θ(n^{lg 7}) ≈ Θ(n^{2.80735}), and
the worst-case asymptotic complexity of A′ is Θ(n^{log_4 a}) when a > 16. For A′ to be asymptotically
faster than A, we need log_4 a < lg 7 = log_4 49. Therefore, the largest integer value for a such that
A′ is asymptotically faster than A is 48.
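
A quick numeric check of these exponents (a Python sketch, not part of the original solution):

from math import log, log2

lg7 = log2(7)                        # exponent of A, about 2.80735
for a in (47, 48, 49, 50):
    print(a, round(log(a, 4), 5))    # exponent of A' when a > 16; compare to lg 7

Here a = 48 gives exponent ≈ 2.79248, still below lg 7, while a = 49 gives exactly lg 7 (since 49 = 7²),
confirming that 48 is the largest admissible value.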

Problem 2 (Derived from 4-1 and 4-4):


Give asymptotic upper and lower bounds for T (n) in each of the following recurrences. Make your
bounds as tight as possible, and justify your answers.
a. T(n) = 2T(n/2) + n³.
b. T(n) = 16T(n/4) + n².
c. T(n) = 7T(n/3) + n².
d. T(n) = 7T(n/2) + n².
e. T(n) = 2T(n/4) + √n.
f. T(n) = 3T(n/2) + n lg n.
g. T(n) = 4T(n/2) + n²√n.
h. T(n) = T(9n/10) + n.

Solution 2:
Use the master method to solve these recurrences.
a. Case 3 of the master theorem. T(n) = Θ(n³).
b. Case 2 of the master theorem. T(n) = Θ(n² lg n).
c. Case 3 of the master theorem. T(n) = Θ(n²).
d. Case 1 of the master theorem. T(n) = Θ(n^{lg 7}).
e. Case 2 of the master theorem. T(n) = Θ(√n lg n).
f. Case 1 of the master theorem. T(n) = Θ(n^{lg 3}).
g. Case 3 of the master theorem. T(n) = Θ(n²√n).
h. Case 3 of the master theorem. T(n) = Θ(n).
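
As a sanity check on one of these answers, the recurrence from part (e) can be evaluated numerically
and compared with the predicted Θ(√n lg n) growth (a Python sketch assuming T(1) = 1; not part of the
original solution):

from functools import lru_cache
from math import sqrt, log2

@lru_cache(maxsize=None)
def T(n):
    # Part (e): T(n) = 2T(n/4) + sqrt(n), with T(1) = 1.
    if n <= 1:
        return 1.0
    return 2 * T(n // 4) + sqrt(n)

for k in range(6, 16, 2):
    n = 4 ** k                       # powers of 4 keep the subproblem sizes exact
    print(n, T(n) / (sqrt(n) * log2(n)))

The printed ratio settles toward a constant (about 1/2) as n grows, consistent with case 2 of the
master theorem.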

Problem 3 (Derived from 4-1 and 4-4):


Give asymptotic upper and lower bounds for T (n) in each of the following recurrences. Make your
bounds as tight as possible, and justify your answers. You may assume that T (1) is a constant.
a. T(n) = T(n − 1) + n.
b. T(n) = T(n − 1) + 1/n.
c. T(n) = T(n − 1) + lg n.
d. T(n) = 2T(n/2) + n lg n.
e. T(n) = 5T(n/5) + n/lg n.
Solution 3:
For these recurrences, the master theorem does not apply.
a. Assume T(1) = 1, then unroll the recurrence:

T(n) = T(n − 1) + n
     = n + (n − 1) + (n − 2) + · · · + 2 + T(1)
     = Σ_{k=1}^{n} k
     = n(n + 1)/2
     = Θ(n²)

b. Assume T(1) = 1, then unroll the recurrence:

T(n) = T(n − 1) + 1/n
     = 1/n + 1/(n − 1) + 1/(n − 2) + · · · + 1/2 + T(1)
     = Σ_{k=1}^{n} 1/k

We can bound this sum using integrals (see appendix A.2 in the book, equations A.11–A.14). For the
lower bound,

T(n) = Σ_{k=1}^{n} 1/k ≥ ∫_1^{n+1} (1/x) dx = ln(n + 1) ≥ ln n = Ω(ln n),

and for the upper bound,

T(n) = Σ_{k=1}^{n} 1/k = 1 + Σ_{k=2}^{n} 1/k ≤ 1 + ∫_1^{n} (1/x) dx = 1 + ln n = O(ln n).

Therefore T(n) = Θ(ln n) = Θ(lg n).
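
The integral bounds above can be spot-checked numerically (a small Python sketch, not part of the
original solution):

from math import log

n = 10**6
H = sum(1.0 / k for k in range(1, n + 1))   # the harmonic number H_n
print(log(n + 1), H, 1 + log(n))            # ln(n+1) <= H_n <= 1 + ln n

For n = 10^6 this prints roughly 13.82, 14.39, and 14.82, so the harmonic number indeed sits between
the two logarithmic bounds.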
c. Assume T(1) = 1, then unroll the recurrence:

T(n) = T(n − 1) + lg n
     = lg n + lg(n − 1) + lg(n − 2) + · · · + lg 2 + lg 1
     = Σ_{k=1}^{n} lg k
     = lg(Π_{k=1}^{n} k)
     = lg(n!)
     = Θ(n lg n)

by Stirling's approximation (see section 3.2 in the book, equation 3.18).
d. Assume n = 2^m for some m and T(1) = c, then unroll the recurrence:

T(n) = 2T(n/2) + n lg n
     = 2(2T(n/4) + (n/2) lg(n/2)) + n lg n
     = 2(2(2T(n/8) + (n/4) lg(n/4)) + (n/2) lg(n/2)) + n lg n
     = 2(2(· · · 2(2(2T(1) + 2 lg 2) + 4 lg 4) + · · · ) + (n/2) lg(n/2)) + n lg n
     = n(c + lg 2 + lg 4 + · · · + lg(n/4) + lg(n/2) + lg n)
     = n(c + Σ_{k=1}^{lg n} k)
     = n(c + (lg² n + lg n)/2)
     = (1/2) n lg² n + (1/2) n lg n + cn
     = Θ(n lg² n)

e. Assume n = 5^m for some m and T(1) = c, then unroll the recurrence:

T(n) = 5T(n/5) + n/lg n
     = 5(5T(n/25) + (n/5)/lg(n/5)) + n/lg n
     = 5(5(· · · 5(5(5T(1) + 5/lg 5) + 25/lg 25) + · · · ) + (n/5)/lg(n/5)) + n/lg n
     = n(c + 1/lg 5 + 1/lg 25 + · · · + 1/lg(n/25) + 1/lg(n/5) + 1/lg n)
     = n(c + Σ_{k=1}^{log_5 n} 1/lg(5^k))
     = nc + (n/lg 5) Σ_{k=1}^{log_5 n} 1/k

The remaining sum is a harmonic series with log_5 n terms, so it is Θ(ln log_5 n) = Θ(lg lg n), and
therefore T(n) = Θ(n lg lg n).

Problem 4 (Not in book):


The following algorithm uses a divide-and-conquer strategy to search an unsorted list of numbers.
Given a list of numbers A and a target number t, the algorithm returns 1 if t is in the list, and
0 otherwise.

Unsorted-Search(A, t, p, q)
  if q − p < 1
    if A[p] = t return 1 else return 0
  if Unsorted-Search(A, t, p, ⌊(p + q)/2⌋) = 1 return 1
  if Unsorted-Search(A, t, ⌊(p + q)/2⌋ + 1, q) = 1 return 1
  return 0

Analyze this algorithm with respect to worst-case asymptotic complexity, and give the worst-case
running time in terms of Θ notation. How does this algorithm compare to the naive algorithm that
simply iterates through the list to look for the target?

Solution 4:
The algorithm is given an array of n numbers and a target number. The first two lines take
constant time, call it c. The next two lines recursively call Unsorted-Search on inputs of size
n/2. Therefore, the worst-case asymptotic complexity is
T(n) = 2T(n/2) + c.
Using case 1 of the master theorem, we see that T(n) = Θ(n).
This is the same worst-case asymptotic complexity as the naive algorithm, although in practice
the naive algorithm would probably run more quickly.
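
A direct Python rendering of the pseudocode, alongside the naive scan it is compared with (a sketch;
the function names here are our own):

def unsorted_search(A, t, p, q):
    # Divide-and-conquer search of A[p..q] (inclusive); returns 1 if t is present, else 0.
    if q - p < 1:
        return 1 if A[p] == t else 0
    mid = (p + q) // 2
    if unsorted_search(A, t, p, mid) == 1:
        return 1
    if unsorted_search(A, t, mid + 1, q) == 1:
        return 1
    return 0

def naive_search(A, t):
    # The straightforward linear scan; also Theta(n) in the worst case.
    return 1 if any(x == t for x in A) else 0

A = [7, 3, 9, 1, 4]
print(unsorted_search(A, 9, 0, len(A) - 1), naive_search(A, 9))  # 1 1
print(unsorted_search(A, 8, 0, len(A) - 1), naive_search(A, 8))  # 0 0

Both versions inspect every element in the worst case, matching the Θ(n) bound derived above.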

Problem 5 (28.2-4):
V. Pan has discovered a way of multiplying 68 × 68 matrices using 132,464 multiplications, a way
of multiplying 70 × 70 matrices using 143,640 multiplications, and a way of multiplying 72 × 72
matrices using 155,424 multiplications. Which method yields the best asymptotic running time
when used in a divide-and-conquer matrix-multiplication algorithm? How does it compare to
Strassen’s algorithm?

Solution 5:
For each case, we write the recurrence and solve it using the master theorem:
(1) T(n) = 132464 T(n/68) + n² ⇒ T(n) = Θ(n^{log_68 132464}) ≈ Θ(n^{2.795128})
(2) T(n) = 143640 T(n/70) + n² ⇒ T(n) = Θ(n^{log_70 143640}) ≈ Θ(n^{2.795123})
(3) T(n) = 155424 T(n/72) + n² ⇒ T(n) = Θ(n^{log_72 155424}) ≈ Θ(n^{2.795147})
Method (2), multiplying 70 × 70 matrices with 143,640 multiplications, has the smallest exponent and
therefore yields the best asymptotic running time. Strassen's algorithm runs in Θ(n^{lg 7}) ≈ Θ(n^{2.81}),
so all three of these methods asymptotically outperform Strassen's algorithm.
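
The three exponents can be recomputed directly from the recurrences (a short Python check, not part of
the original solution):

from math import log, log2

for b, a in [(68, 132464), (70, 143640), (72, 155424)]:
    print(f"log_{b} {a} = {log(a, b):.6f}")
print(f"lg 7 = {log2(7):.6f} (Strassen)")

This prints exponents of about 2.795128, 2.795123, and 2.795147, and 2.807355 for Strassen, matching
the figures above.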

Problem 6 (28.2-6):
Show how to multiply the complex numbers a + bi and c + di using only three real multiplications.
The algorithm should take a, b, c, and d as input and produce the real component ac − bd and the
imaginary component ad + bc separately.
Solution 6:
Let r = (a + b)(c + d) = ac + ad + bc + bd, let s = ac, and let t = bd. Then the real component
of the product of the two complex numbers is ac − bd = s − t, and the imaginary component is
ad + bc = r − s − t. Only the three real multiplications r, s, and t are required.
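
A small Python check of this scheme against the direct complex product (a sketch; the names are our own):

def mul3(a, b, c, d):
    # Three real multiplications: r, s, and t.
    r = (a + b) * (c + d)    # ac + ad + bc + bd
    s = a * c
    t = b * d
    return s - t, r - s - t  # (real, imaginary) = (ac - bd, ad + bc)

re, im = mul3(2, 3, 4, 5)
assert complex(2, 3) * complex(4, 5) == complex(re, im)
print(re, im)                # -7 22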
