
INF212 ALGORITHMS

Divide and Conquer

Dr Kofi Sarpong Adu-Manu
DIVIDE-AND-CONQUER

The best-known algorithm design strategy:

1. Divide an instance of the problem into two or more smaller instances
2. Solve the smaller instances recursively
3. Obtain the solution to the original (larger) instance by combining these solutions
DIVIDE-AND-CONQUER TECHNIQUE (CONT.)

Diagram: a problem of size n is divided into subproblem 1 and subproblem 2, each of size n/2; the solutions to the two subproblems are combined into a solution to the original problem.
DIVIDE-AND-CONQUER EXAMPLES

• Sorting: mergesort and quicksort

• Binary tree traversals

• Multiplication of large integers

• Matrix multiplication: Strassen’s algorithm

• Closest-pair and convex-hull algorithms

• Binary search: decrease-by-half (or degenerate divide-and-conquer)


GENERAL DIVIDE-AND-CONQUER RECURRENCE

T(n) = aT(n/b) + f(n), where f(n) ∈ Θ(n^d), d ≥ 0

Master Theorem:
  If a < b^d,  T(n) ∈ Θ(n^d)
  If a = b^d,  T(n) ∈ Θ(n^d log n)
  If a > b^d,  T(n) ∈ Θ(n^(log_b a))

Note: The same results hold with O instead of Θ.
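As a quick illustration of the three cases, here is a minimal Python sketch (the helper name master_theorem and its string output are illustrative choices, not from the slides) that classifies T(n) = aT(n/b) + Θ(n^d):

import math

def master_theorem(a, b, d):
    """Asymptotic class of T(n) = a*T(n/b) + Theta(n^d), per the three cases."""
    if a < b ** d:
        return f"Theta(n^{d})"
    if a == b ** d:
        return f"Theta(n^{d} log n)"
    return f"Theta(n^{math.log(a, b):.3f})"     # the exponent is log_b(a)

print(master_theorem(2, 2, 1))   # mergesort: Theta(n^1 log n), i.e. Theta(n log n)
print(master_theorem(7, 2, 2))   # Strassen:  Theta(n^2.807)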


MERGESORT

• Split array A[0..n-1] into two roughly equal halves and make copies of each half in arrays B and C
• Sort arrays B and C recursively
• Merge the sorted arrays B and C into array A as follows:
  • Repeat the following until no elements remain in one of the arrays:
    • compare the first elements in the remaining unprocessed portions of the arrays
    • copy the smaller of the two into A, while incrementing the index indicating the unprocessed portion of that array
  • Once all elements in one of the arrays are processed, copy the remaining unprocessed elements from the other array into A.
PSEUDOCODE OF MERGESORT
PSEUDOCODE OF MERGE
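A minimal Python sketch of mergesort and merge along the lines of the description above (an illustration only, not the textbook's pseudocode):

def mergesort(A):
    """Sort list A by splitting it in half, sorting each half, and merging."""
    if len(A) <= 1:
        return A
    mid = len(A) // 2
    B = mergesort(A[:mid])        # sort a copy of the left half
    C = mergesort(A[mid:])        # sort a copy of the right half
    return merge(B, C)

def merge(B, C):
    """Merge two sorted lists into one sorted list."""
    result, i, j = [], 0, 0
    while i < len(B) and j < len(C):
        if B[i] <= C[j]:          # copy the smaller front element
            result.append(B[i]); i += 1
        else:
            result.append(C[j]); j += 1
    result.extend(B[i:])          # copy whatever remains in either list
    result.extend(C[j:])
    return result

print(mergesort([8, 3, 2, 9, 7, 1, 5, 4]))   # [1, 2, 3, 4, 5, 7, 8, 9]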
MERGESORT EXAMPLE

Split:  8 3 2 9 7 1 5 4
        8 3 2 9  |  7 1 5 4
        8 3  |  2 9  |  7 1  |  5 4
        8 | 3 | 2 | 9 | 7 | 1 | 5 | 4
Merge:  3 8  |  2 9  |  1 7  |  4 5
        2 3 8 9  |  1 4 5 7
        1 2 3 4 5 7 8 9
ANALYSIS OF MERGESORT

• All cases have the same efficiency: Θ(n log n)

• The number of comparisons in the worst case is close to the theoretical minimum for comparison-based sorting: log₂ n! ≈ n log₂ n - 1.44n

• Space requirement: Θ(n) (not in-place)

• Can be implemented without recursion (bottom-up)


QUICKSORT

• Select a pivot (partitioning element) – here, the first element

• Rearrange the list so that all the elements in the first s positions are smaller than or equal to the pivot and all the elements in the remaining n-s positions are larger than or equal to the pivot (see next slide for an algorithm):

      A[i] ≤ p  |  p  |  A[i] ≥ p

• Exchange the pivot with the last element in the first (i.e., ≤) subarray – the pivot is now in its final position
• Sort the two subarrays recursively
HOARE’S PARTITIONING ALGORITHM
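A minimal in-place Python sketch of quicksort with a Hoare-style partition that uses the first element as the pivot, in the spirit of the steps above; the explicit bound check in the right-scanning loop is an implementation detail added here, not taken from the slides:

def hoare_partition(A, l, r):
    """Partition A[l..r] around the pivot p = A[l]; return the pivot's final index."""
    p = A[l]
    i, j = l, r + 1
    while True:
        i += 1
        while i < r and A[i] < p:    # scan right for an element >= pivot
            i += 1
        j -= 1
        while A[j] > p:              # scan left for an element <= pivot
            j -= 1
        if i >= j:                   # the scans have crossed: partitioning is done
            break
        A[i], A[j] = A[j], A[i]
    A[l], A[j] = A[j], A[l]          # place the pivot between the two subarrays
    return j

def quicksort(A, l=0, r=None):
    """Sort A[l..r] in place by partitioning and sorting the two subarrays."""
    if r is None:
        r = len(A) - 1
    if l < r:
        s = hoare_partition(A, l, r)
        quicksort(A, l, s - 1)       # elements <= pivot
        quicksort(A, s + 1, r)       # elements >= pivot

nums = [5, 3, 1, 9, 8, 2, 4, 7]
quicksort(nums)
print(nums)                          # [1, 2, 3, 4, 5, 7, 8, 9]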
ANALYSIS OF QUICKSORT

• Best case: split in the middle – Θ(n log n)
• Worst case: sorted array! – Θ(n^2)
• Average case: random arrays – Θ(n log n)

• Improvements:
  • better pivot selection: median-of-three partitioning
  • switch to insertion sort on small subfiles
  • elimination of recursion
  Together these give a 20–25% improvement

• Considered the method of choice for internal sorting of large files (n ≥ 10,000)
BINARY TREE ALGORITHMS

A binary tree is a divide-and-conquer-ready structure!

Ex. 1: Classic traversals (preorder, inorder, postorder)

Algorithm Inorder(T)
    if T ≠ ∅
        Inorder(T_left)
        print(root of T)
        Inorder(T_right)

Efficiency: Θ(n)
BINARY TREE ALGORITHMS (CONT.)

Ex. 2: Computing the height of a binary tree with subtrees T_L and T_R:

h(T) = max{h(T_L), h(T_R)} + 1 if T ≠ ∅, and h(∅) = -1

Efficiency: Θ(n)
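Minimal Python sketches of both examples, assuming a simple hypothetical Node class with value, left, and right fields:

class Node:
    """A hypothetical binary-tree node for the examples below."""
    def __init__(self, value, left=None, right=None):
        self.value, self.left, self.right = value, left, right

def inorder(T):
    """Ex. 1: inorder traversal - left subtree, then the root, then the right subtree."""
    if T is not None:                          # T is not the empty tree
        inorder(T.left)
        print(T.value)
        inorder(T.right)

def height(T):
    """Ex. 2: h(T) = max{h(T_left), h(T_right)} + 1, with h(empty) = -1."""
    if T is None:
        return -1
    return max(height(T.left), height(T.right)) + 1

T = Node(2, Node(1), Node(3))                  # a small tree: root 2 with leaves 1 and 3
inorder(T)                                     # prints 1, 2, 3
print(height(T))                               # 1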
MULTIPLICATION OF LARGE INTEGERS

Consider the problem of multiplying two (large) n-digit integers represented by arrays of their digits, such as:

A = 12345678901357986429    B = 87654321284820912836

The grade-school algorithm:

          a_1  a_2  …  a_n
          b_1  b_2  …  b_n
  (d_10)  d_11 d_12 … d_1n
  (d_20)  d_21 d_22 … d_2n
          …
  (d_n0)  d_n1 d_n2 … d_nn

Efficiency: n^2 one-digit multiplications


FIRST DIVIDE-AND-CONQUER ALGORITHM

A small example: A × B where A = 2135 and B = 4014

A = (21·10^2 + 35), B = (40·10^2 + 14)
So, A × B = (21·10^2 + 35) × (40·10^2 + 14)
          = 21×40·10^4 + (21×14 + 35×40)·10^2 + 35×14

In general, if A = A1A2 and B = B1B2 (where A and B are n-digit numbers and A1, A2, B1, B2 are n/2-digit numbers),

A × B = A1×B1·10^n + (A1×B2 + A2×B1)·10^(n/2) + A2×B2

Recurrence for the number of one-digit multiplications M(n):

M(n) = 4M(n/2), M(1) = 1
Solution: M(n) = n^2
SECOND DIVIDE-AND-CONQUER ALGORITHM

A × B = A1×B1·10^n + (A1×B2 + A2×B1)·10^(n/2) + A2×B2

The idea is to decrease the number of multiplications from 4 to 3:

(A1 + A2) × (B1 + B2) = A1×B1 + (A1×B2 + A2×B1) + A2×B2,

i.e., (A1×B2 + A2×B1) = (A1 + A2) × (B1 + B2) - A1×B1 - A2×B2,

which requires only 3 multiplications at the expense of (4 - 1) extra additions/subtractions.

Recurrence for the number of multiplications M(n):

M(n) = 3M(n/2), M(1) = 1
Solution: M(n) = 3^(log₂ n) = n^(log₂ 3) ≈ n^1.585
EXAMPLE OF LARGE-INTEGER MULTIPLICATION

2135 × 4014
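A minimal Python sketch of the three-multiplication scheme above for non-negative integers in base 10; the slide's example 2135 × 4014 is used as the test case (the function name multiply and the use of built-in Python integers are illustrative choices):

def multiply(A, B):
    """Multiply non-negative integers with 3 recursive multiplications per level."""
    if A < 10 or B < 10:                              # one-digit base case
        return A * B
    half = max(len(str(A)), len(str(B))) // 2         # split roughly in half
    A1, A2 = divmod(A, 10 ** half)                    # A = A1*10^half + A2
    B1, B2 = divmod(B, 10 ** half)                    # B = B1*10^half + B2
    high = multiply(A1, B1)                           # A1 * B1
    low = multiply(A2, B2)                            # A2 * B2
    mid = multiply(A1 + A2, B1 + B2) - high - low     # A1*B2 + A2*B1, one multiplication
    return high * 10 ** (2 * half) + mid * 10 ** half + low

print(multiply(2135, 4014))                           # 8569890, equal to 2135 * 4014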
STRASSEN’S MATRIX
MULTIPLICATION
Strassen observed [1969] that the product of two matrices can be
computed as follows:
FORMULAS FOR STRASSEN’S ALGORITHM

M1 = (A00 + A11) × (B00 + B11)
M2 = (A10 + A11) × B00
M3 = A00 × (B01 - B11)
M4 = A11 × (B10 - B00)
M5 = (A00 + A01) × B11
M6 = (A10 - A00) × (B00 + B01)
M7 = (A01 - A11) × (B10 + B11)
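The slides list M1 through M7 but not how the blocks of C are assembled from them; the sketch below assumes the standard combining formulas (C00 = M1 + M4 - M5 + M7, C01 = M3 + M5, C10 = M2 + M4, C11 = M1 - M2 + M3 + M6) and uses NumPy arrays, for n a power of 2:

import numpy as np

def strassen(A, B):
    """Strassen multiplication of two n x n matrices, n a power of 2."""
    n = A.shape[0]
    if n == 1:
        return A * B
    k = n // 2                                  # split each matrix into four k x k blocks
    A00, A01, A10, A11 = A[:k, :k], A[:k, k:], A[k:, :k], A[k:, k:]
    B00, B01, B10, B11 = B[:k, :k], B[:k, k:], B[k:, :k], B[k:, k:]

    M1 = strassen(A00 + A11, B00 + B11)
    M2 = strassen(A10 + A11, B00)
    M3 = strassen(A00, B01 - B11)
    M4 = strassen(A11, B10 - B00)
    M5 = strassen(A00 + A01, B11)
    M6 = strassen(A10 - A00, B00 + B01)
    M7 = strassen(A01 - A11, B10 + B11)

    C00 = M1 + M4 - M5 + M7                     # standard combining formulas (assumed here)
    C01 = M3 + M5
    C10 = M2 + M4
    C11 = M1 - M2 + M3 + M6
    return np.block([[C00, C01], [C10, C11]])

A = np.random.randint(0, 10, (4, 4))
B = np.random.randint(0, 10, (4, 4))
print(np.array_equal(strassen(A, B), A @ B))    # True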


ANALYSIS OF STRASSEN’S
ALGORITHM

If n is not a power of 2, matrices can be padded with zeros.

Number of multiplications:
M(n) = 7M(n/2), M(1) = 1
Solution: M(n) = 7^(log₂ n) = n^(log₂ 7) ≈ n^2.807 vs. n^3 for the brute-force algorithm.

Algorithms with better asymptotic efficiency are known, but they are even more complex.
CLOSEST-PAIR PROBLEM BY DIVIDE-AND-CONQUER

Step 1  Divide the points given into two subsets P_l and P_r by a vertical line x = m so that half the points lie to the left of or on the line and half the points lie to the right of or on the line.

(Diagram: the vertical line x = m splits the point set; d_l and d_r are the closest-pair distances found on each side, and d = min{d_l, d_r}.)
CLOSEST PAIR BY DIVIDE-AND-CONQUER (CONT.)

Step 2  Find recursively the closest pairs for the left and right subsets.

Step 3  Set d = min{d_l, d_r}.
        We can limit our attention to the points in the symmetric vertical strip S of width 2d as candidates for the closest pair. (The points are stored and processed in increasing order of their y coordinates.)

Step 4  Scan the points in the vertical strip S from the lowest up. For every point p(x, y) in the strip, inspect the points in the strip that may be closer to p than d. There can be no more than 5 such points following p on the strip list!
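A minimal Python sketch of Steps 1–4; it assumes distinct points given as (x, y) tuples, pre-sorts them by x, keeps a y-sorted list for the strip scan, and falls back to brute force on subsets of at most three points (helper names are illustrative):

from math import dist, inf

def closest_pair(points):
    """Return the smallest distance between any two points in the list."""
    P = sorted(points)                         # points sorted by x (then y)
    Q = sorted(points, key=lambda p: p[1])     # the same points sorted by y
    return _closest(P, Q)

def _closest(P, Q):
    n = len(P)
    if n <= 3:                                 # brute force on tiny subsets
        return min((dist(P[i], P[j]) for i in range(n) for j in range(i + 1, n)),
                   default=inf)
    m = n // 2
    median_x = P[m][0]                         # the dividing vertical line x = m
    left, right = P[:m], P[m:]
    left_set = set(left)
    Ql = [p for p in Q if p in left_set]       # y-sorted points of the left half
    Qr = [p for p in Q if p not in left_set]   # y-sorted points of the right half
    d = min(_closest(left, Ql), _closest(right, Qr))

    strip = [p for p in Q if abs(p[0] - median_x) < d]   # vertical strip of width 2d
    for i, p in enumerate(strip):
        for q in strip[i + 1:i + 8]:           # only a constant number of followers matter
            d = min(d, dist(p, q))
    return d

pts = [(2, 3), (12, 30), (40, 50), (5, 1), (12, 10), (3, 4)]
print(closest_pair(pts))                       # ~1.414, achieved by (2, 3) and (3, 4)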
EFFICIENCY OF THE CLOSEST-PAIR
ALGORITHM

The running time of the algorithm is described by

T(n) = 2T(n/2) + M(n), where M(n) ∈ O(n)

By the Master Theorem (with a = 2, b = 2, d = 1),

T(n) ∈ O(n log n)
QUICKHULL ALGORITHM

Convex hull: the smallest convex set that includes the given points

• Assume the points are sorted by x-coordinate values
• Identify the extreme points P1 and P2 (leftmost and rightmost)
• Compute the upper hull recursively:
  • find the point Pmax that is farthest away from line P1P2
  • compute the upper hull of the points to the left of line P1Pmax
  • compute the upper hull of the points to the left of line PmaxP2
• Compute the lower hull in a similar manner

(Diagram: Pmax lies above the line P1P2, with P1 the leftmost and P2 the rightmost point.)
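A minimal Python sketch of quickhull as outlined above; the signed cross product is used both to test which side of a line a point lies on and, since the line length is fixed within each call, as the "farthest from the line" measure (helper names are illustrative):

def quickhull(points):
    """Return the convex hull vertices of a set of 2-D points."""
    pts = sorted(set(points))                 # sorted by x (then y)
    if len(pts) < 3:
        return pts
    p1, p2 = pts[0], pts[-1]                  # leftmost and rightmost points
    upper = _hull_side(pts, p1, p2)           # hull points to the left of p1 -> p2
    lower = _hull_side(pts, p2, p1)           # hull points to the left of p2 -> p1
    return [p1] + upper + [p2] + lower

def _cross(a, b, p):
    """Twice the signed area of triangle (a, b, p); > 0 if p is left of a -> b."""
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def _hull_side(pts, a, b):
    """Hull points strictly to the left of the directed line a -> b, in order."""
    left = [p for p in pts if _cross(a, b, p) > 0]
    if not left:
        return []
    pmax = max(left, key=lambda p: _cross(a, b, p))   # farthest from the line a-b
    return _hull_side(left, a, pmax) + [pmax] + _hull_side(left, pmax, b)

square = [(0, 0), (4, 0), (4, 4), (0, 4), (2, 2), (1, 3)]
print(quickhull(square))   # [(0, 0), (0, 4), (4, 4), (4, 0)]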
EFFICIENCY OF QUICKHULL
ALGORITHM

• Finding the point farthest away from line P1P2 can be done in linear time

• Time efficiency:
  • worst case: Θ(n^2) (as quicksort)
  • average case: Θ(n) (under reasonable assumptions about the distribution of the given points)

• If the points are not initially sorted by x-coordinate value, this can be accomplished in O(n log n) time

• Several O(n log n) algorithms for convex hull are known

REFERENCE

Levitin, A. (2012). Introduction to the Design and Analysis of Algorithms (3rd ed.). Harlow: Addison-Wesley.
ACKNOWLEDGEMENT

Pearson Education, Inc., Upper Saddle River, NJ. All Rights Reserved.
