Divide N Conquer

The document discusses the divide and conquer algorithm for finding the closest pair of points in both one-dimensional and two-dimensional spaces. It outlines the algorithm's steps, including dividing the points, recursively finding closest pairs, and combining results, achieving a time complexity of O(n log n). Additionally, it touches on the analysis of the algorithm's efficiency and its applications in various fields such as graphics and computer vision.


A Divide & Conquer Example:

Closest Pair of Points

8
closest pair of points: 1 dimensional version
Given n points on the real line, find the closest pair

Closest pair is adjacent in ordered list


Time O(n log n) to sort, if needed
Plus O(n) to scan adjacent pairs

10
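To make this concrete, here is a minimal Python sketch of the 1-D method (the function name and sample input are mine, not the slides'):

# Sketch of the 1-D method above: sort, then scan adjacent pairs.
def closest_pair_1d(xs):
    if len(xs) < 2:
        raise ValueError("need at least two points")
    xs = sorted(xs)                    # O(n log n); skip if already sorted
    best = (xs[0], xs[1])
    for a, b in zip(xs, xs[1:]):       # the closest pair is adjacent in sorted order
        if b - a < best[1] - best[0]:
            best = (a, b)
    return best

# e.g. closest_pair_1d([7, 1, 4, 9, 3]) returns (3, 4)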
closest pair of points: 2 dimensional version
Closest pair. Given n points in the plane, find a pair with smallest
Euclidean distance between them.

Fundamental geometric primitive.


Graphics, computer vision, geographic information systems, molecular
modeling, air traffic control.
Special case of nearest neighbor, Euclidean MST, Voronoi;
fast closest pair inspired fast algorithms for these problems.

Brute force. Check all pairs of points p and q: Θ(n²) time.

1-D version. O(n log n) easy if points are on a line.

Assumption. No two points have the same x coordinate.
(Just to simplify the presentation.)

11
closest pair of points
Algorithm.
Divide: draw vertical line L with ≈ n/2 points on
each side.

12
closest pair of points
Algorithm.
Divide: draw vertical line L with ≈ n/2 points on
each side.
Conquer: find closest pair on each side, recursively.

(figure: left-half closest pair at distance 21, right-half at distance 12)
13
closest pair of points
Algorithm.
Divide: draw vertical line L with ≈ n/2 points on
each side.
Conquer: find closest pair on each side, recursively.
Combine to find closest pair overall;
return best of the solutions.   (seems like Θ(n²)?)

(figure: line L, with candidate pairs at distances 8, 21, and 12)
14
closest pair of points
Find closest pair with one point in each side,
assuming distance < δ.

(figure: δ = min(12, 21) = 12)
15
closest pair of points
Find closest pair with one point in each side,
assuming distance < δ.
Observation: suffices to consider points within δ of line L.

(figure: the 2δ-strip around line L; δ = min(12, 21))

16
closest pair of points
Find closest pair with one point in each side,
assuming distance < δ.
Observation: suffices to consider points within δ of line L.
Almost the one-D problem again: Sort points in 2δ-strip by
their y coordinate.

(figure: points in the 2δ-strip numbered 1–7 from bottom to top by y-coordinate; δ = min(12, 21))

17
closest pair of points
Find closest pair with one point in each side,
assuming distance < δ.
Observation: suffices to consider points within δ of line L.
Almost the one-D problem again: sort points in the 2δ-strip by
their y coordinate. Only check points within 11 positions in the sorted list!

(figure as before: strip points numbered 1–7 by y-coordinate; δ = min(12, 21))

18
closest pair of points
Claim: No two points lie in the
same ½δ-by-½δ box.
(figure: the 2δ-strip subdivided into ½δ-by-½δ boxes, with points s_i and s_j marked)
19
closest pair of points
Claim: No two points lie in the
same ½δ-by-½δ box.
Pf: Such points would be within
δ·√((½)² + (½)²) = δ·√(½) ≈ 0.7δ < δ of each other.

(figure as before: ½δ-by-½δ boxes in the 2δ-strip)
20
closest pair of points
Claim: No two points lie in the
same ½δ-by-½δ box.
Pf: Such points would be within
δ·√((½)² + (½)²) = δ·√(½) ≈ 0.7δ < δ of each other.

Def. Let s_i be the point with the i-th smallest
y-coordinate among points in the 2δ-wide strip.

Claim: If |i – j| > 11, then the
distance between s_i and s_j is > δ.

(figure as before)
21
closest pair of points
Claim: No two points lie in the
same ½δ-by-½δ box.
Pf: Such points would be within
δ·√((½)² + (½)²) = δ·√(½) ≈ 0.7δ < δ of each other.

Def. Let s_i be the point with the i-th smallest
y-coordinate among points in the 2δ-wide strip.

Claim: If |i – j| > 11, then the
distance between s_i and s_j is > δ.
Pf: only 11 boxes are within +δ of y(s_i).

(figure as before)

22
closest pair algorithm
Closest-Pair(p1, …, pn) {
   if (n <= ??) return ??

   Compute separation line L such that half the points
   are on one side and half on the other side.

   δ1 = Closest-Pair(left half)
   δ2 = Closest-Pair(right half)
   δ  = min(δ1, δ2)

   Delete all points further than δ from separation line L.

   Sort remaining points p[1]…p[m] by y-coordinate.

   for i = 1..m
      for k = 1..11
         if i+k <= m
            δ = min(δ, distance(p[i], p[i+k]));

   return δ.
}
23
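One possible runnable Python rendering of this pseudocode, as a sketch under assumptions of my own (points as (x, y) tuples, brute force for n ≤ 3, and re-sorting the strip by y in every call, which matches the O(n log² n) analysis two slides ahead rather than the refined O(n log n) variant). Like the pseudocode, it returns the distance δ.

import math

def closest_pair(points):
    # assumes at least two points; sort by x once, then recurse
    return _closest_rec(sorted(points))

def _closest_rec(px):                       # px: points sorted by x
    n = len(px)
    if n <= 3:                              # base case: brute force
        return min(math.dist(p, q)
                   for i, p in enumerate(px) for q in px[i + 1:])
    mid = n // 2
    x_line = px[mid][0]                     # vertical separation line L
    d = min(_closest_rec(px[:mid]),         # δ1
            _closest_rec(px[mid:]))         # δ2

    # keep only points within δ of L, sorted by y (re-sorted each call)
    strip = sorted((p for p in px if abs(p[0] - x_line) < d),
                   key=lambda p: p[1])
    for i in range(len(strip)):
        for k in range(1, 12):              # only check the next 11 in y-order
            if i + k < len(strip):
                d = min(d, math.dist(strip[i], strip[i + k]))
    return d

# e.g. closest_pair([(0, 0), (3, 4), (1, 1), (7, 2)]) returns sqrt(2)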
closest pair of points: analysis
Analysis, I: Let D(n) be the number of pairwise distance
calculations in the Closest-Pair Algorithm when run on n > 1 points.

D(n) ≤  0                   if n = 1
        2·D(n/2) + 11·n     if n > 1
⇒  D(n) = O(n log n)

BUT – that’s only the number of distance calculations

What if we counted running time?

24
closest pair of points: analysis
Analysis, II: Let T(n) be the running time in the Closest-Pair
Algorithm when run on n > 1 points
T(n) ≤  0                        if n = 1
        2·T(n/2) + O(n log n)    if n > 1
⇒  T(n) = O(n log² n).

Q. Can we achieve O(n log n)?

A. Yes. Don't sort points from scratch each time.


Sort by x at top level only.
Each recursive call returns δ and list of all points sorted by y
Sort by merging two pre-sorted lists.

T(n) ≤ 2·T(n/2) + O(n)  ⇒  T(n) = O(n log n)

25


plan
Recurrences

Applications:
multiplying numbers
multiplying matrices
computing medians

36
d & c summary
Idea:
“Two halves are better than a whole”
if the base algorithm has super-linear complexity.
“If a little's good, then more's better”
repeat above, recursively
Applications: Many.
Binary Search, Merge Sort, (Quicksort), Closest
points, Integer multiply,…

37
Recurrences

Above: where they come from, how to find them

Next: how to solve them

38
divide and conquer – master recurrence
T(n) = a·T(n/b) + c·n^d  then

a > b^d  ⇒  T(n) = Θ(n^(log_b a))    [many subprobs → leaves dominate]

a < b^d  ⇒  T(n) = Θ(n^d)            [few subprobs → top level dominates]

a = b^d  ⇒  T(n) = Θ(n^d · log n)    [balanced → all log n levels contribute]

Fine print:
a ≥ 1; b > 1; c, d ≥ 0; T(1) = c;
a, b, k, t integers.

39
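Worked examples (using algorithms named in the summary above): merge sort has a = 2, b = 2, d = 1, so a = b^d and T(n) = Θ(n log n); binary search has a = 1, b = 2, d = 0, so again a = b^d and T(n) = Θ(log n); the closest-pair recurrence 2·T(n/2) + O(n) is the same balanced case, giving Θ(n log n).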
Solve: T(n) = a·T(n/b) + c·n^d

Level    Num        Size          Work
0        1 = a⁰     n             c·n^d
1        a          n/b           a·c·(n/b)^d
2        a²         n/b²          a²·c·(n/b²)^d
…        …          …             …
i        a^i        n/b^i         a^i·c·(n/b^i)^d
…        …          …             …
k-1      a^(k-1)    n/b^(k-1)     a^(k-1)·c·(n/b^(k-1))^d
k        a^k        n/b^k = 1     a^k·T(1)

n = b^k ;  k = log_b n

Total Work:  T(n) = ∑_{i=0}^{log_b n} a^i·c·(n/b^i)^d    (add last column)
40
a useful identity
Theorem:
1 + x + x² + x³ + … + x^k = (x^(k+1) − 1)/(x − 1)    (for x ≠ 1)
proof:
S      = 1 + x + x² + x³ + … + x^k
xS     =     x + x² + x³ + … + x^k + x^(k+1)
xS − S = x^(k+1) − 1
S(x − 1) = x^(k+1) − 1
S = (x^(k+1) − 1)/(x − 1)

44
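Sanity check with x = 2, k = 3: the left side is 1 + 2 + 4 + 8 = 15, and the right side is (2⁴ − 1)/(2 − 1) = 15.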
T(1) = d
T(n) = a·T(n/b) + c·n^d ,  a > b^d

T(n) = ∑_{i=0}^{log_b n} a^i·c·(n/b^i)^d

     = c·n^d · ∑_{i=0}^{log_b n} (a/b^d)^i

     = c·n^d · ( (a/b^d)^(log_b n + 1) − 1 ) / ( (a/b^d) − 1 )
45
Solve: T(1) = d
T(n) = a·T(n/b) + c·n^d ,  a > b^d

c·n^d · ( (a/b^d)^(log_b n + 1) − 1 ) / ( (a/b^d) − 1 )
     < c·n^d · (a/b^d)^(log_b n + 1) / ( (a/b^d) − 1 )
     = c′·n^d · (a/b^d)^(log_b n)        where c′ = c·(a/b^d)/((a/b^d) − 1)

and, since n^d = (b^(log_b n))^d = b^(d·log_b n),

n^d · (a/b^d)^(log_b n) = b^(d·log_b n) · a^(log_b n) / b^(d·log_b n)
                        = a^(log_b n)
                        = (b^(log_b a))^(log_b n)
                        = (b^(log_b n))^(log_b a)
                        = n^(log_b a)

so T(n) = O(n^(log_b a)).

46
Solve: T(1) = d
T(n) = a·T(n/b) + c·n^d ,  a < b^d

T(n) = ∑_{i=0}^{log_b n} a^i·c·(n/b^i)^d

     = c·n^d · ∑_{i=0}^{log_b n} (a/b^d)^i

     = c·n^d · ( 1 − (a/b^d)^(log_b n + 1) ) / ( 1 − (a/b^d) )

     < c·n^d · 1 / ( 1 − (a/b^d) )

     = O(n^d)
Solve: T(1) = d
T(n) = a·T(n/b) + c·n^d ,  a = b^d

T(n) = ∑_{i=0}^{log_b n} a^i·c·(n/b^i)^d

     = c·n^d · ∑_{i=0}^{log_b n} (a/b^d)^i

     = c·n^d · (log_b n + 1)        [each term of the sum equals 1, since a = b^d]

     = O(n^d · log_b n)
divide and conquer – master recurrence
T(n) = a·T(n/b) + c·n^d  for n > b,  then

a > b^d  ⇒  T(n) = Θ(n^(log_b a))    [many subprobs → leaves dominate]

a < b^d  ⇒  T(n) = Θ(n^d)            [few subprobs → top level dominates]

a = b^d  ⇒  T(n) = Θ(n^d · log n)    [balanced → all log n levels contribute]

Fine print:
a ≥ 1; b > 1; c, d ≥ 0; T(1) = c;
a, b, k, t integers.

49
Integer Multiplication

52
integer arithmetic
Add. Given two n-bit integers a and b, compute a + b.
O(n) bit operations.

Multiply. Given two n-bit integers a and b, compute a × b.
The “grade school” method: Θ(n²) bit operations.

(figure: binary addition 11010101 + 01111101 = 101010010, and grade-school
binary multiplication 11010101 × 01111101 = 0110100000000001)

53–55
divide & conquer multiplication: warmup
To multiply two 2-digit integers:
Multiply four 1-digit integers.
Add, shift some 2-digit integers to obtain result.

x  = 10·x1 + x0
y  = 10·y1 + y0
xy = (10·x1 + x0)·(10·y1 + y0)
   = 100·x1·y1 + 10·(x1·y0 + x0·y1) + x0·y0

(worked example: 32 × 45, with x1 x0 = 3 2 and y1 y0 = 4 5:
x0×y0 = 10, x0×y1 = 08, x1×y0 = 15, x1×y1 = 12; result 1440)

Same idea works for long integers –
can split them into 4 half-sized ints
56
divide & conquer multiplication: warmup
To multiply two n-bit integers:
Multiply four ½n-bit integers.
Add two ½n-bit integers, and shift to obtain result.

x  = 2^(n/2)·x1 + x0
y  = 2^(n/2)·y1 + y0
xy = (2^(n/2)·x1 + x0)·(2^(n/2)·y1 + y0)
   = 2^n·x1·y1 + 2^(n/2)·(x1·y0 + x0·y1) + x0·y0

T(n) = 4·T(n/2) [recursive calls] + Θ(n) [add, shift]  ⇒  T(n) = Θ(n²)

(worked example in binary: 11010101 × 01111101 = 0110100000000001,
with the four ½n-bit partial products shown)

58
key trick: 2 multiplies for the price of 1:

(Well, ok, 4 for 3 is more accurate…)

x = 2^(n/2)·x1 + x0
y = 2^(n/2)·y1 + y0
xy = (2^(n/2)·x1 + x0)·(2^(n/2)·y1 + y0)
   = 2^n·x1·y1 + 2^(n/2)·(x1·y0 + x0·y1) + x0·y0

α = x1 + x0
β = y1 + y0
αβ = (x1 + x0)·(y1 + y0)
   = x1·y1 + (x1·y0 + x0·y1) + x0·y0
(x1·y0 + x0·y1) = αβ − x1·y1 − x0·y0

59
Karatsuba multiplication
To multiply two n-bit integers:
Add two ½n bit integers.
Multiply three ½n-bit integers.
Add, subtract, and shift ½n-bit integers to obtain result.

x = 2^(n/2)·x1 + x0
y = 2^(n/2)·y1 + y0
xy = 2^n·x1·y1 + 2^(n/2)·(x1·y0 + x0·y1) + x0·y0
   = 2^n·x1·y1 + 2^(n/2)·( (x1 + x0)·(y1 + y0) − x1·y1 − x0·y0 ) + x0·y0
          A                        B                A        C         C
(only three distinct products: A = x1·y1, B = (x1 + x0)·(y1 + y0), C = x0·y0)

Theorem. [Karatsuba-Ofman, 1962] Can multiply two n-digit
integers in O(n^1.585) bit operations.

T(n) ≤ 3·T(n/2) + O(n)  ⇒  T(n) = O(n^(log₂ 3)) = O(n^1.585)
60
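A compact Python sketch of this three-multiplication recursion (the function name, the cutoff of 16, and the use of Python's arbitrary-precision ints for the n-bit operands are my own choices):

def karatsuba(x, y):
    # multiply nonnegative integers using 3 half-size multiplications
    if x < 16 or y < 16:                        # small case: ordinary multiply
        return x * y
    half = max(x.bit_length(), y.bit_length()) // 2
    x1, x0 = x >> half, x & ((1 << half) - 1)   # x = 2^half·x1 + x0
    y1, y0 = y >> half, y & ((1 << half) - 1)   # y = 2^half·y1 + y0
    A = karatsuba(x1, y1)                       # x1·y1
    C = karatsuba(x0, y0)                       # x0·y0
    B = karatsuba(x1 + x0, y1 + y0)             # (x1+x0)·(y1+y0)
    middle = B - A - C                          # x1·y0 + x0·y1
    return (A << (2 * half)) + (middle << half) + C

# e.g. karatsuba(0b11010101, 0b01111101) == 213 * 125 == 26625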
multiplication – the bottom line
Naïve: Θ(n²)
Karatsuba: Θ(n^1.59…)
Amusing exercise: generalize Karatsuba to do 5 size-n/3
subproblems → Θ(n^1.46…)
Best known: Θ(n log n loglog n)
(“Fast Fourier Transform”)

61
Another Example:

Matrix Multiplication –

Strassen’s Method

62
Multiplying Matrices

(4×4 example: C = A·B, where each entry c_ij = a_i1·b_1j + a_i2·b_2j + a_i3·b_3j + a_i4·b_4j)

n³ multiplications, n³ − n² additions


63
Simple Matrix Multiply
for i = 1 to n
   for j = 1 to n
      C[i,j] = 0
      for k = 1 to n
         C[i,j] = C[i,j] + A[i,k] * B[k,j]

n³ multiplications, n³ − n² additions


64
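The same triple loop as a short runnable Python function, assuming square matrices stored as lists of lists (my representation choice, not the slides'):

def matmul(A, B):
    # grade-school matrix multiply: n^3 scalar multiplications
    n = len(A)
    C = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):
                C[i][j] += A[i][k] * B[k][j]
    return C

# e.g. matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]]) == [[19, 22], [43, 50]]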
Multiplying Matrices

(the same 4×4 product, partitioned into 2×2 blocks A11, A12, A21, A22 and B11, B12, B21, B22;
each block of the result is a sum of block products, e.g. the top-left block is A11·B11 + A12·B21)
67
Multiplying Matrices

[ A11 A12 ]   [ B11 B12 ]   [ A11·B11 + A12·B21   A11·B12 + A12·B22 ]
[ A21 A22 ] · [ B21 B22 ] = [ A21·B11 + A22·B21   A21·B12 + A22·B22 ]

Counting arithmetic operations:
T(n) = 8·T(n/2) + 4·(n/2)² = 8·T(n/2) + n²
68
Multiplying Matrices

T(n) = 1                 if n = 1
       8·T(n/2) + n²     if n > 1

By the Master Recurrence, with T(n) = a·T(n/b) + c·n^d and a > b^d:
T(n) = Θ(n^(log_b a)) = Θ(n^(log₂ 8)) = Θ(n³)

69
The algorithm
P1 = A12(B11+B21) P2 = A21(B12+B22)
P3 = (A11 - A12)B11 P4 = (A22 - A21)B22
P5 = (A22 - A12)(B21 - B22)
P6 = (A11 - A21)(B12 - B11)
P7 = (A21 - A12)(B11+B22)
C11= P1+P3 C12 = P2+P3+P6-P7
C21= P1+P4+P5+P7 C22 = P2+P4

70
Strassen’s algorithm
Multiply 2×2 matrices using 7 instead of 8 multiplications
(and lots more than 4 additions).

T(n) = 7·T(n/2) + c·n²
7 > 2², so T(n) is Θ(n^(log₂ 7)), which is O(n^2.81).
Fastest known algorithms use O(n^2.376) time.

71
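As a check on the seven products from the “The algorithm” slide, here is a recursive Python sketch; the helper names, list-of-lists representation, and restriction to power-of-two sizes are my own simplifications, not part of the slides.

def strassen(A, B):
    # A, B are n x n lists of lists, n a power of 2 (my simplifying assumption)
    n = len(A)
    if n == 1:
        return [[A[0][0] * B[0][0]]]
    h = n // 2
    def quad(M, r, c):                    # extract an h x h block
        return [row[c:c + h] for row in M[r:r + h]]
    def add(X, Y):
        return [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]
    def sub(X, Y):
        return [[x - y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]
    A11, A12, A21, A22 = quad(A, 0, 0), quad(A, 0, h), quad(A, h, 0), quad(A, h, h)
    B11, B12, B21, B22 = quad(B, 0, 0), quad(B, 0, h), quad(B, h, 0), quad(B, h, h)
    # the seven products, exactly as on the slide
    P1 = strassen(A12, add(B11, B21))
    P2 = strassen(A21, add(B12, B22))
    P3 = strassen(sub(A11, A12), B11)
    P4 = strassen(sub(A22, A21), B22)
    P5 = strassen(sub(A22, A12), sub(B21, B22))
    P6 = strassen(sub(A11, A21), sub(B12, B11))
    P7 = strassen(sub(A21, A12), add(B11, B22))
    C11 = add(P1, P3)
    C12 = sub(add(add(P2, P3), P6), P7)
    C21 = add(add(add(P1, P4), P5), P7)
    C22 = add(P2, P4)
    top = [r1 + r2 for r1, r2 in zip(C11, C12)]
    bottom = [r1 + r2 for r1, r2 in zip(C21, C22)]
    return top + bottom

# e.g. strassen([[1, 2], [3, 4]], [[5, 6], [7, 8]]) == [[19, 22], [43, 50]]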
