Week 4

Algorithm Analysis

COMP 2002

• An algorithm is a set of instructions to be followed to solve a problem.

• There are often many approaches (algorithms) to solve a problem. How do we choose between them?

At the heart of computer program design are two (sometimes conflicting) goals:

1. To design an algorithm that is easy to understand, code, and debug. (This is the concern of software engineering.)

2. To design an algorithm that makes efficient use of the computer's resources. (This is the concern of data structures and algorithm analysis.)
Algorithm Efficiency

There are two aspects of algorithmic efficiency:


• Time
• Instructions take time.
• How fast does the algorithm perform?
• What affects its runtime?

• Space
• Data structures take space.
• What kind of data structures can be used?
• How does the choice of data structure affect the runtime?

➢ We will focus on time:
– How to estimate the time required for an algorithm
– How to reduce the time required
How do we compare the time efficiency of two
algorithms that solve the same problem?
1. Empirical comparison (run programs)

2. Asymptotic Algorithm Analysis

The idea is not to find the exact computational time required by the algorithm, since that time is affected by other factors:
– How are the algorithms coded?
• Comparing running times means comparing the implementations.
• We should not compare implementations, because they are sensitive to programming
style that may cloud the issue of which algorithm is inherently more efficient.
– What computer should we use?
• We should compare the efficiency of the algorithms independently of a particular
computer.
– What data should the program use?
• Any analysis must be independent of specific data.

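For illustration only, an empirical comparison might look like the sketch below (the choice of algorithms, sizes, and names is ours, not from the slides). The measured times depend on the machine, the JIT, and the input, which is exactly why asymptotic analysis is preferred:

import java.util.Arrays;
import java.util.Random;

public class EmpiricalCompare {
    public static void main(String[] args) {
        int[] data = new Random(42).ints(100_000).toArray();

        int[] a = Arrays.copyOf(data, data.length);
        long t0 = System.nanoTime();
        Arrays.sort(a);                              // algorithm A: library sort, O(n log n)
        System.out.println("A: " + (System.nanoTime() - t0) / 1e6 + " ms");

        int[] b = Arrays.copyOf(data, data.length);
        t0 = System.nanoTime();
        insertionSort(b);                            // algorithm B: insertion sort, O(n^2)
        System.out.println("B: " + (System.nanoTime() - t0) / 1e6 + " ms");
    }

    // Simple insertion sort, included only to have a second algorithm to time.
    static void insertionSort(int[] arr) {
        for (int i = 1; i < arr.length; i++) {
            int key = arr[i];
            int j = i - 1;
            while (j >= 0 && arr[j] > key) { arr[j + 1] = arr[j]; j--; }
            arr[j + 1] = key;
        }
    }
}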
Asymptotic Algorithm Analysis

We want to identify how the required time for the algorithm grows as a function of its input size.

For most algorithms, running time depends on the "size" of the input.

Running time is expressed as T(n) for some function T on input size n.

You will learn asymptotic notations to express the growth rate of an algorithm.
The Execution Time of Algorithms
• Each operation in an algorithm (or a program) has a cost.
➔ Each operation takes a certain amount of time.

count = count + 1; ➔ takes a certain amount of time, but that time is constant

A sequence of operations:

count = count + 1; Cost: c1


sum = sum + count; Cost: c2

➔ Total Cost = c1 + c2

The Execution Time of Algorithms (cont.)
Example: Simple If-Statement
Cost Times
if (n < 0) c1 1
absval = -n c2 1
else
absval = n; c3 1

Total Cost <= c1 + max(c2,c3)

The Execution Time of Algorithms (cont.)
Example: Simple Loop
Cost Times
i = 1; c1 1
sum = 0; c2 1
while (i <= n) { c3 n+1
i = i + 1; c4 n
sum = sum + i; c5 n
}

Total Cost = c1 + c2 + (n+1)*c3 + n*c4 + n*c5


➔ The time required for this algorithm is proportional to n

The Execution Time of Algorithms (cont.)
Example: Nested Loop
Cost Times
i=1; c1 1
sum = 0; c2 1
while (i <= n) { c3 n+1
j=1; c4 n
while (j <= n) { c5 n*(n+1)
sum = sum + i; c6 n*n
j = j + 1; c7 n*n
}
i = i + 1; c8 n
}

Total Cost = c1 + c2 + (n+1)*c3 + n*c4 + n*(n+1)*c5 + n*n*c6 + n*n*c7 + n*c8

➔ The time required for this algorithm is proportional to n²
The Execution Time of Algorithms (cont.)
Example: Function Calls

Find the running time of each function call. Be careful with the analysis when the function is recursive.

int factorial( int n ) {
    if ( n <= 1 )
        return 1;
    return n * factorial(n - 1);
}

Recurrence relation:
T(n) = T(n - 1) + c
T(1) = c

Solving it with the substitution method:
T(n) = T(n - 1) + c
T(n) = T(n - 2) + c + c
T(n) = T(n - 3) + c + c + c
...
T(n) = T(n - k) + k*c
T(n) = T(1) + (n - 1)*c
T(n) = n*c ➔ O(n)
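One can make the call count visible with a counter. This instrumented sketch (the calls counter is our addition, not part of the slide) confirms that computing factorial(n) makes exactly n calls, matching T(n) = n*c:

public class FactorialCount {
    static long calls = 0;                  // number of invocations so far

    static long factorial(int n) {
        calls++;                            // one unit of constant work per call
        if (n <= 1) return 1;
        return n * factorial(n - 1);
    }

    public static void main(String[] args) {
        for (int n : new int[] {5, 10, 20}) {
            calls = 0;
            factorial(n);
            System.out.println("n = " + n + " -> " + calls + " calls");  // prints n calls
        }
    }
}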
General Rules for Estimation
• Loops: The running time of a loop is at most the running time of the statements inside the loop times the number of iterations.

• Nested Loops: The running time of a statement in the innermost loop is its own running time multiplied by the product of the sizes of all enclosing loops.

• Consecutive Statements: Just add the running times of the consecutive statements.

• If/Else: Never more than the running time of the test plus the larger of the running times of the two branches S1 and S2.
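A minimal sketch applying these rules (the code is ours, not from the slides):

// By the rules above: O(1) + O(n) + O(n)*O(n) = O(n^2) overall.
static long combined(int n) {
    long sum = 0;                          // consecutive statement: O(1)
    for (int i = 0; i < n; i++)            // simple loop: O(n)
        sum += i;
    for (int i = 0; i < n; i++)            // nested loops: O(n * n) = O(n^2)
        for (int j = 0; j < n; j++)
            sum += (long) i * j;
    return sum;                            // the highest-order term dominates: O(n^2)
}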
Algorithm Growth Rates
• We measure an algorithm’s time requirement as a function of the
problem size.
– Problem size depends on the application: e.g., the number of elements in a list for a sorting algorithm, or the number of users for a social-network search.
• So, for instance, we say that (if the problem size is n)
– Algorithm A requires 5*n² time units to solve a problem of size n.
– Algorithm B requires 7*n time units to solve a problem of size n.
• The most important thing to learn is how quickly the algorithm's time requirement grows as a function of the problem size.
– Algorithm A requires time proportional to n².
– Algorithm B requires time proportional to n.
• An algorithm's proportional time requirement is known as its growth rate.
• We can compare the efficiency of two algorithms by comparing their growth rates.
Compare f(N) = N, g(N) = 1000N, and h(N) = N²

● We do not want to claim that g(N) < h(N), since you can find points for which g(N) is less than h(N) and vice versa.

● Instead, we want to compare their relative rates of growth as functions of N:
– The growth rates of f(N) and g(N) are the same.
– The growth rates of both of these functions are less than that of h(N).
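A standard way to make these comparisons precise (not stated on the slide, but implied by it) is via limits of ratios:

\lim_{N\to\infty}\frac{g(N)}{f(N)} = \lim_{N\to\infty}\frac{1000N}{N} = 1000 \quad \text{(a nonzero constant, so the growth rates are the same)}

\lim_{N\to\infty}\frac{g(N)}{h(N)} = \lim_{N\to\infty}\frac{1000N}{N^2} = \lim_{N\to\infty}\frac{1000}{N} = 0 \quad \text{(so } g \text{ grows strictly more slowly than } h \text{)}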
Algorithm Growth Rates (cont.)

Time requirements as a function of the problem size n (figure not reproduced)
Common Growth Rates

Function     Growth Rate Name
c            Constant
log N        Logarithmic
log² N       Log-squared
N            Linear
N log N      Log-linear
N²           Quadratic
N³           Cubic
2^N          Exponential
Running Times for Small Inputs
(figure not reproduced)

Running Times for Large Inputs
(figure not reproduced)
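The two running-time plots were images and did not survive extraction. As a substitute sketch (program and names are ours), the following tabulates the growth functions, which makes both the small-input and large-input behavior visible:

public class GrowthTable {
    public static void main(String[] args) {
        System.out.printf("%8s %10s %12s %16s %14s%n",
                "N", "log2 N", "N log2 N", "N^2", "2^N");
        for (int n : new int[] {2, 8, 32, 128, 1024}) {
            double lg = Math.log(n) / Math.log(2);              // log base 2
            String pow = (n < 63) ? Long.toString(1L << n)      // 2^N fits a long only for small N
                                  : "astronomically large";
            System.out.printf("%8d %10.1f %12.1f %16d %14s%n",
                    n, lg, n * lg, (long) n * n, pow);
        }
    }
}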
Asymptotic notations: Big O Notation

• Running time T(n) (problem size is n):
– Algorithm A requires T(n) = 5*n² time units to solve a problem of size n.
– Algorithm B requires T(n) = 7*n time units to solve a problem of size n.

• The most important thing to learn is how quickly the algorithm's time requirement grows as a function of the problem size.
– Algorithm A requires time proportional to n².
– Algorithm B requires time proportional to n.

• An algorithm's growth rate is also referred to as its order:
– Algorithm A requires T(n) = O(n²) time.
– Algorithm B requires T(n) = O(n) time.
Asymptotic notations: Big O Notation

• If Algorithm A requires time proportional to g(n), it is O(g(n)).
• O(g(n)) is read as "order g(n)".
• The function g(n) is called the algorithm's growth-rate function.
• Since the capital O is used in the notation, this notation is called Big O notation.
Asymptotic notations: Big O Notation
Definition:
Algorithm A is order g(n) – denoted as O(g(n)) –
if constants k and n0 exist such that A requires no more than k*g(n) time
units to solve a problem of size n ≥ n0. ➔ T(n) ≤ k*g(n) for all n ≥ n0

The requirement of n ≥ n0 formalizes the notion of sufficiently large problems.

Big O notation is used to express an upper-bound on a function.

T(n) = O( g(n) )
• g(n) is an upper-bound on T(n)
• The growth rate of T(n) is less than or equal to that of g(n)
• T(n) grows at a rate no faster than g(n)
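As a sketch (ours, not from the slides), a bound like this can be sanity-checked numerically over a finite range of n. This is evidence rather than a proof, since a loop can never cover all n ≥ n0:

import java.util.function.LongUnaryOperator;

public class BoundCheck {
    // Returns true if T(n) <= k*g(n) for every sampled n in [n0, nMax].
    static boolean boundedAbove(LongUnaryOperator T, LongUnaryOperator g,
                                long k, long n0, long nMax) {
        for (long n = n0; n <= nMax; n++)
            if (T.applyAsLong(n) > k * g.applyAsLong(n)) return false;
        return true;
    }

    public static void main(String[] args) {
        // Checks the next slide's claim: n^2 - 3n + 10 <= 3*n^2 for n >= 2.
        System.out.println(boundedAbove(n -> n * n - 3 * n + 10,
                                        n -> n * n, 3, 2, 1_000_000));  // true
    }
}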
Asymptotic notations: Big O Notation

• Suppose an algorithm requires T(n) = n² – 3*n + 10 seconds to solve a problem of size n. If constants k and n0 exist such that

n² – 3*n + 10 ≤ k*n² for all n ≥ n0,

then the algorithm is order n². In fact, k = 3 and n0 = 2 work:

n² – 3*n + 10 < 3*n² for all n ≥ 2

(check at n = 2: 4 – 6 + 10 = 8 < 12). Thus, the algorithm requires no more than k*n² time units for n ≥ n0, so it is O(n²).
Asymptotic notations: Big O Notation

• Show that 2^x + 17 is O(2^x).

• 2^x + 17 ≤ 2^x + 2^x = 2*2^x for x ≥ 5 (since 17 ≤ 2^x once x ≥ 5)

• Hence k = 2 and n0 = 5.
Asymptotic notations: Big O Notation

• Show that 2^x + 17 is O(3^x).

• We need 2^x + 17 ≤ k*3^x.
• It is easy to see that the right-hand side grows faster than the left-hand side, so k = 1 suffices for large x.
• However, when x is small, the 17 still dominates → skip over the smaller values of x by using n0 = 3 (check: 2³ + 17 = 25 ≤ 27 = 3³).

• Hence k = 1 and n0 = 3.
Asymptotic notations: Big Omega Notation

Definition:
Algorithm A is omega g(n) – denoted as Ω(g(n)) –
if constants k and n0 exist such that A requires at least k*g(n) time
units to solve a problem of size n ≥ n0. ➔ T(n) ≥ k*g(n) for all n ≥ n0

This notation is used to express a lower-bound on a function

T(n) = Ω( g(n) )
• g(n) is a lower-bound on T(n)
• The growth rate of T(n) is greater than or equal to that of g(n)
• T(n) grows at a rate no slower than g(n)

Asymptotic notations: Big Omega Notation

Show that:

• 2n² = Ω(n²) → n² is also a lower bound on 2n²

• 100n² = Ω(n) → n is a lower bound on 100n²

• n ≠ Ω(n²) → but n² is not a lower bound on n: no constant k > 0 satisfies n ≥ k*n² (i.e., 1 ≥ k*n) for all large n.
Asymptotic notations: Big Theta Notation
Definition:
Algorithm A is theta g(n) – denoted as Θ(g(n)) –
if constants k1, k2, and n0 exist such that k1*g(n) ≤ T(n) ≤ k2*g(n) for all n ≥ n0.

T(n) = Θ( g(n) ) if and only if T(n) = O( g(n) ) and T(n) = Ω( g(n) )

This notation is used to express a tight-bound on a function

T(n) = Θ( g(n) )
• g(n) is a tight-bound on T(n)
• The growth rate of T(n) is equal to that of g(n)

Although using big-theta would be more precise, big-O answers are typically given.
Asymptotic notations: Big Theta Notation

• Show that T(n) = 7n² + 1 is Θ(n²).

• You need to show that T(n) is O(n²) and T(n) is Ω(n²).

• T(n) is O(n²) because 7n² + 1 ≤ 7n² + n² = 8n² for all n ≥ 1 ➔ k2 = 8, n0 = 1

• T(n) is Ω(n²) because 7n² + 1 ≥ 7n² for all n ≥ 0 ➔ k1 = 7, n0 = 0

• Pick the larger n0 so that both conditions hold ➔ k1 = 7, k2 = 8, n0 = 1
A Comparison of Growth-Rate Functions

(figure not reproduced)
Growth-Rate Functions
O(1)         Time requirement is constant, independent of the problem's size.
O(log₂n)     Time requirement for a logarithmic algorithm increases slowly as the problem size increases.
O(n)         Time requirement for a linear algorithm increases directly with the size of the problem.
O(n*log₂n)   Time requirement for an n*log₂n algorithm increases more rapidly than for a linear algorithm.
O(n²)        Time requirement for a quadratic algorithm increases rapidly with the size of the problem.
O(n³)        Time requirement for a cubic algorithm increases more rapidly with the size of the problem than for a quadratic algorithm.
O(2^n)       As the size of the problem increases, the time requirement for an exponential algorithm increases too rapidly to be practical.
Growth-Rate Functions
• If an algorithm takes 1 second to run with the problem size 8,
what is the time requirement (approximately) for that algorithm
with the problem size 16?
• If its order is:
O(1)        ➔ T(16) = 1 second
O(log₂n)    ➔ T(16) = (1*log₂16) / log₂8 = 4/3 seconds
O(n)        ➔ T(16) = (1*16) / 8 = 2 seconds
O(n*log₂n)  ➔ T(16) = (1*16*log₂16) / (8*log₂8) = 8/3 seconds
O(n²)       ➔ T(16) = (1*16²) / 8² = 4 seconds
O(n³)       ➔ T(16) = (1*16³) / 8³ = 8 seconds
O(2^n)      ➔ T(16) = (1*2¹⁶) / 2⁸ = 2⁸ seconds = 256 seconds
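All of these estimates follow one pattern: T(n2) ≈ T(n1) * f(n2) / f(n1), where f is the growth-rate function. A small sketch (class and method names are ours) that reproduces the slide's numbers:

import java.util.function.DoubleUnaryOperator;

public class ScalePrediction {
    // Predicts the time at size n2 from a measured time t1 at size n1,
    // assuming the running time grows like f.
    static double predict(double t1, double n1, double n2, DoubleUnaryOperator f) {
        return t1 * f.applyAsDouble(n2) / f.applyAsDouble(n1);
    }

    public static void main(String[] args) {
        double t1 = 1.0, n1 = 8, n2 = 16;
        System.out.println("O(n):       " + predict(t1, n1, n2, n -> n));               // 2.0
        System.out.println("O(n log n): " + predict(t1, n1, n2,
                n -> n * Math.log(n)));         // 8/3; the log base cancels in the ratio
        System.out.println("O(n^2):     " + predict(t1, n1, n2, n -> n * n));           // 4.0
        System.out.println("O(2^n):     " + predict(t1, n1, n2, n -> Math.pow(2, n)));  // 256.0
    }
}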
Properties of Growth-Rate Functions
1. We can ignore low-order terms in an algorithm's growth-rate function.
– If an algorithm is O(n³ + 4n² + 3n), it is also O(n³).
– We use only the highest-order term as the algorithm's growth-rate function.

2. We can ignore a multiplicative constant in the highest-order term of an algorithm's growth-rate function.
– If an algorithm is O(5n³), it is also O(n³).

3. O(f(n)) + O(g(n)) = O(f(n) + g(n))
– We can combine growth-rate functions.
– If an algorithm is O(n³) + O(4n²), it is also O(n³ + 4n²) ➔ so it is O(n³).
– Similar rules hold for multiplication.
Properties of Growth-Rate Functions

• If g(n) = 2n²,

g(n) = O(n⁴), g(n) = O(n³), and g(n) = O(n²) are all technically correct,

but the last one is the BEST ANSWER.

• If h(n) = 2n² + 100n,

do NOT say h(n) = O(2n² + 100n) or h(n) = O(2n²) or h(n) = O(n² + n).

The correct form is h(n) = O(n²).
Growth-Rate Functions – Example1
Cost Times
i = 1; c1 1
sum = 0; c2 1
while (i <= n) { c3 n+1
i = i + 1; c4 n
sum = sum + i; c5 n
}

T(n) = c1 + c2 + (n+1)*c3 + n*c4 + n*c5
     = (c3+c4+c5)*n + (c1+c2+c3)
     = a*n + b
➔ So, the growth-rate function for this algorithm is O(n)
Growth-Rate Functions – Example2
Cost Times
i=1; c1 1
sum = 0; c2 1
while (i <= n) { c3 n+1
j=1; c4 n
while (j <= n) { c5 n*(n+1)
sum = sum + i; c6 n*n
j = j + 1; c7 n*n
}
i = i +1; c8 n
}

T(n) = c1 + c2 + (n+1)*c3 + n*c4 + n*(n+1)*c5 + n*n*c6 + n*n*c7 + n*c8
     = (c5+c6+c7)*n² + (c3+c4+c5+c8)*n + (c1+c2+c3)
     = a*n² + b*n + c
➔ So, the growth-rate function for this algorithm is O(n²)
Growth-Rate Functions – Example3

for (i=1; i<=n; i++)

for (j=1; j<=i; j++)

for (k=1; k<=j; k++)

x=x+1;

T(n) = ∑_{i=1}^{n} ∑_{j=1}^{i} ∑_{k=1}^{j} 1

➔ So, the growth-rate function for this algorithm is O(n³)
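Evaluating the sum with standard identities (a step the slide leaves implicit) confirms the cubic growth:

T(n) = \sum_{i=1}^{n}\sum_{j=1}^{i}\sum_{k=1}^{j} 1
     = \sum_{i=1}^{n}\sum_{j=1}^{i} j
     = \sum_{i=1}^{n} \frac{i(i+1)}{2}
     = \frac{n(n+1)(n+2)}{6}
     = \Theta(n^3)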
Growth-Rate Functions – Example4

• How about this one? O(log n)
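The code for this example appeared as an image and was lost in extraction. A typical loop with this behavior (a sketch, not necessarily the original) shrinks the problem size by a constant factor each iteration:

// Counts how many times n can be halved before reaching 1.
// The loop body runs about log2(n) times, so the running time is O(log n).
static int halvings(int n) {
    int count = 0;
    while (n > 1) {
        n = n / 2;     // problem size shrinks by half each iteration
        count++;
    }
    return count;
}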
Algorithm Analysis

We would like to eliminate bad algorithmic ideas early through algorithm analysis.

Asymptotic notations are not affected by the programming language. Thus, there is no need to code an algorithm to find its time complexity (you can analyze the time complexity of even a pseudocode).

However, if after coding the algorithm it runs much more slowly than the analysis suggests, there may be an implementation inefficiency.
What to Analyze
• An algorithm can require different times to solve different problems of the same size.
– e.g., searching for an item in a list of n elements using sequential search ➔ cost: 1, 2, …, n

• Worst-Case Analysis – the maximum amount of time that an algorithm requires to solve a problem of size n.
– This gives an upper bound on the time complexity of an algorithm.
– Normally, we try to find the worst-case behavior of an algorithm.

• Best-Case Analysis – the minimum amount of time that an algorithm requires to solve a problem of size n.
– The best-case behavior of an algorithm is NOT very useful.

• Average-Case Analysis – the average amount of time that an algorithm requires to solve a problem of size n.
– Sometimes it is difficult to find the average-case behavior of an algorithm.
– We have to look at all possible data organizations of a given size n and the probability distribution over these organizations.
– Worst-case analysis is more common than average-case analysis.
A Common Misunderstanding

• “The best case for my algorithm is n = 1 because that is the fastest.” WRONG!

• The best case is defined for the input of size n that is fastest among all inputs of size n.
A Common Misunderstanding

• Confusing worst case with upper bound.

• Upper bound refers to a growth rate.

• Worst case refers to the worst input from among the choices for
possible inputs of a given size.

Sequential Search
public static int sequential(int[] arr, int N, int target)
{
    for (int i = 0; i < N; i++)     // scan the array from front to back
        if (arr[i] == target)
            return i;               // found: return its index
    return -1;                      // not found after N comparisons
}
Find the worst-case, best-case, and average-case upper bounds.

• For a successful search:
– Worst case: N iterations are necessary when the value is the last array item → O(N)
– Best case: only 1 iteration is necessary when the value is the first array item → O(1)
– Average case: (N + 1)/2 iterations are necessary on average; averaging the costs 1, 2, …, N over the N possible positions gives (1 + 2 + ⋯ + N)/N = (N + 1)/2 → O(N)

• For an unsuccessful search:
– Worst case, best case, and average case are all the same → O(N)
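A small usage sketch (the array contents are ours) showing where each case comes from:

int[] arr = {4, 8, 15, 16, 23, 42};
sequential(arr, arr.length, 4);    // best case: target at index 0, 1 iteration
sequential(arr, arr.length, 42);   // worst successful case: last index, N iterations
sequential(arr, arr.length, 99);   // unsuccessful: always N iterations, returns -1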
