
Analyzing Algorithms

CS-EE-310 Algorithms Analysis

Basic principles

Analyzing an algorithm means being able to predict the resources that the algorithm requires.
Computational (running) time is the most important factor that we want to measure.
To evaluate resource requirements and to predict running time, we need a universal and independent computational model: computational time is a universal measure, and it should not depend on a particular computer.

Random-access machine (RAM) model

The random-access machine (RAM) is a virtual computer with the following properties:
Instructions are executed one after another; there are no concurrent operations.

Random-access machine (RAM) model

The instruction set coincides with the one commonly found in real computers:
Arithmetic (add, subtract, multiply, divide, remainder, shift left/right).
Data movement (load, store, copy).
Control (conditional/unconditional branching, subroutine call and return).

Random-access machine (RAM) model

The RAM model uses integer and floating-point numeric data types.
We don't worry about precision.
There is a limit on the word size: when working with inputs of size n, assume that integers are represented by c lg n bits for some constant c ≥ 1 (lg n is a commonly used shorthand for log₂ n).
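As a quick worked illustration of this word-size assumption (the values of n and c below are hypothetical, chosen only for the example):

import math

# Sketch: how many bits the RAM model allows per integer for an input of size n,
# under the c*lg(n) word-size bound stated above.
n = 1_000_000   # hypothetical input size
c = 2           # some constant c >= 1
print(c * math.log2(n))   # about 39.9 bits per integer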

Analyzing an algorithm's running time

The time taken by an algorithm depends on the input:
Sorting 1000 numbers takes longer than sorting 3 numbers.
A given sorting algorithm may even take different amounts of time on two inputs of the same size: it takes less time to sort n elements when they are already sorted than when they are in reverse sorted order.

Analyzing an algorithm's running time

Input size depends on the problem being considered:
Usually it is the number of items in the input, such as the size n of the array being sorted.
It could be something else: when multiplying two integers, it could be the total number of bits in the two integers.
It could be described by more than one number: for example, graph algorithm running times are usually expressed in terms of the number of vertices and the number of edges in the input graph.

Analyzing an algorithm's running time

Finally, the running time is the number of primitive operations (steps, or pseudocode lines) executed on a particular input.

Analyzing an algorithm's running time: Fundamentals

We want the steps of an algorithm to be machine-independent, so we assume that each line of pseudocode requires a constant amount of time.
One line may take a different amount of time than another, but each execution of line i takes the same amount of time ci, provided the line consists only of primitive operations.
If the line is a function (subroutine) call, then the call itself takes constant time, but the execution of the function being called might not.
If the line specifies operations other than primitive ones, then it might take more than constant time. Examples: "sort the points by x-coordinate", "sort an array", etc.

The sorting problem

1st algorithm (Insertion-Sort)

for j ← 2 to length[A]
    do { key ← A[j]
         // Insert A[j] into the sorted sequence A[1..j-1]
         i ← j - 1
         while (i > 0) and (A[i] > key)
             do { A[i+1] ← A[i]
                  i ← i - 1
                  A[i+1] ← key
                }
       }
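A minimal, runnable Python sketch of this first variant (my translation of the pseudocode, assuming 0-based array indexing; note that the write A[i+1] = key sits inside the inner loop, exactly as above):

def insertion_sort_v1(A):
    """First variant: key is re-written on every inner-loop iteration."""
    for j in range(1, len(A)):            # pseudocode: for j <- 2 to length[A]
        key = A[j]
        i = j - 1
        while i >= 0 and A[i] > key:      # pseudocode: while (i > 0) and (A[i] > key)
            A[i + 1] = A[i]               # shift the larger element one slot right
            i = i - 1
            A[i + 1] = key                # place key into the gap (every iteration)
    return A

# Example: insertion_sort_v1([5, 2, 4, 6, 1, 3]) returns [1, 2, 3, 4, 5, 6]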

Analyzing Insertion Sort

Assume that the ith line takes time ci, which is a constant.
For j = 2, 3, …, n, let tj be the number of times that the while loop test is executed for that value of j.
Note that when a for or while loop exits in the usual way (due to the test in the loop header), the test is executed one more time than the loop body.

The running time

The running time is the sum, over all statements, of (cost of the statement) × (number of times the statement is executed):

$$T(n) = \sum_j c_j s_j$$

where cj is the constant cost of statement j and sj is the number of times statement j is executed.

Insertion Sort 1st algorithm: the Running time

Statement                                        Running time
InsertionSort(A, n)
  for j ← 2 to n                                 c1 n
    do { key ← A[j]                              c2 (n-1)
         // comment line                         c3 = 0
         i ← j - 1                               c4 (n-1)
         while (i > 0) and (A[i] > key)          c5 T
             do { A[i+1] ← A[i]                  c6 (T-(n-1))
                  i ← i - 1                      c7 (T-(n-1))
                  A[i+1] ← key                   c8 (T-(n-1))
                }
       }

where T = t2 + t3 + … + tn and tj is the number of while-test evaluations during the jth iteration of the for loop.

Insertion Sort 1st algorithm: the Running time

$$T(n) = c_1 n + c_2 (n-1) + c_4 (n-1) + c_5 \sum_{j=2}^{n} t_j + c_6 \sum_{j=2}^{n} (t_j - 1) + c_7 \sum_{j=2}^{n} (t_j - 1) + c_8 \sum_{j=2}^{n} (t_j - 1)$$

What can T(n) be?

Best case -- inner loop body never executed (the array is already sorted):
    tj = 1, so T(n) is a linear function.
Worst case -- inner loop body executed for all previous elements:
    tj = j, so T(n) is a quadratic function.
Average case -- ???
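These growth claims are easy to confirm empirically. The following sketch (my own instrumentation, not part of the slides) counts the total number of while-test evaluations T = t2 + … + tn for an already-sorted and a reverse-sorted input:

def while_test_count(A):
    """Return T, the total number of while-test evaluations in insertion sort."""
    A, tests = list(A), 0
    for j in range(1, len(A)):
        key, i = A[j], j - 1
        tests += 1                        # the test that may stop the loop at once
        while i >= 0 and A[i] > key:
            A[i + 1] = A[i]
            i -= 1
            tests += 1                    # one more test after each shift
        A[i + 1] = key
    return tests

for n in (10, 100, 1000):
    best = while_test_count(range(n))             # sorted input: T = n - 1 (linear)
    worst = while_test_count(range(n, 0, -1))     # reverse input: T = n(n+1)/2 - 1 (quadratic)
    print(n, best, worst)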

Insertion Sort 1st algorithm: the Running time. Best case

Best case -- inner loop body never executed (the array is already sorted), so tj = 1:

$$T(n) = c_1 n + c_2 (n-1) + c_4 (n-1) + c_5 \sum_{j=2}^{n} t_j + c_6 \sum_{j=2}^{n} (t_j - 1) + c_7 \sum_{j=2}^{n} (t_j - 1) + c_8 \sum_{j=2}^{n} (t_j - 1) = c_1 n + c_2 (n-1) + c_4 (n-1) + c_5 (n-1)$$

T(n) is a linear function.

Insertion Sort 1st algorithm: the Running time. Worst case

Worst case -- inner loop body executed for all previous elements (the array is initially sorted in the reverse order), so tj = j:

$$T(n) = c_1 n + c_2 (n-1) + c_4 (n-1) + c_5 \sum_{j=2}^{n} j + c_6 \sum_{j=2}^{n} (j - 1) + c_7 \sum_{j=2}^{n} (j - 1) + c_8 \sum_{j=2}^{n} (j - 1)$$
$$= c_1 n + c_2 (n-1) + c_4 (n-1) + c_5 \left( \frac{n(n+1)}{2} - 1 \right) + c_6 \frac{n(n-1)}{2} + c_7 \frac{n(n-1)}{2} + c_8 \frac{n(n-1)}{2}$$

T(n) is a quadratic function.

The sorting problem

1st-a algorithm (Insertion-Sort)

for j ← 2 to length[A]
    do { key ← A[j]
         // Insert A[j] into the sorted sequence A[1..j-1]
         i ← j - 1
         while (i > 0) and (A[i] > key)
             do { A[i+1] ← A[i]
                  i ← i - 1
                }
         A[i+1] ← key
       }
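A Python sketch of this 1st-a variant (same assumptions as before: 0-based indexing, my translation of the pseudocode). The only change is that A[i+1] = key has moved outside the while loop, so it executes once per outer iteration rather than once per shift:

def insertion_sort_v1a(A):
    """Variant 1-a: key is written once, after the inner while loop finishes."""
    for j in range(1, len(A)):
        key = A[j]
        i = j - 1
        while i >= 0 and A[i] > key:
            A[i + 1] = A[i]   # shift larger elements one position to the right
            i = i - 1
        A[i + 1] = key        # single write per outer iteration (cost c8 (n-1))
    return A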

Insertion Sort 1st-a algorithm: the Running time

Statement                                        Running time
InsertionSort(A, n)
  for j ← 2 to n                                 c1 n
    do { key ← A[j]                              c2 (n-1)
         // comment line                         c3 = 0
         i ← j - 1                               c4 (n-1)
         while (i > 0) and (A[i] > key)          c5 T
             do { A[i+1] ← A[i]                  c6 (T-(n-1))
                  i ← i - 1                      c7 (T-(n-1))
                }
         A[i+1] ← key                            c8 (n-1)
       }

where T = t2 + t3 + … + tn and tj is the number of while-test evaluations during the jth iteration of the for loop.

Insertion Sort 1st-a algorithm: the Running time

$$T(n) = c_1 n + c_2 (n-1) + c_4 (n-1) + c_5 \sum_{j=2}^{n} t_j + c_6 \sum_{j=2}^{n} (t_j - 1) + c_7 \sum_{j=2}^{n} (t_j - 1) + c_8 (n-1)$$

What can T(n) be?

Best case -- inner loop body never executed (the array is already sorted):
    tj = 1, so T(n) is a linear function.
Worst case -- inner loop body executed for all previous elements:
    tj = j, so T(n) is a quadratic function.
Average case -- ???

Insertion Sort 1st-a algorithm: the Running time. Best case

Best case -- inner loop body never executed (the array is already sorted), so tj = 1:

$$T(n) = c_1 n + c_2 (n-1) + c_4 (n-1) + c_5 \sum_{j=2}^{n} t_j + c_6 \sum_{j=2}^{n} (t_j - 1) + c_7 \sum_{j=2}^{n} (t_j - 1) + c_8 (n-1) = c_1 n + c_2 (n-1) + c_4 (n-1) + c_5 (n-1) + c_8 (n-1)$$

T(n) is a linear function.

Insertion Sort 1st-a algorithm: the Running time. Worst case

Worst case -- inner loop body executed for all previous elements (the array is initially sorted in the reverse order), so tj = j:

$$T(n) = c_1 n + c_2 (n-1) + c_4 (n-1) + c_5 \sum_{j=2}^{n} j + c_6 \sum_{j=2}^{n} (j - 1) + c_7 \sum_{j=2}^{n} (j - 1) + c_8 (n-1)$$
$$= c_1 n + c_2 (n-1) + c_4 (n-1) + c_5 \left( \frac{n(n+1)}{2} - 1 \right) + c_6 \frac{n(n-1)}{2} + c_7 \frac{n(n-1)}{2} + c_8 (n-1)$$

T(n) is a quadratic function.

Insertion Sort 1st-a algorithm: the Running time. Worst case

Worst case -- inner loop body executed for all previous elements (the array is initially sorted in the reverse order):

$$T(n) = c_1 n + c_2 (n-1) + c_4 (n-1) + c_5 \left( \frac{n(n+1)}{2} - 1 \right) + c_6 \frac{n(n-1)}{2} + c_7 \frac{n(n-1)}{2} + c_8 (n-1)$$
$$= \left( \frac{c_5}{2} + \frac{c_6}{2} + \frac{c_7}{2} \right) n^2 + \left( c_1 + c_2 + c_4 + \frac{c_5}{2} - \frac{c_6}{2} - \frac{c_7}{2} + c_8 \right) n - (c_2 + c_4 + c_5 + c_8)$$

T(n) is a quadratic function.

Running time: What is more important to analyze?

Best case?
Worst case?
Average case?
Some of them?
All?

Running time: the worst case is the most interesting!

The worst-case running time gives a guaranteed upper bound for any input.
For many algorithms, the worst case occurs often. For example, when searching, the worst case often occurs when the item being searched for is not present, and searches for absent items may be frequent.

Running time: average case

The average case is interesting and important because it gives a closer estimate of the realistic running time.
However, analyzing it usually requires more effort (algebraic transformations, etc.).
On the other hand, it is often about as bad as the worst case.
Hence, it is often enough to consider the worst case.

Running time

The worst case is the most interesting.
The average case is interesting, but it is often about as bad as the worst case and may be estimated by the worst case.
The best case is the least interesting.

Average Case vs. Worst Case (Insertion Sort example)

On average, the key in A[j] is less than half of the elements in A[1..j-1] and greater than the other half.
On average, the while loop therefore has to look halfway through the sorted subarray A[1..j-1] to find where to insert key, so tj ≈ j/2.
Although the average-case running time is approximately half of the worst-case running time, it is still a quadratic function of n.
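This j/2 estimate can be checked empirically. The sketch below (my own, not from the slides) averages the total number of while-test evaluations over random permutations; the measured total grows roughly like n²/4, about half of the worst-case n²/2:

import random

def total_while_tests(A):
    A, tests = list(A), 0
    for j in range(1, len(A)):
        key, i = A[j], j - 1
        tests += 1                  # the test that may end the loop immediately
        while i >= 0 and A[i] > key:
            A[i + 1] = A[i]
            i -= 1
            tests += 1              # one more test per shifted element
        A[i + 1] = key
    return tests

n, trials = 200, 100
avg = sum(total_while_tests(random.sample(range(n), n)) for _ in range(trials)) / trials
print(avg, n * n / 4)               # the average total is close to n^2/4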

Algorithm Efficiency
Big-O Notation


What is the algorithm's efficiency?

The algorithm's efficiency is a function of the number of elements to be processed. The general format is

f(n) = efficiency

The basic concept

When comparing two different algorithms that solve the same problem, we often find that one algorithm is an order of magnitude more efficient than the other.
A typical example is the famous Fast Fourier Transform (FFT) algorithm: it requires N log N multiplications and additions, while a direct Fourier Transform algorithm requires N² multiplications and additions.
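A quick worked comparison (my own illustrative numbers) of the two operation counts:

import math

# Compare N*log2(N) (FFT) with N^2 (direct transform) for a couple of sizes.
for N in (1024, 1_048_576):
    print(N, N * math.log2(N), N ** 2)   # e.g. N=1024: 10,240 vs 1,048,576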

The basic concept

If the efficiency function is linear, then the algorithm is linear and contains no loops or recursion. In this case, the algorithm's efficiency depends only on the speed of the computer.
If the algorithm contains loops or recursion (any recursion can always be converted to a loop), it is called nonlinear. In such a case, the efficiency function depends strongly on the number of elements to be processed.

Linear Loops

The efficiency depends on how many times the body of the loop is repeated. In a linear loop, the loop update (of the controlling variable) either adds or subtracts.
For example:

for (i ← 0 step 1 to 1000)
    the loop body

Here the loop body is repeated 1000 times.
For the linear loop the efficiency is directly proportional to the number of iterations:

f(n) = n
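A minimal Python version of this loop (my translation of the pseudocode), just counting how often the body runs:

count, i = 0, 0
while i < 1000:        # for (i <- 0 step 1 to 1000): additive update
    count += 1         # the loop body
    i += 1
print(count)           # 1000 iterations, so f(n) = n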

Logarithmic Loops

In a logarithmic loop, the controlling variable is multiplied or divided in each iteration.
For example:

Multiply loop:
for (i ← 1 step *2 to 1024)
    the loop body

Divide loop:
for (i ← 1024 step /2 down to 1)
    the loop body

For the logarithmic loop the efficiency is determined by the following formula:

f(n) = log n
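A Python sketch of the multiply loop (my translation), counting its iterations:

count, i = 0, 1
while i < 1024:        # for (i <- 1 step *2 to 1024): multiplicative update
    count += 1         # the loop body
    i *= 2
print(count)           # 10 = log2(1024) iterations, so f(n) = log n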


Linear Logarithmic Nested Loop

for (i ← 1 to 10)
    for (j ← 1 step *2 to 10)
        the loop body

The outer loop in this example adds, while the inner loop multiplies.
The total number of iterations in the linear logarithmic nested loop is equal to the product of the numbers of iterations of the outer and inner loops (10 · log 10 in our example).
For the linear logarithmic nested loop the efficiency is determined by the following formula:

f(n) = n log n
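The same counting idea for the nested loops above, as a Python sketch (n = 10):

n, count = 10, 0
for i in range(1, n + 1):   # outer loop: adds, n iterations
    j = 1
    while j < n:            # inner loop: multiplies, about log2(n) iterations
        count += 1          # the loop body
        j *= 2
print(count)                # 40 here (10 outer x 4 inner); grows like n log n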

Quadratic Nested Loop

for (i ← 1 to 10)
    for (j ← 1 to 10)
        the loop body

Both loops in this example add.
The total number of iterations in the quadratic nested loop is equal to the product of the numbers of iterations of the outer and inner loops (10 × 10 = 100 in our example).
For the quadratic nested loop the efficiency is determined by the following formula:

f(n) = n²
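And the quadratic case as a Python sketch (n = 10):

n, count = 10, 0
for i in range(1, n + 1):       # outer loop: n iterations
    for j in range(1, n + 1):   # inner loop: n iterations
        count += 1              # the loop body
print(count)                    # 10 * 10 = 100 iterations, so f(n) = n^2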

Dependent Quadratic Nested Loop

for (i ← 1 to 10)
    for (j ← i to 10)
        the loop body

The number of iterations of the inner loop depends on the outer loop. The total number of iterations is the sum of the first n members of an arithmetic progression: n(n+1)/2.
Equivalently, the total is the number of outer iterations times the average number of inner iterations, (n+1)/2 (10 × 5.5 = 55 in our example).
For the dependent quadratic nested loop the efficiency is determined by the following formula:

f(n) = n(n+1)/2
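A Python sketch confirming the n(n+1)/2 count (n = 10):

n, count = 10, 0
for i in range(1, n + 1):        # outer loop: n iterations
    for j in range(i, n + 1):    # inner loop: n, n-1, ..., 1 iterations
        count += 1               # the loop body
print(count, n * (n + 1) // 2)   # both print 55, so f(n) = n(n+1)/2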

Big-O notation

The number of statements executed when processing n elements of data is a function of n, expressed as f(n).
Although the equation derived for a function may be complex, a dominant factor in the equation usually determines the order of magnitude of the result.
This factor is the big-O, as in "on the order of". It is expressed, for example, as O(n).

Big-O notation

The big-O notation can be derived from f(n) using the following steps:

1. In each term, set the coefficient of the term to 1.
2. Keep the largest term in the function and discard the others. Terms are ranked from lowest to highest:
   log n, n, n log n, n², n³, …, nᵏ, 2ⁿ, n!

For example,

$$f(n) = n \cdot \frac{n+1}{2} = \frac{1}{2} n^2 + \frac{1}{2} n \;\to\; n^2 + n \;\to\; O(f(n)) = O(n^2)$$
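A quick numerical sanity check (my own) that dropping the coefficient and the lower-order term is justified: the ratio f(n)/n² settles near the constant 1/2 as n grows, so f(n) = n(n+1)/2 is O(n²).

f = lambda n: n * (n + 1) / 2
for n in (10, 1_000, 1_000_000):
    print(n, f(n) / n ** 2)   # 0.55, 0.5005, 0.5000005 -> bounded by a constant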
