Ds Module 4

Introduction to Algorithms
References

1. Introduction to the Design and Analysis of Algorithms, Anany Levitin, Pearson.
2. Data Structures and Algorithms Using C, Mark Allen Weiss.
3. Data Structures Using C, Schaum's Series.
Introduction:

The etymology of the word "algorithm" dates back to the 9th century AD. The word is derived from the name of the Persian author Abu Ja'far Mohammad ibn Musa al-Khwarizmi.
Algorithms

Definition:

An algorithm is a sequence of unambiguous instructions for solving a problem, i.e., for obtaining a required output for any legitimate input in a finite amount of time.

    problem
       |
       v
    algorithm
       |
       v
    input --> "computer" --> output

Properties of an algorithm:

1. Input: Zero or more quantities are externally supplied.
2. Output: At least one quantity is produced.
3. Definiteness: Each instruction is clear and unambiguous.
4. Finiteness: The algorithm must terminate after a finite number of steps.
5. Effectiveness: Every instruction must be basic enough that, in principle, a person could carry it out using only pencil and paper.
Active Areas of Research

1. How to devise algorithms.
   - It is an art.
   - Standard algorithmic design techniques are available.
2. How to validate algorithms.
   - Show that the algorithm gives the correct answer for all possible legal inputs.
   - Algorithm validation.
   - Program proving or verification.
3. How to analyze algorithms.
   - Performance analysis: time and memory.
4. How to test programs.
   - Debugging and profiling.
Algorithm Specification

An algorithm can be specified:
- By using a natural language such as English
- By using a graphical representation called a flowchart
- By using a pseudocode representation

Example: computing the greatest common divisor (GCD) of two integers.

The GCD of two non-negative integers m and n (not both zero), denoted gcd(m, n), is defined as the largest integer that divides both m and n evenly, i.e., with a remainder of 0.

    gcd(m, n) = gcd(n, m mod n)

where (m mod n) is the remainder of the division of m by n.

Step 1: If n = 0, return the value of m as the answer and stop; otherwise go to Step 2.
Step 2: Divide m by n and assign the value of the remainder to r.
Step 3: Assign the value of n to m and the value of r to n. Go to Step 1.
ALGORITHM Euclid(m, n)
// Computes gcd(m, n) by Euclid's algorithm
// Input: Two non-negative integers m and n, not both zero
// Output: GCD of m and n
while n ≠ 0 do
    r ← m mod n
    m ← n
    n ← r
return m
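The pseudocode above translates almost line for line into C. The following is a minimal sketch; the function name `gcd` is my own choice, not part of the original slides:

```c
#include <assert.h>

/* Euclid's algorithm: repeatedly replace (m, n) with (n, m mod n)
 * until n becomes 0; m then holds the greatest common divisor. */
unsigned int gcd(unsigned int m, unsigned int n)
{
    while (n != 0) {
        unsigned int r = m % n;   /* remainder of m divided by n */
        m = n;
        n = r;
    }
    return m;
}
```

For example, gcd(60, 24) proceeds as (60, 24) → (24, 12) → (12, 0) and returns 12.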
Flowchart and natural language:

Example: exchanging the values of two variables x and y.

Natural language:

Algorithm Exchange
// This algorithm exchanges the values of x and y.
Step 1: Read two numbers x and y.
Step 2: temp = x
        x = y
        y = temp
Step 3: Output the result (print x, y).
Step 4: Terminate.

Flowchart (equivalent): start → Read x, y → Exchange x and y → Print x, y → stop.
Pseudo code representation

Algorithm Exchange
{
read x, y;
temp = x;
x = y;
y = temp;
print x, y;
}
Points to remember

1. Comments begin with // and continue until the end of the line.
2. Blocks are indicated with matching braces { }. A compound statement can be represented by a block. Each statement is delimited by ; (semicolon).
3. Assignment of values to variables is done using the assignment statement:
   <variable> := <expression>
4. Inside the body of an algorithm we can use:
   a. Logical operators (and, or, not)
   b. Relational operators (<, <=, =, >=, >)
   c. Looping statements (for, while, repeat-until)
   d. Conditional statements (if, if-then, if-then-else, case)
5. Input and output are done using the instructions "read" and "write".
E.g. : Algorithm to sort a collection of n elements

Algorithm sort
for i := 1 to n do
{
Examine a[i] to a[n] and suppose the smallest
element is at a[j];
Interchange a[i] and a[j];
}
Pseudocode for the above algorithm

Algorithm SelectionSort(a, n)
// Sorts the array a[1:n]
{
    for i := 1 to n do
    {
        j := i;
        for k := i + 1 to n do
            if (a[k] < a[j]) then j := k;
        t := a[i];
        a[i] := a[j];
        a[j] := t;
    }
}
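As a sketch in C (using 0-based array indexing, unlike the 1-based pseudocode above):

```c
#include <assert.h>

/* Selection sort: on each pass, find the smallest element in
 * a[i..n-1] and swap it into position i. */
void selection_sort(int a[], int n)
{
    for (int i = 0; i < n - 1; i++) {
        int j = i;                    /* index of smallest so far */
        for (int k = i + 1; k < n; k++)
            if (a[k] < a[j])
                j = k;
        int t = a[i];                 /* swap a[i] and a[j] */
        a[i] = a[j];
        a[j] = t;
    }
}
```

The inner loop performs the "examine a[i] to a[n]" step of the informal algorithm; the basic operation is the comparison a[k] < a[j].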
Sample algorithms

1. Largest of three numbers

Algorithm largest
Step 1: Input three numbers { read a, b, c }
Step 2: Assume a is largest { big = a }
Step 3: Obtain the larger of a and b
        { if (b > big)
              big = b
          end if }
Step 4: Obtain the largest of a, b and c
        { if (c > big)
              big = c
          end if }
Step 5: Output the largest
        print big
Step 6: Finished.
        stop.
2. Sum of first n natural numbers

Algorithm sum of series
Step 1: Input the number of terms
        read n
Step 2: Initialization
        sum = 0
Step 3: Find the sum of all terms
        for ( i = 1 to n )
            sum = sum + i
        end for
Step 4: Output the sum
        print sum
Step 5: Finished
        stop.
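The loop above maps directly to C; a minimal sketch (the function name `sum_of_series` is illustrative):

```c
#include <assert.h>

/* Sum of the first n natural numbers, following the algorithm
 * above: sum = 1 + 2 + ... + n, accumulated in a loop. */
int sum_of_series(int n)
{
    int sum = 0;                 /* Step 2: initialization */
    for (int i = 1; i <= n; i++) /* Step 3: add each term */
        sum = sum + i;
    return sum;                  /* Step 4: the result */
}
```

This loop runs in O(n) time; the closed form n(n + 1)/2 discussed later gives the same value in O(1).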
3. Finding the roots of a quadratic equation

Algorithm Roots_of_quadratic_equation
Step 1: Input the coefficients of the quadratic equation
        read a, b, c
Step 2: Find two equal real roots
        if (b² − 4ac = 0) then
            x1 = x2 = −b / (2a)
            print ("two equal real roots", x1, x2)
        end if
        stop.
Step 3: Find two distinct real roots
        if (b² − 4ac > 0) then
            x1 = (−b + sqrt(b² − 4ac)) / (2a)
            x2 = (−b − sqrt(b² − 4ac)) / (2a)
            print ("two distinct roots", x1, x2)
        end if
        stop.
Step 4: Print complex roots
        print ("roots complex")
Step 5: stop.
Algorithm design and analysis process

1. Understand the problem.
2. Decide on computational means: exact vs. approximate solving, data structure(s), algorithm design technique.
3. Design an algorithm.
4. Prove its correctness.
5. Analyze the algorithm.
6. Code the algorithm.

Performance analysis of an algorithm is decided based on its complexity.

Complexity is divided into two parts:

- Space complexity (space efficiency)
- Time complexity (time efficiency)

Time efficiency indicates how fast an algorithm runs. Space efficiency deals with the space the algorithm requires.

Here the amount of extra space required by an algorithm is typically not of as much concern, because half a century of technological innovation has improved computer memory size by many orders of magnitude. So the time complexity of an algorithm is decided based on the following points:

a. Measuring an input size.

b. Units for measuring running time

c. Orders of growth

d. Worst-case, Best-case and Average-case efficiencies.


Running Time

Most algorithms transform input objects into output objects. The running time of an algorithm typically grows with the input size. Average-case time is often difficult to determine, so we focus on the worst-case running time:
- Easier to analyze
- Crucial to applications such as games, finance and robotics

[Figure: best-case, average-case and worst-case running time vs. input size]
Experimental Studies

- Write a program implementing the algorithm.
- Run the program with inputs of varying size and composition.
- Use a method like System.currentTimeMillis() to get an accurate measure of the actual running time.
- Plot the results (running time vs. input size).

[Figure: scatter plot of measured time (ms) against input size]
Limitations of Experiments

- Results may not be indicative of the running time on other inputs not included in the experiment.
- In order to compare two algorithms, the same hardware and software environments must be used.
Theoretical Analysis

- Uses a high-level description of the algorithm instead of an implementation.
- Characterizes running time as a function of the input size, n.
- Takes into account all possible inputs.
- Allows us to evaluate the speed of an algorithm independently of the hardware/software environment.
Methodology for algorithm analysis

The several components are:

- A language for describing algorithms:
  (1) Pseudocode
  (2) Step-by-step procedure
  (3) Flowchart
- A metric for measuring running time:
  (1) Step count
  (2) Operation count
- An approach for characterizing running time, including recursive algorithms.
Pseudocode

- High-level description of an algorithm
- More structured than English prose
- Less detailed than a program
- Preferred notation for describing algorithms
- Hides program design issues

Example: find the max element of an array.

Algorithm arrayMax(A, n)
    Input: array A of n integers
    Output: maximum element of A
    currentMax ← A[0]
    for i ← 1 to n − 1 do
        if A[i] > currentMax then
            currentMax ← A[i]
    return currentMax
Pseudocode Details

Control flow:
- if … then … [else …]
- while … do …
- repeat … until …
- for … do …
- Indentation replaces braces

Method declaration:
    Algorithm method(arg [, arg…])
        Input …
        Output …

Method call:
    var.method(arg [, arg…])

Return value:
    return expression

Expressions:
- Assignment: ← (like = in Java)
- Equality testing: = (like == in Java)
- Superscripts (e.g. n²) and other mathematical formatting allowed
Step count: Primitive Operations

Primitive operations are basic computations performed by an algorithm. They are identifiable in pseudocode, largely independent of the programming language, and assumed to take a constant amount of time in the RAM model; their exact definition is not important (we will see why later).

Examples:
- Assigning a value to a variable
- Calling a method
- Performing an arithmetic operation (evaluating an expression)
- Indexing into an array
- Following an object reference
- Returning from a method
- Comparing two numbers
Counting Primitive Operations

By inspecting the pseudocode, we can determine the maximum number of primitive operations executed by an algorithm, as a function of the input size:

Algorithm arrayMax(A, n)                      # operations
    currentMax ← A[0]                         2
    { initialize counter i to 1 }             1
    for i ← 1 to n − 1 do                     n
        if A[i] > currentMax then             2(n − 1)
            currentMax ← A[i]                 2(n − 1)
        { increment counter i }               2(n − 1)
    return currentMax                         1
…contd

To summarize, the number of primitive operations executed by the algorithm is:
- At least (best case): 2 + 1 + n + 4(n − 1) + 1 = 5n (because the assignment currentMax ← A[i] is never executed)
- At most (worst case): 2 + 1 + n + 6(n − 1) + 1 = 7n − 2 (because the assignment is executed on every iteration)

This method is difficult to apply if the program is more complex.
Operation count method

In this method we identify the basic operation and count the total number of times that operation is executed. For example, in algorithm arrayMax the basic operation is the comparison

    if A[i] > currentMax

The loop is executed n − 1 times (for i = 1 to n − 1), with the if statement executed once per iteration. Therefore the time taken is n − 1. This method is easy to apply.
Estimating Running Time

Algorithm arrayMax executes 7n − 2 primitive operations in the worst case. Define:
    a = time taken by the fastest primitive operation
    b = time taken by the slowest primitive operation
Let T(n) be the worst-case time of arrayMax. Then

    a·(5n) ≤ T(n) ≤ b·(7n − 2)

Hence, the running time T(n) is bounded by two linear functions.
Using Recursion

The ability of a function to call itself is called recursion. In recursion we need to define a base case (termination condition) and a general case (expressed in terms of a reduced problem). If the proper terminating condition is not given, the recursion never reaches the base case and does not terminate.

Example: fact(n)

int fact(int n)
{
    if (n == 0)
        return 1;                /* base case */
    else
        return n * fact(n - 1);  /* general case */
}
Growth Rate of Running Time

Changing the hardware/software environment affects T(n) by a constant factor, but does not alter the growth rate of T(n). The linear growth rate of the running time T(n) is an intrinsic property of algorithm arrayMax.
Recursive Max

Algorithm recursiveMax(A, n)
    Input: an array A storing n ≥ 1 integers
    Output: the maximum element in A
    if n = 1 then
        return A[0]
    return max{ recursiveMax(A, n − 1), A[n − 1] }
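A sketch of the same algorithm in C (the function name `recursive_max` is my own):

```c
#include <assert.h>

/* Recursive maximum, following the pseudocode above: the maximum
 * of A[0..n-1] is the larger of A[n-1] and the maximum of A[0..n-2]. */
int recursive_max(const int A[], int n)
{
    if (n == 1)                        /* base case: one element */
        return A[0];
    int m = recursive_max(A, n - 1);   /* max of the first n-1 elements */
    return (A[n - 1] > m) ? A[n - 1] : m;
}
```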
Analyzing recursive algorithms

To analyze running time we need to set up a recurrence equation. Introduce a function T(n) that denotes the running time on an input of size n.

For the factorial example:

    T(n) = 2                 if n = 0
    T(n) = T(n − 1) + 4      otherwise

For the recursiveMax algorithm:

    T(n) = 3                 if n = 1
    T(n) = T(n − 1) + 7      otherwise
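As a sketch, such recurrences can be solved by repeated (backward) substitution. For the factorial recurrence above:

```latex
T(n) = T(n-1) + 4
     = [T(n-2) + 4] + 4 = T(n-2) + 2 \cdot 4
     = \cdots
     = T(0) + 4n = 4n + 2
```

So T(n) = 4n + 2, which is O(n). Similarly, for recursiveMax, T(n) = 3 + 7(n − 1) = 7n − 4, also O(n).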
Asymptotic Algorithm Analysis

The asymptotic analysis of an algorithm determines its running time in big-Oh notation. To perform the asymptotic analysis:
- We find the worst-case number of primitive operations executed as a function of the input size.
- We express this function with big-Oh notation.

Example:
- We determined that algorithm arrayMax executes at most 7n − 2 primitive operations.
- We say that algorithm arrayMax "runs in O(n) time".

The word asymptotic refers to the study of a function as its parameter n grows larger and larger without bound.
Seven Important Functions (§3.3)

Seven functions often appear in algorithm analysis:
- Constant ≈ 1
- Logarithmic ≈ log n
- Linear ≈ n
- N-Log-N ≈ n log n
- Quadratic ≈ n²
- Cubic ≈ n³
- Exponential ≈ 2ⁿ

In a log-log chart of T(n) versus n, the slope of the line corresponds to the growth rate of the function.

[Figure: log-log plot comparing the growth of the linear, quadratic and cubic functions]
Constant Factors

The growth rate is not affected by
- constant factors or
- lower-order terms.

Examples:
- 10²n + 10⁵ is a linear function
- 10⁵n² + 10⁸n is a quadratic function

[Figure: log-log plot showing these functions growing parallel to n and n², respectively]
The efficiency analysis framework concentrates on the order of growth of an algorithm's basic operation count as the principal indicator of the algorithm's efficiency.

To compare and rank such orders of growth, computer scientists use three notations:

1. O (big oh) → worst case → upper bound.
2. Ω (big omega) → best case → lower bound.
3. Θ (big theta) → average case → upper and lower bounds.
O (big oh) Notation:

A function f(n) is said to be in O(g(n)), denoted f(n) ∈ O(g(n)), if f(n) is bounded above by some constant multiple of g(n) for all large n, i.e., if there exist some positive constant c and some non-negative integer n₀ such that

    f(n) ≤ c·g(n) for all n ≥ n₀

The definition is illustrated in the figure: for all n ≥ n₀, the curve f(n) lies below c·g(n).

The upper bound of f(n) indicates that the function f(n) will not consume more than the specified time c·g(n), i.e., f(n) ∈ O(g(n)).

Ω (big omega) Notation:

A function f(n) is said to be in Ω(g(n)), denoted f(n) ∈ Ω(g(n)), if f(n) is bounded below by some positive constant multiple of g(n) for all large n, i.e., if there exist some positive constant c and some non-negative integer n₀ such that

    f(n) ≥ c·g(n) for all n ≥ n₀

The definition is illustrated in the figure: for all n ≥ n₀, the curve f(n) lies above c·g(n).

The lower bound of f(n) indicates that the function f(n) will consume at least the specified time c·g(n), i.e., f(n) ∈ Ω(g(n)).

Example: n³ ∈ Ω(n²), since n³ ≥ n² for all n ≥ 0, i.e., c = 1 and n₀ = 0.
Θ (big theta) Notation:

A function f(n) is said to be in Θ(g(n)), denoted f(n) ∈ Θ(g(n)), if f(n) is bounded both above and below by some positive constant multiples of g(n) for all large n, i.e., if there exist some positive constants c₁ and c₂ and some non-negative integer n₀ such that

    c₂·g(n) ≤ f(n) ≤ c₁·g(n) for all n ≥ n₀

The definition is illustrated in the figure: for all n ≥ n₀, the curve f(n) lies between c₂·g(n) and c₁·g(n).
Examples: Big O notation

1. Given f(n) = 3n + 2, prove that f(n) ∈ O(n).

Solution: First we have to find c and n₀. By definition,
    3n + 2 ≤ c·n for n ≥ n₀
   (f(n))     (c·g(n))
Dividing both sides by n:
    3 + 2/n ≤ c for n ≥ n₀
Setting n₀ = 1 gives c ≥ 5, so we can say that 3n + 2 ≤ 5n for all n ≥ 1.
Check: 3·1 + 2 ≤ 5·1 for n = 1, and 3·2 + 2 ≤ 5·2 for n = 2. Hence the proof.
2. Show that 3n² + 4n − 2 ∈ O(n²).

Solution: We need to find c and n₀ such that
    3n² + 4n − 2 ≤ c·n² for all n ≥ n₀
Dividing both sides by n²:
    3 + 4/n − 2/n² ≤ c
If we choose n₀ = 1, then we need a value of c such that 3 + 4 − 2 ≤ c, so we can set c = 5. Now we have
    3n² + 4n − 2 ≤ 5n² for all n ≥ 1 (with c = 5).
Check: 3·1 + 4·1 − 2 ≤ 5·1 for n = 1, and 3·4 + 4·2 − 2 ≤ 5·4 for n = 2. Hence the proof.
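Such a bound can also be sanity-checked numerically. A small sketch (the helper name `bound_holds` is my own, and a finite check is evidence, not a proof):

```c
#include <assert.h>

/* Checks the claimed bound 3n^2 + 4n - 2 <= 5n^2 for every n
 * from n0 = 1 up to a given limit. Returns 1 if the bound holds
 * throughout, 0 if it is violated for some n. */
int bound_holds(int limit)
{
    for (int n = 1; n <= limit; n++) {
        long f  = 3L * n * n + 4L * n - 2;  /* f(n) */
        long cg = 5L * n * n;               /* c * g(n) with c = 5 */
        if (f > cg)
            return 0;
    }
    return 1;
}
```

Algebraically, f ≤ 5n² reduces to 2(n − 1)² ≥ 0, which is always true, so the check succeeds for any limit.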
3. Given f(n) = n³, show that f(n) ∉ O(n²).

Solution: Assume that n³ = O(n²). Then there must exist constants c and n₀ such that
    n³ ≤ c·n² for all n ≥ n₀
Dividing by n² gives n ≤ c. But this is impossible: we can never choose a constant c so large that n will never exceed it, since n can grow without bound. Thus the original assumption that n³ = O(n²) must be wrong, so n³ ∉ O(n²). Hence the proof.
Example: Big omega notation

1. Given f(n) = 5n², prove that f(n) ∈ Ω(n).

Solution: As per the definition, we need
    5n² ≥ c·n, i.e., c·n ≤ 5n²
Dividing both sides by n gives c ≤ 5n. Setting n₀ = 1 results in c ≤ 5; therefore we can set c = 1, giving 5n² ≥ 1·n for all n ≥ 1.
Check: 5·1 ≥ 1·1 for n = 1, and 5·4 ≥ 1·2 for n = 2. Hence the proof.
Example: Big theta notation

1. Given f(n) = ½n(n − 1), prove that f(n) ∈ Θ(n²).

Solution: As per the definition we have to find c₁, c₂ and n₀.

First we prove the upper bound:
    ½n(n − 1) = ½n² − ½n ≤ ½n² for all n ≥ 0.

Second we prove the lower bound:
    ½n(n − 1) = ½n² − ½n ≥ ½n² − ½n·½n = ¼n² for all n ≥ 2.

Hence we can select c₂ = ¼, c₁ = ½ and n₀ = 2.


A. for loops: O(n)

The running time of a for loop is at most the running time of the statements inside the loop times the number of iterations.

    sum = 0;
    for( i = 0; i < n; i++ )
        sum = sum + i;

The running time is O(n).
B. Nested loops

The total running time is the running time of the innermost statements times the product of the sizes of all the loops.

    sum = 0;
    for( i = 0; i < n; i++)
        for( j = 0; j < n; j++)
            sum++;

The running time is O(n²).


C. Consecutive program fragments

The total running time is the maximum of the running times of the individual fragments.

    sum = 0;                        /* O(n) */
    for( i = 0; i < n; i++)
        sum = sum + i;

    sum = 0;                        /* O(n²) */
    for( i = 0; i < n; i++)
        for( j = 0; j < 2*n; j++)
            sum++;

The maximum is O(n²), so the total running time is O(n²).
D: If statement
if C
S1;
else
S2;

The running time is the maximum of the


running times of S1 and S2.
EXAMPLES:

    sum = 0;                            /* O(n³) */
    for( i = 0 ; i < n; i++)
        for( j = 0 ; j < n*n ; j++ )
            sum++;

    sum = 0;                            /* O(n²) */
    for( i = 0; i < n ; i++)
        for( j = 0; j < i ; j++)
            sum++;

    for (i = 0; i < n; i++)             /* O(n) */
        if (a[ i ] == x) return 1;
    return -1;
Search in an n × m table:

    for (i = 0; i < n; i++)
        for (j = 0; j < m; j++)
            if (a[ i ][ j ] == x) return 1;
    return -1;

The running time is O(n·m).
Orders of growth:

    Notation     Name           Example
    O(1)         Constant
    O(log n)     Logarithmic    Binary search
    O(n)         Linear         Scanning an array of size n
    O(n log n)   Linearithmic   Merge sort
    O(n²)        Quadratic      Matrix problems
    O(nᶜ)        Polynomial
    O(cⁿ)        Exponential
    O(n!)        Factorial
Orders of growth:

The table below shows the orders of growth for some values of n.

    log n   n     n log n   n²      n³       2ⁿ
    0       1     0         1       1        2
    1       2     2         4       8        4
    2       4     8         16      64       16
    3       8     24        64      512      256
    4       16    64        256     4096     65536
    5       32    160       1024    32768    4294967296
Computing Prefix Averages

We further illustrate asymptotic analysis with two algorithms for prefix averages. The i-th prefix average of an array X is the average of the first (i + 1) elements of X:

    A[i] = (X[0] + X[1] + … + X[i]) / (i + 1)

Computing the array A of prefix averages of another array X has applications to financial analysis.

[Figure: bar chart of an array X and its prefix averages A]
Prefix Averages (Quadratic)

The following algorithm computes prefix averages in quadratic time by applying the definition:

Algorithm prefixAverages1(X, n)
    Input: array X of n integers
    Output: array A of prefix averages of X
    A ← new array of n integers
    for i ← 0 to n − 1 do
        s ← X[0]
        for j ← 1 to i do
            s ← s + X[j]
        A[i] ← s / (i + 1)
    return A
// The running time is O(n²)
Arithmetic Progression

The running time of prefixAverages1 is O(1 + 2 + … + n). The sum of the first n integers is n(n + 1)/2 (there is a simple visual proof of this fact). Thus, algorithm prefixAverages1 runs in O(n²) time.
Prefix Averages (Linear)

The following algorithm computes prefix averages in linear time by keeping a running sum:

Algorithm prefixAverages2(X, n)
    Input: array X of n integers
    Output: array A of prefix averages of X
    A ← new array of n integers
    s ← 0
    for i ← 0 to n − 1 do
        s ← s + X[i]
        A[i] ← s / (i + 1)
    return A

Algorithm prefixAverages2 runs in O(n) time.
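A sketch of prefixAverages2 in C, using double for the averages so they are not truncated (the function name is my own choice):

```c
#include <assert.h>

/* Linear-time prefix averages, following prefixAverages2 above:
 * keep a running sum s of X[0..i] and divide by i + 1.
 * One pass over the array, so the running time is O(n). */
void prefix_averages(const int X[], double A[], int n)
{
    double s = 0.0;                /* running sum of X[0..i] */
    for (int i = 0; i < n; i++) {
        s += X[i];
        A[i] = s / (i + 1);        /* average of the first i+1 elements */
    }
}
```

For X = {1, 3, 5, 7} the prefix averages are 1, 2, 3, 4.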
