
DAA Module-1
1. Introduction
1.1 What is an algorithm?
Algorithm: It is a sequence of unambiguous instructions for solving a problem.

Notion of an algorithm

Important points about algorithms, illustrated further by the examples that follow:

1. The nonambiguity requirement for each step of an algorithm cannot be compromised.
2. The range of inputs for which an algorithm works has to be specified carefully.
3. The same algorithm can be represented in several different ways.
4. Several algorithms for solving the same problem may exist.
5. Algorithms for the same problem can be based on very different ideas and can solve the problem with dramatically different speeds.

An algorithm can be described in many ways:

• Natural language such as English, although if we select this option, we must make sure that the resulting instructions are definite.
• Graphic representations called flowcharts, which work well only if the algorithm is small and simple.
• Pseudocode that resembles a programming language such as C or Pascal.

Example-1

Euclid's algorithm for computing gcd(m, n)


Step 1 If n = 0, return the value of m as the answer and stop; otherwise, proceed to Step 2.
Step 2 Divide m by n and assign the value of the remainder to r.
Step 3 Assign the value of n to m and the value of r to n. Go to Step 1.

Alternatively, we can express the same algorithm in a pseudocode:

ALGORITHM gcd(m, n)
//Computes gcd(m, n) by Euclid's algorithm
//Input: Two nonnegative, not-both-zero integers m and n
//Output: Greatest common divisor of m and n

while n ≠ 0 do
{
r ← m mod n
m ← n
n ← r
}
return m
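The pseudocode above maps directly onto C. The following is a minimal sketch (the function name, the unsigned type, and the small driver are our own choices, not part of the text):

#include <stdio.h>

/* Euclid's algorithm: repeatedly replace (m, n) by (n, m mod n);
   when n reaches 0, m holds gcd(m, n). */
unsigned int gcd(unsigned int m, unsigned int n)
{
    while (n != 0) {
        unsigned int r = m % n;  /* r <- m mod n */
        m = n;                   /* m <- n */
        n = r;                   /* n <- r */
    }
    return m;
}

int main(void)
{
    printf("gcd(60, 24) = %u\n", gcd(60, 24));  /* prints 12 */
    return 0;
}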

There are several algorithms for computing the greatest common divisor. Let us look at
the other two methods for this problem.

The first is simply based on the definition of the greatest common divisor of m and n as
the largest integer that divides both numbers evenly

Consecutive integer checking algorithm for computing gcd(m, n)


Step 1: Assign the value of min{m, n} to t.
Step 2: Divide m by t. If the remainder of this division is 0, go to Step 3; otherwise, go to Step 4.
Step 3: Divide n by t. If the remainder of this division is 0, return the value of t as the answer and stop; otherwise, proceed to Step 4.
Step 4: Decrease the value of t by 1. Go to Step 2.

Note that unlike Euclid's algorithm, this algorithm, in the form presented, does not work
correctly when one of its input numbers is zero. This example illustrates why it is so
important to specify the range of an algorithm's inputs explicitly and carefully.
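As a hedged illustration, here is the same procedure in C; the explicit guard for zero inputs is our own addition (the algorithm as stated does not handle them), and the function name is ours:

/* Consecutive integer checking: start t at min{m, n} and decrease it
   until t divides both m and n. Terminates at t = 1 at the latest. */
unsigned int gcd_check(unsigned int m, unsigned int n)
{
    if (m == 0 || n == 0)
        return 0;  /* guard added by us: the stated algorithm fails on zero */
    unsigned int t = (m < n) ? m : n;    /* Step 1 */
    while (m % t != 0 || n % t != 0)     /* Steps 2 and 3 */
        t--;                             /* Step 4 */
    return t;
}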

Third procedure: Middle-school method


Middle-school procedure for computing gcd(m, n)
Step 1: Find the prime factors of m.
Step 2: Find the prime factors of n.
Step 3: Identify all the common factors in the two prime expansions found in Steps 1 and 2. (If p is a common factor occurring p_m and p_n times in m and n, respectively, it should be repeated min{p_m, p_n} times.)
Step 4: Compute the product of all the common factors and return it as the greatest common divisor of the numbers given.

Example:
Thus, for the numbers 60 and 24, we get
60 = 2 · 2 · 3 · 5
24 = 2 · 2 · 2 · 3
gcd(60, 24) = 2 · 2 · 3 = 12


Generating the prime factors in Steps 1 and 2 relies on a simple algorithm for producing consecutive primes not exceeding any given integer n. It was probably invented in ancient Greece and is known as the sieve of Eratosthenes (ca. 200 B.C.).
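The text only names the sieve; a minimal C sketch of it follows, with the interface (a caller-freed array of flags) being our own design choice:

#include <stdbool.h>
#include <stdlib.h>

/* Sieve of Eratosthenes: cross out multiples of each prime p,
   starting at p*p; indices still marked true at the end are prime.
   Returns an array of n+1 flags (is_prime[i] for 0 <= i <= n);
   the caller owns and frees it. */
bool *sieve(int n)
{
    bool *is_prime = malloc((size_t)(n + 1) * sizeof *is_prime);
    if (is_prime == NULL)
        return NULL;
    for (int i = 0; i <= n; i++)
        is_prime[i] = (i >= 2);          /* 0 and 1 are not prime */
    for (int p = 2; (long)p * p <= n; p++)
        if (is_prime[p])
            for (long q = (long)p * p; q <= n; q += p)
                is_prime[q] = false;     /* q is a multiple of p */
    return is_prime;
}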

1.2 Fundamentals of Algorithmic Problem Solving

Algorithms can be considered to be procedural solutions to problems. These solutions are not
answers but specific instructions for getting answers

Steps in designing and Analyzing an algorithm


1. Understanding the Problem
This step involves understanding the given problem completely. An input to an algorithm specifies an instance of the problem the algorithm solves. A correct algorithm is not one that works most of the time, but one that works correctly for all legitimate inputs.

2. Ascertaining the Capabilities of a Computational Device


The vast majority of algorithms in use today are still destined to be programmed for a computer closely resembling the von Neumann machine. Its central assumption is that instructions are executed one after another, one operation at a time. Accordingly, algorithms designed to be executed on such machines are called sequential algorithms. Newer computers can execute operations concurrently, i.e., in parallel; algorithms that take advantage of this capability are called parallel algorithms.

3. Choosing between Exact and Approximate Problem Solving


The next principal decision is to choose between solving the problem exactly or solving it approximately. In the former case, an algorithm is called an exact algorithm; in the latter case, an algorithm is called an approximation algorithm. Approximation algorithms are used because there are important problems that simply cannot be solved exactly for most of their instances; examples include extracting square roots, solving nonlinear equations, and evaluating definite integrals.

Deciding on Appropriate Data Structures

Some algorithms do not demand any ingenuity in representing their inputs, but others require ingenious data structures.

Algorithm Design Techniques


An algorithm design technique (or "strategy" or "paradigm") is a general approach to solving
problems algorithmically that is applicable to a variety of problems from different areas of
computing. Learning these techniques is of utmost importance for the following reasons.

• They provide guidance for designing algorithms for new problems, i.e., problems for
which there is no known satisfactory algorithm.
• Algorithms are the cornerstone of computer science. Algorithm design techniques make it possible to classify algorithms according to an underlying design idea; therefore, they can serve as a natural way to both categorize and study algorithms.

Methods of Specifying an Algorithm


These are the options that are most widely used nowadays for specifying algorithms.
• Natural language has an appeal; however, the inherent ambiguity of any natural language makes a clear description of algorithms difficult.
• Pseudocode is a mixture of a natural language and programming-language-like constructs. Pseudocode is usually more precise than natural language, and its usage often yields clearer algorithm descriptions.
• In the earlier days of computing, the dominant method for specifying algorithms was a flowchart. This representation technique has proved to be inconvenient for all but very simple algorithms.

Proving an Algorithm's Correctness


It has to be proved that the algorithm yields a required result for every valid input in a finite
amount of time.
For some algorithms, a proof of correctness is quite easy; for others, it can be quite complex.
A common technique for proving correctness is to use mathematical induction because an
algorithm's iterations provide a natural sequence of steps needed for such proofs.


Although tracing the algorithm's performance for a few specific inputs can be a very worthwhile activity, it cannot prove the algorithm's correctness conclusively. But in order to show that an algorithm is incorrect, you need just one instance of its input for which the algorithm fails.

If the algorithm is found to be incorrect, you need either to redesign it under the same decisions regarding the data structures, the design technique, and so on, or, in a more dramatic reversal, to reconsider one or more of those decisions. The notion of correctness for approximation algorithms is less straightforward than it is for exact algorithms. For an approximation algorithm, we usually would like to be able to show that the error produced by the algorithm does not exceed a predefined limit.

Analyzing an Algorithm

There are two kinds of algorithm efficiency: time efficiency and space efficiency.
• Time efficiency indicates how fast the algorithm runs.
• Space efficiency indicates how much extra memory the algorithm needs.

Other desirable characteristics of an algorithm are simplicity and generality: generality of the problem the algorithm solves and of the range of inputs it accepts.

Coding an Algorithm

Most algorithms are destined to be ultimately implemented as computer programs. It is important to implement the algorithm correctly and efficiently.

Algorithm Design and Analysis Process


1.3 Analysis Framework / Performance Analysis

There are two kinds of efficiency: time efficiency and space efficiency.
• Time efficiency, also called time complexity, indicates how fast the algorithm in question runs, i.e., the time required by the algorithm to run to completion.
• Space efficiency, also called space complexity, refers to the amount of memory units required by the algorithm in addition to the space needed for its input and output, i.e., the total memory the algorithm requires to run to completion.

1.3.1 Measuring an Input's Size

Almost all algorithms run longer on larger inputs. For example, it takes longer to sort larger arrays,
multiply larger matrices, and so on. Therefore an algorithm’s efficiency can be analyzed as a
function of some parameter n indicating the algorithm’s input size.
The choice of an appropriate size metric can be influenced by operations of the algorithm in
question. For example
1. Consider the input size for a spell-checking algorithm. If the algorithm examines individual characters of its input, the size is measured by the number of characters. If it works by processing words, then the number of words in the input should be counted.
2. Algorithms solving problems such as checking the primality of a positive integer n. Here, the input is just one number, and it is this number's magnitude that determines the input size. In such situations, it is preferable to measure size by the number b of bits in n's binary representation: b = ⌊log2 n⌋ + 1.

1.3.2 Units for Measuring Running Time


Drawbacks of using a standard unit of time measurement such as a second or a millisecond:
• dependence on the speed of a particular computer,
• dependence on the quality of a program implementing the algorithm,
• dependence on the compiler used in generating the machine code, and
• the difficulty of clocking the actual running time of the program.
Since an algorithm's efficiency has to be measured, we would like to have a metric that does not depend on these external factors.

One possible approach is to count the number of times each of the algorithm’s operations is
executed. This approach is both excessively difficult and, as we shall see, usually unnecessary.
Therefore it is sufficient to identify the most important operation of the algorithm, called the basic
operation, the operation contributing the most to the total running time, and compute the number
of times the basic operation is executed.

Basic operation of an algorithm is usually the most time-consuming operation in the algorithm’s
innermost loop.
Example:
1. Most sorting algorithms work by comparing elements (keys) of a list being sorted with
each other; for such algorithms, the basic operation is a key comparison.


2. Algorithms for mathematical problems typically involve some or all of the four
arithmetical operations: addition, subtraction, multiplication, and division. Of the four, the
most time-consuming operation is division, followed by multiplication and then addition
and subtraction.

Thus, the established framework for the analysis of an algorithm’s time efficiency suggests
measuring it by counting the number of times the algorithm’s basic operation is executed on inputs
of size n.

Application: Let c_op be the execution time of an algorithm's basic operation on a particular computer, and let C(n) be the number of times this operation needs to be executed for this algorithm. Then we can estimate the running time T(n) of a program implementing this algorithm on that computer by the formula

T(n) ≈ c_op · C(n).

Assuming that C(n) = (1/2)n(n − 1), how much longer will the algorithm run if we double its input size? It would run about 4 times longer, since

T(2n)/T(n) ≈ C(2n)/C(n) = [(1/2) · 2n · (2n − 1)] / [(1/2) · n · (n − 1)] ≈ 4n²/n² = 4 for large n.

Note that the value of c_op was cancelled out in the ratio. Also note that 1/2, the multiplicative constant in the formula for the count C(n), was cancelled out as well. It is for these reasons that the efficiency analysis framework ignores multiplicative constants and concentrates on the count's order of growth within a constant multiple for large-size inputs.

1.3.3 Order of Growth

For large values of n, it is the function’s order of growth that counts. Table 1.1 contains values of
a few functions particularly important for analysis of algorithms. The magnitude of the numbers
in Table 1.1 has a profound significance for the analysis of algorithms.
Table 1.1 Values (some approximate) of several functions important for analysis of algorithms


Figure: plot of these functions against n.

The function growing the slowest among these is the logarithmic function. We can expect a program implementing an algorithm with a logarithmic basic-operation count to run practically instantaneously on inputs of all realistic sizes. At the other extreme are the exponential function 2^n and the factorial function n!; both grow so fast that their values become very large even for rather small values of n.

1.3.4 Worst-Case, Best-Case, and Average-Case Efficiencies

There are many algorithms for which running time depends not only on an input size but also on
the specifics of a particular input. Consider, as an example, sequential search. This algorithm
searches for a given item (some search key K) in a list of n elements by checking successive
elements of the list until either a match with the search key is found or the list is exhausted.

ALGORITHM SequentialSearch(A[0..n − 1], K)
//Searches for a given value in a given array by sequential search
//Input: An array A[0..n − 1] and a search key K
//Output: The index of the first element in A that matches K or −1 if there are no matching elements
{
for i ← 0 to n − 1 do
{
if A[i] = K
return i
}
return −1
}
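In C, the same search reads as follows; a sketch, with names of our own choosing:

/* Sequential search: scan A[0..n-1] and return the index of the first
   element equal to key, or -1 if no element matches. */
int sequential_search(const int a[], int n, int key)
{
    for (int i = 0; i < n; i++)
        if (a[i] == key)      /* the basic operation: one key comparison */
            return i;
    return -1;
}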


In the worst case, when there are no matching elements or the first matching element happens to
be the last one on the list, the algorithm makes the largest number of key comparisons among all
possible inputs of size n: Cworst(n) = n.

The worst-case efficiency of an algorithm is its efficiency for the worst-case input of size n, which is an input (or inputs) of size n for which the algorithm runs the longest among all possible inputs of that size. The worst-case analysis provides very important information about an algorithm's efficiency by bounding its running time from above: it guarantees that for any instance of size n, the running time will not exceed Cworst(n).
The best-case efficiency of an algorithm is its efficiency for the best-case input of size n, which is an input (or inputs) of size n for which the algorithm runs the fastest among all possible inputs of that size. For example, the best-case inputs for sequential search are lists of size n with their first element equal to the search key; accordingly, Cbest(n) = 1 for this algorithm.

Average-case efficiency provides information about the algorithm's behavior on a random or typical input. To analyze an algorithm's average-case efficiency, we must make some assumptions about possible inputs of size n. Consider the sequential search example. The standard assumptions are that
• the probability of a successful search is equal to p (0 ≤ p ≤ 1), and
• the probability of the first match occurring in the ith position of the list is the same for every i and is equal to p/n; in that case the algorithm makes i comparisons.
In the case of an unsuccessful search, the number of comparisons will be n, with the probability of such a search being (1 − p).
Therefore, under these assumptions we can find the average number of key comparisons Cavg(n) as follows:

Cavg(n) = [1 · p/n + 2 · p/n + ... + n · p/n] + n · (1 − p)
        = (p/n)(1 + 2 + ... + n) + n(1 − p)
        = (p/n) · n(n + 1)/2 + n(1 − p)
        = p(n + 1)/2 + n(1 − p).

If p = 1 (the search must be successful), the average number of key comparisons made by sequential search is (n + 1)/2; that is, the algorithm will inspect, on average, about half of the list's elements. If p = 0 (the search must be unsuccessful), the average number of key comparisons will be n because the algorithm will inspect all n elements on all such inputs.

Investigation of the average-case efficiency is considerably more difficult than investigation of the worst-case and best-case efficiencies.

1.4 Asymptotic Notations

To compare and rank orders of growth, computer scientists use three notations: O (big oh), Ω (big omega), and Θ (big theta).


Let t(n) and g(n) be any nonnegative functions defined on the set of natural numbers.

• t(n) is the algorithm's running time (usually indicated by its basic operation count C(n)), and
• g(n) is some simple function to compare the count with.
1.4.1 O (Big Oh) Notation
Informally, O(g(n)) is the set of all functions with a lower or same order of growth as g(n).

Definition: A function t(n) is said to be in O(g(n)), denoted t(n) ∈ O(g(n)), if t(n) is bounded above by some constant multiple of g(n) for all large n, i.e., if there exist some positive constant c and some nonnegative integer n0 such that

t(n) ≤ c·g(n) for all n ≥ n0.

Figure 1.2 Big-oh notation: t (n) ∈ O(g(n)).


Example:

Prove: 100n + 5 ∈ O(n²).

To prove this we have to find a constant c and an initial value n0 such that
100n + 5 ≤ c·n² for all n ≥ n0.
Taking c = 105 and n0 = 1:
100n + 5 ≤ 105n² for all n ≥ 1
(check: n = 1 gives 105 = 105; n = 2 gives 205 < 420).
Therefore 100n + 5 ∈ O(n²).

Note: The definition gives us a lot of freedom in choosing specific values for constants c and n0.

1.4.2 ( omega) Notation


The second notation, (g(n)), stands for the set of all functions with a higher or same order of
growth as g(n)

Definition : A function t (n) is said to be in  (g(n)), denoted t (n) ∈ (g(n)), if t (n) is bounded
below by some positive constant multiple of g(n) for all large n, i.e., if there exist some positive
constant c and some nonnegative integer n0 such that

t (n) ≥ cg(n) for all n ≥ n0.


Figure 1.3 (big omega)


Example :
n3 ∈  (n2)
To prove we have tofind a constant c and initial value n0 such that
n3 ≥ c.n2 for all n ≥ n0
selecting c = 1 and n0 = 0
n3 ≥ 1.n2 for all n ≥ 0,
therefore n3 ∈  (n2)

1.4.3 (Theta) Notation


The third notation (g(n)is the set of all functions that have the same order of growth as g(n)

Definition : A function t (n) is said to be in  (g(n)), denoted t (n) ∈ (g(n)), if t (n) is bounded
both above and below by some positive constant multiples of g(n) for all large n, i.e., if there
exist some positive constants c1 and c2 and some non negative integer n0 such that
c2g(n) ≤ t (n) ≤ c1g(n) for all n ≥ n0.

Figure 1.4 Big Theta Notation


Example:
Prove that (1/2)n(n − 1) ∈ Θ(n²).

First, we prove the right inequality (the upper bound). Selecting c1 = 1/2 and n0 = 0:
(1/2)n(n − 1) = (1/2)n² − (1/2)n ≤ (1/2)n² for all n ≥ 0.

Second, we prove the left inequality (the lower bound). Selecting c2 = 1/4 and n0 = 2:
(1/2)n(n − 1) = (1/2)n² − (1/2)n ≥ (1/2)n² − (1/2)n · (n/2) = (1/4)n² for all n ≥ 2.


Hence c1 = 1/2, c2 = 1/4, and n0 = 2.

1.4.4 o (Little Oh) Notation

A function t(n) is said to be in o(g(n)), denoted t(n) ∈ o(g(n)), if t(n) is smaller than any positive constant multiple of g(n) for all large n, i.e., if for every positive constant c there exists a nonnegative integer n0 such that

t(n) < c·g(n) for all n ≥ n0.

1.4.5 Property Involving the Asymptotic Notations

If t1(n) ∈ O(g1(n)) and t2(n) ∈ O(g2(n)), then t1(n) + t2(n) ∈ O(max{g1(n), g2(n)}).

Proof:
Since t1(n) ∈ O(g1(n)), there exist some positive constant c1 and some nonnegative integer n1 such that
t1(n) ≤ c1·g1(n) for all n ≥ n1.
Similarly, since t2(n) ∈ O(g2(n)), there exist some positive constant c2 and some nonnegative integer n2 such that
t2(n) ≤ c2·g2(n) for all n ≥ n2.

Let us denote c3 = max{c1, c2} and consider n ≥ max{n1, n2} so that we can use both inequalities. Adding the two inequalities above yields

t1(n) + t2(n) ≤ c1·g1(n) + c2·g2(n)
             ≤ c3·g1(n) + c3·g2(n)
             = c3[g1(n) + g2(n)]
             ≤ c3 · 2 max{g1(n), g2(n)}.

Hence t1(n) + t2(n) ∈ O(max{g1(n), g2(n)}), with the constant c = 2c3 = 2 max{c1, c2} and n0 = max{n1, n2} required by the O definition.

1.4.6 Basic Asymptotic Efficiency Classes

Table 1.3 Basic efficiency classes

Class    Name          Comments
1        constant      Short of best-case efficiencies, very few realistic examples can be given.
log n    logarithmic   Typically a result of cutting a problem's size by a constant factor on each iteration.
n        linear        Algorithms that scan a list of size n, e.g., sequential search.
n log n  linearithmic  Many divide-and-conquer algorithms, e.g., mergesort and quicksort.
n^2      quadratic     Algorithms with two embedded loops, e.g., elementary sorting and operations on n × n matrices.
n^3      cubic         Algorithms with three embedded loops, e.g., algorithms from linear algebra such as matrix multiplication.
2^n      exponential   Algorithms that generate all subsets of an n-element set.
n!       factorial     Algorithms that generate all permutations of an n-element set.

1.5 Mathematical Analysis of Nonrecursive Algorithms

The efficiency of nonrecursive algorithms is analyzed here.

General Plan for Analyzing the Time Efficiency of Nonrecursive Algorithms

1. Decide on a parameter (or parameters) indicating an input’s size.


2. Identify the algorithm’s basic operation. (As a rule, it is located in the innermost loop.)
3. Check whether the number of times the basic operation is executed depends only on the size of an input. If it also depends on some additional property, the worst-case, average-case, and, if necessary, best-case efficiencies have to be investigated separately.
4. Set up a sum expressing the number of times the algorithm’s basic operation is executed.
5. Using standard formulas and rules of sum manipulation, either find a closed form formula
for the count or, at the very least, establish its order of growth.

Note: Important formulas

Two basic rules of sum manipulation:
Σ_{i=l}^{u} c·a_i = c · Σ_{i=l}^{u} a_i
Σ_{i=l}^{u} (a_i ± b_i) = Σ_{i=l}^{u} a_i ± Σ_{i=l}^{u} b_i

Two summation formulas:
Σ_{i=l}^{u} 1 = u − l + 1 (for l ≤ u)
Σ_{i=1}^{n} i = 1 + 2 + ... + n = n(n + 1)/2 ∈ Θ(n²)

Example 1: The problem of finding the value of the largest element in a list of n numbers. We assume that the list is implemented as an array. The pseudocode is given below.

ALGORITHM MaxElement(A[0..n − 1])
//Determines the value of the largest element in a given array
//Input: An array A[0..n − 1] of real numbers
//Output: The value of the largest element in A
{
maxval ← A[0]
for i ← 1 to n − 1 do
if A[i] > maxval
maxval ← A[i]
return maxval
}

Input size here is the number of elements in the array, i.e., n.

Basic operation: There are two operations in the loop's body, the comparison A[i] > maxval and the assignment maxval ← A[i]. Since the comparison is executed on each repetition of the loop and the assignment is not, we should consider the comparison to be the algorithm's basic operation. Since the number of comparisons is the same for all arrays of size n, we do not have to distinguish between the best, worst, and average cases.

Let C(n) denote the number of times this comparison is executed. The algorithm makes one comparison on each execution of the loop, which is repeated for each value of the loop's variable i within the bounds 1 and n − 1, inclusive. Therefore, we get the following sum for C(n):

C(n) = Σ_{i=1}^{n−1} 1 = n − 1 ∈ Θ(n).
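A C version of MaxElement makes the count easy to see: the comparison in the loop body runs exactly once per iteration, i.e., n − 1 times (a sketch; int elements and the function name are our assumptions):

/* Largest element of A[0..n-1]; assumes n >= 1.
   The comparison a[i] > maxval executes exactly n-1 times. */
int max_element(const int a[], int n)
{
    int maxval = a[0];
    for (int i = 1; i < n; i++)
        if (a[i] > maxval)    /* basic operation */
            maxval = a[i];
    return maxval;
}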

Example 2:

Consider the element uniqueness problem: To check whether all the elements in a given array of
n elements are distinct. This problem can be solved by the following algorithm.

ALGORITHM UniqueElements(A[0..n − 1])
//Determines whether all the elements in a given array are distinct
//Input: An array A[0..n − 1]
//Output: Returns "true" if all the elements in A are distinct and "false" otherwise
{
for i ← 0 to n − 2 do
for j ← i + 1 to n − 1 do
if A[i] = A[j] return false
return true
}

1. Input size - n, the number of elements in the array.


2. Basic operation - comparison of two elements in the innermost loop.


3. The number of element comparisons depends not only on n but also on whether there are
equal elements in the array and, if there are, which array positions they occupy. So we will
limit our investigation to the worst case only.

By definition, the worst-case input is an array for which the number of element comparisons Cworst(n) is the largest among all arrays of size n. An inspection of the innermost loop reveals that there are two kinds of worst-case inputs: arrays with no equal elements, and arrays in which the last two elements are the only pair of equal elements.

For such inputs, one comparison is made for each repetition of the innermost loop, i.e., for each value of the loop variable j between its limits i + 1 and n − 1; this is repeated for each value of the outer loop, i.e., for each value of the loop variable i between its limits 0 and n − 2. Accordingly, we get

Cworst(n) = Σ_{i=0}^{n−2} Σ_{j=i+1}^{n−1} 1 = Σ_{i=0}^{n−2} (n − 1 − i) = (n − 1) + (n − 2) + ... + 1 = n(n − 1)/2 ∈ Θ(n²).
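For reference, a C sketch of UniqueElements (types and naming are ours):

#include <stdbool.h>

/* Element uniqueness by brute force: compare every pair (i, j) with
   i < j. A worst-case input (all elements distinct) costs n(n-1)/2
   comparisons. */
bool unique_elements(const int a[], int n)
{
    for (int i = 0; i < n - 1; i++)
        for (int j = i + 1; j < n; j++)
            if (a[i] == a[j])     /* basic operation */
                return false;
    return true;
}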

Example 3: Given two n × n matrices A and B, find the time efficiency of the algorithm for
computing their product C = AB.

ALGORITHM MatrixMultiplication(A[0..n − 1, 0..n − 1], B[0..n − 1, 0..n − 1])
//Multiplies two square matrices of order n by the definition-based algorithm
//Input: Two n × n matrices A and B
//Output: Matrix C = AB
{
for i ← 0 to n − 1 do
for j ← 0 to n − 1 do
{
C[i, j] ← 0.0
for k ← 0 to n − 1 do
C[i, j] ← C[i, j] + A[i, k] * B[k, j]
}
return C
}

Input size: the matrix order n.

Basic operation: There are two arithmetical operations in the innermost loop here: multiplication and addition. We do not have to choose between them, because on each repetition of the innermost loop each of the two is executed exactly once, so by counting one we automatically count the other. Still, following a well-established tradition, we consider multiplication the basic operation.
Since this count depends only on the size of the input matrices, we do not have to investigate the worst-case, average-case, and best-case efficiencies separately.

There is just one multiplication executed on each repetition of the algorithm's innermost loop, which is governed by the variable k ranging from the lower bound 0 to the upper bound n − 1. Therefore, the number of multiplications made for every pair of specific values of variables i and j is

Σ_{k=0}^{n−1} 1 = n,

and the total number of multiplications M(n) is expressed by the following triple sum:

M(n) = Σ_{i=0}^{n−1} Σ_{j=0}^{n−1} Σ_{k=0}^{n−1} 1 = n³.

If we now want to estimate the running time of the algorithm on a particular machine, we can do it by the product

T(n) ≈ c_m · M(n) = c_m · n³,

where c_m is the time of one multiplication on the machine in question. We would get a more accurate estimate if we also took into account the time spent on the additions:

T(n) ≈ c_m · M(n) + c_a · A(n) = (c_m + c_a) · n³,

where c_a is the time of one addition and A(n) = n³ is the number of additions. Note that the estimates differ only by their multiplicative constants and not by their order of growth.
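A C rendering of the definition-based product is below; a sketch that assumes C99 variable-length array parameters:

/* Definition-based matrix product C = A*B for n x n matrices.
   The innermost statement performs one multiplication and one
   addition, so M(n) = A(n) = n^3 in every case. */
void matrix_multiply(int n, const double a[n][n], const double b[n][n],
                     double c[n][n])
{
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++) {
            c[i][j] = 0.0;
            for (int k = 0; k < n; k++)
                c[i][j] += a[i][k] * b[k][j];   /* basic operation */
        }
}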

Example 4: The following algorithm finds the number of binary digits in the binary representation of a positive decimal integer.

ALGORITHM Binary(n)
//Input: A positive decimal integer n
//Output: The number of binary digits in n's binary representation
count ← 1
while n > 1 do
{
count ← count + 1
n ← ⌊n/2⌋
}
return count

The most frequently executed operation here is not inside the while loop but rather the comparison n > 1 that determines whether the loop's body will be executed. The loop variable takes on only a few values between its lower and upper limits; therefore, we have to use an alternative way of computing the number of times the loop is executed. Since the value of n is about halved on each repetition of the loop, the answer is about log2 n; more precisely, the comparison is executed ⌊log2 n⌋ + 1 times.
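A C sketch of the same loop (the unsigned input type is our assumption):

/* Number of binary digits of n >= 1: the loop halves n each time,
   so the comparison n > 1 executes floor(log2 n) + 1 times. */
int binary_digits(unsigned int n)
{
    int count = 1;
    while (n > 1) {
        count++;
        n /= 2;    /* integer division: n <- floor(n / 2) */
    }
    return count;
}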

1.6 Mathematical Analysis of Recursive Algorithms

Recursive Algorithms

An algorithm is said to be recursive if the same algorithm is invoked in its body.
• Direct recursion: the algorithm calls itself.
• Indirect recursion: algorithm A calls another algorithm, which in turn calls A.
The recursive mechanism is appropriate when the problem itself is defined recursively.

The framework for analysis of recursive algorithms is presented here

General Plan for Analyzing the Time Efficiency of Recursive Algorithms


1. Decide on a parameter (or parameters) indicating an input’s size.
2. Identify the algorithm’s basic operation.
3. Check whether the number of times the basic operation is executed can vary on different
inputs of the same size; if it can, the worst-case, average-case, and best-case efficiencies
must be investigated separately.
4. Set up a recurrence relation, with an appropriate initial condition, for the number of times
the basic operation is executed.
5. Solve the recurrence or, at least, ascertain the order of growth of its solution.

EXAMPLE 1: Compute the factorial function F(n) = n! for an arbitrary nonnegative integer n. Since
n! = 1 · ... · (n − 1) · n = (n − 1)! · n for n ≥ 1
and 0! = 1 by definition, we can compute F(n) = F(n − 1) · n with the following recursive algorithm.

ALGORITHM F(n)
{
//Computes n! recursively
//Input: A nonnegative integer n
//Output: The value of n!
if n = 0 return 1
else return F(n − 1) * n
}
We consider n as an indicator of this algorithm's input size.

The basic operation of the algorithm is multiplication; let M(n) denote the number of its executions. Since the function F(n) is computed according to the formula
F(n) = F(n − 1) · n for n > 0, with F(0) = 1,
the recurrence relation for the number of multiplications M(n) is

M(n) = M(n − 1) + 1 for n > 0,

where the M(n − 1) multiplications compute F(n − 1) and the final one multiplies the result by n.

The initial condition tells us the value with which the sequence starts. We can obtain this value by inspecting the condition that makes the algorithm stop its recursive calls:
if n = 0 return 1.
This tells us two things:
1. Since the calls stop when n = 0, the smallest value of n for which this algorithm is executed is 0.
2. When n = 0, the algorithm performs no multiplications.
Therefore the initial condition is M(0) = 0.

Solving the recurrence by the method of backward substitution:

M(n) = M(n − 1) + 1                                substitute M(n − 1) = M(n − 2) + 1
     = [M(n − 2) + 1] + 1 = M(n − 2) + 2           substitute M(n − 2) = M(n − 3) + 1
     = [M(n − 3) + 1] + 2 = M(n − 3) + 3

By observation, the general formula for the pattern is
M(n) = M(n − i) + i.
Since the initial condition is specified for n = 0, substituting i = n gives
M(n) = M(n − n) + n = M(0) + n = n.
Therefore M(n) ∈ Θ(n).
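The recursion translates directly into C; a minimal sketch (the unsigned long long return type is our own choice, and it overflows past 20!):

/* Recursive factorial: makes exactly M(n) = n multiplications. */
unsigned long long factorial(unsigned int n)
{
    if (n == 0)
        return 1;                  /* base case: 0! = 1, no multiplication */
    return factorial(n - 1) * n;   /* one multiplication per call */
}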

EXAMPLE 2: Tower of Hanoi

Move the disks from tower A to tower B using tower C for intermediate storage. Only one disk can be moved at a time. In addition, at no time can a larger disk be on top of a smaller disk.

Figure 1.1 Tower of Hanoi

This problem has a very good recursive solution. Assume that there are n disks. First move n − 1 disks from tower A to tower C. Now move the largest disk from tower A to tower B. Then move the remaining n − 1 disks from tower C to tower B.
ALGORITHM TowersOfHanoi(n, S, D, T)
//Move the top n disks from tower S to tower D, using tower T as auxiliary
{
if n ≥ 1 then
{
TowersOfHanoi(n − 1, S, T, D)
write("move top disk from tower", S, "to top of tower", D)
TowersOfHanoi(n − 1, T, D, S)
}
}

Recursive solution to the Tower of Hanoi puzzle
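In C the same solution might look as follows (a sketch; naming towers by single characters and printing moves with printf are our choices):

#include <stdio.h>

/* Tower of Hanoi: move n disks from tower s to tower d using t as
   a spare. Makes M(n) = 2^n - 1 moves. */
void towers_of_hanoi(int n, char s, char d, char t)
{
    if (n >= 1) {
        towers_of_hanoi(n - 1, s, t, d);   /* move n-1 disks s -> t */
        printf("move top disk from tower %c to top of tower %c\n", s, d);
        towers_of_hanoi(n - 1, t, d, s);   /* move n-1 disks t -> d */
    }
}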

Input size: the number of disks n.

Basic operation: moving one disk.
The number of moves M(n) depends only on n, and the recurrence equation for it is
M(n) = M(n − 1) + 1 + M(n − 1) for n > 1,
that is, M(n) = 2M(n − 1) + 1, with the initial condition M(1) = 1.

Solving the recurrence relation by the method of backward substitutions:

M(n) = 2M(n − 1) + 1                                       substitute M(n − 1) = 2M(n − 2) + 1
     = 2[2M(n − 2) + 1] + 1 = 2^2 M(n − 2) + 2 + 1         substitute M(n − 2) = 2M(n − 3) + 1
     = 2^2 [2M(n − 3) + 1] + 2 + 1 = 2^3 M(n − 3) + 2^2 + 2 + 1.

The pattern suggests that the next term will be 2^4 M(n − 4) + 2^3 + 2^2 + 2 + 1; in general, after i substitutions,
M(n) = 2^i M(n − i) + 2^(i−1) + 2^(i−2) + ... + 2 + 1 = 2^i M(n − i) + 2^i − 1.

Since the initial condition is specified for n = 1, which is achieved for i = n − 1,
M(n) = 2^(n−1) M(1) + 2^(n−1) − 1 = 2^(n−1) + 2^(n−1) − 1 = 2^n − 1.

Thus, we have an exponential algorithm, which will run for an unimaginably long time even for moderate values of n.

When a recursive algorithm makes more than a single call to itself, it can be useful for analysis purposes to construct a tree of its recursive calls. In this tree, nodes correspond to recursive calls, and we can label them with the value of the parameter. For the Tower of Hanoi example, the tree is given in Figure 1.5. By counting the number of nodes in the tree, we get the total number of calls made by the Tower of Hanoi algorithm:

C(n) = Σ_{l=0}^{n−1} 2^l = 2^n − 1, where l is the level in the tree in Figure 1.5.


Figure 1.5 Tree representing the calls made by TOH algorithm

Example 3: A recursive version of the algorithm that finds the number of digits in the binary representation of a decimal number.

ALGORITHM BinRec(n)
//Input: A positive decimal integer n
//Output: The number of binary digits in n's binary representation
if n = 1 return 1
else return BinRec(⌊n/2⌋) + 1

Input size: the magnitude of the number n.


Basic operation: the addition.
The recurrence relation for the number of additions is
A(n) = A(⌊n/2⌋) + 1 for n > 1.
Since the recursive calls end when n is equal to 1 and no additions are made then, the initial condition is
A(1) = 0.

It is difficult to use the method of backward substitutions on values of n that are not powers of 2. Therefore, the standard approach to solving such a recurrence is to solve it only for n = 2^k. After getting a solution for powers of 2, we can fine-tune this solution to get a formula valid for an arbitrary n.

For n = 2^k the recurrence takes the form
A(2^k) = A(2^(k−1)) + 1 for k > 0,
A(2^0) = 0.

Now applying backward substitution:

A(2^k) = A(2^(k−1)) + 1                            substitute A(2^(k−1)) = A(2^(k−2)) + 1
       = [A(2^(k−2)) + 1] + 1 = A(2^(k−2)) + 2     substitute A(2^(k−2)) = A(2^(k−3)) + 1
       = [A(2^(k−3)) + 1] + 2 = A(2^(k−3)) + 3
       ...
       = A(2^(k−i)) + i.

Since the initial condition is specified for k = 0, substituting i = k gives
A(2^k) = A(2^0) + k = A(1) + k = k,
or, after returning to the original variable n = 2^k and hence k = log2 n,
A(n) = log2 n ∈ Θ(log n).

1.7 Brute-Force Approaches

1.7.1 Selection Sort


We start selection sort by scanning the entire given list to find its smallest element and exchange
it with the first element, putting the smallest element in its final position in the sorted list. Then
we scan the list, starting with the second element, to find the smallest among the last n − 1
elements and exchange it with the second element, putting the second smallest element in its
final position.

ALGORITHM SelectionSort(A[0..n − 1])
//Sorts a given array by selection sort
//Input: An array A[0..n − 1] of orderable elements
//Output: Array A[0..n − 1] sorted in nondecreasing order
for i ← 0 to n − 2 do
{
min ← i
for j ← i + 1 to n − 1 do
if A[j] < A[min] min ← j
swap A[i] and A[min]
}

Example

Time Complexity Analysis

The input size is given by the number of elements n; the basic operation is the key comparison A[j] < A[min]. The number of times it is executed depends only on the array size and is given by the following sum:

C(n) = Σ_{i=0}^{n−2} Σ_{j=i+1}^{n−1} 1 = Σ_{i=0}^{n−2} (n − 1 − i) = n(n − 1)/2.

Thus, selection sort is a Θ(n²) algorithm on all inputs.
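A C sketch of selection sort (int keys are our assumption):

/* Selection sort: on pass i, find the minimum of A[i..n-1] and swap
   it into position i. Always makes n(n-1)/2 key comparisons. */
void selection_sort(int a[], int n)
{
    for (int i = 0; i < n - 1; i++) {
        int min = i;
        for (int j = i + 1; j < n; j++)
            if (a[j] < a[min])    /* basic operation */
                min = j;
        int tmp = a[i]; a[i] = a[min]; a[min] = tmp;   /* swap */
    }
}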


Bubble Sort

This sorting algorithm compares adjacent elements of the list and exchanges them if they are out of order. By doing this repeatedly, we end up "bubbling up" the largest element to the last position on the list. The next pass bubbles up the second largest element, and so on, until after n − 1 passes the list is sorted.

ALGORITHM BubbleSort(A[0..n − 1])
//Sorts a given array by bubble sort
//Input: An array A[0..n − 1] of orderable elements
//Output: Array A[0..n − 1] sorted in nondecreasing order
for i ← 0 to n − 2 do
for j ← 0 to n − 2 − i do
if A[j + 1] < A[j] swap A[j] and A[j + 1]

Example: Sort -2 0 11 -9 45


Time Complexity Analysis

The input size is n; the basic operation is the comparison A[j + 1] < A[j]. Its count is the same for all arrays of size n:

C(n) = Σ_{i=0}^{n−2} Σ_{j=0}^{n−2−i} 1 = Σ_{i=0}^{n−2} (n − 1 − i) = n(n − 1)/2 ∈ Θ(n²).
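A C sketch of bubble sort, under the same assumptions as the selection sort code above:

/* Bubble sort: each pass bubbles the largest remaining element to the
   end of the unsorted part; n(n-1)/2 comparisons on every input. */
void bubble_sort(int a[], int n)
{
    for (int i = 0; i < n - 1; i++)
        for (int j = 0; j < n - 1 - i; j++)
            if (a[j + 1] < a[j]) {                       /* out of order? */
                int tmp = a[j]; a[j] = a[j + 1]; a[j + 1] = tmp;
            }
}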

Brute-Force String Matching

Given a string of n characters called the text and a string of m characters (m ≤ n) called the pattern, find a substring of the text that matches the pattern. We want to find i, the index of the leftmost character of the first matching substring in the text. The search continues until the entire text is exhausted. The last position in the text that can still be the beginning of a matching substring is n − m; beyond that position, there are not enough characters to match the entire pattern, so the algorithm need not make any comparisons there.

ALGORITHM BruteForceStringMatch(T[0..n − 1], P[0..m − 1])
//Implements brute-force string matching
//Input: An array T[0..n − 1] of n characters representing a text and an
//array P[0..m − 1] of m characters representing a pattern
//Output: The index of the first character in the text that starts a matching substring or −1 if the search is unsuccessful

for i ← 0 to n − m do
{
j ← 0
while j < m and P[j] = T[i + j] do
j ← j + 1
if j = m return i
}
return −1

Example- Brute force string matching

Analysis
• In the worst case, the algorithm may have to make all m comparisons before shifting the pattern, and this can happen for each of the n − m + 1 tries. Thus, the worst-case time complexity is Θ(nm).
• For a typical word search in a natural-language text, however, most shifts would happen after very few comparisons. Therefore, the average-case efficiency should be considerably better than the worst-case efficiency.
• For searching in random texts, the average-case efficiency has been shown to be linear, i.e., Θ(n + m) = Θ(n).
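A C sketch of the matcher for NUL-terminated strings (the strlen-based interface is our own choice):

#include <string.h>

/* Brute-force string matching: try every shift i from 0 to n-m and
   compare the pattern character by character. Worst case Theta(nm). */
int brute_force_match(const char *text, const char *pattern)
{
    int n = (int)strlen(text);
    int m = (int)strlen(pattern);
    for (int i = 0; i <= n - m; i++) {
        int j = 0;
        while (j < m && pattern[j] == text[i + j])
            j++;
        if (j == m)
            return i;     /* match starts at index i */
    }
    return -1;            /* search unsuccessful */
}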

