
Design and Analysis of Algorithms

CT1 Question Bank Answers – Unit 1

Part B
1. Write an algorithm for Fibonacci numbers generation and
find the time complexity of an algorithm using the step count
method.
Ans.
Algorithm:

Step 1: Set a=0, b=1, c=0
Step 2: Print a, b
Step 3: REPEAT steps 4-6, for n times
Step 4: c=a+b
Step 5: Print c
Step 6: set a=b, b=c
[end of loop]
Step 7: Exit.

Analysis using Step-Count Method:
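One way to obtain the count below, assuming each assignment and each print counts as one step and the loop control is charged once per iteration:

Step 1 (three assignments): 3 steps
Step 2 (two print statements): 2 steps
Step 3 (loop control, executed n times): n steps
Step 4 (c=a+b): n steps
Step 5 (print c): n steps
Step 6 (two assignments): 2n steps
Total: T(n) = 5 + 5n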

Thus, T(n) = 5n + 5, and hence the time complexity is O(n).
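The same procedure as a runnable sketch in Python (illustrative only; the function name fibonacci_series is ours):

def fibonacci_series(n):
    # Step 1: initialise the first two terms.
    a, b = 0, 1
    # Step 2: print the first two terms.
    print(a, b)
    # Steps 3-6: generate and print n further terms.
    for _ in range(n):
        c = a + b        # next term
        print(c)
        a, b = b, c      # shift the window forward
    # The loop body runs n times, so the running time grows linearly: O(n).

fibonacci_series(5)  # prints 0 1, then 1 2 3 5 8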

2. Forward substitution.
Ans. Forward substitution is a method for solving recurrence relations, typically linear recurrences with constant coefficients.
There are two steps involved in the forward substitution, namely
1. Plug: Substitute values repeatedly.
2. Chug: Simplify the obtained expressions.
The process is continued till the closed form is obtained, which
can be confirmed by the observation of the common pattern that
emerges from the process of repeated substitution.

[NOTE: The part below is given just for understanding; in the exam an equation will be given, which you have to solve.]

Let us look at an example for a forward substitution:
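For instance, take the recurrence T(n) = T(n - 1) + 1 with T(1) = 1 (an illustrative instance; any similar equation can be handled the same way):
T(1) = 1
T(2) = T(1) + 1 = 2
T(3) = T(2) + 1 = 3
T(4) = T(3) + 1 = 4
The emerging pattern suggests the closed form T(n) = n, which can be verified by induction.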

Thus the time complexity in Big O can be given as O(n), i.e., it is determined by the highest power of n in the resulting closed form.

3. Backward substitution problems.


Ans. Backward substitution, like forward substitution, is a method for solving recurrence relations, typically linear recurrences with constant coefficients. However, this approach substitutes backwards from the general term T(n) toward the initial condition.
There are two steps involved in the backward substitution, namely
1. Plug: Substitute values repeatedly.
2. Chug: Simplify the obtained expressions.
The process is continued till the closed form is obtained, which
can be confirmed by the observation of the common pattern that
emerges from the process of repeated substitution.

[NOTE: The part below is given just for understanding; in the exam an equation will be given, which you have to solve.]
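For instance, take T(n) = T(n - 1) + n with T(1) = 1 (an illustrative instance):
T(n) = T(n - 1) + n
     = T(n - 2) + (n - 1) + n
     = T(n - 3) + (n - 2) + (n - 1) + n
     = ...
     = T(1) + 2 + 3 + ... + n
     = n(n + 1)/2
Hence the closed form is T(n) = n(n + 1)/2, and the time complexity is O(n^2).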
4. Recurrence equation using Recursive Tree.
Ans. The recursion tree method is a technique for solving a given recurrence equation. The steps involved in this method are as follows:
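1. Expand the recurrence into a tree: the root holds the non-recursive cost for input size n, and its children represent the recursive subproblems, expanded in turn until the base case is reached.
2. Compute the cost contributed by each level of the tree.
3. Determine the height of the tree, i.e., the depth at which the subproblems reach the base case.
4. Sum the per-level costs over all levels to obtain the total cost, and express it asymptotically.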

[NOTE: The part below is given just for understanding; in the exam an equation will be given, which you have to solve.]
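For instance, for T(n) = 2T(n/2) + n (an illustrative instance): every level of the tree contributes a total cost of n, and there are about log2 n + 1 levels before the subproblems reach size 1, so the total cost is roughly n log2 n, giving O(n log n).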
PART-C
1. Algorithm Design approaches.
Ans. There are several algorithm design approaches, and selecting one depends on the nature and specific requirements of the problem. The right choice can significantly affect the time and resources required to solve a given problem.
1. Brute Force:
● A straightforward approach that systematically explores all
possible solutions until the desired outcome is found.
● Example: Checking every combination of numbers to find the
sum that equals a specific target.
● Advantages: Easy to implement, applicable to various
problems.
● Disadvantages: Inefficient for large problem sizes, often
leading to exponential time complexity.
2. Divide and Conquer:
● Decomposes a problem into smaller, independent
subproblems, solves them recursively, and combines the
solutions to obtain the solution for the original problem.
● Example: Merge sort, which divides the list into halves, sorts them recursively, and then merges the sorted halves (see the sketch after this list).
● Advantages: Efficient for problems that decompose into smaller, independent subproblems; often leads to logarithmic, linear, or n log n time complexity.
● Disadvantages: Overhead associated with recursion and
subproblem management.
3. Greedy Algorithm:
● Makes locally optimal choices at each step with the hope of
finding a globally optimal solution.
● Example: Dijkstra's shortest-path algorithm, which repeatedly selects the nearest unvisited vertex at each step.
● Advantages: Often simple to implement and efficient for
specific problems.
● Disadvantages: May not always lead to globally optimal
solutions for all problems.
4. Dynamic Programming:
● Solves problems by storing solutions to subproblems and
reusing them to solve larger problems efficiently.
● Example: Finding the longest common subsequence of two
strings by breaking down the problem into smaller
subproblems and storing solutions in a table.
● Advantages: Efficient for problems with overlapping
subproblems, reducing redundant calculations.
● Disadvantages: Requires additional space to store
subproblem solutions and might be complex to design for
certain problems.
5. Backtracking:
● Systematically explores all possible solutions by recursively
trying different options and backtracking from invalid paths.
● Example: Solving the N-Queens problem by placing queens
on a chessboard such that no two queens threaten each
other.
● Advantages: Useful for finding all possible solutions or
optimal solutions in some cases.
● Disadvantages: Can be computationally expensive for
problems with a large number of possible solutions.

6. Branch and Bound:
● Systematically explores promising solutions while discarding
those that can be guaranteed not to lead to an optimal
solution.
● Example: Finding the shortest path in a graph by pruning branches whose lower-bound cost estimate already exceeds the cost of the best complete path found so far.
● Advantages: Can be more efficient than backtracking by
eliminating non-optimal solutions early.
● Disadvantages: Designing effective bounding functions can
be challenging for some problems.
7. Randomized Algorithms:
● Incorporate randomness into the algorithm's decision-making
process to achieve efficiency or overcome limitations of
deterministic approaches.
● Example: Quick sort, which randomly chooses a pivot
element to partition the list, often leading to good
average-case performance.
● Advantages: Can provide efficient solutions for specific
problems where deterministic algorithms struggle.
● Disadvantages: May not always guarantee optimal solutions
and might introduce variability in performance.
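As an illustration of the divide-and-conquer approach above, a minimal merge sort sketch in Python (illustrative only):

def merge_sort(items):
    # Base case: a list of zero or one elements is already sorted.
    if len(items) <= 1:
        return items
    # Divide: split the list into two halves and sort each recursively.
    mid = len(items) // 2
    left = merge_sort(items[:mid])
    right = merge_sort(items[mid:])
    # Combine: merge the two sorted halves into one sorted list.
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort([5, 2, 9, 1, 7]))  # [1, 2, 5, 7, 9]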
2. Explain Asymptotic notation.
Ans. Asymptotic Notations are mathematical tools that allow us to
analyze an algorithm’s running time by identifying its behavior as
its input size grows.
● Two algorithms cannot be compared directly by running them; instead, their time and space complexities, expressed in asymptotic notation, are compared.
● Complexity is measured in two ways:
● Time Complexity: The amount of time it takes for an
algorithm to complete for a given input (size).
● Space Complexity: The amount of Space it takes for an
algorithm to complete execution for a given input (size).
Let us look at the asymptotic notations for time complexity:
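In brief (standard definitions, where f(n) is the running time and g(n) a comparison function):
● Big O (O): f(n) = O(g(n)) if there exist constants c > 0 and n0 such that f(n) ≤ c·g(n) for all n ≥ n0 (asymptotic upper bound).
● Big Omega (Ω): f(n) = Ω(g(n)) if there exist constants c > 0 and n0 such that f(n) ≥ c·g(n) for all n ≥ n0 (asymptotic lower bound).
● Big Theta (Θ): f(n) = Θ(g(n)) if f(n) is both O(g(n)) and Ω(g(n)) (asymptotically tight bound).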

[NOTE: Add whole of Answer 5 from below in exam]

3. Write an algorithm for Insertion sort and find the time complexity of the algorithm using the step count method.
Ans.
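One way to write insertion sort in the same step format used above (a sketch):

Function insertion_sort(arr, n)
Step 1: for i = 1 to n - 1, repeat steps 2 to 5
Step 2: set key = arr[i], j = i - 1
Step 3: while j >= 0 and arr[j] > key, repeat step 4
Step 4: set arr[j + 1] = arr[j], j = j - 1
[end of while]
Step 5: set arr[j + 1] = key
[end of for]
Step 6: Exit.

Counting steps as in Question 1, the outer loop runs n - 1 times; in the worst case (a reverse-sorted array) the inner loop runs up to i times for each i, so T(n) grows in proportion to n(n - 1)/2 and the worst-case time complexity is O(n^2). In the best case (an already sorted array) the inner loop never executes, giving O(n).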
4. Table count method for time complexity analysis.
Ans. The table count method, also called the frequency count or step count method, is a technique for analysing the time complexity of an algorithm. In this method, the running time of an algorithm is the function defined by the number of steps required to solve input instances of size n.
The step counts assigned to different kinds of statements are as follows:
● Declarative statements with no initialization have a step count of 0; if an initialization is made, the step count is 1.
● Comments and brackets such as begin/end, endif, end while, and end for all have a step count of 0.
● Expressions have a step count of 1.
● Assignment statements, function invocations, return statements, and jump statements such as break/continue/goto all have a step count of 1.
Advantages:
● Evaluation of time complexity using the step count method is
very easy.
● The step count method can provide an accurate estimation
of the time complexity for simple algorithms with well-defined
control flow.

Disadvantages:
● A major disadvantage is that the count does not depend on the operands; e.g., a=a+10 and a=a+1000000 have the same step count.
● The method focuses on counting steps but overlooks
constant factors that might affect the actual execution time.
● It becomes cumbersome and error-prone for complex
algorithms with nested loops, conditional statements, and
functions with varying execution times.
Let us take an example algorithm for the sum of n numbers:
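A standard version of this algorithm with per-statement step counts (assuming the loop control is checked n + 1 times, once more than the body executes):

Function sum(a, n)
s = 0             → 1 step
for i = 1 to n    → n + 1 steps (loop control)
    s = s + a[i]  → n steps
return s          → 1 step

Total: T(n) = 1 + (n + 1) + n + 1 = 2n + 3.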
Thus, it can be said that T(n) = 2n + 3, which means the time
complexity is O(n).

5. Differentiate between best, average and worst-case efficiency.
Ans. The best, average, and worst-case time complexities of an
algorithm represent different scenarios for how long an algorithm
might take to execute based on the input it receives.

1. Best Case Time Complexity (big Ω)

● It is represented by Big Omega (Ω).


● It represents the minimum amount of time an algorithm takes
to execute for a specific input size.
● This occurs when the algorithm encounters the most
favorable input conditions, allowing it to complete with the
fewest possible steps.
● Example: In linear search, if the target element is present at the beginning of the list, the search concludes in one comparison, resulting in a best-case complexity of Ω(1), i.e., constant time.
● Thus, it guarantees the lower bound on the algorithm's
execution time for a specific input size.

2. Average Case Time Complexity (Θ)

● It is represented by Theta (Θ).


● It represents the average amount of time an algorithm takes
to execute for a specific input size.
● This complexity is calculated by averaging the time taken for
all possible inputs and their corresponding frequencies.
● In linear search, assuming all elements have an equal chance of being the target, about n/2 comparisons are needed on average, so the average-case complexity is Θ(n).
● Thus, it captures the typical behavior of the algorithm for a
specific input size, considering all possible inputs with equal
probability.

3. Worst Case Time Complexity (big O)

● It is represented by Big O (O).


● It represents the maximum amount of time an algorithm
takes to execute for a specific input size.
● This occurs when the algorithm encounters the most
unfavorable input conditions, leading to the most steps
required for completion.
● In linear search, if the target element is not present in the
list, the search needs to compare with all elements, resulting
in a worst-case complexity of O(n).
● Thus, it signifies the upper bound on the algorithm's
execution time for a specific input size. The algorithm will
never take more time than the worst-case complexity
suggests, regardless of the specific input.
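A small linear-search sketch in Python that makes the three cases concrete (illustrative only):

def linear_search(arr, target):
    # Best case: target is arr[0]; one comparison (constant time).
    # Average case: target equally likely anywhere; about n/2 comparisons.
    # Worst case: target absent or last; n comparisons, i.e., O(n).
    for index, value in enumerate(arr):
        if value == target:
            return index
    return -1

print(linear_search([4, 8, 15, 16, 23, 42], 4))   # 0  (best case)
print(linear_search([4, 8, 15, 16, 23, 42], 7))   # -1 (worst case)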

6. Explain the properties of an algorithm with an example.


Ans. There are four desired properties of an algorithm.
These are:

1. Definiteness:
● Every step of the algorithm must be clearly defined and
unambiguous. There should be no room for interpretation or
individual choices during execution.
● Definiteness guarantees that the algorithm produces the
same output for a given input every time it is run, regardless
of the implementer or platform. This consistency is essential
for reliable and predictable behavior.
2. Finiteness:
● The algorithm must terminate after a finite number of steps
for any valid input. It should not loop indefinitely or run
forever.
● Finiteness ensures that the algorithm completes its task
within a reasonable timeframe and avoids becoming stuck in
an infinite loop. This predictability is crucial for practical
applications.
3. Correctness:
● The algorithm must produce the correct output for all valid or
correct inputs.
● Correctness ensures that the algorithm solves the problem it
is designed for and delivers accurate results. This is
paramount for applications where reliable outcomes are
critical.
4. Efficiency:
● Each instruction must be very basic so that it can be easily
carried out.
● The algorithm should use resources (time, memory) in an
optimal or near-optimal manner. This translates to
minimizing the number of steps and memory usage required
to complete the task.
● Efficiency ensures that the algorithm is practical and can be
executed within reasonable time and resource constraints.

Let us take an example of an algorithm for searching a sorted array of numbers:

Function search(arr, n, k)
Step 1: Start
Step 2: Set i = 0
Step 3: Repeat Step 4 while i is less than n
Step 4: if arr[i] = k
return i
else
i=i+1
[end of if]
[end of loop]
Step 5: return -1
Step 6: Exit.
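The same search as a runnable Python sketch (illustrative only; the names mirror the pseudocode):

def search(arr, n, k):
    i = 0
    while i < n:          # Step 3: loop while i < n
        if arr[i] == k:   # Step 4: compare the current element with the key
            return i      # found: return its index
        i = i + 1
    return -1             # Step 5: not found

print(search([2, 5, 8, 12], 4, 8))  # 2
print(search([2, 5, 8, 12], 4, 3))  # -1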

1. Definiteness: This algorithm clearly defines each step, including initialization, loop condition, comparison criteria, actions upon finding a match or not finding a match, and termination conditions.
2. Finiteness: The loop iterates through the list, and the loop
condition (i < n) guarantees termination once all elements have
been compared. Additionally, if the target element is present, the
loop terminates upon finding the match.
3. Correctness: Assuming the list is correctly sorted, the
comparison at each step ensures that the algorithm either finds
the target element at its correct position or correctly determines its
absence. The returned value accurately reflects the search
outcome.
4. Efficiency: This algorithm uses linear search, which runs in O(n) time. Utilizing a binary search approach, which has a time complexity of O(log n), would make it more efficient for searching large sorted datasets.

7. Give the algorithm for matrix multiplication and find the time complexity of the algorithm using the operation count method.
Ans. Let us look at the following algorithm for matrix
multiplication.

Here, the matrices are represented as arrays; for simplicity, we consider two-dimensional matrices.

A and B are matrices, m is the number of rows in A, n is the number of columns in B, and p is the number of rows in B, which must be equal to the number of columns in A.

Function matrix_mult(A, B, m, n, p)
Step 1: Start
Step 2: Initialize matrix C with all zero values.
Step 3: for i = 0 to i = m - 1, repeat step 4
Step 4: for j = 0 to j = n - 1, repeat step 5
Step 5: for k = 0 to k = p - 1, repeat step 6:
Step 6: C[i][j] += A[i][k] * B[k][j]
[end of loop]
[end of loop]
[end of loop]
Step 7: return C
Step 8: End.
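A runnable Python sketch of the same algorithm (illustrative only; A is m x p and B is p x n):

def matrix_mult(A, B, m, n, p):
    # Step 2: initialise C as an m x n matrix of zeros.
    C = [[0] * n for _ in range(m)]
    for i in range(m):              # Step 3
        for j in range(n):          # Step 4
            for k in range(p):      # Step 5
                C[i][j] += A[i][k] * B[k][j]  # Step 6
    return C                        # Step 7

A = [[1, 2], [3, 4]]                # 2 x 2
B = [[5, 6], [7, 8]]                # 2 x 2
print(matrix_mult(A, B, 2, 2, 2))   # [[19, 22], [43, 50]]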

[USE WITH CAUTION: I HAVE NO IDEA IF THIS ANALYSIS IS CORRECT OR NOT.]

Analysis using Operation Count:
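Counting the basic operations: the innermost statement C[i][j] += A[i][k] * B[k][j] executes once for every combination of i, j, and k, i.e., m · n · p times, so the algorithm performs m · n · p multiplications and the same number of additions. For square matrices (m = n = p), this amounts to n^3 multiplications, giving a time complexity of O(n^3).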
