MPS QB

1. What are the characteristics of an algorithm?

An algorithm is a step-by-step procedure or set of instructions designed to solve a particular problem. It is a systematic approach to solving problems and has some specific characteristics, which include:

1. Defines a sequence of steps: An algorithm defines a sequence of steps to be followed in order to solve a particular problem or accomplish a specific task.

2. Finiteness: An algorithm must have a clear stopping point or termination. It must come to
an end after a finite number of steps.

3. Unambiguous: Each step of the algorithm must be clear and unambiguous, such that it can
be easily understood and followed by anyone.

4. Input/output: An algorithm must take one or more inputs and provide one or more
outputs.

5. Feasibility: An algorithm must be feasible, which means that it must be possible to perform the steps in a reasonable amount of time using available resources.

6. Deterministic: An algorithm must be deterministic, meaning that given the same input, it
will always produce the same output.

7. Generalization: An algorithm must be general enough to be applicable to a wide range of problems, not just specific ones.

8. Specificity: An algorithm must be specific enough to handle a particular problem or task, but not so specific that it cannot be adapted to new use cases.

9. Efficiency: An algorithm must be efficient and minimize the amount of time and space
required to solve a problem within acceptable limits.

Overall, algorithms are designed to provide an efficient and effective solution to problems
and tasks, and these characteristics enable them to do just that.

2. Write the different methods to solve the recurrence equation.


A recurrence equation is a mathematical formula that defines a sequence of numbers
recursively in terms of its previous values. There are several methods to solve a recurrence
equation, including:

1. Iteration method: In this method, one manually computes the first few terms of the
sequence until a pattern emerges. Then, using this pattern, we express the nth term of the
sequence in terms of the earlier terms.
2. Substitution method: In this method, one assumes a solution to the recurrence equation
and then substitutes it into the equation to verify its correctness.

3. Recursion tree method: This method is useful in finding the upper and lower bounds of
the running time of an algorithm. It involves constructing a tree whose nodes represent the
subproblems of the recurrence and computing the sum of the costs of all the nodes in the
tree.

4. Master theorem: The Master theorem gives a precise solution to a large class of
recurrence equations that arise in the analysis of algorithms. It is a useful tool in determining
the running time of algorithms such as merge sort, quicksort, etc.

5. Generating functions: In this method, one associates to the sequence of interest a power
series called a generating function, which encodes information about the sequence in its
coefficients. One can then manipulate the generating function using standard techniques of
calculus to obtain a closed-form solution to the recurrence equation.

6. Matrix method: In this method, one converts the recurrence equation into a system of
linear equations in terms of the initial conditions of the sequence. Then, the system of linear
equations is represented in matrix form, and its corresponding determinant is computed to
obtain a closed-form solution.

Overall, each of these methods has its own strengths and weaknesses, and the choice of
method depends on the specific recurrence equation and the purpose for which it is being
used.

3. What is P-class problem?

4. What will be the time complexity of the given program segment?

int fact(int n) {
    if (n == 0)
        return 1;
    else
        return n * fact(n - 1);
}

The time complexity of this function is O(n), since computing fact(n) makes n + 1 recursive calls, each performing a constant amount of work.


5. Rank the following functions by their order of growth in ascending order.

n^2, n, ln n, n^3, 2^n

ln n < n < n^2 < n^3 < 2^n

6. Write the Different steps of divide and conquer algorithm.


The divide and conquer algorithm is a problem-solving approach that involves breaking a
problem down into smaller, more manageable sub-problems, and then solving these sub-
problems independently. The process involves several steps:

1. Divide: Break down the problem into smaller sub-problems that can be solved
independently. Divide the input into two or more parts, until it becomes simple enough to
be solved directly.

2. Conquer: Solve each sub-problem recursively, dividing it further if necessary, until the
problem becomes small enough to be solved directly.

3. Combine: Combine the results obtained from the sub-problems and solve the original
problem. This involves merging the solutions obtained from the sub-parts, and producing
the final solution for the problem.

4. Base case: Identify the simplest possible problems that can be solved directly. Establish
when the problem cannot be broken down further, and solve this sub-part, known as the
base case.

5. Recursion: The process of dividing and conquering can be recursive - it can continue
until the base case is reached for each sub-problem. The recursive approach is used to
divide problems into as many smaller sub-problems as possible, and solve each sub-
problem independently.

The divide and conquer algorithm is useful for solving complex problems, especially those
that have a recursive nature, such as sorting algorithms, searching algorithms, and matrix
multiplication, among others. The algorithm is known for its efficiency and effectiveness in
solving such problems.
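
As a small illustration of these steps, here is a minimal C sketch that finds the maximum element of an array by divide and conquer; the function name maxDC and the sample array are illustrative choices, not part of the question bank.

```
#include <stdio.h>

/* Divide and conquer: maximum of arr[lo..hi].
   Base case: a single element is its own maximum. */
int maxDC(const int arr[], int lo, int hi) {
    if (lo == hi)                                /* base case: cannot divide further */
        return arr[lo];
    int mid = lo + (hi - lo) / 2;                /* divide */
    int leftMax  = maxDC(arr, lo, mid);          /* conquer left half  */
    int rightMax = maxDC(arr, mid + 1, hi);      /* conquer right half */
    return leftMax > rightMax ? leftMax : rightMax; /* combine */
}

int main(void) {
    int arr[] = {7, 5, 4, 8, 12, 10, 9, 6, 2, 4};
    printf("%d\n", maxDC(arr, 0, 9));            /* prints 12 */
    return 0;
}
```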

7. Show that (n+1)^5 is O(n^5).
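
A short proof sketch: for n ≥ 1 we have n + 1 ≤ 2n, so (n+1)^5 ≤ (2n)^5 = 32·n^5. Taking c = 32 and n0 = 1 in the definition of Big O shows that (n+1)^5 = O(n^5).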

8. Which of the following algorithm design techniques is used by quick sort?


(I) Dynamic Programming (II) Back Tracking (III) Divide and Conquer
(III) Divide and Conquer

9. What is single source shortest path problem?


Single source shortest path problem is a problem in graph theory where we
need to find the shortest path in a weighted and directed graph from a single
source vertex to all other vertices in the graph. The shortest path is defined as
the path between two vertices with the minimum total weight or minimum total
distance.

The problem can be solved using several algorithms, such as Dijkstra's algorithm and the Bellman-Ford algorithm; the Floyd-Warshall algorithm solves the more general all-pairs version.

The single source shortest path has many real-world applications like finding
the shortest route between two cities, determining the fastest route between two
points on a network, finding the minimum distance between a source and a
destination in a map, etc. In computer networks, it is used for routing packets,
calculating link-state information, determining the shortest path in a network,
etc.

10. What is the difference between BFS and DFS?


BFS (Breadth-First Search) and DFS (Depth-First Search) are two common
graph traversal algorithms. The main differences between the two are:

1. Approach: The key difference between BFS and DFS is that BFS visits
nodes in layers while DFS goes down a particular path until it can't go any
further before backtracking. BFS visits all nodes at depth 1 before proceeding
to the nodes at depth 2, while DFS explores as far as it can before backtracking.

2. Data structure: BFS uses a queue to store and traverse the nodes while DFS
uses a stack or recursion to store and traverse the nodes.

3. Memory consumption: BFS consumes more memory than DFS, as it needs to store all the nodes in the current layer in the queue. On the other hand, DFS only needs to store the nodes in the current path, which makes it more memory-efficient.

4. Time complexity: Both algorithms have O(n + m) time complexity, where n is the number of nodes and m is the number of edges in the graph. However, BFS often performs better on shallow graphs where the target is near the source, while DFS is better suited to deep graphs where solutions lie far from the source.

In summary, BFS is useful for finding the shortest path between two nodes,
while DFS is more suitable for finding all nodes within a subtree or exploring
as far as possible in a certain direction.
11. How the Greedy paradigm of algorithm differs from that of Dynamic
programming?
The Greedy paradigm and Dynamic Programming are two different algorithmic
design techniques that are used to solve optimization problems. The main
differences between them are:

1. Approach:

The Greedy algorithm makes locally optimal choices at each step to find a
global optimum, without considering the overlapping subproblems. The Greedy
algorithm has no backtracking step, and it builds up the solution piece by piece
without looking back, thus sometimes producing suboptimal solutions.

On the other hand, Dynamic Programming uses a "divide and conquer" approach by solving subproblems once and storing their solutions. Dynamic Programming combines the solutions to these subproblems to solve larger subproblems so as to eventually arrive at the solution to the original problem, thus resulting in globally optimal solutions.

2. Solution quality:

The solutions obtained from the Greedy algorithm may not necessarily be the
globally optimal solution. However, it can provide a good approximate solution
in a much faster time.

Contrarily, solutions obtained using Dynamic Programming are guaranteed to be globally optimal but often result in more computation time and require more memory.

3. Applicability:

Greedy algorithms are suitable for some optimization problems where we can
make local greedy choices that lead to a globally optimal solution. However,
greedy algorithms cannot be used for problems where the solution depends on
future decisions or when a solution cannot be constructed piece by piece.

Dynamic programming, on the other hand, can handle a wide variety of optimization problems, provided the problem exhibits optimal substructure; it is especially effective when subproblems overlap.

In summary, the Greedy paradigm is faster but may produce suboptimal solutions that are only locally optimal, while Dynamic Programming is slower but guarantees globally optimal solutions with broader applicability.
12. Which of the following sorting algorithms has the lowest worst case
complexity?
(I)Quick Sort (II)Merge Sort (III)Selection Sort (IV)Heap Sort
Merge Sort

13. Write in detail different phases of algorithm.


An algorithm is a set of instructions or procedures that are followed to solve a specific
problem. An algorithm typically consists of several phases that collectively accomplish the
task of solving the problem at hand. The different phases of algorithm are:

1. Problem Definition: The first phase of algorithm design involves defining the problem that
needs to be solved. This phase establishes the goals and objectives that the algorithm aims
to achieve. The problem definition should be clear and specific, including all relevant
information and constraints.

2. Analysis: In the analysis phase, the problem is examined in depth to determine its
structure, components, parameters, and other relevant details. The analysis should consider
the input data format, the data range, and data types. The output format and any
intermediate storage requirements should also be defined during the analysis.

3. Design: Once the problem definition and analysis are complete, the next phase is
algorithm design. This phase involves identifying appropriate data structures and developing
a logical sequence of steps that will solve the problem. There are usually multiple algorithms
that can be used to solve the same problem, so it is important to select an algorithm that is
efficient and effective.

4. Implementation: In the implementation phase, the algorithm is translated into executable code using a programming language. This phase often involves debugging, testing, and refining the code to ensure that it works correctly and efficiently.

5. Testing: The testing phase involves evaluating the performance and functionality of the
algorithm under various inputs and conditions. The algorithm is tested to ensure that it
produces accurate and reliable results in different scenarios.

6. Maintenance: Once the algorithm has been implemented and tested, it needs to be
maintained to ensure that it continues to work as intended. This phase involves monitoring
and reviewing the algorithm periodically to identify and fix any issues that arise.

In conclusion, the different phases of algorithm design help to improve the quality and
reliability of an algorithm by approaching it in a systematic and structured manner.
14. Write in details about the convex hull problem and closest pair problem.

15. What are the different algorithms to search an element?


There are several algorithms used to search an element in a data structure such as arrays,
lists, trees, and graphs. Some of the common algorithms to search an element are:

1. Linear Search (Sequential Search): A simple algorithm that checks each element of the list
or array in order from beginning to end until the required element is found or the end of the
list is reached.

2. Binary Search: Binary search is a more efficient algorithm to search an element in a sorted
list or array. This algorithm starts by comparing the middle element of the sorted array with
the search key. If it matches, the search is complete. If not, it will look in either the left or
the right subarray, based on the comparison (a C sketch of this appears at the end of this answer).

3. Hashing: Hashing is a technique that maps data of an arbitrary size to a fixed-size output
known as a hash value. It is useful for searching large collections of data in almost constant
time.

4. Interpolation Search: Interpolation search is an improvement on binary search for uniformly distributed data; it estimates the position of the search key from the values at the boundaries of the current search range.

5. Fibonacci Search: Fibonacci search uses Fibonacci numbers to locate the element in a
sorted array by dividing the array into two portions that have Golden Ratio properties.

6. Jump Search: Jump search is a variation of linear search on sorted arrays that jumps ahead by a fixed step until it passes the required element, and then scans linearly within the identified block.

In conclusion, the choice of search algorithm depends on a variety of factors, such as the size
and structure of the dataset, the frequency of the search, and the available resources.
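
As a concrete illustration of item 2 above, here is a minimal C sketch of iterative binary search; the function name and sample data are illustrative, not part of the question bank.

```
#include <stdio.h>

/* Iterative binary search on a sorted array.
   Returns the index of key, or -1 if not found. */
int binarySearch(const int arr[], int n, int key) {
    int lo = 0, hi = n - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;   /* avoids overflow of (lo + hi) / 2 */
        if (arr[mid] == key)
            return mid;
        else if (arr[mid] < key)
            lo = mid + 1;               /* search the right subarray */
        else
            hi = mid - 1;               /* search the left subarray */
    }
    return -1;                          /* key is not present */
}

int main(void) {
    int arr[] = {2, 4, 5, 7, 9, 12};
    printf("%d\n", binarySearch(arr, 6, 9));   /* prints 4 */
    return 0;
}
```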
16. Write the procedure of Substitution method to solve the recurrence equation.
The Substitution method is one of the techniques used to solve recurrence equations that
arise in analyzing the run-time complexity of algorithms. The general procedure to solve a
recurrence relation by substitution method is as follows:

1. Guess a closed-form solution to the recurrence relation.

2. Use mathematical induction to prove that the guess is correct.

3. Then, use substitution to find constants in that guess.

4. Lastly, verify that the guess satisfies the recurrence relation.

More specifically, the procedure can be broken down into the following steps:

1. Solve the homogeneous equation: For a recurrence relation of the form f(n) = a·f(n-1), solve the homogeneous equation by assuming f(n) = c·r^n, where r is the root of the characteristic equation and c is a constant.
2. Find Particular solution: After solving the homogeneous equation, find the particular
solution, which represents the non-homogeneous part of the relation.

3. Determine constants: Determine the values of the constants by matching the particular
solution with the recurrence relation using the substitution technique.

4. Verify the guess: Verify that the guess works by showing that the general solution satisfies
the recurrence relation.

5. Apply the initial values: If initial values are given, substitute them into the general solution
and solve for the final values.

Overall, the Substitution method enables us to find a closed-form solution to a recurrence relation by guessing a solution and then verifying that the guess is correct using mathematical induction and substitution.
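
As a brief worked example (using the standard merge sort recurrence T(n) = 2T(n/2) + n): guess T(n) ≤ c·n·log n, with log base 2. Substituting the guess into the recurrence gives T(n) ≤ 2·c·(n/2)·log(n/2) + n = c·n·(log n − 1) + n = c·n·log n − c·n + n ≤ c·n·log n whenever c ≥ 1. The guess therefore holds by induction, and T(n) = O(n log n).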
17. Which data structure is used in BFS and DFS?
Both BFS (Breadth-First Search) and DFS (Depth-First Search) are graph traversal algorithms
that are used to explore or search for elements in a graph. The data structure used in BFS
and DFS is a queue and a stack, respectively.

For BFS, we use the queue data structure to keep track of the next set of nodes to visit.
Here, we start at a source node and explore its adjacent neighbors. After exploring each
node, we add it to the queue to explore its neighbors later. Therefore, we process the
vertices at the same level before moving on to the next level. This approach ensures that we
traverse the graph in the breadth-first manner.

For DFS, we use the stack data structure to maintain the order in which nodes have to be
explored. We start at a source node and visit its neighbors. We continue to visit the
neighbors of the neighbors until all the nodes have been visited. We use a stack data
structure to keep track of the nodes to be visited next. In DFS, we go deep, that is, we
explore the nodes in depth until all the nodes have been visited.

In conclusion, the data structure used in BFS and DFS are queue and stack, respectively. Both
data structures help in keeping track of the nodes that need to be visited next, and this
enables the BFS and DFS algorithms to explore all nodes in a graph.
18. Solve the Recurrence relation T(N) = 2T(N/4) + N, for N≥2 with T(1) = 1.
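
A brief sketch of one solution, via the master theorem: here a = 2 and b = 4, so N^(log_4 2) = N^(1/2). Since f(N) = N grows polynomially faster than N^(1/2) and satisfies the regularity condition (2·(N/4) = N/2 ≤ c·N with c = 1/2 < 1), case 3 applies and T(N) = Θ(N).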
19. Explain the various criteria used for analyzing algorithms.
Analyzing algorithms helps us to understand the performance of an algorithm with
respect to its input size. There are different criteria or measures that we use for this
purpose. In general, the criteria for analyzing algorithms can be classified into three
categories: time complexity, space complexity, and algorithmic efficiency.
1. Time complexity: Time complexity is a measure of the amount of time an
algorithm requires to process its input. It is usually expressed as a function of the size
of the input. There are two types of time complexity: worst-case and average-case.
The worst-case time complexity refers to the maximum amount of time an algorithm
requires to process any input of size n. The average-case time complexity refers to the
average amount of time an algorithm requires to process inputs of size n.

2. Space complexity: Space complexity is a measure of the amount of memory an algorithm requires to process its input. It is also usually expressed as a function of the size of the input. Similar to time complexity, there are two types of space complexity: worst-case and average-case.

3. Algorithmic efficiency: Algorithmic efficiency refers to the ability of an algorithm to solve a problem in a reasonable amount of time and using a reasonable amount of space. Generally, algorithmic efficiency refers to the time and space complexity of an algorithm along with its simplicity and ease of implementation. A more efficient algorithm should have a better time and space complexity, along with being simple and easy to implement.

Other factors that can be considered while analyzing algorithms are scalability,
maintainability, and correctness. Scalability is the ability of an algorithm to handle
large inputs, and maintainability is the ability of an algorithm to be maintained and
improved over time. Correctness refers to the ability to obtain the correct output for
each input.

In summary, while there are multiple criteria for analyzing algorithms, time
complexity, space complexity, and algorithmic efficiency are the primary and most
commonly used ones.
20. What is meant by divide &conquer?

21. Consider the recurrence
T(n) = 1 if n = 1
T(n) = 2T(⌊n/2⌋) + n if n > 1
Find the asymptotic bound on T.
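
A brief sketch: with a = 2, b = 2 and f(n) = n, we have n^(log_2 2) = n = Θ(f(n)), so case 2 of the master theorem gives T(n) = Θ(n log n).
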
22. Find out the asymptotic bound for the given recurrence using recursion tree method. T(n) = 3T(n/4) + cn^2
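
A brief sketch of the recursion-tree answer: the root costs cn^2, and level i of the tree has 3^i nodes each costing c(n/4^i)^2, contributing (3/16)^i · cn^2 per level. The level costs form a decreasing geometric series dominated by the root, so T(n) = Θ(n^2).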

23. Find out the asymptotic bound for the given recurrence equation by using recurrence method. T(n) = 2T(n/2) + n^2.

24. Find out the asymptotic bound for the given recurrence equation by using
recurrence method. T(n) = T(n/3) + T(2n/3) + n.

25. Give two real time problems that could be solved using greedy algorithm.

26. How to find the complexity of an algorithm?


27. How to identify the efficient algorithm?

28. Use master method to solve the following recurrence: T(n) = 9T(n/3) + n
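
A brief sketch: a = 9 and b = 3 give n^(log_3 9) = n^2. Since f(n) = n = O(n^(2−ε)) for ε = 1, case 1 of the master theorem yields T(n) = Θ(n^2).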

29. Differentiate between dynamic programming and Divide and conquer method.
Dynamic programming and divide and conquer method are two techniques used to solve
complex problems in computer science. While they share similar ideas, there are significant
differences between them.

1. Definition:
Dynamic Programming (DP) is a technique to solve problems by breaking them down into
smaller subproblems and solving them independently. It maintains a table for all
subproblems solved so that these subproblems need not be recomputed again. On the other
hand, Divide and Conquer (D&C) is also a technique for solving problems by recursively
breaking them down into smaller subproblems. However, unlike DP, it doesn't maintain a
table for subproblems solved.

2. Approach:
DP approach is bottom-up, i.e., we start solving subproblems and recursively solving larger
subproblems by combining the previous subproblem solutions. For D&C, the approach is
top-down, i.e., it starts solving the larger problem by recursively breaking them down into
smaller subproblems until they become small enough to solve directly.

3. Overlapping Subproblems:
DP subproblems can be overlapping or redundant, which require us to store the results of
each subproblem in a table known as memoization. In D&C, subproblems are mutually
exclusive or non-overlapping, which does not require any memoization.

4. Optimal Substructure:
Both techniques rely on the optimal substructure property, which means the solution to the larger problem can be obtained from the optimal solutions of its smaller subproblems. The distinguishing feature of DP is that it additionally exploits overlapping subproblems, which plain D&C does not.

5. Complexity:
On problems with overlapping subproblems, DP typically has lower time complexity than D&C, because memoization ensures that each subproblem is computed only once. D&C usually takes more time on such problems because, without memoization, it recomputes the same subproblems repeatedly.

6. Example:
An example of DP is the Fibonacci sequence, while tower of Hanoi is an example of D&C.
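
To make the Fibonacci example concrete, here is a minimal C sketch (function names illustrative) contrasting plain recursion, which recomputes overlapping subproblems in exponential time, with memoized DP, which solves each subproblem once in O(n) time.

```
#include <stdio.h>

/* Naive recursion (divide-and-conquer style): recomputes
   overlapping subproblems, giving exponential time. */
long long fibNaive(int n) {
    if (n < 2) return n;
    return fibNaive(n - 1) + fibNaive(n - 2);
}

/* Dynamic programming with memoization: each subproblem is
   solved once and its result stored, giving O(n) time. */
long long memo[91];                      /* fib(90) still fits in long long */

long long fibDP(int n) {
    if (n < 2) return n;
    if (memo[n] != 0) return memo[n];    /* reuse the stored answer */
    memo[n] = fibDP(n - 1) + fibDP(n - 2);
    return memo[n];
}

int main(void) {
    printf("%lld\n", fibDP(50));         /* instant: 12586269025 */
    /* fibNaive(50) would make on the order of 2^50 calls */
    return 0;
}
```
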
30. Find the worst case time complexity of Quick sort?
The worst-case time complexity of Quick Sort is O(n^2).
31. Define NP-Complete and NP-Hard problem.

32. How is O-notation different from Ω-notation?


O-notation and Ω-notation are both asymptotic notations used to describe the upper and lower bounds, respectively, of the growth rate of an algorithm.

1. O-notation:
O-notation is used to describe the upper bound of an algorithm's running time, i.e., the maximum amount of time the algorithm will take to run. It represents the worst-case scenario, in which the algorithm takes no more time than a constant multiple of the given function. The O-notation thus provides an upper limit on how quickly the running time can grow.

2. Ω-notation:
Ω-notation is used to describe the lower bound of an algorithm's running time, i.e., the minimum amount of time the algorithm will take to run. It represents the best-case scenario, in which the algorithm takes at least a constant multiple of the given function's time. The Ω-notation thus provides a lower limit on how quickly the running time can grow.

In simple terms, O-notation represents the upper limit of the growth rate, while Ω-notation
represents the lower limit of the growth rate.

For example, if an algorithm's running time is O(n^2), the algorithm will take no more than c·n^2 time for some constant c and all sufficiently large n; if it is Ω(n^2), the algorithm will always take at least c·n^2 time under the same conditions.
33. Differentiate between BFS and DFS.
Breadth-First Search (BFS) and Depth-First Search (DFS) are two different algorithms used to
traverse and search through graphs, trees, or other data structures.

1. Approach:
BFS starts with the root node or source node and explores all the neighbors at the current
depth level before moving on to the next level. It uses a queue data structure to keep track
of the nodes to be visited. On the other hand, DFS starts with the root node and explores as
far as possible along each branch before backtracking. It uses a stack or recursion to keep
track of the nodes to be visited.

2. Traversal Strategy:
BFS follows a level-by-level approach, where it explores all nodes at the current level before
moving on to the next level. Hence, it is also known as level-order traversal. DFS follows a
deep-first approach, where it explores as far as possible before backtracking.

3. Memory:
BFS may require more memory than DFS because it has to store all the nodes in the current
level in the memory. DFS, on the other hand, requires less memory because it visits the
nodes sequentially and needs to store only the nodes in the current path.

4. Time Complexity:
The time complexity of both algorithms depends on the graph's structure. In the worst case, both algorithms take O(V+E) time, where V is the number of vertices or nodes, and E is the number of edges. However, BFS is generally faster than DFS for finding the shortest path in an unweighted graph because of its level-by-level approach. Without a visited check, DFS can get stuck in an infinite loop on cyclic graphs, and its execution time can vary significantly with the depth of the graph.

5. Use Cases:
BFS is commonly used to solve shortest path and minimum spanning tree problems, and DFS
is commonly used in backtracking algorithms and cycle detection problems.

In summary, both BFS and DFS are graph traversal algorithms, but they differ in their
approach, traversal strategy, memory usage, and use cases. BFS follows a level-by-level
approach, while DFS follows a deep-first approach. BFS requires more memory, while DFS
has a lower memory requirement.
34. Solve the given text using Brute force string matching algorithm with the given pattern.
Text= TWO ROADS DIVERGED IN A YELLOW WOOD
Pattern= ROADS
35. Write the algorithm of Quick sort.
Quick sort is a Divide and Conquer algorithm that sorts an array or list by partitioning it around a pivot into smaller sub-arrays and then sorting those sub-arrays recursively; unlike Merge sort, it needs no separate merge step.

The general algorithm of Quick sort can be described as follows:

1. Choose a pivot element from the array. The pivot can be any element, but in practice, it is
often the first or last element.

2. Partition the array into two sub-arrays based on the pivot element, one that contains
elements smaller than the pivot and another that contains elements greater than the pivot.
This is done by iterating through the entire array and comparing each element to the pivot.

3. Recursively apply the above two steps to the left and right sub-arrays until they are
sorted. This is the Divide and Conquer approach.

Below is the more detailed algorithm of Quick sort:

```
function quickSort(arr, left, right) {
  // If the array is empty or contains only one element, it is already sorted
  if (arr.length < 2) {
    return arr;
  }

  // If no arguments are passed for left and right, set them to the
  // start and end of the array, respectively
  left = typeof left != "number" ? 0 : left;
  right = typeof right != "number" ? arr.length - 1 : right;

  // Choose a pivot (here the middle element) and partition the array
  // into two sub-arrays
  const pivot = arr[Math.floor((left + right) / 2)];
  let leftIndex = left;
  let rightIndex = right;

  while (leftIndex <= rightIndex) {
    // Advance past elements already on the correct side of the pivot
    while (arr[leftIndex] < pivot) {
      leftIndex++;
    }
    while (arr[rightIndex] > pivot) {
      rightIndex--;
    }

    if (leftIndex <= rightIndex) {
      // Swap the values at the left and right index
      [arr[leftIndex], arr[rightIndex]] = [arr[rightIndex], arr[leftIndex]];

      // Move the indices closer to the middle
      leftIndex++;
      rightIndex--;
    }
  }

  // Recursively apply the above steps to the left and right sub-arrays
  if (left < rightIndex) {
    arr = quickSort(arr, left, rightIndex);
  }
  if (leftIndex < right) {
    arr = quickSort(arr, leftIndex, right);
  }

  return arr;
}

// Example usage
const arr = [5, 3, 8, 4, 2, 7, 1, 10];
console.log(quickSort(arr)); // Output: [1, 2, 3, 4, 5, 7, 8, 10]
```

The time complexity of Quick sort is O(n log n) on average, with the worst-case time
complexity of O(n^2) occurring when the pivot element is chosen in such a way that the
partition is highly unbalanced.
36. Find the worst case time complexity of quick sort with recurrence equation.
37. Sort the following elements using Merge sort: 7, 5, 4, 8, 12, 10, 9, 6, 2, 4
To sort the given elements using Merge sort, we need to divide the array into smaller sub-
arrays, sort those sub-arrays, and then merge them back together. Here are the steps:

1. Divide the array into two sub-arrays of equal size (or nearly equal, if the array size is odd).
```
[7, 5, 4, 8, 12] and [10, 9, 6, 2, 4]
```
2. Recursively apply Merge sort to both sub-arrays until they are sorted.
```
[4, 5, 7, 8, 12] and [2, 4, 6, 9, 10]
```
3. Merge the two sorted sub-arrays to create the final sorted array.
```
[2, 4, 4, 5, 6, 7, 8, 9, 10, 12]
```

Thus, the Merge sort algorithm has sorted the given elements [7, 5, 4, 8, 12, 10, 9, 6, 2, 4]
into [2, 4, 4, 5, 6, 7, 8, 9, 10, 12].
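
For reference, here is a compact C sketch of the Merge sort just traced; the function names and the variable-length scratch buffer are illustrative simplifications, not a definitive implementation.

```
#include <stdio.h>
#include <string.h>

/* Merge the sorted halves arr[lo..mid] and arr[mid+1..hi]. */
static void merge(int arr[], int lo, int mid, int hi) {
    int tmp[hi - lo + 1];              /* C99 variable-length scratch array */
    int i = lo, j = mid + 1, k = 0;
    while (i <= mid && j <= hi)
        tmp[k++] = (arr[i] <= arr[j]) ? arr[i++] : arr[j++];
    while (i <= mid) tmp[k++] = arr[i++];
    while (j <= hi)  tmp[k++] = arr[j++];
    memcpy(arr + lo, tmp, k * sizeof(int));
}

static void mergeSort(int arr[], int lo, int hi) {
    if (lo >= hi) return;              /* base case: 0 or 1 element */
    int mid = lo + (hi - lo) / 2;
    mergeSort(arr, lo, mid);           /* divide: sort left half    */
    mergeSort(arr, mid + 1, hi);       /* divide: sort right half   */
    merge(arr, lo, mid, hi);           /* combine the sorted halves */
}

int main(void) {
    int arr[] = {7, 5, 4, 8, 12, 10, 9, 6, 2, 4};
    mergeSort(arr, 0, 9);
    for (int i = 0; i < 10; i++)
        printf("%d ", arr[i]);         /* 2 4 4 5 6 7 8 9 10 12 */
    printf("\n");
    return 0;
}
```
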
38. Write an algorithm of Floyd’s Algorithm with the time complexity.
Floyd's algorithm is a dynamic programming algorithm for finding the shortest path between
all pairs of vertices in a weighted graph. Here is the algorithm of Floyd's Algorithm:

1. Initialize a 2D matrix dist[][] where dist[i][j] represents the distance between vertex i and
vertex j.
2. Initialize dist[][] with the weights of edges between vertices. If there is no edge between
vertices i and j, set dist[i][j] to infinity.
3. Use three nested loops to find the shortest path between every pair of vertices:
   a. Loop for k from 1 to the total number of vertices:
      b. Loop for i from 1 to the total number of vertices:
         c. Loop for j from 1 to the total number of vertices:
            - If dist[i][j] > dist[i][k] + dist[k][j], set dist[i][j] to dist[i][k] + dist[k][j].
4. Return the dist[][] matrix.

The time complexity of Floyd's Algorithm is O(n^3), where n is the total number of vertices in
the graph. The algorithm uses nested loops to check every possible pair of vertices and
compare their distances with the distances of their intermediate vertices. It requires three
nested loops, making it a cubic time complexity algorithm.
Despite its high time complexity, Floyd's Algorithm is useful for finding the shortest path
between all pairs of vertices in small graphs. It is also commonly used in applications such as
network routing, traffic management, and transportation planning.
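
A minimal C sketch of the algorithm above, assuming a small 4-vertex graph and using a large constant as "infinity" (both illustrative choices):

```
#include <stdio.h>

#define V 4
#define INF 99999   /* stands in for "no edge"; small enough that INF+INF cannot overflow */

/* Floyd-Warshall: on return, dist[i][j] is the shortest i -> j distance. */
void floydWarshall(int dist[V][V]) {
    for (int k = 0; k < V; k++)            /* allowed intermediate vertex */
        for (int i = 0; i < V; i++)
            for (int j = 0; j < V; j++)
                if (dist[i][k] + dist[k][j] < dist[i][j])
                    dist[i][j] = dist[i][k] + dist[k][j];
}

int main(void) {
    int dist[V][V] = {
        {0,   5,   INF, 10},
        {INF, 0,   3,   INF},
        {INF, INF, 0,   1},
        {INF, INF, INF, 0}
    };
    floydWarshall(dist);
    printf("%d\n", dist[0][3]);            /* prints 9 (path 0 -> 1 -> 2 -> 3) */
    return 0;
}
```
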
39. Take any undirected graph and find the MST using Kruskal’s algorithm.
40. Take any undirected graph and find the MST using Prim’s algorithm.
41. Explain the step by step procedure of Dijkstra’s algorithm.
42. What is Knapsack problem? Write the different knapsack problems. Explain the
procedure of 0/1 Knapsack problem.
43. Define P, NP and NP- Complete Problems.
44. Determine minimal spanning tree for given graph using Prim’s algorithm.

45. Determine minimal spanning tree for given graph using Kruskal’s algorithm.

46. Use Dijkstra’s algorithm for solving single source shortest path problem on a given
directed graph.

47. Implement the fractional knapsack problem using C. Let us consider that the capacity of
the knapsack W = 60 and the list of provided items are shown in the following table −

Item A B C D

Profit 80 10 20 30

Weight 40 5 10 4

48. Write an algorithm for Huffman tree. Construct a Huffman tree for the given text and the
text is “DESIGN AND ANALYSIS OF ALGORITHM”. Show binary encoding of each character.
49. Write an algorithm for Huffman tree. The frequency with which each character occurs in a
file is shown in the given table. Construct a Huffman tree corresponding to these
frequencies and show binary encoding of each character.

Character   a   b   c   d   e   f   g   h
Frequency   1   8   7  10  12   5   4   2

50. Write an algorithm Fractional Knapsack. Suppose the capacity of the knapsack
is 30. The Profit array is {25, 30, 40} and the corresponding Weight array is {10, 15, 18}. Find profit based on minimum weight.
51. What is the difference between fixed length codeword and variable length code
word. Explain with an example.
52. For the following graph having four nodes represented by the matrix given below, determine the all-pairs shortest paths.

53. Discuss the classes P, NP, NP complete, and NP hard with examples. How can
we show that a problem is NP complete?
54. What do you understand by Polynomial time reducibility?
55. Use the Floyd-Warshall algorithm to find the shortest paths for all pairs of vertices in the given graph.

56. Write an algorithm for Selection Sort and sort the following elements: 7, 5, 4, 8, 12, 10, 9,
6, 2, 4
57. Write an algorithm for Bubble Sort and sort the following elements: 7, 5, 4, 8, 12, 10, 9, 6,
2, 4
58. Write an algorithm for Sequential search and search element from the following given
elements: 7, 5, 4, 8, 12, 10, 9, 6, 2, 4
59. Write an algorithm for Depth-First Search (DFS) and find its time complexity.
Here is an algorithm for Depth-First Search (DFS):

1. Create a visited array to keep track of which vertices have been visited.
- Initialize all vertices as not visited.
2. Choose a starting vertex.
3. Mark the starting vertex as visited and push it onto a stack.
4. While the stack is not empty:
- Pop a vertex from the stack and print it (or process it).
- For each unvisited neighbor of the popped vertex:
- Mark the neighbor as visited.
- Push the neighbor onto the stack.
5. If there are no more vertices to visit (i.e., the stack is empty), terminate the search.

The time complexity of DFS is O(V + E), where V is the number of vertices and E is the
number of edges in the graph. The DFS algorithm visits each vertex and each edge in the
graph at most once.
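
A minimal C sketch of DFS, here in recursive form (the recursion's call stack plays the role of the explicit stack in the steps above), on a hypothetical 5-vertex adjacency-matrix graph:

```
#include <stdio.h>

#define V 5

/* Recursive DFS from vertex u on an adjacency matrix. */
void dfs(const int adj[V][V], int visited[V], int u) {
    visited[u] = 1;
    printf("%d ", u);                  /* process the vertex */
    for (int v = 0; v < V; v++)
        if (adj[u][v] && !visited[v])  /* unvisited neighbour: go deeper */
            dfs(adj, visited, v);
}

int main(void) {
    int adj[V][V] = {                  /* illustrative undirected graph */
        {0,1,1,0,0},
        {1,0,0,1,0},
        {1,0,0,1,0},
        {0,1,1,0,1},
        {0,0,0,1,0}
    };
    int visited[V] = {0};
    dfs(adj, visited, 0);              /* prints 0 1 3 2 4 */
    printf("\n");
    return 0;
}
```
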
60. Write an algorithm for Breadth-First Search (BFS) and find its time complexity.
Here is an algorithm for Breadth-First Search (BFS):

1. Create a visited array to keep track of which vertices have been visited.
- Initialize all vertices as not visited.
2. Choose a starting vertex and enqueue it.
3. Mark the starting vertex as visited.
4. While the queue is not empty:
- Dequeue a vertex from the queue and print it (or process it).
- For each unvisited neighbor of the dequeued vertex:
- Mark the neighbor as visited.
- Enqueue the neighbor.
5. If there are no more vertices to visit (i.e., the queue is empty), terminate the search.

The time complexity of BFS is O(V + E), where V is the number of vertices and E is the number of edges in the graph. The BFS algorithm visits each vertex and each edge in the graph at most once. It is worth noting that BFS usually needs more memory than DFS, though it can be the better choice in certain situations (such as when searching for the shortest path between two nodes in an unweighted graph).
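
A minimal C sketch of BFS on the same kind of hypothetical adjacency-matrix graph, using a simple array-based queue:

```
#include <stdio.h>

#define V 5

/* BFS from a start vertex on an adjacency matrix. */
void bfs(const int adj[V][V], int start) {
    int visited[V] = {0};
    int queue[V], front = 0, back = 0;

    visited[start] = 1;
    queue[back++] = start;             /* enqueue the source */

    while (front < back) {
        int u = queue[front++];        /* dequeue */
        printf("%d ", u);              /* process the vertex */
        for (int v = 0; v < V; v++)
            if (adj[u][v] && !visited[v]) {
                visited[v] = 1;        /* mark when enqueued, not dequeued */
                queue[back++] = v;
            }
    }
}

int main(void) {
    int adj[V][V] = {                  /* illustrative undirected graph */
        {0,1,1,0,0},
        {1,0,0,1,0},
        {1,0,0,1,0},
        {0,1,1,0,1},
        {0,0,0,1,0}
    };
    bfs(adj, 0);                       /* prints 0 1 2 3 4 */
    printf("\n");
    return 0;
}
```
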
61. Show that the worst case running time of Quick sort algorithm is n^2.
62. Define asymptotic notations. Write its type.
Asymptotic notations (also known as order of growth notations) are used in
computer science and mathematics to describe the growth rate of a function or
algorithm in terms of its input size.

The three main types of asymptotic notations are:

1. Big O notation: This notation is used to give an upper bound on the growth
rate of a function. We say that f(n) is O(g(n)) (read as "f of n is big O of g of
n") if there exist positive constants c and n0 such that f(n) ≤ c * g(n) for all n ≥
n0.

2. Omega notation: This notation is used to give a lower bound on the growth
rate of a function. We say that f(n) is Ω(g(n)) (read as "f of n is omega of g of
n") if there exist positive constants c and n0 such that f(n) ≥ c * g(n) for all n ≥
n0.
3. Theta notation: This notation is used to give an asymptotically tight bound
on the growth rate of a function. We say that f(n) is Θ(g(n)) (read as "f of n is
theta of g of n") if there exist positive constants c1, c2, and n0 such that c1 *
g(n) ≤ f(n) ≤ c2 * g(n) for all n ≥ n0.

These notations are commonly used in analyzing the time and space
complexity of algorithms.
63. Write the master method of recurrence equation.
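
A standard statement of the master method, for recurrences of the form T(n) = a·T(n/b) + f(n), where a ≥ 1 and b > 1 are constants and f(n) is asymptotically positive:

1. If f(n) = O(n^(log_b a − ε)) for some constant ε > 0, then T(n) = Θ(n^(log_b a)).
2. If f(n) = Θ(n^(log_b a)), then T(n) = Θ(n^(log_b a) · log n).
3. If f(n) = Ω(n^(log_b a + ε)) for some constant ε > 0, and a·f(n/b) ≤ c·f(n) for some constant c < 1 and all sufficiently large n (the regularity condition), then T(n) = Θ(f(n)).
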
64. What do you mean by time complexity and space complexity of an
algorithm?
Time complexity and space complexity are two measures used to analyze the
efficiency of an algorithm.

Time complexity refers to the amount of time an algorithm takes to run as a function of the size of the input. It is usually expressed as the number of operations the algorithm executes in terms of the input size. For example, if an algorithm takes n^2 steps to complete for an input of size n, its time complexity is O(n^2). The time complexity of an algorithm can be affected by factors such as the size of the input, the algorithm's design, and the hardware on which it is run.

Space complexity, on the other hand, refers to the amount of memory (or
space) an algorithm uses as a function of the size of the input. It is usually
expressed in terms of the number of memory units (usually words) the
algorithm requires as a function of the input size. For example, if an algorithm
requires 2n memory units to store an array of size n, its space complexity is
O(n). The space complexity of an algorithm can be affected by factors such as
the size and type of data the algorithm manipulates, the algorithm's design,
and the programming language or environment in which it is implemented.

Both time complexity and space complexity are important considerations when designing algorithms. A good algorithm aims to minimize its time and space complexity while still achieving the desired output. Different algorithms may have different trade-offs between time and space complexity, and the optimal algorithm for a given task may depend on factors such as the size of the input, the available hardware, and the requirements of the problem at hand.
65. Construct a Huffman tree for the given text and the text is
“REPETITIVE SENTENCE EXAMPLE”. Show binary encoding of each
character.
66. Write short notes on Lower bounds for sorting

Lower bounds for sorting refer to the minimum possible time required to sort a collection of data items. Sorting is a fundamental operation in computer science, and efficient sorting algorithms are crucial for rapidly processing large data sets. Lower bounds for sorting provide a theoretical limit on the performance of sorting algorithms and can guide the development of faster and more efficient sorting techniques. Some of the most significant results in the study of lower bounds for sorting include:

1. Comparison Sorting: In the comparison sorting model, all sorting algorithms are required to do
pairwise comparisons between input elements to determine their relative order. The best lower
bound for comparison sorting is Ω(n * log n), which implies that any comparison-based sorting
algorithm must make at least n * log n comparisons to sort n elements.

2. Linear-Time Sorting: A linear-time sorting algorithm is one that sorts in O(n) time, which the Ω(n log n) bound rules out for purely comparison-based methods. There are three well-known linear-time algorithms: counting sort, radix sort, and bucket sort. These algorithms are not comparison-based, and they exploit special properties of the input data to sort more efficiently than any comparison-based algorithm.

3. Non-Comparison-Based Sorting: There exist sorting algorithms that do not rely on pairwise
comparisons and instead take advantage of specific properties of the input data, such as the
distribution of values or presortedness. These algorithms have specific requirements on the input
data but can achieve faster sorting speeds than comparison-based algorithms.

67. What are the different ways to traverse the binary tree?
There are three common ways to traverse a binary tree:

1. In-order traversal: In this traversal, the left subtree of each node is visited first, then the
node itself, and finally the right subtree. This results in a sorted list of the nodes if the binary
tree is a binary search tree (BST). The algorithm for in-order traversal is:

- Traverse the left subtree recursively.
- Visit the current node.
- Traverse the right subtree recursively.

Inorder traversal is commonly used to print the elements of a binary search tree in sorted
order.

2. Pre-order traversal: In this traversal, the node is visited first, then its left and right
subtrees are visited recursively. The algorithm for pre-order traversal is:

- Visit the current node.
- Traverse the left subtree recursively.
- Traverse the right subtree recursively.

Pre-order traversal is often used to create a copy of the binary tree.

3. Post-order traversal: In this traversal, the left and right subtrees of each node are visited
recursively, and then the node itself is visited. The algorithm for post-order traversal is:
- Traverse the left subtree recursively.
- Traverse the right subtree recursively.
- Visit the current node.

Post-order traversal is commonly used to delete the nodes of the binary tree.

These three traversal techniques can be used to traverse any binary tree, regardless of its
shape or structure. Additionally, variations of these traversal techniques, such as level-order
traversal or diagonal traversal, can be used to traverse trees in specific patterns or orders.
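
The three traversals can be written almost directly from the steps above. Here is a minimal C sketch with an illustrative three-node BST; the node structure and helper names are assumptions for the example.

```
#include <stdio.h>
#include <stdlib.h>

typedef struct Node {
    int value;
    struct Node *left, *right;
} Node;

/* In-order: left subtree, node, right subtree. */
void inorder(const Node *n) {
    if (n == NULL) return;
    inorder(n->left);
    printf("%d ", n->value);
    inorder(n->right);
}

/* Pre-order: node, left subtree, right subtree. */
void preorder(const Node *n) {
    if (n == NULL) return;
    printf("%d ", n->value);
    preorder(n->left);
    preorder(n->right);
}

/* Post-order: left subtree, right subtree, node. */
void postorder(const Node *n) {
    if (n == NULL) return;
    postorder(n->left);
    postorder(n->right);
    printf("%d ", n->value);
}

Node *newNode(int value) {
    Node *n = malloc(sizeof(Node));
    n->value = value;
    n->left = n->right = NULL;
    return n;
}

int main(void) {
    /* A small BST: 4 at the root, 2 on the left, 5 on the right */
    Node *root = newNode(4);
    root->left = newNode(2);
    root->right = newNode(5);
    inorder(root);   printf("\n");   /* 2 4 5 (sorted, since it is a BST) */
    preorder(root);  printf("\n");   /* 4 2 5 */
    postorder(root); printf("\n");   /* 2 5 4 */
    return 0;
}
```
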
68. Explain about the optimal binary search tree.
In computer science, an optimal binary search tree is a binary search tree that provides the
minimum search cost for a sequence of elements. In other words, it is a tree that minimizes
the expected number of comparisons required to search for a given element. Optimal binary
search trees are also known as weighted binary search trees or WOBSTs.

An optimal binary search tree can be constructed using dynamic programming. The
algorithm involves computing the expected search cost for all possible subtrees of the given
keys and determining the optimal subtree with minimum search cost. The algorithm uses a
2D table to store the expected search costs, which is filled in a bottom-up fashion.

The steps involved in constructing an optimal binary search tree are:

1. Sort the sequence of elements in non-decreasing order of their probabilities.
2. Compute a 2D table of size (n+1) x (n+1), where n is the number of elements.
3. For each subsequence of length k, compute the minimum expected search cost for the
optimal subtree with root in the kth element.
4. Fill in the table diagonally, using the computed values for smaller subsequences to derive
values for larger subsequences.
5. The expected search cost of the optimal binary search tree is the value in the first cell of
the table.

The time complexity of constructing an optimal binary search tree using dynamic programming is O(n^3), where n is the number of elements. However, Knuth's optimization, which restricts the range of candidate roots at each step, reduces this to O(n^2).

The optimal binary search tree has important applications in computer science, particularly
in the efficient implementation of search operations. It is commonly used in data structures
such as symbol tables, associative arrays, and databases.
69. Write an algorithm of Brute Force search and find its time complexity.
Brute Force search, also known as exhaustive search, is an algorithmic technique that
involves trying all possible solutions to a problem to find the one that works. Here is the
algorithm of Brute Force search:
1. Start from the first element of the list.
2. Check if the element matches the desired element.
3. If yes, return its index.
4. If not, move to the next element and repeat steps 2-3 until the end of the list.
5. If the end of the list is reached without finding the desired element, return -1 as the
index.

The time complexity of Brute Force search is O(n), where n is the number of elements in
the list. This is because the algorithm needs to compare the desired element with each
element of the list until it finds a match or reaches the end of the list. In the worst-case
scenario, the algorithm will have to check every element of the list, resulting in a linear time
complexity.
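
A minimal C sketch of this Brute Force (sequential) search; the function name and sample array are illustrative.

```
#include <stdio.h>

/* Brute force (sequential) search: compare key with every element. */
int linearSearch(const int arr[], int n, int key) {
    for (int i = 0; i < n; i++)
        if (arr[i] == key)
            return i;      /* found: return its index */
    return -1;             /* reached the end without a match */
}

int main(void) {
    int arr[] = {7, 5, 4, 8, 12, 10, 9, 6, 2, 4};
    printf("%d\n", linearSearch(arr, 10, 12));   /* prints 4 */
    return 0;
}
```
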
70. Match the pattern P= ABB from the given text T = AAAAABBABBABBBACBBCAABCCCACCC
by using Brute Force search.
71. Differentiate between brute force search and exhaustive search approach.
Brute force search and exhaustive search are often used interchangeably, as they share the
same basic idea. Both of these approaches involve trying all possible solutions to a problem
to find the one that works. However, there is a subtle difference between the two concepts.

Brute Force Search:


Brute force search is an algorithmic technique that involves trying all possible solutions to a
problem to find the one that works. In this approach, we consider every possible input and
evaluate the output for each input. This method is simple to implement, but it can be
computationally expensive. Brute force search is often used when we have no a priori
information about the problem or there is no better way to solve the problem.

Exhaustive Search:
Exhaustive search is a problem-solving approach that involves trying all possible solutions to a problem in a systematic or exhaustive manner. In this approach, we generate every possible combination of inputs and evaluate the output for each combination. Exhaustive search can be seen as a specific type of brute force search, where we systematically generate and evaluate every candidate from a structured search space (for example, all permutations or all subsets). In practice it is often paired with pruning techniques that use prior knowledge about the problem to discard parts of the search space early.
