ADA 1st IA Question Bank
Divide and Conquer – This technique divides a problem into smaller sub-problems,
solves them recursively, and combines the results. Example: Merge Sort.
Effectiveness: All operations must be basic enough to be performed exactly and in finite
time.
Iterative Algorithm: Uses loops to repeat operations until a condition is met. Example:
Loop-based factorial.
Recursive Algorithm: Calls itself with a subset of the original problem. Example:
Recursive factorial.
7. What is Decrease and Conquer Technique? Mention its Variations with examples
This technique solves a problem by solving a smaller version of the same problem and
extending the solution. Variations:
Decrease by a constant: reduce the input size by one each step (e.g., Insertion Sort).
Decrease by a constant factor: reduce the input size by a fixed factor each step (e.g., Binary Search).
Variable size decrease: the amount of reduction varies from step to step (e.g., Euclid's GCD).
Optimization Problems: Find the best solution among many. (e.g., Shortest path)
10. Write an algorithm for brute force string matching and explain with example
Algorithm:
BruteForceMatch(T, P):
    n = length of text T
    m = length of pattern P
    for i from 0 to n - m:
        j = 0
        while j < m and T[i + j] == P[j]:
            j = j + 1
        if j == m:
            return i // match found at index i
    return -1 // no match
Example: Text = "abcdabc", Pattern = "abc" → Match found at index 0 and 4. Time Complexity:
O(nm)
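As a quick illustration (not part of the original notes), a Python sketch of the brute-force matcher that reports every match rather than just the first:

```python
def brute_force_match(text, pattern):
    """Slide the pattern over the text; return every index where it matches."""
    n, m = len(text), len(pattern)
    matches = []
    for i in range(n - m + 1):
        j = 0
        while j < m and text[i + j] == pattern[j]:
            j += 1
        if j == m:                 # all m characters matched at shift i
            matches.append(i)
    return matches

print(brute_force_match("abcdabc", "abc"))  # [0, 4]
```

This reproduces the example above: "abc" occurs at indices 0 and 4 of "abcdabc".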
Best Case: Minimum time required (e.g., Linear Search finding item at first index → O(1))
Worst Case: Maximum time taken for any input (e.g., O(n) if item is last)
Input size
Basic operations
Number of operations
Linear Search → O(n). Units: measured in terms of steps/basic operations, not actual seconds.
(Already Completed)
Constant Time (O(1)): The time remains constant regardless of input size. Example:
Accessing an element in an array using an index.
Logarithmic Time (O(log n)): The number of operations grows logarithmically with input
size. Example: Binary Search, which repeatedly halves the search space.
Linear Time (O(n)): Time increases linearly with input size. Example: Linear Search,
where each element is checked one by one.
Linearithmic Time (O(n log n)): Common in efficient sorting algorithms like Merge Sort
and Heap Sort. It represents a mix of linear and logarithmic growth.
Quadratic Time (O(n²)): Time increases with the square of the input size. Typical in
algorithms with nested loops like Bubble Sort and Selection Sort.
Exponential Time (O(2^n)): Time doubles with each additional input element. Seen in
algorithms like recursive solution to the Traveling Salesman Problem. Understanding
these classes helps predict performance and choose the most suitable algorithm for
large data sets.
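To make the O(log n) class concrete, here is a minimal binary search sketch (an added illustration, assuming a sorted input list): each iteration halves the remaining search space.

```python
def binary_search(a, target):
    """Binary search on a sorted list: halves the range each step -> O(log n)."""
    lo, hi = 0, len(a) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if a[mid] == target:
            return mid
        elif a[mid] < target:
            lo = mid + 1       # discard the left half
        else:
            hi = mid - 1       # discard the right half
    return -1                  # not found

print(binary_search([2, 5, 8, 12, 16, 23], 16))  # 4
```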
2. Explain Empirical Analysis of Linear Search and discuss the difference between
Mathematical and Empirical Analysis
Empirical Analysis involves running the algorithm with real input data and recording the
actual performance metrics like execution time and memory usage. To empirically
analyze linear search:
Use test data sets of varying sizes (e.g., 1000, 10000, 100000 elements).
Measure and record the time taken to search for an element in each case.
Differences:
Empirical Analysis is affected by factors like hardware, compiler, and programming
language, while Mathematical Analysis is platform-independent.
Empirical Analysis gives actual run-time values; Mathematical Analysis gives asymptotic
complexity.
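The empirical procedure described above can be sketched as a small timing harness (an added illustration; absolute times depend on the machine, but the growth trend should look roughly linear):

```python
import random
import time

def linear_search(a, target):
    """Check each element in turn; O(n) in the worst case."""
    for i, x in enumerate(a):
        if x == target:
            return i
    return -1

# Time the worst case (absent element) on data sets of varying sizes.
for n in (1000, 10000, 100000):
    data = list(range(n))
    random.shuffle(data)
    start = time.perf_counter()
    linear_search(data, -1)
    elapsed = time.perf_counter() - start
    print(f"n = {n}: {elapsed:.6f} s")
```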
Designing the algorithm: Choose an approach (brute force, divide and conquer, greedy,
etc.).
Proving correctness: Ensure the algorithm works for all valid inputs.
Testing: Run on various inputs to validate correctness and performance. This step-by-
step strategy helps systematically build robust and efficient algorithms.
4. Write Euclid's Algorithm to find GCD of two numbers and explain with example
Euclid’s algorithm is based on the principle that the GCD of two numbers a and b (a > b)
is the same as the GCD of b and a % b. Algorithm:
EuclidGCD(a, b):
    while b ≠ 0:
        temp = b
        b = a % b
        a = temp
    return a
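The algorithm above translates directly to Python; as a worked example, gcd(48, 18) → gcd(18, 12) → gcd(12, 6) → gcd(6, 0) = 6:

```python
def euclid_gcd(a, b):
    """Euclid's algorithm: gcd(a, b) = gcd(b, a mod b) until the remainder is 0."""
    while b != 0:
        a, b = b, a % b
    return a

print(euclid_gcd(48, 18))  # 6
```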
5. Explain space and time complexity and find the S(p) and T(n) of an algorithm to find
the sum of elements in an array
Time Complexity (T(n)) refers to the number of basic operations as a function of input
size. Space Complexity (S(p)) is the amount of memory used by an algorithm. Algorithm:
SumArray(A, n):
    sum = 0
    for i = 0 to n-1:
        sum = sum + A[i]
    return sum
Time Complexity T(n): O(n), since the loop performs one addition for each of the n elements.
Space Complexity S(p): O(1), as only a few variables (sum, i) are used regardless of input
size. The algorithm is efficient in both time and space, ideal for real-time systems and
memory-constrained environments.
(Already Completed)
6. What is Exhaustive Search? Explain with TSP Algorithm and realize its time and space
complexity
Exhaustive Search is a brute-force approach that tries all possible solutions to find the
correct or optimal one. It is guaranteed to work but is computationally expensive,
especially for large inputs.
Traveling Salesman Problem (TSP): Given a list of cities and the distances between each
pair, the TSP asks for the shortest possible route that visits each city exactly once and
returns to the origin city.
Example: For 4 cities A, B, C, D, there are (4-1)! = 6 possible tours starting and ending at A
(3 if a tour and its reverse are counted as the same). For each, calculate the tour cost and
select the minimum.
Time Complexity: O(n!) – grows factorially with the number of cities. Space Complexity:
O(n) for recursive stack or path storage.
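The exhaustive search can be sketched in Python with itertools.permutations; the 4-city distance matrix below is a hypothetical example, not from the original notes:

```python
from itertools import permutations

def tsp_brute_force(dist):
    """Try all (n-1)! tours that start and end at city 0; return the cheapest."""
    n = len(dist)
    best_cost, best_tour = float("inf"), None
    for perm in permutations(range(1, n)):          # fix city 0 as the start
        tour = (0,) + perm + (0,)
        cost = sum(dist[tour[i]][tour[i + 1]] for i in range(n))
        if cost < best_cost:
            best_cost, best_tour = cost, tour
    return best_cost, best_tour

# Hypothetical symmetric distances between cities A(0), B(1), C(2), D(3)
dist = [[0, 10, 15, 20],
        [10, 0, 35, 25],
        [15, 35, 0, 30],
        [20, 25, 30, 0]]
print(tsp_brute_force(dist))  # (80, (0, 1, 3, 2, 0))
```

Each added city multiplies the number of tours, which is exactly why the O(n!) running time makes exhaustive search impractical beyond small n.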
Advantages:
Simple to design and guaranteed to find the optimal solution.
Disadvantages:
Impractical for large inputs, since the number of candidate solutions grows factorially.
Types:
Big O (O): Upper bound – represents worst-case time complexity. Example: O(n^2) for
Bubble Sort.
Big Omega (Ω): Lower bound – represents best-case time complexity. Example: Ω(n) for
Bubble Sort (when the array is already sorted).
Big Theta (Θ): Tight bound – the running time is sandwiched between matching upper and
lower bounds. Example: Θ(n log n) for Merge Sort in all cases.
Visualization:
On a graph with input size (n) on the x-axis and time on the y-axis:
o Big O shows the upper limit line
o Big Ω shows the lower limit line
o Big Θ lies between the two and hugs the actual performance curve
Understanding these traits helps in writing better and more efficient algorithms.
Steps:
o Substitution Method
o Recursion Tree
o Master Theorem
Example: Merge Sort has the recurrence: T(n) = 2T(n/2) + O(n) → Using Master
Theorem: T(n) = O(n log n)
This method helps determine how recursive calls grow and how they affect total time.
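The Merge Sort recurrence can also be checked numerically; a small Python sketch (assuming the base case T(1) = 1 and powers of two, so the closed form is n·log2(n) + n):

```python
import math

def T(n):
    """Unroll the recurrence T(n) = 2*T(n/2) + n with T(1) = 1."""
    if n <= 1:
        return 1
    return 2 * T(n // 2) + n

# For powers of two the unrolled count equals n*log2(n) + n, i.e. O(n log n).
for n in (2, 8, 64, 1024):
    print(n, T(n), n * int(math.log2(n)) + n)
```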
Steps:
Example: Test linear search on arrays of size 10, 100, 1000, etc., and observe how time
increases.
Each approach has its strengths; combining both provides a complete performance
picture.
13. Advantages and Disadvantages of Empirical Analysis
Advantages:
Measures actual running time on real inputs, including constant factors hidden by asymptotic notation
Needs no mathematical derivation; applies to any implemented algorithm
Disadvantages:
Platform-dependent results
Time-consuming to implement
14. Solve problems discussed in class related to time complexity (operation method / step
count / step per execution / Asymptotic notation / limit for comparing)
Example: Find the largest element in an array
MaxElement(A, n):
    max = A[0]
    for i = 1 to n-1:
        if A[i] > max:
            max = A[i]
    return max
Step count: the for loop runs n-1 times. Step per execution: one comparison per iteration
→ O(1) each. Asymptotic notation: T(n) = O(n). Limit method: lim(n→∞) (n/n) = 1 ⇒ T(n)
∈ Θ(n)
15. Write an Algorithm to perform Bubble Sort and express its complexities
BubbleSort(A, n):
    for i = 0 to n-2:
        for j = 0 to n-i-2:
            if A[j] > A[j+1]:
                swap A[j] and A[j+1]
Time Complexity: Best O(n) (already sorted, with an early-exit flag), Worst and Average O(n²). Space Complexity: O(1).
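A runnable Python realization of Bubble Sort (added here for illustration), with the early-exit flag that gives the O(n) best case on already-sorted input:

```python
def bubble_sort(a):
    """Repeatedly swap adjacent out-of-order pairs; O(n^2) worst case."""
    n = len(a)
    for i in range(n - 1):
        swapped = False
        for j in range(n - i - 1):
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
                swapped = True
        if not swapped:        # no swaps -> already sorted, O(n) best case
            break
    return a

print(bubble_sort([5, 1, 4, 2, 8]))  # [1, 2, 4, 5, 8]
```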
(Already Completed)
(Already Completed)
1. Write an Algorithm and realize a program to sort N elements using Insertion Sort and
also perform its time and space analysis.
InsertionSort(A, n):
    for i = 1 to n-1:
        key = A[i]
        j = i - 1
        while j >= 0 and A[j] > key:
            A[j + 1] = A[j]
            j = j - 1
        A[j + 1] = key
Explanation:
The algorithm treats the first element as sorted. Then it picks the next element and
inserts it into the correct position within the sorted part by shifting elements that are
greater than the key.
This process is repeated until all elements are placed in order.
Best Case: O(n) → when the array is already sorted; only comparisons are made, no
shifts.
Worst Case: O(n²) → when the array is sorted in reverse order; maximum number of
shifts.
Space Complexity:
O(1) – Since insertion sort is done in-place, no extra memory is required except a few
variables.
Stability: Yes. It does not change the relative order of equal elements.
Adaptiveness: Efficient when array is nearly sorted. Can be made adaptive by checking if
any shift occurred.
Use Cases:
Small datasets
Advantages:
Disadvantages:
Program in Python:
arr = [5, 2, 4, 6, 1, 3]
for i in range(1, len(arr)):
    key = arr[i]
    j = i - 1
    while j >= 0 and arr[j] > key:
        arr[j + 1] = arr[j]
        j -= 1
    arr[j + 1] = key
print(arr)
Output:
[1, 2, 3, 4, 5, 6]
Graphical Visualization: A visual plot of time vs input size would show a linear curve for
best case and a parabolic curve for worst case.
2. Explain The Tower of Hanoi problem with an Algorithm, realize a program using
recursion? Explain time complexity using mathematical analysis.
Problem Statement: Tower of Hanoi is a classic recursive problem where you have three pegs
and n disks of different sizes. The objective is to move all the disks from the source peg to the
destination peg using an auxiliary peg, following these rules:
1. Only one disk may be moved at a time.
2. Only the top disk of a peg may be moved.
3. A larger disk may never be placed on top of a smaller disk.
Algorithm:
TowerOfHanoi(n, source, auxiliary, destination):
    if n == 1:
        move disk 1 from source to destination
    else:
        TowerOfHanoi(n-1, source, destination, auxiliary)
        move disk n from source to destination
        TowerOfHanoi(n-1, auxiliary, source, destination)
Explanation:
The recursive idea is to move n-1 disks to the auxiliary peg, move the nth disk to the
destination peg, and finally move the n-1 disks from the auxiliary peg to the destination
peg.
Python Program:
def tower_of_hanoi(n, source, auxiliary, destination):
    if n == 1:
        print(f"Move disk 1 from {source} to {destination}")
        return
    tower_of_hanoi(n - 1, source, destination, auxiliary)
    print(f"Move disk {n} from {source} to {destination}")
    tower_of_hanoi(n - 1, auxiliary, source, destination)

n = 3
tower_of_hanoi(n, 'A', 'B', 'C')
Time Complexity (Mathematical Analysis): Let T(n) be the number of moves for n disks. The
recurrence relation is:
T(n) = 2T(n - 1) + 1
Solving:
T(1) = 1
T(2) = 3
T(3) = 7
...
T(n) = 2^n - 1
Space Complexity: O(n) for the recursion stack (one frame per disk).
Applications:
Teaching recursion
Visualization: Can be visualized using animation or diagram of peg and disk movement.
Conclusion: Tower of Hanoi demonstrates the power of recursion. Despite being simple in
concept, the number of operations grows exponentially, making it a great example to study
recursive growth and performance trade-offs.
Algorithm:
MatrixMultiply(A, B, m, n, p):
    for i = 0 to m-1:
        for j = 0 to p-1:
            C[i][j] = 0
            for k = 0 to n-1:
                C[i][j] = C[i][j] + A[i][k] * B[k][j]
    return C
Explanation: Each element C[i][j] is computed as the dot product of the ith row of A and the jth
column of B. This requires multiplying and summing n elements.
Dry Run Example: Let A = [[1, 2], [3, 4]] and B = [[5, 6], [7, 8]]. Then C[0][0] = 1·5 + 2·7 = 19,
C[0][1] = 1·6 + 2·8 = 22, C[1][0] = 3·5 + 4·7 = 43, C[1][1] = 3·6 + 4·8 = 50.
Program in Python:
def matrix_multiply(A, B):
    m, n = len(A), len(A[0])
    p = len(B[0])
    C = [[0] * p for _ in range(m)]
    for i in range(m):
        for j in range(p):
            for k in range(n):
                C[i][j] += A[i][k] * B[k][j]
    return C

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
result = matrix_multiply(A, B)
for row in result:
    print(row)
Output:
[19, 22]
[43, 50]
Explanation: BFS explores all vertices at the current depth before moving to the next
depth level, using a queue. It’s suitable for finding the shortest path in unweighted
graphs.
Example: In a graph with vertices {A, B, C, D} and edges {A-B, A-C, B-D}, BFS starting at A
visits A, B, C, then D.
Explanation: DFS explores as far as possible along a branch before backtracking, using a
stack (or recursion). It’s useful for topological sorting or detecting cycles.
BFS Algorithm:
Algorithm BFS(G, s)
    Initialize queue Q
    Enqueue s into Q
    Mark s as visited
    while Q is not empty do
        v ← Dequeue(Q)
        Process v
        for each neighbor u of v do
            if u is not visited then
                Mark u as visited
                Enqueue u into Q
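A Python realization of BFS (added for illustration), using the adjacency list of the example graph with edges A-B, A-C, B-D:

```python
from collections import deque

def bfs(graph, start):
    """Breadth-first search; returns vertices in the order they are visited."""
    visited = {start}
    order = []
    q = deque([start])
    while q:
        v = q.popleft()            # FIFO queue -> level-by-level exploration
        order.append(v)
        for u in graph[v]:
            if u not in visited:
                visited.add(u)
                q.append(u)
    return order

graph = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A"], "D": ["B"]}
print(bfs(graph, "A"))  # ['A', 'B', 'C', 'D']
```

This matches the example above: starting at A, BFS visits A, B, C, then D.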
DFS Algorithm (Recursive):
Algorithm DFS(G, v, visited)
    Mark v as visited
    Process v
    for each neighbor u of v do
        if u is not visited then
            DFS(G, u, visited)
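The recursive DFS in Python (added for illustration), on the same example graph:

```python
def dfs(graph, v, visited=None, order=None):
    """Recursive depth-first search; returns vertices in visit order."""
    if visited is None:
        visited, order = set(), []
    visited.add(v)
    order.append(v)
    for u in graph[v]:
        if u not in visited:
            dfs(graph, u, visited, order)   # go deep before backtracking
    return order

graph = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A"], "D": ["B"]}
print(dfs(graph, "A"))  # ['A', 'B', 'D', 'C']
```

Unlike BFS, DFS follows the A→B→D branch to its end before backtracking to visit C.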
Complexity Analysis:
1. BFS Complexity:
o Time Complexity:
Adjacency List: O(V + E), where V is vertices, E is edges (each vertex and
edge processed once).
Adjacency Matrix: O(V²), since each vertex's full row must be scanned.
o Space Complexity: O(V) for the queue and visited array.
2. DFS Complexity:
o Time Complexity: O(V + E) with an adjacency list; O(V²) with an adjacency matrix.
o Space Complexity: O(V) for visited array + O(V) for recursion stack = O(V).
Summary: Both BFS and DFS run in O(V + E) time with adjacency lists and use O(V) extra
space; BFS uses a queue, DFS a stack or recursion.
5. Explain the mathematical Analysis of an Algorithm, its general plan, and find the time
efficiency for finding the duplicate element in an array.
General Plan:
1. Identify the input size parameter and the algorithm's basic operation.
2. Count Operations: Set up a sum for how many times the basic operation executes.
3. Establish Recurrence (if recursive): For recursive algorithms, set up and solve
recurrence relations.
4. Simplify the count and express it asymptotically.
Assumption: Array A of size n contains integers from 1 to n-1, with exactly one duplicate.
Algorithm FindDuplicate(A, n)
    actualSum ← 0
    for i ← 0 to n-1 do
        actualSum ← actualSum + A[i]
    expectedSum ← (n-1) * n / 2      // sum of 1 .. n-1
    return actualSum - expectedSum   // the duplicated value
Time Efficiency: the loop performs n additions → T(n) ∈ Θ(n), with O(1) extra space.
Alternative Approach (Sorting): Sorting the array (O(n log n)) and checking adjacent
elements (O(n)) yields O(n log n), but the sum method is more efficient.
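The sum method can be sketched in Python (a minimal illustration under the stated assumption that the array of size n holds 1 to n-1 with exactly one value repeated):

```python
def find_duplicate(a):
    """Array of size n holds 1..n-1 with one repeated value.
    duplicate = actual sum - expected sum of 1..n-1: O(n) time, O(1) space."""
    n = len(a)
    expected = (n - 1) * n // 2      # sum of 1 .. n-1
    return sum(a) - expected

print(find_duplicate([1, 3, 4, 2, 2]))  # 2
```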
Asymptotic Notations:
Definition: Asymptotic notations describe the growth rate of an algorithm’s running time
as input size approaches infinity.
Types:
1. Big O (O): Upper bound, f(n) = O(g(n)) if ∃ constants c, n₀ such that f(n) ≤ c·g(n)
for all n ≥ n₀.
2. Big Omega (Ω): Lower bound, f(n) = Ω(g(n)) if ∃ constants c, n₀ such that f(n) ≥
c·g(n) for all n ≥ n₀.
3. Big Theta (Θ): Tight bound, f(n) = Θ(g(n)) if f(n) = O(g(n)) and f(n) = Ω(g(n)).
Proofs:
Compare terms: n² ≤ 2^n for large n (e.g., n ≥ 10, since 2^10 = 1024 > 100 = 10²).
Upper Bound (O): f(n) = 1/2 n² - 3n ≤ 1/2 n² for n ≥ 1 (since -3n ≤ 0).
Lower Bound (Ω): f(n) = 1/2 n² - 3n ≥ 1/4 n² for n ≥ 12 (since -3n ≥ -3n²/12 = -1/4 n²).
Compute limit: lim (n→∞) f(n)/g(n) = lim (n→∞) (20n³ - 3)/n³ = lim (n→∞) (20 - 3/n³) =
20.
7. Write an Algorithm to perform Selection Sort, realize its program, and express its
complexities.
Algorithm SelectionSort(A, n)
    for i ← 0 to n-2 do
        minIndex ← i
        for j ← i+1 to n-1 do
            if A[j] < A[minIndex] then
                minIndex ← j
        Swap(A[i], A[minIndex])
    return A
Procedure SelectionSort(A, n)
    for i = 0 to n-2
        minIndex = i
        for j = i+1 to n-1
            if A[j] < A[minIndex]
                minIndex = j
        Swap(A[i], A[minIndex])
End
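A runnable Python version of Selection Sort (added for illustration):

```python
def selection_sort(a):
    """Each pass selects the minimum of the unsorted suffix and swaps it
    into place; always n(n-1)/2 comparisons -> Theta(n^2)."""
    n = len(a)
    for i in range(n - 1):
        min_index = i
        for j in range(i + 1, n):
            if a[j] < a[min_index]:
                min_index = j
        a[i], a[min_index] = a[min_index], a[i]
    return a

print(selection_sort([64, 25, 12, 22, 11]))  # [11, 12, 22, 25, 64]
```

Note that at most one swap happens per pass, which is why selection sort minimizes the number of swaps even though its comparison count is quadratic.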
Complexity Analysis:
Time Complexity:
o Inner loop: For i=0, n-1 comparisons; i=1, n-2 comparisons; ...; i=n-2, 1
comparison.
o Total: (n-1) + (n-2) + ... + 1 = n(n-1)/2 → Θ(n²) in all cases.
Space Complexity: O(1), since sorting is done in place with only one swap per pass.
8. Explain the mathematical Analysis of an Algorithm, its general plan, and find the time
efficiency for matrix multiplication.
General Plan:
Algorithm MatrixMultiplication(A, B, n)
    for i ← 0 to n-1 do
        for j ← 0 to n-1 do
            C[i][j] ← 0
            for k ← 0 to n-1 do
                C[i][j] ← C[i][j] + A[i][k] * B[k][j]
    return C
Count Operations: the innermost multiplication executes n × n × n = n³ times, so T(n) ∈ Θ(n³).
9. Explain the mathematical Analysis of an Algorithm, its general plan, and find the time
efficiency to find the largest element in an Array.
General Plan: Identify operations, count them, sum across iterations, express
asymptotically.
Algorithm FindMax(A, n)
    max ← A[0]
    for i ← 1 to n-1 do
        if A[i] > max then
            max ← A[i]
    return max
Count Operations: the comparison A[i] > max executes n-1 times, so T(n) ∈ Θ(n).
10. Explain the mathematical Analysis of an Algorithm, its general plan, and find the time
efficiency for finding Factorial of a given number.
Algorithm Factorial(n)
    // Output: n!
    if n = 0 then
        return 1
    return n * Factorial(n-1)
Recurrence Relation: T(n) = T(n-1) + 1 for n > 0, with T(0) = 1.
Solve Recurrence: T(n) = T(n-1) + 1 = T(n-2) + 2 = ... = T(0) + n = n + 1 → T(n) ∈ Θ(n).
Alternative (Iterative):
Algorithm FactorialIterative(n)
    result ← 1
    for i ← 1 to n do
        result ← result * i
    return result
Total: O(n).
Summary: Time Complexity = O(n); Space Complexity = O(n) (recursive, due to call stack) or O(1)
(iterative).
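Both versions translate directly to Python (added for illustration); the recursive one uses O(n) stack frames, the iterative one O(1) extra space:

```python
def factorial_recursive(n):
    """T(n) = T(n-1) + 1 multiplications -> O(n) time, O(n) call stack."""
    if n == 0:
        return 1
    return n * factorial_recursive(n - 1)

def factorial_iterative(n):
    """Same O(n) time, but O(1) extra space."""
    result = 1
    for i in range(1, n + 1):
        result *= i
    return result

print(factorial_recursive(5), factorial_iterative(5))  # 120 120
```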