AA Assignment
The harmonic number H(n) is the partial sum of the harmonic series:
H(n) = 1 + 1/2 + 1/3 + ⋯ + 1/n
A famous result in calculus (and algorithm analysis) shows that as n grows large, H(n)
behaves very similarly to the natural logarithm function. More precisely, it can be
approximated by:
H(n) ≈ ln(n) + γ
where γ ≈ 0.5772 is the Euler–Mascheroni constant.
A common method for approximating sums like H(n) is to compare them to an integral.
Consider the integral of 1/x from 1 to n:
∫₁ⁿ (1/x) dx = ln(n)
Lower Bound
The function 1/x is decreasing, so on each interval [i, i+1] we have 1/x ≤ 1/i. Each term
of the series therefore dominates the integral over the unit interval to its right:
1/i ≥ ∫ᵢⁱ⁺¹ (1/x) dx
Summing over i = 1, …, n − 1 (and noting H(n) ≥ H(n−1)):
∫₁ⁿ (1/x) dx ≤ H(n)
⇒ ln(n) ≤ H(n)
Upper Bound
For i ≥ 2, the same monotonicity gives 1/i ≤ ∫ᵢ₋₁ⁱ (1/x) dx, so each term in the series
(except the first) is bounded by the integral over the unit interval to its left. Summing
over i = 2, …, n:
H(n) − 1 ≤ ∫₁ⁿ (1/x) dx
⇒ H(n) ≤ 1 + ln(n)
For sufficiently large n, ln(n) dominates the constant 1. In fact, we can say there exists a
constant c (for example, c = 2) and a value n₀ (for example, n₀ = 3, since ln(n) ≥ 1 for all
n ≥ 3) such that for all n ≥ n₀:
H(n) ≤ 1 + ln(n) ≤ 2 ln(n)
Therefore:
H(n) = O(ln(n))
Since logarithms in any base differ only by a constant factor, it’s common to express this
simply as O(log n).
From the lower bound we already have:
H(n) ≥ ln(n)
This directly tells us that for all n ≥ 1, H(n) is at least ln(n). Therefore, there exists a
constant, say c′ = 1, such that H(n) ≥ c′ · ln(n) for all n ≥ 1, and hence:
H(n) = Ω(ln(n))
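As a quick numerical sanity check of these bounds, here is a short C sketch (the spot-check values of n are arbitrary):

#include <stdio.h>
#include <math.h>   // for log(); link with -lm on some systems

int main(void) {
    double h = 0.0;  // running harmonic sum H(n)
    for (int n = 1; n <= 1000000; n++) {
        h += 1.0 / n;
        if (n == 10 || n == 1000 || n == 1000000) {
            // Both inequalities ln(n) <= H(n) <= 1 + ln(n) should hold.
            printf("n=%7d  ln(n)=%.5f  H(n)=%.5f  1+ln(n)=%.5f\n",
                   n, log((double)n), h, 1.0 + log((double)n));
        }
    }
    return 0;
}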
4. Summary
Combining the lower and upper bounds gives ln(n) ≤ H(n) ≤ 1 + ln(n), so H(n) is sandwiched
between constant multiples of ln(n) for large n.
5. Final Answer
H(n) = Θ(log n)
1. Constant Time – O(1)
Key insight:
o The algorithm performs a fixed number of operations.
o Constant time operations are ideal for efficiency.
Example:
int x = array[5];
No matter how big the array is, this operation always takes the same amount of time.
Practical Impact:
Such operations are extremely efficient. When used properly, constant time routines form the building blocks of
more complex algorithms.
2. Logarithmic Time – O(log n)
Key insight:
o This typically happens when the problem size is reduced by a constant fraction in each step (for
example, halving).
o Logarithms “compress” large increases in n into relatively small increases in work.
Example:
Binary Search:
When searching for an element in a sorted array, binary search repeatedly divides the interval in half.
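A minimal C sketch of this strategy (assuming a sorted array a of length n; returns the index of target, or −1 if it is absent):

// Iterative binary search over a sorted array.
int binary_search(const int *a, int n, int target) {
    int lo = 0, hi = n - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;  // avoids overflow of (lo + hi)
        if (a[mid] == target) return mid;
        if (a[mid] < target) lo = mid + 1;  // discard the left half
        else hi = mid - 1;                  // discard the right half
    }
    return -1;  // not found
}

Each iteration halves the search interval, so at most about log₂ n + 1 iterations run.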
Practical Impact:
Algorithms with logarithmic growth are very scalable. Even huge datasets can be handled efficiently if the
algorithm reduces the problem size rapidly.
3. Linear Time – O(n)
Key insight:
o Every element is processed a fixed number of times, so doubling n roughly doubles the execution time.
Example:
Linear Search:
Scanning an unsorted list to find a target element.
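A minimal C sketch (returns the index of the first match, or −1 if the target is not found):

int linear_search(const int *a, int n, int target) {
    for (int i = 0; i < n; i++) {      // examine each element once
        if (a[i] == target) return i;
    }
    return -1;  // target absent: all n elements were checked
}

In the worst case (target absent) all n elements are inspected, so the running time grows linearly with n.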
Practical Impact:
Linear algorithms are straightforward and perform well for moderate input sizes. They are essential when each
element must be examined.
4. Linearithmic Time – O(n log n)
Key insight:
o The problem is divided (which gives the logarithmic factor) and then combined or processed linearly for
each division.
Example:
Merge Sort:
Merge sort divides an array into halves recursively (log n levels) and then merges these halves in O(n) time.
Combining these two parts yields a total running time of O(n log n).
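A compact C sketch of this scheme, sorting the half-open range [lo, hi); the caller-supplied scratch buffer tmp (same length as the array) is an illustrative design choice:

// Merge the sorted halves a[lo..mid) and a[mid..hi) using tmp as scratch space.
static void merge(int *a, int *tmp, int lo, int mid, int hi) {
    int i = lo, j = mid, k = lo;
    while (i < mid && j < hi)
        tmp[k++] = (a[i] <= a[j]) ? a[i++] : a[j++];
    while (i < mid) tmp[k++] = a[i++];
    while (j < hi)  tmp[k++] = a[j++];
    for (k = lo; k < hi; k++) a[k] = tmp[k];  // copy the merged run back
}

// Sort a[lo..hi): log n levels of halving, O(n) merge work per level.
void merge_sort(int *a, int *tmp, int lo, int hi) {
    if (hi - lo < 2) return;       // ranges of size 0 or 1 are already sorted
    int mid = lo + (hi - lo) / 2;
    merge_sort(a, tmp, lo, mid);   // sort the left half
    merge_sort(a, tmp, mid, hi);   // sort the right half
    merge(a, tmp, lo, mid, hi);    // combine the halves in O(n) time
}

merge_sort(a, tmp, 0, n) sorts the whole array; there are about log₂ n levels of recursion, each doing O(n) merging work.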
Practical Impact:
This is considered optimal for comparison-based sorting algorithms. Algorithms with O(n log n) behavior strike a
balance between division and merging steps.
5. Quadratic Time – O(n²)
Key insight:
o This usually results from having two nested loops iterating over the input.
Example:
Bubble Sort:
In bubble sort, each element is compared with almost every other element.
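A minimal C sketch; the two nested loops are exactly where the O(n²) comparisons come from:

void bubble_sort(int *a, int n) {
    for (int i = 0; i < n - 1; i++) {          // n − 1 passes over the array
        for (int j = 0; j < n - 1 - i; j++) {  // each pass compares adjacent pairs
            if (a[j] > a[j + 1]) {             // swap out-of-order neighbors
                int t = a[j]; a[j] = a[j + 1]; a[j + 1] = t;
            }
        }
    }
}

In total roughly n²/2 comparisons are made, which is Θ(n²).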
Practical Impact:
Quadratic time can quickly become inefficient as n grows, but is acceptable for very small datasets. Many naive
algorithms for problems like finding duplicates or checking for pair sums are quadratic.
6. Cubic Time – O(n³)
Key insight:
o Like quadratic algorithms, but with an extra level of iteration, making them even less scalable.
Example:
Naive Matrix Multiplication:
Multiplying two n×n matrices with the textbook triple-nested loop does n multiplications
for each of the n² output entries, giving O(n³) total work.
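A minimal C sketch (the fixed dimension N = 3 is purely illustrative):

#define N 3  // illustrative fixed dimension

// c = a * b for N-by-N matrices: three nested loops give O(N^3) work.
void mat_mul(const double a[N][N], const double b[N][N], double c[N][N]) {
    for (int i = 0; i < N; i++) {
        for (int j = 0; j < N; j++) {
            double sum = 0.0;
            for (int k = 0; k < N; k++)   // dot product of row i and column j
                sum += a[i][k] * b[k][j];
            c[i][j] = sum;
        }
    }
}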
Practical Impact:
While cubic algorithms are more computationally intensive than quadratic ones, in areas like 3D simulations or
solving certain combinatorial problems, they may still be practical for small n.
7. Exponential Time – O(2ⁿ) (or similar)
Key insight:
o Exponential running times often arise in brute-force approaches, recursive algorithms
without memoization, or problems whose solutions require examining every possible combination.
Example:
int fibonacci(int n) {
    if (n <= 1) return n;                        // base cases: F(0) = 0, F(1) = 1
    return fibonacci(n - 1) + fibonacci(n - 2);  // two recursive calls per level
}
The above implementation has exponential time complexity, as it recomputes subproblems many times.
Practical Impact:
Exponential algorithms are mostly limited to problems with very small input sizes. They often appear in
theoretical contexts or as baseline brute-force solutions before optimizations (like dynamic programming) are
applied.
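For contrast, here is a hedged sketch of the dynamic-programming remedy mentioned above, using top-down memoization (the cache array memo and its size bound are illustrative choices, not part of the original code):

static long long memo[91];  // zero-initialized; F(90) still fits in a long long

// Memoized Fibonacci: each subproblem is solved once, so the time drops to O(n).
long long fib_memo(int n) {
    if (n <= 1) return n;              // same base cases as before
    if (memo[n] != 0) return memo[n];  // reuse a cached result
    memo[n] = fib_memo(n - 1) + fib_memo(n - 2);
    return memo[n];
}

The naive version makes hundreds of millions of recursive calls for n = 40; the memoized one makes only O(n) calls.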
Problem Statement:
Solve the recurrence equation of the Tower of Hanoi problem using the repeated
substitution method. The recurrence is:
T(n) = 2T(n−1) + 1, with T(1) = 1.
Step-by-Step Proof
1. Recurrence Relation
To move n disks from the source rod to the target rod, an optimal solution performs:
1. Moving n − 1 disks from the source rod to the auxiliary rod (T(n−1) moves).
2. Moving the largest disk from the source rod to the target rod (1 move).
3. Moving n − 1 disks from the auxiliary rod to the target rod (T(n−1) moves).
Hence:
T(n) = 2T(n−1) + 1.
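A minimal C sketch mirroring these three steps (the rod labels and printing are illustrative):

#include <stdio.h>

// Move n disks from rod 'from' to rod 'to', using rod 'via' as the auxiliary.
void hanoi(int n, char from, char to, char via) {
    if (n == 1) {                                     // base case: T(1) = 1 move
        printf("move disk 1: %c -> %c\n", from, to);
        return;
    }
    hanoi(n - 1, from, via, to);                      // step 1: T(n-1) moves
    printf("move disk %d: %c -> %c\n", n, from, to);  // step 2: 1 move
    hanoi(n - 1, via, to, from);                      // step 3: T(n-1) moves
}

Calling hanoi(n, 'A', 'C', 'B') prints a number of moves that satisfies exactly the recurrence T(n) = 2T(n−1) + 1.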
2. Repeated Substitution
Second substitution (k = 2):
Substitute T(n−1) = 2T(n−2) + 1 into the recurrence (the recurrence itself is the k = 1 form):
T(n) = 2[2T(n−2) + 1] + 1 = 2²T(n−2) + 2 + 1.
Third substitution (k = 3):
Substitute T(n−2) = 2T(n−3) + 1:
T(n) = 2²[2T(n−3) + 1] + 2 + 1 = 2³T(n−3) + 2² + 2 + 1.
After k substitutions:
T(n) = 2ᵏT(n−k) + (2ᵏ⁻¹ + 2ᵏ⁻² + ⋯ + 2¹ + 2⁰)
where the parenthesized part is a geometric series.
The geometric series sums to:
∑ᵢ₌₀ᵏ⁻¹ 2ⁱ = 2ᵏ − 1
so that:
T(n) = 2ᵏT(n−k) + (2ᵏ − 1).
To reach the base case, choose k = n − 1 (so that n − k = 1):
T(n) = 2ⁿ⁻¹T(1) + (2ⁿ⁻¹ − 1).
Since T(1) = 1:
T(n) = 2ⁿ⁻¹ · 1 + 2ⁿ⁻¹ − 1 = 2ⁿ − 1.
3. Verification for Small n
n = 1: T(1) = 2¹ − 1 = 1. ✔️
n = 2: T(2) = 2² − 1 = 3.
Using the recurrence: T(2) = 2T(1) + 1 = 2 · 1 + 1 = 3. ✔️
n = 3: T(3) = 2³ − 1 = 7.
Using the recurrence: T(3) = 2T(2) + 1 = 2 · 3 + 1 = 7. ✔️
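The same check can be automated; a tiny C sketch comparing the recurrence against the closed form (the cutoff n ≤ 20 is arbitrary):

#include <stdio.h>

int main(void) {
    long long t = 1;  // T(1) = 1
    for (int n = 2; n <= 20; n++) {
        t = 2 * t + 1;                      // recurrence: T(n) = 2T(n-1) + 1
        long long closed = (1LL << n) - 1;  // closed form: 2^n - 1
        printf("n=%2d  recurrence=%lld  closed=%lld  %s\n",
               n, t, closed, t == closed ? "OK" : "MISMATCH");
    }
    return 0;
}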
Final Conclusion
Using repeated substitution, we derived the closed-form solution for the Tower of Hanoi
recurrence:
T(n) = 2ⁿ − 1.
This matches the known minimum number of moves required to solve the puzzle for n disks.
Key Takeaways:
o Repeated substitution unrolls the recurrence until the base case T(1) is reached, exposing
a geometric series that sums to a closed form.
o The closed form T(n) = 2ⁿ − 1 grows exponentially, so the number of moves roughly doubles
with each extra disk.