
When is an algorithm called a "Randomized Algorithm"? What is the advantage of using randomization in these algorithms? Explain with the help of an example.


Randomized Algorithm

An algorithm is called a "Randomized Algorithm" if it makes random choices during its execution to influence its behavior. These algorithms use randomness to achieve better performance in expectation or to simplify the implementation of complex problems.

Advantages of Using Randomization

1. Improved Performance: Randomization often helps achieve better average-case performance compared to deterministic algorithms.
2. Simplicity: Some problems are easier to solve using random choices rather than deterministic logic.
3. Avoiding Worst-Case Scenarios: Randomization helps in avoiding specific worst-case inputs that might degrade the performance of a deterministic algorithm.
4. Load Balancing: In distributed systems, randomization helps in balancing the workload efficiently.
5. Security: Many cryptographic algorithms use randomization to enhance security.

Example: Randomized QuickSort

A classic example of a randomized algorithm is Randomized QuickSort.

Deterministic QuickSort

In the standard QuickSort, we choose a fixed pivot (e.g., the first or last element). If the input is already sorted or nearly sorted, the worst-case time complexity becomes O(n²).

Randomized QuickSort

Instead of selecting a fixed pivot, we randomly choose a pivot from the array. This avoids the worst-case scenario for already sorted inputs.

Algorithm Steps:

1. Select a pivot randomly.
2. Partition the array around the pivot.
3. Recursively apply the same process to the left and right subarrays.

Why Randomization Helps?

The expected time complexity remains O(n log n), irrespective of the input order.
Randomization ensures that no specific input consistently causes poor performance.

Code Example (Python):

python

import random

def randomized_quick_sort(arr):
    if len(arr) <= 1:
        return arr
    pivot = random.choice(arr)  # Randomly selecting a pivot
    left = [x for x in arr if x < pivot]
    middle = [x for x in arr if x == pivot]
    right = [x for x in arr if x > pivot]
    return randomized_quick_sort(left) + middle + randomized_quick_sort(right)

# Example usage
arr = [10, 80, 30, 90, 40, 50, 70]
sorted_arr = randomized_quick_sort(arr)
print(sorted_arr)

Conclusion

A randomized algorithm introduces randomness in its execution to improve efficiency, avoid worst-case scenarios, and simplify problem-solving. Randomized QuickSort is a prime example where randomization helps in achieving an expected O(n log n) complexity, making it more reliable in practice than its deterministic counterpart.


Give the formal definition of the string matching problem and a naïve algorithm for its solution. What is the complexity of your algorithm? Give an example to show that the bound you have given is tight.


Formal Definition of the String Matching Problem

The string matching problem is defined as follows:

Given a text T of length n and a pattern P of length m, find all occurrences of P as a contiguous substring in T.

Formally, we need to find all indices i (where 0 ≤ i ≤ n − m) such that:

T[i : i + m − 1] = P[0 : m − 1]

where T[i : i + m − 1] represents the substring of T starting at index i and of length m.


Naïve String Matching Algorithm

The naïve algorithm (also called the brute-force approach) works by sliding the pattern P over the text T one character at a time and checking for a match at each position.

Algorithm Steps:

1. Iterate over all possible starting positions i in T from 0 to n − m.
2. For each position i, compare the substring T[i : i + m − 1] with P.
3. If they match, store or print the index i.
4. Continue until all positions are checked.


Naïve String Matching Algorithm (Python)

python

def naive_string_matching(T, P):
    n, m = len(T), len(P)
    positions = []  # List to store matching indices
    for i in range(n - m + 1):  # Iterate over all possible shifts
        match = True
        for j in range(m):  # Compare character by character
            if T[i + j] != P[j]:
                match = False
                break
        if match:
            positions.append(i)
    return positions

# Example usage
text = "abcabcabc"
pattern = "abc"
matches = naive_string_matching(text, pattern)
print("Pattern found at indices:", matches)

Output:

Pattern found at indices: [0, 3, 6]


Time Complexity Analysis

Outer loop runs for (n − m + 1) iterations.
Inner loop runs for at most m comparisons in each iteration.
Worst-case complexity: O(nm).

Tight Bound Example

Consider the worst-case scenario where:

1. The text consists of repeated characters: T = "aaaaaaaaaa" (length n).
2. The pattern is repetitive but differs in its last character: P = "aaaab" (length m).

Since every shift matches the first m − 1 characters and fails only at the last one, the algorithm performs nearly n × m character comparisons.

Thus, for n = 10 and m = 5:

The algorithm checks 6 possible shifts.
Each shift compares 5 characters.
Total comparisons = 6 × 5 = 30.

This confirms the tight bound of O(nm).

Conclusion

The naïve string matching algorithm checks all possible alignments, leading to O(nm) worst-case complexity. This makes it inefficient for large texts, motivating optimized algorithms like Knuth-Morris-Pratt (KMP) and Boyer-Moore.


Describe a naïve string matching algorithm and show the comparisons that the naïve string matcher makes for the pattern P = 0001 in the text T = 000010001010001.


Naïve String Matching Algorithm

The naïve string matching algorithm slides the pattern over the text one character at a time and checks for a match at each position.

Algorithm Steps:

1. Iterate over all possible starting positions i in T from 0 to n − m.
2. For each position i, compare the substring T[i : i + m − 1] with P character by character.
3. If a complete match is found, store or print the index i.
4. Continue until all positions are checked.

Given Input:

Text T = "000010001010001" (length n = 15)
Pattern P = "0001" (length m = 4)

Step-by-Step Comparisons
| Shift i | Compare T[i : i + 4] with P = 0001 | Match? |
|---|---|---|
| 0 | 0000 vs. 0001 | ❌ (Mismatch at last character) |
| 1 | 0001 vs. 0001 | ✅ (Match found at index 1) |
| 2 | 0010 vs. 0001 | ❌ (Mismatch at third character) |
| 3 | 0100 vs. 0001 | ❌ (Mismatch at second character) |
| 4 | 1000 vs. 0001 | ❌ (Mismatch at first character) |
| 5 | 0001 vs. 0001 | ✅ (Match found at index 5) |
| 6 | 0010 vs. 0001 | ❌ (Mismatch at third character) |
| 7 | 0101 vs. 0001 | ❌ (Mismatch at second character) |
| 8 | 1010 vs. 0001 | ❌ (Mismatch at first character) |
| 9 | 0100 vs. 0001 | ❌ (Mismatch at second character) |
| 10 | 1000 vs. 0001 | ❌ (Mismatch at first character) |
| 11 | 0001 vs. 0001 | ✅ (Match found at index 11) |

Matches Found

The pattern "0001" appears at indices 1, 5, and 11.

Time Complexity Analysis

Outer loop runs for (n − m + 1) = (15 − 4 + 1) = 12 shifts.
Each shift requires at most O(m) = O(4) character comparisons.
Worst case: (n − m + 1) × m = 12 × 4 = 48 comparisons, i.e., O(nm).

Conclusion

The naïve string matching algorithm successfully finds the pattern at indices 1, 5, and 11 in O(nm) time. Although simple, this method is inefficient for large texts, which is why optimized algorithms like Knuth-Morris-Pratt (KMP) or Boyer-Moore are preferred for practical applications.


Give Rabin-Karp algorithm for string matching and show how it works on a suitable example.


Rabin-Karp Algorithm for String Matching

The Rabin-Karp algorithm is a string matching algorithm that uses hashing to efficiently find a pattern in a text. Instead of directly comparing substrings, it compares hash values, which makes it faster in many cases.

Algorithm Overview

1. Compute the hash value of the pattern P.
2. Compute the hash values of all substrings of the text T with the same length as P.
3. If a substring's hash value matches P's hash value, perform a direct character comparison to confirm the match.
4. Use a rolling hash to efficiently compute the hash values of subsequent substrings.

Rolling Hash Function

We use a rolling hash to compute each new window's hash in constant time:

H(T[i + 1 : i + m]) = (d ⋅ (H(T[i : i + m − 1]) − T[i] ⋅ d^(m−1)) + T[i + m]) mod q

Where:

d is the base (commonly 256 for ASCII characters).
q is a prime number used as the modulus to avoid overflow.
H(T[i : i + m − 1]) is the hash value of the substring of length m starting at index i.

Rabin-Karp Algorithm (Python)

python

def rabin_karp(T, P, d=256, q=101):  # d=256 (ASCII), q=101 (prime)
    n, m = len(T), len(P)
    h_P = 0  # Hash value for pattern
    h_T = 0  # Hash value for text window
    h = 1  # d^(m-1) % q
    positions = []  # Stores indices of matches

    # Precompute h = d^(m-1) % q
    for _ in range(m - 1):
        h = (h * d) % q

    # Compute initial hash values for P and first window of T
    for i in range(m):
        h_P = (d * h_P + ord(P[i])) % q
        h_T = (d * h_T + ord(T[i])) % q

    # Slide over text
    for i in range(n - m + 1):
        # If hash matches, check characters
        if h_P == h_T:
            if T[i:i + m] == P:
                positions.append(i)  # Confirm match

        # Compute hash for next window
        if i < n - m:
            h_T = (d * (h_T - ord(T[i]) * h) + ord(T[i + m])) % q
            if h_T < 0:  # Ensure positive hash
                h_T += q

    return positions

# Example usage
text = "3141592653589793"
pattern = "979"
matches = rabin_karp(text, pattern)
print("Pattern found at indices:", matches)

Example Walkthrough

Input:

Text T = "3141592653589793" (length n = 16)

Pattern P = "979" (length m = 3)
Parameters: d = 10, q = 101

Step-by-Step Execution

Let's assume base d = 10 for clarity.

1. Compute Hash for Pattern:

H(P) = (9 × 10² + 7 × 10¹ + 9 × 10⁰) mod 101 = 979 mod 101 = 70

2. Compute Hash for First Window of Text (314):

H(314) = (3 × 10² + 1 × 10¹ + 4 × 10⁰) mod 101 = 314 mod 101 = 11

No match.

3. Slide the Window and Compute New Hashes:

Use the rolling hash formula to efficiently compute the next hash.
If at any index the hash matches H(P), perform a direct character check.

Final Matches:

The pattern "979" is found at index 12.

Time Complexity Analysis

Hash computation: O(m)
Sliding window updates: O(n)
Character checks (rare, for false positives only): O(m)

Overall Complexity:

Best/Average case: O(n)
Worst case (if all hashes collide): O(nm) (same as naive)

Advantages of Rabin-Karp

✅ Efficient for multiple pattern matching (e.g., plagiarism detection).
✅ Rolling hash improves efficiency over naive matching.
✅ Expected complexity is O(n), much faster than O(nm).

However, false positives (hash collisions) require extra verification.


Show the comparisons that the Rabin-Karp algorithm makes for the pattern P = 0001 in the text T = 000010001010001.


Rabin-Karp Algorithm - Step-by-Step Execution

We will apply the Rabin-Karp algorithm to find the pattern P = "0001" in the text T = "000010001010001".

Given Data

Text (T): "000010001010001" (length n = 15)
Pattern (P): "0001" (length m = 4)
Base (d): 10 (assuming numeric representation for simplicity)
Prime Modulus (q): 101 (to reduce collisions)

Step 1: Compute Initial Hash Values

Using the formula:

H(P) = (d^(m−1) × P₀ + d^(m−2) × P₁ + ... + P₍m−1₎) mod q

Compute Hash for Pattern P ("0001"):

H(P) = (0 × 10³ + 0 × 10² + 0 × 10¹ + 1) mod 101 = 1 mod 101 = 1

Compute Hash for First Window of T ("0000"):

H(T₀) = (0 × 10³ + 0 × 10² + 0 × 10¹ + 0) mod 101 = 0 mod 101 = 0

No match.

Step 2: Slide the Window & Compute Rolling Hash

Using the rolling hash formula:

H(T₍i+1₎) = (d × (H(Tᵢ) − T[i] × d^(m−1)) + T[i + m]) mod q

Comparisons for Each Shift
| Shift i | Window T[i : i + 4] | H(window) | Matches H(P) = 1? | Character Check? |
|---|---|---|---|---|
| 0 | "0000" | 0 | ❌ (No match) | No |
| 1 | "0001" | 1 | ✅ (Match) | Yes (Match at index 1) |
| 2 | "0010" | 10 | ❌ (No match) | No |
| 3 | "0100" | 100 | ❌ (No match) | No |
| 4 | "1000" | 91 | ❌ (No match) | No |
| 5 | "0001" | 1 | ✅ (Match) | Yes (Match at index 5) |
| 6 | "0010" | 10 | ❌ (No match) | No |
| 7 | "0101" | 0 | ❌ (No match) | No |
| 8 | "1010" | 0 | ❌ (No match) | No |
| 9 | "0100" | 100 | ❌ (No match) | No |
| 10 | "1000" | 91 | ❌ (No match) | No |
| 11 | "0001" | 1 | ✅ (Match) | Yes (Match at index 11) |

Final Matches

The pattern "0001" appears at indices 1, 5, and 11.

Complexity Analysis

Initial Hash Computation: O(m) = O(4)
Sliding Window Hash Updates: O(n − m) = O(15 − 4) = O(11)
Character Comparisons (only for matches): O(m) per match
In this case, 3 matches → 3 × 4 = 12 extra operations.

Overall, the expected time complexity is O(n), significantly better than the naïve O(nm).

6. Distinguish clearly between the following classes of problems: (i) P (ii) NP (iii) NP-Complete.
7. Explain the following key concepts in relation to NPC:
a. Decision problems vs. Optimization problems
b. Reductions
8. What are approximation algorithms? What are the situations in which approximation algorithms are useful?
9. What is the significance of the approximation ratio of an approximation algorithm?
10. Define the following in the context of approximation algorithms:
a. Approximation scheme
b. Polynomial-time approximation scheme
c. Fully polynomial-time approximation scheme
11. Give an approximation algorithm for vertex cover and describe its running time analysis and approximation guarantee.
12. What is the set-covering problem? Give the outline of a greedy approximation algorithm for the set-covering problem. How good is this algorithm?


Here are detailed answers to your questions:


6. Classes of Problems: P, NP, and NP-Complete

1. P (Polynomial Time):
The class of problems that can be solved in polynomial time, i.e., in O(n^k) for some constant k.
Example: Sorting (Merge Sort, Quick Sort), finding the shortest path (Dijkstra's Algorithm).

2. NP (Nondeterministic Polynomial Time):
Problems for which a solution can be verified in polynomial time, even if finding the solution may take longer.
Example: The Hamiltonian Path problem (finding a path visiting all vertices exactly once).

3. NP-Complete (NPC):
A subset of NP problems that are at least as hard as any other NP problem.
If an NP-complete problem can be solved in polynomial time, then all NP problems can be solved in polynomial time.
Example: Traveling Salesman Problem (TSP), 3-SAT problem.


7. Key Concepts in NPC

a. Decision Problems vs. Optimization Problems

Decision Problems: Problems where the answer is YES or NO.
Example: "Does a Hamiltonian cycle exist in this graph?"
Optimization Problems: Problems where the goal is to find the best solution among many possible solutions.
Example: "Find the shortest Hamiltonian cycle."

b. Reductions

A method to transform one problem into another in polynomial time.
If problem A is reduced to problem B, and B is solvable in polynomial time, then A is also solvable in polynomial time.
Used to prove NP-completeness.


8. Approximation Algorithms

Approximation algorithms find near-optimal solutions for NP-hard problems in polynomial time.
Used when finding an exact solution is computationally infeasible.
Example: The greedy algorithm for the Vertex Cover problem.


9. Approximation Ratio

Measures how close an approximation algorithm's solution is to the optimal solution.
Defined as:

R = Solution Cost (Approximate Algorithm) / Solution Cost (Optimal Algorithm)

A ratio closer to 1 is better.


10. Approximation Algorithms Concepts

a. Approximation Scheme

An algorithm that can achieve an approximation ratio of (1 + ε) for any small ε > 0.

b. Polynomial-Time Approximation Scheme (PTAS)

An approximation scheme where the runtime is polynomial in the input size for every fixed ε.

Example: The Knapsack problem has a PTAS.

c. Fully Polynomial-Time Approximation Scheme (FPTAS)

A PTAS where the runtime is polynomial in both the input size and 1/ε.

Example: The 0/1 Knapsack problem has an FPTAS.

11. Approximation Algorithm for Vertex Cover

Algorithm (Greedy 2-Approximation)

1. Pick an arbitrary edge (u, v) from the graph.
2. Add both u and v to the vertex cover.
3. Remove all edges covered by u or v.
4. Repeat until all edges are covered.

Running Time Analysis

The algorithm runs in O(E) since each edge is considered at most once.

Approximation Guarantee

The obtained cover is at most twice the size of the optimal vertex cover.
Approximation Ratio: 2-approximation.
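
A minimal Python sketch of this 2-approximation (the function name and example edge list are illustrative assumptions):

python

def approx_vertex_cover(edges):
    """Greedy 2-approximation: repeatedly pick an uncovered edge
    and add both of its endpoints to the cover."""
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:  # edge still uncovered
            cover.add(u)
            cover.add(v)
    return cover

# Example usage (hypothetical graph)
edges = [(0, 1), (0, 2), (1, 3), (2, 3), (3, 4)]
print(approx_vertex_cover(edges))  # {0, 1, 2, 3} for this edge order

Each picked edge forces at least one of its endpoints into any optimal cover, which is where the factor-2 guarantee comes from.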

12. Set Covering Problem & Greedy Approximation

Set Cover Problem

Given a universe U and a collection of subsets S1, S2, ..., Sm, find the minimum number of subsets whose union covers U.

Greedy Approximation Algorithm

1. Start with an empty cover.
2. Pick the subset that covers the most uncovered elements.
3. Repeat until all elements are covered.

Performance Analysis

Approximation Guarantee: ln(n) (harmonic series bound).
Runs in O(n log n) time.
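
A short Python sketch of this greedy strategy (the function name and example instance are illustrative assumptions):

python

def greedy_set_cover(universe, subsets):
    """Greedy approximation: repeatedly pick the subset that covers
    the most still-uncovered elements."""
    uncovered = set(universe)
    cover = []
    while uncovered:
        best = max(subsets, key=lambda s: len(s & uncovered))
        if not best & uncovered:  # nothing new can be covered
            raise ValueError("universe cannot be covered")
        cover.append(best)
        uncovered -= best
    return cover

# Example usage (hypothetical instance)
universe = {1, 2, 3, 4, 5}
subsets = [{1, 2, 3}, {2, 4}, {3, 4}, {4, 5}]
print(greedy_set_cover(universe, subsets))  # [{1, 2, 3}, {4, 5}]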

Printed using ChatGPT to PDF, powered by PDFCrowd HTML to PDF API. 11/12 Printed using ChatGPT to PDF, powered by PDFCrowd HTML to PDF API. 12/12
Graph Algorithms Comparison


Compare all the graph algorithms, their working, benefits, demerits, and a proper detailed comparison, with TC and SC for all (BFS, DFS, MST Prim's and Kruskal's, all-pairs shortest path, Dijkstra's, max flow).


Here's a detailed comparison of various graph algorithms, including their workings, benefits, drawbacks, time complexity (TC), and space complexity (SC).


1. Breadth-First Search (BFS)

Working:

BFS explores all neighbors of a node before moving to their neighbors.
It uses a queue (FIFO) for traversal.
Suitable for finding the shortest path in an unweighted graph.

Benefits:

✅ Finds the shortest path in an unweighted graph.
✅ Good for level-wise traversal.
✅ Guarantees reaching all reachable nodes.

Drawbacks:

❌ Requires extra space for queue storage.
❌ Not efficient for deep graphs compared to DFS.

Time Complexity:

Adjacency List: O(V + E)
Adjacency Matrix: O(V²)

Space Complexity:

Adjacency List: O(V + E)
Adjacency Matrix: O(V²)
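
A compact Python sketch of BFS for unweighted shortest paths (the adjacency-list format is an assumption for illustration):

python

from collections import deque

def bfs_shortest_paths(adj, source):
    """BFS from source; returns hop distances in an unweighted graph."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:  # first visit gives the shortest hop count
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

# Example usage (hypothetical graph)
adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}
print(bfs_shortest_paths(adj, 0))  # {0: 0, 1: 1, 2: 1, 3: 2}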
2. Depth-First Search (DFS)

Working:

DFS explores as deep as possible along a branch before backtracking.
Uses a stack (recursion or explicit stack) for traversal.
Can be used for cycle detection, topological sorting, connected components, etc.

Benefits:

✅ Requires less memory than BFS in most cases.
✅ Efficient for checking connectivity, topological sorting, and cycle detection.
✅ Can be modified for finding strongly connected components.

Drawbacks:

❌ Does not guarantee the shortest path.
❌ Can get stuck in infinite loops if cycles exist and are not handled properly.

Time Complexity:

Adjacency List: O(V + E)
Adjacency Matrix: O(V²)

Space Complexity:

Adjacency List: O(V + E)
Adjacency Matrix: O(V²)
Recursive call stack can go up to O(V) in the worst case.
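
A minimal iterative DFS sketch in Python using an explicit stack (the adjacency-list format is an assumption for illustration):

python

def dfs(adj, source):
    """Iterative DFS; returns vertices in the order they are visited."""
    visited, order = set(), []
    stack = [source]
    while stack:
        u = stack.pop()
        if u in visited:
            continue
        visited.add(u)
        order.append(u)
        stack.extend(adj[u])  # neighbors are explored depth-first
    return order

# Example usage (hypothetical graph)
adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}
print(dfs(adj, 0))  # [0, 2, 3, 1]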

3. Minimum Spanning Tree (MST) - Prim's Algorithm

Working:

Starts from any node and grows the MST by adding the minimum weight edge from the current tree.
Uses a Priority Queue (Min-Heap) for efficiency.

Benefits:

✅ Works well for dense graphs with an Adjacency Matrix.
✅ Greedy algorithm ensuring minimum cost MST.

Drawbacks:

❌ Slower for sparse graphs compared to Kruskal's algorithm.

Time Complexity:

Using Min-Heap + Adjacency List: O((V + E) log V)
Using Adjacency Matrix: O(V²)

Space Complexity:

Adjacency List: O(V + E)
Adjacency Matrix: O(V²)
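
One possible min-heap implementation of Prim's algorithm in Python (the (weight, vertex) adjacency format is an assumption for this sketch):

python

import heapq

def prim_mst_cost(adj, start=0):
    """Prim's algorithm: grow the tree by the cheapest crossing edge.
    adj[u] is a list of (weight, v) pairs."""
    visited = {start}
    heap = list(adj[start])  # candidate edges leaving the current tree
    heapq.heapify(heap)
    mst_cost = 0
    while heap and len(visited) < len(adj):
        w, v = heapq.heappop(heap)
        if v in visited:
            continue  # edge no longer crosses the cut
        visited.add(v)
        mst_cost += w
        for edge in adj[v]:
            if edge[1] not in visited:
                heapq.heappush(heap, edge)
    return mst_cost

# Example usage: hypothetical weighted undirected graph on 4 vertices
adj = {0: [(1, 1), (4, 2)], 1: [(1, 0), (2, 2), (6, 3)],
       2: [(4, 0), (2, 1), (3, 3)], 3: [(6, 1), (3, 2)]}
print(prim_mst_cost(adj))  # 1 + 2 + 3 = 6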

4. Minimum Spanning Tree (MST) - Kruskal's Algorithm

Working:

Sorts all edges in increasing order and picks the smallest edge that doesn't form a cycle.
Uses Disjoint Set Union (DSU) for cycle detection.

Benefits:

✅ Works well for sparse graphs.
✅ More efficient than Prim's for edge-list representation.

Drawbacks:

❌ Sorting edges takes O(E log E), which can be slower for dense graphs.

Time Complexity:

Using Union-Find: O(E log E)

Space Complexity:

O(V + E)
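
A compact Python sketch of Kruskal's algorithm with a simple union-find (the (weight, u, v) edge-list format is an assumption):

python

def kruskal_mst_cost(n, edges):
    """Kruskal's algorithm: take edges by increasing weight,
    skipping any edge that would close a cycle (detected via DSU)."""
    parent = list(range(n))

    def find(x):  # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mst_cost = 0
    for w, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv:  # u and v are in different components
            parent[ru] = rv
            mst_cost += w
    return mst_cost

# Example usage (hypothetical graph)
edges = [(1, 0, 1), (2, 1, 2), (4, 0, 2), (3, 2, 3), (6, 1, 3)]
print(kruskal_mst_cost(4, edges))  # 1 + 2 + 3 = 6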

5. Single Source Shortest Path - Dijkstra's Algorithm

Working:

Uses a Min-Heap (Priority Queue) to pick the next minimum distance vertex.
Works only for graphs with non-negative weights.

Benefits:

✅ Guarantees shortest path for non-negative weights.
✅ Faster than Bellman-Ford for sparse graphs.

Drawbacks:

❌ Does not work with negative weights (use Bellman-Ford instead).
❌ Heap-based implementation can be tricky.

Time Complexity:

Using Min-Heap + Adjacency List: O((V + E) log V)
Using Adjacency Matrix: O(V²)

Space Complexity:

O(V + E)
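
A minimal heap-based Dijkstra sketch in Python (the (v, weight) adjacency format is an assumption for illustration):

python

import heapq

def dijkstra(adj, source):
    """Dijkstra's algorithm for non-negative edge weights.
    adj[u] is a list of (v, weight) pairs."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in adj[u]:
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(heap, (dist[v], v))
    return dist

# Example usage (hypothetical graph)
adj = {0: [(1, 4), (2, 1)], 1: [(3, 1)], 2: [(1, 2)], 3: []}
print(dijkstra(adj, 0))  # {0: 0, 2: 1, 1: 3, 3: 4}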

6. All-Pairs Shortest Path - Floyd-Warshall Algorithm

Working:

Uses Dynamic Programming (DP).
Checks if a path through an intermediate vertex shortens the existing path.

Benefits:

✅ Can handle negative weights (but no negative cycles).
✅ Simple and easy to implement.

Drawbacks:

❌ Very slow for large graphs due to O(V³) complexity.
❌ Needs O(V²) space, which is costly for large graphs.

Time Complexity:

O(V³)

Space Complexity:

O(V²)
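
A short Python sketch of the triple-loop DP (the distance-matrix representation, with INF marking a missing edge, is an assumption for illustration):

python

def floyd_warshall(dist):
    """Floyd-Warshall: after iteration k, dist[i][j] is the shortest
    path from i to j using only intermediates from {0, ..., k}."""
    n = len(dist)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist

# Example usage (hypothetical graph)
INF = float("inf")
dist = [[0, 3, INF],
        [INF, 0, 1],
        [2, INF, 0]]
print(floyd_warshall(dist))  # [[0, 3, 4], [3, 0, 1], [2, 5, 0]]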

7. Single-Source Shortest Path - Bellman-Ford Algorithm

Working:

Relaxes all edges V − 1 times to update distances.
Can detect negative weight cycles.

Benefits:

✅ Works with negative weights.
✅ Can detect negative weight cycles.

Drawbacks:

❌ Slower than Dijkstra's for positive weight graphs.
❌ O(VE) time makes it infeasible for large graphs.

Time Complexity:

O(VE)

Space Complexity:

O(V)
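
A minimal Bellman-Ford sketch in Python (the (u, v, w) edge-list format is an assumption; the extra pass at the end detects negative cycles):

python

def bellman_ford(n, edges, source):
    """Relax every edge n-1 times; a further improvement afterwards
    means the graph contains a negative weight cycle."""
    INF = float("inf")
    dist = [INF] * n
    dist[source] = 0
    for _ in range(n - 1):
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    for u, v, w in edges:  # detection pass
        if dist[u] + w < dist[v]:
            raise ValueError("graph contains a negative weight cycle")
    return dist

# Example usage (negative edge, but no negative cycle)
edges = [(0, 1, 4), (0, 2, 5), (1, 3, 3), (2, 1, -2)]
print(bellman_ford(4, edges, 0))  # [0, 3, 5, 6]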
8. Maximum Flow - Ford-Fulkerson Algorithm

Working:

Uses DFS/BFS to find augmenting paths in the residual graph.
Increases flow along paths until no more augmenting paths exist.

Benefits:

✅ Can solve maximum bipartite matching and other network flow problems.
✅ Useful in real-world applications (transport, pipelines).

Drawbacks:

❌ Time complexity depends on flow values (in the naive DFS implementation).
❌ The BFS-based Edmonds-Karp variant is more stable but can be slower.

Time Complexity:

Ford-Fulkerson (naive DFS): O(E ⋅ Max Flow)
Edmonds-Karp (BFS-based): O(VE²)

Space Complexity:

O(V + E)
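
A compact sketch of the BFS-based (Edmonds-Karp) variant in Python; the capacity matrix doubling as the residual graph and the example network are assumptions for illustration:

python

from collections import deque

def max_flow(capacity, s, t):
    """Ford-Fulkerson with BFS augmenting paths (Edmonds-Karp).
    capacity is an n x n matrix, updated in place as the residual graph."""
    n = len(capacity)
    flow = 0
    while True:
        parent = [-1] * n  # BFS tree for the augmenting path
        parent[s] = s
        queue = deque([s])
        while queue and parent[t] == -1:
            u = queue.popleft()
            for v in range(n):
                if parent[v] == -1 and capacity[u][v] > 0:
                    parent[v] = u
                    queue.append(v)
        if parent[t] == -1:  # no augmenting path remains
            return flow
        # Bottleneck capacity along the path found
        bottleneck, v = float("inf"), t
        while v != s:
            bottleneck = min(bottleneck, capacity[parent[v]][v])
            v = parent[v]
        # Augment: reduce forward capacities, add reverse (residual) ones
        v = t
        while v != s:
            capacity[parent[v]][v] -= bottleneck
            capacity[v][parent[v]] += bottleneck
            v = parent[v]
        flow += bottleneck

# Example usage: source 0, sink 3
capacity = [[0, 3, 2, 0],
            [0, 0, 1, 2],
            [0, 0, 0, 2],
            [0, 0, 0, 0]]
print(max_flow(capacity, 0, 3))  # 4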

Comparison Table

| Algorithm | Use Case | Time Complexity | Space Complexity | Pros | Cons |
|---|---|---|---|---|---|
| BFS | Unweighted Shortest Path | O(V + E) | O(V + E) | Finds shortest path | High space usage |
| DFS | Topological Sorting, SCCs | O(V + E) | O(V + E) | Low memory in many cases | No shortest path guarantee |
| Prim's | MST (Dense Graphs) | O((V + E) log V) | O(V + E) | Works well with adjacency matrix | Slower for sparse graphs |
| Kruskal's | MST (Sparse Graphs) | O(E log E) | O(V + E) | Works well with edge list | Requires sorting edges |
| Dijkstra's | Shortest Path (No Neg. Weights) | O((V + E) log V) | O(V + E) | Fast for non-negative graphs | No negative weights |
| Floyd-Warshall | All-Pairs Shortest Path | O(V³) | O(V²) | Simple implementation | Very slow |
| Bellman-Ford | Negative Weights | O(VE) | O(V) | Handles negative weights | Slower than Dijkstra's |
| Ford-Fulkerson | Maximum Flow | O(E ⋅ Max Flow) | O(V + E) | Used in many applications | Slow for high max flow |
