Abdeta Beshana
1. Binary Search
Binary Search is an efficient algorithm for finding an item from a sorted list of items. It works by
repeatedly dividing the search interval in half.
Algorithm:
1. Initialize:
o Set low to 0 (the start index of the list).
o Set high to the length of the list minus 1 (the end index).
2. While loop:
o While low is less than or equal to high:
1. Compute the middle index: mid = low + (high - low) / 2.
2. Compare the target value with the middle element:
If the target is equal to the middle element, return mid.
If the target is less than the middle element, update high to mid - 1.
If the target is greater than the middle element, update low to mid + 1.
3. Return:
o If the loop exits without finding the target, return -1 (indicating the target is not in
the list).
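The steps above can be written as a short Python sketch; the function name binary_search is an illustrative choice, not part of the original notes.

def binary_search(arr, target):
    # Search for target in the sorted list arr; return its index or -1.
    low = 0
    high = len(arr) - 1
    while low <= high:
        mid = low + (high - low) // 2      # middle index of the current interval
        if arr[mid] == target:
            return mid                     # target found
        elif target < arr[mid]:
            high = mid - 1                 # discard the right half
        else:
            low = mid + 1                  # discard the left half
    return -1                              # target is not in the list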
2. Merge Sort
Merge Sort is a divide-and-conquer algorithm that divides the array into smaller subarrays, sorts
those subarrays, and then merges them back together.
Algorithm:
1. Divide:
o If the array has one element or is empty, return it (base case).
o Find the middle index: mid = length of array / 2.
o Recursively sort the two halves: left_half and right_half.
2. Merge:
o Create two pointers: one for each half (i for left_half, j for right_half).
o Compare elements from both halves and build a sorted array:
While both pointers have elements:
Compare the elements pointed to by i and j.
Append the smaller element to the merged array and move the
pointer forward.
Append any remaining elements from left_half and right_half to the
merged array.
3. Return:
o Return the merged and sorted array.
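A minimal Python sketch of this divide-and-merge process (the function name merge_sort is illustrative):

def merge_sort(arr):
    # Base case: an array of one or zero elements is already sorted.
    if len(arr) <= 1:
        return arr
    mid = len(arr) // 2
    left_half = merge_sort(arr[:mid])      # recursively sort the left half
    right_half = merge_sort(arr[mid:])     # recursively sort the right half
    # Merge the two sorted halves.
    merged = []
    i = j = 0
    while i < len(left_half) and j < len(right_half):
        if left_half[i] <= right_half[j]:
            merged.append(left_half[i])
            i += 1
        else:
            merged.append(right_half[j])
            j += 1
    # Append any remaining elements from either half.
    merged.extend(left_half[i:])
    merged.extend(right_half[j:])
    return merged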
Explanation (Binary Search example: searching for the target 15 in a sorted array of 10 elements)
1. Initialization:
o low is set to 0, and high is set to the last index of the array (9 in this case).
2. First Iteration:
o Compute mid: mid = 0 + (9 - 0) // 2 = 4.
o arr[4] is 9, which is less than 15, so we update low to mid + 1 (5).
3. Second Iteration:
o Compute mid: mid = 5 + (9 - 5) // 2 = 7.
o arr[7] is 15, which matches the target.
4. Return:
o The function returns the index 7, where the target value 15 is found.
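As a concrete check of this walk-through, one possible input (the original array is not shown in the notes, so this one is hypothetical but consistent with arr[4] = 9 and arr[7] = 15) would be:

arr = [1, 3, 5, 7, 9, 11, 13, 15, 17, 19]   # hypothetical 10-element sorted array
print(binary_search(arr, 15))               # prints 7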
3. Quick Sort
Quick Sort is another divide-and-conquer algorithm that picks an element as a pivot and
partitions the array around the pivot.
Algorithm:
1. Partitioning:
o Choose a pivot element (often the last element in the array).
o Reorder the array so that elements less than the pivot are on the left and elements
greater than the pivot are on the right.
o Place the pivot in its correct position.
2. Recursively Sort:
o Recursively apply the quick sort to the subarrays formed by partitioning:
The subarray to the left of the pivot.
The subarray to the right of the pivot.
3. Base Case:
o If the array has one or zero elements, it is already sorted.
4. Return:
o Return the sorted array.
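A Python sketch of this scheme using the last element as the pivot (the Lomuto partition style); the function names are illustrative:

def partition(arr, low, high):
    pivot = arr[high]                      # choose the last element as the pivot
    i = low - 1
    for j in range(low, high):
        if arr[j] <= pivot:                # keep smaller elements on the left side
            i += 1
            arr[i], arr[j] = arr[j], arr[i]
    arr[i + 1], arr[high] = arr[high], arr[i + 1]   # place the pivot in its final position
    return i + 1

def quick_sort(arr, low=0, high=None):
    if high is None:
        high = len(arr) - 1
    if low < high:                         # base case: one or zero elements is already sorted
        p = partition(arr, low, high)
        quick_sort(arr, low, p - 1)        # sort the left sub-array
        quick_sort(arr, p + 1, high)       # sort the right sub-array
    return arr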
4. Selection Sort
Selection Sort is a simple comparison-based sorting algorithm that repeatedly selects the smallest
(or largest) element from the unsorted part and moves it to the sorted part.
Algorithm:
1. Iterate:
o For each index i from 0 to length of array - 2:
1. Assume the smallest element is at index i.
2. Iterate through the unsorted portion of the array (from i+1 to the end):
Find the index of the smallest element.
3. Swap the smallest found element with the element at index i.
2. Repeat:
o Continue this process until the entire array is sorted.
3. Return:
o Return the sorted array.
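A direct Python sketch of these steps (selection_sort is an illustrative name):

def selection_sort(arr):
    n = len(arr)
    for i in range(n - 1):
        min_index = i                      # assume the smallest element is at index i
        for j in range(i + 1, n):          # scan the unsorted portion
            if arr[j] < arr[min_index]:
                min_index = j
        arr[i], arr[min_index] = arr[min_index], arr[i]   # swap it into position i
    return arr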
1. Binary Search
Time Complexity: O(log n). Each comparison halves the search interval, so at most about log n comparisons are needed.
Example:
Steps:
4. Search Interval Update: Since 19 < 23, update the interval to [17, 19].
5. Updated Array: [17, 19]
o Middle index: 1 (Element: 19)
6. Element Found: 19 is found at index 1.
2. Merge Sort
Time Complexity: O(n log n). The array is halved log n times, and merging all elements at each level takes O(n) work.
Example:
Steps:
3. Quick Sort
In the average case, the partitioning operation takes linear time, and there are log n
levels of recursion, resulting in O(n log n). In the worst case (e.g., an already
sorted array), the time complexity becomes O(n^2).
Time Complexity: Average Case: O(n log n); Worst Case: O(n^2)
Example:
Steps:
4. Selection Sort
Selection Sort performs n passes through the array, with each pass involving scanning the
remaining elements, leading to O(n^2) time complexity.
Time Complexity: O(n^2)
Example:
Let's consider an array with 5 elements: Array = [64, 25, 12, 22, 11]
Steps:
1. Pass 1: The smallest element of [64, 25, 12, 22, 11] is 11; swap it with 64: [11, 25, 12, 22, 64]
2. Pass 2: The smallest element of the unsorted part [25, 12, 22, 64] is 12; swap it with 25: [11, 12, 25, 22, 64]
3. Pass 3: The smallest element of [25, 22, 64] is 22; swap it with 25: [11, 12, 22, 25, 64]
4. Pass 4: The smallest element of [25, 64] is 25; it is already in place: [11, 12, 22, 25, 64]
5. Final Array: [11, 12, 22, 25, 64]
Summary Table
Algorithm        Approach                               Time Complexity
Binary Search    Repeatedly halve the search interval   O(log n)
Merge Sort       Divide and conquer                     O(n log n)
Quick Sort       Divide and conquer around a pivot      O(n log n) average; O(n^2) worst
Selection Sort   Repeatedly select the minimum          O(n^2)
1. Prim's Algorithm
Prim's Algorithm is a greedy algorithm that builds the minimum spanning tree (MST) by starting
from an arbitrary node and growing the MST one edge at a time.
Algorithm:
1. Initialization:
o Start with an arbitrary vertex and add it to the MST.
o Create a priority queue (min-heap) to store edges with their weights.
o Create a set to keep track of vertices included in the MST.
o Initialize the MST set with the starting vertex.
2. Process:
o While there are vertices not yet included in the MST:
1. Extract the edge with the smallest weight from the priority queue.
2. If the edge connects a vertex in the MST to a vertex not yet included:
Add this vertex to the MST.
Add all edges connecting this new vertex to vertices not in the
MST to the priority queue.
3. Add the edge to the MST.
3. Return:
o The MST formed by the edges added during the process.
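A compact Python sketch of this process using a min-heap (heapq); the adjacency-list format, where each vertex maps to a list of (weight, neighbor) pairs, is an assumption made for the example:

import heapq

def prim_mst(graph, start):
    # graph: dict mapping each vertex to a list of (weight, neighbor) pairs.
    visited = {start}
    heap = [(w, start, v) for (w, v) in graph[start]]
    heapq.heapify(heap)
    mst = []
    while heap and len(visited) < len(graph):
        w, u, v = heapq.heappop(heap)      # extract the smallest-weight edge
        if v in visited:
            continue                       # skip edges that stay inside the MST
        visited.add(v)                     # add the new vertex to the MST
        mst.append((u, v, w))
        for (w2, nxt) in graph[v]:         # add edges leaving the new vertex
            if nxt not in visited:
                heapq.heappush(heap, (w2, v, nxt))
    return mst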
Example:
Consider a graph with vertices V = {A, B, C, D} and edges with weights:
A-B: 1
A-C: 4
B-C: 2
B-D: 5
C-D: 3
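Tracing the algorithm from vertex A: the smallest available edge A-B (1) is added first, then B-C (2), then C-D (3), giving an MST of total weight 6. The sketch above reproduces this, assuming the graph is encoded as described:

graph = {
    'A': [(1, 'B'), (4, 'C')],
    'B': [(1, 'A'), (2, 'C'), (5, 'D')],
    'C': [(4, 'A'), (2, 'B'), (3, 'D')],
    'D': [(5, 'B'), (3, 'C')],
}
print(prim_mst(graph, 'A'))   # [('A', 'B', 1), ('B', 'C', 2), ('C', 'D', 3)]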
2. Kruskal's Algorithm
Kruskal's Algorithm is another greedy algorithm that finds the MST by sorting all the edges and
adding them one by one, ensuring no cycles are formed.
Algorithm:
1. Initialization:
o Sort all edges in non-decreasing order of their weights.
o Initialize a union-find data structure to manage and find cycles.
o Create an empty list to store the MST edges.
2. Process:
o For each edge in the sorted list:
1. Check if adding the edge to the MST would form a cycle using the union-
find data structure.
2. If no cycle is formed:
Add the edge to the MST.
Union the sets of the two vertices of the edge.
3. Return:
o The MST formed by the edges added during the process.
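A Python sketch with a small union-find helper; the edge-list format (weight, u, v) is an assumption made for the example:

def kruskal_mst(vertices, edges):
    # vertices: iterable of vertex names; edges: list of (weight, u, v) tuples.
    parent = {v: v for v in vertices}

    def find(x):
        # Find the root of x's set, compressing the path along the way.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mst = []
    for w, u, v in sorted(edges):          # non-decreasing order of weight
        root_u, root_v = find(u), find(v)
        if root_u != root_v:               # adding this edge forms no cycle
            parent[root_u] = root_v        # union the two sets
            mst.append((u, v, w))
    return mst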
Example:
Consider the same graph with vertices V = {A, B, C, D} and edges with weights:
A-B: 1
A-C: 4
B-C: 2
B-D: 5
C-D: 3
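Sorting the edges gives A-B (1), B-C (2), C-D (3), A-C (4), B-D (5); the first three are accepted, while A-C and B-D are rejected because each would close a cycle. The MST is therefore {A-B, B-C, C-D} with total weight 6, the same tree Prim's Algorithm produces. Using the sketch above:

edges = [(1, 'A', 'B'), (4, 'A', 'C'), (2, 'B', 'C'), (5, 'B', 'D'), (3, 'C', 'D')]
print(kruskal_mst('ABCD', edges))   # [('A', 'B', 1), ('B', 'C', 2), ('C', 'D', 3)]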
Summary
Prim's Algorithm: Starts from a vertex and grows the MST by adding the smallest edge
connecting the MST to an outside vertex.
Kruskal's Algorithm: Starts with all edges and adds the smallest edge to the MST,
ensuring no cycles are formed using a union-find structure.
Parallel algorithms are designed to execute tasks concurrently, which helps in solving problems
faster by utilizing multiple processors or cores. Here’s a look into the functionality, applications,
and an example algorithm for parallel processing:
Functionality:
Concurrency: Parallel algorithms divide a problem into smaller sub-tasks that can be
executed simultaneously on multiple processors or cores.
Speedup: By executing multiple tasks at once, these algorithms can significantly reduce
computation time compared to sequential algorithms.
Scalability: Efficient parallel algorithms can scale with the number of processors or
cores, improving performance as more computational resources are added.
Synchronization: They include mechanisms to handle dependencies between tasks,
ensuring correct results despite simultaneous executions.
Parallel algorithms are widely used in various fields due to their ability to handle large-scale and
complex problems efficiently.
Parallel Merge Sort is an extension of the traditional Merge Sort algorithm, designed to run
efficiently on multi-core processors by dividing the sorting task across multiple threads.
Algorithm:
1. Divide:
o Recursively divide the array into two halves until each sub-array contains a single
element or no elements.
2. Parallel Merge:
o Use multiple threads to sort the two halves of each sub-array in parallel.
o Once sorted, merge the two halves concurrently.
Divide the merging process into smaller tasks that can be performed in
parallel.
3. Merge:
o Perform merging of the sorted sub-arrays by combining them into a single sorted
array using multiple threads if necessary.
4. Combine:
o Continue merging sorted sub-arrays in parallel until the entire array is sorted.
Detailed Steps:
1. Split:
o Divide the array into two halves and assign each half to its own thread.
2. Parallel Recursive Sort:
o Each thread recursively sorts its assigned half.
o This recursion continues until the base case is reached (a sub-array with one or zero elements).
3. Merge Sorted Halves:
o Once the sub-arrays are sorted, merge them.
o Use multiple threads to perform merging operations concurrently.
o Implement a parallel merging strategy to reduce the overall merge time.
Explanation:
Merge Step: The sorted halves are merged concurrently, using threads to speed up the
merging process.
Combine: The sorted halves are combined to form the final sorted array.
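A simplified Python sketch of the idea using concurrent.futures: the two halves are sorted in separate worker processes and then merged sequentially. Python's built-in sorted stands in for the recursive sort, and a fully parallel merge step is omitted to keep the example short.

from concurrent.futures import ProcessPoolExecutor

def merge(left, right):
    # Standard sequential merge of two sorted lists.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

def parallel_merge_sort(arr):
    if len(arr) <= 1:
        return arr
    mid = len(arr) // 2
    with ProcessPoolExecutor(max_workers=2) as pool:
        left = pool.submit(sorted, arr[:mid])      # sort the left half in one worker
        right = pool.submit(sorted, arr[mid:])     # sort the right half in another
        return merge(left.result(), right.result())

if __name__ == "__main__":
    print(parallel_merge_sort([38, 27, 43, 3, 9, 82, 10]))   # [3, 9, 10, 27, 38, 43, 82]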
Summary
Parallel Merge Sort splits the array, sorts the two halves in separate threads, and merges the sorted results concurrently, reducing the overall sorting time on multi-core hardware.
Example: Parallel Matrix Multiplication
Algorithm:
1. Initialization:
o Let A be an m × n matrix.
o Let B be an n × p matrix.
o Let C be an m × p matrix to store the result.
2. Parallel Computation:
o Divide the computation of each element of C across multiple processors.
o Each processor calculates one or more elements of C by performing the dot
product of the corresponding row of A and column of B.
3. Combine Results:
o Gather the results from all processors to form the final matrix C.
Detailed Steps:
2. Parallel Execution:
o Distribute the calculation of different elements C[i][j] to different processors.
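A Python sketch of this distribution using multiprocessing; here each worker computes one full row of C (a coarser split than one element per processor, chosen for simplicity):

from multiprocessing import Pool

def compute_row(args):
    # One row of C: dot products of row_a with every column of B.
    row_a, B = args
    return [sum(a * b for a, b in zip(row_a, col)) for col in zip(*B)]

def parallel_matmul(A, B):
    with Pool() as pool:
        return pool.map(compute_row, [(row, B) for row in A])

if __name__ == "__main__":
    A = [[1, 2], [3, 4]]          # 2 x 2 matrix
    B = [[5, 6], [7, 8]]          # 2 x 2 matrix
    print(parallel_matmul(A, B))  # [[19, 22], [43, 50]]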
Example: Parallel Quick Sort
Quick Sort is a popular sorting algorithm. Parallel Quick Sort enhances the performance of
sorting large datasets by sorting sub-arrays in parallel.
Algorithm:
1. Partition:
o Choose a pivot element.
o Partition the array into elements less than the pivot and elements greater than the
pivot.
2. Parallel Sort:
o Recursively sort the sub-arrays on either side of the pivot in parallel.
3. Combine:
o Merge the sorted sub-arrays into a single sorted array.
Detailed Steps:
1. Partition Step:
o Rearrange elements around the pivot.
2. Parallel Recursive Sort:
o Use multiple threads to sort the left and right sub-arrays.
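A minimal Python sketch of this idea: the array is partitioned around the last element, and the two partitions are sorted in separate worker processes (Python's built-in sorted stands in for the recursive call to keep the sketch short):

from concurrent.futures import ProcessPoolExecutor

def parallel_quick_sort(arr):
    if len(arr) <= 1:
        return arr
    pivot = arr[-1]                                 # choose the last element as the pivot
    left = [x for x in arr[:-1] if x <= pivot]      # elements less than or equal to the pivot
    right = [x for x in arr[:-1] if x > pivot]      # elements greater than the pivot
    with ProcessPoolExecutor(max_workers=2) as pool:
        left_sorted = pool.submit(sorted, left)     # sort each partition in its own worker
        right_sorted = pool.submit(sorted, right)
        # Combine: sorted left partition + pivot + sorted right partition.
        return left_sorted.result() + [pivot] + right_sorted.result()

if __name__ == "__main__":
    print(parallel_quick_sort([9, 4, 7, 3, 8, 1, 6]))   # [1, 3, 4, 6, 7, 8, 9]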
Example: Parallel Breadth-First Search (BFS)
BFS is used for traversing or searching tree or graph data structures. Parallel BFS can be used to
explore multiple levels of a graph simultaneously.
Algorithm:
1. Initialization:
o Start from a source node.
o Initialize a queue and a visited set.
2. Parallel Level Processing:
o While there are nodes in the queue:
Process all nodes at the current level in parallel.
Enqueue their unvisited neighbors to be processed in the next level.
3. Combine Results:
o Collect results from all parallel executions.
Detailed Steps:
1. Initialization:
o Mark the source node as visited and enqueue it.
2. Level-wise Processing:
o Use multiple threads to process nodes at the current level.
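A level-synchronous Python sketch using a thread pool: every node in the current frontier is expanded concurrently, and the visited set is updated sequentially when the results are combined. The graph format and function names are illustrative assumptions.

from concurrent.futures import ThreadPoolExecutor

def parallel_bfs(graph, source):
    # graph: dict mapping each node to a list of neighbours. Returns nodes in BFS order.
    visited = {source}
    order = [source]
    frontier = [source]
    with ThreadPoolExecutor() as pool:
        while frontier:
            # Expand every node of the current level in parallel.
            neighbour_lists = list(pool.map(lambda n: graph.get(n, []), frontier))
            next_frontier = []
            # Combine results sequentially so the visited set stays consistent.
            for neighbours in neighbour_lists:
                for v in neighbours:
                    if v not in visited:
                        visited.add(v)
                        next_frontier.append(v)
                        order.append(v)
            frontier = next_frontier
    return order

if __name__ == "__main__":
    g = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D'], 'D': []}
    print(parallel_bfs(g, 'A'))   # ['A', 'B', 'C', 'D']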
Summary
Parallel Merge Sort, Parallel Matrix Multiplication, Parallel Quick Sort, and Parallel BFS all follow the same pattern: divide the work into independent sub-tasks, execute those sub-tasks concurrently across multiple threads or processors, and combine the partial results into the final answer.