
FUNDAMENTAL COMPUTING ALGORITHMS
GROUP 12
O(N log N)
• What is Big O?
• Big O notation is a system for measuring the rate of growth of an
algorithm. It mathematically describes the complexity of an algorithm
in terms of time and space. We don't measure the speed of an algorithm
in seconds (or minutes!); instead, we measure the number of operations
it takes to complete.
• The O is short for "Order of". So, if we're discussing an algorithm with
O(n^2), we say its order of growth, or rate of growth, is n^2, or
quadratic complexity.
How Does Big O Work?
• Big O notation measures the worst-case scenario.
• Why?
• Because we don’t know what we don’t know.
• We need to know just how poorly our algorithm will perform so we
can evaluate other solutions.
• The worst-case scenario is also known as the “upper bound”. When
we say "upper bound", we mean the maximum number of operations
performed by an algorithm.
O complexity   Name          Rate of growth

O(1)           constant      fast
O(log n)       logarithmic
O(n)           linear
O(n * log n)   log-linear
O(n^2)         quadratic
O(n^3)         cubic
O(2^n)         exponential
O(n!)          factorial     slow
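To make the table concrete, here is a small illustrative Python sketch
(the function names are ours, not from the slides) contrasting an O(n)
linear scan with an O(n^2) nested loop:

def contains(items, target):
    # O(n): one comparison per element, at most n operations.
    for x in items:
        if x == target:
            return True
    return False

def has_duplicate(items):
    # O(n^2): every pair is compared, about n*(n-1)/2 operations.
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False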


O(N log N) Sorting Algorithms.
• A sorting algorithm is used to rearrange a given array or list of
elements according to a comparison operator on the elements. The
comparison operator is used to decide the new order of elements in
the respective data structure.
What is Sorting?
• Sorting refers to the rearrangement of a given array or list of
elements according to a comparison operator, reordering all the
elements either in ascending or in descending order.
Sorting Terminology.
• In-place Sorting: An in-place sorting algorithm uses constant extra
space to produce the output; it modifies the given array itself rather
than copying elements to temporary storage. Examples: Selection Sort,
Bubble Sort, Insertion Sort and Heap Sort.
• Internal Sorting: Internal sorting is when all the data is placed in
main (internal) memory; the input cannot exceed the available memory
size. Examples: heap sort, bubble sort, selection sort, quick sort,
shell sort, insertion sort.
Sorting Terminology…
• External Sorting: When all the data that needs to be sorted cannot be
placed in memory at one time, the sorting is called external sorting.
External sorting is used for massive amounts of data. Examples: Merge
sort, Tag sort, Polyphase sort, Four-tape sort, External radix sort, etc.
• Stable sorting: A sort is stable when elements with equal keys appear
in the same relative order in the sorted output as in the original
array. Examples: Merge Sort, Insertion Sort, Bubble Sort.
• Unstable sorting: A sort is unstable when elements with equal keys may
appear in a different relative order in the sorted output. Examples:
Selection Sort, Quick Sort, Heap Sort, Shell Sort.
Characteristics of Sorting
Algorithms:
• Time Complexity: Time complexity, a measure of how long it takes to
run an algorithm, is used to categorize sorting algorithms. The worst-
case, average-case, and best-case performance of a sorting algorithm
can be used to quantify the time complexity of the process.
• Auxiliary Space: This is the amount of extra space (apart from the
input array) needed to sort. For example, Merge Sort requires O(n) and
Insertion Sort O(1) auxiliary space.
• Stability: A sorting algorithm is said to be stable if the relative order of
equal elements is preserved after sorting. This is important in certain
applications where the original order of equal elements must be
maintained.
Characteristics of Sorting
Algorithms…
• In-Place Sorting: An in-place sorting algorithm is one that does not
require additional memory to sort the data. This is important when
the available memory is limited or when the data cannot be moved.
• Adaptivity: An adaptive sorting algorithm is one that takes advantage
of pre-existing order in the data to improve performance. For example,
insertion sort takes time proportional to the number of inversions in
the input array.
Applications of Sorting Algorithms.
• Searching Algorithms: Sorting is often a crucial step in search
algorithms like binary search and ternary search. Many greedy
algorithms use sorting as a first step before applying the greedy
approach, for example Activity Selection, Fractional Knapsack,
Weighted Job Scheduling, etc.
• Data management: Sorting data makes it easier to search, retrieve, and
analyze. For example, the ORDER BY operation in SQL queries requires
sorting.
• Database optimization: Sorting data in databases improves query
performance. We preprocess the data by sorting so that efficient
searching can be applied.
Applications of Sorting Algorithms…
• Machine learning: Sorting is used to prepare data for training
machine learning models.
• Data Analysis: Sorting helps in identifying patterns, trends, and
outliers in datasets. It plays a vital role in statistical analysis, financial
modeling, and other data-driven fields.
• Operating Systems: Sorting algorithms are used in operating systems
for tasks like task scheduling, memory management, and file system
organization.
TYPES OF SORTING ALGORITHMS.
i) Comparison-based algorithms:
• Bubble sort
• Selection sort
• Insertion sort
• Merge sort
• Quick sort
• Heap sort
ii) Non-comparison-based sorting algorithms:
• Counting sort
• Radix sort
• Bucket sort
1. Selection Sort
• Selection Sort is a comparison-based sorting algorithm.
• It sorts an array by repeatedly selecting the smallest (or
largest) element from the unsorted portion and swapping it with the
first unsorted element.
• This process continues until the entire array is sorted.
Selection Sort Algorithm:
Steps of the Selection Sort Algorithm:
1. Start with the first element as the initial position.
2. Find the smallest element in the unsorted portion of the array.
3. Swap this smallest element with the first unsorted element.
4. Move the boundary of the sorted portion one element forward.
5. Repeat steps 2-4 for the remaining unsorted elements until the entire
array is sorted (see the sketch below).
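A minimal Python sketch of these steps (illustrative, not taken from
the slides):

def selection_sort(arr):
    n = len(arr)
    for i in range(n - 1):
        # Find the smallest element in the unsorted portion arr[i:].
        min_idx = i
        for j in range(i + 1, n):
            if arr[j] < arr[min_idx]:
                min_idx = j
        # Swap it with the first unsorted element; the sorted boundary
        # then moves one position forward.
        arr[i], arr[min_idx] = arr[min_idx], arr[i]
    return arr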
Complexity Analysis of Selection Sort;
• Time Complexity: O(n^2), as there are two nested loops:
• One loop to select each element of the array one by one = O(n)
• Another loop to compare that element with every other array
element = O(n)
• Therefore the overall complexity = O(n) * O(n) = O(n*n) = O(n^2)
• Auxiliary Space: O(1), as the only extra memory used is for temporary
variables.
Advantages of Selection Sort
• Easy to understand and implement, making it ideal for teaching basic
sorting concepts.
• Requires only a constant O(1) extra memory space.
Disadvantages of the Selection Sort.
• Selection sort has a time complexity of O(n^2), which makes it slower
compared to algorithms like Quick Sort or Merge Sort.
• Does not preserve the relative order of items with equal keys, which
means it is not stable.
Applications of Selection Sort.
• Perfect for teaching fundamental sorting mechanisms and algorithm
design.
• Suitable for small lists where the overhead of more complex
algorithms isn’t justified.
• Ideal for systems with limited memory due to its in-place sorting
capability.
• Used in simple embedded systems where resource availability is
limited and simplicity is important.
2. Bubble Sort.
• Bubble Sort is the simplest sorting algorithm; it works by repeatedly
swapping adjacent elements if they are in the wrong order. This
algorithm is not suitable for large data sets as its average and
worst-case time complexity are quite high.

• Complexity Analysis of Bubble Sort:
• Time Complexity: O(n^2)
• Auxiliary Space: O(1)
STEPS…
• Start at the first element of the array.
• Compare the current element with the next element.
• If the current element is greater than the next element, swap them.
• Move to the next pair of elements and repeat the comparison and
swap if needed.
• After each complete pass through the array, the largest unsorted
element is placed at its correct position at the end of the array.
• Repeat the above process of a pass over the remaining unsorted
elements until the entire array is sorted (see the sketch below).
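A minimal Python sketch of these passes (illustrative; the early-exit
flag is a common optional optimization, not described above):

def bubble_sort(arr):
    n = len(arr)
    for i in range(n - 1):
        swapped = False
        # One pass: the largest unsorted element sinks to the end.
        for j in range(n - 1 - i):
            if arr[j] > arr[j + 1]:
                arr[j], arr[j + 1] = arr[j + 1], arr[j]
                swapped = True
        if not swapped:
            break  # no swaps in a full pass: the array is sorted
    return arr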
Advantages of Bubble Sort:
• Bubble sort is easy to understand and implement.
• It does not require any additional memory space.
• It is a stable sorting algorithm, meaning that elements with the same
key value maintain their relative order in the sorted output.
Disadvantages of Bubble Sort:
• Bubble sort has a time complexity of O(n^2), which makes it very slow
for large data sets.
• Bubble sort is a comparison-based sorting algorithm, which means
that it requires a comparison operator to determine the relative order
of elements in the input data set. This can limit the efficiency of the
algorithm in certain cases.
3. Insertion sort.
• Insertion sort is a simple sorting algorithm that works by iteratively
inserting each element of an unsorted list into its correct position in
a sorted portion of the list, building the sorted array one element at
a time.
• It is a stable sorting algorithm, meaning that elements with equal
values maintain their relative order in the sorted output. It is also
considered an "in-place" sorting algorithm, meaning it doesn't require
any additional memory space beyond the original array.
STEPS…
• We start with the second element of the array, as the first element is
assumed to be sorted.
• Compare the second element with the first element; if the second
element is smaller, swap them.
• Move to the third element and compare it with the second element, then
the first element, swapping as necessary to put it in the correct
position among the first three elements.
• Continue this process, comparing each element with the ones before it
and swapping as needed to place it in the correct position among the
sorted elements.
• Repeat until the entire array is sorted (see the sketch below).
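A minimal Python sketch (illustrative). It uses the usual shifting
variant, which is equivalent to the repeated swaps described above:

def insertion_sort(arr):
    # arr[0] on its own is trivially sorted, so start at index 1.
    for i in range(1, len(arr)):
        key = arr[i]
        j = i - 1
        # Shift larger sorted elements one slot to the right, then
        # place key in the gap that opens up.
        while j >= 0 and arr[j] > key:
            arr[j + 1] = arr[j]
            j -= 1
        arr[j + 1] = key
    return arr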
Complexity Analysis of Insertion Sort
i) Time Complexity of Insertion Sort;
• Best case: O(n), if the list is already sorted, where n is the number
of elements in the list.
• Average case: O(n^2), if the list is randomly ordered.
• Worst case: O(n^2), if the list is in reverse order.
ii) Space Complexity of Insertion Sort
• Auxiliary Space: O(1). Insertion sort requires O(1) additional space,
making it a space-efficient sorting algorithm.
Advantages of Insertion Sort:
• Simple and easy to implement.
• Stable sorting algorithm.
• Efficient for small lists and nearly sorted lists.
• Space-efficient.
• Adaptive: the number of swaps is directly proportional to the number
of inversions. For example, no swapping happens for an already sorted
array, so it takes only O(n) time.
Disadvantages of Insertion Sort:
• Inefficient for large lists.
• Not as efficient as other sorting algorithms (e.g., merge sort, quick
sort) for most cases.
Applications of Insertion Sort:
• Insertion sort is commonly used in situations where:
• The list is small or nearly sorted.
• Simplicity and stability are important.
• Used as a subroutine in Bucket Sort.
• Can be useful when the array is already almost sorted (very few
inversions).
• Since insertion sort is suitable for small arrays, it is used in
hybrid sorting algorithms along with other efficient algorithms like
Quick Sort and Merge Sort: when the subarray size becomes small, these
recursive algorithms switch to insertion sort. For example, IntroSort
and TimSort use insertion sort.
4.Merge sort.
• Merge sort is a sorting algorithm that follows the divide-and-
conquer approach. It works by recursively dividing the input array into
smaller subarrays and sorting those subarrays then merging them
back together to obtain the sorted array.
• In simple terms, we can say that the process of merge sort is to divide
the array into two halves, sort each half, and then merge the sorted
halves back together. This process is repeated until the entire array is
sorted.
How does Merge Sort work?
• Merge sort is a popular sorting algorithm known for its efficiency and
stability. It follows the divide-and-conquer approach to sort a given
array of elements.
• Here's a step-by-step explanation of how merge sort works (a sketch
follows the list):
• Divide: Divide the list or array recursively into two halves until it
can no longer be divided.
• Conquer: Each subarray is sorted individually using the merge sort
algorithm.
• Merge: The sorted subarrays are merged back together in sorted order.
The process continues until all elements from both subarrays have been
merged.
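A minimal Python sketch of the divide, conquer, and merge steps
(illustrative, not from the slides):

def merge_sort(arr):
    if len(arr) <= 1:  # base case: cannot be divided further
        return arr
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])    # sort each half recursively
    right = merge_sort(arr[mid:])
    # Merge: repeatedly take the smaller front element of the halves.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:  # <= keeps equal elements in order (stable)
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])   # one of these is empty; append the rest
    merged.extend(right[j:])
    return merged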
Complexity Analysis of Merge Sort:
i) Time Complexity:
• Best Case: O(n log n), when the array is already sorted or nearly sorted.
• Average Case: O(n log n), when the array is randomly ordered.
• Worst Case: O(n log n), when the array is sorted in reverse order.
ii) Auxiliary Space: O(n); additional space is required for the temporary
array used during merging.
Applications of Merge Sort:
• Sorting large datasets.
• External sorting (when the dataset is too large to fit in memory).
• Counting inversions in an array; it is also a preferred algorithm for
sorting linked lists.
• It can be easily parallelized, as we can independently sort subarrays
and then merge.
• The merge function of merge sort can be used to efficiently solve
problems like the union and intersection of two sorted arrays.
Advantages of Merge Sort:
• Stability: Merge sort is a stable sorting algorithm, which means it
maintains the relative order of equal elements in the input array.
• Guaranteed worst-case performance: Merge sort has a worst-case
time complexity of O(n log n), which means it performs well even on
large datasets.
• Simple to implement: The divide-and-conquer approach is
straightforward.
• Naturally parallel: Subarrays are sorted and merged independently,
which makes it suitable for parallel processing.
Disadvantages of Merge Sort:
• Space complexity: Merge sort requires additional memory to store
the merged sub-arrays during the sorting process.
• Not in-place: Merge sort is not an in-place sorting algorithm, which
means it requires additional memory to store the sorted data. This
can be a disadvantage in applications where memory usage is a
concern.
• Slower than QuickSort in general. QuickSort is more cache friendly
because it works in-place.
5. QuickSort.
• QuickSort is a sorting algorithm based on the divide-and-conquer
approach: it picks an element as a pivot and partitions the given array
around the picked pivot, placing the pivot in its correct position in
the sorted array.
How does QuickSort Algorithm work?
• There are mainly three steps in the algorithm.
1. Choose a pivot.
2. Partition the array around the pivot. After partitioning, all
elements smaller than the pivot are on its left and all greater
elements are on its right, and we get the index of the pivot's final
position. The left and right parts may not be sorted individually.
3. Recursively call for the two partitioned left and right subarrays.
• We stop the recursion when only one element is left.
• Partition Algorithm:
• The key process in quickSort is partition(). There are three common
partition algorithms, all with O(n) time complexity (a Lomuto-based
sketch follows the list).

1. Naive Partition: Here we create a copy of the array, first putting
all smaller elements and then all greater ones. Finally we copy the
temporary array back to the original array. This requires O(n) extra
space.
2. Lomuto Partition: A simple algorithm; we keep track of the index of
smaller elements and keep swapping. It is used here because of its
simplicity.
3. Hoare's Partition: This is the fastest of the three. Here we traverse
the array from both sides and keep swapping a greater element on the
left with a smaller one on the right while the array is not
partitioned. Please refer to Hoare's vs Lomuto for details.
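A minimal Python sketch using the Lomuto partition described above
(illustrative; the last element is taken as the pivot):

def lomuto_partition(arr, low, high):
    pivot = arr[high]  # choose the last element as the pivot
    i = low - 1        # boundary of the region of smaller elements
    for j in range(low, high):
        if arr[j] < pivot:
            i += 1
            arr[i], arr[j] = arr[j], arr[i]
    # Place the pivot in its correct, final position.
    arr[i + 1], arr[high] = arr[high], arr[i + 1]
    return i + 1

def quick_sort(arr, low=0, high=None):
    if high is None:
        high = len(arr) - 1
    if low < high:  # recursion stops when one element is left
        p = lomuto_partition(arr, low, high)
        quick_sort(arr, low, p - 1)
        quick_sort(arr, p + 1, high)
    return arr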
Complexity Analysis of Quick Sort:
i) Time Complexity:
• Best Case: Ω(n log n)
The best case for quicksort occurs when the pivot chosen at each step
divides the array into roughly equal halves. In this case, the
algorithm makes balanced partitions, leading to efficient sorting.
• Average Case: Θ(n log n)
Quicksort's average-case performance is usually very good in practice,
making it one of the fastest sorting algorithms.
• Worst Case: O(n^2)
The worst case for quicksort occurs when the pivot at each step
consistently produces highly unbalanced partitions, e.g. when the array
is already sorted and the pivot is always chosen as the smallest or
largest element. To mitigate the worst case, various techniques are
used, such as choosing a good pivot (e.g., median of three) and using a
randomized algorithm (Randomized Quicksort) to shuffle the elements
before sorting.
ii) Auxiliary Space: O(1) if we don't consider the recursive stack
space. If we do consider it, then in the worst case quicksort could use
O(n) stack space.
Advantages of Quick Sort:
• It is a divide-and-conquer algorithm, which makes problems easier to
solve.
• It is efficient on large data sets.
• It has low overhead, as it only requires a small amount of memory to
function.
• It is cache-friendly, as we work on the same array to sort and do not
copy data to any auxiliary array.
• It is the fastest general-purpose algorithm for large data when
stability is not required.
• It is tail recursive, and hence tail call optimization can be applied.
Disadvantages of Quick Sort:
• It has a worst-case time complexity of O(n^2), which occurs when the
pivot is chosen poorly.
• It is not a good choice for small data sets.
• It is not a stable sort, meaning that if two elements have the same
key, their relative order will not be preserved in the sorted output,
because elements are swapped according to the pivot's position (without
considering their original positions).
Heap sort.
• Heap sort is a comparison-based sorting technique based on the Binary
Heap data structure. It can be seen as an optimization over selection
sort, where we first find the max (or min) element and swap it with the
last (or first).
• We repeat the same process for the remaining elements. In heap sort,
we use a binary heap so that we can quickly find and move the max
element in O(log n) instead of O(n), and hence achieve the O(n log n)
time complexity.
Steps…
• First convert the array into a max heap using heapify; note that this
happens in place, i.e. the array elements are rearranged to follow the
heap properties. Then, one by one, delete the root node of the max
heap, replace it with the last node, and heapify. Repeat this process
while the size of the heap is greater than 1. In detail (a sketch
follows the list):
• Rearrange the array elements so that they form a max heap.
• Repeat the following steps until the heap contains only one element:
• Swap the root element of the heap (which is the largest element in the
current heap) with the last element of the heap.
• Remove the last element of the heap (which is now in its correct
position). We mainly reduce the heap size and do not remove the element
from the actual array.
• Heapify the remaining elements of the heap.
• Finally we get the sorted array.
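A minimal Python sketch of these steps (illustrative, with a recursive
heapify):

def heapify(arr, n, i):
    # Sift arr[i] down so the subtree rooted at i is a max heap
    # within the first n elements.
    largest = i
    left, right = 2 * i + 1, 2 * i + 2
    if left < n and arr[left] > arr[largest]:
        largest = left
    if right < n and arr[right] > arr[largest]:
        largest = right
    if largest != i:
        arr[i], arr[largest] = arr[largest], arr[i]
        heapify(arr, n, largest)

def heap_sort(arr):
    n = len(arr)
    # Build a max heap in place, from the last internal node up.
    for i in range(n // 2 - 1, -1, -1):
        heapify(arr, n, i)
    # Move the root (maximum) to the end and shrink the heap.
    for end in range(n - 1, 0, -1):
        arr[0], arr[end] = arr[end], arr[0]
        heapify(arr, end, 0)
    return arr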
Advantages of Heap Sort.
• Efficient Time Complexity: Heap Sort has a time complexity of O(n log n) in
all cases. This makes it efficient for sorting large datasets. The log n factor
comes from the height of the binary heap, and it ensures that the algorithm
maintains good performance even with a large number of elements.
• Memory Usage: Memory usage can be minimal (by writing an iterative
heapify() instead of a recursive one). So, apart from what is necessary
to hold the initial list of items to be sorted, it needs no additional
memory space to work.
• Simplicity: It is simpler to understand than other equally efficient sorting
algorithms because it does not use advanced computer science concepts
such as recursion.
Disadvantages of Heap Sort.
• Costly: Heap sort is costly, as its constant factors are higher
compared to merge sort even though the time complexity is O(n log n)
for both.
• Unstable: Heap sort is unstable; it might rearrange the relative order
of equal elements.
• Inefficient in practice: Heap sort is often outperformed because of
the high constants hidden in its time complexity.
Radix Sort.
• Radix Sort is a linear sorting algorithm that sorts elements by
processing them digit by digit. It is an efficient sorting algorithm for
integers or strings with fixed-size keys.
• The key idea behind Radix Sort is to exploit the concept of place
value. It assumes that sorting numbers digit by digit will eventually
result in a fully sorted list.
• Radix Sort can be performed using different variations, such as Least
Significant Digit (LSD) Radix Sort or Most Significant Digit (MSD) Radix
Sort.
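A minimal Python sketch of LSD radix sort in base 10 (illustrative; it
assumes a non-empty list of non-negative integers):

def counting_sort_by_digit(arr, exp):
    # Stable counting sort on the digit at place value exp.
    output = [0] * len(arr)
    count = [0] * 10
    for x in arr:
        count[(x // exp) % 10] += 1
    for d in range(1, 10):       # prefix sums give final positions
        count[d] += count[d - 1]
    for x in reversed(arr):      # traverse from the end for stability
        d = (x // exp) % 10
        count[d] -= 1
        output[count[d]] = x
    return output

def radix_sort(arr):
    exp = 1
    while max(arr) // exp > 0:   # one pass per digit, least significant first
        arr = counting_sort_by_digit(arr, exp)
        exp *= 10
    return arr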
Complexity ….

Time Complexity:
• Radix sort is a non-comparative integer sorting algorithm that sorts data with integer keys by
grouping the keys by the individual digits which share the same significant position and value. It
has a time complexity of O(d * (n + b)), where d is the number of digits, n is the number of
elements, and b is the base of the number system being used.
• In practical implementations, radix sort is often faster than other comparison-based sorting
algorithms, such as quicksort or merge sort, for large datasets, especially when the keys have many
digits. However, its time complexity grows linearly with the number of digits, and so it is not as
efficient for small datasets.
• Auxiliary Space:
• Radix sort also has a space complexity of O(n + b), where n is the number of elements and b is the
base of the number system. This space complexity comes from the need to create buckets for each
digit value and to copy the elements back to the original array after each digit has been sorted.
Counting Sort.
• Counting Sort is a non-comparison-based sorting algorithm. It is
particularly efficient when the range of input values is small compared
to the number of elements to be sorted. The basic idea behind
Counting Sort is to count the frequency of each distinct element in the
input array and use that information to place the elements in their
correct sorted positions.
Counting Sort Algorithm:
• Declare an auxiliary array countArray[] of size max(inputArray[])+1
and initialize it with 0.
• Traverse array inputArray[] and map each element of inputArray[] as an
index of the countArray[] array, i.e., execute
countArray[inputArray[i]]++ for 0 <= i < N.
• Calculate the prefix sum at every index of array countArray[].
• Create an array outputArray[] of size N.
• Traverse array inputArray[] from the end and update
outputArray[countArray[inputArray[i]] - 1] = inputArray[i]. Also,
update countArray[inputArray[i]] = countArray[inputArray[i]] - 1.
(A sketch of these steps follows.)
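A minimal Python sketch of these steps (illustrative; it assumes a
non-empty array of non-negative integers):

def counting_sort(input_array):
    n = len(input_array)
    # count_array[v] first holds the frequency of value v...
    count_array = [0] * (max(input_array) + 1)
    for v in input_array:
        count_array[v] += 1
    # ...and after the prefix sum, the number of elements <= v.
    for v in range(1, len(count_array)):
        count_array[v] += count_array[v - 1]
    # Traverse from the end so equal elements keep their order (stable).
    output_array = [0] * n
    for v in reversed(input_array):
        count_array[v] -= 1
        output_array[count_array[v]] = v
    return output_array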
Complexity Analysis of Counting Sort:
i) Time Complexity: O(N+M), where N and M are the sizes of inputArray[]
and countArray[] respectively.
• Worst-case: O(N+M).
• Average-case: O(N+M).
• Best-case: O(N+M).
ii) Auxiliary Space: O(N+M), where N and M are the space taken by
outputArray[] and countArray[] respectively.
Advantage of Counting Sort:
• Counting sort generally performs faster than all comparison-based
sorting algorithms, such as merge sort and quicksort, if the range of
the input is of the order of the number of inputs.
• Counting sort is easy to code.
• Counting sort is a stable algorithm.
Disadvantage of Counting Sort:
• Counting sort doesn't work on decimal values.
• Counting sort is inefficient if the range of values to be sorted is
very large.
• Counting sort is not an in-place sorting algorithm; it uses extra
space for sorting the array elements.
Applications of Counting Sort:
• It is a commonly used algorithm for cases where we have limited-range
items. For example, sorting students by grades, or sorting events by
time, days, months, years, etc.
• It is used as a subroutine in Radix Sort.
• The idea of counting sort is used in Bucket Sort to divide elements
into different buckets.
Bucket sort.
• Bucket sort is a sorting technique that involves dividing elements into
various groups, or buckets. These buckets are formed by uniformly
distributing the elements. Once the elements are divided into buckets,
they can be sorted using any other sorting algorithm. Finally, the
sorted elements are gathered together in an ordered fashion.
Steps…
• Create n empty buckets (or lists) and do the following for every array
element arr[i], assuming the values lie in the range [0, 1) (a sketch
follows the list).
• Insert arr[i] into bucket[n * arr[i]].
• Sort the individual buckets using insertion sort.
• Concatenate all sorted buckets.
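A minimal Python sketch of these steps (illustrative; it assumes values
uniformly distributed in [0, 1) and uses Python's built-in sort per
bucket in place of insertion sort for brevity):

def bucket_sort(arr):
    n = len(arr)
    buckets = [[] for _ in range(n)]   # n empty buckets
    for x in arr:
        buckets[int(n * x)].append(x)  # insert arr[i] into bucket[n*arr[i]]
    result = []
    for b in buckets:
        b.sort()                       # sort each bucket individually
        result.extend(b)               # concatenate the sorted buckets
    return result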
Complexity Analysis of Bucket Sort Algorithm:
• Worst Case Time Complexity: O(n^2). The worst case happens when one
bucket gets all the elements. In this case, we will be running
insertion sort on all items, which makes the time complexity O(n^2).
We can reduce the worst-case time complexity to O(n log n) by using an
O(n log n) algorithm like Merge Sort or Heap Sort to sort the
individual buckets, but that will worsen the running time for cases
when buckets have a small number of items, as insertion sort works
better for small arrays.
• Best Case Time Complexity: O(n + k). The best case happens when every
bucket gets an equal number of elements. In this case every call to
insertion sort takes constant time, as the number of items in every
bucket is constant (assuming that k is linearly proportional to n).
• Auxiliary Space: O(n + k)
