Fundamental Computing Algorithms
GROUP 12
• What is Big O?
• Big O notation is a system for measuring the rate of growth of an
algorithm. Big O notation mathematically describes the complexity of
an algorithm in terms of time and space. We don’t measure the speed
of an algorithm in seconds (or minutes!). Instead, we measure the
number of operations it takes to complete.
• The O is short for “Order of”. So, if we’re discussing an algorithm with
O(n^2), we say its order of growth is n^2; that is, it has quadratic
complexity.
How Does Big O Work?
• Big O notation measures the worst-case scenario.
• Why?
• Because we don’t know what we don’t know.
• We need to know just how poorly our algorithm will perform so we
can evaluate other solutions.
• The worst-case scenario is also known as the “upper bound”. When
we say "upper bound", we mean the maximum number of operations
performed by an algorithm.
O Complexity    Rate of growth
O(log n)        logarithmic
O(n^2)          quadratic
O(n^3)          cubic
O(2^n)          exponential
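To make “counting operations” concrete, here is a small Python sketch (the helper names are our own) contrasting a linear loop with a quadratic one:

def linear_scan(items):
    # O(n): one operation per element
    count = 0
    for _ in items:
        count += 1
    return count

def all_pairs(items):
    # O(n^2): one operation per pair of elements
    count = 0
    for _ in items:
        for _ in items:
            count += 1
    return count

# For 1,000 items, linear_scan performs about 1,000 operations,
# while all_pairs performs about 1,000,000.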
• 1. Naive Partition: Here we create a copy of the array. We first place all
elements smaller than the pivot, then all greater ones. Finally we copy the
temporary array back to the original array. This requires O(n) extra space.
• 2. Lomuto Partition: This is the partition we use here. It is a simple
algorithm: we keep track of the index of the smaller elements and keep
swapping. We use it because of its simplicity (a sketch follows this list).
• 3. Hoare’s Partition: This is the fastest of all. Here we traverse the array
from both sides and keep swapping a greater element on the left with a smaller
one on the right while the array is not partitioned. Please refer to Hoare’s
vs. Lomuto partition for details.
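As an illustration, here is a minimal Python sketch of the Lomuto partition scheme, using the last element as the pivot (the function name is our own):

def lomuto_partition(arr, low, high):
    """Partition arr[low..high] around arr[high]; return the pivot's final index."""
    pivot = arr[high]
    i = low - 1  # boundary of the "smaller than or equal to pivot" region
    for j in range(low, high):
        if arr[j] <= pivot:
            i += 1
            arr[i], arr[j] = arr[j], arr[i]  # grow the smaller region
    arr[i + 1], arr[high] = arr[high], arr[i + 1]  # place the pivot between the halves
    return i + 1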
Complexity Analysis of Quick Sort:
i) Time Complexity:
• Best Case: Ω(N log N)
The best-case scenario for quicksort occurs when the pivot chosen at each step divides the array into
roughly equal halves.
In this case, the algorithm makes balanced partitions, leading to efficient sorting.
• Average Case: Θ(N log N)
Quicksort’s average-case performance is usually very good in practice, making it one of the fastest
sorting algorithms.
• Worst Case: O(N^2)
The worst-case scenario for quicksort occurs when the pivot at each step consistently produces highly
unbalanced partitions, for example when the array is already sorted and the pivot is always chosen as the
smallest or largest element. To mitigate the worst case, various techniques are used, such as choosing a
good pivot (e.g., median of three) or using a randomized algorithm (randomized quicksort) that shuffles or
randomly picks the pivot before partitioning; see the sketch after this list.
ii) Auxiliary Space: O(1) if we don’t count the recursion stack. If we do count it, quicksort can use
O(N) stack space in the worst case.
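A minimal sketch of the randomized mitigation mentioned above, reusing the lomuto_partition function from the earlier sketch:

import random

def quicksort(arr, low=0, high=None):
    if high is None:
        high = len(arr) - 1
    if low < high:
        r = random.randint(low, high)  # random pivot makes the O(N^2) case unlikely
        arr[r], arr[high] = arr[high], arr[r]  # move it where Lomuto expects the pivot
        p = lomuto_partition(arr, low, high)
        quicksort(arr, low, p - 1)   # sort elements left of the pivot
        quicksort(arr, p + 1, high)  # sort elements right of the pivot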
Advantages of Quick Sort:
• It is a divide-and-conquer algorithm, which makes problems easier to
solve.
• It is efficient on large data sets.
• It has low overhead, as it only requires a small amount of memory to
function.
• It is cache friendly, as we work on the same array to sort and do not
copy data to any auxiliary array.
• It is the fastest general-purpose algorithm for large data when stability is
not required.
• It is tail recursive, so tail-call optimization can be applied.
Disadvantages of Quick Sort:
• It has a worst-case time complexity of O(N^2), which occurs when the
pivot is chosen poorly.
• It is not a good choice for small data sets.
• It is not a stable sort: if two elements have the same key, their relative
order is not preserved in the sorted output, because quicksort swaps
elements according to the pivot’s position without considering their
original positions.
Heap sort.
• Heap sort is a comparison-based sorting technique based on the
Binary Heap data structure. It can be seen as an optimization over
selection sort, where we first find the max (or min) element and swap
it with the last (or first).
• We repeat the same process for the remaining elements. In Heap
Sort, we use a Binary Heap so that we can quickly find and move the
max element in O(log n) instead of O(n), and hence achieve O(n log n)
time complexity.
Steps…
• First convert the array into a max heap using heapify. Note that this happens
in place: the array elements are rearranged to satisfy the heap property.
Then, one by one, delete the root node of the max-heap, replace it with the
last node, and heapify. Repeat this process while the size of the heap is greater than 1.
• Rearrange the array elements so that they form a max heap.
• Repeat the following steps until the heap contains only one element:
• Swap the root element of the heap (the largest element in the current heap) with the
last element of the heap.
• Remove the last element of the heap (which is now in its correct position). We merely
reduce the heap size and do not remove the element from the actual array.
• Heapify the remaining elements of the heap.
• Finally, we get a sorted array (a sketch follows these steps).
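A minimal Python sketch of these steps (a recursive heapify is used here for clarity; an iterative one lowers memory use, as noted under the advantages below):

def heapify(arr, n, i):
    """Sift arr[i] down so the subtree rooted at i is a max heap (heap size n)."""
    largest = i
    left, right = 2 * i + 1, 2 * i + 2
    if left < n and arr[left] > arr[largest]:
        largest = left
    if right < n and arr[right] > arr[largest]:
        largest = right
    if largest != i:
        arr[i], arr[largest] = arr[largest], arr[i]
        heapify(arr, n, largest)

def heap_sort(arr):
    n = len(arr)
    for i in range(n // 2 - 1, -1, -1):  # step 1: build a max heap, bottom-up
        heapify(arr, n, i)
    for end in range(n - 1, 0, -1):      # step 2: move the max to the end, shrink the heap
        arr[0], arr[end] = arr[end], arr[0]
        heapify(arr, end, 0)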
Advantages of Heap Sort.
• Efficient Time Complexity: Heap Sort has a time complexity of O(n log n) in
all cases. This makes it efficient for sorting large datasets. The log n factor
comes from the height of the binary heap, and it ensures that the algorithm
maintains good performance even with a large number of elements.
• Memory Usage: Memory usage can be minimal (by writing an iterative
heapify() instead of a recursive one). So, apart from what is necessary to
hold the initial list of items to be sorted, it needs no additional memory
space to work.
• Simplicity: It is simpler to understand than other equally efficient sorting
algorithms because it does not use advanced computer science concepts
such as recursion.
Disadvantages of Heap Sort.
• Costly: Heap sort is costly, as its constant factors are higher than
merge sort’s even though the time complexity is O(n log n) for both.
• Unstable: Heap sort is unstable; it might rearrange the relative order
of equal elements.
• Inefficient: Heap Sort is not very efficient in practice because of the
high constants in its time complexity.
Radix Sort.
• Radix Sort is a linear sorting algorithm that sorts elements by
processing them digit by digit. It is an efficient sorting algorithm for
integers or strings with fixed-size keys.
• The key idea behind Radix Sort is to exploit the concept of place
value: sorting the numbers digit by digit with a stable sort eventually
results in a fully sorted list.
• Radix Sort can be performed using different variations, such as Least
Significant Digit (LSD) Radix Sort or Most Significant Digit (MSD) Radix
Sort; a sketch of the LSD variant follows.
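A minimal Python sketch of LSD Radix Sort for non-negative integers, grouping by one digit per pass (base 10 by default; the names are our own):

def radix_sort(arr, base=10):
    if not arr:
        return arr
    exp = 1
    while max(arr) // exp > 0:  # one pass per digit, least significant first
        buckets = [[] for _ in range(base)]
        for x in arr:
            buckets[(x // exp) % base].append(x)  # stable grouping by current digit
        arr = [x for bucket in buckets for x in bucket]
        exp *= base
    return arr

Because each pass is stable, ties on the current digit preserve the order established by earlier (less significant) digits, which is what makes the final list fully sorted.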
Complexity Analysis of Radix Sort:
• Time Complexity:
• Radix sort is a non-comparative integer sorting algorithm that sorts data with integer keys by
grouping the keys by the individual digits which share the same significant position and value. It
has a time complexity of O(d * (n + b)), where d is the number of digits, n is the number of
elements, and b is the base of the number system being used.
• In practical implementations, radix sort is often faster than comparison-based sorting
algorithms, such as quicksort or merge sort, for large datasets, especially when the keys have many
digits. However, its running time grows linearly with the number of digits, and it is not as
efficient for small datasets.
• Auxiliary Space:
• Radix sort also has a space complexity of O(n + b), where n is the number of elements and b is the
base of the number system. This space complexity comes from the need to create buckets for each
digit value and to copy the elements back to the original array after each digit has been sorted.
Counting Sort.
• Counting Sort is a non-comparison-based sorting algorithm. It is
particularly efficient when the range of input values is small compared
to the number of elements to be sorted. The basic idea behind
Counting Sort is to count the frequency of each distinct element in the
input array and use that information to place the elements in their
correct sorted positions.
Counting Sort Algorithm:
• Declare an auxiliary array countArray[] of size max(inputArray[]) + 1 and
initialize it with 0.
• Traverse inputArray[] and use each element of inputArray[] as an
index into countArray[], i.e., execute countArray[inputArray[i]]++
for 0 <= i < N.
• Calculate the prefix sum at every index of countArray[].
• Create an array outputArray[] of size N.
• Traverse inputArray[] from the end and
update outputArray[countArray[inputArray[i]] - 1] = inputArray[i].
Also update countArray[inputArray[i]] = countArray[inputArray[i]] - 1.
(A sketch follows these steps.)
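A minimal Python sketch of these steps, using the array names from the list above (it assumes a non-empty array of non-negative integers):

def counting_sort(inputArray):
    N = len(inputArray)
    countArray = [0] * (max(inputArray) + 1)
    for x in inputArray:  # count occurrences of each value
        countArray[x] += 1
    for i in range(1, len(countArray)):  # prefix sums give each value's final position
        countArray[i] += countArray[i - 1]
    outputArray = [0] * N
    for x in reversed(inputArray):  # traverse from the end to keep the sort stable
        countArray[x] -= 1
        outputArray[countArray[x]] = x
    return outputArray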
Complexity Analysis of Counting Sort:
i) Time Complexity: O(N + M), where N and M are the sizes
of inputArray[] and countArray[] respectively.
• Worst-case: O(N + M).
• Average-case: O(N + M).
• Best-case: O(N + M).
ii) Auxiliary Space: O(N + M), where N and M are the space taken
by outputArray[] and countArray[] respectively.
Advantage of Counting Sort:
• Counting sort generally performs faster than comparison-based
sorting algorithms, such as merge sort and quicksort, if the range of
input values is of the order of the number of inputs.
• Counting sort is easy to code.
• Counting sort is a stable algorithm.
Disadvantage of Counting Sort:
• Counting sort doesn’t work on decimal (non-integer) values.
• Counting sort is inefficient if the range of values to be sorted is very
large.
• Counting sort is not an in-place sorting algorithm; it uses extra space
for sorting the array elements.
Applications of Counting Sort:
• It is a commonly used algorithm for cases where we have limited-range
items. For example, sorting students by grades, or sorting events by
time, days, months, years, etc.
• It is used as a subroutine in Radix Sort.
• The idea of counting sort is used in Bucket Sort to divide elements
into different buckets.
Bucket sort.
• Bucket sort is a sorting technique that involves dividing elements into
various groups, or buckets. These buckets are formed by uniformly
distributing the elements. Once the elements are divided into buckets,
they can be sorted using any other sorting algorithm. Finally, the
sorted elements are gathered together in an ordered fashion.
Steps…
• Create n empty buckets (or lists) and do the following for every array
element arr[i], assuming the elements are uniformly distributed in [0, 1).
• Insert arr[i] into bucket[n * arr[i]].
• Sort the individual buckets using insertion sort.
• Concatenate all sorted buckets (a sketch follows these steps).
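A minimal Python sketch of these steps, assuming the input values lie in the range [0, 1):

def insertion_sort(bucket):
    # plain insertion sort; efficient for the small lists buckets tend to be
    for i in range(1, len(bucket)):
        key = bucket[i]
        j = i - 1
        while j >= 0 and bucket[j] > key:
            bucket[j + 1] = bucket[j]
            j -= 1
        bucket[j + 1] = key

def bucket_sort(arr):
    n = len(arr)
    buckets = [[] for _ in range(n)]
    for x in arr:
        buckets[int(n * x)].append(x)  # index n*arr[i] assumes 0 <= x < 1
    for bucket in buckets:
        insertion_sort(bucket)
    return [x for bucket in buckets for x in bucket]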
Complexity Analysis of Bucket Sort Algorithm:
• Worst Case Time Complexity: O(n^2). The worst case happens when one bucket
gets all the elements. In this case we end up running insertion sort on all items,
which makes the time complexity O(n^2). We can reduce the worst-case
time complexity to O(n log n) by using an O(n log n) algorithm like Merge Sort or
Heap Sort to sort the individual buckets, but that will hurt the running time in
the common case where buckets hold a small number of items, as insertion sort works
better on small arrays.
• Best Case Time Complexity: O(n + k). The best case happens when every bucket
gets an equal number of elements. In this case every call to insertion sort takes
constant time, as the number of items in every bucket is constant
(assuming that k is linearly proportional to n).
• Auxiliary Space: O(n + k)