
BCS 503

DESIGN AND ANALYSIS OF ALGORITHM

UNIT 1 Introduction

SYLLABUS

Algorithms, Analyzing Algorithms, Complexity of Algorithms, Growth of Functions, Performance Measurements, Sorting and Order Statistics - Shell Sort, Quick Sort, Merge Sort, Heap Sort, Comparison of Sorting Algorithms, Sorting in Linear Time
What is an Algorithm?
An algorithm is a finite sequence of well-defined instructions that can be used to solve a computational problem. It provides a step-by-step procedure that converts an input into a desired output.
Algorithms typically follow a logical structure:
•Input: The algorithm receives input data.
•Processing: The algorithm performs a series of operations on the input data.
•Output: The algorithm produces the desired output.

What is the Need for Algorithms?


Algorithms are essential for solving complex computational problems
efficiently and effectively. They provide a systematic approach to:

•Solving problems: Algorithms break down problems into smaller, manageable steps.
•Optimizing solutions: Algorithms find the best or near-optimal solutions to problems.
•Automating tasks: Algorithms can automate repetitive or complex tasks, saving time and effort.

1. Analysis of Algorithms
Analysis of Algorithms is the process of evaluating the efficiency of
algorithms, focusing mainly on the time and space complexity. This
helps in evaluating how the algorithm's running time or space
requirements grow as the size of input increases.

2. Mathematical Algorithms
Mathematical algorithms are used for analyzing and optimizing data
structures and algorithms. Knowing basic concepts
like divisibility, LCM, GCD, etc. can really help you understand how
data structures work and improve your ability to design efficient
algorithms.

3. Bitwise Algorithms
Bitwise algorithms are algorithms that operate on individual bits of
numbers. These algorithms manipulate the binary representation of
numbers like shifting bits, setting or clearing specific bits of a
number and perform bitwise operations (AND, OR, XOR). Bitwise
algorithms are commonly used in low-level programming,
cryptography, and optimization tasks where efficient
manipulation of individual bits is required.
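For illustration, here is a minimal C++ sketch of the operations mentioned above (shifting, setting, clearing, and testing individual bits); the values and variable names are just examples and are not part of the original notes.

#include <iostream>
using namespace std;

int main() {
    unsigned int x = 0b1010;                 // binary 1010 (decimal 10)

    unsigned int shifted  = x << 1;          // shift left:  10100 (decimal 20)
    unsigned int withBit2 = x | (1u << 2);   // set bit 2:   1110  (decimal 14)
    unsigned int without1 = x & ~(1u << 1);  // clear bit 1: 1000  (decimal 8)
    bool bit3 = (x >> 3) & 1u;               // test bit 3:  true

    cout << shifted << " " << withBit2 << " "
         << without1 << " " << bit3 << endl; // prints: 20 14 8 1
    return 0;
}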

4. Searching Algorithms
Searching Algorithms are used to find a specific element or item in a
collection of data. These algorithms are widely used to retrieve data
efficiently from large datasets.

5. Sorting Algorithms
Sorting algorithms are used to arrange the elements of a list in
a specific order, such as numerical or alphabetical. It organizes the
items in a systematic way, making it easier to search for and access
specific elements.

6. Recursion
Recursion is a programming technique where a function calls
itself within its own definition. It is usually used to solve problems
that can be broken down into smaller instances of the same problem.
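As a minimal illustration (an added example, not from the original notes), factorial is a classic recursive function: n! is defined in terms of the smaller instance (n - 1)!.

#include <iostream>
using namespace std;

// n! defined in terms of the smaller instance (n - 1)!
long long factorial(int n) {
    if (n <= 1)                   // base case stops the recursion
        return 1;
    return n * factorial(n - 1);  // recursive call on a smaller problem
}

int main() {
    cout << factorial(5) << endl; // prints 120
    return 0;
}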

7. Backtracking Algorithm
Backtracking Algorithm is derived from the Recursion algorithm,
with the option to revert if a recursive solution fails, i.e. in case a
solution fails, the program traces back to the moment where it failed
and builds on another solution. So basically it tries out all the possible
solutions and finds the correct one.

8. Divide and Conquer Algorithm


Divide and conquer algorithms follow a recursive strategy to solve
problems by dividing them into smaller subproblems, solving those
subproblems, and combining the solutions to obtain the final solution.

9. Greedy Algorithm
Greedy Algorithm builds up the solution one piece at a time and
chooses the next piece which gives the most obvious and immediate
benefit i.e., which is the most optimal choice at that moment. So
the problems where choosing locally optimal also leads to the global
solutions are best fit for Greedy.

10. Dynamic Programming


Dynamic Programming is a method used to solve complex problems
by breaking them down into simpler subproblems. By solving each
subproblem only once and storing the results, it avoids redundant
computations, leading to more efficient solutions for a wide range of
problems.
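A minimal sketch of this idea (an illustrative example, not from the original notes) is computing Fibonacci numbers with memoization: each subproblem fib(i) is solved once and its result is stored for reuse.

#include <iostream>
#include <vector>
using namespace std;

// Memoized Fibonacci: each subproblem fib(i) is computed once and stored.
long long fib(int n, vector<long long>& memo) {
    if (n <= 1)
        return n;
    if (memo[n] != -1)            // result already computed, reuse it
        return memo[n];
    memo[n] = fib(n - 1, memo) + fib(n - 2, memo);
    return memo[n];
}

int main() {
    int n = 40;
    vector<long long> memo(n + 1, -1);
    cout << fib(n, memo) << endl; // prints 102334155
    return 0;
}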

11. Graph Algorithms


Graph algorithms are a set of techniques and methods used to solve
problems related to graphs, which are a collection of nodes and edges.
These algorithms perform various operations on graphs, such
as searching, traversing, finding the shortest path, and
determining connectivity. They are essential for solving a wide range
of real-world problems, including network routing, social network
analysis, and resource allocation.

12. Pattern Searching


Pattern Searching is a fundamental technique in DSA used to find
occurrences of a specific pattern within a larger text. The Pattern
Searching Algorithms use techniques like preprocessing to minimize
unnecessary comparisons, making the search faster.

13. Branch and Bound Algorithm


Branch and Bound Algorithm is a method used in combinatorial
optimization problems to systematically search for the best solution. It
works by dividing the problem into smaller subproblems, or branches,
and then eliminating certain branches based on bounds on the optimal
solution. This process continues until the best solution is found or all
branches have been explored.

14. Geometric Algorithms


Geometric algorithms are a set of algorithms that solve problems
related to shapes, points, lines and polygons. Geometric algorithms
are essential for solving a wide range of problems in computer
science, such as intersection detection, convex hull computation, etc.
15. Randomized Algorithms
Randomized algorithms are algorithms that use randomness to solve
problems. They make use of random input to achieve their goals, often
leading to simpler and more efficient solutions. These algorithms may not always produce the same result on the same input, but they are particularly useful in situations where a probabilistic approach is acceptable.

Why Analysis of Algorithms is important?


•To predict the behavior of an algorithm for large inputs (Scalable Software).
•It is much more convenient to have simple measures for the efficiency of an algorithm
than to implement the algorithm and test the efficiency every time a certain parameter in
the underlying computer system changes.
•More importantly, by analyzing different algorithms, we can compare them to determine
the best one for our purpose.
Let f(n) and g(n) be the times taken by two algorithms, where n >= 0 and f(n) and g(n) are also greater than or equal to 0. A function f(n) is said to grow faster than g(n) if g(n)/f(n) tends to 0 as n tends to infinity (or, equivalently, f(n)/g(n) tends to infinity).

Example 1: f(n) = 1000, g(n) = n + 1


For n > 999, g(n) would always be greater than f(n) because order of growth of g(n) is
more than f(n).
Example 2: f(n) = 4n^2, g(n) = 2n + 2000
f(n) has the higher order of growth, as it grows quadratically with the input size.
How do we quickly find the order of growth?
When n >= 0, f(n) >= 0 and g(n) >= 0, we can use the steps below.
•Ignore the lower order terms.
•Ignore the constants.
For example,
Example 1 : 4n^2 + 3n + 100
After ignoring lower order terms, we get
4n^2
After ignoring constants, we get
n^2
Hence the order of growth is n^2.
Example 2 : 100 n log n + 3n + 100 log n + 2
After ignoring lower order terms, we get
100 n log n
After ignoring constants, we get
n log n
Hence the order of growth is n log n.
How do we compare two orders of growth?
The following standard ordering must be remembered for comparison.
c < log log n < log n < n^(1/3) < n^(1/2) < n < n log n < n^2 < n^2 log n < n^3 < n^4 < 2^n < n^n
Here c is a constant.
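To see this ordering emerge in practice, the small program below (an added illustration; the sample values of n are arbitrary) prints a few of these functions side by side as n grows.

#include <cmath>
#include <iostream>
using namespace std;

int main() {
    // Compare a few standard growth rates at increasing input sizes.
    for (double n : {10.0, 100.0, 1000.0, 100000.0}) {
        cout << "n = " << n
             << ", log n = " << log2(n)
             << ", n log n = " << n * log2(n)
             << ", n^2 = " << n * n << endl;
    }
    return 0;
}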
Asymptotic Notation

Given two algorithms for a task, how do we find out which one is better?

One naive way of doing this is to implement both algorithms and run the two programs on your computer for different inputs and see which one takes less time. There are several problems with this approach to analyzing algorithms.
•It might be possible that for some inputs, the first algorithm performs better than the
second. And for some inputs second performs better.
•It might also be possible that for some inputs, the first algorithm performs better on
one machine, and the second works better on another machine for some other inputs.
Asymptotic Analysis is the big idea that handles the above issues in analyzing algorithms.
In Asymptotic Analysis, we evaluate the performance of an algorithm in terms of input size
(we don’t measure the actual running time). We calculate the order of growth of the time taken (or space used) by an algorithm in terms of input size. For example, Linear Search grows linearly and Binary Search grows logarithmically with the input size.
For example, let us consider the search problem (searching a given item) in a sorted
array.
The solutions to the above search problem include the following (a minimal code sketch of both is given after this list):
•Linear Search (order of growth is linear)
•Binary Search (order of growth is logarithmic).
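The following is a minimal sketch of both approaches (assuming the array is already sorted in ascending order; the function names are illustrative and not part of the original notes):

#include <vector>
using namespace std;

// Linear search: order of growth is linear in n.
int linearSearch(const vector<int>& a, int key) {
    for (int i = 0; i < (int)a.size(); i++)
        if (a[i] == key)
            return i;
    return -1;
}

// Binary search on a sorted array: order of growth is logarithmic in n.
int binarySearch(const vector<int>& a, int key) {
    int lo = 0, hi = (int)a.size() - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;
        if (a[mid] == key)
            return mid;
        if (a[mid] < key)
            lo = mid + 1;   // key lies in the right half
        else
            hi = mid - 1;   // key lies in the left half
    }
    return -1;
}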
To understand how Asymptotic Analysis solves the problems mentioned above in analyzing
algorithms,

•Let us say we run Linear Search on a fast computer A and Binary Search on a slow computer B.
•For small values of input array size n, the fast computer may take less time.
•But, after a certain value of the input array size, Binary Search will definitely start taking less time compared to Linear Search, even though Binary Search is run on the slower machine. Why? After a certain point, machine-specific factors no longer matter because the input has become large.
•The reason is the order of growth of Binary Search with respect to input size is logarithmic
while the order of growth of Linear Search is linear.
•So the machine-dependent constants can always be ignored after a certain
value of input size.
•Let’s say the constant for machine A is 0.2 and the constant for B is 1000 which means
that A is 5000 times more powerful than B.
Input Size    Running time on A    Running time on B
10            2 sec                ~ 1 h
100           20 sec               ~ 1.8 h
10^6          ~ 55.5 h             ~ 5.5 h
10^9          ~ 6.3 years          ~ 8.3 h

Running times for this example:
•Linear Search running time in seconds on A: 0.2 * n
•Binary Search running time in seconds on B: 1000 * log(n)
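Using the two formulas above, a small check program (an added sketch; the constants 0.2 and 1000 come from this example, with log taken base 2) reproduces the table and shows where Binary Search on the slower machine overtakes Linear Search on the faster one.

#include <cmath>
#include <iostream>
using namespace std;

int main() {
    // Machine A runs Linear Search in 0.2 * n seconds,
    // machine B runs Binary Search in 1000 * log2(n) seconds.
    for (double n : {10.0, 100.0, 1e6, 1e9}) {
        double timeA = 0.2 * n;           // Linear Search on fast machine A
        double timeB = 1000 * log2(n);    // Binary Search on slow machine B
        cout << "n = " << n << "  A: " << timeA
             << " s  B: " << timeB << " s" << endl;
    }
    return 0;
}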
Does Asymptotic Analysis always work?
Asymptotic Analysis is not perfect, but that’s the best way available for analyzing
algorithms. For example, say there are two sorting algorithms that take 1000 n log n and 2 n log n time respectively on a machine. Both of these algorithms are asymptotically the same (order of growth is n log n). So, with Asymptotic Analysis, we cannot judge which one is
better as we ignore constants in Asymptotic Analysis. For example, asymptotically Heap
Sort is better than Quick Sort, but Quick Sort takes less time in practice.
Also, in Asymptotic analysis, we always talk about input sizes larger than a constant value.
It might be possible that those large inputs are never given to your software and an
asymptotically slower algorithm always performs better for your particular situation. So,
you may end up choosing an algorithm that is Asymptotically slower but faster for your
software.

Complexities of an Algorithm

The complexity of an algorithm measures the amount of time and space required by the algorithm for an input of size n. The complexity of an algorithm can be divided into two types: time complexity and space complexity.

Time Complexity of an Algorithm


Time complexity is the process of determining a formula for the total time required to execute an algorithm. This calculation is independent of the implementation and programming language.

Example 1: Addition of two scalar variables.


Algorithm ADD SCALAR(A, B)
//Description: Perform arithmetic addition of two numbers
//Input: Two scalar variables A and B
//Output: variable C, which holds the addition of A and B
C <- A + B
return C
The addition of two scalar numbers requires one addition operation. The time complexity of this algorithm is constant, so T(n) = O(1).
In order to calculate the time complexity of an algorithm, it is assumed that a constant time c is taken to execute one operation, and then the total number of operations for an input of length N is calculated. Consider an example to understand the process of calculation:
Suppose a problem is to find whether a pair (X, Y) exists in an array, A of N elements
whose sum is Z. The simplest idea is to consider every pair and check if it satisfies the
given condition or not.
The pseudo-code is as follows:
int a[n];
for (int i = 0; i < n; i++)
    cin >> a[i];

for (int i = 0; i < n; i++)
    for (int j = 0; j < n; j++)
        if (i != j && a[i] + a[j] == z)
            return true;
return false;
Below is the implementation of the above approach:
// C++ program for the above approach
#include <bits/stdc++.h>
using namespace std;

// Function to find a pair in the given


// array whose sum is equal to z
bool findPair(int a[], int n, int z)
{
// Iterate through all the pairs
for (int i = 0; i < n; i++)
for (int j = 0; j < n; j++)

// Check if the sum of the pair


// (a[i], a[j]) is equal to z
if (i != j && a[i] + a[j] == z)
return true;

return false;
}

// Driver Code
int main()
{
// Given Input
int a[] = { 1, -2, 1, 0, 5 };
int z = 0;
int n = sizeof(a) / sizeof(a[0]);

// Function Call
if (findPair(a, n, z))
cout << "True";
else
cout << "False";
return 0;
}
Output
False
Assuming that each of the operations in the computer takes approximately constant time, let it be c.
The number of lines of code executed actually depends on the value of Z. During analysis of the algorithm, mostly the worst-case scenario is considered, i.e., when there is no pair of elements with sum equal to Z. In the worst case,
•N*c operations are required for input.
•The outer loop (i) runs N times.
•For each i, the inner loop (j) runs N times.
So total execution time is N*c + N*N*c + c. Now ignore the lower order terms since the lower
order terms are relatively insignificant for large input, therefore only the highest order term is taken
(without constant) which is N*N in this case. Different notations are used to describe the limiting
behavior of a function, but since the worst case is taken so big-O notation will be used to represent
the time complexity.
Hence, the time complexity is O(N^2) for the above algorithm. Note that the time complexity depends solely on the number of elements in array A, i.e. the input length, so if the length of the array increases, the time of execution will also increase.
Order of growth is how the time of execution depends on the length of the input. In the above
example, it is clearly evident that the time of execution quadratically depends on the length of the
array. Order of growth will help to compute the running time with ease.
Another Example: Let’s calculate the time complexity of the below algorithm:
count = 0

for (int i = N; i > 0; i /= 2)

for (int j = 0; j < i; j++)

count++;

This is a tricky case. At first look, it seems like the complexity is O(N * log N): N for the j loop and log(N) for the i loop. But that is wrong. Let’s see why.
Think about how many times count++ will run.
•When i = N, it will run N times.
•When i = N / 2, it will run N / 2 times.
•When i = N / 4, it will run N / 4 times.
•And so on.
The total number of times count++ runs is N + N/2 + N/4 + … + 1 ≈ 2N. So the time complexity will be O(N).
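A quick way to convince yourself of the 2N bound is to count the increments directly (a small check program added here, not part of the original analysis):

#include <iostream>
using namespace std;

int main() {
    long long N = 1000000, count = 0;
    // Same nested loops as above: i halves each time, j runs i times.
    for (long long i = N; i > 0; i /= 2)
        for (long long j = 0; j < i; j++)
            count++;
    cout << count << " (about 2 * N = " << 2 * N << ")" << endl;
    return 0;
}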
Some general time complexities are listed below with the input range for which they are accepted in
competitive programming:
Input Length    Worst Accepted Time Complexity    Usually type of solutions
10-12           O(N!)                             Recursion and backtracking
15-18           O(2^N * N)                        Recursion, backtracking, and bit manipulation
18-22           O(2^N * N)                        Recursion, backtracking, and bit manipulation
30-40           O(2^(N/2) * N)                    Meet in the middle, Divide and Conquer
100             O(N^4)                            Dynamic programming, Constructive
400             O(N^3)                            Dynamic programming, Constructive
2K              O(N^2 * log N)                    Dynamic programming, Binary Search, Sorting, Divide and Conquer
10K             O(N^2)                            Dynamic programming, Graph, Trees, Constructive
1M              O(N * log N)                      Sorting, Binary Search, Divide and Conquer
100M            O(N), O(log N), O(1)              Constructive, Mathematical, Greedy Algorithms

Space Complexity of an Algorithm


Space complexity is defined as the process of determining a formula for how much memory space is required for the successful execution of the algorithm. The memory space considered is generally the primary memory.
It is the amount of memory needed for the completion of an algorithm.
To estimate the memory requirement we need to focus on two parts:
(1) A fixed part: It is independent of the input size. It includes memory for instructions (code),
constants, variables, etc.
(2) A variable part: It is dependent on the input size. It includes memory for recursion stack,
referenced variables, etc.
Example : Addition of two scalar variables
Algorithm ADD SCALAR(A, B)
//Description: Perform arithmetic addition of two numbers
//Input: Two scalar variables A and B
//Output: variable C, which holds the addition of A and B
C <- A + B
return C
The addition of two scalar numbers requires one extra memory location to hold the result. Thus the
space complexity of this algorithm is constant, hence S(n) = O(1).
Another example: count how many times each element occurs in an array of N elements (assuming, for this sketch, that the element values lie in the range 0 to n-1). The pseudo-code is as follows:
int freq[n];
int a[n];
for(int i = 0; i<n; i++)
{
cin>>a[i];
freq[a[i]]++;
}

Below is the implementation of the above approach:


// C++ program for the above approach
#include <bits/stdc++.h>
using namespace std;

// Function to count frequencies of array items
void countFreq(int arr[], int n)
{
    unordered_map<int, int> freq;

    // Traverse through array elements and
    // count frequencies
    for (int i = 0; i < n; i++)
        freq[arr[i]]++;

    // Traverse through map and print frequencies
    for (auto x : freq)
        cout << x.first << " " << x.second << endl;
}

// Driver Code
int main()
{
    // Given array
    int arr[] = { 10, 20, 20, 10, 10, 20, 5, 20 };
    int n = sizeof(arr) / sizeof(arr[0]);

    // Function Call
    countFreq(arr, n);

    return 0;
}

Output
5 1
20 4
10 3
Here two arrays of length N and a variable i are used in the algorithm, so the total space used is N * c + N * c + 1 * c = 2N * c + c, where c is the unit space taken by one item. The constant terms are insignificant for large inputs, so the space complexity is O(N).
There is also auxiliary space, which is different from space complexity. The main difference is
where space complexity quantifies the total space used by the algorithm, auxiliary space quantifies
the extra space that is used in the algorithm apart from the given input. In the above example, the
auxiliary space is the space used by the freq[] array because that is not part of the given input. So
total auxiliary space is N * c + c which is O(N) only.

Growth of Functions

Performance Measurements

Shell sort
Shell sort is mainly a variation of Insertion Sort. In insertion sort, we move elements only one position ahead. When an
element has to be moved far ahead, many movements are involved. The idea of ShellSort is to allow the exchange of far
items. In Shell sort, we make the array h-sorted for a large value of h. We keep reducing the value of h until it becomes
1. An array is said to be h-sorted if all sublists of every h’th element are sorted.

Algorithm:

Step 1 − Start
Step 2 − Initialize the gap size, say h.
Step 3 − Divide the list into sub-lists whose elements are h positions apart.
Step 4 − Sort these sub-lists using insertion sort.
Step 5 − Reduce the gap h and repeat from Step 3 until the gap becomes 1 and the list is sorted.
Step 6 − Print the sorted list.
Step 7 − Stop.

Following is the implementation of ShellSort.

// C++ implementation of Shell Sort


#include <iostream>
using namespace std;

/* function to sort arr using shellSort */


int shellSort(int arr[], int n)
{
// Start with a big gap, then reduce the gap
for (int gap = n/2; gap > 0; gap /= 2)
{
// Do a gapped insertion sort for this gap size.
// The first gap elements a[0..gap-1] are already in gapped order
// keep adding one more element until the entire array is
// gap sorted
for (int i = gap; i < n; i += 1)
{
// add a[i] to the elements that have been gap sorted
// save a[i] in temp and make a hole at position i
int temp = arr[i];

// shift earlier gap-sorted elements up until the correct


// location for a[i] is found
int j;
for (j = i; j >= gap && arr[j - gap] > temp; j -= gap)
arr[j] = arr[j - gap];

// put temp (the original a[i]) in its correct location


arr[j] = temp;
}
}
return 0;
}

void printArray(int arr[], int n)


{
for (int i=0; i<n; i++)
cout << arr[i] << " ";
}

int main()
{
int arr[] = {12, 34, 54, 2, 3}, i;
int n = sizeof(arr)/sizeof(arr[0]);

cout << "Array before sorting: \n";


printArray(arr, n);

shellSort(arr, n);

cout << "\nArray after sorting: \n";


printArray(arr, n);

return 0;
}
Output
Array before sorting:
12 34 54 2 3
Array after sorting:
2 3 12 34 54
Time Complexity: The time complexity of the above implementation of Shell sort is O(n^2). In this implementation, the gap is reduced by half in every iteration. There are other gap sequences that lead to better time complexity; one such sequence is sketched below.
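As one illustration of a different gap sequence (an added sketch, not the implementation above), Knuth's sequence 1, 4, 13, 40, ... generated by h = 3h + 1 is commonly used and performs better in practice:

// Sketch: Shell sort using Knuth's gap sequence (1, 4, 13, 40, ...).
void shellSortKnuth(int arr[], int n)
{
    // Find the largest gap in the sequence that is less than n/3.
    int gap = 1;
    while (gap < n / 3)
        gap = 3 * gap + 1;

    // Do gapped insertion sorts, shrinking the gap each pass.
    for (; gap > 0; gap = (gap - 1) / 3)
    {
        for (int i = gap; i < n; i++)
        {
            int temp = arr[i];
            int j;
            for (j = i; j >= gap && arr[j - gap] > temp; j -= gap)
                arr[j] = arr[j - gap];
            arr[j] = temp;
        }
    }
}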

Worst Case Complexity


The worst-case complexity for Shell sort is O(n^2).
Best Case Complexity
When the given array is already sorted, the total count of comparisons for each interval is equal to the size of the array.
So the best case complexity is Ω(n log n).
Average Case Complexity
The average case complexity is about O(n log n) ~ O(n^1.25).
Space Complexity
The space complexity of the shell sort is O(1).

Questions:
1. Which is more efficient shell or heap sort?
Ans. In big-O terms, Shell sort has about O(n^1.25) average time complexity, whereas heap sort has O(n log n) time complexity. Under a strict mathematical interpretation of big-O, heap sort surpasses Shell sort in efficiency as we approach roughly 2000 elements to be sorted.
Note:- Big-O is a rounded approximation and analytical evaluation is not always 100% correct, it
depends on the algorithms’ implementation which can affect actual run time.

Shell Sort Applications


1. As a replacement for insertion sort when insertion sort takes too long to complete a given task.
2. To avoid call stack overhead, since Shell sort is non-recursive.
3. When recursion would exceed a particular depth limit.
4. For medium to large-sized datasets.
5. To reduce the number of operations compared to insertion sort.
QuickSort
It is a sorting algorithm based on Divide and Conquer that picks an element as a pivot and partitions the given array around the picked pivot, placing the pivot in its correct position in the sorted array.
How does QuickSort Algorithm work?
QuickSort works on the principle of divide and conquer, breaking down the problem into smaller
sub-problems.
There are mainly three steps in the algorithm:
1.Choose a Pivot: Select an element from the array as the pivot. The choice of pivot can vary (e.g.,
first element, last element, random element, or median).
2.Partition the Array: Rearrange the array around the pivot. After partitioning, all elements
smaller than the pivot will be on its left, and all elements greater than the pivot will be on its right.
The pivot is then in its correct position, and we obtain the index of the pivot.
3.Recursively Call: Recursively apply the same process to the two partitioned sub-arrays (left and
right of the pivot).
4.Base Case: The recursion stops when there is only one element left in the sub-array, as a single
element is already sorted.
Here’s a basic overview of how the QuickSort algorithm works.

Choice of Pivot
There are many different choices for picking pivots.

•Always pick the first (or last) element as the pivot. The implementation below picks the last element as the pivot. The problem with this approach is that it ends up in the worst case when the array is already sorted.
•Pick a random element as the pivot. This is a preferred approach because it does not have a pattern for which the worst case happens.
•Pick the median element as the pivot. This is an ideal approach in terms of time complexity, as we can find the median in linear time and the partition function will always divide the input array into two halves. But it is slower in practice, because median finding has large constant factors.

Partition Algorithm
The key process in quickSort is partition(). There are three common partition algorithms, all of which have O(n) time complexity.
1. Naive Partition: Here we create a copy of the array, first placing all smaller elements and then all greater ones, and finally copy the temporary array back to the original array. This requires O(n) extra space.
2. Lomuto Partition: A simple algorithm: we keep track of the index of smaller elements and keep swapping. It is used in this article because of its simplicity.
3. Hoare's Partition: This is the fastest of the three. Here we traverse the array from both sides and keep swapping a greater element on the left with a smaller element on the right until the array is partitioned. A sketch of Hoare's scheme is given below.
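For reference, here is a sketch of Hoare's scheme (an added illustration; it assumes the same headers as the full QuickSort program later in this section). Note that the returned index is not necessarily the pivot's final position, so the recursive calls become quickSort(arr, low, p) and quickSort(arr, p + 1, high).

// Sketch of Hoare's partition scheme (the article's implementation uses Lomuto).
// Returns an index p such that arr[low..p] <= pivot and arr[p+1..high] >= pivot.
int hoarePartition(vector<int>& arr, int low, int high) {
    int pivot = arr[low];
    int i = low - 1, j = high + 1;
    while (true) {
        do { i++; } while (arr[i] < pivot);   // skip elements smaller than pivot
        do { j--; } while (arr[j] > pivot);   // skip elements greater than pivot
        if (i >= j)
            return j;                         // pointers crossed: partition done
        swap(arr[i], arr[j]);
    }
}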

Working of Partition Algorithm with Illustration


The logic is simple, we start from the leftmost element and keep track of the index of
smaller (or equal) elements as i . While traversing, if we find a smaller element, we
swap the current element with arr[i]. Otherwise, we ignore the current element.

Let us understand the working of partition algorithm with the help of the following example:


Illustration of QuickSort Algorithm


In the previous step, we looked at how the partitioning process rearranges the array based on the
chosen pivot. Next, we apply the same method recursively to the smaller sub-arrays on
the left and right of the pivot. Each time, we select new pivots and partition the arrays again. This
process continues until only one element is left, which is always sorted. Once every element is in its
correct position, the entire array is sorted.
Below image illustrates, how the recursive method calls for the smaller sub-arrays on
the left and right of the pivot:

#include <bits/stdc++.h>
using namespace std;

int partition(vector<int>& arr, int low, int high) {

// Choose the pivot


int pivot = arr[high];

// Index of smaller element and indicates


// the right position of pivot found so far
int i = low - 1;

// Traverse arr[low..high] and move all smaller


// elements on left side. Elements from low to
// i are smaller after every iteration
for (int j = low; j <= high - 1; j++) {
if (arr[j] < pivot) {
i++;
swap(arr[i], arr[j]);
}
}
// Move pivot after smaller elements and
// return its position
swap(arr[i + 1], arr[high]);
return i + 1;
}

// The QuickSort function implementation


void quickSort(vector<int>& arr, int low, int high) {

if (low < high) {

// pi is the partition return index of pivot


int pi = partition(arr, low, high);

// Recursion calls for smaller elements


// and greater or equals elements
quickSort(arr, low, pi - 1);
quickSort(arr, pi + 1, high);
}
}

int main() {
vector<int> arr = {10, 7, 8, 9, 1, 5};
int n = arr.size();
quickSort(arr, 0, n - 1);

cout << "Sorted Array\n";
for (int i = 0; i < n; i++) {
cout << arr[i] << " ";
}
return 0;
}

Output
Sorted Array
1 5 7 8 9 10

Complexity Analysis of Quick Sort


Time Complexity:
•Best Case: (Ω(n log n)), Occurs when the pivot element divides the array into two equal halves.
•Average Case (θ(n log n)), On average, the pivot divides the array into two parts, but not
necessarily equal.
•Worst Case: (O(n²)), Occurs when the smallest or largest element is always chosen as the pivot
(e.g., sorted arrays).
Auxiliary Space: O(n), due to recursive call stack

Advantages of Quick Sort


•It is a divide-and-conquer algorithm that makes it easier to solve problems.
•It is efficient on large data sets.
•It has a low overhead, as it only requires a small amount of memory to function.
•It is Cache Friendly as we work on the same array to sort and do not copy data to any auxiliary
array.
•Fastest general purpose algorithm for large data when stability is not required.
•It is tail recursive, so tail call optimization can be applied (see the sketch below).
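A sketch of that last point (an added example, assuming the partition() function defined above is available): the second recursive call can be replaced by a loop, and recursing only on the smaller partition keeps the stack depth at O(log n).

// QuickSort with the tail call turned into a loop; recurse on the smaller side only.
void quickSortTailOptimized(vector<int>& arr, int low, int high) {
    while (low < high) {
        int pi = partition(arr, low, high);            // Lomuto partition from above
        if (pi - low < high - pi) {
            quickSortTailOptimized(arr, low, pi - 1);  // smaller left side: recurse
            low = pi + 1;                              // larger right side: loop
        } else {
            quickSortTailOptimized(arr, pi + 1, high); // smaller right side: recurse
            high = pi - 1;                             // larger left side: loop
        }
    }
}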

Disadvantages of Quick Sort


•It has a worst-case time complexity of O(n²), which occurs when the pivot is chosen poorly.
•It is not a good choice for small data sets.
•It is not a stable sort, meaning that if two elements have the same key, their relative order will not
be preserved in the sorted output in case of quick sort, because here we are swapping elements
according to the pivot’s position (without considering their original positions).

Applications of Quick Sort


•Efficient for sorting large datasets with O(n log n) average-case time complexity.
•Used in partitioning problems like finding the kth smallest element or dividing arrays by pivot.
•Integral to randomized algorithms, offering better performance than deterministic approaches.
•Applied in cryptography for generating random permutations and unpredictable encryption keys.
•Partitioning step can be parallelized for improved performance in multi-core or distributed
systems.
•Important in theoretical computer science for analyzing average-case complexity and developing
new techniques.
