Algorithm and Complexity

An algorithm is a finite set of step-by-step instructions designed to solve a specific problem, producing outputs from given inputs. Key characteristics include definiteness, finiteness, effectiveness, and efficiency, while advantages encompass effective communication, easy debugging, and scalability. The analysis of algorithms focuses on estimating their efficiency through time and space complexity, with various design techniques such as Divide and Conquer, Greedy, and Dynamic Programming used to develop efficient solutions.


A. What is an Algorithm?

1. An algorithm is a finite set of steps that must be followed to solve a particular
problem. It is simply a process of executing actions step by step.
2. An algorithm is a well-defined computational procedure that takes a set of values as
input and produces a set of values as output, thereby solving the problem.
3. An algorithm is correct if, for each input instance, it produces the correct output
and terminates.
4. An algorithm unravels a computational problem to output the desired result.
5. An algorithm can be described in a natural language such as English, in a computer
language, or in a hardware language.
6. An algorithm is a set of commands that must be followed for a computer to perform
calculations or other problem-solving operations.
7. An algorithm is a finite set of instructions carried out in a specific order to perform a
particular task.

 Problem: A problem is a real-world task or instance for which you need to develop
a program or set of instructions.
 Algorithm: An algorithm is a step-by-step process designed to solve the problem.
 Input: After the algorithm is designed, it is given the necessary inputs.
 Processing unit: The input is passed to the processing unit, which produces the
desired output.
 Output: The outcome or result of the program is referred to as the output.
Characteristics of Algorithms
The main features of Algorithms are;
 Input: It is supplied externally with zero or more quantities or data.
 Output: It produces at least one quantity or result.
 Definiteness: Each instruction should be clear and unambiguous.
 Finiteness: An algorithm should terminate after executing a finite number of steps.
 Effectiveness: Every instruction should be basic enough to be carried out, in principle,
by a person using only pen and paper.
 Feasibility: Each instruction must be feasible to execute.
 Flexibility: It must be flexible enough to accommodate desired changes with little effort.
 Efficiency: Efficiency is measured in terms of the time and space an algorithm requires.
 Independence: An algorithm must be language-independent, meaning it should focus on
the input and the procedure required to derive the output rather than on any particular
programming language.

Advantages of Algorithms
 Effective Communication: Since it is written in a natural language like English, the
step-by-step outline of a solution to a particular problem is easy to understand.
 Easy Debugging: A well-designed algorithm makes it easier to detect the logical errors
that occur inside the program.
 Easy and Efficient Coding: An algorithm is a blueprint of a program and thus helps
in developing it.
 Independent of Programming Language: Since it is language-independent, it can be
coded in any high-level language.
 Efficiency: Algorithms streamline processes, leading to faster and more optimized
solutions.
 Reproducibility: They yield consistent results when provided with the same inputs.
 Problem-solving: Algorithms offer systematic approaches to tackle complex problems
effectively.
 Scalability: Many algorithms can handle larger datasets and scale with increasing input
sizes.
 Automation: They enable automation of tasks, reducing the need for manual
intervention.

Disadvantages of Algorithms
 Developing algorithms for complex problems can be time-consuming, and complex
logic expressed as an algorithm can be difficult to understand.
 Complexity: Developing sophisticated algorithms can be challenging and
time-consuming.
 Limitations: Some problems may not have efficient algorithms, leading to suboptimal
solutions.
 Resource Intensive: Certain algorithms may require significant computational
resources.
 Inaccuracy: Inappropriate algorithm design or implementation can result in incorrect
outputs.
 Maintenance: As technology evolves, algorithms may require updates to stay relevant
and effective.
Factors of an Algorithm
The following are the factors to consider when designing an algorithm:
1. Modularity: A problem should be broken down into small modules or steps; this
decomposition is the essence of designing an algorithm.

2. Correctness: An algorithm is correct when the given inputs produce the desired
output, indicating that it was designed correctly.

3. Maintainability: The algorithm should be designed in a straightforward, structured
way so that redefining it later does not require significant changes.

4. Functionality: It takes into account various logical steps to solve a real-world problem.

5. Robustness: Robustness refers to an algorithm's ability to handle unexpected inputs
or conditions without failing.

6. User-friendly: An algorithm should be easy enough to understand that the designer
can explain it to the programmer.

7. Simplicity: A simple algorithm is easy to understand.

8. Extensibility: Your algorithm should be extensible so that another algorithm designer
or programmer can build on it.

Qualities of a Good Algorithm


1. Efficiency: A good algorithm should perform its task quickly and use minimal resources.

2. Correctness: It must produce the correct and accurate output for all valid inputs.
3. Clarity: The algorithm should be easy to understand, making it maintainable and
modifiable.

4. Scalability: It should handle larger data sets and problem sizes without a significant
decrease in performance.

5. Reliability: The algorithm should consistently deliver correct results under different
conditions and environments.

6. Optimality: Striving for the most efficient solution within the given problem constraints.

7. Robustness: Capable of handling unexpected inputs or errors gracefully without crashing.

8. Adaptability: Ideally, it can be applied to a range of related problems with minimal
adjustments.

9. Simplicity: Keeping the algorithm as simple as possible while meeting its requirements,
avoiding unnecessary complexity.

B. Analysis of Algorithms
Definition. The analysis of an algorithm is the process of estimating its efficiency,
that is, trying to determine how good or how bad the algorithm is. There are two
fundamental parameters on which we analyze an algorithm: space complexity and
time complexity. Within time complexity, the running time of an algorithm is
estimated for the best case, average case, and worst case.
Space Complexity: The amount of space required by an algorithm to run to completion.
Time Complexity: A function of the input size n that refers to the amount of time
needed by an algorithm to run to completion.

Types of Time Complexity Analysis


There are three types of analysis in terms of time complexity:
 Worst-case time complexity: For input size n, the worst-case time complexity is the
maximum amount of time needed by an algorithm to complete its execution; it is the
function defined by the maximum number of steps performed on any instance of size n.
Computer scientists are usually most interested in this case, because it gives a guarantee
that holds for every input.
 Average case time complexity: For 'n' input size, the average case time complexity can
be defined as the average amount of time needed by an algorithm to complete its
execution. Thus, it is nothing but a function defined by the average number of steps
performed on an instance having an input size of n.
 Best case time complexity: For 'n' input size, the best-case time complexity can be
defined as the minimum amount of time needed by an algorithm to complete its
execution. Thus, it is nothing but a function defined by the minimum number of steps
performed on an instance having an input size of n.
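As an illustration of the three cases (an assumed example, not from the text), a
hypothetical `linear_search` that counts its comparison steps makes them concrete: the
best case finds the target in 1 step, while the worst case (target last or absent) takes n
steps.

```python
# Linear search that also reports how many comparisons it made.
def linear_search(a, target):
    steps = 0
    for i, x in enumerate(a):
        steps += 1              # one comparison per element examined
        if x == target:
            return i, steps     # best case: target found early
    return -1, steps            # worst case: every element examined
```

For the list `[5, 3, 8, 1]`, searching for 5 is the best case (1 step), searching for 1 is
the worst case among present elements (4 steps), and searching for a missing value also
takes all 4 steps.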

Complexity of Algorithms
The term algorithm complexity measures how many steps an algorithm requires to solve
the given problem. It evaluates the order of the count of operations executed by an
algorithm as a function of the input data size.
To assess the complexity, the order (approximation) of the count of operations is
considered instead of the exact number of steps. The complexity of an algorithm is
written in O(f) notation, also called asymptotic or "Big O" notation.

Typical Complexities of an Algorithm


The complexity of an algorithm or program will typically fall into one of the following
categories:
 Constant Complexity: Imposes a complexity of O(1). The algorithm executes a
constant number of steps (such as 1, 5, or 10) to solve a given problem; the count of
operations is independent of the input data size.
 Logarithmic Complexity: Imposes a complexity of O(log N). The algorithm executes
on the order of log N steps, usually with logarithm base 2. For N = 1,000,000, an
algorithm with complexity O(log N) would take about 20 steps.
 Linear Complexity: Imposes a complexity of O(N). The number of steps is
proportional to the number of elements N. For example, if there are 500 elements, it
will take about 500 steps; for 1,000 elements, about 1,000 steps.
 Quadratic Complexity: Imposes a complexity of O(N²). For input size N, the
algorithm performs on the order of N² operations. If N = 100, it will take about
10,000 steps. Whenever the count of operations has a quadratic relation with the
input data size, the result is quadratic complexity.
 Cubic Complexity: Imposes a complexity of O(N³). For input size N, the algorithm
executes on the order of N³ steps. For example, if there are 100 elements, it will
execute about 1,000,000 steps.
 Exponential Complexity: Imposes a complexity such as O(2^N) or O(N!). For N
elements, the count of operations grows exponentially with the input data size. For
example, if N = 10, then 2^N is 1,024; if N = 20, it is 1,048,576; and if N = 100, it
is a number with about 30 digits. The factorial function N! grows even faster: N = 5
already gives 120.
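The growth rates above can be tabulated directly. The small sketch below (an added
illustration, not from the text) computes the approximate step count for each class at a
given input size N:

```python
import math

# Approximate step counts for the complexity classes above at input size n.
# These are orders of magnitude, not measured running times.
def growth(n):
    return {
        "O(1)":     1,
        "O(log N)": round(math.log2(n)),
        "O(N)":     n,
        "O(N^2)":   n ** 2,
        "O(2^N)":   2 ** n,
    }
```

For example, `growth(1_000_000)["O(log N)"]` is 20, matching the logarithmic case in
the text, and `growth(10)["O(2^N)"]` is 1,024, matching the exponential case.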

Algorithm Design Techniques


An algorithm design technique (or “strategy” or “paradigm”) is a general approach to solving
problems algorithmically that is applicable to a variety of problems from different areas of
computing.
Learning these techniques is of utmost importance for the following reasons.
- They provide guidance for designing algorithms for new problems, i.e., problems for
which there is no known satisfactory algorithm.
- Algorithms are the cornerstone of computer science. Every science is interested in
classifying its principal subject, and computer science is no exception.
Algorithm design techniques make it possible to classify algorithms according to an underlying
design idea; therefore, they can serve as a natural way to both categorize and study algorithms.
While the algorithm design techniques do provide a powerful set of general approaches to
algorithmic problem solving, designing an algorithm for a particular problem may still be a
challenging task. Some design techniques can be simply inapplicable to the problem in question.
Sometimes, several techniques need to be combined, and there are algorithms that are hard to
pinpoint as applications of the known design techniques.
The following is a list of several popular design approaches:
1. Divide and Conquer Approach: The divide-and-conquer paradigm often helps in the
discovery of efficient algorithms. It is a top-down approach. Algorithms that follow the
divide-and-conquer technique involve three steps:
- Divide the original problem into a set of sub-problems.
- Solve every sub-problem individually, recursively.
- Combine the solutions of the sub-problems (at the top level) into a solution of the
whole original problem.
In short, a divide-and-conquer algorithm breaks a complex problem into smaller
sub-problems, solves them independently, and then combines their solutions to address
the original problem effectively.
The following are some standard algorithms of the divide-and-conquer variety:
a. Binary Search, a searching algorithm.
b. Quick Sort, a sorting algorithm.
c. Merge Sort, also a sorting algorithm.
d. Closest Pair of Points: the problem of finding the closest pair of points in a set of
points in the x-y plane.
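Binary search, the first example in the list above, can be sketched as follows: each step
divides the sorted list in half and conquers only the half that can contain the target,
giving O(log N) steps.

```python
# Binary search on a sorted list: a divide-and-conquer sketch.
def binary_search(a, target):
    lo, hi = 0, len(a) - 1
    while lo <= hi:
        mid = (lo + hi) // 2      # divide: inspect the middle element
        if a[mid] == target:
            return mid            # found: return its index
        elif a[mid] < target:
            lo = mid + 1          # conquer the right half only
        else:
            hi = mid - 1          # conquer the left half only
    return -1                     # target not present
```

For the sorted list `[2, 7, 15, 25, 36, 40, 80]`, searching for 25 returns index 3 after
inspecting only one element, while a full linear scan would have needed four.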
Advantages of Divide and Conquer
i. Divide and conquer successfully solves some notoriously hard problems, such as the
Tower of Hanoi, a mathematical puzzle. Complicated problems for which you have no
basic idea can be tackled because the approach lessens the effort: it divides the main
problem into halves and then solves them recursively. The resulting algorithms are
often much faster than alternatives.
ii. It uses cache memory efficiently without occupying much space, because it solves
small sub-problems within the cache instead of repeatedly accessing the slower main
memory.
iii. It is more proficient than its counterpart, the brute-force technique.
iv. Since these algorithms exhibit parallelism, they can be handled by systems with
parallel processing without modification.
Disadvantages of Divide and Conquer
a. Since most of its algorithms are designed using recursion, it requires careful memory
management.
b. An explicit stack may overuse space.
c. It may even crash the system if the recursion goes deeper than the available stack
space.

Properties of Divide-and-Conquer Algorithms


Divide-and-Conquer has several important properties.
a. It follows the structure of an inductive proof, and therefore usually leads to relatively simple
proofs of correctness. To prove a divide-and-conquer algorithm correct, we first prove that the
base case is correct. Then, we assume by strong (or structural) induction that the recursive
solutions are correct, and show that, given correct solutions to smaller instances, the combined
solution is correct.
b. Divide-and-conquer algorithms can be work efficient. To ensure efficiency, we need to make
sure that the divide and combine steps are efficient, and that they do not create too many sub
instances.
c. The work and span for a divide-and-conquer algorithm can be expressed as a
mathematical equation called a recurrence, which can usually be solved without too
much difficulty.
d. Divide-and-conquer algorithms are naturally parallel, because the sub-instances can be solved
in parallel. This can lead to significant amount of parallelism, because each inductive step can
create more independent instances. For example, even if the algorithm divides the problem
instance into two sub instances, each of those sub instances could themselves generate two more
sub instances, leading to a geometric progression, which can quickly produce abundant
parallelism.
2. Greedy Technique: The greedy method is an algorithmic paradigm that builds up
a solution piece by piece, always choosing the next piece that offers the most obvious and
immediate benefit. Problems where locally optimal choices also lead to a globally optimal
solution are the best fit for the greedy approach. The greedy method is used to solve
optimization problems: problems in which we are given a set of input values and a
quantity (the objective) that must be maximized or minimized, subject to some
constraints or conditions. A greedy algorithm always makes the choice (by a greedy
criterion) that looks best at the moment, in order to optimize the given objective.

The greedy algorithm does not always guarantee the optimal solution, but it generally
produces a solution that is very close in value to the optimal one.
Examples of Greedy Algorithms
 Prim's Minimal Spanning Tree Algorithm.
 Travelling Salesman Problem.
 Graph – Map Coloring.
 Kruskal's Minimal Spanning Tree Algorithm
 Dijkstra's Shortest Path Algorithm
 Graph – Vertex Cover.
 Knapsack Problem.
 Job Scheduling Problem.
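A minimal sketch of the greedy technique is activity selection, a scheduling problem
related to the job scheduling example above: repeatedly pick the activity that finishes
earliest among those compatible with what is already chosen. For this particular problem
the locally optimal choice is provably globally optimal.

```python
# Greedy activity selection: choose the activity that finishes earliest.
def select_activities(activities):
    """activities: list of (start, finish) pairs; returns a maximum-size
    subset of mutually compatible activities."""
    chosen = []
    last_finish = float("-inf")
    for start, finish in sorted(activities, key=lambda a: a[1]):
        if start >= last_finish:         # compatible with choices so far
            chosen.append((start, finish))
            last_finish = finish         # greedy criterion: earliest finish
    return chosen
```

The data here are illustrative. For eleven activities with various start and finish times,
the greedy rule selects a largest compatible subset in O(n log n) time, dominated by the
sort.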
3. Dynamic Programming: Dynamic Programming (DP) is an algorithmic technique for
solving an optimization problem by breaking it down into simpler sub-problems and
exploiting the fact that the optimal solution to the overall problem depends upon the
optimal solutions to its sub-problems. Dynamic programming is both a mathematical
optimization method and a computer programming method. The method was developed
by Richard Bellman in the 1950s and has found applications in numerous fields, from
aerospace engineering to economics. Dynamic programming is used for problems that
can be divided into similar sub-problems so that their results can be reused. Mostly,
these algorithms are used for optimization. Before solving the sub-problem at hand, a
dynamic-programming algorithm examines the results of previously solved sub-problems.
It stores and reuses intermediate results to avoid redundant computation, enhancing the
efficiency of solving complex problems.

Some examples of Dynamic Programming are;


i. Tower of Hanoi
ii. Dijkstra Shortest Path
iii. Fibonacci sequence
iv. Matrix chain multiplication
v. Egg-dropping puzzle
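The Fibonacci sequence from the list above gives the classic illustration of storing and
reusing intermediate results. Naive recursion recomputes the same sub-problems
exponentially many times; memoizing each result makes the computation linear in n.

```python
from functools import lru_cache

# Memoized Fibonacci: each fib(k) is computed once and cached, so the
# exponential tree of naive recursion collapses to O(n) distinct calls.
@lru_cache(maxsize=None)
def fib(n):
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)
```

With the cache, `fib(50)` returns instantly, whereas the naive recursion would make
billions of calls for the same input.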
4. Branch and Bound
The branch and bound method is a solution approach that partitions the feasible
solution space into smaller subsets of solutions. It is used for solving optimization
problems, typically stated as minimization problems; a maximization problem can be
handled by converting it into an equivalent minimization problem. An important
advantage of branch-and-bound algorithms is that we can bound the quality of the
solution to be expected, even before it is found: the cost of an optimal solution can be
at most a known amount better than the cost of the best solution computed so far.
Branch and bound is an algorithm design paradigm generally used for solving
combinatorial optimization problems. Some examples of branch-and-bound problems
are:
i. Knapsack problems
ii. Traveling Salesman Problem
iii. Job Assignment Problem, etc.
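A small sketch of branch and bound on the 0/1 knapsack problem, the first example
above, illustrates the key ingredients: branching on "take or skip each item" and pruning
any branch whose optimistic bound cannot beat the best solution found so far. The
fractional-relaxation bound used here is one common choice, assumed for illustration.

```python
# Branch and bound for 0/1 knapsack: maximize value within a weight limit.
def knapsack_bb(values, weights, capacity):
    n = len(values)
    # Consider items in decreasing value density for a tight bound.
    items = sorted(range(n), key=lambda i: values[i] / weights[i], reverse=True)

    def bound(idx, weight, value):
        # Optimistic bound: fill the remaining capacity fractionally.
        b, w = value, weight
        for i in items[idx:]:
            if w + weights[i] <= capacity:
                w += weights[i]
                b += values[i]
            else:
                b += values[i] * (capacity - w) / weights[i]
                break
        return b

    best = 0

    def branch(idx, weight, value):
        nonlocal best
        if weight <= capacity and value > best:
            best = value                       # new incumbent solution
        if idx == n or bound(idx, weight, value) <= best:
            return                             # prune: bound cannot beat best
        i = items[idx]
        if weight + weights[i] <= capacity:
            branch(idx + 1, weight + weights[i], value + values[i])  # take
        branch(idx + 1, weight, value)                               # skip

    branch(0, 0, 0)
    return best
```

The bound lets the search discard large parts of the 2^n solution tree while still
guaranteeing the exact optimum.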

5. Backtracking Algorithm
A backtracking algorithm is a problem-solving algorithm that refines a brute-force
approach for finding the desired output. The brute-force approach tries out all possible
solutions and chooses the desired/best ones. Backtracking is a general algorithm for
finding solutions to some computational problems, notably constraint satisfaction
problems, that incrementally builds candidate solutions and abandons a candidate
("backtracks") as soon as it determines that the candidate cannot possibly be completed
to a valid solution. A backtracking algorithm uses the depth-first search method. As the
algorithm explores candidate solutions, a bounding function is applied to determine
whether the current partial solution satisfies the constraints. If it does, the algorithm
keeps exploring; if it does not, the branch is pruned, and the algorithm returns to the
previous level. In any backtracking algorithm, the algorithm seeks a path to a feasible
solution through intermediate checkpoints; if the checkpoints do not lead to a viable
solution, the algorithm can return to a checkpoint and take another path. It is a
trial-and-error technique that explores potential solutions by undoing choices when they
lead to a dead end, commonly employed in puzzles and optimization problems.
The following are scenarios in which you can use backtracking:
a. It is used to solve a variety of problems. You can use it, for example, to find a
feasible solution to a decision problem. Backtracking algorithms have also proved very
effective for solving optimization problems.
b. In some cases, it is used to find all feasible solutions to an enumeration problem.
c. Backtracking is not, however, regarded as an optimal problem-solving technique. It is
useful when the solution to a problem does not have a time limit.
Backtracking algorithms are used in:
i. Finding all Hamiltonian paths present in a graph
ii. Solving the N-Queen problem
iii. Knights Tour problem, etc
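The N-Queens problem from the list above can be sketched as follows: queens are placed
row by row, a constraint check rejects any column that conflicts with queens already
placed, and the algorithm undoes (backtracks) each placement after exploring it.

```python
# Backtracking count of N-Queens solutions.
def n_queens(n):
    solutions = 0
    cols, diag1, diag2 = set(), set(), set()   # occupied columns/diagonals

    def place(row):
        nonlocal solutions
        if row == n:
            solutions += 1                     # all rows filled: a solution
            return
        for col in range(n):
            if col in cols or (row - col) in diag1 or (row + col) in diag2:
                continue                       # constraint violated: prune
            cols.add(col); diag1.add(row - col); diag2.add(row + col)
            place(row + 1)                     # explore deeper
            cols.remove(col); diag1.remove(row - col); diag2.remove(row + col)
            # the three removals above undo the choice: the backtrack step

    place(0)
    return solutions
```

The pruning check is what separates backtracking from plain brute force: whole subtrees
of doomed placements are never visited.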

6. Randomized Algorithm
A randomized algorithm is an algorithm that employs a degree of randomness as part
of its logic or procedure. In some cases, probabilistic algorithms are the only practical
means of solving a problem. The output of a randomized algorithm on a given input is a
random variable; thus, there may be a positive probability that the outcome is incorrect.
As long as the probability of error is small for every possible input, this is not a problem.
A randomized algorithm utilizes randomness in its steps to achieve a solution, and is
often used in situations where an approximate or probabilistic answer suffices.
There are two main types of randomized algorithms: Las Vegas algorithms and
Monte Carlo algorithms.
Example 1: In Quick Sort, using a random number to choose a pivot.
Example 2: Trying to factor a large number by choosing random numbers as possible
divisors.
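A minimal Monte Carlo sketch (an added illustration, not from the text) estimates pi by
sampling random points in the unit square and counting how many fall inside the quarter
circle. The answer is probabilistic: its accuracy improves with more samples, and the
seed below is fixed only so the run is reproducible.

```python
import random

# Monte Carlo estimate of pi from `samples` random points.
def estimate_pi(samples, seed=0):
    rng = random.Random(seed)        # seeded for reproducibility
    inside = 0
    for _ in range(samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:     # point falls inside the quarter circle
            inside += 1
    return 4.0 * inside / samples    # area ratio approximates pi/4
```

With 100,000 samples the estimate lands close to 3.14159, illustrating the Monte Carlo
trade-off: a fast approximate answer with a small, controllable error probability.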

Recursive Algorithm: A method that breaks a problem into smaller, similar sub problems and
repeatedly applies itself to solve them until reaching a base case, making it effective for tasks
with recursive structures

Encryption Algorithm: Utilized to transform data into a secure, unreadable form using
cryptographic techniques, ensuring confidentiality and privacy in digital communications
and transactions.

Searching Algorithm: Designed to find a specific target within a dataset, enabling efficient
retrieval of information from sorted or unsorted collections.

Hashing Algorithm: Converts data into a fixed-size hash value, enabling rapid data access and
retrieval in hash tables, commonly used in databases and password storage.

SORTING ALGORITHMS
Sorting is a technique for rearranging the elements of a list in ascending or descending
order, which can be numerical, alphabetical, or any user-defined order.

Sorting Algorithm: Aimed at arranging elements in a specific order, such as numerical or
alphabetical, to enhance data organization and retrieval.

BUBBLE SORT

In the bubble sort method, the list is divided into two sub-lists: sorted and unsorted.
The smallest element is bubbled up from the unsorted sub-list; after it is moved, the
imaginary wall between the sub-lists moves one element ahead. Bubble sort was
originally written to bubble up the highest element in the list, but it makes no difference
whether the highest or the lowest element is bubbled. This method is easy to understand
but time consuming. Two successive elements are compared and swapped if needed;
step by step, the entire array is checked. Given a list of n elements, bubble sort requires
up to n - 1 passes to sort the data.

Algorithm for Bubble Sort:

Bubble_Sort ( A [ ] , N )
Step 1 : Repeat For P = 1 to N - 1
Step 2 :     Repeat For J = 1 to N - P
Step 3 :         If ( A [ J ] < A [ J - 1 ] )
                     Swap ( A [ J ] , A [ J - 1 ] )
             End For
         End For
Step 4 : Exit
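A runnable sketch of the pseudocode above, translated to 0-indexed Python and sorting
in ascending order:

```python
# Bubble sort: adjacent elements are compared and swapped so that the
# smaller element drifts toward the front on each pass.
def bubble_sort(a):
    n = len(a)
    for p in range(1, n):                 # passes 1 .. n-1
        for j in range(1, n - p + 1):     # J = 1 .. n-p, as in the pseudocode
            if a[j] < a[j - 1]:
                a[j], a[j - 1] = a[j - 1], a[j]   # swap out-of-order pair
    return a
```

Each pass fixes at least one element in its final position, so after at most n - 1 passes
the list is sorted.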

Example: a list of unsorted elements: 10 47 12 54 19 23. Using bubble sort (bubbling up
the highest value), the sorted list in descending order is: 54 47 23 19 12 10.
Exercise: show the bubble sort results for each pass for the following initial array of
elements:

35 18 7 12 5 23 16 3

Worked example: consider the initial array 16 36 24 37 15.
Pass 1:
Compare a0 and a1
16 36 24 37 15

As a0< a1 so the array will remain as it is.


Compare a1 and a2
16 36 24 37 15
Now a1 > a2, so we will swap both of them.
16 24 36 37 15
Compare a2 and a3
16 24 36 37 15
As a2< a3 so the array will remain as it is.
Compare a3 and a4
16 24 36 37 15
Here a3 > a4, so we will again swap both of them.
16 24 36 15 37
Pass 2:
Compare a0 and a1
16 24 36 15 37
As a0 < a1 so the array will remain as it is.
Compare a1 and a2
16 24 36 15 37
Here a1 < a2, so the array will remain as it is.
Compare a2 and a3
16 24 36 15 37
In this case, a2 > a3, so both of them will get swapped.
16 24 15 36 37

Pass 3:
Compare a0 and a1
16 24 15 36 37
As a0 < a1 so the array will remain as it is.
Compare a1 and a2
16 24 15 36 37
Now a1 > a2, so both of them will get swapped.
16 15 24 36 37
Pass 4:
Compare a0 and a1
16 15 24 36 37
Here a0 > a1, so we will swap both of them.

15 16 24 36 37
Hence the array is sorted as no more swapping is required.
Advantages of Bubble Sort
1. Easily understandable.
2. Does not require any extra memory.
3. The code for this algorithm is easy to write.
4. Requires less space than other sorting algorithms.
Disadvantages of Bubble Sort
1. It does not work well on large unsorted lists; it requires more resources and ends up
taking a great deal of time.
2. It is mainly of academic interest, not suited to practical use.
3. It requires on the order of n² steps to sort n elements.

INSERTION SORT
Insertion sort is one of the simplest sorting algorithms because it inserts a single element
at a time. It is not the best sorting algorithm in terms of performance, but it is slightly
more efficient than selection sort and bubble sort in practical scenarios.
Both selection sort and bubble sort exchange elements, but insertion sort does not. In
insertion sort, each element is inserted at an appropriate place, similar to inserting a
card into a sorted hand. The list is divided into two parts: a sorted and an unsorted
sub-list. In each pass, the first element of the unsorted sub-list is picked up and moved
into the sorted sub-list by inserting it at a suitable position. For n elements, we need
n - 1 passes to sort the elements.
Insertion sort works this way:

INSERTION_SORT (A)

1. FOR j ← 2 TO length [A]
2.     DO key ← A[j]
3.        {Put A[j] into the sorted sequence A[1 . . j − 1]}
4.        i ← j − 1
5.        WHILE i > 0 and A[i] > key
6.            DO A[i + 1] ← A[i]
7.               i ← i − 1
8.        A[i + 1] ← key
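A runnable sketch of INSERTION_SORT above, translated to 0-indexed Python:

```python
# Insertion sort: each element (the key) is shifted left past larger
# elements until it reaches its place in the sorted prefix.
def insertion_sort(a):
    for j in range(1, len(a)):
        key = a[j]
        i = j - 1
        while i >= 0 and a[i] > key:   # shift larger elements right
            a[i + 1] = a[i]
            i -= 1
        a[i + 1] = key                 # insert key into its position
    return a
```

Note that the pseudocode is 1-indexed (j runs from 2 to length[A]) while the Python
version is 0-indexed (j runs from 1), but the shifting logic is identical.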

Demonstrate the insertion sort results for each insertion for the following initial array of elements
25 6 15 12 8 34 9 18

Consider the following example of an unsorted array that we will sort with the help of
the Insertion Sort algorithm.
A = (41, 22, 63, 14, 55, 36)
Initially, the array is: 41 22 63 14 55 36
1st Iteration:
Set key = 22
Compare a1 with a0

Since a0 > a1, swap both of them.
Array: 22 41 63 14 55 36

2nd Iteration:
Set key = 63
Compare a2 with a1 and a0

Since a2 > a1 > a0, keep the array as it is.
Array: 22 41 63 14 55 36

3rd Iteration:
Set key = 14
Compare a3 with a2, a1 and a0
Since a3 is the smallest among all the elements on the left-hand side,
place a3 at the beginning of the array.
Array: 14 22 41 63 55 36

4th Iteration:
Set key = 55
Compare a4 with a3, a2, a1 and a0.

As a4 < a3, swap both of them.
Array: 14 22 41 55 63 36

5th Iteration:
Set key = 36
Compare a5 with a4, a3, a2, a1 and a0.

Since a5 < a2, we place 36 in its correct position.
Array: 14 22 36 41 55 63

Hence the array is arranged in ascending order, so no more swapping is required.


The insertion sort algorithm is used in the following cases:
When the array contains only a few elements.
When the array is nearly sorted and only a few elements are out of place.
Advantages of Insertion Sort
1. It is simple to implement.
2. It is efficient on small datasets.
3. It is stable (does not change the relative order of elements with equal keys)
4. It is in-place (only requires a constant amount O (1) of extra memory space).
5. It is an online algorithm, which can sort a list when it is received.
Disadvantages of Insertion Sort
1. Insertion sort is inefficient on more extensive data sets.
2. Insertion sort exhibits a worst-case time complexity of O(n²).
3. It does not perform as well as other, more advanced sorting algorithms.
SELECTION SORT

Algorithm: Selection_Sort ( A [ ] , N )

Step 1 : Repeat For K = 0 to N - 2
Step 2 :     Set POS = K
Step 3 :     Repeat For J = K + 1 to N - 1
                 If A [ J ] < A [ POS ]
                     Set POS = J
             End For
Step 4 :     Swap A [ K ] with A [ POS ]
         End For
Step 5 : Exit

Example: a list of unsorted elements: 23 78 45 8 32 56

After selection sort, the sorted list is: 8 23 32 45 56 78
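A runnable sketch of Selection_Sort above: on each pass, find the position of the
minimum in the unsorted part and swap it into place.

```python
# Selection sort: pass K selects the smallest remaining element and
# swaps it into position K.
def selection_sort(a):
    n = len(a)
    for k in range(n - 1):             # K = 0 .. N-2, as in the pseudocode
        pos = k
        for j in range(k + 1, n):      # scan the unsorted part
            if a[j] < a[pos]:
                pos = j                # remember position of the minimum
        a[k], a[pos] = a[pos], a[k]    # one swap per pass
    return a
```

Only one swap happens per pass, which is why selection sort performs at most n - 1
swaps in total.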


Advantages of Selection Sort
It is an in-place algorithm. It does not require a lot of space for sorting; only one extra
location is needed for holding the temporary variable.
It performs a minimal number of swaps, which is useful when writes are expensive.
Disadvantage of Selection Sort
As the input size increases, the performance of selection sort decreases, since it always
performs on the order of n² comparisons.

QUICK SORT
Quick sort, also known as partition-exchange sort, was invented by C. A. R. Hoare. It is
based on partitioning: pick one element from the array and rearrange the remaining
elements around it. This element, called the pivot, divides the main list into two
sub-lists. Once the pivot is chosen, all elements less than the pivot are shifted to its left
and all elements greater than the pivot are shifted to its right. This procedure of
choosing a pivot and partitioning the list is applied recursively until the sub-lists consist
of only one element.
Algorithm for quick sort:
quicksort (q)
    var list less, pivotList, greater
    if length(q) ≤ 1
        return q
    select a pivot value pivot from q
    for each x in q except the pivot element
        if x < pivot then add x to less
        if x ≥ pivot then add x to greater
    add pivot to pivotList
    return concatenate(quicksort(less), pivotList, quicksort(greater))
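A runnable sketch of the pseudocode above, using the first element as the pivot and list
partitioning (simple but not in-place; production quicksorts usually partition in place
and pick the pivot randomly):

```python
# Quicksort via partitioning: elements are split around the pivot and
# each side is sorted recursively.
def quicksort(q):
    if len(q) <= 1:
        return q                                   # base case: already sorted
    pivot = q[0]                                   # pivot choice: first element
    less    = [x for x in q[1:] if x < pivot]      # elements left of pivot
    greater = [x for x in q[1:] if x >= pivot]     # elements right of pivot
    return quicksort(less) + [pivot] + quicksort(greater)
```

Choosing the first element as pivot makes already-sorted input the worst case (O(n²)),
which is why the randomized pivot mentioned earlier is often preferred.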
Time Complexity of Quick Sort:
Best case : O (n log n)
Average case : O (n log n)
Worst case : O (n²)
Advantages of quick sort
1. It is among the fastest of the sorting methods discussed here.
2. Its efficiency is also relatively good.
3. It requires a relatively small amount of memory.

Disadvantages of quick sort:

1. It is a complex method of sorting, so it is a little harder to implement than other
sorting methods.

Exercise: sort the following list of unsorted elements using quick sort: 8 3 2 11 5 14 0 2 9 4 20


MERGE SORT

Example: a list of unsorted elements: 39 9 81 45 90 27 72 18

Using merge sort, the sorted elements are: 9 18 27 39 45 72 81 90

Time Complexity of Merge Sort:

Best case : O (n log n)
Average case : O (n log n)
Worst case : O (n log n)
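A runnable merge sort sketch, following the standard divide-and-conquer scheme
described earlier: split the list in half, sort each half recursively, then merge the two
sorted halves.

```python
# Merge sort: divide, sort each half recursively, then merge in order.
def merge_sort(a):
    if len(a) <= 1:
        return a                                   # base case
    mid = len(a) // 2                              # divide in half
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):        # merge the sorted halves
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]           # append any leftovers
```

Because the merge step always takes linear time regardless of the input order, merge
sort achieves O(n log n) in the best, average, and worst cases alike, matching the table
above.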
