Algorithm and Complexity
1. An algorithm can be defined as a finite set of steps to be followed while solving a
particular problem. It is, in essence, a process of executing actions step by step.
2. An algorithm is a well-defined computational procedure that takes a set of values as input
and produces a set of values as output, thereby solving the problem.
3. An algorithm is correct if, for each input instance, it produces the correct output and
terminates.
4. An algorithm unravels a computational problem to output the desired result.
5. An algorithm can be described in a natural language such as English, in a computer
language, or in a hardware description language.
6. An algorithm is a set of commands that must be followed for a computer to perform
calculations or other problem-solving operations.
7. An algorithm is a finite set of instructions carried out in a specific order to perform a
particular task.
Advantages of Algorithms
Effective Communication: Since an algorithm is written in a natural language like English, it
becomes easy to understand the step-by-step delineation of a solution to any particular
problem.
Easy Debugging: A well-designed algorithm facilitates easy debugging to detect the
logical errors that occurred inside the program.
Easy and Efficient Coding: An algorithm is nothing but a blueprint of a program that
helps develop a program.
Independent of Programming Language: Since an algorithm is language-independent, it can
easily be coded in any high-level language.
Efficiency: Algorithms streamline processes, leading to faster and more optimized
solutions.
Reproducibility: They yield consistent results when provided with the same inputs.
Scalability: Many algorithms can handle larger datasets and scale with increasing input
sizes.
Automation: They enable automation of tasks, reducing the need for manual
intervention.
Disadvantages of Algorithms
Complexity: Developing sophisticated algorithms for complex problems can be challenging
and time-consuming, and it is a challenging task to understand complex logic expressed
through an algorithm.
Limitations: Some problems may not have efficient algorithms, leading to suboptimal
solutions.
Characteristics of a Good Algorithm
1. Correctness: It must produce the correct and accurate output for all valid inputs; the
algorithm is correctly designed when the given inputs produce the desired output.
2. Functionality: It takes into account the various logical steps needed to solve a real-world
problem.
3. Clarity: The algorithm should be easy to understand and comprehend, making it
maintainable and modifiable; if it is difficult to understand, the designer will find it hard
to explain to the programmer.
4. Scalability: It should handle larger data sets and problem sizes without a significant
decrease in performance.
5. Reliability: The algorithm should consistently deliver correct results under different
conditions and environments.
6. Optimality: Striving for the most efficient solution within the given problem constraints.
7. Simplicity: Keeping the algorithm as simple as possible while meeting its requirements,
avoiding unnecessary complexity.
B. Analysis of Algorithms
Definition: Analysis of an algorithm is the process of estimating its efficiency, that is,
trying to determine how good or how bad the algorithm could be. There are two fundamental
parameters on which we analyze an algorithm: space complexity and time complexity.
Within time complexity, the running time of an algorithm is estimated for the best case,
average case and worst case.
Space Complexity: The space complexity can be understood as the amount of space required
by an algorithm to run to completion.
Time Complexity: Time complexity is a function of input size n that refers to the amount of
time needed by an algorithm to run to completion.
Complexity of Algorithms
The term algorithm complexity measures how many steps are required by the algorithm to solve
the given problem. It evaluates the order of count of operations executed by an algorithm as a
function of input data size.
To assess the complexity, the order (approximation) of the count of operations is always
considered instead of counting the exact steps. The O(f) notation represents the complexity of
an algorithm and is also termed asymptotic notation or "Big O" notation.
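As an illustration (not taken from the text above), the count of operations can be tallied directly to see the growth order that Big O describes; the function names here are my own:

```python
# Count basic operations of a linear scan, O(n), versus a pairwise
# comparison loop, O(n^2), to see how the counts grow with input size n.

def count_linear(n):
    ops = 0
    for _ in range(n):
        ops += 1          # one operation per element -> ops = n
    return ops

def count_pairwise(n):
    ops = 0
    for i in range(n):
        for j in range(n):
            ops += 1      # one operation per pair -> ops = n * n
    return ops

for n in (10, 20, 40):
    print(n, count_linear(n), count_pairwise(n))
```

Doubling n doubles the linear count but quadruples the pairwise count, which is exactly the difference between O(n) and O(n²).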
Greedy Algorithm: Makes locally optimal choices at each step in the hope of finding a global
optimum, useful for optimization problems but may not always lead to the best solution.
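A small sketch of the greedy idea, under the assumption of the standard 25/10/5/1 coin system (this example and its names are mine, not from the text): make change by always taking the largest coin that still fits, a locally optimal choice at each step.

```python
# Greedy coin change: repeatedly take the largest coin that fits.
# For this particular coin system the greedy choice happens to be
# globally optimal; as noted above, that is not true in general.

def greedy_change(amount, coins=(25, 10, 5, 1)):
    result = []
    for coin in coins:                # coins in decreasing order
        while amount >= coin:         # locally optimal choice
            result.append(coin)
            amount -= coin
    return result

print(greedy_change(63))  # → [25, 25, 10, 1, 1, 1]
```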
4. Backtracking Algorithm
A backtracking algorithm is a problem-solving algorithm that uses a brute-force
approach to find the desired output: it is a trial-and-error technique that explores potential
solutions and undoes choices when they lead to an incorrect outcome, commonly employed in
puzzles and optimization problems. Backtracking is a general algorithm for finding solutions
to some computational problems, notably constraint satisfaction problems, that incrementally
builds candidates to the solutions and abandons a candidate ("backtracks") as soon as it
determines that the candidate cannot possibly be completed to a valid solution. A
backtracking algorithm uses the depth-first search method. As the algorithm explores the
solutions, a bounding function is applied so that the algorithm can determine whether the
proposed partial solution satisfies the constraints. If it does, the algorithm keeps looking; if it
does not, the branch is removed and the algorithm returns to the previous level. In any
backtracking algorithm, the algorithm seeks a path to a feasible solution that includes some
intermediate checkpoints; if the checkpoints do not lead to a viable solution, the algorithm can
return to a checkpoint and take another path. There are the following scenarios in which
you can use backtracking:
a. It is used to solve a variety of problems. You can use it, for example, to find a feasible
solution to a decision problem. Backtracking algorithms have also proved very
effective for solving optimization problems.
b. In some cases, it is used to find all feasible solutions to an enumeration problem.
c. Backtracking, on the other hand, is not regarded as an optimal problem-solving
technique; it is useful when the solution to a problem does not have a time limit.
Backtracking algorithms are used in:
i. Finding all Hamiltonian paths present in a graph
ii. Solving the N-Queen problem
iii. The Knight's Tour problem, etc.
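The N-Queen problem listed above can be sketched with backtracking as follows (a minimal illustration; the function and variable names are my own): queens are placed row by row, a bounding (safety) check prunes placements that attack an earlier queen, and the last choice is undone when a row has no safe column.

```python
# Backtracking for N-Queens: build a placement row by row (depth-first),
# prune unsafe branches with a bounding check, and undo choices that
# cannot be completed to a valid solution.

def solve_n_queens(n):
    solutions = []
    cols = []                      # cols[r] = column of the queen in row r

    def safe(row, col):
        for r, c in enumerate(cols):
            if c == col or abs(row - r) == abs(col - c):
                return False       # same column or same diagonal: attack
        return True

    def place(row):
        if row == n:               # all rows filled: record a solution
            solutions.append(tuple(cols))
            return
        for col in range(n):
            if safe(row, col):     # bounding function prunes the branch
                cols.append(col)
                place(row + 1)
                cols.pop()         # backtrack: undo the choice

    place(0)
    return solutions

print(len(solve_n_queens(4)))  # → 2
```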
5. Randomized Algorithm
A randomized algorithm is an algorithm that employs a degree of randomness as part
of its logic or procedure. In some cases, probabilistic algorithms are the only practical means of
solving a problem. The output of a randomized algorithm on a given input is a random variable;
thus, there may be a positive probability that the outcome is incorrect. As long as the probability
of error is small for every possible input to the algorithm, this is not a problem. A randomized
algorithm utilizes randomness in its steps to achieve a solution and is often used in situations
where an approximate or probabilistic answer suffices.
There are two main types of randomized algorithms: Las Vegas algorithms and Monte Carlo
algorithms.
Example 1: In Quick Sort, using a random number to choose a pivot.
Example 2: Trying to factor a large number by choosing random numbers as possible
divisors.
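Example 1 above can be sketched as follows (a minimal illustration, with my own function names): the pivot is drawn uniformly at random, so no fixed input can reliably force the worst case. Because the result is always correctly sorted and only the running time varies, this is a Las Vegas algorithm.

```python
import random

# Randomized Quick Sort: choose the pivot at random, partition the list
# around it, and recurse on the two sides. Output is always correct;
# only the running time is a random variable.

def random_quicksort(a):
    if len(a) <= 1:
        return a
    pivot = random.choice(a)                       # random pivot choice
    less = [x for x in a if x < pivot]
    equal = [x for x in a if x == pivot]           # handles duplicates
    greater = [x for x in a if x > pivot]
    return random_quicksort(less) + equal + random_quicksort(greater)

print(random_quicksort([10, 47, 12, 54, 19, 23]))  # → [10, 12, 19, 23, 47, 54]
```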
Recursive Algorithm: A method that breaks a problem into smaller, similar subproblems and
repeatedly applies itself to solve them until reaching a base case, making it effective for tasks
with recursive structures.
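A minimal sketch of this idea (my own example, not from the text): factorial reduces the problem n! to the smaller subproblem (n-1)! until it reaches the base case 0! = 1.

```python
# Recursive factorial: the function applies itself to a smaller input
# until the base case stops the recursion.

def factorial(n):
    if n == 0:                        # base case
        return 1
    return n * factorial(n - 1)       # self-application on smaller input

print(factorial(5))  # → 120
```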
Encryption Algorithm: Utilized to transform data into a secure, unreadable form using
cryptographic techniques, ensuring confidentiality and privacy in digital communications and
transactions.
Searching Algorithm: Designed to find a specific target within a dataset, enabling efficient
retrieval of information from sorted or unsorted collections.
Hashing Algorithm: Converts data into a fixed-size hash value, enabling rapid data access and
retrieval in hash tables, commonly used in databases and password storage.
SORTING ALGORITHMS
Sorting is a technique to rearrange the elements of a list in ascending or descending order, which
can be numerical, alphabetical or any user-defined order.
BUBBLE SORT
In the bubble sort method, the list is divided into two sub-lists: sorted and unsorted. The smallest
element is bubbled up from the unsorted sub-list. After moving the smallest element, the imaginary
wall between the two sub-lists moves one element ahead. The bubble sort was originally written to
bubble up the highest element in the list, but it makes no difference whether the highest or the
lowest element is bubbled. This method is easy to understand but time-consuming. In this method,
two successive elements are compared and swapped if they are out of order; step by step, the entire
array is checked. Given a list of 'n' elements, bubble sort requires up to n-1 passes to sort the data.
Bubble_Sort ( A [ ] , N )
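The Bubble_Sort(A, N) procedure named above can be sketched in Python as follows (sorting in ascending order; the early-exit flag is a common refinement, not part of the original stub):

```python
# Bubble sort: up to n-1 passes; each pass compares successive elements
# and swaps out-of-order pairs, so the largest unsorted element
# "bubbles" to the end. The unsorted part shrinks by one each pass.

def bubble_sort(a):
    n = len(a)
    for i in range(n - 1):                # at most n-1 passes
        swapped = False
        for j in range(n - 1 - i):        # skip the already-sorted tail
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
                swapped = True
        if not swapped:                   # no swaps: already sorted
            break
    return a

print(bubble_sort([16, 36, 24, 37, 15]))  # → [15, 16, 24, 36, 37]
```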
Example: A list of unsorted elements: 10 47 12 54 19 23. Using bubble sort (bubbling up the
highest value), the sorted list is: 54 47 23 19 12 10.
Exercise: Show the bubble sort results for each pass for the following initial array of elements:
35 18 7 12 5 23 16 3
Worked example: sorting the array 16 36 24 37 15 in ascending order.
Pass 1:
Compare a0 and a1
16 36 24 37 15
As a0 < a1, the array remains as it is. Compare a1 and a2: since a1 > a2, they are swapped,
giving 16 24 36 37 15. Compare a2 and a3: a2 < a3, so no swap. Compare a3 and a4: since
a3 > a4, they are swapped.
16 24 36 15 37
Pass 2:
Comparing successive pairs, only a2 and a3 (36 and 15) are out of order and get swapped.
16 24 15 36 37
Pass 3:
Compare a0 and a1
16 24 15 36 37
As a0 < a1, the array will remain as it is.
Compare a1 and a2
Now a1 > a2, so both of them will get swapped.
16 15 24 36 37
Pass 4:
Compare a0 and a1
16 15 24 36 37
Here a0 > a1, so we will swap both of them.
15 16 24 36 37
Hence the array is sorted as no more swapping is required.
Advantages of Bubble Sort
1. Easily understandable.
2. Does not necessitate any extra memory.
3. The code can be written easily for this algorithm.
4. It has a minimal space requirement compared to other sorting algorithms.
Disadvantages of Bubble Sort
1. It does not work well for large unsorted lists; it requires more resources and ends up
taking much more time.
2. It is mainly meant for academic purposes, not for practical implementations.
3. It involves on the order of n² steps to sort an array.
INSERTION SORT
Insertion sort is one of the simplest sorting algorithms because it sorts a single element at a
time. It is not the best sorting algorithm in terms of performance, but it is slightly more
efficient than selection sort and bubble sort in practical scenarios.
Both the selection and bubble sorts exchange elements, but insertion sort does not exchange
elements; instead, the element is inserted at an appropriate place, similar to inserting a card into
a sorted hand. Here the list is divided into two parts: sorted and unsorted sub-lists. In each pass,
the first element of the unsorted sub-list is picked up and moved into the sorted sub-list by
inserting it in a suitable position. For 'n' elements, we need n-1 passes to sort the elements.
Insertion sort works this way:
INSERTION_SORT (A)
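The INSERTION_SORT(A) procedure named above can be sketched in Python as follows (ascending order; the shifting loop stands in for the card-insertion step described above):

```python
# Insertion sort: in each pass the key (first element of the unsorted
# part) is shifted left into its correct position within the sorted
# part, by moving larger elements one place to the right.

def insertion_sort(a):
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0 and a[j] > key:  # shift larger elements right
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key                # insert the key at its place
    return a

print(insertion_sort([41, 22, 63, 14, 55, 36]))  # → [14, 22, 36, 41, 55, 63]
```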
Exercise: Demonstrate the insertion sort results for each insertion for the following initial
array of elements: 25 6 15 12 8 34 9 18
Consider the following example of an unsorted array that we will sort with the help of the
Insertion Sort algorithm.
A = (41, 22, 63, 14, 55, 36)
Initially, A = (41, 22, 63, 14, 55, 36).
1st Iteration:
Set key = 22. Compare a1 with a0; since 22 < 41, insert 22 before 41.
A = (22, 41, 63, 14, 55, 36)
2nd Iteration:
Set key = 63. Compare a2 with a1 and a0; 63 is already the largest, so the array is unchanged.
A = (22, 41, 63, 14, 55, 36)
3rd Iteration:
Set key = 14. Compare a3 with a2, a1 and a0. Since a3 is the smallest among all the elements
on the left-hand side, place a3 at the beginning of the array.
A = (14, 22, 41, 63, 55, 36)
4th Iteration:
Set key = 55. Compare a4 with a3, a2, a1 and a0; 55 is inserted before 63.
A = (14, 22, 41, 55, 63, 36)
5th Iteration:
Set key = 36. Compare a5 with a4, a3, a2, a1 and a0. Since a5 < a2, shift the larger elements
right and place the key in its correct position.
A = (14, 22, 36, 41, 55, 63)
QUICK SORT
Quick sort is based on partitioning and is also known as partition exchange sort. It was invented
by C. A. R. Hoare. The basic concept of the quick sort process is to pick one element from an
array and rearrange the remaining elements around it. This element divides the main list into
two sub-lists. The chosen element is called the pivot. Once the pivot is chosen, all the elements
less than the pivot are shifted to its left and all the elements greater than the pivot are shifted to
its right. This procedure of choosing a pivot and partitioning the list is applied recursively until
the sub-lists consist of only one element.
Algorithm for quick sort:
Quicksort(q)
    var list less, pivotList, greater
    if length(q) ≤ 1
        return q
    select a pivot value pivot from q
    for each x in q except the pivot element
        if x < pivot then add x to less
        if x ≥ pivot then add x to greater
    add pivot to pivotList
    return concatenate(Quicksort(less), pivotList, Quicksort(greater))
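The pseudocode above can be rendered as a Python sketch (taking the first element as the pivot here; any choice of pivot from q fits the pseudocode):

```python
# Quick sort following the pseudocode: pick a pivot, partition the rest
# into "less" and "greater" lists, and recurse on each part.

def quicksort(q):
    if len(q) <= 1:
        return q
    pivot = q[0]                                  # pivot chosen from q
    less = [x for x in q[1:] if x < pivot]
    greater = [x for x in q[1:] if x >= pivot]
    return quicksort(less) + [pivot] + quicksort(greater)

print(quicksort([41, 22, 63, 14, 55, 36]))  # → [14, 22, 36, 41, 55, 63]
```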
Time Complexity of Quick sort:
Best case    : O(n log n)
Average case : O(n log n)
Worst case   : O(n²)
Advantages of quick sort
1. It is one of the fastest sorting methods.
2. Its efficiency is also relatively good.
3. It requires a relatively small amount of memory.