Successful Search
If the element is matched successfully, the search returns the index of the matching element.
Unsuccessful Search
If the element is not matched, the search returns -1.
Sequential Search Algorithm
1. Take the input array arr[] from the user.
2. Take the element (x) to be searched in this array from the user.
3. Set a flag variable to -1.
4. LOOP : arr[start] -> arr[end]
1. If a match is found, i.e. arr[current_position] == x, then
1. Print "Match found at position" current_position.
2. flag = 0
3. abort
5. After the loop, check the flag variable.
1. If flag == -1
1. Print "No match found".
6. STOP
Sequential Search Pseudocode
void LinearSearch(int arr[], int value, int n)
// arr[] = list of data, value = key to be searched, n = total number of elements
{
    int found = 0;
    int i;                         // declared outside the loop so the position is available after it
    for (i = 0; i < n; i++) {
        if (value == arr[i]) {
            found = 1;
            break;
        }
    }
    if (found == 1)
        printf("Element is present in the array at position %d", i + 1);
    else
        printf("Element is not present in the array.");
}
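As a quick check, a small driver of the kind below could call LinearSearch on a sample array; the array values and the searched key are illustrative only, and <stdio.h> is assumed to be included at the top of the same file.

#include <stdio.h>

int main(void)
{
    int arr[] = {13, 7, 43, 5, 3, 19, 2, 23, 29};
    int n = sizeof(arr) / sizeof(arr[0]);
    LinearSearch(arr, 19, n);      // prints the 1-based position of 19
    return 0;
}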
Best case -
In the best possible case, the element being searched is found at the first position of the array, so only one comparison is made.
Thus, in the best case, the linear search algorithm takes O(1) operations.
Worst case -
In the worst possible case, the element is either at the last position of the array or not present at all, so n comparisons are made.
In the latter case, the search terminates in failure after n comparisons.
Space complexity
The linear search algorithm does not use any extra space, so its auxiliary space complexity is O(1); counting the input array of n elements, the total space used is O(n).
Time Complexity
Worst case complexity: O(n) – This case occurs
when the element to search is not present in the
array.
Best case complexity: O(1) – This case occurs
when the first element is the element to be searched.
Average complexity: O(n) – This case occurs
when the element is present somewhere in the middle of the
array; on average about n/2 comparisons are made, which is O(n).
Sentinel Linear Search
The idea is to reduce the number of comparisons
required to find an element in a list.
Here we replace the last element of the list
with the search element itself (the sentinel) and run a while
loop to see if there exists any copy of the
search element in the list, quitting the loop as
soon as the search element is found.
This algorithm is faster than simple linear search
because it cuts down on the number of
comparisons made in each iteration (no separate end-of-list check is needed).
Algorithm
In sentinel search, we first place the target at the end of
the list (after saving the original last element), and then we compare each item of the list until
we find the required item.
Here we see that the while loop makes only one comparison in each iteration, and it is guaranteed
to terminate since the last element of the list is the search element itself. So in the
worst case (if the search element does not exist in the list) there will be at
most N+2 comparisons (N comparisons in the while loop and 2 comparisons in the if
condition), which is better than the (2N+1) comparisons made by simple linear search.
Note that both algorithms have a time complexity of O(n).
Implementation
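A minimal sketch of sentinel linear search in C, assuming the array is writable; the function name sentinelSearch and the sample values are illustrative only.

#include <stdio.h>

int sentinelSearch(int arr[], int n, int key)
{
    int last = arr[n - 1];       // save the last element
    arr[n - 1] = key;            // place the sentinel

    int i = 0;
    while (arr[i] != key)        // only one comparison per iteration
        i++;

    arr[n - 1] = last;           // restore the original last element

    if (i < n - 1 || arr[n - 1] == key)
        return i;                // key found at index i
    return -1;                   // key not present
}

int main(void)
{
    int a[] = {18, 32, 12, 5, 38, 33, 16, 2};
    int n = sizeof(a) / sizeof(a[0]);
    int pos = sentinelSearch(a, n, 38);
    if (pos >= 0)
        printf("Element found at index %d\n", pos);
    else
        printf("Element not present\n");
    return 0;
}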
Binary Search
The sequential search algorithm is very slow.
If we have an array of 1000 elements, we
must make 1000 comparisons in the worst
case.
In binary search, the list is divided into two halves and the item
is compared with the middle element of the list.
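A minimal iterative binary search sketch in C, assuming the array is sorted in ascending order; the function name binarySearch is illustrative.

int binarySearch(int arr[], int n, int key)
{
    int low = 0, high = n - 1;
    while (low <= high) {
        int mid = low + (high - low) / 2;  // middle element of the current half
        if (arr[mid] == key)
            return mid;                    // key found
        else if (arr[mid] < key)
            low = mid + 1;                 // search the upper half
        else
            high = mid - 1;                // search the lower half
    }
    return -1;                             // key not present
}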
Fibonacci Search: Key Points
Fibonacci search examines relatively closer elements in subsequent steps, so it is
useful when the input array is so big that it cannot fit in the CPU cache or even in RAM.
On average, Fibonacci search requires about 4% more comparisons than
binary search.
Fibonacci search requires only addition and subtraction, whereas
binary search requires bit-shift, division, or multiplication operations.
Fibonacci search can reduce the time needed to access an element
in a random access memory.
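A sketch of Fibonacci search in C under the usual assumptions: the array is sorted in ascending order, and only additions and subtractions are used to split the search range. The function name fibonacciSearch is illustrative.

int fibonacciSearch(int arr[], int n, int key)
{
    int fib2 = 0;                 // (m-2)th Fibonacci number
    int fib1 = 1;                 // (m-1)th Fibonacci number
    int fib  = fib2 + fib1;       // m-th Fibonacci number

    while (fib < n) {             // smallest Fibonacci number >= n
        fib2 = fib1;
        fib1 = fib;
        fib  = fib2 + fib1;
    }

    int offset = -1;              // portion of the array already eliminated

    while (fib > 1) {
        int i = (offset + fib2 < n - 1) ? offset + fib2 : n - 1;
        if (arr[i] < key) {       // eliminate the front part, move the range right
            fib  = fib1;
            fib1 = fib2;
            fib2 = fib - fib1;
            offset = i;
        } else if (arr[i] > key) {  // eliminate the back part
            fib  = fib2;
            fib1 = fib1 - fib2;
            fib2 = fib - fib1;
        } else {
            return i;             // key found
        }
    }
    if (fib1 && offset + 1 < n && arr[offset + 1] == key)
        return offset + 1;        // compare the last remaining element
    return -1;                    // key not present
}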
Indexed Sequential Search
Indexed Sequential Search, also known as Indexed
Sequential Access Method (ISAM), is a searching
technique used in data structures, particularly for
searching within large datasets stored in sequential
access files.
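A minimal sketch of indexed sequential search in C, assuming the array is sorted in ascending order and small enough for a fixed-size index; the group size GROUP, the table sizes, and the function name are illustrative. An index table stores every GROUP-th element and its position; the index is scanned first to locate the block, and that block is then searched sequentially.

#define GROUP 3

int indexedSequentialSearch(int arr[], int n, int key)
{
    int indexVal[32], indexPos[32];        // assumption: n <= 96 for this fixed-size index
    int m = 0;

    for (int i = 0; i < n; i += GROUP) {   // build the index table
        indexVal[m] = arr[i];
        indexPos[m] = i;
        m++;
    }

    int start = 0, end = n;
    for (int j = 0; j < m; j++) {          // find the block that can contain the key
        if (indexVal[j] <= key)
            start = indexPos[j];
        else {
            end = indexPos[j];
            break;
        }
    }

    for (int i = start; i < end; i++)      // sequential search inside the block
        if (arr[i] == key)
            return i;
    return -1;
}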
Bubble sort trace on array A (after each pass, the next-largest element settles at the end):
Index:          0   1   2   3   4   5   6   7   8
Original array: 13  7   43  5   3   19  2   23  29
Pass 1:         7   13  5   3   19  2   23  29  43
Pass 2:         7   5   3   13  2   19  23  29  43
Pass 3:         5   3   7   2   13  19  23  29  43
Pass 4:         3   5   2   7   13  19  23  29  43
Pass 5:         3   2   5   7   13  19  23  29  43
Pass 6:         2   3   5   7   13  19  23  29  43
Pass 7:         2   3   5   7   13  19  23  29  43
Pass 8:         2   3   5   7   13  19  23  29  43
Selection sort
The Selection sort algorithm is based on the idea of
finding the minimum or maximum element in an
unsorted array and then putting it in its correct position
in a sorted array.
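A minimal selection sort sketch in C: in each pass the minimum of the unsorted part is found and swapped into its correct position. The function name selectionSort is illustrative.

void selectionSort(int arr[], int n)
{
    for (int i = 0; i < n - 1; i++) {
        int min = i;                     // index of the smallest element seen so far
        for (int j = i + 1; j < n; j++)
            if (arr[j] < arr[min])
                min = j;
        if (min != i) {                  // swap it into position i
            int tmp = arr[i];
            arr[i] = arr[min];
            arr[min] = tmp;
        }
    }
}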
Comparison with bubble sort:
Property   Bubble Sort    Selection Sort
Stable     Yes            No
Method     Exchanging     Selection
Speed      Slow           Fast as compared to bubble sort
Agenda
Insertion Sort
Example of insertion sort
Algorithm and time complexity
Insertion sort
Insertion sort is a simple sorting algorithm that
builds the final sorted array (or list) one item at a
time.
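A minimal insertion sort sketch in C: each element is taken in turn and inserted into its correct place among the already-sorted elements to its left. The function name insertionSort is illustrative.

void insertionSort(int arr[], int n)
{
    for (int i = 1; i < n; i++) {
        int key = arr[i];              // element to insert
        int j = i - 1;
        while (j >= 0 && arr[j] > key) {
            arr[j + 1] = arr[j];       // shift larger elements one place right
            j--;
        }
        arr[j + 1] = key;
    }
}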
Shell sort
Example 1
N = 10
gap = floor(N/2) = floor(10/2) = 5
Example 2
X = [18, 32, 12, 5, 38, 33, 16, 2]
No. of elements = 8
gap = floor(N/2) = floor(8/2) = 4
Algorithm
Step 1: Divide the list into n/2 sublists.
Step 2: Sort each sublist using insertion
sort.
Step 3: Merge the sublists.
Step 4: Halve the number of sublists.
Step 5: Repeat steps 2 to 4 until the
number of sublists becomes 1.
Example
X = [35, 33, 42, 10, 14, 19, 27, 44]
No. of elements = 8
gap = floor(N/2) = floor(8/2) = 4
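A minimal shell sort sketch in C using the gap sequence from the slides: the gap starts at floor(N/2) and is halved each round until it reaches 1; elements gap positions apart form the sublists, each sorted by insertion sort. The driver uses the example array X above; the function name shellSort is illustrative.

#include <stdio.h>

void shellSort(int arr[], int n)
{
    for (int gap = n / 2; gap > 0; gap /= 2) {
        for (int i = gap; i < n; i++) {
            int key = arr[i];
            int j = i;
            while (j >= gap && arr[j - gap] > key) {
                arr[j] = arr[j - gap];   // shift within the sublist
                j -= gap;
            }
            arr[j] = key;
        }
    }
}

int main(void)
{
    int X[] = {35, 33, 42, 10, 14, 19, 27, 44};
    int n = sizeof(X) / sizeof(X[0]);
    shellSort(X, n);
    for (int i = 0; i < n; i++)
        printf("%d ", X[i]);
    printf("\n");
    return 0;
}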
Applications:
The C standard library uses shell sort when dealing
with embedded systems.
Compressors, such as bzip2, also use it to avoid
problems that could come when sorting algorithms
exceed a language’s recursion depth.
It is also used in the Linux kernel because it does
not use the call stack.
Agenda
What is Divide and Conquer approach
Merge Sort
Divide and conquer
In a divide and conquer algorithm, we first divide the problem into
subproblems, solve each subproblem, and then combine (merge)
their solutions to conquer the original problem.
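A minimal merge sort sketch in C illustrating the divide and conquer approach named in the agenda: the array is split in half, each half is sorted recursively, and the two sorted halves are merged. The function names, the temporary buffer, and the sample values are illustrative.

#include <stdio.h>

void merge(int arr[], int tmp[], int left, int mid, int right)
{
    int i = left, j = mid + 1, k = left;
    while (i <= mid && j <= right)               // merge the two sorted halves
        tmp[k++] = (arr[i] <= arr[j]) ? arr[i++] : arr[j++];
    while (i <= mid)   tmp[k++] = arr[i++];      // copy any leftovers
    while (j <= right) tmp[k++] = arr[j++];
    for (k = left; k <= right; k++)
        arr[k] = tmp[k];
}

void mergeSort(int arr[], int tmp[], int left, int right)
{
    if (left >= right)
        return;                                  // a single element is already sorted
    int mid = left + (right - left) / 2;
    mergeSort(arr, tmp, left, mid);              // divide: sort the left half
    mergeSort(arr, tmp, mid + 1, right);         // divide: sort the right half
    merge(arr, tmp, left, mid, right);           // conquer: merge the halves
}

int main(void)
{
    int a[] = {38, 27, 43, 3, 9, 82, 10};
    int tmp[7];
    mergeSort(a, tmp, 0, 6);
    for (int i = 0; i < 7; i++)
        printf("%d ", a[i]);
    printf("\n");
    return 0;
}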
Quick sort: partitioning example (pivot_index = 0, pivot = 40)

Partition rules:
1. While data[too_big_index] <= data[pivot]
++too_big_index
2. While data[too_small_index] > data[pivot]
--too_small_index
3. If too_big_index < too_small_index
swap data[too_big_index] and data[too_small_index]
4. While too_small_index > too_big_index, go to 1.
5. Swap data[too_small_index] and data[pivot_index]

Trace:
pivot_index = 0   40 20 10 80 60 50 7 30 100   (initial array; too_big_index stops at 80, too_small_index stops at 30)
pivot_index = 0   40 20 10 30 60 50 7 80 100   (after swapping 80 and 30; the scans then stop at 60 and 7)
pivot_index = 0   40 20 10 30 7 50 60 80 100   (after swapping 60 and 7; the indices then cross)
pivot_index = 4   7 20 10 30 40 50 60 80 100   (after swapping the pivot 40 with data[too_small_index] = 7)

Partition Result
[7 20 10 30]   40   [50 60 80 100]
elements <= pivot | pivot | elements > pivot
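A minimal quick sort sketch in C following the partition scheme traced above: the first element is the pivot, too_big_index scans right for an element greater than the pivot, too_small_index scans left for one not greater, out-of-place pairs are swapped, and finally the pivot is swapped into place at too_small_index. Function names and sample values are illustrative.

#include <stdio.h>

static void swap(int *a, int *b) { int t = *a; *a = *b; *b = t; }

int partition(int data[], int low, int high)
{
    int pivot = data[low];
    int too_big_index = low + 1;
    int too_small_index = high;

    while (1) {
        while (too_big_index <= high && data[too_big_index] <= pivot)
            ++too_big_index;                     // rule 1
        while (data[too_small_index] > pivot)
            --too_small_index;                   // rule 2
        if (too_big_index < too_small_index)
            swap(&data[too_big_index], &data[too_small_index]);  // rule 3
        else
            break;                               // rule 4: indices have crossed
    }
    swap(&data[low], &data[too_small_index]);    // rule 5: place the pivot
    return too_small_index;
}

void quickSort(int data[], int low, int high)
{
    if (low < high) {
        int p = partition(data, low, high);
        quickSort(data, low, p - 1);             // sort elements <= pivot
        quickSort(data, p + 1, high);            // sort elements > pivot
    }
}

int main(void)
{
    int data[] = {40, 20, 10, 80, 60, 50, 7, 30, 100};
    int n = sizeof(data) / sizeof(data[0]);
    quickSort(data, 0, n - 1);
    for (int i = 0; i < n; i++)
        printf("%d ", data[i]);
    printf("\n");
    return 0;
}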
Counting Sort: Time Complexity
The time complexity of the counting sort algorithm is O(m+n), where m is the number of
elements in the input array and n is the range of the input values.
The time complexity is the same in the best, worst, and average cases, because counting
sort's running time does not depend on how the elements are arranged in the array, only on
the number of elements and the range of values.
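A minimal counting sort sketch in C for the simple (non-stable) variant, assuming non-negative integer keys in the range 0..RANGE-1; RANGE and the function name are illustrative. Counting each value and reading the counts back out gives O(m + n) time regardless of the initial order.

#include <string.h>

#define RANGE 100

void countingSort(int arr[], int m)
{
    int count[RANGE];
    memset(count, 0, sizeof(count));

    for (int i = 0; i < m; i++)        // count occurrences of each value
        count[arr[i]]++;

    int k = 0;
    for (int v = 0; v < RANGE; v++)    // write values back in sorted order
        while (count[v]-- > 0)
            arr[k++] = v;
}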
Bucket Sort
Bucket sort is a sorting algorithm mainly used when we have
data uniformly distributed over a range.
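A minimal bucket sort sketch in C, assuming the values are uniformly distributed in [0, 1) and that each bucket holds at most 32 items; NBUCKETS, the bucket capacity, and the function name are illustrative. Each value is scattered into a bucket, each bucket is sorted with insertion sort, and the buckets are gathered back in order.

#define NBUCKETS 10

void bucketSort(float arr[], int n)
{
    float buckets[NBUCKETS][32];       // assumption: each bucket holds <= 32 items
    int counts[NBUCKETS] = {0};

    for (int i = 0; i < n; i++) {      // scatter into buckets
        int b = (int)(arr[i] * NBUCKETS);
        buckets[b][counts[b]++] = arr[i];
    }

    int k = 0;
    for (int b = 0; b < NBUCKETS; b++) {
        for (int i = 1; i < counts[b]; i++) {   // insertion-sort each bucket
            float key = buckets[b][i];
            int j = i - 1;
            while (j >= 0 && buckets[b][j] > key) {
                buckets[b][j + 1] = buckets[b][j];
                j--;
            }
            buckets[b][j + 1] = key;
        }
        for (int i = 0; i < counts[b]; i++)     // gather back in order
            arr[k++] = buckets[b][i];
    }
}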