CS23231-DS-Unit-V_Reg_2023
SEARCHING
1.1 INTRODUCTION
• Linear search
• Binary search
REVIEW QUESTIONS
1. What is searching?
2. What are the types of searching?
CHAPTER - 2 - LINEAR SEARCH
2.1 INTRODUCTION
2.2 ALGORITHM
LinearSearch(A[], N, KEY)
Step 1 : Start.
Step 2 : Repeat For I = 0 to N-1.
Step 3 : If A[I] = KEY then Goto Step 4 else Goto Step 5.
Step 4 : Return I and Stop.
Step 5 : Increment I by 1.
Step 6 : [End of Step 3 For loop].
Step 7 : Return -1.
Step 8 : Stop.
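A C routine corresponding to these steps can be sketched as follows (the function and variable names mirror the algorithm above; this is an illustrative sketch rather than a complete program):

/* Returns the index of key in a[0..n-1], or -1 if it is not present. */
int LinearSearch(int a[], int n, int key)
{
    int i;
    for (i = 0; i < n; i++)      /* Step 2 : examine each element in turn */
    {
        if (a[i] == key)
            return i;            /* Step 4 : key found at index i */
    }
    return -1;                   /* Step 7 : key not present in the list */
}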
2.3. ROUTINE
2.4 PROGRAM
#include <stdio.h>
Output
3.1 INTRODUCTION
3.2 ALGORITHM
Step 1 : Start.
Step 2 : Set FIRST = 0.
Step 3 : Set LAST = N – 1.
Step 4 : Repeat While FIRST <= LAST.
Step 5 : Set MID = (FIRST + LAST) / 2.
Step 6 : If A[MID] = KEY then Goto Step 7 else Goto Step 8.
Step 7 : Return MID and Stop.
Step 8 : If A[MID] < KEY then Goto Step 9 else Goto Step 10.
Step 9 : Set FIRST = MID + 1 and Goto Step 11.
Step 10 : Set LAST = MID – 1.
Step 11 : [End of Step 4 While loop].
Step 12 : Return -1.
Step 13 : Stop.
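Based on these steps, an iterative binary search routine can be sketched in C as follows (the function name BinarySearch and its parameter list are assumed, since the algorithm above does not give a signature; the array must already be sorted in ascending order):

/* Returns the index of key in the sorted array a[0..n-1], or -1 if absent. */
int BinarySearch(int a[], int n, int key)
{
    int first = 0, last = n - 1, mid;
    while (first <= last)              /* Step 4 */
    {
        mid = (first + last) / 2;      /* Step 5 */
        if (a[mid] == key)
            return mid;                /* Step 7 : key found */
        else if (a[mid] < key)
            first = mid + 1;           /* Step 9 : search the right half */
        else
            last = mid - 1;            /* Step 10 : search the left half */
    }
    return -1;                         /* Step 12 : key not present */
}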
3.3 ROUTINE
3.4 PROGRAM
#include <stdio.h>
Output
Best Case
The best case for a binary search algorithm occurs when the
element to be searched is present at the middle of the list. In this
case, only one comparison is made to perform the search operation.
Worst Case
The worst case for a binary search algorithm occurs when the
element to be searched is not present in the list. In this case, the list
is continuously divided until only one element is left for comparison.
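For example, in a sorted list of 1,000 elements the list can be halved only about log2(1000) ≈ 10 times, so at most about 10 comparisons are needed even when the key is absent. In general, the worst case time complexity of binary search is O(log n), while the best case is O(1).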
11.1 INTRODUCTION
Sorting Techniques
• Internal Sorting
• External Sorting
• Insertion sort
• Selection sort
• Shell sort
• Bubble sort
• Quick sort
• Heap sort
• Merge Sort
• Multiway Merge
• Polyphase merge
REVIEW QUESTIONS
1. What is sorting?
2. What are the types of sorting?
3. Differentiate between internal sorting and external sorting. (or)
Differentiate internal and external sorting.
4. List the four types of sorting techniques.
5. What is the need for external sorting?
6. List sorting algorithm which uses logarithmic time complexity.
CHAPTER - 12 - INSERTION SORT
12.1. INTRODUCTION
Pass 2: A[3] is compared with both A[1] and A[2] and inserted at
an appropriate place. This makes A[1], A[2], A[3] a sorted sub array.
Pass n–1: A[n] is compared with each element in the sub array
A[1], A[2], A[3], … A[n-1] and inserted at an appropriate position. This
eventually makes the entire array A sorted.
12.2. ALGORITHM
InsertionSort(A[], N)
A[] : Integer array
N : Integer
Step 1 : Start.
Step 2 : Repeat For I = 1 to N-1.
Step 3 : Set TEMP = A[I].
Step 4 : Set J = I.
Step 5 : Repeat While J > 0 and A[J - 1] > TEMP.
Step 6 : Set A[J] = A[J - 1].
Step 7 : Decrement J by 1.
Step 8 : [End of Step 5 While loop].
Step 9 : Set A[J] = TEMP.
Step 10 : Increment I by 1.
Step 11 : [End of Step 2 For loop].
Step 12 : Stop.
12.3 ROUTINE
12.4. EXAMPLE
Original array : 20 10 60 40 30 15

               Array contents           Positions moved
Original       20 10 60 40 30 15
After i = 1    10 20 60 40 30 15              1
After i = 2    10 20 60 40 30 15              0
After i = 3    10 20 40 60 30 15              1
After i = 4    10 20 30 40 60 15              2
After i = 5    10 15 20 30 40 60              4
Sorted array   10 15 20 30 40 60
12.5 PROGRAM
#include <stdio.h>
void InsertionSort(int a[], int n);

int main()
{
    int n, i, a[10];

    printf("Enter the limit : ");
    scanf("%d", &n);
    printf("Enter the elements : ");
    for (i = 0; i < n; i++)
        scanf("%d", &a[i]);
    InsertionSort(a, n);
    printf("The sorted elements are : ");
    for (i = 0; i < n; i++)
        printf("%d\t", a[i]);
    return 0;
}
void InsertionSort(int a[], int n)
{
int i, j, temp;
for (i = 1; i < n; i++)
{
temp = a[i];
j = i;
while (j > 0 && a[j - 1] > temp)
{
a[j] = a[j - 1];
j = j - 1;
}
a[j] = temp;
}
}
Output
• The efficiency of O(n²) is not well suited for large sized lists.
• It is expensive because of shifting all following elements by
one.
REVIEW QUESTIONS
13.1 INTRODUCTION
13.2 ALGORITHM
SelectionSort(A[], N)
Step 1 : Start.
Step 2 : Repeat For I = 0 to N - 1.
Step 3 : Set MIN = I.
Step 4 : Repeat For J = I + 1 to N - 1.
Step 5 : If A[J] < A[MIN] then Goto Step 6 else Goto Step 7.
Step 6 : Set MIN = J.
Step 7 : Increment J by 1.
Step 8 : [End of Step 4 For loop].
Step 9 : Set TEMP = A[I].
Step 10 : Set A[I] = A[MIN].
Step 11 : Set A[MIN] = TEMP.
Step 12 : Increment I by 1.
Step 13 : [End of Step 2 For loop].
Step 14 : Stop.
13.3 ROUTINE
13.4. EXAMPLE
13.5. PROGRAM
#include <stdio.h>
void SelectionSort(int a[], int n);
int main()
{
    int i, n, a[10];

    printf("Enter the limit : ");
    scanf("%d", &n);
    printf("Enter the elements : ");
    for (i = 0; i < n; i++)
        scanf("%d", &a[i]);
    SelectionSort(a, n);
    printf("The sorted elements are : ");
    for (i = 0; i < n; i++)
        printf("%d\t", a[i]);
    return 0;
}
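The SelectionSort routine called above can be sketched as follows, following the algorithm in Section 13.2 (0-based loop bounds are used to match the program; this sketch is supplied for completeness):

void SelectionSort(int a[], int n)
{
    int i, j, min, temp;
    for (i = 0; i < n - 1; i++)
    {
        min = i;                       /* Step 3 : assume a[i] is the minimum */
        for (j = i + 1; j < n; j++)    /* Steps 4-8 : scan the remaining elements */
        {
            if (a[j] < a[min])
                min = j;               /* Step 6 : remember the new minimum */
        }
        temp = a[i];                   /* Steps 9-11 : swap a[i] and a[min] */
        a[i] = a[min];
        a[min] = temp;
    }
}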
Output
• The efficiency of O(n²) is not well suited for large sized lists.
• It does not leverage the presence of any existing sort pattern
in the list.
REVIEW QUESTIONS
14.1 INTRODUCTION
Shell sort is an algorithm that first sorts the elements far apart
from each other and successively reduces the interval between the
elements to be sorted. It is a generalized version of insertion sort.
14.2. ALGORITHM
ShellSort(A[], N)
Step 1 : Start.
Step 2 : Repeat For GAP = N / 2 down to 1, halving GAP each time.
Step 3 : Repeat For I = GAP to N - 1.
Step 4 : Set TEMP = A[I].
Step 5 : Repeat For J = I, While J >= GAP and A[J - GAP] > TEMP, decrementing J by GAP.
Step 6 : Set A[J] = A[J - GAP].
Step 7 : [End of Step 5 For loop].
Step 8 : Set A[J] = TEMP.
Step 9 : [End of Step 3 For loop].
Step 10 : [End of Step 2 For loop].
Step 11 : Stop.
14.3 ROUTINE
14.4 EXAMPLE
The array is sorted using the increment (gap) sequence 5, 3, 1:

Original array : 81 94 11 96 12 35 17 95 28 58
After 5-sort   : 35 17 11 28 12 81 94 95 96 58
After 3-sort   : 28 12 11 35 17 81 58 95 96 94
After 1-sort   : 11 12 17 28 35 58 81 94 95 96
14.5 PROGRAM
#include <stdio.h>
void ShellSort(int a[], int n);
int main()
{
    int i, n, a[10];

    printf("Enter the limit : ");
    scanf("%d", &n);
    printf("Enter the elements : ");
    for (i = 0; i < n; i++)
        scanf("%d", &a[i]);
    ShellSort(a, n);
    printf("The sorted elements are : ");
    for (i = 0; i < n; i++)
        printf("%d\t", a[i]);
    return 0;
}
void ShellSort(int a[], int n)
{
int gap, i, j, temp;
for (gap = n / 2; gap > 0; gap /= 2)
{
for (i = gap; i < n; i += 1)
{
temp = a[i];
for (j = i; j >= gap && a[j - gap] > temp; j -= gap)
{
a[j] = a[j - gap];
}
a[j] = temp;
}
}
}
Output
15.1 INTRODUCTION
15.2 ALGORITHM
BubbleSort(A[], N)
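A minimal C sketch of a BubbleSort routine matching the signature above is given below (the loop bounds and the swap variable are illustrative):

/* On each pass, adjacent elements are compared and swapped if they are
   out of order, so the largest remaining element moves to the end. */
void BubbleSort(int a[], int n)
{
    int i, j, temp;
    for (i = 0; i < n - 1; i++)            /* number of passes */
    {
        for (j = 0; j < n - 1 - i; j++)    /* compare adjacent pairs */
        {
            if (a[j] > a[j + 1])
            {
                temp = a[j];
                a[j] = a[j + 1];
                a[j + 1] = temp;
            }
        }
    }
}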
15.5 PROGRAM
#include <stdio.h>
Output
• The efficiency of O(n²) is not well suited for large sized lists.
• It requires a large number of elements to be shifted.
• It is slow in execution as large elements are moved towards
the end of the list in a step-by-step fashion.
REVIEW QUESTIONS
16.1. INTRODUCTION
The quick sort works by partitioning the array A[1], A[2], . . ., A[n] by picking some key value in the array as a pivot element. The pivot element is used to rearrange the elements in the array. The pivot can be the first element of the array; the rest of the elements are moved so that the elements on the left side of the pivot are less than the pivot, whereas those on the right side are greater than the pivot. The pivot element is then placed in its correct position, and the quicksort procedure is applied to the left sub array and the right sub array in a recursive manner.
16.2. ALGORITHM
Step 1 : Start.
Step 2 : If LEFT < RIGHT then Goto Step 3 else Goto Step 23.
Step 3 : Set PIVOT = LEFT.
Step 4 : Set I = LEFT + 1.
Step 5 : Set J = RIGHT.
Step 6 : Repeat While I < J.
Step 7 : Repeat While A[I] < A[PIVOT].
Step 8 : Increment I by 1.
Step 9 : [End of Step 7 While loop].
Step 10 : Repeat While A[J] > A[PIVOT].
Step 11 : Decrement J by 1.
Step 12 : [End of Step 10 While loop].
Step 13 : If I < J then Goto Step 14 else Goto Step 17.
Step 14 : Set TEMP = A[I].
Step 15 : Set A[I] = A[J].
Step 16 : Set A[J] = TEMP.
Step 17 : [End of Step 6 While loop].
Step 18 : Set TEMP = A[PIVOT].
Step 19 : Set A[PIVOT] = A[J].
Step 20 : Set A[J] = TEMP.
Step 21 : QUICKSORT(LEFT, J - 1).
Step 22 : QUICKSORT(J + 1, RIGHT).
Step 23 : Stop.
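Following these steps, the QuickSort routine can be sketched in C as shown below. The first element of the range is used as the pivot; the bound check on i and the assumption that the keys are distinct are small safeguards added to the steps above and are noted in the comments:

void QuickSort(int a[], int left, int right)
{
    int pivot, i, j, temp;

    if (left < right)                               /* Step 2 */
    {
        pivot = left;                               /* Step 3 : pivot is the first element */
        i = left + 1;                               /* Step 4 */
        j = right;                                  /* Step 5 */
        while (i < j)                               /* Step 6 */
        {
            /* Steps 7-9 (a bound check on i is added so the scan cannot
               run past the right end of the range) */
            while (i <= right && a[i] < a[pivot])
                i++;
            while (a[j] > a[pivot])                 /* Steps 10-12 */
                j--;
            if (i < j)                              /* Steps 13-16 : swap A[I] and A[J] */
            {
                temp = a[i];
                a[i] = a[j];
                a[j] = temp;
            }
        }
        temp = a[pivot];                            /* Steps 18-20 : place the pivot */
        a[pivot] = a[j];                            /* in its correct position */
        a[j] = temp;
        QuickSort(a, left, j - 1);                  /* Step 21 : sort the left sub array */
        QuickSort(a, j + 1, right);                 /* Step 22 : sort the right sub array */
    }
}

If equal keys may occur, the inner scans need a further adjustment so that i and j always advance; the sketch above assumes distinct keys, as in the example that follows.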
16.3 ROUTINE
16.4 EXAMPLE
Initial array : 40 20 70 14 60 61 97 30

Pivot = 40 (the first element). The pointer i starts at the second element and moves right while A[i] < Pivot; the pointer j starts at the last element and moves left while A[j] > Pivot.

i stops at 70 and j stops at 30, so A[i] and A[j] are swapped:
40 20 30 14 60 61 97 70

i then stops at 60 and j stops at 14. Since i and j have now crossed, the pivot 40 is swapped with A[j] = 14:
14 20 30 40 60 61 97 70

Now, the pivot element has reached its correct position. The elements less than the pivot {14, 20, 30} are considered as the left sub array, and the elements greater than the pivot {60, 61, 97, 70} are considered as the right sub array. The QuickSort procedure is then applied recursively to both these sub arrays.
16.5 PROGRAM
#include <stdio.h>
temp = a[pivot];
a[pivot] = a[j];
a[j] = temp;
QuickSort(a, left, j - 1);
QuickSort(a, j + 1, right);
}
}
Output
• The worst case efficiency of O(n²) is not well suited for large sized lists.
• Its algorithm is a little more complex than some other sorting techniques.
REVIEW QUESTIONS
17.1 INTRODUCTION
Merge sort is applied recursively to the first half and the second half of the array. This gives two sorted halves, which can then be merged together using the merging algorithm.
The basic merging algorithm takes two input arrays A and B and an output array C. The first elements of arrays A and B are compared; the smaller element is stored in the output array C and the corresponding pointer is incremented. When either input array is exhausted, the remainder of the other array is copied to the output array C.
17.2 ALGORITHM
Step 1 : Start.
Step 2 : Set N1 = CENTER - LEFT + 1.
Step 3 : Set N2 = RIGHT - CENTER.
Step 4 : Repeat For I = 0 to N1 - 1.
Step 5 : Set A[I] = ARR[LEFT + I].
Step 6 : Increment I by 1.
Step 7 : [End of Step 4 For loop].
Step 8 : Repeat For J = 0 to N2 - 1.
Step 9 : Set B[J] = ARR[CENTER + 1 + J].
Step 10 : Increment J by 1.
Step 11 : [End of Step 8 For loop].
Step 12 : Repeat While APTR < N1 AND BPTR < N2.
Step 13 : If A[APTR] <= B[BPTR] then Goto Step 14 else Goto Step 16.
Step 14 : Set ARR[CPTR] = A[APTR].
Step 15 : Increment APTR by 1 and Goto Step 18.
Step 16 : Set ARR[CPTR] = B[BPTR].
Step 17 : Increment BPTR by 1.
Step 18 : Increment CPTR by 1.
Step 19 : [End of Step 12 While loop].
Step 20 : Repeat While APTR < N1.
Step 21 : Set ARR[CPTR] = A[APTR].
Step 22 : Increment APTR by 1.
Step 23 : Increment CPTR by 1.
Step 24 : [End of Step 20 While loop].
Step 25 : Repeat While BPTR < N2.
Step 26 : Set ARR[CPTR] = B[BPTR].
Step 27 : Increment BPTR by 1.
Step 28 : Increment CPTR by 1.
Step 29 : [End of Step 25 While loop].
Step 30 : Stop.
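Based on these steps, the merging routine and a recursive merge sort driver can be sketched in C as follows (the names Merge and MergeSort, the MAX constant and the local arrays a and b are assumptions made for the sketch):

#define MAX 100   /* assumed maximum size of the temporary arrays */

/* Merge arr[left..center] and arr[center+1..right], both already sorted,
   back into arr[left..right]. */
void Merge(int arr[], int left, int center, int right)
{
    int a[MAX], b[MAX];
    int n1 = center - left + 1;          /* Step 2 */
    int n2 = right - center;             /* Step 3 */
    int i, j, aptr, bptr, cptr;

    for (i = 0; i < n1; i++)             /* Steps 4-7 : copy the left half into A */
        a[i] = arr[left + i];
    for (j = 0; j < n2; j++)             /* Steps 8-11 : copy the right half into B */
        b[j] = arr[center + 1 + j];

    aptr = 0; bptr = 0; cptr = left;
    while (aptr < n1 && bptr < n2)       /* Step 12 : both arrays still have elements */
    {
        if (a[aptr] <= b[bptr])
            arr[cptr++] = a[aptr++];     /* Steps 14-15 : take the smaller from A */
        else
            arr[cptr++] = b[bptr++];     /* Steps 16-17 : take the smaller from B */
    }
    while (aptr < n1)                    /* Steps 20-24 : copy the remainder of A */
        arr[cptr++] = a[aptr++];
    while (bptr < n2)                    /* Steps 25-29 : copy the remainder of B */
        arr[cptr++] = b[bptr++];
}

/* Recursive merge sort : sort each half, then merge the two sorted halves. */
void MergeSort(int arr[], int left, int right)
{
    if (left < right)
    {
        int center = (left + right) / 2;
        MergeSort(arr, left, center);
        MergeSort(arr, center + 1, right);
        Merge(arr, left, center, right);
    }
}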
17.4 EXAMPLE
For instance, to sort the eight element array 24, 13, 26, 1, 2, 27, 38, 15, we recursively sort the first four and last four elements, obtaining 1, 13, 24, 26 and 2, 15, 27, 38. The merging algorithm is then applied to these two sorted halves to get the final sorted array.
First, 1 and 2 are compared; the smaller element 1 from the A array is copied to the C array and the pointers Aptr and Cptr are incremented by one.

A : 1 13 24 26      B : 2 15 27 38      C : 1

Next, 13 and 2 are compared and the smaller element 2 from the B array is copied to the C array and the pointers Bptr and Cptr are incremented by one. This proceeds until the A array and the B array are exhausted and all the elements are copied to the output array C.

Compare 13 and 2  : copy 2 from B       C : 1 2
Compare 13 and 15 : copy 13 from A      C : 1 2 13
Compare 24 and 15 : copy 15 from B      C : 1 2 13 15
Compare 24 and 27 : copy 24 from A      C : 1 2 13 15 24
Compare 26 and 27 : copy 26 from A      C : 1 2 13 15 24 26
A is exhausted    : copy 27 and 38      C : 1 2 13 15 24 26 27 38

The final sorted array is 1 2 13 15 24 26 27 38.
17.5 PROGRAM
#include <stdio.h>
17.7 ADVANTAGES
17.8 LIMITATIONS
REVIEW QUESTIONS
18.1 INTRODUCTION
Phase 1:
Structure Property
For any element in array position i, the left child is in position 2i and the right child is in position 2i + 1, i.e. the cell after the left child.
Heap Order Property
The key value in the parent node is smaller than or equal to the key value of any of its child nodes. To build the heap, apply the heap order property starting from the right-most non-leaf node at the bottom level.
Figure: Binary heap satisfying the structure property (result of Phase 1)
Phase 2:
Remove the smallest element from the heap and place it in the array.
Find the smallest child and swap with the root.
After the last DeleteMin, the array will contain the elements in descending order. To get the elements sorted in ascending order, the DeleteMax routine of the heap is applied, in which the heap order property is changed, i.e. the parent has a larger key value than its children.
Remove the maximum element from the heap and place in the array.
Reconstruct the heap till it satisfies the heap order property.
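The two phases described above can be sketched in C using a max-heap, so that the sorted output ends up in ascending order (the PercolateDown helper and the 0-based indexing are assumptions of this sketch; with 0-based indexing the children of position i are 2i + 1 and 2i + 2):

static void PercolateDown(int a[], int n, int i)
{
    int largest = i, left = 2 * i + 1, right = 2 * i + 2, temp;

    if (left < n && a[left] > a[largest])
        largest = left;
    if (right < n && a[right] > a[largest])
        largest = right;
    if (largest != i)
    {
        temp = a[i]; a[i] = a[largest]; a[largest] = temp;
        PercolateDown(a, n, largest);            /* restore the heap order below */
    }
}

void HeapSort(int a[], int n)
{
    int i, temp;

    /* Phase 1 : build the heap, starting from the right-most non-leaf node */
    for (i = n / 2 - 1; i >= 0; i--)
        PercolateDown(a, n, i);

    /* Phase 2 : repeatedly remove the maximum element, place it at the
       end of the array, and reconstruct the heap */
    for (i = n - 1; i > 0; i--)
    {
        temp = a[0]; a[0] = a[i]; a[i] = temp;   /* DeleteMax */
        PercolateDown(a, i, 0);
    }
}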
4.1 INTRODUCTION
• Static hashing
• Dynamic hashing
• Compilers
• Graph theory problem
• Online spelling checkers etc.
• Database systems
• Symbol tables
• Data dictionaries
• Network processing algorithms
• Browser caches
REVIEW QUESTIONS
5.1 INTRODUCTION
In the hash table generated in the above example, the hash function
is Employee ID%10. Thus, for Employee ID 101, hash key will be
calculated as 1. Therefore, Name1 will be stored at position 1 in the
hash table. For Employee ID 102, hash key will be 2, hence Name2
will be stored at position 2 in the hash table. Similarly, Name3,
Name4, and Name5 will be stored at position 4, 3, and 5 respectively,
as shown in Fig. Later, whenever an employee record is searched
using the Employee ID, the hash function will indicate the exact
position of that record in the hash table.
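A minimal sketch of this idea in C is shown below (the table size, the record layout and the Employee IDs beyond 101 and 102 are illustrative assumptions; the hash function Employee ID % 10 follows the example):

#include <stdio.h>
#include <string.h>

#define TABLE_SIZE 10

/* Hash function from the example : position = Employee ID % 10 */
int Hash(int employeeId)
{
    return employeeId % TABLE_SIZE;
}

int main(void)
{
    char table[TABLE_SIZE][20] = { "" };    /* hash table of names, initially empty */

    strcpy(table[Hash(101)], "Name1");      /* 101 % 10 = 1 */
    strcpy(table[Hash(102)], "Name2");      /* 102 % 10 = 2 */
    strcpy(table[Hash(104)], "Name3");      /* assumed ID 104, 104 % 10 = 4 */

    /* Searching : the hash function gives the exact position directly */
    printf("Employee 102 -> %s\n", table[Hash(102)]);
    return 0;
}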
• Minimize collisions
• Be easy and quick to compute
• Distribute key values evenly in the hash table
• Use all the information provided in the key
In this method, the key is squared and the middle part of the
result based on the number of digits required for addressing is taken
as the hash value. This method works well if the keys do not contain a
lot of leading and trailing zeros.
For example:
Map the key 2453 into a hash table of size 1000. Here, X = 2453.
X² = 6017209
Since a table of size 1000 needs a 3-digit address, the middle three digits, 172, are taken as the hash value, so the key 2453 is mapped to address 172.
For example:
H(4) = 4 % 4 = 0
This method involves splitting keys into two or more parts each
of which has the same length as the required address with the
possible exception of the last part and then adding the parts to form
the hash address. There are two types of folding method. They are:
For example:
Let X = 123203241.
Partition X into 123, 203 and 241, then add these three values: 123 + 203 + 241 = 567, which is taken as the hash value.
Key is broken into several parts and reverse the digits in the
outermost partitions and then add the partition to form the hash
value.
For example:
X = 123203241 is partitioned into 123, 203 and 241. Reversing the digits in the outermost partitions gives 321 and 142, so the hash value is 321 + 203 + 142 = 666.
Xi+1 = (A * Xi) % TableSize
This method extracts selected digits from the key and uses them as the address, or the extracted digits can be reversed to give the hash value.
For example:
For example:
Map the key (8465)10 in the range 0 to 999 using base 15.
(8465)10 = (2795)15
REVIEW QUESTIONS
6.1 INTRODUCTION
• Separate chaining
• Open addressing
o Linear probing
o Quadratic probing
o Double hashing
• Rehashing
• Extendible hashing
REVIEW QUESTIONS
7.1 INTRODUCTION
7.2 INSERTION
Insert 1 H(1) = 1 % 10 = 1
Insert 81 H(81) = 81 % 10 = 1
The element 81 collides to the same hash value 1. To place the value
at this position, perform the following:
Insert 4 H(4) = 4 % 10 = 4
Insert 64 H(64) = 64 % 10 = 4
Insert 25 H(25) = 25 % 10 = 5
Insert 16 H(16) = 16 % 10 = 6
Insert 36 H(36) = 36 % 10 = 6
Insert 9 H(9) = 9 % 10 = 9
Insert 49 H(49) = 49 % 10 = 9
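A sketch of separate chaining in C is given below, where every table slot holds a linked list of the keys that hash to it (the node structure and the insert-at-front policy are assumptions of the sketch):

#include <stdlib.h>

#define TABLE_SIZE 10

struct Node {
    int key;
    struct Node *next;
};

struct Node *table[TABLE_SIZE];       /* every slot starts out as NULL */

int Hash(int key)
{
    return key % TABLE_SIZE;
}

/* Insert a key at the front of the list for its hash value, so that,
   for example, 81 is chained in slot 1 together with 1, and 64 is
   chained in slot 4 together with 4. */
void Insert(int key)
{
    int index = Hash(key);
    struct Node *node = malloc(sizeof(struct Node));
    node->key = key;
    node->next = table[index];
    table[index] = node;
}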
7.3 FIND
7.4 ADVANTAGE
7.5 DISADVANTAGE
8.1 INTRODUCTION
hi(X) = (Hash(X) + F(i)) % N, with F(0) = 0.
• Linear Probing
• Quadratic Probing
• Double Hashing
hi(X) = (Hash(X) + i) % N
h0(X) = Hash(X) % N
h1(X) = (Hash(X) + 1) % N
h2(X) = (Hash(X) + 2) % N
Figure shows the result of inserting keys {89, 18, 49, 58, 69} into a
hash table using the same hash function as before and the collision
resolution strategy, F(i) = i.
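A sketch of insertion with linear probing in C is given below (the array representation of the table, the EMPTY marker and the table-full return value are simplifying assumptions):

#define TABLE_SIZE 10
#define EMPTY      (-1)    /* marker for an unused cell; keys are assumed non-negative */

int table[TABLE_SIZE];

void InitTable(void)
{
    int i;
    for (i = 0; i < TABLE_SIZE; i++)
        table[i] = EMPTY;
}

int Hash(int key)
{
    return key % TABLE_SIZE;
}

/* Try Hash(key), Hash(key) + 1, Hash(key) + 2, ... (mod the table size)
   until an empty cell is found.  Returns the cell used, or -1 if the
   table is full. */
int Insert(int key)
{
    int i, index;
    for (i = 0; i < TABLE_SIZE; i++)
    {
        index = (Hash(key) + i) % TABLE_SIZE;   /* hi(X) = (Hash(X) + i) % N */
        if (table[index] == EMPTY)
        {
            table[index] = key;
            return index;
        }
    }
    return -1;
}

With the keys {89, 18, 49, 58, 69}, 89 and 18 go to cells 9 and 8, 49 then probes forward to cell 0, 58 ends up in cell 1 and 69 in cell 2, which is the placement shown in the figure described above.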
Advantage
Disadvantage
h0(X) = Hash(X) % N
h1(X) = (Hash(X) + 1) % N
h2(X) = (Hash(X) + 4) % N
Figure shows the resulting open addressing hash table with this
collision function on the same input used in the linear probing
example.
Next, 58 collides at position 8. Then the cell one away is tried, but another collision occurs. A vacant cell is found at the next cell tried, which is 2² = 4 away. 58 is thus placed in cell 2.
Hash2(Key) = R - (Key % R), where R is a prime smaller than the table size (here R = 7).
Hash(49) = 49 % 10 = 9
Hash2(49) = 7 – (49 % 7) = 7 – 0 = 7
h1(49) = (9 + 1 * 7) % 10 = (9 + 7) % 10 = 16 % 10 = 6.
Hash(58) = 58 % 10 = 8
Hash2(58) = 7 – (58 % 7) = 7 – 2 = 5
h1(58) = (8 + 1 * 5) % 10 = (8 + 5) % 10 = 13 % 10 = 3.
Hash(69) = 69 % 10 = 9
Hash2(69) = 7 – (69 % 7) = 7 – 6 = 1
h1(69) = (9 + 1 * 1) % 10 = (9 + 1) % 10 = 10 % 10 = 0.
9.1 INTRODUCTION
If the table gets too full, the running time for the operations will
start taking too long and inserts might fail for closed hashing with
quadratic resolution. This can happen if there are too many deletions
intermixed with insertions. A solution, then, is to build another table
that is about twice as big (with associated new hash function) and
scan down the entire original hash table, computing the new hash
value for each (non-deleted) element and inserting it in the new
table. This entire operation is called rehashing.
Figure: Open addressing hash table with linear probing with input 13, 15, 6, 24
If 23 is inserted into the table, the resulting table in Figure will
be over 70 percent full.
HashTable Rehash(HashTable H)
{
    int i, OldSize;
    Cell *OldCells;

    OldCells = H->TheCells;
    OldSize = H->TableSize;

    /* Allocate a new table twice the size of the old one */
    H = InitializeTable(2 * OldSize);

    /* Scan the old table and re-insert every non-deleted element
       using the new table's hash function */
    for (i = 0; i < OldSize; i++)
        if (OldCells[i].Info == Legitimate)
            Insert(OldCells[i].Element, H);

    free(OldCells);
    return H;
}
REVIEW QUESTIONS
1. What is rehashing?
2. When should rehashing be implemented?
10.1 INTRODUCTION
Notice that all of the leaves not involved in the split are now
pointed to by two adjacent directory entries. Thus, although an entire
directory is rewritten, none of the other leaves are actually accessed.
Figure: Extendible hashing: after insertion of 100100 and directory
split
If the key 000000 is now inserted, then the first leaf is split,
generating two leaves with dL = 3. Since D = 3, the only change
required in the directory is the updating of the 000 and 001 pointers.
See Figure.
This very simple strategy provides quick access times for Insert
and Find operations on large databases.
REVIEW QUESTIONS