
UNIT-5

DESCRIPTIVE QUESTIONS:


1(a) What is time complexity?
Time Complexity

Time complexity refers to how the runtime of an algorithm


changes with the size of the input.

Common Time Complexities in C Programs

 O(1): Constant time


o Example: Accessing an array element by
index.

int arr[10];
int x = arr[5]; // O(1)

 O(n): Linear time


o Example: Traversing an array.

int sum = 0;
for (int i = 0; i < n; i++) {  // O(n)
    sum += arr[i];
}

 O(n²): Quadratic time


o Example: Nested loops for comparing
elements.

for (int i = 0; i < n; i++) {
    for (int j = 0; j < n; j++) {  // O(n^2)
        // Perform operation
    }
}

 O(log⁡n): Logarithmic time


o Example: Binary search.

int binarySearch(int arr[], int low, int high, int key) {  // O(log n)
    while (low <= high) {
        int mid = low + (high - low) / 2;
        if (arr[mid] == key) return mid;
        if (arr[mid] < key) low = mid + 1;
        else high = mid - 1;
    }
    return -1;
}

 O(nlog⁡n): Linearithmic time


o Example: Efficient sorting algorithms like
mergesort or quicksort.

Measuring Time Complexity in C

The exact runtime also depends on the system's clock,


compiler optimizations, and the specific implementation of
the algorithm. To measure execution time in C, you can use
the <time.h> library:

#include <stdio.h>
#include <time.h>

int main() {
    clock_t start, end;
    double cpu_time_used;

    start = clock();
    // Code to measure
    for (int i = 0; i < 100000; i++) { /* Some computation */ }
    end = clock();

    cpu_time_used = ((double)(end - start)) / CLOCKS_PER_SEC;
    printf("Execution time: %f seconds\n", cpu_time_used);

    return 0;
}
1(b) What is the best-case time complexity of binary search?

The best-case time complexity of Binary Search is:

O(1)
Explanation:

Binary Search works by dividing the search range into half


repeatedly. In the best case, the target element is found at
the very first comparison (i.e., the middle element of the
array). Since no further comparisons or recursive calls are
needed, the time complexity is constant O(1).

Algorithm Steps in Best Case

1. Start with a sorted array.


2. Calculate the middle index: mid = (low + high) / 2.
3. Compare the middle element with the target:
o If the middle element matches the target, the search stops.
4. Since this happens in one step, it is a constant-time operation.

Example

Consider a sorted array:

int arr[] = {2, 4, 6, 8, 10, 12, 14};


int target = 8;

 The middle element of the array is arr[3] = 8.


 Since arr[3] matches the target on the first
comparison, the search is completed in O(1).
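A minimal C sketch of this best case, reusing the iterative binarySearch shown in question 1(a) (the driver below is our own illustration): the first midpoint computed for this array is index 3, so the key 8 is returned after a single comparison.

#include <stdio.h>

int binarySearch(int arr[], int low, int high, int key) {
    while (low <= high) {
        int mid = low + (high - low) / 2;
        if (arr[mid] == key) return mid;
        if (arr[mid] < key) low = mid + 1;
        else high = mid - 1;
    }
    return -1;
}

int main(void) {
    int arr[] = {2, 4, 6, 8, 10, 12, 14};
    /* mid = (0 + 6) / 2 = 3, and arr[3] == 8, so only one comparison is needed. */
    printf("Found 8 at index %d\n", binarySearch(arr, 0, 6, 8));
    return 0;
}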

Summary:

 Best case: The target is found in the first comparison, resulting in O(1).
 Average case: Depends on the number of iterations required to locate the target, O(log n).
 Worst case: The target is not found, requiring O(log n) comparisons.

1(c) Devise an algorithm for selection sort and explain with an example.

Selection Sort is a simple, comparison-based sorting


algorithm. It works by dividing the list into two parts: the
sorted part and the unsorted part. Initially, the sorted
part is empty, and the unsorted part is the entire list. The
algorithm repeatedly selects the smallest (or largest)
element from the unsorted part and swaps it with the first
unsorted element, growing the sorted part and shrinking
the unsorted part in each pass.

How Selection Sort Works:

1. Starting from the first element, find the smallest


(or largest) element in the unsorted part of the list.
2. Swap the smallest element with the first element of
the unsorted part.
3. Move the boundary of the sorted part by one element (i.e., consider the first element of the unsorted part as sorted).
4. Repeat steps 1-3 for the remaining unsorted part of
the list.
5. Continue the process until the entire list is sorted.

Selection Sort Algorithm:

1. Start with the first element in the list.


2. Find the smallest element in the unsorted part of
the list (from the current position to the last
element).
3. Swap the smallest element with the element at the
current position.
4. Move the boundary of the sorted portion by one
step (i.e., move to the next element).
5. Repeat steps 2-4 for the remaining unsorted
portion of the list until the entire list is sorted.

Pseudocode:

function selectionSort(list):
    n = length of list
    for i = 0 to n-1:
        minIndex = i                       // Assume the current element is the smallest
        for j = i+1 to n-1:                // Look for a smaller element in the remaining unsorted part
            if list[j] < list[minIndex]:
                minIndex = j               // Update minIndex if a smaller element is found
        if minIndex != i:
            swap(list[i], list[minIndex])  // Swap the smallest element with the current element
    return list
Explanation:

1. Outer loop (i): This loop iterates through the list,


starting from the first element, and moves forward,
expanding the sorted part of the list.
2. Inner loop (j): The inner loop finds the minimum
element in the unsorted part of the list, starting
from the element right after i to the last element.
3. Swap: After finding the smallest element, it is
swapped with the first unsorted element, ensuring
that the smallest element gets placed at the
beginning of the unsorted part.
4. Repeat: The outer loop then continues, and the
sorted portion of the list grows until the entire list is
sorted.
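A C sketch of the pseudocode above (an illustrative implementation; the function and variable names are our own, not part of any standard library):

#include <stdio.h>

// Selection sort: repeatedly select the smallest element of the
// unsorted part and swap it to the front of that part.
void selectionSort(int list[], int n) {
    for (int i = 0; i < n - 1; i++) {
        int minIndex = i;                    // assume the current element is the smallest
        for (int j = i + 1; j < n; j++) {    // scan the unsorted part
            if (list[j] < list[minIndex])
                minIndex = j;
        }
        if (minIndex != i) {                 // swap only when a smaller element was found
            int tmp = list[i];
            list[i] = list[minIndex];
            list[minIndex] = tmp;
        }
    }
}

int main(void) {
    int a[] = {64, 25, 12, 22, 11};
    int n = sizeof(a) / sizeof(a[0]);
    selectionSort(a, n);
    for (int i = 0; i < n; i++) printf("%d ", a[i]);   // prints: 11 12 22 25 64
    printf("\n");
    return 0;
}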

Example Walkthrough:

Let’s sort the list [64, 25, 12, 22, 11] using Selection Sort:

1. Initial List: [64, 25, 12, 22, 11]


o Find the smallest element in the entire list:
11.
o Swap 11 with 64.
o List after 1st pass: [11, 25, 12, 22, 64]
2. Next, sort the remaining list [25, 12, 22, 64].
o Find the smallest element: 12.
o Swap 12 with 25.
o List after 2nd pass: [11, 12, 25, 22, 64]
3. Next, sort the remaining list [25, 22, 64].
o Find the smallest element: 22.
o Swap 22 with 25.
o List after 3rd pass: [11, 12, 22, 25, 64]
4. Next, sort the remaining list [25, 64].
o Find the smallest element: 25 (no swap
needed).
o List after 4th pass: [11, 12, 22, 25, 64]
5. Only one element left (64), no need to do anything.

Final Sorted List: [11, 12, 22, 25, 64]

Second Example: Working of Selection Sort

1. Set the first element as minimum.
2. Compare minimum with the second element. If the second element is smaller than minimum, assign the second element as minimum.
3. Compare minimum with the third element. Again, if the third element is smaller, assign the third element as minimum; otherwise do nothing. The process goes on until the last element. (Figure: compare minimum with the remaining elements.)
4. After each iteration, minimum is placed in front of the unsorted list.
5. Swap the first unsorted element with minimum.
6. For each iteration, indexing starts from the first unsorted element. Steps 1 to 3 are repeated until all the elements are placed at their correct positions. (Figures: the first, second, third, and fourth iterations.)
Time Complexity:

 Best, Average, and Worst Case: O(n²) where n is


the number of elements in the list.
o This is because the algorithm uses two
nested loops: the outer loop runs n times, and
the inner loop runs n-1, n-2, ..., 1 times.

Space Complexity:

 O(1): Selection Sort is an in-place sorting


algorithm, meaning it only requires a constant
amount of extra space (apart from the input list).

Summary:

 Selection Sort is a simple but inefficient algorithm


for sorting large datasets due to its O(n²) time
complexity.

 It is in-place and requires no additional memory,


which makes it space-efficient. However, it is slow
for large datasets and not suitable for performance-
critical applications.

Advantages:
 Simplicity: Easy to understand and implement.
 In-place sorting: No additional memory is required apart from a few variables.
 Efficient for small datasets: Works well for smaller lists where performance is not a critical factor.
 Low memory usage: Only requires a constant amount of extra space (O(1)).
 Stable for equal elements: Selection Sort can be made stable with slight modifications (though it is typically not stable in its default form).

Disadvantages:
 Inefficient for large datasets: O(n²) time complexity makes it slow for large lists.
 Not adaptive: Always performs the same number of comparisons regardless of the input's initial order.
 No early termination: The algorithm doesn't stop early even if the list is already sorted.
 Poor worst-case performance: Its time complexity remains O(n²) in the worst case.
 Not suitable for large real-world data: The quadratic time complexity makes it unsuitable for large-scale applications.

Selection Sort is a simple and easy-to-understand sorting algorithm, but it is not efficient for large datasets because of its quadratic time complexity (O(n²)). It is best suited for small datasets where simplicity and low memory usage are important.
2(a) What is space complexity?

Space Complexity

Space complexity refers to the amount of memory required by an algorithm, including:

 Fixed Part: Memory required for constants, program


code, and input size.
 Variable Part: Memory required for variables,
recursion stack, dynamic allocations, etc.

Space Complexity in C

 Constant Space O(1):


o Example: Algorithm that uses a fixed number
of variables.

int x = 10, y = 20; // O(1)
int z = x + y;      // O(1)

 Linear Space O(n):


o Example: Using an array to store n elements.

int *arr = malloc(n * sizeof(int)); // O(n)

 Logarithmic Space O(log⁡n):


o Example: Space required for recursion in
divide-and-conquer algorithms like binary
search.

int binarySearch(int arr[], int low, int high, int key) {
    if (low > high) return -1;
    int mid = low + (high - low) / 2;
    if (arr[mid] == key) return mid;
    else if (arr[mid] < key) return binarySearch(arr, mid + 1, high, key);
    else return binarySearch(arr, low, mid - 1, key);
}  // O(log n) due to the recursion stack

 Quadratic Space O(n²):


o Example: Matrix-based algorithms or 2D
dynamic programming.

int matrix[n][n]; // O(n²)


Measuring Space Complexity in C

C doesn't directly provide tools to measure memory usage, but you can use the sizeof operator to estimate the size of variables or structures in bytes:

#include <stdio.h>

int main() {
    int arr[10];
    printf("Size of array: %zu bytes\n", sizeof(arr)); // Array size in bytes
    printf("Size of int: %zu bytes\n", sizeof(int));   // Size of the int type

    return 0;
}

To analyze memory dynamically, tools like Valgrind


(Linux) or Memory Profilers can be used to detect
memory usage and leaks.
2(b) What are the best-, average-, and worst-case time complexities of linear search?

Linear Search is a simple searching algorithm that checks


each element in the array sequentially until the target
element is found or the end of the array is reached. Its time
complexity depends on the position of the target element in
the array.

Time Complexities

1. Best Case: O(1)
o The target element is the first element of the array.
o The algorithm finds the element in a single comparison.
2. Average Case: O(n)
o The target element is located randomly in the array.
o On average, the algorithm will check half the elements, resulting in approximately n/2 comparisons. However, in Big-O notation, constants are ignored, so it simplifies to O(n).
3. Worst Case: O(n)
o The target element is the last element of the array, or it is not present in the array at all.
o The algorithm will check all n elements in these cases.

Explanation

 Best Case Example:


Array: [5, 8, 12, 3, 9]
Target: 5 (first element).
Comparisons: 1.
 Average Case Example:
Array: [5, 8, 12, 3, 9]
Target: 12 (middle element).
Comparisons: approximately n/2.
 Worst Case Example:
Array: [5, 8, 12, 3, 9]
Target: 9 (last element) or 7 (not in the array).
Comparisons: n.

Summary Table

Case    | Time Complexity | Reason
Best    | O(1)            | Target found at the first position.
Average | O(n)            | Target found halfway through the array on average.
Worst   | O(n)            | Target is the last element or not present at all.

Linear Search is straightforward and works on both sorted


and unsorted arrays, but it is inefficient for large datasets
compared to more advanced algorithms like Binary Search.
2(c) Explain bubble sort with an example.

Bubble Sort is a simple comparison-based sorting algorithm. It repeatedly steps through the list, compares adjacent elements, and swaps them if they are in the wrong order. This process is repeated until the list is sorted.

Steps:

1. Starting from the first element, compare the current


element with the next element.
2. If the current element is greater than the next
element, swap them.
3. Move to the next pair of elements and repeat the
process until you reach the end of the list.
4. After one full pass, the largest element has
"bubbled" up to the correct position.
5. Repeat the process for the rest of the list (excluding
the last sorted elements) until no more swaps are
needed.

Bubble Sort Algorithm

1. Input: A list of elements to be sorted.


2. Output: The sorted list.

Steps:

1. For each element in the list (from the first to the


second-to-last):
o Set a flag swapped to False.
o For each pair of adjacent elements:
 If the current element is greater than
the next element, swap them.
 Set the flag swapped to True to
indicate that a swap has been made.
o If no swaps were made during the pass (i.e.,
swapped is still False), stop the algorithm
early because the list is already sorted.
2. Repeat step 1 until no more swaps are needed.

Pseudocode:

function bubbleSort(list):
    n = length of list
    for i = 0 to n-1:
        swapped = false
        for j = 0 to n-i-2:
            if list[j] > list[j+1]:
                swap(list[j], list[j+1])
                swapped = true
        if not swapped:
            break
    return list
Explanation:

 The outer loop ensures that the process continues


until the list is sorted.
 The inner loop compares each pair of adjacent
elements and swaps them if they are in the wrong
order.
 The flag swapped keeps track of whether any
swaps were made during a pass. If no swaps are
made in a pass, the algorithm exits early because
the list is already sorted.
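A C sketch of the pseudocode above (illustrative only; the names bubbleSort, list, and swapped are our own):

#include <stdio.h>
#include <stdbool.h>

// Bubble sort with early termination: stop when a full pass makes no swaps.
void bubbleSort(int list[], int n) {
    for (int i = 0; i < n - 1; i++) {
        bool swapped = false;
        for (int j = 0; j < n - i - 1; j++) {   // the last i elements are already in place
            if (list[j] > list[j + 1]) {
                int tmp = list[j];
                list[j] = list[j + 1];
                list[j + 1] = tmp;
                swapped = true;
            }
        }
        if (!swapped) break;                     // already sorted
    }
}

int main(void) {
    int a[] = {5, 3, 8, 4, 2};
    int n = sizeof(a) / sizeof(a[0]);
    bubbleSort(a, n);
    for (int i = 0; i < n; i++) printf("%d ", a[i]);   // prints: 2 3 4 5 8
    printf("\n");
    return 0;
}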

Example:

Let’s sort the list: [5, 3, 8, 4, 2] using Bubble Sort.

1. First Pass:
o Compare 5 and 3 → Swap (since 5 > 3) →
New list: [3, 5, 8, 4, 2]
o Compare 5 and 8 → No swap (since 5 < 8)
o Compare 8 and 4 → Swap (since 8 > 4) →
New list: [3, 5, 4, 8, 2]
o Compare 8 and 2 → Swap (since 8 > 2) →
New list: [3, 5, 4, 2, 8]
o Now, the largest element (8) is in its correct
position.
2. Second Pass:
o Compare 3 and 5 → No swap (since 3 < 5)
o Compare 5 and 4 → Swap (since 5 > 4) →
New list: [3, 4, 5, 2, 8]
o Compare 5 and 2 → Swap (since 5 > 2) →
New list: [3, 4, 2, 5, 8]

o Now, 5 is in its correct position.
3. Third Pass:
o Compare 3 and 4 → No swap (since 3 < 4)
o Compare 4 and 2 → Swap (since 4 > 2) →
New list: [3, 2, 4, 5, 8]
o Now, 4 is in its correct position.
4. Fourth Pass:
o Compare 3 and 2 → Swap (since 3 > 2) →
New list: [2, 3, 4, 5, 8]
o Now, the list is fully sorted.

Final Sorted List: [2, 3, 4, 5, 8]

Second Example: Working of Bubble Sort

Suppose we are trying to sort the elements in ascending order.

1. First Iteration (Compare and Swap)
1. Starting from the first index, compare the first and the second elements.
2. If the first element is greater than the second element, they are swapped.
3. Now, compare the second and the third elements. Swap them if they are not in order.
4. The above process goes on until the last element. (Figure: compare the adjacent elements.)

2. Remaining Iterations
The same process goes on for the remaining iterations. After each iteration, the largest element among the unsorted elements is placed at the end. (Figure: put the largest element at the end.)
In each iteration, the comparison takes place up to the last unsorted element. (Figure: compare the adjacent elements.)
The array is sorted when all the unsorted elements are placed at their correct positions. (Figure: the array is sorted when all elements are in the right order.)
Time Complexity:

 Best Case (already sorted list): O(n)


 Average and Worst Case: O(n²), where n is the
number of elements in the list.

Bubble Sort is not the most efficient sorting algorithm for


large datasets, but it is easy to understand and implement.

Advantages:
 Simplicity: Easy to understand and implement.
 In-place sorting: No additional space required, other than a few temporary variables.
 Adaptive (best case): Can perform faster for nearly sorted lists (O(n) time complexity in the best case).
 Stable sort: Maintains the relative order of equal elements.
 Early termination: Can stop early if no swaps are made during a pass, improving performance in some cases.

Disadvantages:
 Inefficient for large lists: O(n²) time complexity makes it slow for large datasets.
 Unnecessary comparisons: Compares elements even if they are already in the correct order.
 Slow: Time complexity makes it slower than other sorting algorithms for large inputs.
 Poor worst-case performance: O(n²) in the worst case, which is inefficient.
 Not adaptive in the worst case: Still performs poorly in the worst-case scenario, even with early termination.
3(a) What is sorting? Give the different types of sorting techniques.

Sorting is the process of arranging the elements of a list or


array in a specific order, either ascending or descending.
Sorting helps organize data efficiently, enabling faster
searches, comparisons, and data analysis.

Types of Sorting Techniques

Sorting techniques are broadly categorized into two groups:

1. Internal Sorting: Sorting is performed entirely in


main memory. Used when the dataset can fit into
memory.

Examples: Bubble Sort, Insertion Sort, Selection


Sort, Merge Sort, Quick Sort, etc.
2. External Sorting: Sorting is performed using
external storage (like disk) because the dataset is too
large to fit in memory.

Examples: External Merge Sort, Polyphase Merge


Sort, etc.

Choosing the Right Sorting Technique

 Use Merge Sort or Quick Sort for large datasets.


 Use Insertion Sort or Bubble Sort for small
datasets.
 Use Counting Sort or Radix Sort for datasets with
small range values.

3(b) What is the difference between searching and sorting?

Difference Between Searching and Sorting
21
Aspect Searching Sorting
Searching is the Sorting is the process
process of finding of arranging elements
Definition the location of a in a specific order
specific element in a (ascending or
dataset. descending).
Locate a specific Organize the entire
Objective
target element. dataset.
Input may need to be
Input Input can be in any sorted for efficient
Requirement order. searching (e.g.,
Binary Search).
Index or location of A dataset where
the target element, or elements are ordered
Output
indication that it is as per the specified
not found. criterion.
- Linear Search
- Bubble Sort
- Binary Search
- Merge Sort
Types - Jump Search
- Quick Sort
- Interpolation
- Insertion Sort
Search
Typically involves
Involves rearranging
Performance finding one or a
the entire dataset.
subset of elements.
Time complexity Time complexity
varies based on the varies from O(n) to
Complexity
algorithm O(1) to O(nlog⁡n) or
O(log⁡n) or O(n). O(n^2).
Searching algorithms
Sorting ensures the
like Binary Search
data is ordered for
Data Order require sorted data;
easier analysis and
others (e.g., Linear
searching.
Search) do not.
Searching for a Alphabetizing a list
number in a contact of names or
Examples
list or finding a word arranging numbers in
in a dictionary. increasing order.
Applications - Data retrieval (e.g., - Organizing datasets
22
database queries) - Preparing data for
- Keyword lookups analysis
- Searching files - Efficient searching

Key Relationship

 Sorting often precedes Searching: For example,


sorting data can enable faster search techniques like
Binary Search.

3(c) Devise an algorithm for linear search and explain with an example.

Linear Search (also known as Sequential Search) is a


simple searching algorithm used to find a particular
element in a list. It works by sequentially checking each
element of the list from the beginning until the target
element is found or the entire list is traversed.

 Best Case: The element is found at the first


position.
 Worst Case: The element is not in the list or it is at
the last position.

How Linear Search Works:


1. Start from the first element of the list.
2. Compare the target element with the current
element.
3. If the current element matches the target, return
the index of that element.
4. If not, move to the next element in the list.
5. Repeat steps 2-4 until the target element is found or
you reach the end of the list.

Linear Search Algorithm:

1. Start from the first element of the list.


2. Compare the current element with the target value.
3. If the current element is equal to the target, return
the index of that element.
4. If the current element is not equal to the target,
move to the next element.
5. Repeat steps 2-4 for all elements in the list until
the target is found or the entire list has been
searched.
6. If the target is not found after checking all
elements, return -1 to indicate that the target is not
present.

Pseudocode:

function linearSearch(list, target):
    for i = 0 to length of list - 1:
        if list[i] == target:
            return i   // Element found, return the index
    return -1          // Element not found, return -1
Explanation:

1. Initialization: The algorithm starts by setting the


current element to the first item in the list.
2. Comparison: In each iteration, the algorithm
compares the current element (list[i]) with the
target value.
3. Element Found: If a match is found, the index of
the current element is returned.
4. Iteration: If no match is found, the algorithm
continues to the next element.
5. No Match: If the target is not found by the time the
loop finishes, the algorithm returns -1 to indicate
that the target is not in the list.
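A C sketch of the pseudocode above (a minimal, illustrative version; the function and variable names are our own):

#include <stdio.h>

// Linear (sequential) search: return the index of target, or -1 if absent.
int linearSearch(const int list[], int n, int target) {
    for (int i = 0; i < n; i++) {
        if (list[i] == target)
            return i;        // element found, return the index
    }
    return -1;               // element not found
}

int main(void) {
    int a[] = {4, 2, 9, 11, 5};
    int n = sizeof(a) / sizeof(a[0]);
    printf("Index of 9: %d\n", linearSearch(a, n, 9));   // prints 2
    printf("Index of 7: %d\n", linearSearch(a, n, 7));   // prints -1
    return 0;
}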

Example Walkthrough:

Consider the following list: [4, 2, 9, 11, 5], and we want


to search for the target element 9.

1. Initial List: [4, 2, 9, 11, 5]


o Start from the first element: 4
o 4 is not equal to 9, so move to the next
element.
2. Next Element: 2
o 2 is not equal to 9, so move to the next
element.
3. Next Element: 9
o 9 is equal to the target element.
o Return the index 2 because 9 is found at
index 2 (0-based index).

The algorithm would stop at index 2 and return 2.

Output: 2

If we were to search for a number that's not in the list, say


7, the algorithm would check all the elements and then
return -1 after the loop finishes.

Example: Searching for 7

1. Compare 7 with 4 (no match).


2. Compare 7 with 2 (no match).
3. Compare 7 with 9 (no match).
4. Compare 7 with 11 (no match).
5. Compare 7 with 5 (no match).
6. End of list reached, return -1 because 7 is not in
the list.

Output: -1

Time Complexity:

 Best Case: O(1), if the target is found at the first


position.
 Worst Case: O(n), if the target is not found or is
the last element in the list.
 Average Case: O(n), as on average, the search will
check about half of the elements.

Space Complexity:
 O(1): Linear Search is an in-place algorithm and
does not require any extra space apart from the
input list.
Advantages:
 Simplicity: Easy to understand and implement.
 No sorting required: Works on both sorted and unsorted lists.
 Versatility: Can search through any data structure (arrays, linked lists).
 Space-efficient: Uses constant space (O(1)), as it doesn't require any additional memory.
 Works for small lists: Can be fast enough when the dataset is small.

Disadvantages:
 Inefficient for large lists: O(n) time complexity makes it slow for large datasets.
 Slower than other search algorithms: Algorithms like Binary Search are faster for sorted lists.
 Linear time complexity: Every element needs to be checked, leading to longer search times.
 Not ideal for large-scale data: For large datasets, its performance is less efficient compared to more advanced search algorithms.
 No advantage with sorted data: Unlike Binary Search, there is no improvement when the list is sorted.

Summary:

Linear Search is a simple and easy-to-implement


algorithm for searching an element in a list. While it is
effective for small lists or when the data is unsorted, its
O(n) time complexity makes it inefficient for large
datasets compared to more advanced algorithms like
Binary Search.
4(a) Give any two applications of sorting.

Two key applications of sorting:

1. Efficient Searching
 Description: Sorting helps improve the efficiency of search algorithms.
 Example: In a binary search, the data must be sorted first. Once sorted, the binary search algorithm can quickly locate an element by dividing the dataset in half, which significantly reduces the search time from O(n) in linear search to O(log n).
 Real-World Use Case: Searching for a name in a phone book or finding a specific product in an online store (when the list of products is sorted).

2. Data Analysis and Reporting

 Description: Sorting is used to organize data for


analysis, reporting, and decision-making.
 Example: In a sales report, sorting the data by date,
region, or amount helps identify trends and patterns
more easily, like the highest-selling products or sales
by region.
 Real-World Use Case: Sorting financial transactions
by date to detect trends, or sorting test scores to rank
students in educational applications.

Sorting is essential for organizing and processing data


efficiently, making it easier to analyze, search, or present.

4(b) Name the slowest sorting technique in terms of time.

The slowest sorting technique in terms of time complexity is typically Bubble Sort, Selection Sort, or Insertion Sort, all of which have a worst-case time complexity of O(n²).

However, among these, Bubble Sort is often considered the


slowest in practical scenarios because it requires multiple
passes through the dataset, performing unnecessary swaps
even when the array is nearly sorted.
Why Bubble Sort is the Slowest:

 In each pass, Bubble Sort compares adjacent


elements and swaps them if they are in the wrong
order. This is done repeatedly for the entire list,
leading to redundant comparisons and swaps.
 While algorithms like Selection Sort and Insertion
Sort might perform fewer swaps in practice, Bubble
Sort's repeated comparisons and swaps make it particularly inefficient for larger datasets.

Time Complexity Summary:

 Best Case: O(n) (for nearly sorted data)


 Average/Worst Case: O(n²)

While there are other algorithms with O(n²) complexity (like Selection Sort), Bubble Sort is slower due to its repeated swapping.
4(c) Explain insertion sort with an example.

Insertion Sort is a simple sorting algorithm that builds the


sorted list one element at a time by inserting elements from
the unsorted part into the correct position within the sorted
part. It works similarly to how you might sort playing cards
in your hands: you take one card at a time and place it in its
proper position relative to the already-sorted cards.

How Insertion Sort Works:

1. Start with the second element in the list (since a single element is already "sorted").
2. Compare this element with the element(s) before it.
3. If the element is smaller than the previous element,
shift the larger element(s) to the right to make room.
4. Insert the current element into the correct position.
5. Repeat the process for all remaining unsorted
elements until the entire list is sorted.

Insertion Sort Algorithm:

1. Start by considering the second element in the list as


the current element (since a single element is trivially
sorted).
2. Compare this current element with the elements before it.
3. If the current element is smaller than the element
before it, shift the larger elements one position to the
right to make space for the current element.
4. Insert the current element in the correct position
where it is greater than the element before it, but
smaller than the element after it.
5. Move to the next element in the list and repeat steps
2–4 until the entire list is sorted.

Pseudocode:

function insertionSort(list):
    for i = 1 to length of list - 1:
        key = list[i]          // The element to be inserted into the sorted part of the list
        j = i - 1              // The index of the element just before the key
        // Move elements of list[0..i-1] that are greater than key one position ahead
        while j >= 0 and list[j] > key:
            list[j + 1] = list[j]
            j = j - 1
        list[j + 1] = key      // Insert the key in the correct position
    return list

Explanation of the Algorithm:

1. Outer Loop (i): The outer loop runs from index 1 to


n-1, where n is the size of the list. It processes each
element in the list starting from the second element.
2. Key Element: For each element in the list, it is
treated as the "key" and compared to the elements
before it.
3. Shifting: The inner while loop checks if any
elements before the key are greater than it. If so,
these elements are shifted one position to the right.
4. Insert the Key: Once the correct position is found
(when the element before it is smaller or the start of
the list is reached), the key is inserted into that
position.
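A C sketch of the pseudocode above (illustrative; the function and variable names are our own):

#include <stdio.h>

// Insertion sort: grow a sorted prefix by inserting each element into place.
void insertionSort(int list[], int n) {
    for (int i = 1; i < n; i++) {
        int key = list[i];       // element to insert into the sorted part
        int j = i - 1;
        while (j >= 0 && list[j] > key) {   // shift larger elements to the right
            list[j + 1] = list[j];
            j--;
        }
        list[j + 1] = key;       // insert the key at its correct position
    }
}

int main(void) {
    int a[] = {5, 2, 9, 1, 5, 6};
    int n = sizeof(a) / sizeof(a[0]);
    insertionSort(a, n);
    for (int i = 0; i < n; i++) printf("%d ", a[i]);   // prints: 1 2 5 5 6 9
    printf("\n");
    return 0;
}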

Example Walkthrough:

Let’s consider the list [5, 2, 9, 1, 5, 6] and sort it using


Insertion Sort.

1. Start with element at index 1: Compare 2 with 5.


Since 2 is smaller, shift 5 to the right and insert 2 at
index 0.

List after 1st pass: [2, 5, 9, 1, 5, 6]

2. Next element at index 2: Compare 9 with 5. Since 9


is larger than 5, no shifting is needed.

List after 2nd pass: [2, 5, 9, 1, 5, 6]

3. Next element at index 3: Compare 1 with 9, 5, and


2. Since 1 is smaller, shift all elements one position
to the right and insert 1 at index 0.

List after 3rd pass: [1, 2, 5, 9, 5, 6]

4. Next element at index 4: Compare 5 with 9. Since 5


is smaller, shift 9 to the right and insert 5 at index 3.

List after 4th pass: [1, 2, 5, 5, 9, 6]

5. Next element at index 5: Compare 6 with 9 and 5.


Since 6 is smaller than 9 but greater than 5, shift 9 to
the right and insert 6 at index 4.

List after 5th pass: [1, 2, 5, 5, 6, 9]

Final Sorted List: [1, 2, 5, 5, 6, 9]

Second Example: Working of Insertion Sort

Suppose we need to sort the following array (the initial array is shown in the figure).

1. The first element in the array is assumed to be sorted. Take the second element and store it separately in key. Compare key with the first element. If the first element is greater than key, then key is placed in front of the first element.
2. Now, the first two elements are sorted. Take the third element and compare it with the elements on its left. Place it just behind the element smaller than it. If there is no element smaller than it, place it at the beginning of the array. (Figure: place 1 at the beginning.)
3. Similarly, place every unsorted element at its correct position. (Figures: place 4 behind 1; place 3 behind 1, and the array is sorted.)

Time Complexity:

 Best Case: O(n), when the list is already sorted (no


shifting needed).
 Worst Case: O(n²), when the list is in reverse order
and every element needs to be compared and shifted.
 Average Case: O(n²), as it involves comparing and
shifting elements for the majority of cases.

Space Complexity:

 O(1): Insertion Sort is an in-place sorting algorithm, meaning it only uses a constant amount of extra space.

Advantages:
 Simple to implement: Easy to understand and write.
 Efficient for small datasets: Can be faster than more complex algorithms like Merge Sort for small or nearly sorted datasets.
 Stable sorting: Elements with equal values retain their original relative order.
 In-place sorting: Doesn't require additional memory beyond a few variables.
 Adaptive: Performs well if the data is already sorted or nearly sorted.

Disadvantages:
 Inefficient for large datasets: Time complexity is O(n²), making it slow for large datasets.
 Slower than more advanced algorithms: Algorithms like Merge Sort and Quick Sort are faster for larger datasets.
 Not suitable for large unsorted data: Performance degrades as the dataset grows.
 Shifts elements: Elements may need to be moved multiple times, causing inefficiency in certain cases.
 Worst-case time complexity of O(n²): Can be slow when the data is in reverse order.

Summary:

Insertion Sort is an in-place, simple sorting algorithm that


works by building a sorted list one element at a time.

Insertion Sort is an efficient sorting algorithm that is


particularly useful for small datasets or nearly sorted data.

It is simple and has minimal space requirements, but it is


inefficient for large datasets due to its O(n²) time
complexity.
4(d) Explain binary search with a suitable example.

Binary Search is a highly efficient searching algorithm that works on sorted arrays or lists. The basic idea is to repeatedly divide the search interval in half. If the target value is less than the value in the middle of the interval, the search continues in the left half; if the target value is greater, the search continues in the right half. This process is repeated until the target element is found or the interval is empty.

How Binary Search Works:

1. Initial Setup: Start with two pointers, one pointing to


the beginning of the list (low) and one pointing to the
end of the list (high).
2. Find the Middle Element: Calculate the middle
element of the list using middle = (low + high) / 2.
3. Comparison:
o If the middle element is equal to the target,
return its index.
o If the middle element is greater than the target,
narrow the search to the left half by setting
high = middle - 1.
o If the middle element is less than the target,
narrow the search to the right half by setting
low = middle + 1.
4. Repeat the process until the target is found or the
low pointer exceeds the high pointer (indicating that
the target is not in the list).

Binary Search Algorithm:

1. Initialize two variables:


o low = 0 (the first index of the list)
o high = length of the list - 1 (the last index of
the list)
2. While low ≤ high:
o Calculate the middle index: middle = (low +
high) / 2
o Compare the element at middle index with the
target:
 If list[middle] == target, return middle
(target found).
 If list[middle] > target, set high = middle
- 1 (search in the left half).
 If list[middle] < target, set low = middle
+ 1 (search in the right half).
3. If the loop ends and the target is not found, return -1
(indicating the target is not in the list).

Pseudocode for Binary Search:

function binarySearch(list, target):
    low = 0
    high = length of list - 1
    while low <= high:
        middle = (low + high) / 2
        if list[middle] == target:
            return middle            // Target found
        else if list[middle] > target:
            high = middle - 1        // Narrow the search to the left half
        else:
            low = middle + 1         // Narrow the search to the right half
    return -1                        // Target not found
Explanation:

1. The algorithm starts by setting the initial search range


from the entire list (low = 0 and high = length - 1).
2. In each iteration, it calculates the middle element of
the current range.
3. It compares the middle element with the target:
o If the middle element matches the target, it
returns the index of the middle element.
o If the middle element is greater than the target,
the search range is reduced to the left half of
the list.
o If the middle element is smaller than the target,
the search range is reduced to the right half.
4. The search continues until the target is found or the
range becomes invalid (low > high), in which case
the function returns -1.
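A C sketch of the pseudocode above, applied to the walkthrough example that follows (illustrative only; the function and variable names are our own):

#include <stdio.h>

// Iterative binary search on a sorted array: return the index of target, or -1.
int binarySearch(const int list[], int n, int target) {
    int low = 0, high = n - 1;
    while (low <= high) {
        int middle = low + (high - low) / 2;          // avoids overflow of (low + high)
        if (list[middle] == target) return middle;    // target found
        else if (list[middle] > target) high = middle - 1;   // search the left half
        else low = middle + 1;                               // search the right half
    }
    return -1;   // target not found
}

int main(void) {
    int a[] = {1, 3, 5, 7, 9, 11, 13, 15, 17};
    int n = sizeof(a) / sizeof(a[0]);
    printf("Index of 7: %d\n", binarySearch(a, n, 7));   // prints 3
    return 0;
}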

Example Walkthrough:

Consider the sorted list: [1, 3, 5, 7, 9, 11, 13, 15, 17], and
we want to find the target 7.

 Initial Setup: low = 0, high = 8.
 First Iteration:
o middle = (0 + 8) / 2 = 4.
o list[4] = 9, which is greater than 7, so we
search the left half (high = middle - 1 = 3).
 Second Iteration:
o low = 0, high = 3.
o middle = (0 + 3) / 2 = 1.
o list[1] = 3, which is less than 7, so we search
the right half (low = middle + 1 = 2).
 Third Iteration:
o low = 2, high = 3.
o middle = (2 + 3) / 2 = 2.
o list[2] = 5, which is less than 7, so we search
the right half (low = middle + 1 = 3).
 Fourth Iteration:
o low = 3, high = 3.
o middle = (3 + 3) / 2 = 3.
o list[3] = 7, which is equal to the target, so we
return the index 3.

Output: 3 (the index where 7 is found).

Time Complexity of Binary Search:

 Best Case: O(1) — When the middle element is the


target.
 Worst Case: O(log n) — The list is halved at each
step, so the number of elements to search is reduced
logarithmically.
 Average Case: O(log n) — In general, the search
reduces the list size by half each time, making it
logarithmic in nature.

Space Complexity of Binary Search:

 O(1): Binary Search is an in-place algorithm that only requires a constant amount of extra space (for the variables low, high, and middle).

Advantages:
 Efficient for large datasets: With a time complexity of O(log n), Binary Search is much faster than linear search for large sorted datasets.
 Fast search: It reduces the search space by half in each step, making it a very efficient search algorithm.
 Low space complexity: It only requires a constant amount of extra space (O(1)) apart from the input data.
 Works with large data: Especially useful for searching in large datasets or databases where sorting has already been done.
 Logarithmic time complexity: O(log n) makes it much faster than linear search for large datasets.

Disadvantages:
 Requires sorted data: Binary Search can only be applied to sorted lists. If the data is unsorted, it must be sorted first, which can add to the overhead.
 Not suitable for linked lists: Linked lists don't allow random access, so Binary Search is inefficient on them compared to arrays.
 Overhead for small datasets: For small datasets, Binary Search may have higher overhead compared to simpler algorithms like Linear Search.
 Fixed structure requirement: Requires the dataset to be static, as changes to the data (insertions or deletions) can require re-sorting.
 Cannot be used on unsorted data: The dataset needs to be sorted beforehand, which can be a limitation if the data changes frequently.
5(a) What are the advantages of binary search over linear search?

Advantages of Binary Search over Linear Search

1. Faster Search Time:
o Binary search has a time complexity of O(log n), which is significantly faster than linear search's O(n) for large datasets.
o In binary search, the number of elements to be checked is halved with each iteration.
2. Efficient for Sorted Data:
o Binary search is specifically designed for sorted datasets, taking advantage of the order to quickly eliminate half of the remaining elements at each step.
3. Predictable Steps:
o The maximum number of steps in binary search is about log2(n), making it more predictable and efficient for large inputs.
4. Better Performance on Large Data:
o For datasets with millions of elements, binary search performs significantly fewer comparisons than linear search, which may require up to n comparisons.

When to Use Binary Search Over Linear Search

 When the Data is Sorted: Binary search requires the


data to be in sorted order.
 When Performance Matters: Binary search is the
better choice for large datasets due to its logarithmic
time complexity.

5(b) What is sorting? Give different sorting techniques.

Sorting is the process of arranging the elements of a list or array in a specific order, either ascending or descending. Sorting helps organize data efficiently, enabling faster searches, comparisons, and data analysis.

Types of Sorting Techniques

Sorting techniques are broadly categorized into two groups:

1. Internal Sorting: Sorting is performed entirely in main memory. Used when the dataset can fit into memory.
Examples: Bubble Sort, Insertion Sort, Selection Sort, Merge Sort, Quick Sort, etc.

2. External Sorting: Sorting is performed using external storage (like disk) because the dataset is too large to fit in memory.
Examples: External Merge Sort, Polyphase Merge Sort, etc.

Choosing the Right Sorting Technique

 Use Merge Sort or Quick Sort for large datasets.
 Use Insertion Sort or Bubble Sort for small datasets.
 Use Counting Sort or Radix Sort for datasets with small-range values.

5(c) Explain bubble sort. Sort the following using bubble sort: 12, 8, 36, 48, 2, 57, 68, 4, 9, 16.

Bubble Sort is a simple comparison-based sorting


algorithm. It works by repeatedly stepping through the list
to be sorted, comparing adjacent items, and swapping
them if they are in the wrong order. This process is
repeated until the list is sorted. The largest unsorted
element "bubbles" to the correct position in each pass
through the list.

Steps of Bubble Sort:

1. Compare adjacent elements: Compare each pair of adjacent elements.
2. Swap if necessary: If the first element is greater than the second, swap them.
3. Repeat: Continue this process for the entire list, which "bubbles" the largest element to the end.
4. Repeat for the remaining unsorted portion: After each pass, the largest element is correctly placed, so the next pass can ignore the last element.
5. Continue until no swaps are made: The algorithm finishes when no swaps are needed, meaning the list is sorted.

Bubble Sort Algorithm

1. Input: A list of elements to be sorted.
2. Output: The sorted list.

Steps:

1. For each element in the list (from the first to the second-to-last):
o Set a flag swapped to False.
o For each pair of adjacent elements:
 If the current element is greater than the next element, swap them.
 Set the flag swapped to True to indicate that a swap has been made.
o If no swaps were made during the pass (i.e., swapped is still False), stop the algorithm early because the list is already sorted.
2. Repeat step 1 until no more swaps are needed.

Pseudocode:

function bubbleSort(list):
    n = length of list
    for i = 0 to n-1:
        swapped = false
        for j = 0 to n-i-2:
            if list[j] > list[j+1]:
                swap(list[j], list[j+1])
                swapped = true
        if not swapped:
            break
    return list
Explanation:

 The outer loop ensures that the process continues


until the list is sorted.
 The inner loop compares each pair of adjacent
elements and swaps them if they are in the wrong
order.
 The flag swapped keeps track of whether any
swaps were made during a pass. If no swaps are
made in a pass, the algorithm exits early because
the list is already sorted.

Let’s sort the list: [12, 8, 36, 48, 2, 57, 68, 4, 9, 16] using
Bubble Sort.

Initial List:
[12, 8, 36, 48, 2, 57, 68, 4, 9, 16]
First Pass:

 Compare 12 and 8, swap them: [8, 12, 36, 48, 2, 57,


68, 4, 9, 16]
 Compare 12 and 36, no swap needed.
 Compare 36 and 48, no swap needed.
 Compare 48 and 2, swap them: [8, 12, 36, 2, 48, 57,
68, 4, 9, 16]
 Compare 48 and 57, no swap needed.
 Compare 57 and 68, no swap needed.
 Compare 68 and 4, swap them: [8, 12, 36, 2, 48, 57,
4, 68, 9, 16]
 Compare 68 and 9, swap them: [8, 12, 36, 2, 48, 57,
4, 9, 68, 16]
 Compare 68 and 16, swap them: [8, 12, 36, 2, 48,
57, 4, 9, 16, 68]

Now, the largest element 68 is in the correct position.

Second Pass:

 Compare 8 and 12, no swap needed.


 Compare 12 and 36, no swap needed.
 Compare 36 and 2, swap them: [8, 12, 2, 36, 48, 57,
4, 9, 16, 68]
 Compare 36 and 48, no swap needed.
 Compare 48 and 57, no swap needed.
 Compare 57 and 4, swap them: [8, 12, 2, 36, 48, 4,
57, 9, 16, 68]
 Compare 57 and 9, swap them: [8, 12, 2, 36, 48, 4,
9, 57, 16, 68]
 Compare 57 and 16, swap them: [8, 12, 2, 36, 48, 4,
9, 16, 57, 68]

Now, the second-largest element 57 is in the correct


position.

Third Pass:

 Compare 8 and 12, no swap needed.


 Compare 12 and 2, swap them: [8, 2, 12, 36, 48, 4,
9, 16, 57, 68]
 Compare 12 and 36, no swap needed.
 Compare 36 and 48, no swap needed.
 Compare 48 and 4, swap them: [8, 2, 12, 36, 4, 48,
9, 16, 57, 68]
 Compare 48 and 9, swap them: [8, 2, 12, 36, 4, 9,
48, 16, 57, 68]
 Compare 48 and 16, swap them: [8, 2, 12, 36, 4, 9,
16, 48, 57, 68]

Now, the third-largest element 48 is in the correct


position.

Fourth Pass:

 Compare 8 and 2, swap them: [2, 8, 12, 36, 4, 9, 16,


48, 57, 68]
 Compare 8 and 12, no swap needed.
 Compare 12 and 36, no swap needed.
 Compare 36 and 4, swap them: [2, 8, 12, 4, 36, 9,
16, 48, 57, 68]
 Compare 36 and 9, swap them: [2, 8, 12, 4, 9, 36,
16, 48, 57, 68]
 Compare 36 and 16, swap them: [2, 8, 12, 4, 9, 16,
36, 48, 57, 68]

Now, the fourth-largest element 36 is in the correct


position.

Fifth Pass:

 Compare 2 and 8, no swap needed.


 Compare 8 and 12, no swap needed.
 Compare 12 and 4, swap them: [2, 8, 4, 12, 9, 16,
36, 48, 57, 68]
 Compare 12 and 9, swap them: [2, 8, 4, 9, 12, 16,
36, 48, 57, 68]

Now, 12 is in its correct position.

Sixth Pass:

 Compare 2 and 8, no swap needed.


 Compare 8 and 4, swap them: [2, 4, 8, 9, 12, 16, 36,
48, 57, 68]

Now, 8 is in its correct position.

Seventh Pass:

 Compare 2 and 4, no swap needed.

At this point, the list is fully sorted.

Final Sorted List:
[2, 4, 8, 9, 12, 16, 36, 48, 57, 68]
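A small C driver can verify this result (a sketch reusing the bubble sort implementation given earlier in this unit; the code below is our own illustration):

#include <stdio.h>
#include <stdbool.h>

// Bubble sort with early termination, repeated here so the check is self-contained.
void bubbleSort(int list[], int n) {
    for (int i = 0; i < n - 1; i++) {
        bool swapped = false;
        for (int j = 0; j < n - i - 1; j++) {
            if (list[j] > list[j + 1]) {
                int tmp = list[j]; list[j] = list[j + 1]; list[j + 1] = tmp;
                swapped = true;
            }
        }
        if (!swapped) break;
    }
}

int main(void) {
    int a[] = {12, 8, 36, 48, 2, 57, 68, 4, 9, 16};
    int n = sizeof(a) / sizeof(a[0]);
    bubbleSort(a, n);
    for (int i = 0; i < n; i++) printf("%d ", a[i]);
    printf("\n");   // prints: 2 4 8 9 12 16 36 48 57 68
    return 0;
}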
Time Complexity:

 Best Case: O(n) (when the list is already sorted,


with an optimized version of Bubble Sort).
 Worst Case: O(n²) (when the list is in reverse
order).
 Average Case: O(n²).

Space Complexity:
O(1) (Bubble Sort is an in-place sorting algorithm).
5(d) Solve the following with linear search: 12, 8, 36, 48, 2, 57, 68, 4, 9, 16, for the element 4.

Linear Search is a simple searching algorithm that checks


each element in the list sequentially until the desired
element is found or the entire list has been checked.

Steps of Linear Search:

1. Start from the first element of the list.


2. Compare each element with the target value.
3. If the element matches the target, return the index of
the element.
4. If the end of the list is reached without finding the
target, return -1 indicating the element is not found.

Linear Search Algorithm:

1. Start from the first element of the list.
2. Compare the current element with the target value.
3. If the current element is equal to the target, return the index of that element.
4. If the current element is not equal to the target, move to the next element.
5. Repeat steps 2-4 for all elements in the list until the target is found or the entire list has been searched.
6. If the target is not found after checking all elements, return -1 to indicate that the target is not present.

Pseudocode:
function linearSearch(list, target):
for i = 0 to length of list - 1:
if list[i] == target:
return i // Element found, return the index
return -1 // Element not found, return -1
Explanation:

1. Initialization: The algorithm starts by setting the current element to the first item in the list.
2. Comparison: In each iteration, the algorithm compares the current element (list[i]) with the target value.
3. Element Found: If a match is found, the index of the current element is returned.
4. Iteration: If no match is found, the algorithm continues to the next element.
5. No Match: If the target is not found by the time the loop finishes, the algorithm returns -1 to indicate that the target is not in the list.

Example Walkthrough:

We are given the list:

[12, 8, 36, 48, 2, 57, 68, 4, 9, 16]

We need to find the element 4 using Linear Search.

Steps of Linear Search:

1. Start with the first element 12 at index 0.


o 12 != 4, so move to the next element.
2. Check the second element 8 at index 1.
o 8 != 4, so move to the next element.
3. Check the third element 36 at index 2.
o 36 != 4, so move to the next element.
4. Check the fourth element 48 at index 3.
o 48 != 4, so move to the next element.
5. Check the fifth element 2 at index 4.
o 2 != 4, so move to the next element.
6. Check the sixth element 57 at index 5.
o 57 != 4, so move to the next element.
7. Check the seventh element 68 at index 6.
o 68 != 4, so move to the next element.
8. Check the eighth element 4 at index 7.
o 4 == 4, the target is found at index 7.

Output: The element 4 is found at index 7.
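A quick C check of this result (a sketch reusing the linear search function shown earlier; the code below is our own illustration):

#include <stdio.h>

// Sequential search: return the index of target, or -1 if it is absent.
int linearSearch(const int list[], int n, int target) {
    for (int i = 0; i < n; i++)
        if (list[i] == target) return i;
    return -1;
}

int main(void) {
    int a[] = {12, 8, 36, 48, 2, 57, 68, 4, 9, 16};
    int n = sizeof(a) / sizeof(a[0]);
    printf("Element 4 found at index %d\n", linearSearch(a, n, 4));   // prints 7
    return 0;
}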

Time Complexity:

 Best Case: O(1) (when the element is found at the


first index).
 Worst Case: O(n) (when the element is at the last
index or not found at all).
 Average Case: O(n) (on average, the element is
found halfway through the list).

Space Complexity: O(1) (Linear Search is an in-place


algorithm that requires constant space).

6(a) Compare linear search and binary search.

 Definition:
Linear Search: sequentially checks each element until the target is found or the array ends.
Binary Search: repeatedly divides the sorted array into halves to locate the target.
 Data requirement:
Linear Search: works on both sorted and unsorted arrays.
Binary Search: works only on sorted arrays.
 Time complexity (best):
Linear Search: O(1), target found at the first position.
Binary Search: O(1), target is the middle element.
 Time complexity (average):
Linear Search: O(n), checks n/2 elements on average.
Binary Search: O(log n), divides the search space in half repeatedly.
 Time complexity (worst):
Linear Search: O(n), checks all n elements.
Binary Search: O(log n), divides until one element remains.
 Space complexity:
Linear Search: O(1), no extra memory needed.
Binary Search: O(1) for the iterative version; O(log n) for the recursive version due to the call stack.
 Performance:
Linear Search: inefficient for large datasets due to sequential checks.
Binary Search: highly efficient for large datasets with sorted data.
 Algorithm type:
Linear Search: simple, no preprocessing required.
Binary Search: requires preprocessing (the array must be sorted).
 Use cases:
Linear Search: small datasets or unsorted arrays.
Binary Search: large datasets with sorted arrays or lists.
 Implementation:
Linear Search: straightforward and easy to implement.
Binary Search: slightly more complex but not difficult.
 Example applications:
Linear Search: searching for a value in an unsorted list (e.g., a contact list).
Binary Search: searching in a sorted database (e.g., a dictionary lookup).

When to Use

 Linear Search:
o When the dataset is small.
o When the array is unsorted and sorting is not
feasible.
o If the dataset is dynamic and frequently
changes (sorting repeatedly might be
expensive).
 Binary Search:
o When the dataset is large and sorted.
o If fast search performance is critical.
o In static datasets where sorting can be done
once.

6(b) Time complexity of bubble sort, insertion sort, and selection sort.

Table:

Algorithm      | Best Case | Average Case | Worst Case
Bubble Sort    | O(n)      | O(n²)        | O(n²)
Insertion Sort | O(n)      | O(n²)        | O(n²)
Selection Sort | O(n²)     | O(n²)        | O(n²)

Key Points:

 Bubble Sort is best when the list is already sorted


and can be optimized to check for no swaps.
 Insertion Sort is efficient for small or nearly sorted
lists and is adaptive to the order of the input.
 Selection Sort always has quadratic time complexity,
making it less efficient than Bubble and Insertion
Sort in most cases. However, it does perform fewer
swaps.

6(c) Solve the following using insertion sort: 12, 32, 45, 8, 2, 8, 15, 48, 36, 5.

Insertion Sort is a comparison-based sorting algorithm that


builds the sorted array one item at a time. It takes each
element from the unsorted part and places it in its correct
position in the sorted part of the array.

Steps of Insertion Sort:

1. Start from the second element (index 1) and compare


it with the first element.
2. If the current element is smaller, shift the previous
elements to the right to make space for the current
element.
3. Insert the current element into the correct position in
the sorted part.
4. Repeat the process for all elements until the list is
fully sorted.

Example Walkthrough:

We are given the list:

[12, 32, 45, 8, 2, 8, 15, 48, 36, 5]
Step-by-Step Process:

1. Start with the second element (32):


o Compare 32 with 12. Since 32 > 12, no
change.
o The list remains: [12, 32, 45, 8, 2, 8, 15, 48,
36, 5]
2. Move to the third element (45):
o Compare 45 with 32. Since 45 > 32, no
change.
o The list remains: [12, 32, 45, 8, 2, 8, 15, 48,
36, 5]
3. Move to the fourth element (8):
o Compare 8 with 45. Since 8 < 45, shift 45 to
the right.
o Compare 8 with 32. Since 8 < 32, shift 32 to
the right.
o Compare 8 with 12. Since 8 < 12, shift 12 to
the right.
o Insert 8 at the first position.
o The list becomes: [8, 12, 32, 45, 2, 8, 15, 48,
36, 5]
4. Move to the fifth element (2):
o Compare 2 with 45. Since 2 < 45, shift 45 to
the right.
o Compare 2 with 32. Since 2 < 32, shift 32 to
the right.
o Compare 2 with 12. Since 2 < 12, shift 12 to
the right.
o Compare 2 with 8. Since 2 < 8, shift 8 to the
right.
o Insert 2 at the first position.
o The list becomes: [2, 8, 12, 32, 45, 8, 15, 48,
36, 5]
5. Move to the sixth element (8):
o Compare 8 with 45. Since 8 < 45, shift 45 to the right.
o Compare 8 with 32. Since 8 < 32, shift 32 to the right.
o Compare 8 with 12. Since 8 < 12, shift 12 to the right.
o Compare 8 with the first 8. Since it is not smaller, insert the new 8 right after it.
o The list becomes: [2, 8, 8, 12, 32, 45, 15, 48, 36, 5]
6. Move to the seventh element (15):
o Compare 15 with 45. Since 15 < 45, shift 45 to
the right.
o Compare 15 with 32. Since 15 < 32, shift 32 to
the right.
o Compare 15 with 12. Since 15 > 12, insert 15
after 12.
o The list becomes: [2, 8, 8, 12, 15, 32, 45, 48,
36, 5]
7. Move to the eighth element (48):
o Compare 48 with 45. Since 48 > 45, no
change.
o The list remains: [2, 8, 8, 12, 15, 32, 45, 48,
36, 5]
8. Move to the ninth element (36):
o Compare 36 with 48. Since 36 < 48, shift 48 to
the right.
o Compare 36 with 45. Since 36 < 45, shift 45 to
the right.
o Insert 36 after 32.
o The list becomes: [2, 8, 8, 12, 15, 32, 36, 45,
48, 5]
9. Move to the tenth element (5):
o Compare 5 with 48. Since 5 < 48, shift 48 to
the right.
o Compare 5 with 45. Since 5 < 45, shift 45 to
the right.
o Compare 5 with 36. Since 5 < 36, shift 36 to
the right.
o Compare 5 with 32. Since 5 < 32, shift 32 to
the right.
o Compare 5 with 15. Since 5 < 15, shift 15 to
the right.
o Compare 5 with 12. Since 5 < 12, shift 12 to
the right.
o Compare 5 with 8. Since 5 < 8, shift 8 to the right.
o Compare 5 with the other 8. Since 5 < 8, shift it to the right.
o Insert 5 at the second position (index 1).
o The list becomes: [2, 5, 8, 8, 12, 15, 32, 36, 45, 48]

Final Sorted List:


[2, 5, 8, 8, 12, 15, 32, 36, 45, 48]
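A quick C check of this result (a sketch reusing the insertion sort from question 4(c); the code below is our own illustration):

#include <stdio.h>

// Insertion sort, repeated here so the check is self-contained.
void insertionSort(int list[], int n) {
    for (int i = 1; i < n; i++) {
        int key = list[i], j = i - 1;
        while (j >= 0 && list[j] > key) {
            list[j + 1] = list[j];
            j--;
        }
        list[j + 1] = key;
    }
}

int main(void) {
    int a[] = {12, 32, 45, 8, 2, 8, 15, 48, 36, 5};
    int n = sizeof(a) / sizeof(a[0]);
    insertionSort(a, n);
    for (int i = 0; i < n; i++) printf("%d ", a[i]);
    printf("\n");   // prints: 2 5 8 8 12 15 32 36 45 48
    return 0;
}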
Time Complexity:

 Best Case: O(n) (when the list is already sorted).


 Worst Case: O(n²) (when the list is in reverse order).
 Average Case: O(n²) (for random data).

Space Complexity: O(1) (Insertion Sort is an in-place


sorting algorithm).

6(d) Solve the following using binary search: 14, 5, 8, 7, 58, 25, 6, 35, 84, 2, for the element 35. How many searches are required?

Binary Search is an efficient algorithm for finding an item


from a sorted list of elements. It works by repeatedly
dividing the search interval in half. The key requirement for
Binary Search is that the list must be sorted.

Steps for Binary Search:


1. Start with the entire list, find the middle element.
2. If the target is equal to the middle element, return the
index.
3. If the target is smaller than the middle element,
repeat the search on the left half of the list.
4. If the target is larger than the middle element, repeat
the search on the right half of the list.
5. Repeat the process until the element is found or the
search interval is empty.

Given Data:

Unsorted List:

[14, 5, 8, 7, 58, 25, 6, 35, 84, 2]

We need to find the element 35 using Binary Search.


However, Binary Search requires the list to be sorted.
So, first, we need to sort the list:

Sorted List:
[2, 5, 6, 7, 8, 14, 25, 35, 58, 84]

Now, we perform Binary Search to find the element 35.

Binary Search Steps:

1. Initial List: [2, 5, 6, 7, 8, 14, 25, 35, 58, 84]


o Left = 0, Right = 9 (total 10 elements)
o Middle = (0 + 9) / 2 = 4 → Element at index 4
is 8.

Since 35 > 8, we will search in the right half of the


list.

2. New Search Range: [14, 25, 35, 58, 84]


o Left = 5, Right = 9 (elements from index 5 to
9)
o Middle = (5 + 9) / 2 = 7 → Element at index 7
is 35.

We have found the element 35 at index 7.

Total Searches (Comparisons) Made:

1. First Comparison: 35 > 8 (searching in the right


half)
2. Second Comparison: 35 == 35 (element found)

Total Comparisons: 2
Conclusion:

The element 35 was found in 2 comparisons. The total number of comparisons required for Binary Search in this case is 2.

Time Complexity:

 Best Case: O(1) (element found at the middle).


 Worst Case: O(log n) (where n is the number of
elements in the list).
 Average Case: O(log n).

For this case, the search found the target after just 2 comparisons; in the worst case, with n = 10, binary search would need at most about 4 comparisons (floor(log2 10) + 1).
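A small C sketch of this whole question: sort the array first (qsort from <stdlib.h> is used here only to keep the sketch short), then binary-search for 35 while counting comparisons. The comparison-counting helper is our own addition, not part of the original answer:

#include <stdio.h>
#include <stdlib.h>

// Comparator for qsort: orders integers ascending.
static int cmpInt(const void *a, const void *b) {
    return (*(const int *)a > *(const int *)b) - (*(const int *)a < *(const int *)b);
}

// Binary search that also reports how many probes (one per iteration) were made.
int binarySearchCount(const int a[], int n, int key, int *comparisons) {
    int low = 0, high = n - 1;
    *comparisons = 0;
    while (low <= high) {
        int mid = low + (high - low) / 2;
        (*comparisons)++;
        if (a[mid] == key) return mid;
        if (a[mid] < key) low = mid + 1;
        else high = mid - 1;
    }
    return -1;
}

int main(void) {
    int a[] = {14, 5, 8, 7, 58, 25, 6, 35, 84, 2};
    int n = sizeof(a) / sizeof(a[0]), comparisons;
    qsort(a, n, sizeof(int), cmpInt);                  // the list must be sorted first
    int idx = binarySearchCount(a, n, 35, &comparisons);
    printf("Found 35 at index %d after %d comparisons\n", idx, comparisons);  // index 7, 2 comparisons
    return 0;
}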

OBJECTIVE QUESTION BANK


UNIT-5
MULTIPLE CHOICE QUESTIONS:
1. Which of the following searching algorithms is the most efficient for large, sorted arrays? [ b ]
(a) Linear Search (b) Binary Search (c) Jump Search (d) Exponential Search
2. What is the worst-case time complexity of linear search? [ c ]
(a) O(1) (b) O(log n) (c) O(n) (d) O(n^2)
3. Binary search requires the array to be: [ a ]
(a) Sorted (b) Unsorted (c) Two-dimensional (d) Circular
4. In binary search, what happens if the middle element is greater than the target value? [ b ]
(a) Search in the right half (b) Search in the left half (c) The search terminates (d) The array is sorted
5. What is the time complexity of binary search in the best case? [ a ]
(a) O(1) (b) O(log n) (c) O(n) (d) O(n^2)
6. Which sorting algorithm is considered the simplest to implement but least efficient for large datasets? [ a ]
(a) Bubble Sort (b) Insertion sort (c) Selection sort (d) Merge sort
7. What is the time complexity of the Insertion Sort algorithm in the average case? [ d ]
(a) O(1) (b) O(log n) (c) O(n) (d) O(n^2)
8. Which of the following sorting algorithms is based on selecting the minimum element and swapping it with the element at the beginning? [ c ]
(a) Bubble Sort (b) Insertion sort (c) Selection sort (d) Quick sort
9. Which sorting algorithm is known for having a best-case time complexity of O(n^2)? [ c ]
(a) Bubble Sort (b) Insertion sort (c) Selection sort (d) Merge sort
10. The ------------ sort divides the list into two parts, sorted and unsorted. [ b ]
(a) Bubble Sort (b) Insertion sort (c) Selection sort (d) Merge sort

Fill in the blanks:

11. The binary search algorithm works by repeatedly dividing the search interval in half.
12. In a linear search, the algorithm compares the target value with each element in the array sequentially, resulting in a time complexity of O(n).
13. Binary search requires that the array be sorted to function correctly.
14. Linear search is also called sequential search.
15. The best-case complexity of linear search is O(1).
16. The bubble sort algorithm repeatedly steps through the list, compares adjacent elements, and swaps them if they are in the wrong order.
17. Selection sort repeatedly selects the minimum element from the unsorted portion of the array and moves it to the sorted portion.
18. The time complexity of insertion sort is O(n²) (average and worst case).
19. Selection sort is the sorting technique with the highest best-case runtime complexity.
20. How many swaps are required to sort the array {2, 5, 1, 3, 4} using bubble sort? 4
