int arr[10];
int x = arr[5]; // O(1)

int sum = 0;
for (int i = 0; i < n; i++) { // O(n)
    sum += arr[i];
}
#include <stdio.h>
#include <time.h>

int main() {
    clock_t start, end;
    double cpu_time_used;

    start = clock();
    // Code to measure
    for (int i = 0; i < 100000; i++) { /* Some computation */ }
    end = clock();

    cpu_time_used = ((double)(end - start)) / CLOCKS_PER_SEC;
    printf("CPU time used: %f seconds\n", cpu_time_used);
    return 0;
}
b) Best-case time complexity of Binary Search
O(1) (the target is the middle element and is found with a single comparison).
Explanation:
Example
Summary:
Pseudocode:
function selectionSort(list):
    n = length of list
    for i = 0 to n-1:
        minIndex = i                       // Assume the first unsorted element is the smallest
        for j = i+1 to n-1:                // Look for a smaller element in the remaining unsorted part
            if list[j] < list[minIndex]:
                minIndex = j               // Update minIndex if a smaller element is found
        if minIndex != i:
            swap(list[i], list[minIndex])  // Swap the smallest element with the current element
    return list
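A minimal C sketch of the pseudocode above (the function name and the demo array are illustrative; the array is sorted in place):

#include <stdio.h>

// Selection Sort: repeatedly move the smallest remaining element to the front.
void selectionSort(int list[], int n) {
    for (int i = 0; i < n - 1; i++) {
        int minIndex = i;                  // assume the first unsorted element is the smallest
        for (int j = i + 1; j < n; j++) {
            if (list[j] < list[minIndex])
                minIndex = j;              // remember the position of a smaller element
        }
        if (minIndex != i) {               // swap only when a smaller element was found
            int tmp = list[i];
            list[i] = list[minIndex];
            list[minIndex] = tmp;
        }
    }
}

int main(void) {
    int a[] = {64, 25, 12, 22, 11};        // the list used in the walkthrough below
    int n = sizeof(a) / sizeof(a[0]);
    selectionSort(a, n);
    for (int i = 0; i < n; i++) printf("%d ", a[i]);   // prints: 11 12 22 25 64
    printf("\n");
    return 0;
}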
Explanation:
Example Walkthrough:
Let’s sort the list [64, 25, 12, 22, 11] using Selection Sort:
1. Start with the full list [64, 25, 12, 22, 11].
o Find the smallest element: 11.
o Swap 11 with 64.
o List after 1st pass: [11, 25, 12, 22, 64]
2. Next, sort the remaining list [25, 12, 22, 64].
o Find the smallest element: 12.
o Swap 12 with 25.
o List after 2nd pass: [11, 12, 25, 22, 64]
3. Next, sort the remaining list [25, 22, 64].
o Find the smallest element: 22.
o Swap 22 with 25.
o List after 3rd pass: [11, 12, 22, 25, 64]
4. Next, sort the remaining list [25, 64].
o Find the smallest element: 25 (no swap needed).
o List after 4th pass: [11, 12, 22, 25, 64]
5. Only one element left (64), so nothing more needs to be done; the list is sorted.
Second Example:
1. Set the first unsorted element as the minimum.
2. Compare the minimum with the remaining elements; if a smaller element is found, it becomes the new minimum.
3. Swap the first unsorted element with the minimum; after each iteration, the minimum is placed at the front of the unsorted list.
4. For each iteration, indexing starts from the first unsorted element. Steps 1 to 3 are repeated until all the elements are placed at their correct positions.
(Figures showing the first, second, and fourth iterations.)
Time Complexity: O(n²) in the best, average, and worst cases (the same comparisons are made regardless of the input's initial order).
Space Complexity: O(1) (in-place; only a few temporary variables are used).
Summary:
Advantages:
o Simplicity: Easy to understand and implement.
o In-place sorting: No additional memory is required apart from a few variables.
o Efficient for small datasets: Works well for smaller lists where performance is not a critical factor.
o Low memory usage: Only requires a constant amount of extra space (O(1)).
o Stable for equal elements: Selection Sort can be made stable with slight modifications (though it is typically not stable in its default form).

Disadvantages:
o Inefficient for large datasets: O(n²) time complexity makes it slow for large lists.
o Not adaptive: Always performs the same number of comparisons regardless of the input's initial order.
o No early termination: The algorithm doesn't stop early even if the list is already sorted.
o Poor worst-case performance: Its time complexity remains O(n²) in the worst case.
o Not suitable for large real-world data: The quadratic time complexity makes it unsuitable for large-scale applications.
Selection Sort is a simple and easy-to-understand
sorting algorithm, but it is not efficient for large datasets
because of its quadratic time complexity (O(n²)).
Space Complexity
Space complexity is the total amount of memory used by an algorithm, including the space for the input values and any auxiliary space it allocates.
Space Complexity in C
#include <stdio.h>

int main() {
    int arr[10];
    printf("Size of array: %zu bytes\n", sizeof(arr)); // Array size in bytes
    printf("Size of int: %zu bytes\n", sizeof(int));   // Size of the int type
    return 0;
}
Time Complexities of Linear Search
1. Best Case: O(1)
o The target element is the first element of the array.
o The algorithm finds the element in a single comparison.
2. Average Case: O(n)
o The target element is located randomly in the array.
o On average, the algorithm will check half the elements, resulting in approximately n/2 comparisons. However, in Big-O notation, constants are ignored, so it simplifies to O(n).
3. Worst Case: O(n)
o The target element is the last element of the array, or it is not present in the array at all.
o The algorithm will check all n elements in these cases.
Explanation
Summary Table
Case     | Time Complexity | Reason
Best     | O(1)            | Target found at the first position.
Average  | O(n)            | Target found halfway through the array on average.
Worst    | O(n)            | Target is the last element or not present at all.
Steps:
Pseudocode:
function bubbleSort(list):
    n = length of list
    for i = 0 to n-1:
        swapped = false
        for j = 0 to n-i-2:
            if list[j] > list[j+1]:
                swap(list[j], list[j+1])
                swapped = true
        if not swapped:
            break
    return list
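A minimal C sketch of this optimized Bubble Sort (the function name and demo array are illustrative; the array matches the first example below):

#include <stdio.h>
#include <stdbool.h>

// Bubble Sort with early termination: stop when a full pass makes no swaps.
void bubbleSort(int list[], int n) {
    for (int i = 0; i < n - 1; i++) {
        bool swapped = false;
        for (int j = 0; j < n - i - 1; j++) {   // the last i elements are already in place
            if (list[j] > list[j + 1]) {
                int tmp = list[j];
                list[j] = list[j + 1];
                list[j + 1] = tmp;
                swapped = true;
            }
        }
        if (!swapped) break;                     // the list is already sorted
    }
}

int main(void) {
    int a[] = {5, 3, 8, 4, 2};
    int n = sizeof(a) / sizeof(a[0]);
    bubbleSort(a, n);
    for (int i = 0; i < n; i++) printf("%d ", a[i]);  // prints: 2 3 4 5 8
    printf("\n");
    return 0;
}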
Explanation:
Example:
1. First Pass:
o Compare 5 and 3 → Swap (since 5 > 3) →
New list: [3, 5, 8, 4, 2]
o Compare 5 and 8 → No swap (since 5 < 8)
o Compare 8 and 4 → Swap (since 8 > 4) →
New list: [3, 5, 4, 8, 2]
o Compare 8 and 2 → Swap (since 8 > 2) →
New list: [3, 5, 4, 2, 8]
o Now, the largest element (8) is in its correct
position.
2. Second Pass:
o Compare 3 and 5 → No swap (since 3 < 5)
o Compare 5 and 4 → Swap (since 5 > 4) →
New list: [3, 4, 5, 2, 8]
o Compare 5 and 2 → Swap (since 5 > 2) →
New list: [3, 4, 2, 5, 8]
o Now, 5 is in its correct position.
3. Third Pass:
o Compare 3 and 4 → No swap (since 3 < 4)
o Compare 4 and 2 → Swap (since 4 > 2) →
New list: [3, 2, 4, 5, 8]
o Now, 4 is in its correct position.
4. Fourth Pass:
o Compare 3 and 2 → Swap (since 3 > 2) →
New list: [2, 3, 4, 5, 8]
o Now, the list is fully sorted.
Second Example:
Working of Bubble Sort
Suppose we are trying to sort the elements in ascending
order.
1. First Iteration (Compare and Swap)
1. Starting from the first index, compare the first and the
second elements.
2. If the first element is greater than the second element,
they are swapped.
3. Now, compare the second and the third elements. Swap
them if they are not in order.
4. The above process goes on until the last element.
Compare the Adjacent Elements
2. Remaining Iteration
The same process goes on for the remaining iterations.
Put the largest element at the end
In each iteration, the comparison takes place up to the last
unsorted element.
The array is sorted once all elements are in the right order.
Time Complexity: O(n²) in the worst and average cases; O(n) in the best case (already sorted list, thanks to the swapped flag).
Advantages:
o Simplicity: Easy to understand and implement.
o In-place sorting: No additional space required, other than a few temporary variables.
o Adaptive (best case): Can perform faster for nearly sorted lists (O(n) time complexity in the best case).
o Stable sort: Maintains the relative order of equal elements.
o Early termination: Can stop early if no swaps are made during a pass, improving performance in some cases.

Disadvantages:
o Inefficient for large lists: O(n²) time complexity makes it slow for large datasets.
o Unnecessary comparisons: Compares elements even if they are already in the correct order.
o Slow: Its time complexity makes it slower than other sorting algorithms for large inputs.
o Poor worst-case performance: O(n²) in the worst case, which is inefficient.
o Not adaptive in the worst case: Still performs poorly in the worst-case scenario, even with early termination.
a) What is sorting? Give the different types of sorting techniques.
Key Relationship
Pseudocode:
function linearSearch(list, target):
    for i = 0 to length of list - 1:
        if list[i] == target:
            return i      // Element found, return the index
    return -1             // Element not found, return -1
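A minimal C version of this pseudocode (the function name and demo values are illustrative):

#include <stdio.h>

// Linear Search: scan the array from left to right until the target is found.
int linearSearch(const int list[], int n, int target) {
    for (int i = 0; i < n; i++) {
        if (list[i] == target)
            return i;      // element found, return its index
    }
    return -1;             // element not found
}

int main(void) {
    int a[] = {10, 20, 30, 40};
    printf("%d\n", linearSearch(a, 4, 30));  // prints 2 (index of 30)
    printf("%d\n", linearSearch(a, 4, 99));  // prints -1 (not present)
    return 0;
}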
Explanation:
Example Walkthrough:
Output: 2
Output: -1
Time Complexity:
Space Complexity:
O(1): Linear Search is an in-place algorithm and
does not require any extra space apart from the
input list.
Advantages:
o Simplicity: Easy to understand and implement.
o No sorting required: Works on both sorted and unsorted lists.
o Versatility: Can search through any data structure (arrays, linked lists).
o Space-efficient: Uses constant space (O(1)), as it doesn't require any additional memory.
o Works for small lists: Can be fast enough when the dataset is small.

Disadvantages:
o Inefficient for large lists: O(n) time complexity makes it slow for large datasets.
o Slower than other search algorithms: Algorithms like Binary Search are faster for sorted lists.
o Linear time complexity: Every element may need to be checked, leading to longer search times.
o Not ideal for large-scale data: For large datasets, its performance is less efficient compared to more advanced search algorithms.
o No advantage with sorted data: Unlike Binary Search, there is no improvement when the list is sorted.
Summary:
1. Efficient Searching
Description: Sorting helps improve the efficiency of search algorithms.
Example: In a binary search, the data must be sorted first. Once sorted, the binary search algorithm can quickly locate an element by repeatedly dividing the dataset in half, which reduces the search time from O(n) in linear search to O(log n).
Real-world use case: Searching for a name in a phone book or finding a specific product in an online store (when the list of products is sorted).
particularly inefficient for larger datasets.
2. Compare the current element (the key) with the element before it.
3. If the current element is smaller than the element before it, shift the larger elements one position to the right to make space for the current element.
4. Insert the current element in the correct position, where it is greater than the element before it but smaller than the element after it.
5. Move to the next element in the list and repeat steps 2–4 until the entire list is sorted.
Pseudocode:
function insertionSort(list):
    for i = 1 to length of list - 1:
        key = list[i]          // The element to be inserted into the sorted part of the list
        j = i - 1              // The index of the element just before the key
        // Move elements of list[0..i-1] that are greater than key one position ahead
        while j >= 0 and list[j] > key:
            list[j + 1] = list[j]
            j = j - 1
        list[j + 1] = key      // Insert the key in the correct position
    return list
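A minimal C sketch of this Insertion Sort pseudocode (the function name and demo array are illustrative):

#include <stdio.h>

// Insertion Sort: grow a sorted prefix by inserting each element into its place.
void insertionSort(int list[], int n) {
    for (int i = 1; i < n; i++) {
        int key = list[i];                 // element to insert into the sorted part
        int j = i - 1;
        while (j >= 0 && list[j] > key) {  // shift larger elements one position right
            list[j + 1] = list[j];
            j--;
        }
        list[j + 1] = key;                 // place key in its correct position
    }
}

int main(void) {
    int a[] = {9, 5, 1, 4, 3};
    int n = sizeof(a) / sizeof(a[0]);
    insertionSort(a, n);
    for (int i = 0; i < n; i++) printf("%d ", a[i]);  // prints: 1 3 4 5 9
    printf("\n");
    return 0;
}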
Example Walkthrough:
Second Example:
(Figure: initial array)
1. The first element in the array is assumed to be sorted. Take the second element and store it separately in key. If the first element is greater than key, then key is placed in front of the first element.
2. Now, the first two elements are sorted.
(Figure: place 1 at the beginning)
3. Similarly, place every unsorted element at its correct position.
(Figure: place 4 behind 1)
(Figure: place 3 behind 1, and the array is sorted)
Time Complexity: O(n²) in the worst and average cases; O(n) in the best case (already or nearly sorted list).
Space Complexity: O(1) (in-place; only the key and an index variable are needed).
Advantages:
o Simple to implement: Easy to understand and write.
o Efficient for small datasets: Can be faster than more complex algorithms like Merge Sort for small or nearly sorted datasets.
o Stable sorting: Elements with equal values retain their original relative order.
o In-place sorting: Doesn't require additional memory beyond a few variables.
o Adaptive: Performs well if the data is already sorted or nearly sorted.

Disadvantages:
o Inefficient for large datasets: Time complexity is O(n²), making it slow for large datasets.
o Slower compared to more advanced algorithms: Algorithms like Merge Sort and Quick Sort are faster for larger datasets.
o Not suitable for large unsorted data: Performance degrades as the dataset grows.
o Shifts elements: Elements may need to be moved multiple times, causing inefficiency in certain cases.
o Worst-case time complexity of O(n²): Can be slow when the data is in reverse order.
Summary:
Example Walkthrough:
Consider the sorted list: [1, 3, 5, 7, 9, 11, 13, 15, 17], and
we want to find the target 7.
Initial Setup: low = 0, high = 8.
First Iteration:
o middle = (0 + 8) / 2 = 4.
o list[4] = 9, which is greater than 7, so we search the left half (high = middle - 1 = 3).
Second Iteration:
o low = 0, high = 3.
o middle = (0 + 3) / 2 = 1.
o list[1] = 3, which is less than 7, so we search the right half (low = middle + 1 = 2).
Third Iteration:
o low = 2, high = 3.
o middle = (2 + 3) / 2 = 2.
o list[2] = 5, which is less than 7, so we search the right half (low = middle + 1 = 3).
Fourth Iteration:
o low = 3, high = 3.
o middle = (3 + 3) / 2 = 3.
o list[3] = 7, which is equal to the target, so we return the index 3.
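A minimal C sketch of iterative Binary Search matching this walkthrough (the function name is illustrative; the array must already be sorted):

#include <stdio.h>

// Binary Search on a sorted array: repeatedly halve the search range.
int binarySearch(const int list[], int n, int target) {
    int low = 0, high = n - 1;
    while (low <= high) {
        int middle = low + (high - low) / 2;   // avoids overflow of (low + high)
        if (list[middle] == target)
            return middle;                     // target found
        else if (list[middle] > target)
            high = middle - 1;                 // search the left half
        else
            low = middle + 1;                  // search the right half
    }
    return -1;                                 // target not present
}

int main(void) {
    int a[] = {1, 3, 5, 7, 9, 11, 13, 15, 17}; // sorted list from the walkthrough
    int n = sizeof(a) / sizeof(a[0]);
    printf("%d\n", binarySearch(a, n, 7));     // prints 3, as in the fourth iteration
    return 0;
}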
Steps:
Pseudocode:
function bubbleSort(list):
    n = length of list
    for i = 0 to n-1:
        swapped = false
        for j = 0 to n-i-2:
            if list[j] > list[j+1]:
                swap(list[j], list[j+1])
                swapped = true
        if not swapped:
            break
    return list
Explanation:
Let’s sort the list: [12, 8, 36, 48, 2, 57, 68, 4, 9, 16] using
Bubble Sort.
Initial List:
[12, 8, 36, 48, 2, 57, 68, 4, 9, 16]
First Pass:
Second Pass:
Third Pass:
Fourth Pass:
Fifth Pass:
Sixth Pass:
Seventh Pass:
Space Complexity:
O(1) (Bubble Sort is an in-place sorting algorithm).
d) Solve the following with Linear Search: 12, 8, 36, 48, 2, 57, 68, 4, 9, 16, for the element 4.
Pseudocode:
function linearSearch(list, target):
    for i = 0 to length of list - 1:
        if list[i] == target:
            return i      // Element found, return the index
    return -1             // Element not found, return -1
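A minimal C sketch applying this pseudocode to the given data: the search compares 12, 8, 36, 48, 2, 57, 68 and then finds 4 at index 7 (the 8th position), i.e. after 8 comparisons.

#include <stdio.h>

int linearSearch(const int list[], int n, int target) {
    for (int i = 0; i < n; i++) {
        if (list[i] == target)
            return i;                          // found: return the (0-based) index
    }
    return -1;                                 // not found
}

int main(void) {
    int a[] = {12, 8, 36, 48, 2, 57, 68, 4, 9, 16};
    int n = sizeof(a) / sizeof(a[0]);
    printf("Element 4 found at index %d\n", linearSearch(a, n, 4)); // index 7 (8th element)
    return 0;
}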
Explanation:
Example Walkthrough:
Time Complexity:
Comparison: Linear Search vs Binary Search
o Preprocessing: Linear Search needs no preprocessing; Binary Search requires preprocessing (the array must be sorted).
o Use cases: Linear Search suits small datasets or unsorted arrays; Binary Search suits large datasets with sorted arrays or lists.
o Implementation: Linear Search is straightforward and easy to implement; Binary Search is slightly more complex but not difficult.
o Example applications: Linear Search - searching for a value in an unsorted list (e.g., a contact list); Binary Search - searching in a sorted database (e.g., a dictionary lookup).
When to Use
Linear Search:
o When the dataset is small.
o When the array is unsorted and sorting is not
feasible.
o If the dataset is dynamic and frequently
changes (sorting repeatedly might be
expensive).
Binary Search:
o When the dataset is large and sorted.
o If fast search performance is critical.
o In static datasets where sorting can be done
once.
Key Points:
Example Walkthrough:
[12, 32, 45, 8, 2, 8, 15, 48, 36, 5]
Step-by-Step Process:
Given Data:
Unsorted List:
Sorted List:
[2, 5, 6, 7, 8, 14, 25, 35, 58, 84]
Total Comparisons: 2
Conclusion:
Time Complexity:
For this case, the search took O(log n) time, where n = 10,
so approximately 2 comparisons.
(a) Bubble Sort (b) Insertion sort (c) Selection sort (d) Merge sort
7. What is the time complexity of the Insertion Sort algorithm in the average case? [ d ]
(a) O(1) (b) O(log n) (c) O(n) (d) O(n^2)
8. Which of the following sorting algorithms is based on selecting the minimum element and swapping it with the element at the beginning? [ c ]
(a) Bubble Sort (b) Insertion sort (c) Selection sort (d) Quick sort
9. Which sorting algorithm is known for having a best-case time complexity of O(n^2)? [ c ]
(a) Bubble Sort (b) Insertion sort (c) Selection sort (d) Merge sort
10. The ------------ sort divides the list into two parts, sorted and unsorted. [ b ]
(a) Bubble Sort (b) Insertion sort (c) Selection sort (d) Merge sort
17. Selection Sort repeatedly selects the minimum element from the unsorted portion of the
array and moves it to the sorted portion.
18. Time complexity of Insertion Sort: O(n²).
19. Selection sort has the highest best-case runtime complexity (O(n²)) among these techniques.
20. How many swaps are required to sort the given array {2, 5, 1, 3, 4} using Bubble Sort? 4
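A small C check for question 20 (counting swaps while bubble-sorting {2, 5, 1, 3, 4}; the swap count equals the number of inversions, which is 4 here):

#include <stdio.h>

int main(void) {
    int a[] = {2, 5, 1, 3, 4};
    int n = sizeof(a) / sizeof(a[0]);
    int swaps = 0;
    for (int i = 0; i < n - 1; i++) {
        for (int j = 0; j < n - i - 1; j++) {
            if (a[j] > a[j + 1]) {          // out of order: swap and count it
                int tmp = a[j];
                a[j] = a[j + 1];
                a[j + 1] = tmp;
                swaps++;
            }
        }
    }
    printf("Swaps: %d\n", swaps);           // prints: Swaps: 4
    return 0;
}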