DSA REPORT FIN 1

The document outlines a summer internship experience focused on data structures and algorithms, highlighting the importance of these concepts in programming and software development. It details the learning outcomes, including algorithm analysis, time complexity, and various sorting and searching algorithms, as well as bitwise operations. The internship provided practical skills and knowledge essential for technical interviews and coding competitions.


SUMMER INTERNSHIP

ON

DATA STRUCTURES AND ALGORITHMS

Submitted by

SAMHITHA MADALA
Registration No 12214200
Program Name: B.Tech CSE (Cyber Security and Blockchain)

School of Computer Science


Lovely Professional University, Phagwara
Acknowledgment

The Summer Internship opportunity I had with GeeksForGeeks was a great chance for learning and professional
development. I therefore consider myself very lucky to have been provided with an opportunity to be a part of it. I
am also grateful for having had the chance to learn from the professionals who guided me through this internship
period.

I express my deepest thanks to the Training and Placement Coordinator, School of Computer Application, Lovely
Professional University, for allowing me to take up this opportunity. I take this moment to gratefully acknowledge
his contribution in giving the necessary advice and guidance that made my internship a good learning experience.

[SAMHITHA MADALA]
[12214200]
INTERNSHIP CERTIFICATE
INTRODUCTION

In today's rapidly evolving technological landscape, data structures and algorithms form the
backbone of efficient problem-solving and software development. Whether we are beginners
stepping into the world of programming or experienced developers looking to deepen our
understanding, mastering data structures and algorithms is crucial. These foundational concepts
are not only essential for writing optimized and effective code but also for acing technical
interviews, competing in coding competitions, and understanding the inner workings of various
software applications.

What I Learned:

• Data Structures: I gained an in-depth understanding of various data structures like
arrays, linked lists, stacks, queues, hash tables, trees, and graphs. I learned how to
implement, manipulate, and optimize these structures to solve real-world problems.
• Algorithms: The course covered essential algorithms, including sorting, searching,
recursion, dynamic programming, and graph traversal techniques. I learned how to design,
analyze, and optimize algorithms for maximum efficiency.
• Problem-Solving Skills: By working through numerous coding challenges and exercises,
I developed strong analytical and problem-solving skills, enabling me to approach
complex problems with confidence.

Why This Course:

This self-paced course is designed to cater to learners of all levels, providing a flexible and
comprehensive learning experience. Whether we are preparing for a job interview, participating
in a coding competition, or simply looking to improve our programming skills, this course
offers the tools, resources, and support that we need to succeed.
TECHNICAL LEARNING FROM THE COURSE

1. ALGORITHM ANALYSIS

Algorithm analysis is an important part of computational complexity theory, which provides a
theoretical estimate of the resources an algorithm requires to solve a specific computational
problem. Analysis of algorithms is the determination of the amount of time and space resources
required to execute them.
TYPES:
1. BEST CASE
2. WORST CASE
3. AVERAGE CASE

Worst, Average and Best-Case Time Complexities

Worst Case:
This represents the maximum time an algorithm will take to complete, given the worst
possible input of size n. It provides an upper bound on the running time.

Knowing the worst-case complexity is important for ensuring that an algorithm can
handle the most difficult scenarios within an acceptable time.

• Example: For a linear search in an unsorted array of size n, the worst-case time
complexity is O(n). This happens when the element being searched for is at the last
position or not present at all.

Average Case:

This measures the expected time an algorithm will take to complete, averaged over all possible
inputs of size n. It provides a realistic estimate of an algorithm's performance.
It helps understand the algorithm's behavior under typical conditions, rather than just in the
worst-case scenario.
Example: For a linear search, the average-case time complexity is O(n) as well, assuming the
element is equally likely to be located at any position or not present.

Best Case:

This represents the minimum time an algorithm will take to complete, given the best possible
input of size n.

Knowing the best-case complexity is useful to understand how well an algorithm can perform in
the most favorable conditions.

Example: For a linear search, the best-case time complexity is O(1). This occurs when the
element being searched for is at the first position.

Algorithm for performing linear search:

// Linearly search x in arr[].
// If x is present then return the index,
// otherwise return -1
#include <stdio.h>

int search(int arr[], int n, int x)
{
    for (int i = 0; i < n; i++) {
        if (arr[i] == x) {
            return i;
        }
    }
    return -1;
}

// Driver program to test the above function
int main()
{
    int arr[] = {2, 8, 12, 9};
    int x = 12;
    int n = sizeof(arr) / sizeof(arr[0]);
    printf("%d is present at index %d", x, search(arr, n, x));
    return 0;
}

Examples of Time Complexities for Common Algorithms:

1. Linear Search (in an unsorted array):
   o Best Case: O(1) (element is at the first position)
   o Average Case: O(n) (element is equally likely to be anywhere)
   o Worst Case: O(n) (element is at the last position or not present)
2. Binary Search (in a sorted array):
   o Best Case: O(1) (element is at the middle position)
   o Average Case: O(log n)
   o Worst Case: O(log n) (element is not found or requires the maximum number of comparisons)
3. Bubble Sort:
   o Best Case: O(n) (array is already sorted)
   o Average Case: O(n^2)
   o Worst Case: O(n^2) (array is in reverse order)
4. Quicksort:
   o Best Case: O(n log n) (pivot divides array into two equal halves)
   o Average Case: O(n log n)
   o Worst Case: O(n^2) (pivot is the smallest or largest element, creating unbalanced partitions)
5. Merge Sort:
   o Best Case: O(n log n)
   o Average Case: O(n log n)
   o Worst Case: O(n log n)
6. Insertion Sort:
   o Best Case: O(n) (array is already sorted)
   o Average Case: O(n^2)
   o Worst Case: O(n^2) (array is in reverse order)

Asymptotic Notation

Asymptotic notations are mathematical tools used to describe the behavior of algorithms in
terms of time or space complexity, as the input size (denoted as n) grows. These notations help
us analyze and compare the efficiency of algorithms, especially for large inputs. The primary
asymptotic notations are Big O, Big Theta, and Big Omega. Let's explore each one:

Big O Notation (O)

Big O notation provides an upper bound on the time (or space) complexity of an
algorithm. It describes the worst-case scenario by showing how the runtime increases as
the input size n grows.

To provide a guarantee that the algorithm will not run slower than a certain time, even in
the worst-case situation.

• How to Interpret: If f(n) = O(g(n)), it means that the function f(n) grows at most as fast as
g(n), up to a constant factor, for sufficiently large n.

Mathematically: f(n) = O(g(n)) if and only if there exist positive constants c and n0 such
that:

0 ≤ f(n) ≤ c·g(n) for all n ≥ n0

Example: If f(n) = 3n + 2, then f(n) = O(n). Here, c could be 4 and n0 could be 1, indicating
that the growth of f(n) is bounded by a linear function.
Big Theta Notation (Θ)

Big Theta notation provides a tight bound on the time (or space) complexity of an
algorithm. It describes both the upper and lower bounds, showing the exact rate of
growth.

To show that the algorithm's running time is guaranteed to grow at a certain rate, both in
the worst and best scenarios.

• How to Interpret: If f(n) = Θ(g(n)), it means that f(n) grows at the same rate as g(n), up
to constant factors, both above and below.

Mathematically: f(n) = Θ(g(n)) if and only if there exist positive constants c1, c2, and n0
such that:

0 ≤ c1·g(n) ≤ f(n) ≤ c2·g(n) for all n ≥ n0

Example: If f(n) = 5n^2 + 2n + 3, then f(n) = Θ(n^2). This means f(n) grows at the same rate as
n^2, ignoring lower-order terms and constant factors.

Big Omega Notation (Ω)

Big Omega notation provides a lower bound on the time (or space) complexity of an
algorithm. It describes the best-case scenario by showing the minimum time the
algorithm will take.

To show that the algorithm's running time will not be faster than a certain time.

• How to Interpret: If f(n)=Ω(g(n)), it means that f(n) grows at least as fast as g(n) up to a
constant factor for sufficiently large n.

Mathematically: f(n) = Ω(g(n)) if and only if there exist positive constants c and n0 such
that:

0 ≤ c·g(n) ≤ f(n) for all n ≥ n0

Example: If f(n) = 3n + 2, then f(n) = Ω(n). This implies that the running time of the
algorithm grows at least linearly with n.

Little O Notation (o)

Little o notation provides a strict upper bound on the time complexity. It means that f(n)
grows strictly slower than g(n) as n approaches infinity.

• Purpose: To show that the growth rate of one function is negligible compared to another.
• How to Interpret: If f(n)=o(g(n)), it means f(n) grows slower than g(n) as n becomes
large.

Mathematically: f(n) = o(g(n)) if and only if for all positive constants c, there exists an
n0 such that:

0 ≤ f(n) < c·g(n) for all n ≥ n0

Example: n = o(n^2) because n grows strictly slower than n^2.

Little Omega Notation (ω)

Little omega notation provides a strict lower bound on the time complexity. It indicates
that f(n) grows strictly faster than g(n).

To show that f(n) grows faster than g(n) and not at the same rate.

• How to Interpret: If f(n)=ω(g(n)), it means f(n) grows faster than g(n) as n becomes large.

Mathematically: f(n) = ω(g(n)) if and only if for all positive constants c, there exists an n0
such that:

0 ≤ c·g(n) < f(n) for all n ≥ n0

Example: n^2 = ω(n), since n^2 grows faster than n.

Space Complexity:
The term Space Complexity is misused for Auxiliary Space at many places. Following are the
correct definitions of Auxiliary Space and Space Complexity.
Auxiliary Space is the extra space or temporary space used by an algorithm.
Space Complexity of an algorithm is the total space taken by the algorithm with respect to the
input size. Space complexity includes both Auxiliary space and space used by input. For
example, if we want to compare standard sorting algorithms on the basis of space, then
Auxiliary Space would be a better criterion than Space Complexity. Merge Sort uses O(n)
auxiliary space, Insertion sort, and Heap Sort use O(1) auxiliary space. The space complexity of
all these sorting algorithms is O(n) though.

Space complexity is a parallel concept to time complexity. If we need to create an array of size
n, this will require O(n) space. If we create a two-dimensional array of size n*n, this will
require O(n^2) space.

In recursive calls stack space also counts.

Example:

int add(int n) {
    if (n <= 0) {
        return 0;
    }
    return n + add(n - 1);
}

Here each call adds a level to the stack:

1. add(4)
2. -> add(3)
3. -> add(2)
4. -> add(1)
5. -> add(0)

Each of these calls is added to the call stack and takes up actual memory.
So it takes O(n) space.
However, just because you have n calls total doesn't mean it takes O(n) space.
Look at the below function :

int pairSum(int x, int y) {
    return x + y;
}

int addSequence(int n) {
    int sum = 0;
    for (int i = 0; i < n; i++) {
        sum += pairSum(i, i + 1);
    }
    return sum;
}

There will be roughly O(n) calls to pairSum. However, those calls do not exist simultaneously
on the call stack, so you only need O(1) space.

MATHEMATICS

• Finding number of Digits in a Number
• Arithmetic and Geometric Progressions
• Quadratic Equations
• Mean and Median
• Prime Numbers
• LCM and HCF
• Factorials
• Permutation and Combinations Basics
• Modular Arithmetic

BITWISE MAGIC
Bitwise operations are a powerful and efficient way to manipulate data at the binary level. They
operate directly on the individual bits of data, making them faster than arithmetic operations for
certain tasks. Here’s a brief overview of some common bitwise operations and their "magic"
tricks:

Basic Bitwise Operations:

• AND (&): Sets each bit to 1 if both corresponding bits are 1.


o Example: 5 & 3 in binary: 0101 & 0011 = 0001 (result is 1)

int a = 5;           // Binary: 0101
int b = 3;           // Binary: 0011
int result = a & b;  // Result is 1 (Binary: 0001)

• OR (|): Sets each bit to 1 if at least one of the corresponding bits is 1.


o Example: 5 | 3 in binary: 0101 | 0011 = 0111 (result is 7)

int a = 5;           // Binary: 0101
int b = 3;           // Binary: 0011
int result = a | b;  // Result is 7 (Binary: 0111)

• XOR (^): Sets each bit to 1 if only one of the corresponding bits is 1 (exclusive OR).
o Example: 5 ^ 3 in binary: 0101 ^ 0011 = 0110 (result is 6)
int a = 5;           // Binary: 0101
int b = 3;           // Binary: 0011
int result = a ^ b;  // Result is 6 (Binary: 0110)

• NOT (~): Inverts all the bits (0 becomes 1, and 1 becomes 0).
o Example: ~5 in binary: ~0101 = 1010 (result is -6 in two's complement)

int a = 5;          // Binary: 0101
int result = ~a;    // Result is -6 (Binary: ...11111010 in two's complement)

• Left Shift (<<): Shifts bits to the left, filling with 0s on the right.
o Example: 5 << 1 shifts 0101 to 1010 (result is 10)

int a = 5;            // Binary: 0101
int result = a << 1;  // Result is 10 (Binary: 1010)

• Right Shift (>>): Shifts bits to the right, discarding bits on the right.
o Example: 5 >> 1 shifts 0101 to 0010 (result is 2)

int a = 5;            // Binary: 0101
int result = a >> 1;  // Result is 2 (Binary: 0010)

Bitwise Tricks and Magic:

Checking if a Number is Even or Odd:


o x & 1: If the result is 1, the number is odd; if 0, it's even.
o Example: 7 & 1 results in 1, so 7 is odd.

int x = 7;
if (x & 1) {
    std::cout << "Odd" << std::endl;
} else {
    std::cout << "Even" << std::endl;
}

Swapping Two Numbers Without a Temporary Variable:

int a = 5, b = 3;
a = a ^ b; // Step 1
b = a ^ b; // Step 2
a = a ^ b; // Step 3
std::cout << "a: " << a << ", b: " << b << std::endl; // a: 3, b: 5

Checking if Two Numbers Have Opposite Signs:

• (x ^ y) < 0: This will be true if x and y have opposite signs.


• Example: -5 and 3: (-5 ^ 3) < 0 is true.

int x = -5, y = 3;
if ((x ^ y) < 0) {
    std::cout << "x and y have opposite signs" << std::endl;
} else {
    std::cout << "x and y have the same sign" << std::endl;
}

Finding the Only Odd Occurring Number:

• XOR all elements in an array. Numbers appearing an even number of times cancel out,
leaving the one with an odd occurrence.
• Example: [1, 2, 3, 2, 3, 1, 3]: 1 ^ 2 ^ 3 ^ 2 ^ 3 ^ 1 ^ 3 = 3

int arr[] = {1, 2, 3, 2, 3, 1, 3};
int result = 0;
for (int num : arr) {
    result ^= num;
}
std::cout << "Odd occurring number: " << result << std::endl; // Output: 3

Counting the Number of 1s in an Integer (Hamming Weight):

This efficiently counts the set bits in an integer by flipping the least significant set bit to 0 in
each iteration.

int countOnes(int n) {
    int count = 0;
    while (n) {
        n = n & (n - 1); // clear the lowest set bit
        count++;
    }
    return count;
}

int n = 29; // Binary: 11101
std::cout << "Number of 1s: " << countOnes(n) << std::endl; // Output: 4

Finding the Most Significant Bit (MSB):

Repeatedly right shift until the number becomes 0, or use a more efficient method with
logarithms or bit manipulation to find the MSB position.

int findMSB(int n) {
    int msb = 0;
    while (n >>= 1) { // shift right until n becomes 0
        msb++;
    }
    return 1 << msb; // 2 raised to the position of the MSB
}

int n = 18; // Binary: 10010
std::cout << "Most significant bit: " << findMSB(n) << std::endl; // Output: 16

Applications of Bitwise Operations:

• Encryption/Decryption: XOR is widely used in cryptography for simple encryption schemes.
• Data Compression: Efficiently manipulate individual bits to compress data.
• Graphics and Image Processing: Quickly manipulate pixels and colours.
• Networking: Flags and masks are used for setting and checking bit flags, such as IP
address handling.

Performance Advantage:
Bitwise operations are often faster than arithmetic operations because they are directly
supported by the processor at the hardware level. This makes them useful in performance-critical
applications.

RECURSION

Recursion is a powerful programming technique where a function calls itself to solve smaller
instances of the same problem. It is widely used in algorithms, data structures, and problem-
solving in general.

Understanding Recursion:

• Base Case: The condition under which the recursion stops. It prevents the function from
calling itself indefinitely.
• Recursive Case: The part of the function where it calls itself with a modified argument,
moving towards the base case.

Examples of Recursion in C++:


1. Factorial Calculation:

Factorial of a non-negative integer n (denoted as n!) is the product of all positive integers less
than or equal to n. The recursive definition is:
n!=n×(n−1)!

With the base case 0!=1.

#include <iostream>
using namespace std;

int factorial(int n) {
    if (n == 0) // Base case
        return 1;
    else
        return n * factorial(n - 1); // Recursive case
}

int main() {
    int num = 5;
    cout << "Factorial of " << num << " is " << factorial(num) << endl; // Output: 120
    return 0;
}

2. Fibonacci Sequence:

The Fibonacci sequence is defined as:

F(n)=F(n−1)+F(n−2)

With base cases F(0)=0 and F(1)=1.

#include <iostream>
using namespace std;

int fibonacci(int n) {
    if (n == 0) // Base case
        return 0;
    else if (n == 1) // Base case
        return 1;
    else
        return fibonacci(n - 1) + fibonacci(n - 2); // Recursive case
}

int main() {
    int num = 6;
    cout << "Fibonacci number at position " << num << " is " << fibonacci(num) << endl; // Output: 8
    return 0;
}

3. Sum of Natural Numbers:

Sum of first n natural numbers using recursion:

sum(n) = n + sum(n−1)

With base case sum(0) = 0.

#include <iostream>
using namespace std;

int sum(int n) {
    if (n == 0) // Base case
        return 0;
    else
        return n + sum(n - 1); // Recursive case
}

int main() {
    int num = 10;
    cout << "Sum of first " << num << " natural numbers is " << sum(num) << endl; // Output: 55
    return 0;
}
4. Power Function (Exponentiation):

Calculating a^b using recursion:

power(a, b) = a × power(a, b−1)

With base case power(a, 0) = 1.

#include <iostream>
using namespace std;

int power(int a, int b) {
    if (b == 0) // Base case
        return 1;
    else
        return a * power(a, b - 1); // Recursive case
}

int main() {
    int base = 2, exponent = 3;
    cout << base << " raised to power " << exponent << " is " << power(base, exponent) << endl; // Output: 8
    return 0;
}

Applications of Recursion:

1. Sorting Algorithms: Algorithms like Quick Sort and Merge Sort use recursion to sort
elements.
2. Tree Traversals: Pre-order, in-order, and post-order traversals of binary trees are
naturally implemented using recursion.
3. Backtracking: Problems like solving a maze, N-Queens, and Sudoku use recursion to
explore different possibilities.
4. Divide and Conquer: Algorithms that split problems into smaller sub-problems (like
binary search) often use recursion.
5. Dynamic Programming: Some problems use a recursive approach with memoization to
optimize repetitive sub-problem calculations.

ARRAYS

Arrays are a fundamental data structure in programming that allow you to store a fixed-size
sequential collection of elements of the same type.

An array is a collection of elements, all of the same type, stored in contiguous memory
locations. It allows you to store multiple items of the same type using a single variable name,
with each item being accessible via its index (or position) in the array.

Key Concepts of Arrays:

1. Fixed Size: The size of an array is determined when it is created and cannot be changed.
This means that if you define an array to hold 10 elements, it will always be able to hold
exactly 10 elements.
2. Indexing: Arrays use zero-based indexing. This means that the first element is accessed
with index 0, the second with index 1, and so on up to n-1 for an array of size n.
3. Contiguous Memory: Arrays store their elements in contiguous memory locations,
which allows for efficient access to the elements using indices.
4. Homogeneous Elements: All elements in an array must be of the same type (e.g., all
integers, all characters, etc.).
Types of Arrays:

1. Single-Dimensional Arrays: These are the most common type of arrays. They represent
a list of items of the same type, like a list of numbers or a list of names.
2. Multi-Dimensional Arrays: These arrays can have more than one dimension. The most
common is the two-dimensional array, which can be thought of as a table or matrix.
Three-dimensional arrays and higher dimensions are also possible, used for more
complex data structures.
3. Character Arrays: Special type of arrays that hold characters, often used to store strings
of text.

Common Operations on Arrays:

1. Traversal: Visiting each element in the array to perform some action, like printing the
values or processing them.

int arr[] = {10, 20, 30, 40, 50};
int size = sizeof(arr) / sizeof(arr[0]);

for (int i = 0; i < size; i++) {
    cout << arr[i] << " ";
}
// Output: 10 20 30 40 50

2. Insertion: Adding a new element to the array. In static arrays, this typically involves
shifting elements to make space, which can be time-consuming.

void insertElement(int arr[], int& size, int element, int position) {
    // Shift elements to the right
    for (int i = size; i > position; i--) {
        arr[i] = arr[i - 1];
    }
    arr[position] = element;
    size++;
}

int arr[10] = {1, 2, 3, 4, 5};
int size = 5;
insertElement(arr, size, 99, 2); // Array becomes {1, 2, 99, 3, 4, 5}

3. Deletion: Removing an element from the array, which usually requires shifting elements
to fill the gap.

void deleteElement(int arr[], int& size, int position) {
    // Shift elements to the left
    for (int i = position; i < size - 1; i++) {
        arr[i] = arr[i + 1];
    }
    size--;
}

int arr[10] = {1, 2, 3, 4, 5};
int size = 5;
deleteElement(arr, size, 2); // Array becomes {1, 2, 4, 5}

4. Searching: Finding the position of a particular element in the array. Linear search
(checking each element one by one) and binary search (efficiently searching sorted
arrays) are common search methods.
int findElement(int arr[], int size, int key) {
    for (int i = 0; i < size; i++) {
        if (arr[i] == key) {
            return i; // Return the index if found
        }
    }
    return -1; // Return -1 if not found
}

int arr[] = {10, 20, 30, 40, 50};
int size = sizeof(arr) / sizeof(arr[0]);
int index = findElement(arr, size, 30); // index is 2

5. Sorting: Arranging the elements of the array in a certain order (e.g., ascending or
descending). Common sorting algorithms include bubble sort, selection sort, merge sort,
and quicksort.

Advantages of Arrays:

1. Fast Access: Arrays provide fast and direct access to their elements using indices, which
makes them suitable for applications where quick data retrieval is important.
2. Ease of Use: They are straightforward to declare and use, making them suitable for
beginners.
3. Memory Efficiency: Arrays have low overhead since they store data in contiguous
memory locations.

Disadvantages of Arrays:

1. Fixed Size: The size of an array is fixed upon creation, which can lead to memory
wastage if the array is not fully utilized, or memory shortage if more elements are needed.
2. Insertion and Deletion: These operations can be inefficient because they often require
shifting elements, especially in large arrays.
3. Lack of Flexibility: Unlike dynamic data structures like linked lists, arrays do not allow
easy resizing or dynamic memory allocation.
Applications of Arrays:

• Storing and managing data: Arrays are used for storing collections of data such as lists
of names, scores, or other collections of items.
• Implementing other data structures: Many complex data structures (e.g., stacks,
queues, hash tables) are implemented using arrays.
• Matrix operations: Arrays are used to store and perform operations on matrices in
scientific computing and graphics.
• Searching and sorting algorithms: Arrays are fundamental to the implementation of
various searching and sorting techniques.

SEARCHING

Searching is a fundamental operation in computer science used to find the position of a specific
element within a data structure, such as an array. Various searching algorithms are used
depending on the type of data structure and the requirements of the search operation. Here’s an
overview of the common searching algorithms:

1. Linear Search

Linear search (or sequential search) involves checking each element in the array sequentially
until the desired element is found or the end of the array is reached.

Time Complexity:

• Worst Case: O(n) (where n is the number of elements in the array)
• Best Case: O(1) (if the element is found at the first position)
When to Use:

• The array is unsorted.
• The array is small.
• Simplicity is preferred over efficiency.
Example:

1. Start from the first element.


2. Compare the target element with the current element.
3. If they match, return the index.
4. If not, move to the next element and repeat until the end is reached.

2. Binary Search

Binary search is an efficient algorithm for finding an element in a sorted array by repeatedly
dividing the search interval in half.

Time Complexity:

• Worst Case: O(log n)
• Best Case: O(1) (if the target element is at the middle)

When to Use:

• The array is sorted.


• Efficiency is required for large datasets.

Example:

1. Start with the entire array.


2. Compare the target element with the middle element.
3. If they match, return the index.
4. If the target is smaller, narrow the search to the left half.
5. If the target is larger, narrow the search to the right half.
6. Repeat until the target is found or the search interval is empty.
3. Jump Search

Jump search is used on sorted arrays. It divides the array into blocks and performs a linear
search within the block where the target element might be found.

Time Complexity:

• Worst Case: O(√n)


• Best Case: O(1) (if the target is in the first block)

When to Use:

• The array is sorted.


• The dataset is too large for binary search to be efficient due to frequent random access.

Example:

1. Jump ahead by a fixed number of steps (e.g., square root of the array length).
2. Perform a linear search within the block where the target might be.
3. If the target is found, return the index; otherwise, continue jumping.

4. Interpolation Search

Interpolation search is an improvement over binary search for uniformly distributed data. It
estimates the position of the target element based on the value of the element and the target.

Time Complexity:

• Worst Case: O(n) (in the worst case, similar to linear search)
• Best Case: O(log log n)
When to Use:

• The array is sorted.
• The values are uniformly distributed.

Example:

1. Estimate the position of the target element based on its value and the values at the
bounds.
2. Compare the target with the estimated position.
3. If it matches, return the index; otherwise, adjust the bounds and repeat.

5. Exponential Search

Exponential search is used on sorted arrays. It first finds a range where the target element might
be located and then performs binary search within that range.

Time Complexity:

• Worst Case: O(log n)

When to Use:

• The array is sorted.


• The dataset size is unknown or too large.

Example:

1. Start with the first element and double the index until the target element is less than or
equal to the element at that index.
2. Perform binary search within the identified range.

Choosing the Right Search Algorithm:


• Linear Search is simple and works with unsorted arrays but is inefficient for large
datasets.
• Binary Search is efficient but requires the array to be sorted.
• Jump Search and Interpolation Search offer variations that may be more efficient
under certain conditions.
• Exponential Search is useful for large datasets where the size is unknown or the array is
sorted.

SORTING

Sorting is a fundamental operation in computer science that involves arranging elements in a


specific order, typically ascending or descending. Sorting algorithms are crucial for optimizing
performance in various applications, such as searching, data processing, and more. Here's an
overview of common sorting algorithms:

1. Bubble Sort

Description: Bubble Sort is a simple comparison-based algorithm that repeatedly steps through
the list, compares adjacent elements, and swaps them if they are in the wrong order.

Time Complexity:

• Worst Case: O(n^2)
• Best Case: O(n) (if the array is already sorted)

When to Use:

• Small datasets
• Educational purposes or simple implementations

Example:
1. Compare each pair of adjacent elements.
2. Swap them if they are in the wrong order.
3. Repeat the process until no more swaps are needed.
2. Selection Sort

Description: Selection Sort divides the array into two parts: a sorted section and an unsorted
section. It repeatedly selects the smallest (or largest) element from the unsorted section and
moves it to the end of the sorted section.

Time Complexity:

• Worst Case: O(n^2)
• Best Case: O(n^2)

When to Use:

• Small datasets
• When memory write operations are costly

Example:

1. Find the minimum element in the unsorted section.


2. Swap it with the first element of the unsorted section.
3. Move the boundary between sorted and unsorted sections.

3. Insertion Sort

Description: Insertion Sort builds the final sorted array one item at a time by repeatedly
picking the next item and inserting it into its correct position within the already sorted section.

Time Complexity:

• Worst Case: O(n^2)
• Best Case: O(n) (if the array is already sorted)
When to Use:

• Small datasets
• Partially sorted datasets

Example:

1. Take the next element from the unsorted section.


2. Insert it into the correct position within the sorted section.
3. Repeat until the entire array is sorted.

4. Merge Sort

Description: Merge Sort is a divide-and-conquer algorithm that divides the array into two
halves, sorts each half, and then merges the sorted halves to produce the final sorted array.

Time Complexity:

• Worst Case: O(n log n)


• Best Case: O(n log n)

When to Use:

• Large datasets
• When stable sorting is required

Example:

1. Divide the array into two halves.


2. Recursively sort each half.
3. Merge the two sorted halves to create a single sorted array.

5. Quick Sort
Description: Quick Sort is a divide-and-conquer algorithm that picks an element as a pivot and
partitions the array into elements less than the pivot and elements greater than the pivot. It then
recursively sorts the partitions.
Time Complexity:

• Worst Case: O(n^2) (when the pivot selection is poor)
• Best Case: O(n log n)

When to Use:

• Large datasets
• When average-case performance is important

Example:

1. Choose a pivot element.


2. Partition the array into elements less than the pivot and elements greater than the pivot.
3. Recursively apply Quick Sort to the partitions.

6. Heap Sort

Description: Heap Sort is based on the heap data structure. It first builds a max-heap (or
min-heap), then repeatedly extracts the maximum (or minimum) element from the heap and
restores the heap property.

Time Complexity:

• Worst Case: O(n log n)


• Best Case: O(n log n)

When to Use:

• Large datasets
• When in-place sorting is required

Example:

1. Build a max-heap from the array.


2. Extract the maximum element and move it to the end of the array.
3. Rebuild the heap and repeat until the array is sorted.

7. Counting Sort

Description: Counting Sort is a non-comparison-based sorting algorithm that counts the
occurrences of each distinct element and then places each element in its correct position
based on these counts.

Time Complexity:

• Worst Case: O(n + k) (where k is the range of the input values)


• Best Case: O(n + k)

When to Use:

• When the range of input values is small


• For integer-based sorting

Example:

1. Count the occurrences of each element.


2. Compute the position of each element based on the counts.
3. Place each element in its correct position.

8. Radix Sort

Description: Radix Sort is a non-comparison-based sorting algorithm that sorts numbers by
processing individual digits. It uses a stable sorting algorithm (such as Counting Sort) as a
subroutine to sort by each digit.
Time Complexity:

• Worst Case: O(n·k) (where k is the number of digits in the largest number)
• Best Case: O(n·k)

When to Use:

• When sorting large numbers or keys with a fixed number of digits


• When stable sorting is required

Example Process:

1. Sort the numbers by the least significant digit.


2. Move to the next significant digit and sort again.
3. Repeat until all digits are processed.

Choosing the Right Sorting Algorithm:

• Bubble, Selection, and Insertion Sort are suitable for small datasets or educational
purposes.
• Merge Sort and Quick Sort are preferred for larger datasets due to their efficient O(n
log n) time complexity.
• Heap Sort is useful when you need in-place sorting with O(n log n) complexity.
• Counting and Radix Sort are optimal for specific use cases with known constraints.

Matrices

A matrix is a two-dimensional array of elements arranged in rows and columns. It is used in
various fields such as mathematics, physics, computer graphics, and machine learning.

Basic Concepts:

1. Matrix Representation:
o Matrix Dimensions: Defined by the number of rows (m) and columns (n). For
example, a 3x4 matrix has 3 rows and 4 columns.
o Element Access: Each element is accessed using two indices: row and column
(e.g., A[i][j]).
2. Matrix Operations:

o Addition/Subtraction: Matrices of the same dimensions can be added or
subtracted element-wise.
o Scalar Multiplication: Each element of the matrix is multiplied by a scalar value.
o Matrix Multiplication: The product of two matrices is computed by taking the dot
product of rows and columns.
o Transpose: The transpose of a matrix flips it over its diagonal, swapping rows
with columns.
o Determinant and Inverse: These operations are used in solving linear equations
and other advanced mathematical applications.
3. Applications:
o Computer Graphics: Matrices are used for transformations such as rotation,
scaling, and translation.
o Machine Learning: Used in algorithms for data representation and transformation.
o Solving Systems of Equations: Used in linear algebra to solve systems of linear
equations.

Example of Matrix Operations:

• Addition:

If A and B are matrices of the same size, then C = A + B is computed element-wise:
C[i][j] = A[i][j] + B[i][j]

• Multiplication:

If A is an m×n matrix and B is an n×p matrix, then the product C = A×B is an m×p
matrix, where:

C[i][j] = Σ (k = 1 to n) A[i][k] × B[k][j]

Hashing

Hashing is a technique used to map data to a fixed-size value or index, known as a hash code,
using a hash function. It is commonly used in hash tables for efficient data retrieval.

Basic Concepts:

1. Hash Function:
o A hash function takes an input (or key) and produces a hash code, which determines
the index in a hash table where the data will be stored.
o Good hash functions distribute keys uniformly across the hash table to minimize
collisions.
2. Hash Table:
o A data structure that uses hashing to store and retrieve data efficiently.
o Consists of an array of buckets or slots, where each slot can store multiple items in
case of collisions.
3. Collision Handling:
o Chaining: Uses linked lists to handle collisions by storing multiple elements in the
same bucket.
o Open Addressing: Finds another slot within the table using probing methods like
linear probing, quadratic probing, or double hashing.
4. Load Factor:
o The load factor is the ratio of the number of elements to the number of buckets in the
hash table. A higher load factor increases the likelihood of collisions.
5. Applications:
o Database Indexing: Used for quick data retrieval.
o Caching: Fast access to frequently used data.
o Data Deduplication: Identifying duplicate data by comparing hash values.

Example of Hashing:

• Hash Function Example:

A simple hash function might be:

hash(key) = key % table_size

where table_size is the number of slots in the hash table.


• Collision Handling Example:

If two keys hash to the same index:

o Chaining: Store both keys in a linked list at that index.
o Open Addressing: Search for the next available slot using a probing strategy.
Linked List
A linked list is a linear data structure where elements, called nodes, are stored in noncontiguous
memory locations. Each node contains data and a reference (or link) to the next node in the
sequence.

Types:

• Singly Linked List: Each node has a single link to the next node.
• Doubly Linked List: Each node has two links, one to the next node and one to the
previous node.
• Circular Linked List: The last node links back to the first node, forming a circle.

Operations:

• Insertion: Add a node at the beginning, end, or a specific position.


• Deletion: Remove a node from the beginning, end, or a specific position.
• Traversal: Visit each node and perform an operation, such as printing the data.
• Search: Find a node containing specific data.

Advantages:

• Dynamic size, allowing efficient insertion and deletion.


• Flexibility in memory usage since nodes are not stored in contiguous memory locations.

Disadvantages:

• More memory overhead due to additional pointers.


• Slower access time compared to arrays, as nodes must be traversed sequentially.

Stack

A stack is a linear data structure that follows the Last In, First Out (LIFO) principle. Elements
are added and removed from one end, called the top of the stack.
Operations:

• Push: Add an element to the top of the stack.


• Pop: Remove the top element from the stack.
• Peek (or Top): Retrieve the top element without removing it.
• IsEmpty: Check if the stack is empty.

Applications:

• Function Call Management: Keeping track of function calls and local variables in
programming languages.
• Expression Evaluation: Evaluating mathematical expressions and converting between
infix, postfix, and prefix notations.
• Undo Mechanisms: Implementing undo features in software.

Advantages:

• Simple implementation.
• Efficient for managing data with LIFO order.

Disadvantages:

• Limited access to elements (only the top can be accessed).

Queue

A queue is a linear data structure that follows the First In, First Out (FIFO)
principle. Elements are added at the rear (or end) and removed from the front.

Operations:

• Enqueue: Add an element to the rear of the queue.


• Dequeue: Remove an element from the front of the queue.
• Front: Retrieve the front element without removing it.
• IsEmpty: Check if the queue is empty.

Applications:

• Task Scheduling: Managing tasks or processes in operating systems.


• Breadth-First Search (BFS): Traversing graphs or trees level by level.
• Print Job Management: Managing print jobs in a printer queue.

Advantages:

• Simple implementation for managing data with FIFO order.


• Efficient for tasks requiring sequential processing.

Disadvantages:

• Limited access to elements (only the front can be accessed for removal).

Deque (Double-Ended Queue)

A deque is a linear data structure that allows elements to be added or removed
from both ends (front and rear). It combines features of both stacks and queues.

Operations:

• AddFirst: Add an element to the front.


• AddLast: Add an element to the rear.
• RemoveFirst: Remove an element from the front.
• RemoveLast: Remove an element from the rear.
• PeekFirst: Retrieve the front element without removing it.
• PeekLast: Retrieve the rear element without removing it.

Applications:

• Sliding Window Problems: Handling problems where you need to maintain a window of
elements with efficient insertion and deletion.
• Deque Operations: Useful in scenarios where both ends of the data structure need to be
accessed.

Advantages:

• Flexibility to add and remove elements from both ends.


• Can be implemented using arrays or linked lists for different performance characteristics.

Disadvantages:

• More complex implementation compared to stacks and queues.

Tree

A tree is a hierarchical data structure consisting of nodes connected by edges. It has a root node
and zero or more subtrees, each represented as a tree itself. Trees are used to represent
hierarchical relationships and organize data in a structured way.

Key Terms:

• Root: The top node of the tree.


• Node: An element in the tree containing data and references to child nodes.
• Edge: A connection between two nodes.
• Leaf: A node with no children.
• Internal Node: A node with at least one child.
• Subtree: A tree formed by a node and its descendants.

Common Types:

• Binary Tree: Each node has at most two children (left and right).
• Binary Search Tree (BST): A binary tree where each node’s left subtree contains values
less than the node, and the right subtree contains values greater than the node.
• N-ary Tree: A tree where each node can have up to N children.
Binary Search Tree (BST)

A binary search tree is a binary tree that maintains a specific ordering property to allow efficient
search, insertion, and deletion operations.

Properties:

• Left Subtree: Contains only nodes with values less than the current node.
• Right Subtree: Contains only nodes with values greater than the current node.
• No Duplicate Values: Typically, BSTs do not allow duplicate values.

Operations:

• Search: Start at the root and recursively search the left or right subtree based on
comparison.
• Insertion: Place the new value in the correct position while maintaining the BST
property.
• Deletion: Remove a node and adjust the tree to preserve the BST property.
• Traversal: Inorder (left, root, right), preorder (root, left, right), postorder (left, right,
root).
Applications:

• Efficient searching and sorting.


• Implementing associative arrays and sets.

Heap

A heap is a specialized tree-based data structure that satisfies the heap property. It can be a max-
heap or min-heap.

Heap Properties:

• Max-Heap: The key at each node is greater than or equal to the keys of its children. The
maximum key is at the root.
• Min-Heap: The key at each node is less than or equal to the keys of its children. The
minimum key is at the root.

Operations:

• Insert: Add a new element while maintaining the heap property.


• Extract-Max/Min: Remove and return the maximum (or minimum) element while
maintaining the heap property.
• Heapify: Adjust the heap to maintain the heap property after an operation.

Applications:

• Priority queues.
• Heap sort algorithm.
• Graph algorithms like Dijkstra's shortest path.

Graph
A graph is a collection of nodes (vertices) and edges connecting pairs of nodes. Graphs can
represent various structures and relationships.

Types:

• Directed Graph (Digraph): Edges have a direction, going from one vertex to another.
• Undirected Graph: Edges have no direction, and the connection is mutual.
• Weighted Graph: Edges have weights or costs associated with them.
• Unweighted Graph: Edges have no weights.

Key Concepts:

• Adjacency Matrix: A 2D array representing edge connections.


• Adjacency List: A list where each node has a list of adjacent nodes.
• Path: A sequence of edges connecting two nodes.
• Cycle: A path that starts and ends at the same node.

Algorithms:

• Depth-First Search (DFS): Explores as far as possible along each branch before
backtracking.
• Breadth-First Search (BFS): Explores all neighbors at the present depth before moving
on to nodes at the next depth level.
• Dijkstra’s Algorithm: Finds the shortest path from a source node to all other nodes in a
weighted graph.
• Kruskal’s Algorithm: Finds the Minimum Spanning Tree (MST) for a weighted graph.
• Prim’s Algorithm: Another algorithm for finding the MST.

Greedy Algorithms

Greedy algorithms build up a solution piece by piece, always choosing the next piece that offers
the most immediate benefit.
Characteristics:

• Local Optimum: At each step, the algorithm makes the choice that seems best at the
moment.
• Global Optimum: The goal is to find a globally optimal solution, though not all
problems can be solved optimally with a greedy approach.

Examples:

• Fractional Knapsack Problem: Select items to maximize total value while staying
within weight limits.
• Huffman Coding: Used for lossless data compression by assigning variable-length codes
to input characters.
• Activity Selection Problem: Select the maximum number of activities that don't overlap.
Applications:

• Scheduling problems.
• Network design.
• Optimization problems with specific properties.

Dynamic Programming

Dynamic programming is a technique used to solve problems by breaking them down into
simpler subproblems and storing the results to avoid redundant computations.

Characteristics:

• Optimal Substructure: The optimal solution to a problem can be constructed from
optimal solutions to its subproblems.
• Overlapping Subproblems: The same subproblems are solved multiple times in different
parts of the problem.
Approaches:

• Top-Down (Memoization): Solve the problem recursively and store the results of
subproblems to avoid redundant calculations.
• Bottom-Up (Tabulation): Solve all subproblems iteratively and build up solutions to
larger problems using previously computed results.

Examples:

• Fibonacci Sequence: Compute Fibonacci numbers efficiently by storing previously
computed values.
• Knapsack Problem: Determine the maximum value that can be carried in a knapsack
with given capacity and item weights/values.
• Longest Common Subsequence: Find the longest sequence that can be derived from two
sequences without reordering.
Applications:

• Algorithm optimization.
• Resource allocation problems.
• Decision-making in various fields such as economics and operations research.

Introduction to the Mini Project 1: Sudoku Solver Using DSA


and C++

The goal of this mini project is to develop a console-based Sudoku solver using C++. This
project leverages fundamental data structures and algorithms (DSA) to efficiently solve Sudoku
puzzles. It will help us understand how backtracking works, apply basic data structures, and
improve our problem-solving skills in C++.

Project Overview
Sudoku Solver: Sudoku is a logic-based number-placement puzzle played on a 9x9 grid. The
goal is to fill the grid such that each row, each column, and each 3x3 subgrid contains the digits
1 to 9 without repetition. The solver will use backtracking to find a valid solution.

Key Features to Implement

1. Game Board Representation:


o Use a data structure (such as a 2D array) to represent the 9x9 Sudoku grid.

2. Game Logic:
o Move Validation: Ensure that a number placed in a cell follows Sudoku rules.
o Backtracking Algorithm: Implement a recursive function to fill the board with
valid numbers.
o Solution Display: Show the solved Sudoku grid once the solution is found.

3. User Interaction:
o Allow the user to input an incomplete Sudoku puzzle.
o Display the board before and after solving.
o Inform the user if the puzzle has no valid solution.

Data Structures and Algorithms Used

1. Array:
o Representation of the Board: Use a 2D array to store the Sudoku grid, where each
cell contains a number from 1 to 9 or remains empty.

2. Functions:
o printGrid(): A function to display the current state of the board.
o isValid(): A function to check whether a number can be placed in a given cell.
o solveSudoku(): A recursive function implementing backtracking to solve the
puzzle.

3. Backtracking Algorithm:
o Try placing a number from 1 to 9 in an empty cell.
o Check if the placement is valid.
o Recursively solve the rest of the board.
o If no solution is found, backtrack and try another number.

4. Game Execution:
o Load a partially filled Sudoku board.
o Solve the board using backtracking.
o Display the solved puzzle or inform the user if no solution exists.

CODE:

#include <iostream>
using namespace std;

#define N 9

bool isValid(int grid[N][N], int row, int col, int num) {
for (int x = 0; x < N; x++) {
if (grid[row][x] == num || grid[x][col] == num)
return false;
}
int startRow = row - row % 3, startCol = col - col % 3;
for (int i = 0; i < 3; i++) {
for (int j = 0; j < 3; j++) {
if (grid[i + startRow][j + startCol] == num)
return false;
}
}
return true;
}

bool isSolved(int grid[N][N]) {
for (int i = 0; i < N; i++) {
for (int j = 0; j < N; j++) {
if (grid[i][j] == 0)
return false;
}
}
return true;
}

void printGrid(int grid[N][N]) {
for (int i = 0; i < N; i++) {
for (int j = 0; j < N; j++) {
cout << grid[i][j] << " ";
}
cout << endl;
}
}

int main() {
int grid[N][N] = {
{5, 3, 0, 0, 7, 0, 0, 0, 0},
{6, 0, 0, 1, 9, 5, 0, 0, 0},
{0, 9, 8, 0, 0, 0, 0, 6, 0},
{8, 0, 0, 0, 6, 0, 0, 0, 3},
{4, 0, 0, 8, 0, 3, 0, 0, 1},
{7, 0, 0, 0, 2, 0, 0, 0, 6},
{0, 6, 0, 0, 0, 0, 2, 8, 0},
{0, 0, 0, 4, 1, 9, 0, 0, 5},
{0, 0, 0, 0, 8, 0, 0, 7, 9}
};

while (!isSolved(grid)) {
printGrid(grid);
int row, col, num;
cout << "Enter row (0-8), column (0-8), and number (1-9): ";
cin >> row >> col >> num;

if (row >= 0 && row < N && col >= 0 && col < N && num >= 1 && num <= 9) {
if (grid[row][col] == 0 && isValid(grid, row, col, num)) {
grid[row][col] = num;
} else {
cout << "Invalid move. Try again." << endl;
}
} else {
cout << "Invalid input. Try again." << endl;
}
}

cout << "Congratulations! You solved the Sudoku." << endl;
printGrid(grid);

return 0;
}
Introduction to the Mini Project 2: Tic-Tac-Toe Using DSA and
C++
The goal of this mini project is to develop a console-based Tic-Tac-Toe game using C++ that
leverages fundamental data structures and algorithms (DSA). This project will help you
understand how to apply basic data structures and algorithms to solve a real-world problem and
improve your coding skills in C++.

Project Overview

Tic-Tac-Toe Game: Tic-Tac-Toe is a classic two-player game played on a 3x3 grid. Players
take turns marking a cell with either an 'X' or an 'O'. The player who places three of their marks
in a row (horizontally, vertically, or diagonally) wins. If all cells are filled without any player
winning, the game ends in a draw.
Key Features to Implement

1. Game Board Representation:


o Use a data structure to represent the 3x3 game board. For example, a 2D array or a
vector of vectors in C++.
2. Game Logic:
o Move Validation: Ensure that a move is valid (e.g., the chosen cell is empty and
within bounds).
o Win Checking: Check if there is a winner after each move. This involves checking
rows, columns, and diagonals.
o Draw Checking: Determine if the game is a draw (i.e., the board is full, and no
player has won).
3. Player Interaction:

o Allow two players to take turns making moves.


o Provide a way to display the current state of the board after each move.
o Announce the winner or if the game is a draw.
4. Game Restart:
o Provide an option to restart the game after it ends.

Data Structures and Algorithms Used

1. Array or Vector:
o Representation of the Board: Use a 2D array or a vector of vectors to store the
game state. Each cell can hold 'X', 'O', or be empty.
2. Functions:
o Display Board: A function to print the current state of the board.
o Make Move: A function to handle player moves and update the board.
o Check Win: A function to check if a player has won the game.
o Check Draw: A function to determine if the game has ended in a draw.
3. Input Validation: o Ensure valid user input (e.g., check if the chosen cell is available).
4. Game Loop:
o Implement the main game loop that alternates between players and checks for
game end conditions.

CODE:
#include <iostream>
#include <vector>

using namespace std;

// Constants for the board size


const int SIZE = 3;
const char EMPTY = ' ';
const char PLAYER_X = 'X';
const char PLAYER_O = 'O';

// Function prototypes
void printBoard(const vector<vector<char>>& board);
bool isBoardFull(const vector<vector<char>>& board);
bool checkWin(const vector<vector<char>>& board, char player);
bool makeMove(vector<vector<char>>& board, int row, int col, char player);
bool isMoveValid(const vector<vector<char>>& board, int row, int col);

int main() {
vector<vector<char>> board(SIZE, vector<char>(SIZE, EMPTY));
char currentPlayer = PLAYER_X;
bool gameWon = false;

while (!isBoardFull(board) && !gameWon) {
printBoard(board);
int row, col;

// Get move from player
cout << "Player " << currentPlayer << ", enter your move (row and column): ";
cin >> row >> col;

// Adjust for 0-based index
row--;
col--;

// Validate and make the move
if (isMoveValid(board, row, col)) {
makeMove(board, row, col, currentPlayer);
if (checkWin(board, currentPlayer)) {
gameWon = true;
printBoard(board);
cout << "Player " << currentPlayer << " wins!" << endl;
} else {
// Switch player
currentPlayer = (currentPlayer == PLAYER_X) ? PLAYER_O : PLAYER_X;
}
} else {
cout << "Invalid move, try again." << endl;
}
}

if (!gameWon) {
printBoard(board);
cout << "The game is a draw!" << endl;
}

return 0;
}

void printBoard(const vector<vector<char>>& board) {
for (int i = 0; i < SIZE; ++i) {
for (int j = 0; j < SIZE; ++j) {
cout << board[i][j];
if (j < SIZE - 1) cout << " | ";
}
cout << endl;
if (i < SIZE - 1) cout << "---------" << endl;
}
}

bool isBoardFull(const vector<vector<char>>& board) {
for (int i = 0; i < SIZE; ++i) {
for (int j = 0; j < SIZE; ++j) {
if (board[i][j] == EMPTY)
return false;
}
}
return true;
}

bool checkWin(const vector<vector<char>>& board, char player) {
// Check rows and columns
for (int i = 0; i < SIZE; ++i) {
if ((board[i][0] == player && board[i][1] == player && board[i][2] == player) ||
(board[0][i] == player && board[1][i] == player && board[2][i] == player)) {
return true;
}
}
// Check diagonals
if ((board[0][0] == player && board[1][1] == player && board[2][2] == player) ||
(board[0][2] == player && board[1][1] == player && board[2][0] == player)) {
return true;
}

return false;
}

bool makeMove(vector<vector<char>>& board, int row, int col, char player) {
if (row >= 0 && row < SIZE && col >= 0 && col < SIZE && board[row][col] == EMPTY) {
board[row][col] = player;
return true;
}
return false;
}

bool isMoveValid(const vector<vector<char>>& board, int row, int col) {
return row >= 0 && row < SIZE && col >= 0 && col < SIZE && board[row][col] == EMPTY;
}
Grade sheet of assignments/ marks card from the MOOC
Reference:

• https://www.geeksforgeeks.org/batch/dsa-self-paced-april?tab=Contest
• https://w3schools.com/dsa/dsa_intro.php
• https://en.wikipedia.org/wiki/Sorting_algorithm
• https://docs.python.org/3/howto/sorting.html
• https://www.javatpoint.com/ds-graph

---END---
