DSA REPORT
ON
Submitted by
SAMHITHA MADALA
Registration No 12214200
Program Name: B.Tech CSE (Cyber Security and Blockchain)
The Summer Internship opportunity I had with GeeksForGeeks was a great chance for learning and professional
development. I consider myself very lucky to have been given the opportunity to be a part of it. I am also
grateful for the chance to learn from the professionals who guided me through this internship period.
I express my deepest thanks to the Training and Placement Coordinator, School of Computer Application, Lovely
Professional University, for allowing me to take up this opportunity. I gratefully acknowledge his contribution
in giving the necessary advice and guidance that made my internship a good learning experience.
[SAMHITHA MADALA]
[12214200]
INTERNSHIP CERTIFICATE
INTRODUCTION
In today's rapidly evolving technological landscape, data structures and algorithms form the
backbone of efficient problem-solving and software development. Whether you are a beginner
stepping into the world of programming or an experienced developer looking to deepen your
understanding, mastering data structures and algorithms is crucial. These foundational concepts
are not only essential for writing optimized and effective code but also for acing technical
interviews, competing in coding competitions, and understanding the inner workings of various
software applications.
What I Learned:
• Data Structures: I gained an in-depth understanding of various data structures like
arrays, linked lists, stacks, queues, hash tables, trees, and graphs. I learned how to
implement, manipulate, and optimize these structures to solve real-world problems.
• Algorithms: The course covered essential algorithms, including sorting, searching,
recursion, dynamic programming, and graph traversal techniques. I learned how to design,
analyze, and optimize algorithms for maximum efficiency.
• Problem-Solving Skills: By working through numerous coding challenges and exercises,
I developed strong analytical and problem-solving skills, enabling me to approach
complex problems with confidence.
This self-paced course is designed to cater to learners of all levels, providing a flexible and
comprehensive learning experience. Whether you are preparing for a job interview, participating
in a coding competition, or simply looking to improve your programming skills, the course
offers the tools, resources, and support you need to succeed.
TECHNICAL LEARNING FROM THE COURSE
1. ALGORITHM ANALYSIS
Worst Case:
This represents the maximum time an algorithm will take to complete, given the worst
possible input of size n. It provides an upper bound on the running time.
Knowing the worst-case complexity is important for ensuring that an algorithm can
handle the most difficult scenarios within an acceptable time.
• Example: For a linear search in an unsorted array of size n, the worst-case time
complexity is O(n). This happens when the element being searched for is at the last
position or not present at all.
Average Case:
This measures the expected time an algorithm will take to complete, averaged over all possible
inputs of size n. It provides a realistic estimate of an algorithm's performance.
It helps understand the algorithm's behavior under typical conditions, rather than just in the
worst-case scenario.
Example: For a linear search, the average-case time complexity is O(n) as well, assuming the
element is equally likely to be located at any position or not present.
Best Case:
This represents the minimum time an algorithm will take to complete, given the best possible
input of size n.
Knowing the best-case complexity is useful to understand how well an algorithm can perform in
the most favorable conditions.
Example: For a linear search, the best-case time complexity is O(1). This occurs when the
element being searched for is at the first position.
Asymptotic Notation
Asymptotic notations are mathematical tools used to describe the behavior of algorithms in
terms of time or space complexity, as the input size (denoted as n) grows. These notations help
us analyze and compare the efficiency of algorithms, especially for large inputs. The primary
asymptotic notations are Big O, Big Theta, and Big Omega. Let's explore each one:
Big O notation provides an upper bound on the time (or space) complexity of an
algorithm. It describes the worst-case scenario by showing how the runtime increases as
the input size n grows.
To provide a guarantee that the algorithm will not run slower than a certain time, even in
the worst-case situation.
• How to Interpret: If f(n)=O(g(n)), it means that the function f(n) grows at most as fast as
g(n), up to a constant factor, for sufficiently large n.
Mathematically: f(n)=O(g(n)) if and only if there exist positive constants c and n0 such
that:
0 ≤ f(n) ≤ c g(n) for all n ≥ n0
Example: If f(n)=3n+2, then f(n)=O(n), since 3n+2 ≤ 4n for all n ≥ 2.
Big Theta notation provides a tight bound on the time (or space) complexity of an
algorithm. It describes both the upper and lower bounds, showing the exact rate of
growth.
To show that the algorithm's running time is guaranteed to grow at a certain rate, both in
the worst and best scenarios.
• How to Interpret: If f(n)=Θ(g(n)), it means that f(n) grows at the same rate as g(n), up
to constant factors, both above and below.
Mathematically: f(n)=Θ(g(n)) if and only if there exist positive constants c1, c2, and n0 such
that:
0 ≤ c1 g(n) ≤ f(n) ≤ c2 g(n) for all n ≥ n0
Example: If f(n)=5n²+2n+3, then f(n)=Θ(n²). This means f(n) grows at the same rate as
n², ignoring lower-order terms and constant factors.
Big Omega notation provides a lower bound on the time (or space) complexity of an
algorithm. It describes the best-case scenario by showing the minimum time the
algorithm will take.
To show that the algorithm's running time will not be faster than a certain time.
• How to Interpret: If f(n)=Ω(g(n)), it means that f(n) grows at least as fast as g(n) up to a
constant factor for sufficiently large n.
Mathematically: f(n)=Ω(g(n)) if and only if there exist positive constants c and n0 such
that:
0 ≤ c g(n) ≤ f(n) for all n ≥ n0
Example: If f(n)=3n+2, then f(n)=Ω(n). This implies that the running time of the
algorithm grows at least linearly with n.
Little o notation provides a strict upper bound on the time complexity. It means that f(n)
grows strictly slower than g(n) as n approaches infinity.
• Purpose: To show that the growth rate of one function is negligible compared to another.
• How to Interpret: If f(n)=o(g(n)), it means f(n) grows slower than g(n) as n becomes
large.
Mathematically: f(n)=o(g(n)) if and only if for all positive constants c, there exists an
n0 such that:
0 ≤ f(n) < c g(n) for all n ≥ n0
Example: If f(n)=3n, then f(n)=o(n²), since 3n becomes negligible compared to n².
Little omega notation provides a strict lower bound on the time complexity. It indicates
that f(n) grows strictly faster than g(n).
To show that f(n) grows faster than g(n) and not at the same rate.
• How to Interpret: If f(n)=ω(g(n)), it means f(n) grows faster than g(n) as n becomes large.
Mathematically: f(n)=ω(g(n)) if and only if for all positive constants c, there exists an n0
such that:
0 ≤ c g(n) < f(n) for all n ≥ n0
Example: If f(n)=n², then f(n)=ω(n), since n² grows strictly faster than n.
Space Complexity:
The term Space Complexity is misused for Auxiliary Space at many places. Following are the
correct definitions of Auxiliary Space and Space Complexity.
Auxiliary Space is the extra space or temporary space used by an algorithm.
Space Complexity of an algorithm is the total space taken by the algorithm with respect to the
input size. Space complexity includes both Auxiliary space and space used by input. For
example, if we want to compare standard sorting algorithms on the basis of space, then
Auxiliary Space would be a better criterion than Space Complexity. Merge Sort uses O(n)
auxiliary space, while Insertion Sort and Heap Sort use O(1) auxiliary space. The space
complexity of all these sorting algorithms is O(n), though.
Space complexity is a parallel concept to time complexity. If we need to create an array of size
n, this will require O(n) space. If we create a two-dimensional array of size n×n, this will
require O(n²) space.
Example:
int add(int n) {
    if (n <= 0) {
        return 0;
    }
    return n + add(n - 1);
}
1. add(4)
2. -> add(3)
3. -> add(2)
4. -> add(1)
5. -> add(0)
Each of these calls is added to the call stack and takes up actual memory.
So it takes O(n) space.
However, just because you have n calls in total doesn't mean it takes O(n) space.
Look at the function below:
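The function the report refers to is missing here; the following is a sketch of the kind of example usually given (the names pairSum and pairSumSequence are illustrative assumptions, not from the course):
int pairSum(int a, int b) {
    return a + b;
}

int pairSumSequence(int n) {
    int sum = 0;
    for (int i = 0; i < n; i++) {
        sum += pairSum(i, i + 1); // Each call returns before the next begins
    }
    return sum;
}
There are roughly n calls to pairSum(), but they never sit on the call stack at the same time, so the auxiliary space is O(1), not O(n).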
MATHEMATICS
BITWISE MAGIC
Bitwise operations are a powerful and efficient way to manipulate data at the binary level. They
operate directly on the individual bits of data, making them faster than arithmetic operations for
certain tasks. Here’s a brief overview of some common bitwise operations and their "magic"
tricks:
• AND (&): Sets each bit to 1 if both corresponding bits are 1.
o Example: 5 & 3 in binary: 0101 & 0011 = 0001 (result is 1)
• OR (|): Sets each bit to 1 if at least one of the corresponding bits is 1.
o Example: 5 | 3 in binary: 0101 | 0011 = 0111 (result is 7)
• XOR (^): Sets each bit to 1 if only one of the corresponding bits is 1 (exclusive OR).
o Example: 5 ^ 3 in binary: 0101 ^ 0011 = 0110 (result is 6)
int a = 5; // Binary: 0101
int b = 3; // Binary: 0011
• NOT (~): Inverts all the bits (0 becomes 1, and 1 becomes 0).
o Example: ~5 in binary: ~0101 = 1010 (result is -6 in two's complement)
• Left Shift (<<): Shifts bits to the left, filling with 0s on the right.
o Example: 5 << 1 shifts 0101 to 1010 (result is 10)
• Right Shift (>>): Shifts bits to the right, discarding bits on the right.
o Example: 5 >> 1 shifts 0101 to 0010 (result is 2)
Checking whether a number is odd or even with AND:
int x = 7;
if (x & 1) { // The lowest bit is 1 for odd numbers
    std::cout << "Odd" << std::endl;
} else {
    std::cout << "Even" << std::endl;
}
Swapping two numbers without a temporary variable using XOR:
int a = 5, b = 3;
a = a ^ b; // Step 1
b = a ^ b; // Step 2
a = a ^ b; // Step 3
std::cout << "a: " << a << ", b: " << b << std::endl; // a: 3, b: 5
• XOR all elements in an array. Numbers appearing an even number of times cancel out,
leaving the one with an odd occurrence.
• Example: [1, 2, 3, 2, 3, 1, 3]: 1 ^ 2 ^ 3 ^ 2 ^ 3 ^ 1 ^ 3 = 3
std::cout << "Odd occurring number: " << result << std::endl; // Output: 3
This efficiently counts the set bits in an integer by flipping the least significant set bit to 0 in
each iteration.
int countOnes(int n) {
    int count = 0;
    while (n) {
        n = n & (n - 1); // Clear the lowest set bit
        count++;
    }
    return count;
}
int n = 29; // Binary: 11101
std::cout << "Number of 1s: " << countOnes(n) << std::endl; // Output: 4
Repeatedly right shift until the number becomes 0, or use a more efficient method with
logarithms or bit manipulation to find the MSB position.
int findMSB(int n) {
    if (n == 0) return 0; // No set bit
    int pos = -1;
    while (n) {           // Right shift until the number becomes 0
        n >>= 1;
        pos++;
    }
    return 1 << pos;      // Value of the most significant bit
}

int n = 18; // Binary: 10010
std::cout << "Most significant bit: " << findMSB(n) << std::endl; // Output: 16
Performance Advantage:
Bitwise operations are often faster than arithmetic operations because they are directly
supported by the processor at the hardware level. This makes them useful in
performance-critical applications.
RECURSION
Recursion is a powerful programming technique where a function calls itself to solve smaller
instances of the same problem. It is widely used in algorithms, data structures, and problem-
solving in general.
Understanding Recursion:
• Base Case: The condition under which the recursion stops. It prevents the function from
calling itself indefinitely.
• Recursive Case: The part of the function where it calls itself with a modified argument,
moving towards the base case.
1. Factorial:
The factorial of a non-negative integer n (denoted as n!) is the product of all positive integers
less than or equal to n. The recursive definition is:
n! = n × (n−1)!, with 0! = 1
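A direct C++ translation of this definition (a minimal sketch, consistent with the Fibonacci and sum examples below, not taken from the course):
int factorial(int n) {
    if (n == 0) // Base case: 0! = 1
        return 1;
    return n * factorial(n - 1); // Recursive case: n! = n * (n-1)!
}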
2. Fibonacci Sequence:
F(n) = F(n−1) + F(n−2), with F(0) = 0 and F(1) = 1
int fibonacci(int n) {
    if (n == 0) // Base case
        return 0;
    else if (n == 1) // Base case
        return 1;
    else
        return fibonacci(n - 1) + fibonacci(n - 2); // Recursive case
}

int main() {
    int num = 6;
    cout << "Fibonacci number at position " << num << " is " << fibonacci(num) << endl; // Output: 8
    return 0;
}
3. Sum of First n Natural Numbers:
sum(n) = n + sum(n−1), with sum(0) = 0
int sum(int n) {
    if (n == 0) // Base case
        return 0;
    else
        return n + sum(n - 1); // Recursive case
}
int main() {
int num = 10;
cout << "Sum of first " << num << " natural numbers is " << sum(num) << endl; // Output:
55
return 0;
}
4. Power Function (Exponentiation):
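The power() function called in main() below is not shown in the report; the following is a minimal sketch based on the recursive definition power(b, e) = b × power(b, e−1), with power(b, 0) = 1 (the body is an assumption):
int power(int base, int exponent) {
    if (exponent == 0) // Base case: b^0 = 1
        return 1;
    return base * power(base, exponent - 1); // Recursive case
}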
int main() {
int base = 2, exponent = 3;
cout << base << " raised to power " << exponent << " is " << power(base, exponent) << endl;
// Output: 8
return 0;
}
Applications of Recursion:
1. Sorting Algorithms: Algorithms like Quick Sort and Merge Sort use recursion to sort
elements.
2. Tree Traversals: Pre-order, in-order, and post-order traversals of binary trees are
naturally implemented using recursion.
3. Backtracking: Problems like solving a maze, N-Queens, and Sudoku use recursion to
explore different possibilities.
4. Divide and Conquer: Algorithms that split problems into smaller sub-problems (like
binary search) often use recursion.
5. Dynamic Programming: Some problems use a recursive approach with memoization to
optimize repetitive sub-problem calculations.
ARRAYS
Arrays are a fundamental data structure in programming that allow you to store a fixed-size
sequential collection of elements of the same type.
An array is a collection of elements, all of the same type, stored in contiguous memory
locations. It allows you to store multiple items of the same type using a single variable name,
with each item being accessible via its index (or position) in the array.
1. Fixed Size: The size of an array is determined when it is created and cannot be changed.
This means that if you define an array to hold 10 elements, it will always be able to hold
exactly 10 elements.
2. Indexing: Arrays use zero-based indexing. This means that the first element is accessed
with index 0, the second with index 1, and so on up to n-1 for an array of size n.
3. Contiguous Memory: Arrays store their elements in contiguous memory locations,
which allows for efficient access to the elements using indices.
4. Homogeneous Elements: All elements in an array must be of the same type (e.g., all
integers, all characters, etc.).
Types of Arrays:
1. Single-Dimensional Arrays: These are the most common type of arrays. They represent
a list of items of the same type, like a list of numbers or a list of names.
2. Multi-Dimensional Arrays: These arrays can have more than one dimension. The most
common is the two-dimensional array, which can be thought of as a table or matrix.
Three-dimensional arrays and higher dimensions are also possible, used for more
complex data structures.
3. Character Arrays: Special type of arrays that hold characters, often used to store strings
of text.
1. Traversal: Visiting each element in the array to perform some action, like printing the
values or processing them.
2. Insertion: Adding a new element to the array. In static arrays, this typically involves
shifting elements to make space, which can be time-consuming.
3. Deletion: Removing an element from the array, which usually requires shifting elements
to fill the gap.
4. Searching: Finding the position of a particular element in the array. Linear search
(checking each element one by one) and binary search (efficiently searching sorted
arrays) are common search methods.
int findElement(int arr[], int size, int key) {
    for (int i = 0; i < size; i++) {
        if (arr[i] == key) {
            return i; // Return the index if found
        }
    }
    return -1; // Return -1 if not found
}

int arr[] = {10, 20, 30, 40, 50};
int size = sizeof(arr) / sizeof(arr[0]);
int index = findElement(arr, size, 30); // Output: 2
5. Sorting: Arranging the elements of the array in a certain order (e.g., ascending or
descending). Common sorting algorithms include bubble sort, selection sort, merge sort,
and quicksort.
Advantages of Arrays:
1. Fast Access: Arrays provide fast and direct access to their elements using indices, which
makes them suitable for applications where quick data retrieval is important.
2. Ease of Use: They are straightforward to declare and use, making them suitable for
beginners.
3. Memory Efficiency: Arrays have low overhead since they store data in contiguous
memory locations.
Disadvantages of Arrays:
1. Fixed Size: The size of an array is fixed upon creation, which can lead to memory
wastage if the array is not fully utilized, or memory shortage if more elements are needed.
2. Insertion and Deletion: These operations can be inefficient because they often require
shifting elements, especially in large arrays.
3. Lack of Flexibility: Unlike dynamic data structures like linked lists, arrays do not allow
easy resizing or dynamic memory allocation.
Applications of Arrays:
• Storing and managing data: Arrays are used for storing collections of data such as lists
of names, scores, or other collections of items.
• Implementing other data structures: Many complex data structures (e.g., stacks,
queues, hash tables) are implemented using arrays.
• Matrix operations: Arrays are used to store and perform operations on matrices in
scientific computing and graphics.
• Searching and sorting algorithms: Arrays are fundamental to the implementation of
various searching and sorting techniques.
SEARCHING
Searching is a fundamental operation in computer science used to find the position of a specific
element within a data structure, such as an array. Various searching algorithms are used
depending on the type of data structure and the requirements of the search operation. Here’s an
overview of the common searching algorithms:
1. Linear Search
Linear search (or sequential search) involves checking each element in the array sequentially
until the desired element is found or the end of the array is reached.
Time Complexity:
• Worst Case: O(n) (where n is the number of elements in the array)
• Best Case: O(1) (if the element is found at the first position)
When to Use:
• Small or unsorted arrays, or when only a single search is needed
2. Binary Search
Binary search is an efficient algorithm for finding an element in a sorted array by repeatedly
dividing the search interval in half.
Time Complexity:
• Worst Case: O(log n)
• Best Case: O(1) (if the middle element is the target)
When to Use:
• Sorted arrays where fast lookups are required
Example:
1. Compare the target with the middle element.
2. If it matches, return the index; otherwise, repeat on the half that can contain the target.
3. Continue until the element is found or the interval is empty.
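A minimal iterative binary search sketch in C++ (an illustration, not code from the course):
int binarySearch(int arr[], int size, int key) {
    int low = 0, high = size - 1;
    while (low <= high) {
        int mid = low + (high - low) / 2; // Avoids overflow of low + high
        if (arr[mid] == key)
            return mid;        // Found
        else if (arr[mid] < key)
            low = mid + 1;     // Search the right half
        else
            high = mid - 1;    // Search the left half
    }
    return -1; // Not found
}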
3. Jump Search
Jump search is used on sorted arrays. It divides the array into blocks and performs a linear
search within the block where the target element might be found.
Time Complexity:
• Worst Case: O(√n) (with a block size of √n)
When to Use:
• Sorted arrays where fewer comparisons than linear search are desired
Example:
1. Jump ahead by a fixed number of steps (e.g., square root of the array length).
2. Perform a linear search within the block where the target might be.
3. If the target is found, return the index; otherwise, continue jumping.
4. Interpolation Search
Interpolation search is an improvement over binary search for uniformly distributed data. It
estimates the position of the target element based on the value of the element and the target.
Time Complexity:
• Worst Case: O(n) (in the worst case, similar to linear search)
• Best Case: O(log log n)
When to Use:
• Sorted arrays whose values are roughly uniformly distributed
Example:
1. Estimate the position of the target element based on its value and the values at the
bounds.
2. Compare the target with the estimated position.
3. If it matches, return the index; otherwise, adjust the bounds and repeat.
5. Exponential Search
Exponential search is used on sorted arrays. It first finds a range where the target element might
be located and then performs binary search within that range.
Time Complexity:
• Worst Case: O(log n)
• Best Case: O(1)
When to Use:
• Sorted arrays, especially when the target is likely near the beginning or the array size is unbounded
Example:
1. Start with the first element and double the index until the target element is less than or
equal to the element at that index.
2. Perform binary search within the identified range.
SORTING
1. Bubble Sort
Description: Bubble Sort is a simple comparison-based algorithm that repeatedly steps through
the list, compares adjacent elements, and swaps them if they are in the wrong order.
Time Complexity:
• Worst Case: O(n²)
• Best Case: O(n) (when the array is already sorted and an early-exit check is used)
When to Use:
• Small datasets
• Educational purposes or simple implementations
Example:
1. Compare each pair of adjacent elements.
2. Swap them if they are in the wrong order.
3. Repeat the process until no more swaps are needed.
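A short bubble sort sketch with the early-exit optimization (illustrative, not course code):
#include <algorithm> // for std::swap

void bubbleSort(int arr[], int n) {
    for (int i = 0; i < n - 1; i++) {
        bool swapped = false;
        for (int j = 0; j < n - 1 - i; j++) {
            if (arr[j] > arr[j + 1]) {      // Adjacent pair in the wrong order
                std::swap(arr[j], arr[j + 1]);
                swapped = true;
            }
        }
        if (!swapped) break; // No swaps in a full pass: array is sorted
    }
}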
2. Selection Sort
Description: Selection Sort divides the array into two parts: a sorted section and an unsorted
section. It repeatedly selects the smallest (or largest) element from the unsorted section and
moves it to the end of the sorted section.
Time Complexity:
• Worst Case: O(n²)
• Best Case: O(n²) (all comparisons are made regardless of initial order)
When to Use:
• Small datasets
• When memory write operations are costly
Example:
1. Find the smallest element in the unsorted section.
2. Swap it with the first element of the unsorted section.
3. Move the sorted/unsorted boundary one position right and repeat.
3. Insertion Sort
Description: Insertion Sort builds the final sorted array one item at a time by repeatedly
picking the next item and inserting it into its correct position within the already sorted section.
Time Complexity:
• Worst Case: O(n²)
• Best Case: O(n) (when the array is already sorted)
When to Use:
• Small datasets
• Partially sorted datasets
Example:
1. Take the next element from the unsorted section.
2. Shift larger elements in the sorted section one position to the right.
3. Insert the element into its correct position and repeat.
4. Merge Sort
Description: Merge Sort is a divide-and-conquer algorithm that divides the array into two
halves, sorts each half, and then merges the sorted halves to produce the final sorted array.
Time Complexity:
• Worst Case: O(n log n)
• Best Case: O(n log n)
When to Use:
• Large datasets
• When stable sorting is required
Example:
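A merge sort sketch using a temporary buffer (illustrative; the <= comparison in merge() is what keeps the sort stable). It would be called as mergeSort(v, 0, v.size() - 1) on a vector<int> v:
#include <vector>
using namespace std;

void merge(vector<int>& a, int lo, int mid, int hi) {
    vector<int> tmp;
    int i = lo, j = mid + 1;
    while (i <= mid && j <= hi)
        tmp.push_back(a[i] <= a[j] ? a[i++] : a[j++]); // <= keeps equal elements in order
    while (i <= mid) tmp.push_back(a[i++]);
    while (j <= hi) tmp.push_back(a[j++]);
    for (int k = 0; k < (int)tmp.size(); k++)
        a[lo + k] = tmp[k];
}

void mergeSort(vector<int>& a, int lo, int hi) {
    if (lo >= hi) return;           // Zero or one element: already sorted
    int mid = lo + (hi - lo) / 2;
    mergeSort(a, lo, mid);          // Sort the left half
    mergeSort(a, mid + 1, hi);      // Sort the right half
    merge(a, lo, mid, hi);          // Merge the two sorted halves
}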
5. Quick Sort
Description: Quick Sort is a divide-and-conquer algorithm that picks an element as a pivot and
partitions the array into elements less than the pivot and elements greater than the pivot. It then
recursively sorts the partitions.
Time Complexity:
• Worst Case: O(n²) (rare, with consistently poor pivot choices)
• Average Case: O(n log n)
When to Use:
• Large datasets
• When average-case performance is important
Example:
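A quick sort sketch using Lomuto partitioning with the last element as the pivot (an illustration, not course code):
#include <algorithm> // for std::swap
#include <vector>
using namespace std;

int partition(vector<int>& a, int lo, int hi) {
    int pivot = a[hi];              // Last element as the pivot
    int i = lo - 1;
    for (int j = lo; j < hi; j++)
        if (a[j] < pivot)
            swap(a[++i], a[j]);     // Move smaller elements to the left
    swap(a[i + 1], a[hi]);          // Put the pivot between the partitions
    return i + 1;
}

void quickSort(vector<int>& a, int lo, int hi) {
    if (lo >= hi) return;
    int p = partition(a, lo, hi);
    quickSort(a, lo, p - 1);        // Sort elements left of the pivot
    quickSort(a, p + 1, hi);        // Sort elements right of the pivot
}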
6. Heap Sort
Description: Heap Sort is based on the heap data structure. It first builds a max-heap (or
min-heap), then repeatedly extracts the maximum (or minimum) element from the heap and
restores the heap property.
Time Complexity:
• Worst Case: O(n log n)
• Best Case: O(n log n)
When to Use:
• Large datasets
• When in-place sorting is required
Example:
7. Counting Sort
Description: Counting Sort is a non-comparison algorithm that counts the occurrences of each
distinct value in a known range and uses those counts to place elements directly into their
sorted positions.
Time Complexity:
• Worst Case: O(n + k) (where k is the range of input values)
• Best Case: O(n + k)
When to Use:
• Integer data within a small, known range
Example:
1. Count how many times each value occurs.
2. Compute running totals of the counts to find each value's final position.
3. Place each element at its computed position.
8. Radix Sort
Description: Radix Sort sorts numbers digit by digit, applying a stable sort (typically Counting
Sort) on each digit from the least significant to the most significant.
Time Complexity:
• Worst Case: O(n k) (where k is the number of digits in the largest number)
• Best Case: O(n k)
When to Use:
• Large sets of integers (or fixed-length keys) where the number of digits k is small
Example Process:
1. Sort the numbers by their least significant digit using a stable sort.
2. Repeat for each more significant digit until all digits have been processed.
Summary:
• Bubble, Selection, and Insertion Sort are suitable for small datasets or educational
purposes.
• Merge Sort and Quick Sort are preferred for larger datasets due to their efficient O(n
log n) time complexity.
• Heap Sort is useful when you need in-place sorting with O(n log n) complexity.
• Counting and Radix Sort are optimal for specific use cases with known constraints.
Matrices
Basic Concepts:
1. Matrix Representation:
o Matrix Dimensions: Defined by the number of rows (m) and columns (n). For
example, a 3x4 matrix has 3 rows and 4 columns.
o Element Access: Each element is accessed using two indices: row and column
(e.g., A[i][j]).
2. Matrix Operations:
• Addition: Matrices of the same dimensions are added element-wise: C[i][j] = A[i][j] + B[i][j].
• Multiplication: An m×n matrix multiplied by an n×p matrix yields an m×p matrix, where
C[i][j] is the sum of A[i][k] × B[k][j] over k.
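A small worked multiplication example following the C[i][j] formula above (illustrative):
#include <iostream>
using namespace std;

int main() {
    int A[2][3] = {{1, 2, 3}, {4, 5, 6}};      // 2x3 matrix
    int B[3][2] = {{7, 8}, {9, 10}, {11, 12}}; // 3x2 matrix
    int C[2][2] = {};                          // 2x2 result, zero-initialized

    // C[i][j] = sum over k of A[i][k] * B[k][j]
    for (int i = 0; i < 2; i++)
        for (int j = 0; j < 2; j++)
            for (int k = 0; k < 3; k++)
                C[i][j] += A[i][k] * B[k][j];

    for (int i = 0; i < 2; i++) {
        for (int j = 0; j < 2; j++)
            cout << C[i][j] << " ";
        cout << endl;
    }
    // Output:
    // 58 64
    // 139 154
    return 0;
}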
Hashing
Hashing is a technique used to map data to a fixed-size value or index, known as a hash code,
using a hash function. It is commonly used in hash tables for efficient data retrieval.
Basic Concepts:
1. Hash Function:
o A hash function takes an input (or key) and produces a hash code, which determines
the index in a hash table where the data will be stored.
o Good hash functions distribute keys uniformly across the hash table to minimize collisions.
2. Hash Table:
o A data structure that uses hashing to store and retrieve data efficiently.
o Consists of an array of buckets or slots, where each slot can store multiple items in case of
collisions.
3. Collision Handling:
o Chaining: Uses linked lists to handle collisions by storing multiple elements in the
same bucket.
o Open Addressing: Finds another slot within the table using probing methods like
linear probing, quadratic probing, or double hashing.
4. Load Factor:
o The load factor is the ratio of the number of elements to the number of buckets in the
hash table. A higher load factor increases the likelihood of collisions.
5. Applications:
o Database Indexing: Used for quick data retrieval.
o Caching: Fast access to frequently used data.
o Data Deduplication: Identifying duplicate data by comparing hash values.
Example of Hashing:
hash(key) = key % table_size
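A minimal hash table with chaining, using the modulo hash function above (an illustrative sketch, not a production design):
#include <iostream>
#include <list>
#include <vector>
using namespace std;

class HashTable {
    vector<list<int>> buckets; // Each bucket chains colliding keys
public:
    HashTable(int size) : buckets(size) {}
    int hash(int key) const { return key % (int)buckets.size(); }
    void insert(int key) { buckets[hash(key)].push_back(key); }
    bool contains(int key) const {
        for (int k : buckets[hash(key)])
            if (k == key) return true;
        return false;
    }
};

int main() {
    HashTable table(7);
    table.insert(10); // 10 % 7 = 3
    table.insert(17); // 17 % 7 = 3: collision, stored in the same chain
    cout << table.contains(17) << endl; // Output: 1 (found)
    return 0;
}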
Linked List
A linked list is a linear data structure in which elements (nodes) are stored in non-contiguous
memory locations; each node holds its data and a link (pointer) to the next node.
Types:
• Singly Linked List: Each node has a single link to the next node.
• Doubly Linked List: Each node has two links, one to the next node and one to the
previous node.
• Circular Linked List: The last node links back to the first node, forming a circle.
Operations:
• Traversal, insertion, deletion, and search, all performed by following the links between nodes.
Advantages:
• Dynamic size; efficient insertion and deletion without shifting elements.
Disadvantages:
• No random access by index; extra memory is needed for the links.
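A minimal singly linked list sketch showing the node structure and traversal (illustrative):
#include <iostream>
using namespace std;

// A node holds data and a link to the next node
struct Node {
    int data;
    Node* next;
    Node(int d) : data(d), next(nullptr) {}
};

int main() {
    // Build the list 1 -> 2 -> 3
    Node* head = new Node(1);
    head->next = new Node(2);
    head->next->next = new Node(3);

    // Traversal: follow the links until the end of the list
    for (Node* cur = head; cur != nullptr; cur = cur->next)
        cout << cur->data << " "; // Output: 1 2 3
    cout << endl;

    // Free the nodes
    while (head != nullptr) {
        Node* tmp = head;
        head = head->next;
        delete tmp;
    }
    return 0;
}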
Stack
A stack is a linear data structure that follows the Last In, First Out (LIFO) principle. Elements
are added and removed from one end, called the top of the stack.
Operations:
• Push: Add an element to the top.
• Pop: Remove the top element.
• Peek/Top: View the top element without removing it.
Applications:
• Function Call Management: Keeping track of function calls and local variables in
programming languages.
• Expression Evaluation: Evaluating mathematical expressions and converting between
infix, postfix, and prefix notations.
• Undo Mechanisms: Implementing undo features in software.
Advantages:
• Simple implementation.
• Efficient for managing data with LIFO order.
Disadvantages:
• Access is restricted to the top element only.
3. Queue
Description: A queue is a linear data structure that follows the First In, First Out (FIFO)
principle. Elements are added at the rear (or end) and removed from the front.
Operations:
• Enqueue: Add an element at the rear.
• Dequeue: Remove the element at the front.
• Front: View the element at the front without removing it.
Applications:
• Task scheduling, buffering of data streams, and breadth-first traversal of graphs and trees.
Advantages:
• Simple implementation; preserves the order in which items arrive.
Disadvantages:
• Limited access to elements (only the front can be accessed for removal).
4. Deque
Description: A deque is a linear data structure that allows elements to be added or removed
from both ends (front and rear). It combines features of both stacks and queues.
Operations:
• Insert and remove at the front (push_front/pop_front) and at the rear (push_back/pop_back).
Applications:
• Sliding Window Problems: Handling problems where you need to maintain a window of
elements with efficient insertion and deletion.
• Deque Operations: Useful in scenarios where both ends of the data structure need to be
accessed.
Advantages:
• Flexible insertion and removal at both ends.
Disadvantages:
• Slightly more complex to implement than a plain stack or queue.
Tree
A tree is a hierarchical data structure consisting of nodes connected by edges. It has a root node
and zero or more subtrees, each represented as a tree itself. Trees are used to represent
hierarchical relationships and organize data in a structured way.
Key Terms:
• Root: The topmost node of the tree.
• Parent/Child: A node directly above/below another node.
• Leaf: A node with no children.
• Height: The length of the longest path from the root to a leaf.
Common Types:
• Binary Tree: Each node has at most two children (left and right).
• Binary Search Tree (BST): A binary tree where each node’s left subtree contains values
less than the node, and the right subtree contains values greater than the node.
• N-ary Tree: A tree where each node can have up to N children.
Binary Search Tree (BST)
A binary search tree is a binary tree that maintains a specific ordering property to allow efficient
search, insertion, and deletion operations.
Properties:
• Left Subtree: Contains only nodes with values less than the current node.
• Right Subtree: Contains only nodes with values greater than the current node.
• No Duplicate Values: Typically, BSTs do not allow duplicate values.
Operations:
• Search: Start at the root and recursively search the left or right subtree based on
comparison.
• Insertion: Place the new value in the correct position while maintaining the BST
property.
• Deletion: Remove a node and adjust the tree to preserve the BST property.
• Traversal: Inorder (left, root, right), preorder (root, left, right), postorder (left, right,
root).
Applications:
• Efficient searching, maintaining sorted data, and implementing sets and maps.
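A small BST search sketch following the ordering property above (illustrative):
#include <iostream>
using namespace std;

struct TreeNode {
    int val;
    TreeNode* left;
    TreeNode* right;
    TreeNode(int v) : val(v), left(nullptr), right(nullptr) {}
};

// Go left for smaller keys, right for larger ones
bool search(TreeNode* root, int key) {
    if (root == nullptr) return false; // Reached a leaf: not present
    if (root->val == key) return true;
    return key < root->val ? search(root->left, key)
                           : search(root->right, key);
}

int main() {
    TreeNode* root = new TreeNode(8);  //      8
    root->left = new TreeNode(3);      //     / \
    root->right = new TreeNode(10);    //    3   10
    cout << search(root, 10) << endl;  // Output: 1 (found)
    return 0;
}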
Heap
A heap is a specialized tree-based data structure that satisfies the heap property. It can be a max-
heap or min-heap.
Heap Properties:
• Max-Heap: The key at each node is greater than or equal to the keys of its children. The
maximum key is at the root.
• Min-Heap: The key at each node is less than or equal to the keys of its children. The
minimum key is at the root.
Operations:
• Insert: Add a new key while maintaining the heap property (O(log n)).
• Extract-Max/Extract-Min: Remove and return the root (O(log n)).
• Peek: Read the root without removing it (O(1)).
Applications:
• Priority queues.
• Heap sort algorithm.
• Graph algorithms like Dijkstra's shortest path.
Graph
A graph is a collection of nodes (vertices) and edges connecting pairs of nodes. Graphs can
represent various structures and relationships.
Types:
• Directed Graph (Digraph): Edges have a direction, going from one vertex to another.
• Undirected Graph: Edges have no direction, and the connection is mutual.
• Weighted Graph: Edges have weights or costs associated with them.
• Unweighted Graph: Edges have no weights.
Key Concepts:
• Vertex: A node of the graph.
• Edge: A connection between two vertices.
• Degree: The number of edges incident to a vertex.
• Path: A sequence of vertices connected by edges.
• Cycle: A path that starts and ends at the same vertex.
Algorithms:
• Depth-First Search (DFS): Explores as far as possible along each branch before
backtracking.
• Breadth-First Search (BFS): Explores all neighbors at the present depth before moving
on to nodes at the next depth level.
• Dijkstra’s Algorithm: Finds the shortest path from a source node to all other nodes in a
weighted graph.
• Kruskal’s Algorithm: Finds the Minimum Spanning Tree (MST) for a weighted graph.
• Prim’s Algorithm: Another algorithm for finding the MST.
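As an illustration of graph traversal, a recursive DFS over an adjacency list (a sketch, not course code):
#include <iostream>
#include <vector>
using namespace std;

// Recursive DFS: mark the node, then visit each unvisited neighbor
void dfs(int node, const vector<vector<int>>& adj, vector<bool>& visited) {
    visited[node] = true;
    cout << node << " ";
    for (int next : adj[node]) {
        if (!visited[next])
            dfs(next, adj, visited);
    }
}

int main() {
    // Undirected graph with edges 0-1, 0-2, 1-3
    vector<vector<int>> adj = {{1, 2}, {0, 3}, {0}, {1}};
    vector<bool> visited(adj.size(), false);
    dfs(0, adj, visited); // Output: 0 1 3 2
    return 0;
}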
5. Greedy Algorithms
Greedy algorithms build up a solution piece by piece, always choosing the next piece that offers
the most immediate benefit.
Characteristics:
• Local Optimum: At each step, the algorithm makes the choice that seems best at the
moment.
• Global Optimum: The goal is to find a globally optimal solution, though not all
problems can be solved optimally with a greedy approach.
Examples:
• Fractional Knapsack Problem: Select items to maximize total value while staying
within weight limits.
• Huffman Coding: Used for lossless data compression by assigning variable-length codes
to input characters.
• Activity Selection Problem: Select the maximum number of activities that don't overlap.
Applications:
• Scheduling problems.
• Network design.
• Optimization problems with specific properties.
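A sketch of the activity selection strategy mentioned above: sort activities by finish time, then repeatedly take the first one that starts after the last chosen activity finishes (the function name maxActivities is illustrative):
#include <algorithm>
#include <iostream>
#include <utility>
#include <vector>
using namespace std;

int maxActivities(vector<pair<int,int>> acts) { // Each pair is {start, finish}
    sort(acts.begin(), acts.end(),
         [](const pair<int,int>& a, const pair<int,int>& b) {
             return a.second < b.second; // Earliest finish time first
         });
    int count = 0, lastFinish = -1;
    for (const auto& a : acts) {
        if (a.first >= lastFinish) { // Does not overlap the last chosen activity
            count++;
            lastFinish = a.second;
        }
    }
    return count;
}

int main() {
    vector<pair<int,int>> acts = {{1,3}, {2,5}, {4,7}, {1,8}, {5,9}, {8,10}};
    cout << maxActivities(acts) << endl; // Output: 3 (e.g., {1,3}, {4,7}, {8,10})
    return 0;
}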
6. Dynamic Programming
Dynamic programming is a technique used to solve problems by breaking them down into
simpler subproblems and storing the results to avoid redundant computations.
Characteristics:
• Top-Down (Memoization): Solve the problem recursively and store the results of
subproblems to avoid redundant calculations.
• Bottom-Up (Tabulation): Solve all subproblems iteratively and build up solutions to
larger problems using previously computed results.
Examples:
• Fibonacci numbers, 0/1 knapsack, and longest common subsequence.
Applications:
• Algorithm optimization.
• Resource allocation problems.
• Decision-making in various fields such as economics and operations research.
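A top-down (memoization) sketch for the Fibonacci numbers, showing how stored results avoid redundant computation (illustrative):
#include <iostream>
#include <vector>
using namespace std;

// Each subproblem is computed once and then reused from memo
long long fib(int n, vector<long long>& memo) {
    if (n <= 1) return n;              // Base cases: F(0)=0, F(1)=1
    if (memo[n] != -1) return memo[n]; // Reuse the stored result
    memo[n] = fib(n - 1, memo) + fib(n - 2, memo);
    return memo[n];
}

int main() {
    int n = 50;
    vector<long long> memo(n + 1, -1);
    cout << fib(n, memo) << endl; // Output: 12586269025
    return 0;
}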
Introduction to the Mini Project 1: Sudoku Solver Using DSA and C++
The goal of this mini project is to develop a console-based Sudoku solver using C++. This
project leverages fundamental data structures and algorithms (DSA) to efficiently solve Sudoku
puzzles. It will help us understand how backtracking works, apply basic data structures, and
improve our problem-solving skills in C++.
Project Overview
Sudoku Solver: Sudoku is a logic-based number-placement puzzle played on a 9x9 grid. The
goal is to fill the grid such that each row, each column, and each 3x3 subgrid contains the digits
1 to 9 without repetition. The solver will use backtracking to find a valid solution.
Key Features to Implement
1. Board Representation:
o Represent the 9x9 grid so that rows, columns, and 3x3 subgrids can be checked easily.
2. Game Logic:
o Move Validation: Ensure that a number placed in a cell follows Sudoku rules.
o Backtracking Algorithm: Implement a recursive function to fill the board with
valid numbers.
o Solution Display: Show the solved Sudoku grid once the solution is found.
3. User Interaction:
o Allow the user to input an incomplete Sudoku puzzle.
o Display the board before and after solving.
o Inform the user if the puzzle has no valid solution.
1. Array:
o Representation of the Board: Use a 2D array to store the Sudoku grid, where each
cell contains a number from 1 to 9 or remains empty.
2. Functions:
o printGrid(): A function to display the current state of the board.
o isValid(): A function to check whether a number can be placed in a given cell.
o solveSudoku(): A recursive function implementing backtracking to solve the
puzzle.
3. Backtracking Algorithm:
o Try placing a number from 1 to 9 in an empty cell.
o Check if the placement is valid.
o Recursively solve the rest of the board.
o If no solution is found, backtrack and try another number.
4. Game Execution:
o Load a partially filled Sudoku board.
o Solve the board using backtracking.
o Display the solved puzzle or inform the user if no solution exists.
CODE:
#include <iostream>
using namespace std;
#define N 9

// Helper functions used by main(), sketched after the program below
void printGrid(int grid[N][N]);
bool isValid(int grid[N][N], int row, int col, int num);
bool isSolved(int grid[N][N]);

int main() {
int grid[N][N] = {
{5, 3, 0, 0, 7, 0, 0, 0, 0},
{6, 0, 0, 1, 9, 5, 0, 0, 0},
{0, 9, 8, 0, 0, 0, 0, 6, 0},
{8, 0, 0, 0, 6, 0, 0, 0, 3},
{4, 0, 0, 8, 0, 3, 0, 0, 1},
{7, 0, 0, 0, 2, 0, 0, 0, 6},
{0, 6, 0, 0, 0, 0, 2, 8, 0},
{0, 0, 0, 4, 1, 9, 0, 0, 5},
{0, 0, 0, 0, 8, 0, 0, 7, 9}
};
while (!isSolved(grid)) {
printGrid(grid);
int row, col, num;
cout << "Enter row (0-8), column (0-8), and number (1-9): ";
cin >> row >> col >> num;
if (row >= 0 && row < N && col >= 0 && col < N && num >= 1 && num <= 9) {
if (grid[row][col] == 0 && isValid(grid, row, col, num)) {
grid[row][col] = num;
} else {
cout << "Invalid move. Try again." << endl;
}
} else {
cout << "Invalid input. Try again." << endl;
}
}
return 0;
}
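The bodies of printGrid(), isValid(), and isSolved(), together with the solveSudoku() backtracking routine described in the outline, are not included in the report; the following is a minimal sketch (the implementations are assumptions consistent with the descriptions above):
// Print the 9x9 grid, using '.' for empty cells
void printGrid(int grid[N][N]) {
    for (int r = 0; r < N; r++) {
        for (int c = 0; c < N; c++) {
            if (grid[r][c] == 0) cout << ". ";
            else cout << grid[r][c] << " ";
        }
        cout << endl;
    }
}

// Check Sudoku rules: num must not repeat in the row, column, or 3x3 subgrid
bool isValid(int grid[N][N], int row, int col, int num) {
    for (int i = 0; i < N; i++)
        if (grid[row][i] == num || grid[i][col] == num) return false;
    int boxRow = row - row % 3, boxCol = col - col % 3;
    for (int r = 0; r < 3; r++)
        for (int c = 0; c < 3; c++)
            if (grid[boxRow + r][boxCol + c] == num) return false;
    return true;
}

// The board is solved when no cell is empty
bool isSolved(int grid[N][N]) {
    for (int r = 0; r < N; r++)
        for (int c = 0; c < N; c++)
            if (grid[r][c] == 0) return false;
    return true;
}

// Backtracking solver as described in the project outline; it could replace
// the manual input loop in main() by calling solveSudoku(grid) once
bool solveSudoku(int grid[N][N]) {
    for (int r = 0; r < N; r++) {
        for (int c = 0; c < N; c++) {
            if (grid[r][c] == 0) {
                for (int num = 1; num <= 9; num++) {
                    if (isValid(grid, r, c, num)) {
                        grid[r][c] = num;             // Try this number
                        if (solveSudoku(grid)) return true;
                        grid[r][c] = 0;               // Backtrack
                    }
                }
                return false; // No number fits here: backtrack further
            }
        }
    }
    return true; // No empty cell left: solved
}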
Introduction to the Mini Project 2: Tic-Tac-Toe Using DSA and
C++
The goal of this mini project is to develop a console-based Tic-Tac-Toe game using C++ that
leverages fundamental data structures and algorithms (DSA). This project will help you
understand how to apply basic data structures and algorithms to solve a real-world problem and
improve your coding skills in C++.
Project Overview
Tic-Tac-Toe Game: Tic-Tac-Toe is a classic two-player game played on a 3x3 grid. Players
take turns marking a cell with either an 'X' or an 'O'. The player who places three of their marks
in a row (horizontally, vertically, or diagonally) wins. If all cells are filled without any player
winning, the game ends in a draw.
Key Features to Implement
1. Array or Vector:
o Representation of the Board: Use a 2D array or a vector of vectors to store the
game state. Each cell can hold 'X', 'O', or be empty.
2. Functions:
o Display Board: A function to print the current state of the board.
o Make Move: A function to handle player moves and update the board.
o Check Win: A function to check if a player has won the game.
o Check Draw: A function to determine if the game has ended in a draw.
3. Input Validation:
o Ensure valid user input (e.g., check if the chosen cell is available).
4. Game Loop:
o Implement the main game loop that alternates between players and checks for
game end conditions.
CODE:
#include <iostream>
#include <vector>
using namespace std;

const int SIZE = 3;
const char EMPTY = ' ';
const char PLAYER_X = 'X';
const char PLAYER_O = 'O';

void printBoard(const vector<vector<char>>& board);
bool checkWin(const vector<vector<char>>& board, char player);
bool makeMove(vector<vector<char>>& board, int row, int col, char player);

int main() {
    vector<vector<char>> board(SIZE, vector<char>(SIZE, EMPTY));
    char currentPlayer = PLAYER_X;
    bool gameWon = false;
    int moves = 0;
    while (!gameWon && moves < SIZE * SIZE) { // Alternate turns until win or draw
        printBoard(board);
        int row, col;
        cout << "Player " << currentPlayer << ", enter row and column (0-2): ";
        cin >> row >> col;
        if (makeMove(board, row, col, currentPlayer)) {
            moves++;
            gameWon = checkWin(board, currentPlayer);
            if (!gameWon)
                currentPlayer = (currentPlayer == PLAYER_X) ? PLAYER_O : PLAYER_X;
        } else {
            cout << "Invalid move. Try again." << endl;
        }
    }
    printBoard(board);
    if (!gameWon) {
        cout << "The game is a draw!" << endl;
    } else {
        cout << "Player " << currentPlayer << " wins!" << endl;
    }
    return 0;
}
bool makeMove(vector<vector<char>>& board, int row, int col, char player) {
    if (row >= 0 && row < SIZE && col >= 0 && col < SIZE && board[row][col] == EMPTY) {
        board[row][col] = player; // Place the mark
        return true;
    }
    return false; // Out of range or cell already taken
}
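printBoard() and checkWin() are not included in the report; the following are minimal sketches consistent with the function list above (the bodies are assumptions):
void printBoard(const vector<vector<char>>& board) {
    for (int r = 0; r < SIZE; r++) {
        for (int c = 0; c < SIZE; c++) {
            cout << (board[r][c] == EMPTY ? '.' : board[r][c]);
            if (c < SIZE - 1) cout << " | ";
        }
        cout << endl;
    }
}

// True if player holds a full row, column, or diagonal
bool checkWin(const vector<vector<char>>& board, char player) {
    for (int i = 0; i < SIZE; i++) {
        if (board[i][0] == player && board[i][1] == player && board[i][2] == player)
            return true; // Row i
        if (board[0][i] == player && board[1][i] == player && board[2][i] == player)
            return true; // Column i
    }
    if (board[0][0] == player && board[1][1] == player && board[2][2] == player)
        return true; // Main diagonal
    if (board[0][2] == player && board[1][1] == player && board[2][0] == player)
        return true; // Anti-diagonal
    return false;
}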