
Competitive Programming Basics & Backtracking
By Shubham Shukla
Shortening Code

Short code is ideal in competitive programming, because programs should be written as quickly and as cleanly as possible. For this reason, programmers often define shorter names for data types and other parts of code.

Two common practices are typedef and macros.

Typedef

Using the typedef command it is possible to give a shorter name to a data type:

typedef long long ll;

long long a = 123456789;
ll b = 987654321;

The typedef command can also be used with more complex types:

typedef vector<int> vi;
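
A minimal sketch of these shorthands in use (the pii name is an illustrative extra, not from the slides):

#include <bits/stdc++.h>
using namespace std;

typedef long long ll;
typedef vector<int> vi;
typedef pair<int, int> pii;   // illustrative extra shorthand

int main() {
    vi values = {3, 1, 4, 1, 5};
    pii point = {2, 7};
    ll total = 0;
    for (int v : values)
        total += v;           // sums fit comfortably in a long long
    cout << total << " " << point.first << "\n";   // prints: 14 2
}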

Macros

Another way to shorten code is to define macros. A macro means that certain strings in the code will be replaced before compilation. In C++, macros are defined using the #define keyword.

#define REP(i,a,b) for (int i = a; i <= b; i++)

REP(i,0,n-1)
    cin >> arr[i];

Note that REP's upper bound is inclusive, so REP(i,0,n-1) reads exactly n elements.

Important:

typedef interpretation is performed by the compiler, whereas #define statements are handled by the preprocessor.

#define will just copy-paste the definition at the point of use, while typedef actually defines a new type. The difference matters for pointer types:

typedef char* ptr;
ptr a, b, c;    // char *a, *b, *c;  (all three are pointers)

#define PTR char*
PTR x, y, z;    // char *x, y, z;   (only x is a pointer)

Time complexity

The efficiency of algorithms is important in competitive programming. Usually, it is easy to design an algorithm that solves the problem slowly, but the real challenge is to invent a fast algorithm. The time complexity of an algorithm estimates how much time the algorithm will use for some input. The idea is to represent the efficiency as a function whose parameter is the size of the input. By calculating the time complexity, we can find out whether the algorithm is fast enough without implementing it.

The time complexity of an algorithm is denoted O(···), where the three dots represent some function. Usually, the variable n denotes the input size. For example, if the input is an array of numbers, n will be the size of the array, and if the input is a string, n will be the length of the string.

Order of magnitude

A time complexity does not tell us the exact number of times the code inside a loop is executed; it only shows the order of magnitude. In the following examples, the code inside the loop is executed n, 3n, and about n/5 times, but the time complexity of each is O(n).

for(int i = 0; i < n; i++);
for(int i = 0; i < 3*n; i++);
for(int i = 0; i < n; i+=5);

for(int i = 0; i < n; i++)
    for(int j = 0; j < n; j++);     // number of iterations: n*n       ->  O(n^2)

for(int i = 0; i < n; i++)
    for(int j = i+1; j < n; j++);   // number of iterations: n*(n-1)/2 ->  O(n^2)

Recursion

The time complexity of a recursive function depends on the number of times the function is called and the time complexity of a single call. The total time complexity is the product of these values.

void repeat(int n)
{
    if(n)
    {
        repeat(n-1);
    }
}

Time complexity: O(n), since each call makes one recursive call.

void repeat(int n)
{
    if(n)
    {
        repeat(n-1);
        repeat(n-1);
    }
}

Time complexity: O(2^n), since each call makes two recursive calls, doubling the number of calls at every level.

The following list contains common time complexities of algorithms:

O(1) The running time of a constant-time algorithm does not depend on the input size. A typical constant-time algorithm is a direct formula that calculates the answer.

O(log n) A logarithmic algorithm often halves the input size at each step. The running time of such an algorithm is logarithmic, because log2(n) equals the number of times n must be divided by 2 to get 1.

O(√n) A square root algorithm is slower than O(log n) but faster than O(n). A special property of square roots is that √n = n/√n, so the square root √n lies, in some sense, in the middle of the input.

O(n) A linear algorithm goes through the input a constant number of times. This is often the best possible time complexity, because it is usually necessary to access each input element at least once before reporting the answer.

O(n log n) This time complexity often indicates that the algorithm sorts the input, because the time complexity of efficient sorting algorithms is O(n log n). Another possibility is that the algorithm uses a data structure where each operation takes O(log n) time.

O(n^2) A quadratic algorithm often contains two nested loops. It is possible to go through all pairs of the input elements in O(n^2) time.

O(n^3) A cubic algorithm often contains three nested loops. It is possible to go through all triplets of the input elements in O(n^3) time.

O(2^n) This time complexity often indicates that the algorithm iterates through all subsets of the input elements. For example, the subsets of {1,2,3} are {}, {1}, {2}, {3}, {1,2}, {1,3}, {2,3} and {1,2,3}.

O(n!) This time complexity often indicates that the algorithm iterates through all permutations of the input elements. For example, the permutations of {1,2,3} are (1,2,3), (1,3,2), (2,1,3), (2,3,1), (3,1,2) and (3,2,1).
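
To make the O(2^n) case concrete, here is a minimal sketch (not from the original slides) of iterating through all subsets of an n-element array using bitmasks, a standard competitive programming idiom:

#include <bits/stdc++.h>
using namespace std;

int main() {
    vector<int> elems = {1, 2, 3};
    int n = elems.size();
    // each bitmask 0 .. 2^n - 1 encodes one subset:
    // bit i set means elems[i] is included
    for (int mask = 0; mask < (1 << n); mask++) {
        cout << "{ ";
        for (int i = 0; i < n; i++)
            if (mask & (1 << i))
                cout << elems[i] << " ";
        cout << "}\n";
    }
}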

Sorting

Sorting is a fundamental algorithm design problem. Many efficient algorithms use sorting as a subroutine, because it is often easier to process data if the elements are in a sorted order.

std::sort(begin, end, comparator);

O(n^2) algorithms

Simple algorithms for sorting an array work in O(n^2) time. Such algorithms are short and usually consist of two nested loops. A famous O(n^2) sorting algorithm is bubble sort, where the elements "bubble" through the array according to their values.

O(n log n) algorithms

It is possible to sort an array efficiently in O(n log n) time using algorithms that are not limited to swapping consecutive elements. One such algorithm is merge sort, which is based on recursion.
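
A quick illustration of the std::sort call mentioned above (the descending comparator is an illustrative choice, not from the slides):

#include <bits/stdc++.h>
using namespace std;

int main() {
    vector<int> v = {5, 2, 8, 9, 4};
    sort(v.begin(), v.end());                  // ascending order, O(n log n)
    sort(v.begin(), v.end(), greater<int>());  // descending order via a comparator
    for (int x : v)
        cout << x << " ";                      // prints: 9 8 5 4 2
}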
Bubble Sort

// each round bubbles the largest remaining element to the end;
// after n rounds the array is sorted
for (int i = 0; i < n; i++)
{
    for (int j = 0; j < n-1; j++)
    {
        if (array[j] > array[j+1])
        {
            swap(array[j], array[j+1]);
        }
    }
}

Merge Sort

void merge(int *arr, int start, int mid, int end)
{
    // merge the sorted halves arr[start..mid] and arr[mid+1..end]
    int size_l = mid - start + 1, size_r = end - mid;
    int la[size_l], ra[size_r];
    for(int i = 0; i < size_l; i++)
        la[i] = arr[start + i];
    for(int i = 0; i < size_r; i++)
        ra[i] = arr[mid + i + 1];

    int l_index = 0, r_index = 0, index = start;
    while(l_index < size_l && r_index < size_r)
    {
        if(la[l_index] > ra[r_index])
            arr[index++] = ra[r_index++];
        else
            arr[index++] = la[l_index++];
    }
    while(l_index < size_l)
        arr[index++] = la[l_index++];
    while(r_index < size_r)
        arr[index++] = ra[r_index++];
}

void merge_sort(int *arr, int start, int end)
{
    if(start < end)
    {
        int mid = (start + end) / 2;
        merge_sort(arr, start, mid);       // sort the left half
        merge_sort(arr, mid + 1, end);     // sort the right half
        merge(arr, start, mid, end);       // merge the two sorted halves
    }
}
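
A minimal usage sketch for the functions above (the sample array is illustrative):

#include <bits/stdc++.h>
using namespace std;

// assumes merge() and merge_sort() as defined above

int main() {
    int arr[] = {5, 2, 8, 9, 4};
    merge_sort(arr, 0, 4);                 // sorts indices 0..4 inclusive
    for (int x : arr)
        cout << x << " ";                  // prints: 2 4 5 8 9
}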

Counting sort: O(n)?

More precisely, counting sort runs in O(n + k) time, where k is the largest value in the array, so it is linear only when the value range is small.

vi arr = {1, 3, 6, 9, 9, 3, 5, 2};

void countingSort(vi& arr)
{
    // book[v] counts how many times the value v occurs
    int bookSize = *max_element(arr.begin(), arr.end()) + 1;
    vi book(bookSize, 0);
    for(int i : arr)
        book[i]++;
    // for the array above: book = {0, 1, 1, 2, 0, 1, 1, 0, 0, 2}

    // write each value back, in order, as many times as it occurs
    for(int i = 0, j = 0; j < bookSize; j++)
        while(book[j]--)
            arr[i++] = j;
}
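
Calling it on the sample array (a sketch):

#include <bits/stdc++.h>
using namespace std;
typedef vector<int> vi;

// assumes countingSort() as defined above

int main() {
    vi arr = {1, 3, 6, 9, 9, 3, 5, 2};
    countingSort(arr);
    for (int x : arr)
        cout << x << " ";                  // prints: 1 2 3 3 5 6 9 9
}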

STL Data Structures

● Array
● Vector
● Map (multi, unordered)
● Set (multi, unordered)
● Queue
● Deque
● Stack
● List
● Pair
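
A minimal sketch of a few of these containers in use (the values are illustrative):

#include <bits/stdc++.h>
using namespace std;

int main() {
    array<int, 3> a = {1, 2, 3};          // fixed-size array
    vector<int> v = {4, 5};               // dynamic array
    map<string, int> m = {{"one", 1}};    // ordered key-value map
    set<int> s = {7, 7, 8};               // unique, ordered: {7, 8}
    queue<int> q; q.push(9);              // FIFO
    deque<int> d = {1, 2}; d.push_front(0);
    stack<int> st; st.push(10);           // LIFO
    list<int> li = {1, 2, 3};             // doubly linked list
    pair<int, string> p = {1, "one"};

    cout << a[0] << v[0] << m["one"] << s.count(7)
         << q.front() << d[0] << st.top() << li.front() << p.first << "\n";
}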

Algorithm Analysis

It is often possible to solve a problem using either data structures or sorting. Sometimes there are remarkable differences in the actual efficiency of these approaches, which may be hidden in their time complexities.

Let us consider a problem where we are given two lists A and B that both contain n elements. Our task is to calculate the number of elements that belong to both of the lists. For example, for the lists A = [5,2,8,9,4] and B = [3,2,9,5], the answer is 3, because the numbers 2, 5 and 9 belong to both lists.

A straightforward solution is to go through all pairs of elements in O(n^2) time, but next we will focus on more efficient algorithms.

Algo 1 and Algo 2:

We construct a set of the elements that appear in A, and after this, we iterate through the elements of B and check for each element whether it also belongs to A. This is efficient because the elements of A are in a set. Using the set structure, the time complexity is O(n log n); changing it to unordered_set reduces the expected time complexity to O(n).

int algo1(vi arr, vi arr2)
{
    int cnt = 0;
    set<int> vals;                 // balanced tree: O(log n) per operation
    for(int i : arr)
        vals.insert(i);
    for(int j : arr2)
        if(vals.count(j))
            cnt++;
    return cnt;
}

int algo2(vi arr, vi arr2)
{
    int cnt = 0;
    unordered_set<int> vals;       // hash table: O(1) expected per operation
    for(int i : arr)
        vals.insert(i);
    for(int j : arr2)
        if(vals.count(j))
            cnt++;
    return cnt;
}

Algo 3

We can also solve the problem without an extra data structure: sort both lists and walk through them with two pointers, for an O(n log n) total.

int algo3(vi arr, vi arr2)
{
    int cnt = 0;
    sort(arr.begin(), arr.end());
    sort(arr2.begin(), arr2.end());
    for(int i = 0, j = 0; i < arr.size() && j < arr2.size(); )
    {
        if(arr[i] == arr2[j])
        {
            cnt++; i++; j++;
        }
        else
            (arr[i] < arr2[j]) ? i++ : j++;   // advance the pointer at the smaller value
    }
    return cnt;
}
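
A minimal sketch running all three algorithms on the example lists (each call returns 3):

#include <bits/stdc++.h>
using namespace std;
typedef vector<int> vi;

// assumes algo1(), algo2() and algo3() as defined above

int main() {
    vi A = {5, 2, 8, 9, 4};
    vi B = {3, 2, 9, 5};
    cout << algo1(A, B) << " " << algo2(A, B) << " " << algo3(A, B) << "\n";   // 3 3 3
}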

Complete Search

Complete search is a general method that can be used to solve almost any algorithmic problem. The idea is to generate all possible solutions to the problem using brute force, and then select the best solution or count the number of solutions, depending on the problem.

Complete search is a good technique if there is enough time to go through all the solutions, because the search is usually easy to implement and it always gives the correct answer. If complete search is too slow, other techniques, such as greedy algorithms or dynamic programming, may be needed.

Also known as the brute force approach, this method can be effective for small inputs, but its cost grows very quickly as the input size increases.
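
As an illustration (not from the original slides), a brute-force sketch that generates all n! orderings of a small array with std::next_permutation, the O(n!) pattern from the complexity list:

#include <bits/stdc++.h>
using namespace std;

int main() {
    vector<int> v = {1, 2, 3};
    sort(v.begin(), v.end());   // next_permutation needs the sorted order to start
    do {
        for (int x : v)
            cout << x << " ";   // visits (1,2,3), (1,3,2), ..., (3,2,1)
        cout << "\n";
    } while (next_permutation(v.begin(), v.end()));
}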

Backtracking - Technique

A backtracking-based algorithm begins with an empty solution and extends the solution step by step. The search recursively goes through all the different ways in which a solution can be constructed.

Generic Backtracking Code Structure

The following pseudocode captures the general pattern: find the first empty position, try every valid value there, recurse, and undo the choice if the recursion fails.

bool backtrack(solution)
{
    for(pos in solution)
        if(pos is empty)
        {
            for(all possible values)
                if(value is valid at pos)
                {
                    fill value in solution
                    if(backtrack(solution))
                        return true;
                    remove value from solution   // undo and try the next value
                }
            return false;    // no value works at this position
        }
    print solution           // no empty position left: solution complete
    return true;
}

N Queen Problem

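The original slide shows only an illustration. As a concrete instance of the generic backtracking structure above, a hedged sketch of an N-Queens solver (one common formulation; the deck itself contains no code for this problem):

#include <bits/stdc++.h>
using namespace std;

// place queens row by row; col[r] is the column of the queen in row r
bool safe(const vector<int>& col, int r, int c) {
    for (int pr = 0; pr < r; pr++) {
        int pc = col[pr];
        if (pc == c || abs(pc - c) == abs(pr - r))   // same column or diagonal
            return false;
    }
    return true;
}

bool placeQueens(vector<int>& col, int r, int n) {
    if (r == n) return true;                 // all rows filled: solution found
    for (int c = 0; c < n; c++) {
        if (safe(col, r, c)) {
            col[r] = c;                      // fill value in solution
            if (placeQueens(col, r + 1, n))
                return true;
            col[r] = -1;                     // remove value from solution
        }
    }
    return false;                            // no column works in this row
}

int main() {
    int n = 8;
    vector<int> col(n, -1);
    if (placeQueens(col, 0, n))
        for (int r = 0; r < n; r++)
            cout << "row " << r << " -> col " << col[r] << "\n";
}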
Search Pruning

We can often optimize backtracking by pruning the search tree. The idea is to add "intelligence" to the algorithm so that it will notice as soon as possible if a partial solution cannot be extended to a complete solution. Such optimizations can have a tremendous effect on the efficiency of the search.

Let us consider the problem of calculating the number of paths in an n×n grid from the upper-left corner to the lower-right corner such that the path visits each square exactly once. For example, in a 7×7 grid, there are 111712 such paths.

Basic backtracking solution:
• running time: 483 seconds
• number of recursive calls: 76 billion
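
A minimal sketch of the basic, unpruned search (illustrative; the deck shows no code for this problem):

#include <bits/stdc++.h>
using namespace std;

const int N = 7;
bool visited[N][N];
long long paths = 0;

void search(int x, int y, int cells) {
    if (x < 0 || x >= N || y < 0 || y >= N || visited[x][y]) return;
    visited[x][y] = true;
    if (x == N - 1 && y == N - 1) {
        if (cells == N * N) paths++;       // count only paths covering every square
    } else {
        search(x + 1, y, cells + 1);       // down
        search(x - 1, y, cells + 1);       // up
        search(x, y + 1, cells + 1);       // right
        search(x, y - 1, cells + 1);       // left
    }
    visited[x][y] = false;                 // backtrack
}

int main() {
    search(0, 0, 1);
    cout << paths << "\n";                 // 111712 for a 7x7 grid
}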

Optimisation 1

In any solution, we first move one step down or right, and the two choices produce paths that are symmetric about the diagonal of the grid. Hence, we can always make the first move downward (or to the right) and multiply the number of found paths by two.

• running time: 244 seconds
• number of recursive calls: 38 billion

Optimisation 2

If the path reaches the lower-right square before it has visited all other squares of the grid, it is clear that it will not be possible to complete the solution. We can therefore terminate the search immediately if we reach the lower-right square too early.

• running time: 119 seconds
• number of recursive calls: 20 billion

Optimisation 3

If the path touches a wall and can turn either left or right, the grid splits into two parts, each containing unvisited squares. In this case, we cannot visit all squares anymore, so we can terminate the search. This optimization is very useful:

• running time: 1.8 seconds
• number of recursive calls: 221 million

Optimisation 4

The idea of Optimisation 3 can be generalized: if the path cannot continue forward but can turn either left or right, the grid splits into two parts that both contain unvisited squares. It is clear that we cannot visit all squares anymore, so we can terminate the search. After this optimization, the search is very efficient:

• running time: 0.6 seconds
• number of recursive calls: 69 million
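
A hedged sketch of how this check might look in code, building on the path-counting sketch above (the isFree() helper and the direction bookkeeping are illustrative assumptions, not from the original):

// builds on the path-counting sketch above (N, visited[][]);
// (dx, dy) is the direction of the previous move
bool isFree(int x, int y) {
    return x >= 0 && x < N && y >= 0 && y < N && !visited[x][y];
}

// if the square straight ahead is blocked but both side squares are free,
// the grid splits into two parts with unvisited squares: prune this branch
bool splits(int x, int y, int dx, int dy) {
    return !isFree(x + dx, y + dy)     // cannot continue forward
        && isFree(x - dy, y + dx)      // but can turn one way...
        && isFree(x + dy, y - dx);     // ...and the other
}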

References

● N-Queens video animation: https://youtu.be/ckC2hFdLff0
● Merge sort reference: https://www.101computing.net/merge-sort-algorithm/
● Competitive Programmer's Handbook by Antti Laaksonen
● STL containers: https://www.geeksforgeeks.org/containers-cpp-stl/

Thank You
