DAA Importants

The document outlines various algorithm design techniques, including Divide and Conquer, Greedy Technique, Dynamic Programming, Branch and Bound, and Backtracking, each with its own methodology and examples. It also discusses algorithm complexity, including time and space complexity, and the importance of analyzing and validating algorithms. Additionally, it covers specific algorithms such as Merge Sort, Prim's Minimum Spanning Tree, and the Traveling Salesman Problem.


Algorithm design techniques are methods used to develop efficient algorithms for solving complex problems.

- *Divide and Conquer*: This technique involves breaking down a problem into smaller subproblems, solving each subproblem recursively, and then combining the solutions to solve the original problem.
- *Greedy Technique*: This technique involves making the locally optimal choice at each step with the hope that these local choices will lead to a global optimum solution.
- *Dynamic Programming*: This technique involves solving complex problems by breaking them down into smaller subproblems, solving each subproblem only once, and storing the solutions to subproblems to avoid redundant computation.
- *Branch and Bound*: This technique involves exploring the solution space as a tree, computing a bound on the best solution obtainable within each subtree, and pruning subtrees whose bound shows they cannot improve on the best solution found so far.
- *Backtracking*: This technique involves solving problems recursively by trying to build a solution incrementally, removing the partial solutions that fail to satisfy the constraints of the problem.

*1. Divide and Conquer*
1. Divide the problem into smaller subproblems.
2. Solve each subproblem recursively.
3. Combine the solutions to solve the original problem.
*Example:* Merge Sort Algorithm
Merge sort is a classic example of the divide-and-conquer technique. Here's how it works:
- Divide the array into two halves.
- Recursively sort each half.
- Merge the two sorted halves into a single sorted array.
Merge Sort - Algorithm
void MergeSort(int low, int high)
// a[low:high] is a global array to be sorted.
// Small(P) is true if there is only one element
// to sort. In this case the list is already sorted.
{
    if (low < high) { // If there is more than one element,
        // divide P into sub-problems.
        // Find where to split the set.
        int mid = (low + high) / 2;
        // Solve the sub-problems.
        MergeSort(low, mid);
        MergeSort(mid + 1, high);
        // Combine the solutions.
        Merge(low, mid, high);
    }
}
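The notes call Merge(low, mid, high) but never define it. Below is a minimal self-contained C++ sketch of the same scheme; the buffer-based Merge and the explicit array parameter are my additions, not part of the original notes.

#include <vector>
using std::vector;

// Merge the sorted halves a[low..mid] and a[mid+1..high].
// (Assumed helper; the notes leave Merge undefined.)
void Merge(vector<int>& a, int low, int mid, int high) {
    vector<int> tmp;
    tmp.reserve(high - low + 1);
    int i = low, j = mid + 1;
    while (i <= mid && j <= high)
        tmp.push_back(a[i] <= a[j] ? a[i++] : a[j++]);
    while (i <= mid)  tmp.push_back(a[i++]);
    while (j <= high) tmp.push_back(a[j++]);
    for (int k = 0; k < (int)tmp.size(); ++k)
        a[low + k] = tmp[k];
}

// Same divide-and-conquer structure as the notes' MergeSort,
// with the array passed explicitly instead of being global.
void MergeSort(vector<int>& a, int low, int high) {
    if (low < high) {                 // more than one element
        int mid = (low + high) / 2;   // divide
        MergeSort(a, low, mid);       // conquer the left half
        MergeSort(a, mid + 1, high);  // conquer the right half
        Merge(a, low, mid, high);     // combine
    }
}

Calling MergeSort(a, 0, (int)a.size() - 1) sorts the whole vector in O(n log n) time.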
*2. Greedy Technique*
1. Make the locally optimal choice at each step.
2. Hope that these local choices will lead to a global optimum solution.
*Example:* Fractional Knapsack Problem
Given a set of items, each with a weight and a value, determine the amount of each item to include in a collection so that the total weight is less than or equal to a given limit and the total value is as large as possible.
In this problem, the greedy approach involves choosing the item with the highest value-to-weight ratio at each step. This approach guarantees an optimal solution for the fractional knapsack problem.

*Time Complexity*
Time complexity measures how long an algorithm takes to complete. It's usually expressed as a function of the input size. A lower time complexity means the algorithm is faster. Time complexity is often represented using Big O notation (e.g., O(n), O(n^2)). It's a crucial factor in determining an algorithm's efficiency.

*Space Complexity*
Space complexity measures how much memory an algorithm uses. It's also expressed as a function of the input size. A lower space complexity means the algorithm uses less memory. Space complexity is important for systems with limited memory. It's also represented using Big O notation (e.g., O(n), O(1)).


The following algorithm recursively finds the maximum and minimum.

void MaxMin(int i, int j, Type& max, Type& min)
// a[1:n] is a global array. Parameters i and j are integers,
// 1 <= i <= j <= n. The effect is to set max and min to the
// largest and smallest values in a[i:j], respectively.
{
    if (i == j) max = min = a[i]; // Small(P)
    else if (i == j - 1) { // Another case of Small(P)
        if (a[i] < a[j]) { max = a[j]; min = a[i]; }
        else { max = a[i]; min = a[j]; }
    }
    else { // If P is not small, divide P into sub-problems.
        // Find where to split the set.
        int mid = (i + j) / 2; Type max1, min1;
        // Solve the sub-problems.
        MaxMin(i, mid, max, min);
        MaxMin(mid + 1, j, max1, min1);
        // Combine the solutions.
        if (max < max1) max = max1;
        if (min > min1) min = min1;
    }
}

1. Prim's minimum spanning tree algorithm

Algorithm Prim(E, cost, n, t)
// E is the set of edges in G. cost[1:n, 1:n] is the
// cost adjacency matrix of an n-vertex graph such that
// cost[i, j] is either a positive real number or ∞ if no edge (i, j) exists.
// A minimum spanning tree is computed and
// stored in the array t[1:n-1, 1:2].
// (t[i, 1], t[i, 2]) is an edge in the minimum cost spanning tree.
// The final cost is returned.
{
    Let (k, l) be an edge with minimum cost in E;
    mincost := cost[k, l];
    t[1, 1] := k; t[1, 2] := l;
    for i := 1 to n do // initialize near
        if (cost[i, l] < cost[i, k]) then near[i] := l;
        else near[i] := k;
    near[k] := near[l] := 0;
    for i := 2 to n - 1 do
    { // find n - 2 additional edges for t
        Let j be an index such that near[j] != 0 and cost[j, near[j]] is minimum;
        t[i, 1] := j; t[i, 2] := near[j];
        mincost := mincost + cost[j, near[j]];
        near[j] := 0;
        for k := 1 to n do // update near[]
            if ((near[k] != 0) and (cost[k, near[k]] > cost[k, j])) then
                near[k] := j;
    }
    return mincost;
}
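To make the near[] bookkeeping concrete, here is a compact C++ sketch of the same O(n^2) matrix-based Prim. The INF sentinel, 0-based vertices, and starting from vertex 0 are my conventions, not part of the notes.

#include <vector>
#include <utility>
#include <limits>
using std::vector;

const double INF = std::numeric_limits<double>::infinity();

// Matrix-based Prim: returns the MST cost and fills `tree` with the
// chosen edges. cost[i][j] == INF means "no edge (i, j)".
double Prim(const vector<vector<double>>& cost,
            vector<std::pair<int,int>>& tree) {
    int n = (int)cost.size();
    vector<bool> inTree(n, false);
    vector<double> best(n, INF); // cheapest edge from vertex i into the tree
    vector<int> nearv(n, -1);    // tree endpoint of that cheapest edge
    double mincost = 0.0;
    inTree[0] = true;            // start the tree at vertex 0
    for (int i = 1; i < n; ++i) { best[i] = cost[i][0]; nearv[i] = 0; }
    for (int step = 1; step < n; ++step) {
        int j = -1;              // cheapest vertex not yet in the tree
        for (int i = 0; i < n; ++i)
            if (!inTree[i] && (j == -1 || best[i] < best[j])) j = i;
        if (best[j] == INF) return INF;  // graph is disconnected
        inTree[j] = true;
        mincost += best[j];
        tree.push_back({j, nearv[j]});
        for (int k = 0; k < n; ++k)      // update the near values
            if (!inTree[k] && cost[k][j] < best[k]) {
                best[k] = cost[k][j]; nearv[k] = j;
            }
    }
    return mincost;
}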

*Best-Case Complexity*
The best-case complexity is the minimum time or resources an algorithm requires to complete, often occurring when the input is already optimized or sorted.

*Worst-Case Complexity*
The worst-case complexity is the maximum time or resources an algorithm requires to complete, often occurring when the input is the most difficult to process.

*Average-Case Complexity*
The average-case complexity is the expected time or resources an algorithm requires to complete, averaged over all possible inputs.
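As a small concrete illustration of the three cases (my example, not from the notes), consider linear search in C++:

// Linear search for key in a[0..n-1].
// Best case: key is at a[0], one comparison, O(1).
// Worst case: key is at a[n-1] or absent, n comparisons, O(n).
// Average case: about n/2 comparisons over random positions, O(n).
int LinearSearch(const int a[], int n, int key) {
    for (int i = 0; i < n; ++i)
        if (a[i] == key) return i;
    return -1;
}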
Algorithm analysis
1. How to devise algorithms - Creating an algorithm is an art which may never be fully automated. A major goal is to study various design techniques that have proven to be useful, in that they have often yielded good algorithms. By mastering these design strategies, it becomes easier to devise new and useful algorithms. Dynamic programming is one such technique. Some of the techniques are especially useful in fields other than computer science, such as operations research and electrical engineering. Other important design techniques are linear, nonlinear, and integer programming.
2. How to validate algorithms - Once an algorithm is devised, it is necessary to show that it computes the correct answer for all possible legal inputs. We refer to this process as algorithm validation. The purpose of validation is to assure us that the algorithm will work correctly, independently of the issues concerning the programming language in which it will eventually be written. Once the validity of the method has been shown, a program can be written and a second phase begins. This phase is referred to as program proving or sometimes as program verification. A complete proof of program correctness requires that each statement of the programming language be precisely defined and all basic operations be proved correct. All these details may cause a proof to be very much longer than the program.
3. How to analyze algorithms - This field of study is called analysis of algorithms. As an algorithm is executed, it uses the computer's central processing unit (CPU) to perform operations and its memory (both immediate and auxiliary) to hold the program and data. Analysis of algorithms, or performance analysis, refers to the task of determining how much computing time and storage an algorithm requires. This is a challenging area which sometimes requires great mathematical skill. An important result of this study is that it allows us to make quantitative judgments about the value of one algorithm over another. Questions about how well an algorithm performs in the best case, in the worst case, or on the average are typical.
4. How to test a program - Testing a program consists of two phases: debugging and profiling (or performance measurement). Debugging is the process of executing programs on sample data sets to determine whether faulty results occur and, if so, to correct them. However, as E. Dijkstra has pointed out, "debugging can only point to the presence of errors, but not to their absence". In cases in which we cannot verify the correctness of the output on sample data, a further validation strategy can be employed.
All-Pairs Shortest Paths

void AllPaths(float cost[][SIZE], float A[][SIZE], int n)
// cost[1:n][1:n] is the cost adjacency matrix of
// a graph with n vertices; A[i][j] is the cost of
// a shortest path from vertex i to vertex j.
// cost[i][i] = 0.0 for 1 ≤ i ≤ n.
{
    for (int i = 1; i <= n; i++)
        for (int j = 1; j <= n; j++)
            A[i][j] = cost[i][j]; // Copy cost into A.
    for (int k = 1; k <= n; k++)
        for (int i = 1; i <= n; i++)
            for (int j = 1; j <= n; j++)
                A[i][j] = min(A[i][j], A[i][k] + A[k][j]); // allow k as an intermediate vertex
}

Algorithm for finding single source Shortest Path


Algorithm ShortestPath(v, cost, dist, n)
// dist[j], 1 ≤ j ≤ n, is set to the length of the shortest path from vertex v
// to vertex j in a graph G with n vertices; dist[v] is zero.
{
    for i := 1 to n do
    {
        s[i] := false;
        dist[i] := cost[v, i];
    }
    s[v] := true;
    dist[v] := 0.0; // put v in s
    for num := 2 to n do
    {
        // determine n - 1 paths from v
        Choose u from among those vertices not in s such that dist[u] is minimum;
        s[u] := true; // put u in s
        for (each w adjacent to u with s[w] = false) do
            if (dist[w] > (dist[u] + cost[u, w])) then
                dist[w] := dist[u] + cost[u, w];
    }
}
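The "choose u" step above is left informal. A hedged C++ rendering of the same O(n^2) Dijkstra scan follows; the INF sentinel for missing edges and the 0-based vertices are my conventions, not the notes'.

#include <vector>
#include <limits>
using std::vector;

const double INF = std::numeric_limits<double>::infinity();

// Single-source shortest paths from v over a cost adjacency matrix.
// dist[j] ends up holding the length of the shortest path from v to j.
vector<double> ShortestPath(int v, const vector<vector<double>>& cost) {
    int n = (int)cost.size();
    vector<bool> s(n, false);     // s[i]: shortest path to i is final
    vector<double> dist(cost[v]); // start with the direct edges from v
    s[v] = true;
    dist[v] = 0.0;                // put v in s
    for (int num = 2; num <= n; ++num) {
        int u = -1;               // choose u not in s with dist[u] minimal
        for (int i = 0; i < n; ++i)
            if (!s[i] && (u == -1 || dist[i] < dist[u])) u = i;
        if (u == -1 || dist[u] == INF) break; // rest are unreachable
        s[u] = true;              // put u in s
        for (int w = 0; w < n; ++w) // relax the edges out of u
            if (!s[w] && dist[w] > dist[u] + cost[u][w])
                dist[w] = dist[u] + cost[u][w];
    }
    return dist;
}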
Algorithm: Traveling-Salesman-Problem
C({1}, 1) = 0
for s = 2 to n do
    for all subsets S ⊆ {1, 2, 3, ..., n} of size s and containing 1
        C(S, 1) = ∞
        for all j ∈ S and j ≠ 1
            C(S, j) = min {C(S - {j}, i) + d(i, j) for i ∈ S and i ≠ j}
return min_j C({1, 2, 3, ..., n}, j) + d(j, 1)

(TSP): Given a set of cities and the distance between every pair of cities, the problem is to find the shortest possible route that visits every city exactly once and returns to the starting point.
*Example:*
Suppose a salesman needs to visit 5 cities (A, B, C, D, E) and return to the starting city (A). The distances between cities are:

     A   B   C   D   E
A    0  10  15  20  25
B   10   0  35  30  20
C   15  35   0  25  30
D   20  30  25   0  15
E   25  20  30  15   0

The goal is to find the shortest possible tour that visits each city exactly once and returns to city A.
TSP remains an active area of research, with many applications in fields like logistics, transportation, and computer science.
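The recurrence above is the Held-Karp dynamic program. A small bitmask C++ sketch follows; representing each subset S as a bitmask and fixing city 0 as the start are implementation choices of mine, and C[mask][j] mirrors C(S, j) from the notes.

#include <vector>
#include <algorithm>
using std::vector;

const int INF = 1e9;

// Held-Karp: C[mask][j] = length of the cheapest path that starts at
// city 0, visits exactly the cities in `mask`, and ends at city j.
int TSP(const vector<vector<int>>& d) {
    int n = (int)d.size();
    vector<vector<int>> C(1 << n, vector<int>(n, INF));
    C[1][0] = 0;                           // start: only city 0 visited
    for (int mask = 1; mask < (1 << n); ++mask) {
        if (!(mask & 1)) continue;         // every subset contains city 0
        for (int j = 0; j < n; ++j) {
            if (!((mask >> j) & 1) || C[mask][j] == INF) continue;
            for (int k = 0; k < n; ++k)    // extend the path to a new city k
                if (!((mask >> k) & 1))
                    C[mask | (1 << k)][k] =
                        std::min(C[mask | (1 << k)][k], C[mask][j] + d[j][k]);
        }
    }
    int best = INF;                        // close the tour back to city 0
    for (int j = 1; j < n; ++j)
        best = std::min(best, C[(1 << n) - 1][j] + d[j][0]);
    return best;
}

For the distance table above this returns 85, achieved by the tour A-B-E-D-C-A.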
1. MULTISTAGE GRAPHS
1. The multistage graph problem is to find a minimum cost path from a source to a sink.
2. A multistage graph is a directed graph having a number of multiple stages, where stage elements are connected consecutively.
3. In this multiple stage graph, the vertex whose in-degree is 0 is known as the source, and the vertex whose out-degree is 0 is known as the destination vertex.
4. A multistage graph G = (V, E) is a directed graph whose vertices are partitioned into k (where k > 1) disjoint subsets S = {s1, s2, ..., sk} such that if (u, v) is an edge in E, then u ∈ si and v ∈ si+1 for some subsets in the partition, and |s1| = |sk| = 1.
5. The vertex s ∈ s1 is called the source and the vertex t ∈ sk is called the sink.
6. G is usually assumed to be a weighted graph. In this graph, the cost of an edge (i, j) is represented by c(i, j). Hence, the cost of a path from source s to sink t is the sum of the costs of the edges on this path.
7. The multistage graph problem is finding the path with minimum cost from source s to sink t.

Algorithm Fgraph(G, k, n, p)
// The input is a k-stage graph G = (V, E) with n vertices
// indexed in order of stages. E is a set of edges and c[i, j]
// is the cost of edge (i, j). p[1:k] is a minimum cost path.
{
    cost[n] := 0.0;
    for j := n - 1 to 1 step -1 do
    { // compute cost[j]
        Let r be a vertex such that (j, r) is an edge of G and c[j, r] + cost[r] is minimum;
        cost[j] := c[j, r] + cost[r];
        d[j] := r;
    }
    // Find a minimum cost path.
    p[1] := 1; p[k] := n;
    for j := 2 to k - 1 do
        p[j] := d[p[j - 1]];
}
HAMILTONIAN CYCLES:
Def: Let G = (V, E) be a connected graph with n vertices. A Hamiltonian cycle is a round-trip path along n edges of G that visits every vertex once and returns to its starting position.
It is also called the Hamiltonian circuit.
A Hamiltonian circuit is a graph cycle (i.e., a closed loop) through a graph that visits each node exactly once.
A graph possessing a Hamiltonian cycle is said to be a Hamiltonian graph.
If a Hamiltonian cycle in graph G begins at some vertex v1 ∈ G and the vertices of G are visited in the order v1, v2, ..., vn+1, then the edges (vi, vi+1) are in E, 1 ≤ i ≤ n, and vn+1 = v1.

Finding all Hamiltonian Cycles
Algorithm Hamiltonian(k)
{
    repeat
    {
        NextValue(k); // assign a legal next value to x[k]
        if (x[k] = 0) then return;
        if (k = n) then write(x[1:n]);
        else Hamiltonian(k + 1);
    } until (false);
}
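NextValue(k) is called above but never defined in these notes. A hedged C++ sketch of the standard generator follows; the global adjacency matrix G, solution vector x (1-based, with x[1] fixed to the start vertex and x[k] = 0 meaning "exhausted"), and the MAXN bound are my assumptions.

const int MAXN = 20;
int G[MAXN + 1][MAXN + 1]; // adjacency matrix, 1-based (assumed global)
int x[MAXN + 1];           // x[1..n]: the cycle being built, x[1] = 1
int n;                     // number of vertices

// Generate the next legal vertex for position k of the cycle,
// or set x[k] = 0 if none is left.
void NextValue(int k) {
    for (;;) {
        x[k] = (x[k] + 1) % (n + 1);      // try the next vertex
        if (x[k] == 0) return;            // no vertex left for position k
        if (G[x[k - 1]][x[k]] != 0) {     // edge from the previous vertex?
            int j;
            for (j = 1; j < k; ++j)       // is x[k] distinct from x[1..k-1]?
                if (x[j] == x[k]) break;
            if (j == k)                   // distinct
                if (k < n || G[x[n]][x[1]] != 0) // last vertex must close the cycle
                    return;
        }
    }
}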
Basic search: preorder
Algorithm preorder(t)
// t is a binary tree. Each node of t has
// three fields: lchild, data, and rchild.
{
    if t != 0 then
    {
        Visit(t);
        preorder(t->lchild);
        preorder(t->rchild);
    }
}

Inorder:
Algorithm inorder(t)
// t is a binary tree. Each node of t has
// three fields: lchild, data, and rchild.
{
    if t != 0 then
    {
        inorder(t->lchild);
        Visit(t);
        inorder(t->rchild);
    }
}

Postorder:
Algorithm postorder(t)
// t is a binary tree. Each node of t has
// three fields: lchild, data, and rchild.
{
    if t != 0 then
    {
        postorder(t->lchild);
        postorder(t->rchild);
        Visit(t);
    }
}


1. Depth First Traversal
• Depth first search (DFS) is an algorithm for traversing or searching a tree, tree structure, or graph.
• One starts at the root (selecting some node as the root in the graph case) and explores as far as possible along each branch before backtracking.
• Formally, DFS is an uninformed search that progresses by expanding the first child node of the search tree that appears, thus going deeper and deeper until a goal node is found, or until it hits a node that has no children.
• Then the search backtracks, returning to the most recent node it hasn't finished exploring. In a non-recursive implementation, all freshly expanded nodes are added to a stack for exploration.

Algorithm DFS(v)
// Given an undirected graph G = (V, E) with n vertices and an array visited[]
// initially set to zero, this algorithm visits all vertices reachable from v.
// G and visited[] are global.
{
    visited[v] := 1;
    for each vertex w adjacent from v do
    {
        if (visited[w] = 0) then DFS(w);
    }
}

• Rule 1 − Visit the adjacent unvisited vertex. Mark it as visited. Display it. Push it in a stack.
• Rule 2 − If no adjacent vertex is found, pop up a vertex from the stack. (It will pop up all the vertices from the stack which do not have adjacent vertices.)
• Rule 3 − Repeat Rule 1 and Rule 2 until the stack is empty.
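The notes' DFS is recursive, while Rules 1-3 describe the explicit-stack version. A short C++ sketch of that iterative form follows; the adjacency-list representation and 0-based vertices are my assumptions.

#include <vector>
#include <stack>
#include <cstdio>
using std::vector;

// Iterative DFS following Rules 1-3: push a vertex when it is visited,
// pop it when no unvisited neighbour remains. adj[v] lists v's neighbours.
void DFS(int v, const vector<vector<int>>& adj) {
    vector<bool> visited(adj.size(), false);
    std::stack<int> st;
    visited[v] = true;
    std::printf("%d ", v);       // Rule 1: visit, display, push
    st.push(v);
    while (!st.empty()) {
        int u = st.top();
        int next = -1;           // look for an unvisited neighbour of u
        for (int w : adj[u])
            if (!visited[w]) { next = w; break; }
        if (next == -1) { st.pop(); continue; } // Rule 2: backtrack
        visited[next] = true;    // Rule 1 again on the new vertex
        std::printf("%d ", next);
        st.push(next);
    }
}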

2. Breadth first search and traversal
• BFS is an uninformed search method that aims to expand and examine all nodes of a graph or combination of sequences by systematically searching through every solution.
• In other words, it exhaustively searches the entire graph or sequence without considering the goal until it finds it. It does not use a heuristic algorithm.
• From the standpoint of the algorithm, all child nodes obtained by expanding a node are added to a FIFO (i.e., First In, First Out) queue.
• In typical implementations, nodes that have not yet been examined for their neighbors are placed in some container (such as a queue or linked list) called "open" and then, once examined, are placed in the container "closed".

Algorithm BFS(v)
// A breadth first search of G is carried out beginning at vertex v. For any
// node i, visited[i] = 1 if i has already been visited. The graph G and the
// array visited[] are global; visited[] is initialized to zero.
{
    u := v; // q is an (initially empty) queue of unexplored vertices
    visited[v] := 1;
    repeat
    {
        for all vertices w adjacent from u do
        {
            if (visited[w] = 0) then
            {
                Add w to q; // w is unexplored
                visited[w] := 1;
            }
        }
        if q is empty then return; // no unexplored vertex
        Delete the next element u from q; // get the first unexplored vertex
    } until (false);
}
Backtracking
There are three types of problems in backtracking –
1. Decision Problem – In this, we search for a feasible solution.
2. Optimization Problem – In this, we search for the best solution.
3. Enumeration Problem – In this, we find all feasible solutions.

Consider the below example to understand the backtracking approach more formally.
Given an instance of any computational problem P and data D corresponding to the instance, all the constraints that need to be satisfied in order to solve the problem are represented by C. A backtracking algorithm will then work as follows:
The algorithm begins to build up a solution, starting with an empty solution set S = {}.
1. Add to S the first move that is still left (all possible moves are added to S one by one). This now creates a new sub-tree s in the search tree of the algorithm.
2. Check if S + s satisfies each of the constraints in C.
• If yes, then the sub-tree s is "eligible" to add more "children".
• Else, the entire sub-tree s is useless, so recur back to step 1 using argument S.
3. In the event of "eligibility" of the newly formed sub-tree s, recur back to step 1, using argument S + s.
4. If the check for S + s returns that it is a solution for the entire data D, output and terminate the program. If not, then return that no solution is possible with the current s and hence discard it.
Algorithm for placing a new queen

Algorithm NQueens(int k, int n)
// Using backtracking, this procedure prints all
// possible placements of n queens on an n x n
// chessboard so that they are nonattacking.
{
    for i := 1 to n do
    {
        if (Place(k, i)) then
        {
            x[k] := i;
            if (k = n) then write(x[1:n]);
            else NQueens(k + 1, n);
        }
    }
}
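Place(k, i) is used above but never defined in these notes. In the standard formulation it checks column and diagonal conflicts against the queens already placed; a hedged C++ sketch, keeping x[] global as in the notes' style, is:

#include <cstdlib> // for std::abs

int x[100]; // x[j] = column of the queen in row j (1-based)

// Can a queen be placed in row k, column i? It must not share a column
// with any earlier queen, nor sit on the same diagonal (which happens
// exactly when the row distance equals the column distance).
bool Place(int k, int i) {
    for (int j = 1; j < k; ++j)
        if (x[j] == i || std::abs(x[j] - i) == std::abs(j - k))
            return false;
    return true;
}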
void GreedyKnapsack(float m, int n)
// p[1:n] and w[1:n] contain the profits and weights
// respectively of the n objects, ordered such that
// p[i]/w[i] >= p[i+1]/w[i+1]. m is the knapsack
// size and x[1:n] is the solution vector.
{
    for (int i = 1; i <= n; i++) x[i] = 0.0; // initialize x
    float U = m;
    int i;
    for (i = 1; i <= n; i++)
    {
        if (w[i] > U) break;
        x[i] = 1.0;      // take object i whole
        U -= w[i];
    }
    if (i <= n) x[i] = U / w[i]; // take a fraction of the next object
}
Applications:
• Portfolio optimization
• Cutting stock problems

Biconnected components
Algorithm Bicomp(u, v)
// u is a start vertex for depth first search; v is its parent, if any, in the
// depth first spanning tree. It is assumed that the global array dfn is
// initially zero, that the global variable num is initialized to 1, and that
// n is the number of vertices in G.
{
    dfn[u] := num; L[u] := num; num := num + 1;
    for each vertex w adjacent from u do
    {
        if ((v != w) and (dfn[w] < dfn[u])) then
            Add (u, w) to the top of a stack s;
        if (dfn[w] = 0) then
        { // w is unvisited
            Bicomp(w, u);
            if (L[w] >= dfn[u]) then
            {
                write("new bicomponent");
                repeat
                {
                    Delete an edge from the top of stack s; let this edge be (x, y);
                    write(x, y);
                } until (((x, y) = (u, w)) or ((x, y) = (w, u)));
            }
            L[u] := min(L[u], L[w]);
        }
        else if (w != v) then L[u] := min(L[u], dfn[w]);
    }
}


*Strassen's Matrix Multiplication*
Strassen's algorithm is a fast matrix multiplication algorithm that reduces the time complexity of matrix multiplication from O(n^3) to O(n^log2(7)) ≈ O(n^2.81).
*How it works:*
1. Divide the matrices into smaller sub-matrices.
2. Use a recursive approach to multiply the sub-matrices.
3. Combine the results using additions and subtractions.
*Key idea:*
Strassen's algorithm uses 7 multiplications and 18 additions/subtractions to multiply two 2x2 matrices, reducing the number of multiplications required.
*Advantages:*
1. Faster than standard matrix multiplication for large matrices.
2. Useful in applications where matrix multiplication is a bottleneck.
*Disadvantages:*
1. More complex to implement than standard matrix multiplication.
2. May not be suitable for small matrices due to overhead.
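To make the "7 multiplications" concrete, here is the 2x2 base case in C++. The product names M1..M7 follow the usual presentation of Strassen's identities; the notes themselves do not spell them out.

// Multiply two 2x2 matrices with 7 multiplications instead of 8.
// Computes C = A * B.
void Strassen2x2(const double A[2][2], const double B[2][2], double C[2][2]) {
    double M1 = (A[0][0] + A[1][1]) * (B[0][0] + B[1][1]);
    double M2 = (A[1][0] + A[1][1]) * B[0][0];
    double M3 = A[0][0] * (B[0][1] - B[1][1]);
    double M4 = A[1][1] * (B[1][0] - B[0][0]);
    double M5 = (A[0][0] + A[0][1]) * B[1][1];
    double M6 = (A[1][0] - A[0][0]) * (B[0][0] + B[0][1]);
    double M7 = (A[0][1] - A[1][1]) * (B[1][0] + B[1][1]);
    C[0][0] = M1 + M4 - M5 + M7;
    C[0][1] = M3 + M5;
    C[1][0] = M2 + M4;
    C[1][1] = M1 - M2 + M3 + M6;
}

Applied recursively to n/2 x n/2 blocks, the recurrence T(n) = 7T(n/2) + O(n^2) gives the O(n^log2(7)) bound quoted above.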

Algorithm Greedy(a, n)
// a[1:n] contains the n inputs.
{
    solution := 0; // initialize the solution
    for i := 1 to n do
    {
        x := Select(a);
        if Feasible(solution, x) then
            solution := Union(solution, x);
    }
    return solution;
}

Algorithm Kruskal(E, cost, n, t)
// E is the set of edges in G. G has n vertices.
// cost[u, v] is the cost of edge (u, v). t is the set
// of edges in the minimum cost spanning tree.
// The final cost is returned.
{
    Construct a heap out of the edge costs using Heapify;
    for i := 1 to n do parent[i] := -1; // each vertex is in a different set
    i := 0; mincost := 0.0;
    while ((i < n - 1) and (heap not empty)) do
    {
        Delete a minimum cost edge (u, v) from the heap and reheapify using Adjust;
        j := Find(u); k := Find(v);
        if (j != k) then
        {
            i := i + 1;
            t[i, 1] := u; t[i, 2] := v;
            mincost := mincost + cost[u, v];
            Union(j, k);
        }
    }
    if (i != n - 1) then write("No spanning tree");
    else return mincost;
}
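Find and Union above come from a disjoint-set structure that the notes do not define. A hedged, self-contained C++ sketch of Kruskal follows, using sorting in place of the heap (an implementation convenience of mine, not the notes' choice).

#include <vector>
#include <algorithm>
#include <numeric>
using std::vector;

struct Edge { int u, v; double cost; };

// Disjoint-set Find with path halving.
int Find(vector<int>& parent, int x) {
    while (parent[x] != x) { parent[x] = parent[parent[x]]; x = parent[x]; }
    return x;
}

// Kruskal: returns the MST cost, or -1 if G is not connected.
// n vertices (0-based), edges given explicitly.
double Kruskal(int n, vector<Edge> edges, vector<Edge>& tree) {
    std::sort(edges.begin(), edges.end(),
              [](const Edge& a, const Edge& b) { return a.cost < b.cost; });
    vector<int> parent(n);
    std::iota(parent.begin(), parent.end(), 0); // each vertex in its own set
    double mincost = 0.0;
    for (const Edge& e : edges) {
        int j = Find(parent, e.u), k = Find(parent, e.v);
        if (j != k) {          // edge joins two different components
            parent[j] = k;     // Union
            tree.push_back(e);
            mincost += e.cost;
        }
    }
    if ((int)tree.size() != n - 1) return -1; // "No spanning tree"
    return mincost;
}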
