DAA R21 Unit2

The document discusses greedy algorithms and their applications. It covers the components of greedy algorithms, including candidate sets, selection functions, feasibility functions, objective functions, and solution functions. Specific greedy algorithms covered include minimum spanning trees (using Prim's and Kruskal's algorithms), the knapsack problem, and single source shortest paths. The greedy approach works well for optimization problems but may not always find the optimal solution, as some problems like the traveling salesman problem cannot be solved greedily.

Design & Analysis of Algorithms R21, Autonomous II B. Tech. II Sem CSE

UNIT II
The Greedy Method: The general Method, knapsack problem, minimum-cost spanning Trees, Optimal
Merge Patterns, Single Source Shortest Paths.
……………………………………………………………………………………………………………………………..
Among all the algorithmic approaches, the simplest and most straightforward is the Greedy method. In this approach, the decision is taken on the basis of currently available information, without worrying about the effect of the current decision in the future.
Greedy algorithms build a solution part by part, choosing the next part in such a way that it gives an immediate benefit. This approach never reconsiders the choices taken previously. It is mainly used to solve optimization problems.
The Greedy method is easy to implement and quite efficient in most cases. Hence, we can say that a Greedy algorithm is an algorithmic paradigm based on a heuristic that makes the locally optimal choice at each step, with the hope of finding a globally optimal solution.
In many problems it does not produce an optimal solution, though it gives an approximate (near-optimal) solution in a reasonable time.
Components of Greedy Algorithm
Greedy algorithms have the following five components −
 A candidate set − A solution is created from this set.
 A selection function − Used to choose the best candidate to be added to the solution.
 A feasibility function − Used to determine whether a candidate can be used to contribute to the
solution.
 An objective function − Used to assign a value to a solution or a partial solution.
 A solution function − Used to indicate whether a complete solution has been reached.
Areas of Application
Greedy approach is used to solve many problems, such as
 Finding the shortest path between two vertices using Dijkstra’s algorithm.
 Finding the minimal spanning tree in a graph using Prim’s /Kruskal’s algorithm, etc.
Where Greedy Approach Fails
In many problems, the Greedy algorithm fails to find an optimal solution; moreover, it may even produce the worst possible solution. Problems like the Travelling Salesman Problem and the 0/1 Knapsack problem cannot be solved optimally using this approach.
General method of greedy
Algorithm Greedy(a, n)
{
    solution := Ø;
    for i := 1 to n do
    {
        x := Select(a);
        if Feasible(solution, x) then
            solution := solution + x;
    }
    return solution;
}
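As a concrete illustration of this template, here is a minimal Python sketch using greedy coin change: the denominations act as the candidate set, sorting implements the selection function, and the capacity check is the feasibility test. The denominations {25, 10, 5, 1} are an illustrative assumption, not part of the notes.

```python
# Greedy template illustrated with coin change: repeatedly pick the
# largest (best) candidate coin that still fits in the remaining amount.
def greedy_coin_change(amount, denominations=(25, 10, 5, 1)):
    solution = []                                     # built one choice at a time
    for coin in sorted(denominations, reverse=True):  # selection: best candidate first
        while coin <= amount:                         # feasibility: coin must fit
            solution.append(coin)                     # add candidate to the solution
            amount -= coin
    return solution

print(greedy_coin_change(63))  # -> [25, 25, 10, 1, 1, 1]
```

For these denominations the greedy choice happens to be optimal; for arbitrary denominations it may not be, which is exactly the caveat stated above.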
Prepared by: Pavan Kumar Ravinuthala, Asst. Prof., Dept. of CSE, PACE ITS, Vallur-523272. Page 1 of 20

2. Knapsack Problem
Given a set of items, each with a weight and a value, determine a subset of items to include in a collection so that the total weight is less than or equal to a given limit and the total value is as large as possible.
The knapsack problem is a combinatorial optimization problem. It appears as a subproblem in many, more complex mathematical models of real-world problems. One general approach to difficult problems is to identify the most restrictive constraint, ignore the others, solve a knapsack problem, and somehow adjust the solution to satisfy the ignored constraints.
Applications
In many cases of resource allocation with some constraint, the problem can be formulated in a way similar to the Knapsack problem. Following is a set of examples.
 Finding the least wasteful way to cut raw materials
 Portfolio optimization
 Cutting stock problems
Problem Scenario
A thief is robbing a store and can carry a maximal weight of W in his knapsack. There are n items available in the store, the weight of the ith item is wi and its profit is pi. What items should the thief take?
In this context, the items should be selected in such a way that the thief will carry those items for which he will gain maximum profit. Hence, the objective of the thief is to maximize the profit.
Based on the nature of the items, Knapsack problems are categorized as
 Fractional Knapsack
 0/1 Knapsack
Fractional Knapsack
In this case, items can be broken into smaller pieces, hence the thief can select fractions of items. According to the problem statement,
 There are n items in the store
 Weight of the ith item wi > 0
 Profit of the ith item pi > 0
 Capacity of the Knapsack is W

In this version of the Knapsack problem, items can be broken into smaller pieces. So, the thief may take only a fraction xi of the ith item, where
0 ≤ xi ≤ 1
The ith item contributes the weight xi·wi to the total weight in the knapsack and the profit xi·pi to the total profit.
Hence, the objective of this algorithm is to

maximize Σ (pi · xi) subject to Σ (wi · xi) ≤ W and 0 ≤ xi ≤ 1

It is clear that an optimal solution must fill the knapsack exactly, for otherwise we could add a fraction of one of the remaining items and increase the overall profit.
Thus, an optimal solution can be obtained by selecting the items in non-increasing order of pi/wi and filling the knapsack greedily.
Algorithm: Greedy-Fractional-Knapsack (w[1..n], p[1..n], W)
// items are assumed sorted in decreasing order of p[i] / w[i]
for i = 1 to n
    do x[i] = 0
weight = 0
for i = 1 to n
    if weight + w[i] ≤ W then
        x[i] = 1
        weight = weight + w[i]
    else
        x[i] = (W - weight) / w[i]
        weight = W
        break
return x


Analysis
If the provided items are already sorted into decreasing order of pi/wi, then the loop takes time in O(n); therefore, the total time including the sort is in O(n log n).
Example
Let us consider that the capacity of the knapsack is W = 60 and the list of provided items is shown in the following table –

Item      A     B     C     D
Weight    40    10    20    24
Profit    280   100   120   120
Solution

After sorting all the items according to pi/wi –

Item          B     A     C     D
Ratio pi/wi   10    7     6     5
First, all of B is chosen, as the weight of B is less than the capacity of the knapsack. Next, item A is chosen, as the available capacity of the knapsack is greater than the weight of A. Now, C is chosen as the next item. However, the whole item cannot be chosen, as the remaining capacity of the knapsack is less than the weight of C. Hence, a fraction of C (i.e. (60 − 50)/20) is chosen.
Now, the capacity of the knapsack is equal to the weight of the selected items. Hence, no more items can be selected.
The total weight of the selected items is 10 + 40 + 20 × (10/20) = 60
And the total profit is 100 + 280 + 120 × (10/20) = 380 + 60 = 440
This is the optimal solution. We cannot gain more profit by selecting any different combination of items.
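The greedy procedure above can be sketched as a short, runnable Python function; the item data in the usage line follows the worked example (weights 40, 10, 20, 24 and profits 280, 100, 120, 120 for items A–D).

```python
# Fractional knapsack: sort items by decreasing profit/weight ratio,
# take whole items while they fit, then a fraction of the next one.
def fractional_knapsack(weights, profits, capacity):
    items = sorted(zip(weights, profits),
                   key=lambda wp: wp[1] / wp[0], reverse=True)
    total_profit = 0.0
    remaining = capacity
    fractions = []                      # x[i] for each item, in sorted order
    for w, p in items:
        if w <= remaining:              # whole item fits
            fractions.append(1.0)
            remaining -= w
            total_profit += p
        else:                           # take only the fraction that fits
            x = remaining / w
            fractions.append(x)
            total_profit += p * x
            break
    return total_profit, fractions

print(fractional_knapsack([40, 10, 20, 24], [280, 100, 120, 120], 60))
# -> (440.0, [1.0, 1.0, 0.5])
```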


3. Minimum-Cost Spanning Trees


What is a Spanning Tree?
A spanning tree is a subgraph of Graph G which has all the vertices covered with the minimum possible number of edges. Hence, a spanning tree does not have cycles and it cannot be disconnected.
By this definition, we can draw the conclusion that every connected and undirected Graph G has at least one spanning tree. A disconnected graph does not have any spanning tree, as it cannot be spanned to all its vertices.

A complete undirected graph can have a maximum of n^(n−2) spanning trees, where n is the number of nodes. For example, for a complete graph with n = 3 vertices, 3^(3−2) = 3 spanning trees are possible.


Properties of Spanning Tree
 A spanning tree has n − 1 edges, where n is the number of nodes (vertices).
 From a complete graph, by removing a maximum of e − n + 1 edges (where e is the number of edges), we can construct a spanning tree.
 A complete graph can have a maximum of n^(n−2) spanning trees.
Thus, we can conclude that spanning trees are subgraphs of a connected Graph G, and disconnected graphs do not have spanning trees.


Minimum Spanning Tree (MST)


In a weighted graph, a minimum spanning tree is a spanning tree that has minimum weight among all other spanning trees of the same graph. In real-world situations, this weight can be measured as distance, congestion, traffic load or any arbitrary value assigned to the edges.
Minimum Spanning-Tree Algorithms
We shall learn about the two most important spanning tree algorithms here −
 Kruskal's Algorithm
 Prim's Algorithm
Both are greedy algorithms.
1. Prim’s Algorithm
 Prim’s Algorithm is a famous greedy algorithm.
 It is used for finding the Minimum Spanning Tree (MST) of a given graph.
 To apply Prim’s algorithm, the given graph must be weighted, connected and undirected.
Prim’s Algorithm Implementation-
The implementation of Prim’s Algorithm is explained in the following steps-
Step-01:
 Randomly choose any vertex.
 Among the edges incident to it, select the edge having the least weight.
Step-02:
 Find all the edges that connect the tree to new vertices.
 Find the least weight edge among those edges and include it in the existing tree.
 If including that edge creates a cycle, then reject that edge and look for the next least weight edge.
Step-03:
 Keep repeating Step-02 until all the vertices are included and the Minimum Spanning Tree (MST) is obtained.

Prim’s Algorithm Time Complexity-

Worst case time complexity of Prim’s Algorithm is-

 O(E log V) using a binary heap
 O(E + V log V) using a Fibonacci heap


Example: Construct the minimum spanning tree (MST) for the given graph using Prim’s Algorithm-


Step-1:
 Randomly choose any vertex. (Here, vertex 1)
 Among the edges incident to it, select the edge having the least weight.

Step-2: Now we are at vertex 6. It has two adjacent edges; one is already selected, so select the second one.

Step-3: Now we are at vertex 5. It has three connected edges; one is already selected. From the remaining two, select the minimum cost edge (the one having minimum weight) such that no loop is formed by adding it.

Step-4: Now we are at vertex 4. Select the minimum cost edge from the edges connected to this vertex, such that no loop is formed by adding it.


Step-5: Now we are at vertex 3. Since the minimum cost edge is already selected, to reach vertex 2 we select the edge which costs 16. Then the MST is

Step-6: Now we are at vertex 2. Select the minimum cost edge from the edges attached to this vertex, such that no loop is formed by adding it.

Since all the vertices have been included in the MST, we stop.
Now,
Cost of Minimum Spanning Tree
= Sum of all edge weights
= 10 + 25 + 22 + 12 + 16 + 14
= 99 units
Time Complexity: O(V²) when the graph is represented using an adjacency matrix. If the input graph is represented using an adjacency list, then the time complexity of Prim’s algorithm can be reduced to O(E log V) with the help of a binary heap. In this implementation, we always consider the spanning tree to start from the root of the graph.
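The stepwise construction above can be sketched with a binary heap, which gives the O(E log V) bound mentioned earlier. The sample graph here is a small illustrative assumption, not the graph from the figure.

```python
import heapq

# Prim's algorithm with a binary heap: repeatedly take the least-weight
# edge crossing the cut between the tree and the rest of the graph.
def prims_mst(adj, start=0):
    visited = set()
    heap = [(0, start)]                 # (edge weight, vertex)
    total_cost = 0
    while heap:
        weight, u = heapq.heappop(heap)
        if u in visited:
            continue                    # taking this edge would form a loop
        visited.add(u)
        total_cost += weight
        for w, v in adj[u]:             # grow the frontier of candidate edges
            if v not in visited:
                heapq.heappush(heap, (w, v))
    return total_cost

# Adjacency list {vertex: [(weight, neighbour), ...]} for a 4-vertex graph.
adj = {
    0: [(10, 1), (6, 2), (5, 3)],
    1: [(10, 0), (15, 3)],
    2: [(6, 0), (4, 3)],
    3: [(5, 0), (15, 1), (4, 2)],
}
print(prims_mst(adj))  # -> 19  (edges 0-3, 3-2, 0-1)
```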


2. Kruskal’s Algorithm-

 Kruskal’s Algorithm is a famous greedy algorithm.
 It is used for finding the Minimum Spanning Tree (MST) of a given graph.
 To apply Kruskal’s algorithm, the given graph must be weighted, connected and undirected.
Kruskal’s Algorithm Implementation-
The implementation of Kruskal’s Algorithm is explained in the following steps-
Step-01: Sort all the edges from low weight to high weight.
Step-02:
 Take the edge with the lowest weight and use it to connect the vertices of the graph.
 If adding an edge creates a cycle, then reject that edge and go for the next least weight edge.
Step-03: Keep adding edges until all the vertices are connected and a Minimum Spanning Tree (MST) is obtained.


Analysis: Where E is the number of edges in the graph and V is the number of vertices, Kruskal's Algorithm can be shown to run in O(E log E) time, or simply O(E log V) time, all with simple data structures. These running times are equivalent because:
 E is at most V², and log V² = 2 × log V is O(log V).
 If we ignore isolated vertices, each of which will be its own component of the minimum spanning tree, V ≤ 2E, so log V is O(log E).
Thus, the total time is
O(E log E) = O(E log V).
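The steps above can be sketched in Python with a union-find (disjoint set) structure to detect cycles, matching the O(E log E) analysis; the edge list is an illustrative assumption.

```python
# Kruskal's algorithm: sort edges by weight, then add each edge whose
# endpoints are in different components (union-find detects cycles).
def kruskal_mst(n, edges):
    parent = list(range(n))

    def find(x):                        # set representative, with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    total = 0
    chosen = []
    for w, u, v in sorted(edges):       # Step-01: sort edges low to high
        ru, rv = find(u), find(v)
        if ru != rv:                    # Step-02: reject edges that form a cycle
            parent[ru] = rv             # union the two components
            total += w
            chosen.append((u, v, w))
    return total, chosen

# Edge list as (weight, u, v) for a 4-vertex graph.
edges = [(10, 0, 1), (6, 0, 2), (5, 0, 3), (15, 1, 3), (4, 2, 3)]
print(kruskal_mst(4, edges)[0])  # -> 19
```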


4. Optimal Merge Patterns


Given n sorted files, the task is to find the minimum number of computations needed to merge them all, i.e., the Optimal Merge Pattern.
When two or more sorted files are to be merged together to form a single file, the minimum computations required to reach this single file are known as the Optimal Merge Pattern.
If more than 2 files need to be merged, it can be done in pairs. For example, to merge 4 files A, B, C, D: first merge A with B to get X1, merge X1 with C to get X2, and merge X2 with D to get X3 as the output file.
If we have two files of sizes m and n, the total computation time of merging them will be m + n. Here, we use the greedy strategy of merging the two smallest files among all the files present.
Examples:
Given 3 files with sizes 2, 3, 4 units. Find an optimal way to combine these files
Input: n = 3, size = {2, 3, 4}
Output: 14
Explanation: There are different ways to combine these files:
Method 1 (optimal): merge 2 and 3 at a cost of 5, then merge 5 and 4 at a cost of 9. Total = 5 + 9 = 14.
Method 2: merge 2 and 4 at a cost of 6, then merge 6 and 3 at a cost of 9. Total = 6 + 9 = 15.
Method 3: merge 3 and 4 at a cost of 7, then merge 7 and 2 at a cost of 9. Total = 7 + 9 = 16.
Input: n = 6, size = {2, 3, 4, 5, 6, 7}

Output: 68
Explanation: Optimal way to combine these files: merge 2 and 3 (cost 5), merge 4 and 5 (cost 9), merge 5 and 6 (cost 11), merge 7 and 9 (cost 16), and finally merge 11 and 16 (cost 27). Total = 5 + 9 + 11 + 16 + 27 = 68.

Input: n = 5, size = {5, 10, 20, 30, 30}

Output: 205
Input: n = 5, size = {8, 8, 8, 8, 8}
Output: 96
Observations:
From the above results, we may conclude that to find the minimum cost of computation we need to keep the array of file sizes always sorted, i.e., repeatedly add the minimum possible computation cost and remove those files from the array. We can achieve this optimally using a min-heap (priority queue) data structure.


Approach:
Each node represents a file with a given size; the number of files is greater than 2.

1. Add all the nodes to a priority queue (min-heap) keyed on file size.

2. Initialize count = 0 (the variable that accumulates the total computation cost).

3. Repeat while the size of the priority queue is greater than 1:

	1. weight = remove the smallest element from the priority queue

	2. weight += remove the next smallest element from the priority queue

	3. count += weight

	4. add the combined file of size weight back to the priority queue

4. count is the final answer.


Time Complexity: O(nlogn)
Auxiliary Space: O(n)
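The min-heap approach above can be sketched in Python with heapq; the usage lines reproduce the four examples from the text.

```python
import heapq

# Optimal merge pattern cost: always merge the two smallest files,
# accumulating the cost of each merge.
def optimal_merge_cost(sizes):
    heap = list(sizes)
    heapq.heapify(heap)                 # build the min-heap in O(n)
    count = 0
    while len(heap) > 1:
        weight = heapq.heappop(heap)    # smallest file
        weight += heapq.heappop(heap)   # second smallest file
        count += weight                 # cost of this merge
        heapq.heappush(heap, weight)    # merged file goes back in
    return count

print(optimal_merge_cost([2, 3, 4]))            # -> 14
print(optimal_merge_cost([2, 3, 4, 5, 6, 7]))   # -> 68
print(optimal_merge_cost([5, 10, 20, 30, 30]))  # -> 205
print(optimal_merge_cost([8, 8, 8, 8, 8]))      # -> 96
```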
4.1 Huffman Coding
Huffman coding is a lossless data compression algorithm. The idea is to assign variable-length codes to input characters; the lengths of the assigned codes are based on the frequencies of the corresponding characters. The most frequent character gets the smallest code and the least frequent character gets the largest code.
A Huffman tree, or Huffman coding tree, is defined as a full binary tree in which each leaf of the tree corresponds to a letter in the given alphabet.
The Huffman tree is the binary tree with minimum external path weight, that is, the one with the minimum sum of weighted path lengths for the given set of leaves. So the goal is to construct a tree with the minimum external path weight.
An example is given below-
Letter frequency table

Letter     z   k   m   c   u   d   l   e
Frequency  2   7  24  32  37  42  42  120

Huffman code

Letter  Freq  Code    Bits
e       120   0       1
d       42    101     3
l       42    110     3
u       37    100     3
c       32    1110    4
m       24    11111   5
k       7     111101  6
z       2     111100  6

In the Huffman tree for the above example, each code is the path from the root to the corresponding leaf (0 for a left branch, 1 for a right branch).
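A compact Python sketch of Huffman coding using a min-heap, applied to the frequency table above. The 0/1 labelling of branches may differ from the code table shown, but the code lengths agree.

```python
import heapq

# Huffman coding: repeatedly merge the two least frequent subtrees.
# Heap entries are (frequency, tie-breaker, {letter: code-so-far}); the
# unique tie-breaker ensures tuple comparison never reaches the dict.
def huffman_codes(freqs):
    heap = [(f, i, {ch: ""}) for i, (ch, f) in enumerate(freqs.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)   # two least frequent subtrees
        f2, _, right = heapq.heappop(heap)
        merged = {ch: "0" + c for ch, c in left.items()}   # left branch = 0
        merged.update({ch: "1" + c for ch, c in right.items()})  # right = 1
        heapq.heappush(heap, (f1 + f2, counter, merged))
        counter += 1
    return heap[0][2]

freqs = {"z": 2, "k": 7, "m": 24, "c": 32, "u": 37, "d": 42, "l": 42, "e": 120}
codes = huffman_codes(freqs)
print(len(codes["e"]), len(codes["z"]))  # -> 1 6
```

The most frequent letter (e) gets a 1-bit code and the least frequent (z) a 6-bit code, exactly as in the table.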

5. Single Source Shortest Paths


Dijkstra's algorithm allows us to find the shortest path from a source vertex to every other vertex of a graph.
It differs from the minimum spanning tree because the shortest path between two vertices might not include all the vertices of the graph.
How Dijkstra's Algorithm works
Dijkstra's Algorithm works on the basis that any subpath B -> D of a shortest path A -> D between vertices A and D is also a shortest path between vertices B and D.

Each subpath is a shortest path


Dijkstra used this property in the opposite direction, i.e., we overestimate the distance of each vertex from the starting vertex. Then we visit each node and its neighbours to find the shortest subpath to those neighbours.
The algorithm uses a greedy approach in the sense that we find the next best solution hoping that the end result is the best solution for the whole problem.
Example of Dijkstra's algorithm
It is easier to start with an example and then think about the algorithm.

Start with a weighted graph

Choose a starting vertex and assign infinity path values to all other vertices


Go to each vertex and update its path length

If the path length of the adjacent vertex is less than the new path length, don't update it

Avoid updating path lengths of already visited vertices


After each iteration, we pick the unvisited vertex with the least path length. So we choose 5 before 7

Notice how the rightmost vertex has its path length updated twice

Repeat until all the vertices have been visited


Dijkstra's algorithm pseudocode


We need to maintain the path distance of every vertex. We can store that in an array of size V, where V is the number of vertices.
We also want to be able to get the shortest path, not only know the length of the shortest path. For this, we map each vertex to the vertex that last updated its path length.
Once the algorithm is over, we can backtrack from the destination vertex to the source vertex to find the path.
A minimum priority queue can be used to efficiently retrieve the vertex with the least path distance.

function dijkstra(G, S)
    for each vertex V in G
        distance[V] <- infinite
        previous[V] <- NULL
        if V != S, add V to Priority Queue Q
    distance[S] <- 0
    while Q IS NOT EMPTY
        U <- Extract MIN from Q
        for each unvisited neighbour V of U
            tempDistance <- distance[U] + edge_weight(U, V)
            if tempDistance < distance[V]
                distance[V] <- tempDistance
                previous[V] <- U
    return distance[], previous[]
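The pseudocode above translates to a short runnable Python version, using heapq as the minimum priority queue; the sample graph is an illustrative assumption.

```python
import heapq

# Dijkstra's algorithm: graph is an adjacency list
# {vertex: [(neighbour, weight), ...]} with non-negative weights.
def dijkstra(graph, source):
    distance = {v: float("inf") for v in graph}
    previous = {v: None for v in graph}   # vertex that last updated each path
    distance[source] = 0
    heap = [(0, source)]
    visited = set()
    while heap:
        d, u = heapq.heappop(heap)        # unvisited vertex with least distance
        if u in visited:
            continue
        visited.add(u)
        for v, w in graph[u]:
            temp = d + w                  # relax edge (u, v)
            if temp < distance[v]:
                distance[v] = temp
                previous[v] = u
                heapq.heappush(heap, (temp, v))
    return distance, previous

graph = {
    "A": [("B", 4), ("C", 2)],
    "B": [("C", 3), ("D", 2)],
    "C": [("B", 1), ("D", 5)],
    "D": [],
}
dist, prev = dijkstra(graph, "A")
print(dist["D"])  # -> 5  (A -> C -> B -> D: 2 + 1 + 2)
```

Backtracking through `prev` from "D" recovers the shortest path, as described above.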
…………………………………………………………………………………………………………
University Previous Year Questions topic wise

The Greedy Method


1. Explain the general principle of Greedy method and also list the applications of Greedy method.
2. Solve the following instance of the knapsack problem using the greedy method. n=7 (objects), m=15, profits are
(P1,P2,P3,P4,P5,P6,P7)=(10,5,15,7,6,18,3) and its corresponding weights are (W1,W2,W3,W4, W5, W6, W7
)=(2,3,5,7,1,4,1).
3. State the Greedy Knapsack. Find an optimal solution to the Knapsack instance n=3, m=20, (P1, P2, P3) = (25,
24, 15) and (W1, W2, W3) = (18, 15, 10)
4. Find an optimal solution to the knapsack instance n=7 objects and the capacity of knapsack m=15. The profits
and weights of the objects are (P1,P2,P3,P4,P5,P6,P7)=(10,5,15,7,6,18,3), (W1,W2,W3,W4,
W5,W6,W7)=(2,3,5,7,1,4,1) respectively.
5. Write and explain Prim’s algorithm for finding the minimum cost spanning tree of a graph with an
example.
6. What is a Minimum Cost Spanning tree? Explain Kruskal’s minimum cost spanning tree algorithm
with a suitable example.
7. What is a Spanning tree? Explain Prim’s Minimum cost spanning tree algorithm with suitable example
8. What is optimal merge pattern? Find optimal merge pattern for ten files whose record lengths are 28, 32, 12, 5,
84, 53, 91, 35, 3, and 11
9. Discuss the Dijkstra’s single source shortest path algorithm and derive its time complexity.
10. A motorist wishing to ride from city A to B. Formulate greedy based algorithms to generate the shortest path and


explain with an example graph.


11. Discuss the single-source shortest paths algorithm with a suitable example

