Unit 5 Notes
GRAPH
A Graph is a non-linear data structure consisting of vertices and edges.
The vertices are sometimes also referred to as nodes and the edges are lines or arcs that
connect any two nodes in the graph.
A Graph is composed of a set of vertices (V) and a set of edges (E). The graph is
denoted by G(V, E).
Components of a Graph
Vertices: Vertices are the fundamental units of the graph; they are also known as nodes.
Every vertex can be labelled or unlabelled.
Edges: Edges connect two nodes of the graph. In a directed graph, an edge is an ordered
pair of nodes. Edges can connect any two nodes in any possible way; there are no
restrictions. Edges are also known as arcs, and every edge can be labelled or unlabelled.
Graphs are used to solve many real-life problems. Graphs are used to represent networks. The
networks may include paths in a city or telephone network or circuit network. Graphs are also
used in social networks like LinkedIn and Facebook. For example, in Facebook, each person is
represented with a vertex(or node). Each node is a structure and contains information like
person id, name, gender, locale etc.
Types of Graphs:
Null Graph:
A graph of order n and size zero is a graph where there are only isolated vertices
with no edges connecting any pair of vertices.
Trivial Graph:
A graph is said to be trivial if it contains only one vertex and no edges.
Directed Graph
A graph in which all the edges are unidirectional.
Example: Twitter
If I follow you there is no rule that you need to follow me.
Undirected Graph
A graph in which all the edges are bi-directional.
Example: Facebook
When I accept your friend request, you and I are friends, and we can both like or
comment on each other's stories or photos.
Weighted Graph
A graph in which each edge is assigned with some weight/cost/value.
Example:
Example: a path from B to E,
B-A-D-F-E
Cost = 10+1+5+1 = 17
Unweighted Graph
A graph where there is no value or weight associated with the edge.
Cyclic Graph
A graph that contains a cycle is called a cyclic graph, i.e. the graph contains a path
that starts from a vertex and ends at the same vertex.
B-A-D-B
Example for directed Graph:
Here the path starts from vertex B and ends at vertex B.
B-A-C-E-B
Acyclic Graph
A graph that does not have any cycle is called an acyclic graph.
Connected Graph
A graph in which every node can be reached from any other node is known as a
connected graph.
Disconnected Graph
A graph in which at least one node is not reachable from another node is known as a
disconnected graph.
Degree
The degree of a node is the number of edges connected to it.
Example:
Degree of A=3
Degree of B=1
Degree of C=3
Degree of D=1
Degree of E=2
Degree of F=2
Degree of G=2
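The degrees above can be computed directly from an adjacency list. A minimal sketch (the small graph below is an assumed example, not the figure's graph):

```python
# Illustrative undirected graph stored as an adjacency list (the
# graph from the figure is not reproduced here).
graph = {
    "A": ["B", "C", "D"],
    "B": ["A"],
    "C": ["A", "D"],
    "D": ["A", "C"],
}

# In an undirected graph, the degree of a node is simply the length
# of its neighbor list.
for node, neighbors in graph.items():
    print("Degree of", node, "=", len(neighbors))
```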
Types of degree
There are two types of degrees. They are
Indegree
Outdegree
Indegree
The number of edges coming into a node is called Indegree.
Indegree of A=1
Indegree of B=3
Indegree of C=1
Indegree of D=1
Indegree of E=1
Outdegree
The number of edges going out of a node is called Outdegree.
Outdegree of A=2
Outdegree of B=0
Outdegree of C=0
Outdegree of D=3
Outdegree of E=1
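Indegree and outdegree can be computed from a directed adjacency list. A sketch with an assumed example graph (the figure's graph is not reproduced, so the numbers differ from the lists above):

```python
# Illustrative directed graph: digraph[u] lists the nodes u points to.
digraph = {
    "A": ["B", "C"],
    "B": [],
    "C": [],
    "D": ["A", "B", "E"],
    "E": ["B"],
}

# Outdegree of u = number of outgoing edges = len(digraph[u]).
outdegree = {u: len(vs) for u, vs in digraph.items()}

# Indegree of u = number of edges pointing at u; count every
# occurrence of u in the other nodes' neighbor lists.
indegree = {u: 0 for u in digraph}
for u in digraph:
    for v in digraph[u]:
        indegree[v] += 1

print("indegree:", indegree)
print("outdegree:", outdegree)
```

Note that the sum of all indegrees always equals the sum of all outdegrees (every edge is counted once on each side).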
Complete Graph
A graph is complete if every node in the graph is adjacent to all the other nodes of the graph.
Path
Sequence of vertices in which each pair of successive nodes is connected by an edge.
Types of paths:
Simple Path
Closed Path
Cycle
Simple Path
A simple path is a path in which all the nodes are distinct.
Cycle
A cycle is a path in which the first and last nodes are the same and all the other nodes
are distinct.
GRAPH OPERATIONS
The operations involved in graph are,
Insertion
Deletion
Example Program:
def add_node(v):
    if v in graph:
        print(v, "is already present in the graph")
    else:
        graph[v] = []

def add_edge(v1, v2):
    if v1 not in graph:
        print(v1, "is not present in the graph")
    elif v2 not in graph:
        print(v2, "is not present in the graph")
    else:
        # for a weighted graph, append [v2, cost] / [v1, cost] instead
        graph[v1].append(v2)
        graph[v2].append(v1)

def delete_node(v):
    if v not in graph:
        print(v, "is not present in the graph")
    else:
        graph.pop(v)
        for i in graph:
            list1 = graph[i]
            if v in list1:
                list1.remove(v)
graph={}
add_node("A")
add_node("B")
add_node("C")
add_node("D")
add_node("E")
add_edge("A","B")
add_edge("A","C")
add_edge("C","D")
delete_node("B")
print(graph)
Output:
{'A': ['C'], 'C': ['A', 'D'], 'D': ['C'], 'E': []}
GRAPH REPRESENTATIONS
In graph theory, a graph representation is a technique to store a graph in the memory of a
computer.
To represent a graph, we just need the set of vertices and, for each vertex, the neighbors of
that vertex (the vertices directly connected to it by an edge). If it is a weighted graph, then
a weight is associated with each edge.
There are different ways to optimally represent a graph, depending on the density of its
edges, type of operations to be performed and ease of use.
1. Adjacency Matrix
o An adjacency matrix is a 2D array of size V x V, where V is the number of vertices; the
entry at row i and column j is 1 if there is an edge from vertex i to vertex j, and 0
otherwise.
o If there is any weighted graph then instead of 1s and 0s, we can store the weight of the
edge.
Example
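A minimal sketch of building an adjacency matrix in Python (the graph used here is an assumed example, not the figure's):

```python
# Illustrative: 4 vertices (0..3) and undirected edges 0-1, 0-2,
# 1-2, 2-3.
V = 4
matrix = [[0] * V for _ in range(V)]

def add_edge(u, v, weight=1):
    # For an undirected graph the matrix is symmetric; for a
    # weighted graph, store the weight instead of 1.
    matrix[u][v] = weight
    matrix[v][u] = weight

add_edge(0, 1)
add_edge(0, 2)
add_edge(1, 2)
add_edge(2, 3)

for row in matrix:
    print(row)
```

Looking up whether an edge exists is O(1), but the matrix always uses O(V^2) space, which is wasteful for sparse graphs.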
2. Adjacency List
o We have an array of vertices which is indexed by the vertex number and, for each
vertex v, the corresponding array element points to a singly linked list of the neighbors
of v.
Example
GRAPH TRAVERSALS
Visiting all the nodes in the graph.
Any node can be the starting node. From that node we want to visit all the nodes.
Types of Techniques
The two standard traversal techniques are Depth First Search (DFS) and Breadth First
Search (BFS).
DFS ALGORITHM
It is a recursive algorithm to search all the vertices of a tree data structure or a graph.
The depth-first search (DFS) algorithm starts with the initial node of graph G and
goes deeper until we find the goal node or a node with no children.
Because of the recursive nature, a stack data structure can be used to implement the
DFS algorithm.
The step by step process to implement the DFS traversal is given as follows -
1. First, create a stack with the total number of vertices in the graph.
2. Now, choose any vertex as the starting point of traversal, and push that vertex into the
stack.
3. After that, push a non-visited vertex (adjacent to the vertex on the top of the stack) to
the top of the stack.
4. Now, repeat step 3 until no vertex is left to visit from the vertex on the stack's top.
5. If no vertex is left, go back and pop a vertex from the stack.
6. Repeat steps 3, 4, and 5 until the stack is empty.
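The steps above can be sketched with an explicit stack (a minimal sketch; the graph below is an assumed example, not the one in the figures):

```python
def dfs_iterative(graph, start):
    visited = []
    stack = [start]                  # push the starting vertex
    while stack:                     # repeat until the stack is empty
        node = stack.pop()           # pop the top vertex
        if node not in visited:
            visited.append(node)     # process it
            # push unvisited neighbors; reversed() keeps the visiting
            # order close to the recursive version
            for nb in reversed(graph[node]):
                if nb not in visited:
                    stack.append(nb)
    return visited

graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
print(dfs_iterative(graph, "A"))  # ['A', 'B', 'D', 'C']
```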
Algorithm
Step 1: SET STATUS = 1 (ready state) for each node in G
Step 2: Push the starting node A on the stack and set its STATUS = 2 (waiting state)
Step 3: Repeat Steps 4 and 5 until the stack is empty
Step 4: Pop the top node N. Process it and set its STATUS = 3 (processed state)
Step 5: Push on the stack all the neighbors of N that are in the ready state (whose STATUS =
1) and set their STATUS = 2 (waiting state)
[END OF LOOP]
Step 6: EXIT
Example:
Step 1:
Step 2:
Pop A and place it in DFS expression.
Step 3:
Step 4
Push the unvisited adjacent nodes of G onto the stack; here G has no adjacent node, so pop from the stack.
Step 5
Step 6
Now, all the graph nodes have been traversed, and the stack is empty.
Example Program:
DFS
def add_node(v):
    if v in graph:
        print(v, "is already present in graph")
    else:
        graph[v] = []

def add_edge(v1, v2):
    if v1 not in graph:
        print(v1, "is not present in the graph")
    elif v2 not in graph:
        print(v2, "is not present in the graph")
    else:
        # for a weighted graph, append [v2, cost] / [v1, cost] instead
        graph[v1].append(v2)
        graph[v2].append(v1)

def delete_node(v):
    if v not in graph:
        print(v, "is not present in the graph")
    else:
        graph.pop(v)
        for i in graph:
            list1 = graph[i]
            if v in list1:
                list1.remove(v)

def DFS(node, visited, graph):
    if node not in graph:
        print("Node is not present in the graph")
        return
    if node not in visited:
        print(node)
        visited.add(node)
        for i in graph[node]:
            DFS(i, visited, graph)

visited = set()
graph = {}
add_node("A")
add_node("B")
add_node("C")
add_node("D")
add_node("E")
add_edge("A", "B")
add_edge("B", "E")
add_edge("A", "C")
add_edge("A", "D")
add_edge("B", "D")
add_edge("C", "D")
add_edge("E", "D")
delete_node("B")
print(graph)
DFS("A", visited, graph)
Output:
{'A': ['C', 'D'], 'C': ['A', 'D'], 'D': ['A', 'C', 'E'], 'E': ['D']}
A
C
D
E
Complexity of Depth-first search algorithm
The time complexity of the DFS algorithm is O(V+E), where V is the number of vertices and
E is the number of edges in the graph.
BFS ALGORITHM
Breadth-first search is a graph traversal algorithm that starts traversing the graph from
the root node and explores all the neighboring nodes.
Then, it selects the nearest node and explores all the unexplored nodes.
While using BFS for traversal, any node in the graph can be considered as the root
node.
There are many ways to traverse the graph, but among them, BFS is the most
commonly used approach.
It is an algorithm used to search all the vertices of a tree or graph data structure, level by level.
BFS puts every vertex of the graph into two categories - visited and non-visited.
It selects a single node in a graph and, after that, visits all the nodes adjacent to the
selected node.
Applications of BFS algorithm
o BFS can be used to find the neighboring locations from a given source location.
o In a peer-to-peer network, BFS algorithm can be used as a traversal method to find all
the neighboring nodes. Most torrent clients, such as BitTorrent, uTorrent, etc. employ
this process to find "seeds" and "peers" in the network.
o BFS can be used in web crawlers to create web page indexes. It is one of the main
algorithms that can be used to index web pages. It starts traversing from the source
page and follows the links associated with the page. Here, every web page is
considered as a node in the graph.
o BFS is used to determine the shortest path and minimum spanning tree.
o BFS is also used in Cheney's algorithm for copying garbage collection.
o It can be used in the Ford-Fulkerson method to compute the maximum flow in a flow
network.
Algorithm
The steps involved in the BFS algorithm to explore a graph are given as follows –
Step 1: SET STATUS = 1 (ready state) for each node in G
Step 2: Enqueue the starting node A and set its STATUS = 2 (waiting state)
Step 3: Repeat Steps 4 and 5 until the queue is empty
Step 4: Dequeue a node N. Process it and set its STATUS = 3 (processed state).
Step 5: Enqueue all the neighbours of N that are in the ready state (whose STATUS = 1) and
set their STATUS = 2 (waiting state)
[END OF LOOP]
Step 6: EXIT
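The steps above can be sketched with a queue; the STATUS bookkeeping is replaced by a simple visited set (the graph below is an assumed example, not the one in the figures):

```python
from collections import deque

def bfs(graph, start):
    order = []
    visited = {start}            # mark the start node as waiting
    queue = deque([start])       # and enqueue it
    while queue:                 # repeat until the queue is empty
        node = queue.popleft()   # dequeue a node N and process it
        order.append(node)
        for nb in graph[node]:   # enqueue all ready-state neighbors of N
            if nb not in visited:
                visited.add(nb)
                queue.append(nb)
    return order

graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
print(bfs(graph, "A"))  # ['A', 'B', 'C', 'D']
```

Using collections.deque makes each dequeue O(1); popping from the front of a plain list is O(n).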
Example:
Step 1:
Insert A into the queue.
Step 2:
Remove the element A from the queue and place it in the BFS expression.
Step 3:
Insert the adjacent elements of A into the queue, i.e. B, C.
Remove the element B from the queue and place it in the BFS expression.
Step 4:
Insert the adjacent elements of B into the queue.
Remove the element C from the queue and place it in the BFS expression.
Step 5:
Insert the adjacent elements of C into the queue.
Remove the element D from the queue and place it in the BFS expression.
Step 6:
D has no unvisited adjacent nodes.
Remove the element E from the queue and place it in the BFS expression.
Step 7:
E has no unvisited adjacent nodes.
Remove the element F from the queue and place it in the BFS expression.
Step 8:
F has no unvisited adjacent nodes.
Remove the element G from the queue and place it in the BFS expression.
Example Program:
graph = {
    '5' : ['3', '7'],
    '3' : ['2', '4'],
    '7' : ['8'],
    '2' : [],
    '4' : ['8'],
    '8' : []
}
visited = []
queue = []

def bfs(visited, graph, node):
    visited.append(node)
    queue.append(node)
    while queue:
        m = queue.pop(0)
        print(m, end=" ")
        for neighbour in graph[m]:
            if neighbour not in visited:
                visited.append(neighbour)
                queue.append(neighbour)

bfs(visited, graph, '5')
Time complexity of BFS depends upon the data structure used to represent the graph. The
time complexity of BFS algorithm is O(V+E), since in the worst case, BFS algorithm
explores every node and edge. In a graph, the number of vertices is O(V), whereas the
number of edges is O(E).
DAG
A DAG (directed acyclic graph) is a directed graph with no cycles. A DAG can always be
topologically ordered, i.e. for each edge in the graph, the start vertex of the edge occurs
earlier in the sequence than the ending vertex of the edge.
Example
In the above directed graph, if we find the paths from any node, say u, we will never find a
path that comes back to u. Hence, this is a DAG.
Algorithm
1. First step is to identify the node that has no in-degree (no incoming edges) and select
that node as the source node of the graph.
2. Now delete the source node that has zero in-degree and its associated edges. The
deleted vertex will be added in the result array.
3. Update the in-degree of the adjacent nodes after deleting the outgoing edges.
4. The above steps will be repeated until the graph is empty.
The result array that we get at the end of the process is known as the topological ordering
of the directed graph. If some nodes are left but they still have incoming edges, the graph is
not acyclic. If the given graph is not acyclic, a topological ordering does not exist.
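The in-degree method described above can be sketched as follows; note that the example program below uses a DFS-based method instead (the DAG here is an assumed example):

```python
from collections import deque

def topological_sort(graph):
    # graph: dict mapping each node to its list of successors
    indegree = {u: 0 for u in graph}
    for u in graph:
        for v in graph[u]:
            indegree[v] += 1
    # step 1: start from the nodes with no incoming edges
    queue = deque(u for u in graph if indegree[u] == 0)
    result = []
    while queue:
        u = queue.popleft()
        result.append(u)           # step 2: "delete" the node
        for v in graph[u]:         # step 3: update neighbors' in-degrees
            indegree[v] -= 1
            if indegree[v] == 0:
                queue.append(v)
    if len(result) != len(graph):  # nodes left with incoming edges
        raise ValueError("graph has a cycle; no topological ordering")
    return result

dag = {0: [1, 3], 1: [2], 2: [3, 4], 3: [4], 4: []}
print(topological_sort(dag))  # [0, 1, 2, 3, 4]
```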
Example Program:
from collections import defaultdict

class Graph:
    def __init__(self, n):
        self.graph = defaultdict(list)
        self.N = n

    def addEdge(self, m, n):
        self.graph[m].append(n)

    def sortUtil(self, n, visited, stack):
        visited[n] = True
        for element in self.graph[n]:
            if visited[element] == False:
                self.sortUtil(element, visited, stack)
        stack.insert(0, n)

    def topologicalSort(self):
        visited = [False] * self.N
        stack = []
        for element in range(self.N):
            if visited[element] == False:
                self.sortUtil(element, visited, stack)
        print("The Topological Sort Of The Graph Is:")
        print(stack)

graph = Graph(5)
graph.addEdge(0, 1)
graph.addEdge(0, 3)
graph.addEdge(1, 2)
graph.addEdge(2, 3)
graph.addEdge(2, 4)
graph.addEdge(3, 4)
graph.topologicalSort()
Output:
The Topological Sort Of The Graph Is:
[0, 1, 2, 3, 4]
The time complexity of topological sorting is O(M + N), where M is the number of edges in
the graph and N is the number of nodes in the graph.
GREEDY ALGORITHMS
An algorithm is designed to achieve the optimal solution for a given problem. In the greedy
algorithm approach, decisions are made from the given solution domain: being greedy, the
choice that seems to provide the optimum solution at the moment is chosen.
Greedy algorithms try to find a localized optimum solution, which may eventually lead to
globally optimized solutions. However, generally greedy algorithms do not provide globally
optimized solutions.
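A classic illustration of the greedy approach (not from these notes) is coin change: the locally best choice is the largest coin that still fits. It also shows why greedy choices are not always globally optimal:

```python
def greedy_change(amount, coins):
    coins = sorted(coins, reverse=True)
    picked = []
    for c in coins:
        while amount >= c:
            picked.append(c)   # locally optimal: take the largest coin
            amount -= c
    return picked

# With "canonical" coin systems the greedy result is globally optimal:
print(greedy_change(63, [1, 5, 10, 25]))  # [25, 25, 10, 1, 1, 1]
# With coins {1, 3, 4} and amount 6, greedy gives [4, 1, 1] (3 coins)
# although [3, 3] (2 coins) is better - greedy is not always optimal.
print(greedy_change(6, [1, 3, 4]))        # [4, 1, 1]
```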
Examples
Most networking algorithms use the greedy approach. Here is a list of few of them −
Prim's Minimal Spanning Tree Algorithm
Kruskal's Minimal Spanning Tree Algorithm
Dijkstra's Shortest Path Algorithm
Prim's Algorithm
Prim's algorithm is a minimum spanning tree algorithm that takes a graph as input and finds
the subset of the edges of that graph which
form a tree that includes every vertex
has the minimum sum of weights among all the trees that can be formed from the graph.
Pseudocode
T = ∅;
M = { 1 };
while (M ≠ V)
    let (m, n) be the lowest-cost edge such that m ∈ M and n ∈ V - M;
    T = T ∪ {(m, n)}
    M = M ∪ {n}
Here we create two sets of nodes, M and V-M. The set M contains the nodes that have been
visited, and the set V-M contains the nodes that haven't been visited. At each step we move
one node from V-M to M by connecting the least-weight edge.
Example
Let us consider the below-weighted graph
Consider the source vertex to initialize the algorithm.
Now, we will choose the lowest-weight edge from the source vertex and add it to the
spanning tree.
Then, choose the next nearest node connected with the minimum edge and add it to the
solution. If there are multiple choices, then choose any one.
Continue the steps until all nodes are included and we find the minimum spanning tree.
Cost of MST=2+1+3+5=11
Output
Python Code for Prim’s Algorithm
INF = 9999999
N = 5
G = [[0, 19, 5, 0, 0],
     [19, 0, 5, 9, 2],
     [5, 5, 0, 1, 6],
     [0, 9, 1, 0, 1],
     [0, 2, 6, 1, 0]]
selected_node = [0, 0, 0, 0, 0]
no_edge = 0
selected_node[0] = True
print("Edge : Weight\n")
while (no_edge < N - 1):
    minimum = INF
    a = 0
    b = 0
    for m in range(N):
        if selected_node[m]:
            for n in range(N):
                if ((not selected_node[n]) and G[m][n]):
                    # not in selected and there is an edge
                    if minimum > G[m][n]:
                        minimum = G[m][n]
                        a = m
                        b = n
    print(str(a) + "-" + str(b) + ":" + str(G[a][b]))
    selected_node[b] = True
    no_edge += 1
Time Complexity:
The running time of Prim's algorithm is O(V log V + E log V), which is equal to O(E log V),
because every insertion of a node into the solution takes logarithmic time. Here, E is the
number of edges and V is the number of vertices/nodes. The running time can be improved
to O(E + V log V) using Fibonacci heaps.
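A sketch of Prim's algorithm with a binary heap, which gives the O(E log V) behavior mentioned above (the graph below is an assumed example; the matrix version above uses a linear scan instead):

```python
import heapq

def prim_mst(graph, start):
    # graph: dict mapping node -> list of (weight, neighbor) pairs
    visited = {start}
    edges = list(graph[start])
    heapq.heapify(edges)              # candidate edges leaving the tree
    total = 0
    while edges and len(visited) < len(graph):
        w, v = heapq.heappop(edges)   # cheapest edge leaving the tree
        if v in visited:
            continue                  # stale entry, skip it
        visited.add(v)
        total += w
        for edge in graph[v]:
            if edge[1] not in visited:
                heapq.heappush(edges, edge)
    return total

g = {
    "a": [(4, "b"), (2, "c")],
    "b": [(4, "a"), (1, "c")],
    "c": [(2, "a"), (1, "b"), (3, "d")],
    "d": [(3, "c")],
}
print(prim_mst(g, "a"))  # 6  (edges a-c, c-b, c-d)
```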
Applications
Prim’s algorithm is used in network design
It is used in networks of roads and rail tracks connecting all the cities
Prim’s algorithm is used in laying cables of electrical wiring
Prim’s algorithm is used in irrigation channels and placing microwave towers
It is used in cluster analysis
Prim’s algorithm is used in gaming development and cognitive science
Pathfinding algorithms in artificial intelligence and traveling salesman problems make
use of prim’s algorithm.
Kruskal's Algorithm
Kruskal's algorithm is a minimum spanning tree algorithm that takes a graph as input and
finds the subset of the edges of that graph which
form a tree that includes every vertex
has the minimum sum of weights among all the trees that can be formed from the graph
Kruskal's algorithm work process
It falls under a class of algorithms called greedy algorithms that find the local optimum in the
hopes of finding a global optimum.
We start from the edges with the lowest weight and keep adding edges until we reach our
goal.
Step 1:
Step 2:
Step 3:
Step 4:
class Graph:
    def __init__(self, vertices):
        self.V = vertices
        self.graph = []

    def add_edge(self, u, v, w):
        self.graph.append([u, v, w])

    def find(self, parent, i):
        if parent[i] == i:
            return i
        return self.find(parent, parent[i])

    def apply_union(self, parent, rank, x, y):
        xroot = self.find(parent, x)
        yroot = self.find(parent, y)
        if rank[xroot] < rank[yroot]:
            parent[xroot] = yroot
        elif rank[xroot] > rank[yroot]:
            parent[yroot] = xroot
        else:
            parent[yroot] = xroot
            rank[xroot] += 1

    def kruskal_algo(self):
        result = []
        i, e = 0, 0
        self.graph = sorted(self.graph, key=lambda item: item[2])
        parent = []
        rank = []
        for node in range(self.V):
            parent.append(node)
            rank.append(0)
        while e < self.V - 1:
            u, v, w = self.graph[i]
            i = i + 1
            x = self.find(parent, u)
            y = self.find(parent, v)
            if x != y:
                e = e + 1
                result.append([u, v, w])
                self.apply_union(parent, rank, x, y)
        for u, v, weight in result:
            print("%d - %d: %d" % (u, v, weight))
g = Graph(6)
g.add_edge(0, 1, 4)
g.add_edge(0, 2, 4)
g.add_edge(1, 2, 2)
g.add_edge(1, 0, 4)
g.add_edge(2, 0, 4)
g.add_edge(2, 1, 2)
g.add_edge(2, 3, 3)
g.add_edge(2, 5, 2)
g.add_edge(2, 4, 4)
g.add_edge(3, 2, 3)
g.add_edge(3, 4, 3)
g.add_edge(4, 2, 4)
g.add_edge(4, 3, 3)
g.add_edge(5, 2, 2)
g.add_edge(5, 4, 3)
g.kruskal_algo()
Output:
1 - 2: 2
2 - 5: 2
2 - 3: 3
3 - 4: 3
0 - 1: 4
DYNAMIC PROGRAMMING
Dynamic programming is a problem-solving technique for resolving complex
problems by recursively breaking them up into sub-problems, which are then each
solved individually.
Mostly, these algorithms are used for optimization.
Dynamic programming can be used in both top-down and bottom-up manner.
Example
The following computer problems can be solved using the dynamic programming approach −
Characteristics of Dynamic Programming Algorithm:
In general, dynamic programming (DP) is one of the most powerful techniques for
solving a certain class of problems.
There is an elegant way to formulate the approach and a very simple thinking process,
and the coding part is very easy.
Additionally, the optimal solutions to the subproblems contribute to the optimal
solution of the given problem.
1. Top-Down (Memoization):
Break down the given problem in order to begin solving it. If you see that the problem has
already been solved, return the saved answer. If it hasn't been solved, solve it and save the
answer. This is usually easy to think of and very intuitive, and is referred to as Memoization.
2. Bottom-Up(Dynamic Programming):
Analyze the problem and see in what order the subproblems are solved, and work your way
up from the trivial subproblem to the given problem. This process ensures that the
subproblems are solved before the main problem. This is referred to as Dynamic
Programming.
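Both styles can be illustrated with the standard Fibonacci example (not from these notes):

```python
from functools import lru_cache

# Top-down (memoization): recurse, caching each subproblem's answer
# so it is computed only once.
@lru_cache(maxsize=None)
def fib_memo(n):
    if n < 2:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)

# Bottom-up (tabulation): fill a table from the trivial cases up,
# so every subproblem is solved before it is needed.
def fib_tab(n):
    table = [0, 1] + [0] * max(0, n - 1)
    for i in range(2, n + 1):
        table[i] = table[i - 1] + table[i - 2]
    return table[n]

print(fib_memo(10), fib_tab(10))  # 55 55
```

Both run in O(n) time instead of the exponential time of plain recursion.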
Tabulation vs. Memoization
Approach: Tabulation (dynamic programming) is generally an iterative approach, whereas
memoization is a recursive approach.
Example:
Towers of Hanoi
Tower of Hanoi is a mathematical puzzle where we have three rods (A, B, and C)
and N disks. Initially, all the disks are stacked in decreasing value of diameter i.e., the
smallest disk is placed on the top and they are on rod A. The objective of the puzzle is to
move the entire stack to another rod (here considered C), obeying the following simple
rules:
Only one disk can be moved at a time.
Each move consists of taking the upper disk from one of the stacks and placing it on top
of another stack i.e. a disk can only be moved if it is the uppermost disk on a stack.
No disk may be placed on top of a smaller disk.
Program
def TowerOfHanoi(n, from_rod, to_rod, aux_rod):
    if n == 0:
        return
    TowerOfHanoi(n-1, from_rod, aux_rod, to_rod)
    print("Move disk", n, "from rod", from_rod, "to rod", to_rod)
    TowerOfHanoi(n-1, aux_rod, to_rod, from_rod)

N = 3
TowerOfHanoi(N, 'A', 'C', 'B')
Output
Move disk 1 from rod A to rod C
Move disk 2 from rod A to rod B
Move disk 1 from rod C to rod B
Move disk 3 from rod A to rod C
Move disk 1 from rod B to rod A
Move disk 2 from rod B to rod C
Move disk 1 from rod A to rod C
Time complexity: O(2^N). There are two possibilities for every disk, so 2 * 2 * 2
* . . . * 2 (N times) is 2^N.
SHORTEST PATHS
The shortest path problem is the problem of finding a path between two vertices (or
nodes) in a graph such that the sum of the weights of its constituent edges is
minimized.
The shortest path between any two nodes of the graph can be found using many
algorithms, such as Dijkstra's algorithm.
Dijkstra's Algorithm
Dijkstra's algorithm allows us to find the shortest path between any two vertices of a graph.
It differs from the minimum spanning tree because the shortest distance between two vertices
might not include all the vertices of the graph.
This Algorithm is greedy because it always chooses the shortest or closest node from the
origin.
The term “greedy” means that among a set of outcomes or results, the Algorithm will choose
the best of them.
So, Dijkstra’s Algorithm finds all the shortest paths from a single destination node. As a
result, it behaves like a greedy algorithm.
Step 1) Initialize the starting node with cost 0 and the rest of the nodes with cost Infinity.
Step 2) Maintain an array or list to keep track of the visited nodes.
Step 3) Update the node cost with the minimum cost. It can be done by comparing the current
cost with the path cost.
Step 4) Continue step 3 until all the nodes are visited.
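The steps above can be sketched with a priority queue. The graph mirrors the worked example that follows; the edge weights b-d = 5 and d-e = 2 are assumptions to complete the figure's graph:

```python
import heapq

def dijkstra(graph, source):
    # graph: dict mapping node -> list of (neighbor, weight) pairs
    dist = {u: float("inf") for u in graph}  # step 1: all costs Infinity
    dist[source] = 0                         # except the source
    visited = set()                          # step 2: visited nodes
    heap = [(0, source)]
    while heap:                              # step 4: until all visited
        d, u = heapq.heappop(heap)
        if u in visited:
            continue                         # stale entry, skip it
        visited.add(u)
        for v, w in graph[u]:                # step 3: relax each edge
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(heap, (dist[v], v))
    return dist

g = {
    "a": [("b", 4), ("c", 2)],
    "b": [("c", 1), ("d", 5)],
    "c": [("b", 1), ("d", 8), ("e", 10)],
    "d": [("e", 2)],
    "e": [],
}
print(dijkstra(g, "a"))  # {'a': 0, 'b': 3, 'c': 2, 'd': 8, 'e': 10}
```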
Example:
Step 1:
Here the shortest path starts from 'a'. Initially the cost of 'a' is zero because it is the
source. All other vertices are set to INFINITY.
There are two possibilities for 'a',
i.e. from a to b
and from a to c
From a to b the cost is 0+4=4
From a to c the cost is 0+2=2
Here the minimum cost is 2, which is to travel from a to c.
Step 2:
From c to b the cost is 2+1=3
From c to d the cost is 2+8=10
From c to e the cost is 2+10=12
Here the minimum cost is 3, which is to travel from c to b.
Step 3:
Step 4:
Step 5:
Example Program
class Graph():
    def __init__(self, vertices):
        self.V = vertices
        self.graph = [[0 for column in range(vertices)]
                      for row in range(vertices)]

    def printSolution(self, dist):
        print("Vertex \t Distance from Source")
        for node in range(self.V):
            print(node, "\t\t", dist[node])

    def minDistance(self, dist, sptSet):
        min = 1e7
        for v in range(self.V):
            if dist[v] < min and sptSet[v] == False:
                min = dist[v]
                min_index = v
        return min_index

    def dijkstra(self, src):
        dist = [1e7] * self.V
        dist[src] = 0
        sptSet = [False] * self.V
        for cout in range(self.V):
            u = self.minDistance(dist, sptSet)
            sptSet[u] = True
            for v in range(self.V):
                if (self.graph[u][v] > 0 and
                        sptSet[v] == False and
                        dist[v] > dist[u] + self.graph[u][v]):
                    dist[v] = dist[u] + self.graph[u][v]
        self.printSolution(dist)
g = Graph(9)
g.graph = [[0, 4, 0, 0, 0, 0, 0, 8, 0],
[4, 0, 8, 0, 0, 0, 0, 11, 0],
[0, 8, 0, 7, 0, 4, 0, 0, 2],
[0, 0, 7, 0, 9, 14, 0, 0, 0],
[0, 0, 0, 9, 0, 10, 0, 0, 0],
[0, 0, 4, 14, 10, 0, 2, 0, 0],
[0, 0, 0, 0, 0, 2, 0, 1, 6],
[8, 11, 0, 0, 0, 0, 1, 0, 7],
[0, 0, 2, 0, 0, 0, 6, 7, 0]
]
g.dijkstra(0)
Output:
Vertex   Distance from Source
0        0
1        4
2        12
3        19
4        21
5        11
6        9
7        8
8        14
SPANNING TREE
Basically, a spanning tree is used to find a minimum path to connect all nodes of the graph.
Some of the common applications of the spanning tree are listed as follows -
o Cluster Analysis
o Civil network planning
o Computer network routing protocol
Properties of spanning tree
A spanning tree is a subset of a connected graph G; a disconnected graph does not have a
spanning tree.
A minimum spanning tree can be defined as the spanning tree in which the sum of the
weights of the edge is minimum.
The weight of the spanning tree is the sum of the weights given to the edges of the
spanning tree.
In the real world, this weight can be considered as the distance, traffic load,
congestion, or any random value.
The sum of the edges of the above graph is 16. Now, some of the possible spanning trees
created from the above graph are –
Step 1:
Step 2:
Step 3:
Step 4:
So, the minimum spanning tree that is selected from the above spanning trees for the given
weighted graph is –
Algorithms for minimum spanning tree
A minimum spanning tree can be found from a weighted graph by using the algorithms given
below -
o Prim's Algorithm
o Kruskal's Algorithm
Prim's Algorithm
1. Initialize the minimum spanning tree with a vertex chosen at random.
2. Find all the edges that connect the tree to new vertices, find the minimum and add it to the
tree.
3. Keep repeating step 2 until we get a minimum spanning tree.
Kruskal's Algorithm
1. Take the edge with the lowest weight and add it to the spanning tree. If adding the edge
creates a cycle, then reject this edge.
2. Keep adding edges until we reach all vertices.
Example:
Step 1:
Step 2:
Step 3:
Step 4:
Application
In computer networks (LAN connection)
P Class
It is the collection of decision problems that can be solved by a deterministic machine
in polynomial time.
NP Class
It is the collection of decision problems that can be solved by a non-deterministic
machine in polynomial time.
Features:
1. The solutions of the NP class are hard to find since they are being solved by a non-
deterministic machine but the solutions are easy to verify.
2. Problems of NP can be verified by a Turing machine in polynomial time.
This class contains many problems that one would like to be able to solve effectively:
1. Boolean Satisfiability Problem (SAT).
2. Hamiltonian Path Problem.
3. Graph coloring.
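The phrase "easy to verify" can be illustrated with SAT: checking a candidate assignment (a certificate) against a CNF formula takes polynomial time, while finding such an assignment is the hard part. A minimal sketch:

```python
def verify_sat(clauses, assignment):
    # clauses: list of clauses; each clause is a list of ints, where
    # 3 means variable x3 and -3 means "not x3".
    # assignment: dict mapping variable number -> True/False.
    for clause in clauses:
        # a clause is satisfied if at least one literal is true
        if not any(assignment[abs(lit)] == (lit > 0) for lit in clause):
            return False          # some clause is unsatisfied
    return True

# (x1 or not x2) and (x2 or x3)
clauses = [[1, -2], [2, 3]]
print(verify_sat(clauses, {1: True, 2: False, 3: True}))   # True
print(verify_sat(clauses, {1: False, 2: True, 3: False}))  # False
```

The verification loop touches each literal once, i.e. it is linear in the formula size, whereas the only known general way to find an assignment is to search an exponential space.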
Co-NP Class
Co-NP is the class of decision problems whose complements are in NP, i.e. the "no"
answers can be verified in polynomial time.
NP-hard class
An NP-hard problem is at least as hard as the hardest problems in NP; it is the class of
problems such that every problem in NP reduces to it.
NP-complete class
A problem is NP-complete if it is both in NP and NP-hard. NP-complete problems are the
hardest problems in NP.