
UNIT V GRAPH STRUCTURES

GRAPH
 A Graph is a non-linear data structure consisting of vertices and edges.
 The vertices are sometimes also referred to as nodes and the edges are lines or arcs that
connect any two nodes in the graph.
 A Graph is composed of a set of vertices (V) and a set of edges (E). The graph is
denoted by G(V, E).

Components of a Graph

 Vertices: Vertices are the fundamental units of the graph; they are also known as nodes.
Every vertex can be labeled or unlabelled.
 Edges: Edges connect two nodes of the graph. In a directed graph, an edge is an ordered
pair of nodes. An edge can connect any two nodes; there are no restrictions. Edges are
also known as arcs, and every edge can be labeled or unlabelled.

Graphs are used to solve many real-life problems. Graphs are used to represent networks,
such as paths in a city, telephone networks or circuit networks. Graphs are also used in
social networks like LinkedIn and Facebook. For example, in Facebook, each person is
represented by a vertex (or node). Each node is a structure and contains information such as
person id, name, gender, locale etc.

Types of Graphs:

Null Graph:
A null graph is a graph of order n and size zero, i.e. it contains only isolated vertices
with no edges connecting any pair of vertices.

Trivial Graph:
A graph is said to be trivial if it contains only one vertex and no edges.

Directed Graph
A graph in which all the edges are unidirectional.

Example: Twitter
If I follow you, there is no rule that you need to follow me back.

Undirected Graph
A graph in which all the edges are bi-directional.

Example: Facebook
When I accept your friend request, we become friends, and both of us can like or
comment on each other's stories or photos.

Weighted Graph
A graph in which each edge is assigned some weight/cost/value.
Example:

Taking paths from B to E:

Step 1:
Direct path from B-E, the cost is 2
i.e. Cost = 2
Step 2:
Path from B-E through B-A-C-E,
Cost = 10 + 3 + 4 = 17
Step 3:
Path from B-E through B-A-D-F-E,
Cost = 10 + 1 + 5 + 1 = 17
Unweighted Graph
A graph where there is no value or weight associated with the edges.

Cyclic Graph
 A graph that contains a cycle is called a cyclic graph.
 A cycle is a path that starts from a vertex and ends at the same vertex.

Example for an undirected graph:

B-A-D-B
Example for a directed graph:

Here the path starts from the vertex B and ends at vertex B.
B-A-C-E-B
Acyclic Graph
A graph that does not have any cycle is called an acyclic graph.

Connected Graph
A graph in which we can visit any other node from any node is known as a connected
graph.

Disconnected Graph
A graph in which at least one node is not reachable from another node is known as a
disconnected graph.

Degree
The degree of a node is the number of edges connected to it.
Degree of node = number of edges connected to it.
Example:

Degree of A=3
Degree of B=1
Degree of C=3
Degree of D=1
Degree of E=2
Degree of F=2
Degree of G=2
Types of degree
There are two types of degrees. They are
Indegree
Outdegree
Indegree

The number of edges coming to that node is called Indegree.

Indegree of a node= Number of edges coming to that node.

Indegree of A=1

Indegree of B=3

Indegree of C=1

Indegree of D=1

Indegree of E=1

Outdegree

The number of edges going outside from that node is called Outdegree.

Outdegree of A=2

Outdegree of B=0

Outdegree of C=0

Outdegree of D=3

Outdegree of E=1
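
As a quick illustration (the directed edge list below is a hypothetical example, not the graph from the figure above), in-degree and out-degree can be computed directly from a list of edges:

edges = [("A", "B"), ("A", "C"), ("D", "B"), ("D", "C"), ("D", "E"), ("E", "B")]

indegree = {}
outdegree = {}
for u, v in edges:
    outdegree[u] = outdegree.get(u, 0) + 1   # edge goes out of u
    indegree[v] = indegree.get(v, 0) + 1     # edge comes into v

for node in sorted(set(indegree) | set(outdegree)):
    print(node, "indegree =", indegree.get(node, 0), "outdegree =", outdegree.get(node, 0))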

Complete Graph
A graph is complete if every node in the graph is adjacent to all the other nodes of the graph.

Path
A path is a sequence of vertices in which each pair of successive vertices is connected by an
edge.

Path from A-G is

A-C-F-G
Length of Path
The length of a path is the number of edges in the path. The length of a path is always
equal to or greater than 1; it cannot be 0.
Example:

Here the length of the path from A to G is 3.


Types of path

 Simple Path
 Closed Path
 Cycle
Simple Path

A path is simple if all of its vertices are distinct.

Vertices are not repeated so it is a simple path.


Closed Path
A path is closed if the first and last vertices of the path are the same.
It can also have duplicate vertices in the middle.

Cycle
A cycle is a path in which the first and last vertices are the same and all other vertices
are distinct.
GRAPH OPERATIONS
The operations involved in a graph are:
Insertion
Deletion
Example Program:

def add_node(v):
    # Add a new vertex v with an empty adjacency list.
    if v in graph:
        print(v, "is already present in graph")
    else:
        graph[v] = []

def add_edge(v1, v2):
    # Add an undirected edge between v1 and v2.
    if v1 not in graph:
        print(v1, "is not present in the graph")
    elif v2 not in graph:
        print(v2, "is not present in the graph")
    else:
        # For a weighted graph, a [vertex, cost] pair could be stored instead.
        graph[v1].append(v2)
        graph[v2].append(v1)

def delete_node(v):
    # Remove vertex v and every edge that refers to it.
    if v not in graph:
        print(v, "is not present in the graph")
    else:
        graph.pop(v)
        for i in graph:
            list1 = graph[i]
            if v in list1:
                list1.remove(v)

graph = {}
add_node("A")
add_node("B")
add_node("C")
add_node("D")
add_node("E")
add_edge("A", "B")
add_edge("A", "C")
add_edge("C", "D")
delete_node("B")
print(graph)
Output:

GRAPH REPRESENTATIONS
In graph theory, a graph representation is a technique to store a graph in the memory of a
computer.

To represent a graph, we just need the set of vertices, and for each vertex the neighbors of
that vertex (the vertices directly connected to it by an edge). If it is a weighted graph, then
the weight is associated with each edge.

There are different ways to optimally represent a graph, depending on the density of its
edges, the type of operations to be performed and ease of use.

1. Adjacency Matrix

o Adjacency matrix is a sequential representation.
o It is used to represent which nodes are adjacent to each other, i.e. whether there is an
  edge connecting two nodes of the graph.
o In this representation, we construct an n x n matrix A. If there is an edge from
  vertex i to vertex j, then the corresponding element of A is A[i][j] = 1, otherwise A[i][j] = 0.
o If the graph is weighted, then instead of 1s and 0s we can store the weight of the
  edge.

Example

Undirected graph representation

Directed graph representation

Undirected weighted graph representation
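
A minimal sketch of an adjacency-matrix representation in Python (the 4-vertex undirected graph used here is a hypothetical example, since the figures are not reproduced):

# Hypothetical undirected graph with vertices 0..3 and edges (0,1), (0,2), (1,2), (2,3).
n = 4
edges = [(0, 1), (0, 2), (1, 2), (2, 3)]

# Build the n x n matrix, initialised to 0 (no edge).
A = [[0] * n for _ in range(n)]
for i, j in edges:
    A[i][j] = 1
    A[j][i] = 1   # symmetric entry, because the graph is undirected

for row in A:
    print(row)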

2. Adjacency List

o Adjacency list is a linked representation.
o In this representation, for each vertex in the graph we maintain the list of its
  neighbors. That is, every vertex of the graph stores a list of its adjacent vertices.
o We have an array of vertices which is indexed by the vertex number, and for each
  vertex v the corresponding array element points to a singly linked list of the neighbors of
  v.

Example
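
In Python, the same hypothetical graph can be stored as a dictionary that maps each vertex to the list of its neighbors, a common stand-in for the array of linked lists described above:

# Adjacency list for the same hypothetical undirected graph.
adj = {0: [], 1: [], 2: [], 3: []}
for i, j in [(0, 1), (0, 2), (1, 2), (2, 3)]:
    adj[i].append(j)
    adj[j].append(i)

for v, neighbours in adj.items():
    print(v, "->", neighbours)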

GRAPH TRAVERSALS
 Visiting all the nodes in the graph.
 Any node can be the starting node. From that node we want to visit all the nodes.
Types of Techniques

Depth First Search (DFS)

 It is a recursive algorithm to search all the vertices of a tree data structure or a graph.

 The depth-first search (DFS) algorithm starts with the initial node of graph G and
goes deeper until we find the goal node or the node with no children.

 Because of the recursive nature, stack data structure can be used to implement the
DFS algorithm.

The step by step process to implement the DFS traversal is given as follows -

1. First, create a stack with the total number of vertices in the graph.
2. Now, choose any vertex as the starting point of traversal, and push that vertex onto the
stack.
3. After that, push a non-visited vertex (adjacent to the vertex on the top of the stack) onto
the top of the stack.
4. Now, repeat step 3 until no non-visited vertices are left adjacent to the vertex on the
stack's top.
5. If no such vertex is left, go back and pop a vertex from the stack.
6. Repeat steps 3, 4, and 5 until the stack is empty.

Applications of DFS algorithm

The applications of using the DFS algorithm are given as follows -

o DFS algorithm can be used to implement topological sorting.
o It can be used to find the paths between two vertices.
o It can also be used to detect cycles in the graph.
o DFS is also used for puzzles that have only one solution (such as mazes).
o DFS is used to determine whether a graph is bipartite or not.

Algorithm

Step 1: SET STATUS = 1 (ready state) for each node in G

Step 2: Push the starting node A on the stack and set its STATUS = 2 (waiting state)

Step 3: Repeat Steps 4 and 5 until STACK is empty

Step 4: Pop the top node N. Process it and set its STATUS = 3 (processed state)

Step 5: Push on the stack all the neighbors of N that are in the ready state (whose STATUS =
1) and set their STATUS = 2 (waiting state)

[END OF LOOP]

Step 6: EXIT
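
A minimal iterative sketch of this stack-based procedure, assuming the graph is stored as a Python dictionary of adjacency lists (the visited set plays the role of the STATUS flags, and the example graph is hypothetical):

def dfs_iterative(graph, start):
    # graph: dict mapping each vertex to a list of its neighbours.
    visited = set()
    stack = [start]                 # push the starting node
    order = []
    while stack:
        node = stack.pop()          # pop the top node N
        if node not in visited:
            visited.add(node)       # mark N as processed
            order.append(node)
            for neighbour in graph[node]:
                if neighbour not in visited:
                    stack.append(neighbour)   # push neighbours in the ready state
    return order

# Hypothetical example graph:
g = {"A": ["B", "C"], "B": ["D"], "C": ["F", "G"], "D": [], "F": [], "G": []}
print(dfs_iterative(g, "A"))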

Example:

Step 1:

Step 2:
Pop A and place it in the DFS expression.

Push the adjacent nodes of A.

Step 3:

Push the adjacent nodes of C, i.e. F and G.

Step 4:
G has no unvisited adjacent nodes to push, so pop it from the stack.

F also has no adjacent nodes, so pop it as well.

Step 5:

Push the adjacent nodes of B.

Step 6:

Now, all the graph nodes have been traversed, and the stack is empty.
Example Program:

DFS
def add_node(v):
    if v in graph:
        print(v, "is already present in graph")
    else:
        graph[v] = []

def add_edge(v1, v2):
    if v1 not in graph:
        print(v1, "is not present in the graph")
    elif v2 not in graph:
        print(v2, "is not present in the graph")
    else:
        # For a weighted graph, a [vertex, cost] pair could be stored instead.
        graph[v1].append(v2)
        graph[v2].append(v1)

def delete_node(v):
    if v not in graph:
        print(v, "is not present in the graph")
    else:
        graph.pop(v)
        for i in graph:
            list1 = graph[i]
            if v in list1:
                list1.remove(v)

def DFS(node, visited, graph):
    # Recursive depth-first traversal starting from node.
    if node not in graph:
        print("Node is not present in the graph")
        return
    if node not in visited:
        print(node)
        visited.add(node)
        for i in graph[node]:
            DFS(i, visited, graph)

visited = set()
graph = {}
add_node("A")
add_node("B")
add_node("C")
add_node("D")
add_node("E")
add_edge("A", "B")
add_edge("B", "E")
add_edge("A", "C")
add_edge("A", "D")
add_edge("B", "D")
add_edge("C", "D")
add_edge("E", "D")
delete_node("B")
print(graph)
DFS("A", visited, graph)
Output:

Complexity of Depth-first search algorithm

The time complexity of the DFS algorithm is O(V+E), where V is the number of vertices and
E is the number of edges in the graph.

The space complexity of the DFS algorithm is O(V).

BFS ALGORITHM

 Breadth-first search is a graph traversal algorithm that starts traversing the graph from
the root node and explores all the neighboring nodes.
 Then, it selects the nearest node and explores all the unexplored nodes.
 While using BFS for traversal, any node in the graph can be considered as the root
node.
 There are many ways to traverse the graph, but among them, BFS is the most
commonly used approach.
 It is an algorithm that searches all the vertices of a tree or graph data structure level by
level, using a queue.
 BFS puts every vertex of the graph into two categories - visited and non-visited.
 It selects a single node in a graph and, after that, visits all the nodes adjacent to the
selected node.
Applications of BFS algorithm

The applications of breadth-first-algorithm are given as follows -

o BFS can be used to find the neighboring locations from a given source location.
o In a peer-to-peer network, the BFS algorithm can be used as a traversal method to find all
the neighboring nodes. Most torrent clients, such as BitTorrent, uTorrent, etc. employ
this process to find "seeds" and "peers" in the network.
o BFS can be used in web crawlers to create web page indexes. It is one of the main
algorithms that can be used to index web pages. It starts traversing from the source
page and follows the links associated with the page. Here, every web page is
considered as a node in the graph.
o BFS is used to determine the shortest path (in an unweighted graph) and the minimum
spanning tree.
o BFS is also used in Cheney's algorithm for copying garbage collection.
o It can be used in the Ford-Fulkerson method to compute the maximum flow in a flow
network.

Algorithm

The steps involved in the BFS algorithm to explore a graph are given as follows –

Step 1: SET STATUS = 1 (ready state) for each node in G

Step 2: Enqueue the starting node A and set its STATUS = 2 (waiting state)

Step 3: Repeat Steps 4 and 5 until QUEUE is empty

Step 4: Dequeue a node N. Process it and set its STATUS = 3 (processed state).

Step 5: Enqueue all the neighbours of N that are in the ready state (whose STATUS = 1) and
set their STATUS = 2 (waiting state)

[END OF LOOP]

Step 6: EXIT
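
A minimal sketch of this queue-based procedure in Python, using collections.deque as the QUEUE and a visited set in place of the STATUS flags (the dictionary-of-lists graph format and the example graph are assumptions for illustration):

from collections import deque

def bfs(graph, start):
    visited = {start}          # STATUS = 2 (waiting) for the starting node
    queue = deque([start])     # enqueue the starting node
    order = []
    while queue:
        node = queue.popleft() # dequeue node N and process it (STATUS = 3)
        order.append(node)
        for neighbour in graph[node]:
            if neighbour not in visited:   # neighbour still in the ready state
                visited.add(neighbour)
                queue.append(neighbour)    # enqueue it (STATUS = 2)
    return order

# Hypothetical example graph:
g = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F", "G"], "D": [], "E": [], "F": [], "G": []}
print(bfs(g, "A"))   # ['A', 'B', 'C', 'D', 'E', 'F', 'G']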

Example:

Step 1:
Insert A into the queue.

Step 2:
Remove the element A from the queue and place it in the BFS expression.

Step 3:

Insert the adjacent elements of A into the queue, i.e. B and C.

Remove the element B from the queue and place it in the BFS expression.

Step 4:
Insert the adjacent elements of B into the queue.

Remove the element C from the queue and place it in the BFS expression.

Step 5:
Insert the adjacent elements of C into the queue.

Remove the element D from the queue and place it in the BFS expression.

Step 6:
D does not have any unvisited adjacent node.
Remove the element E from the queue and place it in the BFS expression.

Step 7:
E does not have any unvisited adjacent node.
Remove the element F from the queue and place it in the BFS expression.

Step 8:
F does not have any unvisited adjacent node.
Remove the element G from the queue and place it in the BFS expression.

Hence the queue is empty.


Example:
graph = {
    '5' : ['3', '7'],
    '3' : ['2', '4'],
    '7' : ['8'],
    '2' : [],
    '4' : ['8'],
    '8' : []
}

visited = []   # List for visited nodes.
queue = []     # Initialize a queue

def bfs(visited, graph, node):
    visited.append(node)
    queue.append(node)
    while queue:
        m = queue.pop(0)        # dequeue the front element
        print(m, end=" ")
        for neighbour in graph[m]:
            if neighbour not in visited:
                visited.append(neighbour)
                queue.append(neighbour)

print("Following is the Breadth-First Search")
bfs(visited, graph, '5')


Output:

Complexity of BFS algorithm

Time complexity of BFS depends upon the data structure used to represent the graph. The
time complexity of BFS algorithm is O(V+E), since in the worst case, BFS algorithm
explores every node and edge. In a graph, the number of vertices is O(V), whereas the
number of edges is O(E).

DAG
A DAG (directed acyclic graph) can always be topologically ordered, i.e. its vertices can be
arranged in a sequence such that for each edge in the graph, the start vertex of the edge
occurs earlier in the sequence than the ending vertex of the edge.

Example

In the above directed graph, if we follow the paths from any node, say u, we will never find a
path that comes back to u. Hence, this is a DAG.
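
A minimal DFS-based sketch for checking whether a directed graph is a DAG, i.e. whether it contains no cycle (the three states mark unvisited, in-progress and finished vertices; the dictionary-of-lists format and the example graphs are assumptions for illustration):

def is_dag(graph):
    # graph: dict mapping each vertex to a list of vertices it points to.
    # 0 = unvisited, 1 = on the current DFS path, 2 = fully processed.
    state = {v: 0 for v in graph}

    def dfs(v):
        state[v] = 1
        for w in graph[v]:
            if state[w] == 1:               # back edge: a cycle exists
                return False
            if state[w] == 0 and not dfs(w):
                return False
        state[v] = 2
        return True

    return all(dfs(v) for v in graph if state[v] == 0)

# Hypothetical examples: u -> v -> w with u -> w has no cycle; adding w -> u creates one.
print(is_dag({"u": ["v", "w"], "v": ["w"], "w": []}))   # True
print(is_dag({"u": ["v"], "v": ["w"], "w": ["u"]}))     # False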

TOPOLOGICAL ORDERING / TOPOLOGICAL SORTING

 It is an algorithm which takes a directed acyclic graph and returns a sequence of the
nodes in which every node appears before the nodes it points to.
 A directed acyclic graph is a graph that has directed edges between nodes without
creating any cycle.
 Remember that topological sorting won't work if the graph is not a directed acyclic
graph. The ordering of the nodes in the result array is called the topological ordering.

Algorithm

1. The first step is to identify a node that has no in-degree (no incoming edges) and select
that node as the source node of the graph.
2. Now delete the source node that has zero in-degree and its associated edges. The
deleted vertex is added to the result array.
3. Update the in-degrees of the adjacent nodes after deleting the outgoing edges.
4. The above steps are repeated until the graph is empty.

The result array that we get at the end of the process is known as the topological ordering
of the directed graph. If some nodes are left but they still have incoming edges, that means the
graph is not acyclic. If the given graph is not acyclic, a topological ordering doesn't exist. A
sketch of this in-degree-based procedure is given below, followed by a DFS-based example
program.
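
A minimal sketch of the in-degree-based procedure described above (commonly known as Kahn's algorithm), assuming the graph is given as a dictionary of adjacency lists:

from collections import deque

def topological_sort(graph):
    # graph: dict mapping each vertex to the list of vertices it points to.
    indegree = {v: 0 for v in graph}
    for v in graph:
        for w in graph[v]:
            indegree[w] += 1

    # Start with every vertex whose in-degree is zero.
    queue = deque(v for v in graph if indegree[v] == 0)
    result = []
    while queue:
        v = queue.popleft()
        result.append(v)                 # "delete" the source node
        for w in graph[v]:
            indegree[w] -= 1             # remove its outgoing edges
            if indegree[w] == 0:
                queue.append(w)

    if len(result) != len(graph):        # some nodes still have incoming edges
        raise ValueError("graph is not acyclic; no topological ordering exists")
    return result

# The same edges as the example program below, written here as a dictionary:
print(topological_sort({0: [1, 3], 1: [2], 2: [3, 4], 3: [4], 4: []}))   # [0, 1, 2, 3, 4]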

Example Program:

from collections import defaultdict

class Graph:
    def __init__(self, n):
        self.graph = defaultdict(list)
        self.N = n

    def addEdge(self, m, n):
        self.graph[m].append(n)

    def sortUtil(self, n, visited, stack):
        # Visit all descendants of n first, then place n at the front of the stack.
        visited[n] = True
        for element in self.graph[n]:
            if visited[element] == False:
                self.sortUtil(element, visited, stack)
        stack.insert(0, n)

    def topologicalSort(self):
        visited = [False] * self.N
        stack = []
        for element in range(self.N):
            if visited[element] == False:
                self.sortUtil(element, visited, stack)
        print(stack)

graph = Graph(5)
graph.addEdge(0, 1)
graph.addEdge(0, 3)
graph.addEdge(1, 2)
graph.addEdge(2, 3)
graph.addEdge(2, 4)
graph.addEdge(3, 4)

print("The Topological Sort Of The Graph Is: ")
graph.topologicalSort()

Output:

The Topological Sort Of The Graph Is:
[0, 1, 2, 3, 4]

Topological Sort Time Complexity

The time complexity of topological sorting is O(M + N), where M is the number of edges in
the graph and N is the number of nodes in the graph.

Application

Topological sorting provides many types of real-life solutions -

o It is used in course scheduling problems and to schedule jobs.
o It is used for dependency resolution.
o It can be used to find a valid sentence ordering with very little effort.
o It is helpful to identify whether a cycle exists in the graph or not.
o It is used for ordering the cell evaluation while recomputing formula values in Excel
or other spreadsheets.
o Topological sort is helpful for finding deadlock conditions in an operating system.

GREEDY ALGORITHMS
An algorithm is designed to achieve an optimal solution for a given problem. In the greedy
algorithm approach, decisions are made from the given solution domain. Being greedy, the
closest solution that seems to provide an optimum solution is chosen.
Greedy algorithms try to find a localized optimum solution, which may eventually lead to
globally optimized solutions. However, in general greedy algorithms do not provide globally
optimized solutions.

Advantages of Greedy Approach

 The algorithm is easier to describe.
 This algorithm can perform better than other algorithms (but not in all cases).

Examples
Most networking algorithms use the greedy approach. Here is a list of a few of them −
 Prim's Minimal Spanning Tree Algorithm
 Kruskal's Minimal Spanning Tree Algorithm
 Dijkstra's Shortest Path Algorithm
Prim's Algorithm

Prim's algorithm is a minimum spanning tree algorithm that takes a graph as input and finds
the subset of the edges of that graph which
 form a tree that includes every vertex

 has the minimum sum of weights among all the trees that can be formed from the graph.

Prim’s Algorithm work process


It falls under a class of algorithms called greedy algorithms that find the local optimum in the
hopes of finding a global optimum.
We start from one vertex and keep adding edges with the lowest weight until we reach our
goal.

The steps for implementing Prim's algorithm are as follows:

1. Initialize the minimum spanning tree with a vertex chosen at random.


2. Find all the edges that connect the tree to new vertices, find the minimum and add it to the
tree
3. Keep repeating step 2 until we get a minimum spanning tree
Pseudocode

T = ∅;
M = { 1 };

let (m, n) be the lowest cost edge such that m ∈ M and n ∈ N - M;


while (M ≠ N)

T = T ∪ {(m, n)}
M = M ∪ {n}

Here we maintain two sets of nodes, i.e. M and N - M, where N is the set of all vertices. The
set M contains the nodes that have been visited and the set N - M contains the nodes that
haven't been visited. At each step we move one node from N - M to M by connecting it with
the least-weight edge.

Example
Let us consider the below weighted graph.

Consider the source vertex to initialize the algorithm.

Now, we choose the lowest-weight edge from the source vertex and add it to the
spanning tree.

Then, choose the next nearest node connected with the minimum edge and add it to the
solution. If there are multiple choices, then choose any one.

Continue the steps until all nodes are included and we find the minimum spanning tree.

Cost of MST = 2 + 1 + 3 + 5 = 11
Output

Python Code for Prim's Algorithm

INF = 9999999
N = 5
# Adjacency matrix of the example graph; 0 means no edge.
G = [[0, 19, 5, 0, 0],
     [19, 0, 5, 9, 2],
     [5, 5, 0, 1, 6],
     [0, 9, 1, 0, 1],
     [0, 2, 6, 1, 0]]
selected_node = [0, 0, 0, 0, 0]
no_edge = 0
selected_node[0] = True
print("Edge : Weight\n")
while (no_edge < N - 1):
    minimum = INF
    a = 0
    b = 0
    for m in range(N):
        if selected_node[m]:
            for n in range(N):
                if ((not selected_node[n]) and G[m][n]):
                    # n is not yet selected and there is an edge (m, n)
                    if minimum > G[m][n]:
                        minimum = G[m][n]
                        a = m
                        b = n
    print(str(a) + "-" + str(b) + ":" + str(G[a][b]))
    selected_node[b] = True
    no_edge += 1

Time Complexity:
The running time of Prim's algorithm is O(VlogV + ElogV), which is equal to O(ElogV),
because every insertion of a node in the solution takes logarithmic time. Here, E is the
number of edges and V is the number of vertices/nodes. However, we can improve the
running time to O(E + VlogV) using Fibonacci heaps.

Applications

 Prim’s algorithm is used in network design
 It is used in network cycles and rail tracks connecting all the cities
 Prim’s algorithm is used in laying cables of electrical wiring
 Prim’s algorithm is used in irrigation channels and placing microwave towers
 It is used in cluster analysis
 Prim’s algorithm is used in gaming development and cognitive science
 Pathfinding algorithms in artificial intelligence and traveling salesman problems make
use of prim’s algorithm.

Kruskal's Algorithm
Kruskal's algorithm is a minimum spanning tree algorithm that takes a graph as input and
finds the subset of the edges of that graph which
 form a tree that includes every vertex
 has the minimum sum of weights among all the trees that can be formed from the graph
Kruskal's algorithm work process
It falls under a class of algorithms called greedy algorithms that find the local optimum in the
hopes of finding a global optimum.
We start from the edges with the lowest weight and keep adding edges until we reach our
goal.

The steps for implementing Kruskal's algorithm are as follows:

1. Sort all the edges from low weight to high


2. Take the edge with the lowest weight and add it to the spanning tree. If adding the edge
creates a cycle, then reject this edge.
3. Keep adding edges until we reach all vertices.
Example:

Step 1:

Step 2:

Step 3:

Step 4:

The cost of the MST is = AB + DE + BC + CD


= 1 + 2 + 3 + 4 = 10.
Example Program:
class Graph:
    def __init__(self, vertices):
        self.V = vertices
        self.graph = []

    def add_edge(self, u, v, w):
        self.graph.append([u, v, w])

    def find(self, parent, i):
        # Find the root of the set that contains i.
        if parent[i] == i:
            return i
        return self.find(parent, parent[i])

    def apply_union(self, parent, rank, x, y):
        # Union by rank of the sets rooted at x and y.
        xroot = self.find(parent, x)
        yroot = self.find(parent, y)
        if rank[xroot] < rank[yroot]:
            parent[xroot] = yroot
        elif rank[xroot] > rank[yroot]:
            parent[yroot] = xroot
        else:
            parent[yroot] = xroot
            rank[xroot] += 1

    def kruskal_algo(self):
        result = []
        i, e = 0, 0
        # Sort all edges in non-decreasing order of weight.
        self.graph = sorted(self.graph, key=lambda item: item[2])
        parent = []
        rank = []
        for node in range(self.V):
            parent.append(node)
            rank.append(0)
        while e < self.V - 1:
            u, v, w = self.graph[i]
            i = i + 1
            x = self.find(parent, u)
            y = self.find(parent, v)
            if x != y:
                # The edge does not create a cycle, so keep it.
                e = e + 1
                result.append([u, v, w])
                self.apply_union(parent, rank, x, y)
        for u, v, weight in result:
            print("%d - %d: %d" % (u, v, weight))

g = Graph(6)
g.add_edge(0, 1, 4)
g.add_edge(0, 2, 4)
g.add_edge(1, 2, 2)
g.add_edge(1, 0, 4)
g.add_edge(2, 0, 4)
g.add_edge(2, 1, 2)
g.add_edge(2, 3, 3)
g.add_edge(2, 5, 2)
g.add_edge(2, 4, 4)
g.add_edge(3, 2, 3)
g.add_edge(3, 4, 3)
g.add_edge(4, 2, 4)
g.add_edge(4, 3, 3)
g.add_edge(5, 2, 2)
g.add_edge(5, 4, 3)
g.kruskal_algo()

Output:

Kruskal's Algorithm Complexity

The time complexity Of Kruskal's Algorithm is: O(E log E).

Kruskal's Algorithm Applications

 To lay out electrical wiring
 In computer networks (LAN connections)

DYNAMIC PROGRAMMING
 Dynamic programming is a problem-solving technique for resolving complex
problems by recursively breaking them up into sub-problems, which are then each
solved individually.
 Mostly, these algorithms are used for optimization.
 Dynamic programming can be used in both top-down and bottom-up manner.

Example
The following computer problems can be solved using dynamic programming approach −

 Fibonacci number series


 Knapsack problem
 Tower of Hanoi
 All pair shortest path by Floyd-Warshall
 Shortest path by Dijkstra
 Project scheduling

Characteristics of Dynamic Programming Algorithm:
 In general, dynamic programming (DP) is one of the most powerful techniques for
solving a certain class of problems.
 There is an elegant way to formulate the approach and a very simple thinking process,
and the coding part is very easy.
 Additionally, the optimal solutions to the subproblems contribute to the optimal
solution of the given problem.

Techniques to solve Dynamic Programming Problems:

1. Top-Down (Memoization):

Break down the given problem in order to begin solving it. If you see that a subproblem has
already been solved, return the saved answer; if it hasn't been solved, solve it and save the
answer. This is usually easy to think of and very intuitive. This is referred to as Memoization.

2. Bottom-Up (Dynamic Programming):

Analyze the problem, see in what order the subproblems should be solved, and work your way
up from the trivial subproblem to the given problem. This process ensures that the
subproblems are solved before the main problem. This is referred to as Dynamic
Programming. Both styles are sketched in the Fibonacci example below.
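
A minimal sketch of both techniques on the Fibonacci number series (listed earlier as a classic dynamic programming example); the function names are illustrative:

from functools import lru_cache

# Top-down (memoization): recursive, results cached on demand.
@lru_cache(maxsize=None)
def fib_memo(n):
    if n < 2:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)

# Bottom-up (tabulation): iterative, table filled from the smallest subproblem upward.
def fib_tab(n):
    table = [0, 1] + [0] * max(0, n - 1)
    for i in range(2, n + 1):
        table[i] = table[i - 1] + table[i - 2]
    return table[n]

print(fib_memo(10), fib_tab(10))   # 55 55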

Tabulation vs Memoization

State: In tabulation, the state transition relation is difficult to think of; in memoization, the
state transition relation is easy to think of.

Code: Tabulation code gets complicated when a lot of conditions are required; memoization
code is easy and less complicated.

Speed: Tabulation is fast, as we directly access previous states from the table; memoization
is slower due to a lot of recursive calls and return statements.

Subproblem solving: If all subproblems must be solved at least once, a bottom-up (tabulated)
dynamic programming algorithm usually outperforms a top-down memoized algorithm by a
constant factor. If some subproblems in the subproblem space need not be solved at all, the
memoized solution has the advantage of solving only those subproblems that are definitely
required.

Table entries: In the tabulated version, starting from the first entry, all entries are filled one
by one. In the memoized version, not all entries of the lookup table are necessarily filled; the
table is filled on demand.

Approach: Generally, tabulation (dynamic programming) is an iterative approach, whereas
memoization is a recursive approach.

Example:
Towers of Hanoi
Tower of Hanoi is a mathematical puzzle where we have three rods (A, B, and C)
and N disks. Initially, all the disks are stacked in decreasing value of diameter i.e., the
smallest disk is placed on the top and they are on rod A. The objective of the puzzle is to
move the entire stack to another rod (here considered C), obeying the following simple
rules:
 Only one disk can be moved at a time.
 Each move consists of taking the upper disk from one of the stacks and placing it on top
of another stack i.e. a disk can only be moved if it is the uppermost disk on a stack.
 No disk may be placed on top of a smaller disk.

Steps to solve the problem:

 Create a function towerOfHanoi and pass N (the current number of
disks), from_rod, to_rod and aux_rod.
 Make a function call for the N - 1 th disk.
 Then print the current disk along with from_rod and to_rod.
 Again make a function call for the N - 1 th disk.

Program
def TowerOfHanoi(n, from_rod, to_rod, aux_rod):
    if n == 0:
        return
    # Move the top n-1 disks out of the way, move disk n, then move them back on top.
    TowerOfHanoi(n - 1, from_rod, aux_rod, to_rod)
    print("Move disk", n, "from rod", from_rod, "to rod", to_rod)
    TowerOfHanoi(n - 1, aux_rod, to_rod, from_rod)

N = 3
TowerOfHanoi(N, 'A', 'C', 'B')

Output
Move disk 1 from rod A to rod C
Move disk 2 from rod A to rod B
Move disk 1 from rod C to rod B
Move disk 3 from rod A to rod C
Move disk 1 from rod B to rod A
Move disk 2 from rod B to rod C
Move disk 1 from rod A to rod C
Time complexity: O(2^N). There are two possibilities for every disk, therefore 2 * 2 * 2
* . . . * 2 (N times) is 2^N.

SHORTEST PATHS
 The shortest path problem is the problem of finding a path between two vertices (or
nodes) in a graph such that the sum of the weights of its constituent edges is
minimized.
 The shortest path between any two nodes of the graph can be found using many
algorithms, such as Dijkstra's algorithm.
Dijkstra's Algorithm

Dijkstra's algorithm allows us to find the shortest path between any two vertices of a graph.

It differs from the minimum spanning tree because the shortest path between two vertices
might not include all the vertices of the graph.

How Dijkstra's Algorithm Works

Dijkstra's algorithm can find the shortest distances in both directed and undirected weighted
graphs.

This algorithm is greedy because it always chooses the closest node from the
origin.

The term "greedy" means that among a set of outcomes or results, the algorithm chooses
the best of them.

Dijkstra's algorithm finds all the shortest paths from a single source node. As a
result, it behaves like a greedy algorithm.
Step 1) Initialize the starting node with cost 0 and the rest of the nodes with cost Infinity.
Step 2) Maintain an array or list to keep track of the visited nodes.
Step 3) Update the node cost with the minimum cost. It can be done by comparing the current
cost with the path cost.
Step 4) Continue step 3 until all the nodes are visited.

Example:

Step 1:

Here the shortest path starts from 'a'. Initially the cost of 'a' is zero because no path is
needed to reach 'a' from itself.
All other vertices are set to INFINITY.

There are two possibilities from 'a':
i.e. from a to b
     from a to c
From a to b the cost is 0 + 4 = 4
From a to c the cost is 0 + 2 = 2
Here the minimum cost is 2, which is the travel from a to c.

Step 2:

Here 'c' is the next node to process.

i.e. from c to b
     from c to d
     from c to e
From c to b the cost is 2 + 1 = 3
From c to d the cost is 2 + 8 = 10
From c to e the cost is 2 + 10 = 12
Here the minimum cost is 3, which is the travel from c to b.

Step 3:

For 'b' there is only one possibility

i.e. from b to d
From b to d the cost is 3 + 5 = 8

Step 4:

There are two possibilities from 'd':

i.e. from d to e
     from d to z
From d to e the cost is 8 + 2 = 10
From d to z the cost is 8 + 6 = 14
Here the minimum cost is 10, which is from d to e.

Step 5:

For 'e' there is only one possibility

i.e. from e to z

From e to z the cost is 10 + 3 = 13

Here the shortest path from a to z has the minimum cost of 13
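
Before the adjacency-matrix program below, here is a compact heapq-based sketch of Dijkstra's algorithm applied to the a-z example above (the edge weights are taken from the walkthrough and treated as an undirected graph):

import heapq

def dijkstra(graph, source):
    # graph: dict mapping each vertex to a list of (neighbour, weight) pairs.
    dist = {v: float("inf") for v in graph}
    dist[source] = 0
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue                      # stale entry, a shorter path was already found
        for v, w in graph[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(heap, (dist[v], v))
    return dist

# Edge weights from the walkthrough above.
edges = [("a", "b", 4), ("a", "c", 2), ("c", "b", 1), ("c", "d", 8), ("c", "e", 10),
         ("b", "d", 5), ("d", "e", 2), ("d", "z", 6), ("e", "z", 3)]
graph = {v: [] for v in "abcdez"}
for u, v, w in edges:
    graph[u].append((v, w))
    graph[v].append((u, w))

print(dijkstra(graph, "a")["z"])   # 13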

Example Program
class Graph():
    def __init__(self, vertices):
        self.V = vertices
        self.graph = [[0 for column in range(vertices)]
                      for row in range(vertices)]

    def printSolution(self, dist):
        print("Vertex \t Distance from Source")
        for node in range(self.V):
            print(node, "\t\t", dist[node])

    def minDistance(self, dist, sptSet):
        # Find the unvisited vertex with the minimum distance value.
        min = 1e7
        for v in range(self.V):
            if dist[v] < min and sptSet[v] == False:
                min = dist[v]
                min_index = v
        return min_index

    def dijkstra(self, src):
        dist = [1e7] * self.V
        dist[src] = 0
        sptSet = [False] * self.V
        for cout in range(self.V):
            u = self.minDistance(dist, sptSet)
            sptSet[u] = True
            # Relax the edges going out of u.
            for v in range(self.V):
                if (self.graph[u][v] > 0 and
                        sptSet[v] == False and
                        dist[v] > dist[u] + self.graph[u][v]):
                    dist[v] = dist[u] + self.graph[u][v]
        self.printSolution(dist)

g = Graph(9)
g.graph = [[0, 4, 0, 0, 0, 0, 0, 8, 0],
           [4, 0, 8, 0, 0, 0, 0, 11, 0],
           [0, 8, 0, 7, 0, 4, 0, 0, 2],
           [0, 0, 7, 0, 9, 14, 0, 0, 0],
           [0, 0, 0, 9, 0, 10, 0, 0, 0],
           [0, 0, 4, 14, 10, 0, 2, 0, 0],
           [0, 0, 0, 0, 0, 2, 0, 1, 6],
           [8, 11, 0, 0, 0, 0, 1, 0, 7],
           [0, 0, 2, 0, 0, 0, 6, 7, 0]]
g.dijkstra(0)
Output:

MINIMUM SPANNING TREES


Spanning tree
 A spanning tree is a subset of an undirected graph that has all the vertices connected
by the minimum number of edges.
 If all the vertices are connected in a graph, then there exists at least one spanning
tree. In a graph, there may exist more than one spanning tree.

Applications of the spanning tree

Basically, a spanning tree is used to find a minimum path to connect all nodes of the graph.
Some of the common applications of the spanning tree are listed as follows -

o Cluster Analysis
o Civil network planning

o Computer network routing protocol

Properties of spanning-tree

Some of the properties of the spanning tree are given as follows -

o There can be more than one spanning tree of a connected graph G.
o A spanning tree does not have any cycles or loops.
o A spanning tree is minimally connected, so removing one edge from the tree will
make the graph disconnected.
o A spanning tree is maximally acyclic, so adding one edge to the tree will create a
loop.
o A maximum of n^(n-2) spanning trees can be created from a complete graph.
o A spanning tree has n-1 edges, where 'n' is the number of nodes.
o If the graph is a complete graph, then the spanning tree can be constructed by
removing a maximum of (e-n+1) edges, where 'e' is the number of edges and 'n' is the
number of vertices.

So, a spanning tree is a subset of connected graph G, and there is no spanning tree of a
disconnected graph.

Minimum Spanning tree

 A minimum spanning tree can be defined as the spanning tree in which the sum of the
weights of the edges is minimum.
 The weight of the spanning tree is the sum of the weights given to the edges of the
spanning tree.
 In the real world, this weight can be considered as the distance, traffic load,
congestion, or any random value.

Example of minimum spanning tree

The sum of the edges of the above graph is 16. Now, some of the possible spanning trees
created from the above graph are –

Step 1:

Step 2:

Step 3:

Step 4:

So, the minimum spanning tree that is selected from the above spanning trees for the given
weighted graph is –

Applications of minimum spanning tree

The applications of the minimum spanning tree are given as follows -

o Minimum spanning tree can be used to design water-supply networks,


telecommunication networks, and electrical grids.
o It can be used to find paths in the map.

Algorithms for Minimum spanning tree

A minimum spanning tree can be found from a weighted graph by using the algorithms given
below -

o Prim's Algorithm
o Kruskal's Algorithm

Prim's Algorithm

Prim's algorithm is a minimum spanning tree algorithm that takes a graph as input and finds
the subset of the edges of that graph which
 form a tree that includes every vertex

 has the minimum sum of weights among all the trees that can be formed from the graph.

Prim’s Algorithm work process


It falls under a class of algorithms called greedy algorithms that find the local optimum in the
hopes of finding a global optimum.
We start from one vertex and keep adding edges with the lowest weight until we reach our
goal.

The steps for implementing Prim's algorithm are as follows:

1. Initialize the minimum spanning tree with a vertex chosen at random.
2. Find all the edges that connect the tree to new vertices, find the minimum and add it to the
tree.
3. Keep repeating step 2 until we get a minimum spanning tree.
Pseudocode

T = ∅;
M = { 1 };

let (m, n) be the lowest cost edge such that m ∈ M and n ∈ N - M;


while (M ≠ N)

T = T ∪ {(m, n)}
M = M ∪ {n}

Here we maintain two sets of nodes, i.e. M and N - M, where N is the set of all vertices. The
set M contains the nodes that have been visited and the set N - M contains the nodes that
haven't been visited. At each step we move one node from N - M to M by connecting it with
the least-weight edge.

Example
Let us consider the below weighted graph.

Consider the source vertex to initialize the algorithm.

Now, we choose the lowest-weight edge from the source vertex and add it to the
spanning tree.

Then, choose the next nearest node connected with the minimum edge and add it to the
solution. If there are multiple choices, then choose any one.

Continue the steps until all nodes are included and we find the minimum spanning tree.

Cost of MST = 2 + 1 + 3 + 5 = 11
Output

Python Code for Prim's Algorithm

INF = 9999999
N = 5
# Adjacency matrix of the example graph; 0 means no edge.
G = [[0, 19, 5, 0, 0],
     [19, 0, 5, 9, 2],
     [5, 5, 0, 1, 6],
     [0, 9, 1, 0, 1],
     [0, 2, 6, 1, 0]]
selected_node = [0, 0, 0, 0, 0]
no_edge = 0
selected_node[0] = True
print("Edge : Weight\n")
while (no_edge < N - 1):
    minimum = INF
    a = 0
    b = 0
    for m in range(N):
        if selected_node[m]:
            for n in range(N):
                if ((not selected_node[n]) and G[m][n]):
                    # n is not yet selected and there is an edge (m, n)
                    if minimum > G[m][n]:
                        minimum = G[m][n]
                        a = m
                        b = n
    print(str(a) + "-" + str(b) + ":" + str(G[a][b]))
    selected_node[b] = True
    no_edge += 1

Time Complexity:
The running time of Prim's algorithm is O(VlogV + ElogV), which is equal to O(ElogV),
because every insertion of a node in the solution takes logarithmic time. Here, E is the
number of edges and V is the number of vertices/nodes. However, we can improve the
running time to O(E + VlogV) using Fibonacci heaps.

Applications

 Prim’s algorithm is used in network design


 It is used in network cycles and rail tracks connecting all the cities
 Prim’s algorithm is used in laying cables of electrical wiring
 Prim’s algorithm is used in irrigation channels and placing microwave towers
 It is used in cluster analysis
 Prim’s algorithm is used in gaming development and cognitive science
 Pathfinding algorithms in artificial intelligence and traveling salesman problems make
use of prim’s algorithm.

Kruskal's Algorithm
Kruskal's algorithm is a minimum spanning tree algorithm that takes a graph as input and
finds the subset of the edges of that graph which
 form a tree that includes every vertex
 has the minimum sum of weights among all the trees that can be formed from the graph
Kruskal's algorithm work process
It falls under a class of algorithms called greedy algorithms that find the local optimum in the
hopes of finding a global optimum.
We start from the edges with the lowest weight and keep adding edges until we reach our
goal.

The steps for implementing Kruskal's algorithm are as follows:

1. Sort all the edges from low weight to high.

2. Take the edge with the lowest weight and add it to the spanning tree. If adding the edge
creates a cycle, then reject this edge.
3. Keep adding edges until we reach all vertices.
Example:

Step 1:

Step 2:

Step 3:

Step 4:

The cost of the MST is = AB + DE + BC + CD


= 1 + 2 + 3 + 4 = 10.
Example Program:
class Graph:
    def __init__(self, vertices):
        self.V = vertices
        self.graph = []

    def add_edge(self, u, v, w):
        self.graph.append([u, v, w])

    def find(self, parent, i):
        # Find the root of the set that contains i.
        if parent[i] == i:
            return i
        return self.find(parent, parent[i])

    def apply_union(self, parent, rank, x, y):
        # Union by rank of the sets rooted at x and y.
        xroot = self.find(parent, x)
        yroot = self.find(parent, y)
        if rank[xroot] < rank[yroot]:
            parent[xroot] = yroot
        elif rank[xroot] > rank[yroot]:
            parent[yroot] = xroot
        else:
            parent[yroot] = xroot
            rank[xroot] += 1

    def kruskal_algo(self):
        result = []
        i, e = 0, 0
        # Sort all edges in non-decreasing order of weight.
        self.graph = sorted(self.graph, key=lambda item: item[2])
        parent = []
        rank = []
        for node in range(self.V):
            parent.append(node)
            rank.append(0)
        while e < self.V - 1:
            u, v, w = self.graph[i]
            i = i + 1
            x = self.find(parent, u)
            y = self.find(parent, v)
            if x != y:
                # The edge does not create a cycle, so keep it.
                e = e + 1
                result.append([u, v, w])
                self.apply_union(parent, rank, x, y)
        for u, v, weight in result:
            print("%d - %d: %d" % (u, v, weight))

g = Graph(6)
g.add_edge(0, 1, 4)
g.add_edge(0, 2, 4)
g.add_edge(1, 2, 2)
g.add_edge(1, 0, 4)
g.add_edge(2, 0, 4)
g.add_edge(2, 1, 2)
g.add_edge(2, 3, 3)
g.add_edge(2, 5, 2)
g.add_edge(2, 4, 4)
g.add_edge(3, 2, 3)
g.add_edge(3, 4, 3)
g.add_edge(4, 2, 4)
g.add_edge(4, 3, 3)
g.add_edge(5, 2, 2)
g.add_edge(5, 4, 3)
g.kruskal_algo()

Output:

Kruskal's Algorithm Complexity

The time complexity Of Kruskal's Algorithm is: O(E log E).

Kruskal's Algorithm Applications

 To lay out electrical wiring
 In computer networks (LAN connections)

COMPLEXITY CLASSES AND INTRACTABILITY


 In computer science, there exist some problems whose efficient solutions have not yet been
found; such problems are grouped into classes known as Complexity Classes.
 In complexity theory, a Complexity Class is a set of problems with related
complexity.
 The common resources are time and space, meaning how much time the algorithm
takes to solve a problem and the corresponding memory usage.
 The time complexity of an algorithm is used to describe the number of steps
required to solve a problem, but it can also be used to describe how long it takes to
verify the answer.
 The space complexity of an algorithm describes how much memory is required for
the algorithm to operate.
 Complexity classes are useful in organizing similar types of problems.

Types of Complexity Classes

1. P Class
2. NP Class
3. Co-NP Class
4. NP-hard
5. NP-complete

P Class

 The P in the P class stands for Polynomial Time.


 It is the collection of decision problems (problems with a “yes” or “no” answer) that
can be solved by a deterministic machine in polynomial time.
Features:
1. The solution to P problems is easy to find.
2. P is often a class of computational problems that are solvable and tractable. Tractable
means that the problems can be solved in theory as well as in practice. But problems
that can be solved in theory but not in practice are known as intractable.

This class contains many natural problems like:


1. Calculating the greatest common divisor.
2. Finding a maximum matching.
3. Decision versions of linear programming.
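
As a concrete illustration of a problem in P, the greatest common divisor mentioned above can be computed in polynomial time with Euclid's algorithm (a minimal sketch):

def gcd(a, b):
    # Euclid's algorithm: repeatedly replace (a, b) with (b, a mod b).
    while b:
        a, b = b, a % b
    return a

print(gcd(252, 105))   # 21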

NP Class

 The NP in NP class stands for Non-deterministic Polynomial Time.

 It is the collection of decision problems that can be solved by a non-deterministic
machine in polynomial time.

Features:

1. The solutions of the NP class are hard to find since they are being solved by a non-
deterministic machine but the solutions are easy to verify.
2. Problems of NP can be verified by a Turing machine in polynomial time.
This class contains many problems that one would like to be able to solve effectively:
1. Boolean Satisfiability Problem (SAT).
2. Hamiltonian Path Problem.
3. Graph coloring.

Co-NP Class

 Co-NP stands for the complement of NP Class.


 It means if the answer to a problem in Co-NP is No, then there is proof that can be
checked in polynomial time.
Features:

1. If a problem X is in NP, then its complement X’ is in Co-NP.
2. For an NP or Co-NP problem, there is no need to verify all the answers at once in
polynomial time; there is a need to verify only one particular answer, “yes” or “no”, in
polynomial time for a problem to be in NP or Co-NP.
Some example problems for Co-NP are:
1. Checking whether a number is prime.
2. Integer factorization.

NP-hard class

An NP-hard problem is at least as hard as the hardest problem in NP and it is the class of
the problems such that every problem in NP reduces to NP-hard.
Features:

1. Not all NP-hard problems are in NP.
2. It takes a long time to check them. This means that if a solution for an NP-hard problem is
given, then it takes a long time to check whether it is right or not.
3. A problem A is NP-hard if, for every problem L in NP, there exists a polynomial-
time reduction from L to A.

Some examples of problems in NP-hard are:

1. Halting problem.
2. Quantified Boolean formulas.
3. No Hamiltonian cycle.

NP-complete class
A problem is NP-complete if it is both in NP and NP-hard. NP-complete problems are the
hardest problems in NP.
Features:

1. NP-complete problems are special as any problem in NP class can be transformed or


reduced into NP-complete problems in polynomial time.
2. If one could solve an NP-complete problem in polynomial time, then one could also
solve any NP problem in polynomial time.

Some example problems include:


1. Decision version of 0/1 Knapsack.
2. Hamiltonian Cycle.
3. Satisfiability.
4. Vertex cover.
