6ECE - CN Lab Manual
ECL – 17
LABORATORY MANUAL
Table of Contents

S. No.     Contents                                                                 Page No.
1          Vision, Mission and Quality Policy of Institute                          4
7          RTU Syllabus                                                             8
8          List of Experiments                                                      9
9          Beyond Syllabus Experiments and their mapping with POs and PSOs          10
10         Instructions for the Lab                                                 11
Exp: 1     Study and use of common TCP/IP protocols and terms viz. telnet, rlogin, ftp, ping, finger, Socket, Port etc.   12
Exp: 2     Write a program for representation of unidirectional, directional weighted and unweighted graph in C language.   20
Exp: 3     Write a program to compute shortest path for one source - one destination and one source - all destinations in C language.   30
Exp: 4(A)  Write a program for simulation of M/M/1 and M/M/1/N queue network protocols.   38
Exp: 4(B)  Write a program for simulation of pure and slotted ALOHA.   50
Exp: 4(C)  Write a program for simulation of link state routing algorithm in MATLAB.   54
Exp: 5(A)  Observe the behavior & measure the throughput under various network load conditions for the following MAC layer protocols: Ethernet LAN protocol; create a scenario and study the performance of the CSMA/CD protocol through simulation.   60
Exp: 5(B)  Observe the behavior & measure the throughput under various network load conditions for the following MAC layer protocols: implementation of Wireless LAN protocol; create a scenario, study the performance of the network with the CSMA/CA protocol and compare with the CSMA/CD protocol.   72
Vision: To promote higher learning in advanced technology and industrial research to make our
country a global player.
Mission: To promote quality education, training and research in the field of Engineering by
establishing an effective interface with industry and to encourage faculty to undertake industry-
sponsored projects for students.
Quality Policy:
• All its endeavors like admissions, teaching-learning processes, examinations, extra- and co-
curricular activities, industry-institution interaction, research & development, continuing education,
and consultancy.
• Functional areas like teaching departments, Training & Placement Cell, library, administrative
office, accounts office, hostels, canteen, security services, transport, maintenance section and all
other services.
Vision: To evolve the department as a center of excellence in the field of electronics and
communication engineering for enriched education, higher learning, research and
development.
Mission: To empower students by imparting quality education in electronics and
communication engineering for better employability and preparing them to be competent in
dealing with industrial and societal challenges.
Graduates from the Electronics and Communication Engineering Program are expected to attain
or achieve the following Program Educational Objectives within a few years of graduation:
I. To pursue their career successfully in the field of Electronics & Communication
Engineering and advance in their profession.
II. To excel in pursuing higher education and life-long learning.
III. To hold high ethical standards and work effectively in multidisciplinary teams with strong
management and team work skills.
After the completion of the program, engineering graduates will be able to:
1. Engineering Knowledge: Apply the knowledge of mathematics, science, engineering
fundamentals, and an engineering specialization to the solution of complex engineering
problems.
2. Problem Analysis: Identify, formulate, review research literature, and analyze complex
engineering problems reaching substantiated conclusions using first principles of
mathematics, natural sciences, and engineering sciences.
3. Design/development of solutions: Design solutions for complex engineering problems and
design system components or processes that meet the specified needs with appropriate
consideration for the public health and safety, and the cultural, societal, and environmental
considerations.
4. Conduct investigations of complex problems: Use research-based knowledge and research
methods including design of experiments, analysis and interpretation of data, and synthesis
of the information to provide valid conclusions.
5. Modern tool usage: Create, select, and apply appropriate techniques, resources, and
modern engineering and IT tools including prediction and modelling to complex engineering
activities with an understanding of the limitations.
6. The engineer and society: Apply reasoning informed by the contextual knowledge to
assess societal, health, safety, legal and cultural issues and the consequent responsibilities
relevant to the professional engineering practice.
7. Environment and sustainability: Understand the impact of the professional engineering
solutions in societal and environmental contexts, and demonstrate the knowledge of, and
need for sustainable development.
8. Ethics: Apply ethical principles and commit to professional ethics and responsibilities and
norms of the engineering practice.
9. Individual and team work: Function effectively as an individual, and as a member or
leader in diverse teams, and in multidisciplinary settings.
10. Communication: Communicate effectively on complex engineering activities with the
engineering community and with society at large, such as, being able to comprehend and
write effective reports and design documentation, make effective presentations, and give and
receive clear instructions.
11. Project management and finance: Demonstrate knowledge and understanding of the
engineering and management principles and apply these to one’s own work, as a member
and leader in a team, to manage projects and in multidisciplinary environments.
12. Life-long learning: Recognize the need for, and have the preparation and ability to engage
in independent and life-long learning in the broadest context of technological change.
CO   PO1 PO2 PO3 PO4 PO5 PO6 PO7 PO8 PO9 PO10 PO11 PO12 PSO1 PSO2
CO1  2   2   2   3   3   -   -   -   -   3    -    3    2    2
CO2  2   3   2   3   3   -   -   -   -   3    -    3    2    2
CO3  2   3   1   3   3   -   -   -   -   3    -    3    2    2
CO4  2   3   2   3   3   -   -   -   -   3    -    3    2    2
CO5  2   3   2   3   3   -   -   -   -   3    -    3    2    2
RTU Syllabus
Class: VI Sem. B.Tech.
Branch: Electronics & Communication Engg.
Schedule per Week: Practical Hrs.: 4
Evaluation: Maximum Marks = 100 (2 credits) [Mid-term (60) & End-term (40)]; Examination Time = Three Hours
S. No.  List of Experiments
1. Introduction: Objective, scope and outcome of the course.
2. PRELIMINARIES: Study and use of common TCP/IP protocols and terms viz. telnet, rlogin, ftp, ping, finger, Socket, Port etc.
3. DATA STRUCTURES USED IN NETWORK PROGRAMMING: Representation of unidirectional, directional weighted and unweighted graphs.
4. ALGORITHMS IN NETWORKS: Computation of shortest path for one source - one destination and one source - all destinations.
5. SIMULATION OF NETWORK PROTOCOLS:
   i. Simulation of M/M/1 and M/M/1/N queues.
   ii. Simulation of pure and slotted ALOHA.
   iii. Simulation of link state routing algorithm.
6. Case study on LAN Training kit:
   i. Observe the behavior & measure the throughput of reliable data transfer protocols under various bit error rates for the following DLL layer protocols:
      a. Stop & Wait
      b. Sliding Window: Go-Back-N and Selective Repeat
   ii. Observe the behavior & measure the throughput under various network load conditions for the following MAC layer protocols:
      a. ALOHA
      b. CSMA, CSMA/CD & CSMA/CA
      c. Token Bus & Token Ring
7. Software and hardware realization of the following:
   i. Encoding schemes: Manchester, NRZ.
   ii. Error control schemes: CRC, Hamming code.
List of Experiments

Exp. No.   Name of Experiment                                                       Page No.
Exp: 1     Study and use of common TCP/IP protocols and terms viz. telnet, rlogin, ftp, ping, finger, Socket, Port etc.   12
Exp: 2     Write a program for representation of unidirectional, directional weighted and unweighted graph in C language.   20
Exp: 3     Write a program to compute shortest path for one source - one destination and one source - all destinations in C language.   30
Exp: 4(A)  Write a program for simulation of M/M/1 and M/M/1/N queue network protocols.   38
Exp: 4(B)  Write a program for simulation of pure and slotted ALOHA.   50
Exp: 4(C)  Write a program for simulation of link state routing algorithm in MATLAB.   54
Exp: 5(A)  Observe the behavior & measure the throughput under various network load conditions for the following MAC layer protocols: Ethernet LAN protocol; create a scenario and study the performance of the CSMA/CD protocol through simulation.   60
Exp: 5(B)  Observe the behavior & measure the throughput under various network load conditions for the following MAC layer protocols: implementation of Wireless LAN protocol; create a scenario, study the performance of the network with the CSMA/CA protocol and compare with the CSMA/CD protocol.   72
Exp: 5(C)  Observe the behavior & measure the throughput of reliable data transfer protocols under various bit error rates for the following DLL layer protocols: implementation and study of the Stop and Wait protocol.   80
Exp: 5(D)  Observe the behavior & measure the throughput under various network load conditions for the following MAC layer protocols: Sliding Window: Go-Back-N and Selective Repeat.   88
Exp: 5(E)  Create a scenario and study the performance of the token bus protocol through simulation.   98
Exp: 5(F)  Create a scenario and study the performance of the token ring protocol through simulation.   104
Exp: 6(A)  Software and hardware realization of the Manchester encoding scheme.   110
Exp: 6(B)  Software and hardware realization of the NRZ encoding scheme.   126
Exp: 6(C)  Software and hardware realization of the CRC error control scheme.   138
B1. Write a program in C language to find shortest path in a graph using bellman ford algorithm.
B2. Implementation of PC-to-PC communication using IEEE 802.3.
Beyond Topics  PO1 PO2 PO3 PO4 PO5 PO6 PO7 PO8 PO9 PO10 PO11 PO12 PSO1 PSO2
B1             1   2   2   1   2   -   -   -   -   -    -    -    1    -
B2             -   3   1   1   1   -   -   -   -   -    -    -    -    1
DO’S
1. Student should get the record of previous experiment checked before starting the new
experiment.
2. Read the manual carefully before starting the experiment.
3. Before starting the experiment, get circuit diagram checked by the teacher.
4. Before switching on the power supply, get the circuit connections checked.
5. Get your readings checked by the teacher.
6. Apparatus must be handled carefully.
7. Maintain strict discipline.
8. Keep your mobile phone switched off or in vibration mode.
9. Students should get the experiment allotted for next turn, before leaving the lab.
DON’TS
1. Do not touch or attempt to touch the mains power supply wire with bare hands.
2. Do not overcrowd the tables.
3. Do not tamper with the equipment.
4. Do not leave the lab without permission from the teacher.
EXPERIMENT NO. – 1
AIM:
Study and use of common TCP/IP protocols and terms viz. telnet, rlogin, ftp, ping, finger, Socket,
Port etc.
THEORY: -
TCP/IP (Transmission Control Protocol/Internet Protocol) is actually a suite, or stack, of
communication protocols that interconnect and work together to provide for reliable and efficient
data communications across an internet. TCP/IP is the basic communication language or protocol of
the internet. It can also be used as a communications protocol in a private network (either an
intranet or an extranet). It specifies how data is exchanged over the internet by providing end-to-end
communications that identify how it should be broken into packets, addressed, transmitted, routed
and received at the destination.
The two main protocols in the internet protocol suite serve specific functions. TCP defines how
applications can create channels of communication across a network. It also manages how a
message is assembled into smaller packets before they are then transmitted over the internet and
reassembled in the right order at the destination address.
IP defines how to address and route each packet to make sure it reaches the right destination. Each
gateway computer on the network checks this IP address to determine where to forward the
message.
Importance of TCP/IP
TCP/IP is nonproprietary and, as a result, is not controlled by any single company. Therefore, the
internet protocol suite can be modified easily. It is compatible with all operating systems, so it can
communicate with any other system. The internet protocol suite is also compatible with all types of
computer hardware and networks. TCP/IP is highly scalable and, as a routable protocol, can
determine the most efficient path through the network.
TCP/IP Protocol Stack Mapped to the OSI Model

OSI Layer                           TCP/IP Layer     TCP/IP Protocols
Application, Presentation, Session  Application      Telnet, FTP, SMTP, TFTP, DNS, HTTP, DHCP
Transport                           Transport        TCP, UDP
Network                             Internet         IP, ICMP, ARP, RARP
Data Link, Physical                 Network Access   Ethernet, Token Ring
APPLICATION LAYER
The application layer of the TCP/IP Model consists of various protocols that perform all the
functions of the OSI model’s Application, Presentation and Session layers. This includes interaction
with the application, data transition and encoding, dialogue control and communication
coordination between systems.
The following are a few of the most common Application Layer protocols: -
TELNET
The Telnet program provides a remote login capability. Telnet is a terminal emulation protocol used
to access the resources of a remote host. A host, called the Telnet server, runs a telnet server
application that receives a connection from a remote host called the Telnet client. This connection is
presented to the operating system of the telnet server as though it is a terminal connection
connected directly (using keyboard and mouse). It is a text-based connection and usually provides
access to the command line interface of the host. Remember that the application used by the client
is usually named telnet also in most operating systems. One should not confuse the telnet
application with the Telnet protocol.
HTTP
The Hypertext Transfer Protocol is the foundation of the World Wide Web. It is used to transfer
webpages and related resources from the Web server (HTTP server) to the Web client (HTTP client).
The Web client is typically a web browser such as Internet Explorer or Firefox, which uses HTTP to
transfer the web pages you request from remote servers.
FTP
File Transfer Protocol (FTP) enables a file on one system to be copied to another system. File
Transfer Protocol is a protocol used for transferring files between two hosts. Just like telnet and
HTTP, one host runs the FTP server application and is called the FTP server while the FTP client
runs the FTP client application. A client connecting to the FTP server may be required to
authenticate before being given access to the file structure. Once authenticated, the client can view
directory listings, get and send files, and perform some other file related functions. Just like telnet,
the FTP client application available in most operating systems is called ftp. So, the protocol and the
application should not be confused.
SMTP
Simple Mail Transfer Protocol is used to send e-mails. When an email client is configured to send
e-mails, it is using SMTP; the mail client acts as an SMTP client here. SMTP is also used between
two mail servers to send and receive emails. However, the end client does not receive emails using
SMTP. The end clients use the POP3 protocol to do that.
TFTP
Trivial File Transfer Protocol is a stripped-down version of FTP. Where FTP allows a user to see a
directory listing and perform some directory related functions, TFTP only allows sending and
receiving of files. It is a small and fast protocol, but it does not support authentication. Because of
this inherent security risk, it is not widely used.
DNS
Every host in a network has a logical address called the IP address. These addresses are just
numbers. When a user goes to a website such as www.cisco.com, the user is actually going to a host
which has an IP address, but the user does not have to remember the IP address of every website.
This is because the Domain Name Service (DNS) maps a name such as www.cisco.com to the IP
address of the host where the site resides. This obviously makes it easier to find resources on a
network. When a user types the address of a website in a browser, the system first sends out a DNS
query to its DNS server to resolve the name to an IP address. Once the name is resolved, an HTTP
session is established with that IP address.
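The resolution step described above can be sketched with the standard getaddrinfo() call. This is a minimal sketch: the helper name resolve_ipv4 is an illustrative choice, and "localhost" is used so the example works without Internet access.

```c
#include <stdio.h>
#include <string.h>
#include <netdb.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

/* Resolve a hostname to its first IPv4 address string.
 * Returns 0 on success, -1 if the name cannot be resolved. */
int resolve_ipv4(const char *host, char *buf, size_t buflen)
{
    struct addrinfo hints, *res;
    memset(&hints, 0, sizeof hints);
    hints.ai_family = AF_INET;          /* IPv4 only */
    hints.ai_socktype = SOCK_STREAM;
    if (getaddrinfo(host, NULL, &hints, &res) != 0)
        return -1;
    struct sockaddr_in *sa = (struct sockaddr_in *)res->ai_addr;
    inet_ntop(AF_INET, &sa->sin_addr, buf, buflen);
    freeaddrinfo(res);
    return 0;
}

int main(void)
{
    char ip[INET_ADDRSTRLEN];
    /* "localhost" resolves via the local hosts file, no network needed */
    if (resolve_ipv4("localhost", ip, sizeof ip) == 0)
        printf("localhost resolves to %s\n", ip);
    return 0;
}
```

A browser performs essentially this lookup (plus caching) before opening the HTTP session.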
DHCP
Every host requires a logical address such as an IP address to communicate in a network. The host
gets this logical address either by manual configuration or by a protocol such as Dynamic Host
Configuration Protocol (DHCP). Using DHCP, a host can be provided with an IP address
automatically. To understand the importance of DHCP, imagine having to manage 5000 hosts in a
network and assigning them IP addresses manually! Apart from the IP address, a host needs other
information such as the address of the DNS server it needs to contact to resolve names, gateways,
subnet masks, etc. DHCP can be used to provide all of this information along with the IP address.
TRANSPORT LAYER
The TCP/IP transport layer's function is the same as that of the OSI transport layer. It is concerned
with end-to-end transportation of data and sets up a logical connection between the hosts.
Network protocols are either connection-oriented or connectionless.
The functions of the transport layer are:
1. It facilitates the communicating hosts to carry on a conversation.
2. It provides an interface for the users to the underlying network.
3. It can provide for a reliable connection. It can also carry out error checking, flow control,
and verification.
Connection-oriented protocols, such as TCP, establish a logical connection before any data is
transferred and acknowledge delivery of the data that is sent.
Connectionless protocols - Packets are sent over the network without regard to whether they
actually arrive at their destinations. There are no acknowledgments or guarantees, but a user can
send a datagram to many different destinations at the same time. Connectionless protocols are fast
because no time is spent establishing and tearing down connections. Connectionless protocols are
also referred to as best-effort protocols.
A port is a logical connection point; specifically, in TCP/IP it is the way a client program
specifies a particular server program on a computer in a network. Higher-level
applications that use TCP/IP such as the Web protocol, Hypertext Transfer Protocol, have ports
with pre-assigned numbers. These are known as well-known ports that have been assigned by the
Internet Assigned Numbers Authority (IANA). Other application processes are given port numbers
dynamically for each connection. When a service (server program) initially is started, it is said to
bind to its designated port number.
Two protocols available in this layer are Transmission Control Protocol (TCP) and User Datagram
Protocol (UDP).
NETWORK LAYER
A number of TCP/IP protocols operate on the Network layer of the OSI Model, including IP, ARP,
RARP, BOOTP, and ICMP. Remember, the OSI Network layer is concerned with routing messages
across the internetwork. It provides logical addressing, path determination and forwarding.
NETWORK ACCESS LAYER
The following protocols operate at the Network Access layer of the TCP/IP model (the OSI Data
Link and Physical layers): -
ETHERNET
The Ethernet protocol is by far the most widely used one. Ethernet uses an access method called
CSMA/CD (Carrier Sense Multiple Access/Collision Detection). This is a system where each
computer listens to the cable before sending anything through the network. If the network is clear,
the computer will transmit. If another node is already transmitting on the cable, the computer
will wait and try again when the line is clear. Sometimes, two computers attempt to transmit at the
same instant. A collision occurs when this happens. Each computer then backs off and waits a
random amount of time before attempting to retransmit. With this access method, it is normal to
have collisions. However, the delay caused by collisions and retransmitting is very small and does
not normally affect the speed of transmission on the network. The Ethernet protocol allows for
linear bus, star, or tree topologies. Data can be transmitted over wireless access points, twisted pair,
coaxial, or fiber optic cable at a speed of 10 Mbps up to 1000 Mbps.
TOKEN RING
The Token Ring protocol was developed by IBM in the mid-1980s. The access method used
involves token-passing. In Token Ring, the computers are connected so that the signal travels
around the network from one computer to another in a logical ring. A single electronic token moves
around the ring from one computer to the next. If a computer does not have information to transmit,
it simply passes the token on to the next workstation. If a computer wishes to transmit and receives
an empty token, it attaches data to the token. The token then proceeds around the ring until it comes
to the computer for which the data is meant. At this point, the data is captured by the receiving
computer. The Token Ring protocol requires a star-wired ring using twisted pair or fiber optic
cable. It can operate at transmission speeds of 4 Mbps or 16 Mbps. Due to the increasing popularity
of Ethernet, the use of Token Ring in school environments has decreased.
Ping
Ping is a computer network administration utility used to test the reachability of a host on an
Internet Protocol (IP) network and to measure the round-trip time for messages sent from the host to
a destination computer. The name comes from active sonar terminology. Ping operates by sending
Internet Control Message Protocol (ICMP) echo request packets to the target host and waiting for
an ICMP response. In the process it measures the time from transmission to reception (round-trip
time) and records any packet loss. The results of the test are printed in the form of a statistical
summary of the response packets received, including the minimum, maximum, and mean round-trip
times, and sometimes the standard deviation of the mean.
Finger
Finger was one of the first computer network applications. It enabled people to see who else was
using the computer system as well as find basic information on that user. To find information about
a specific user, it was necessary to know that person's email address. Typical information provided
by Finger would be a person's real name, their office location and phone number, and the last time
they logged in. Users could also modify the plan field to add whatever text they wished.
Socket
A socket represents a single connection between two network applications. These two applications
nominally run on different computers, but sockets can also be used for interprocess communication
on a single computer. Applications can create multiple sockets for communicating with each other.
RESULT: -
We have successfully studied the TCP/IP model, its protocols and the associated computer network
terms.
DISCUSSION: -
1. What do you mean by network?
2. What is OSI model and explain the different layers of the OSI model.?
3. Compare connection oriented and connection less protocols.
4. What do you mean by TCP/IP model and explain the different layers of TCP/IP model?
5. What is the importance of TCP/IP?
6. Compare TCP and UDP protocols.
EXPERIMENT No. – 2
AIM:
Write a program for representation of unidirectional, directional weighted and unweighted
graph in C language.
THEORY:
A Graph is a non-linear data structure consisting of nodes and edges. The nodes are sometimes also
referred to as vertices and the edges are lines or arcs that connect any two nodes in the graph.
Graph consists of two following components:
1. Vertices
2. Edges
Graph is a set of vertices (V) and set of edges (E).
V is a finite number of vertices also called as nodes.
E is a set of ordered pair of vertices representing edges.
Figure 2.1: Graph 1 and Graph 2 are undirected graphs and Graph 3 is a directed graph
Graph 1:
V = {A, B, C, D, E, F}
E = {(A, B), (A, C), (B, C), (B, D), (D, E), (D, F), (E, F)}
Graph 2:
V = {A, B, C, D, E, F}
E = {(A, B), (A, C), (B, D), (C, E), (C, F)}
Graph 3:
V = {A, B, C}
E = {(A, B), (A, C), (C, B)}
1. Adjacency Matrix
The adjacency matrix of an undirected graph is always a symmetric matrix, which means an edge (i, j)
implies the edge (j, i).
The adjacency matrix of a directed graph need not be symmetric; adj[i][j] = 1 indicates a directed
edge from vertex i to vertex j.
2. Adjacency List
Adjacency list is another representation of graphs.
It is a collection of unordered lists used to represent a finite graph.
Each list describes the set of neighbors of a vertex in the graph.
An adjacency list requires less memory than an adjacency matrix for sparse graphs.
For every vertex, the adjacency list stores a list of vertices adjacent to the current one.
In an adjacency list, an array of linked lists is used; the size of the array is equal to the number
of vertices.
Types of graphs
Undirected: An undirected graph is a graph in which all the edges are bi-directional i.e. the
edges do not point in any specific direction.
Directed: A directed graph is a graph in which all the edges are uni-directional i.e. the edges point
in a single direction.
Weighted directed graph: In a weighted graph, each edge is assigned a weight or cost. Consider a
graph of 4 nodes as in the diagram below. As you can see each edge has a weight/cost assigned to
it.
2(a) C Program for representation of an Undirected Unweighted Graph: 6 vertices and 7 edges
#include<stdio.h>
#include<conio.h>
#define N 100
/*
* Graph is the graph representation in adjacency matrix
*/
int Graph[N][N];
/*
* u is the current or source vertex
* v is the next or destination vertex
*/
int vertices, edges;
int u, v;
int i, j;
void InputGraph(){
printf("Enter vertices and Edges:\n");
scanf("%d%d", &vertices, &edges);
// Reset graph
for(i = 1; i<=vertices; i++)
for(j = 1; j <=vertices; j++)
Graph[i][j] = 0;
// Input Graph
printf("Enter (u v):\n");
for(i = 1; i<=edges; i++){
scanf("%d%d", &u, &v);
// Here value of 1 represents there is an edge (u,v)
Graph[u][v] = Graph[v][u] = 1;
}
}
//Function for printing the adjacency matrix of graph
void PrintGraph(){
// Print the current Graph
printf("\nAdjacency matrix of Undirected Graph :\n");
for(i = 1; i<=vertices; i++){
for(j = 1; j <=vertices; j++)
printf("%d ", Graph[i][j]);
printf("\n");
}
}
//Main Function
void main(){
clrscr();
printf("Undirected Unweighted Graph:\n");
InputGraph();
PrintGraph();
getch();
}
OUTPUT:
2(b) C Program for representation of a Directed Unweighted Graph: 4 vertices and 5 edges
#include<stdio.h>
#include<conio.h>
#define N 100
/*
* Graph is the graph representation in adjacency matrix
*/
int Graph[N][N];
/*
* u is the current or source vertex
* v is the next or destination vertex
*/
int vertices, edges;
int u, v;
int i, j;
//Function for Input the graph
void InputGraph()
{
printf("Enter vertices and Edges:\n");
scanf("%d%d", &vertices, &edges);
// Reset graph
for(i = 1; i<=vertices; i++)
for(j = 1; j <=vertices; j++)
Graph[i][j] = 0;
// Input Graph
printf("Enter (u v):\n");
for(i = 1; i<=edges; i++)
{
printf("edge-%d \t",i);
scanf("%d%d", &u, &v);
// Here value of 1 represents a directed edge from u to v
Graph[u][v] = 1;
}
}
//Function for printing the adjacency matrix of graph
void PrintGraph()
{
// Print the current Graph
printf("\n");
printf("Adjacency matrix of Directed Graph :\n");
for(i = 1; i<=vertices; i++)
{
for(j = 1; j <=vertices; j++)
printf("%d ", Graph[i][j]);
printf("\n");
}
}
//Main Function
void main()
{
clrscr();
printf("Directed Unweighted Graph:\n");
InputGraph();
PrintGraph();
getch();
}
OUTPUT:
2(c) C Program for representation of a Directed Weighted Graph: 4 vertices and 5 edges
#include<stdio.h>
#include<conio.h>
#define N 100
/*
* Graph is the graph representation in adjacency matrix
*/
int Graph[N][N];
/*
* u is the current or source vertex
* v is the next or destination vertex
* w is the weight of edge (u,v)
*/
int vertices, edges;
int u, v, w;
int i, j;
//Function for Input the graph
void InputGraph()
{
printf("Enter vertices and Edges:\n");
scanf("%d%d", &vertices, &edges);
// Reset graph
for(i = 1; i<=vertices; i++)
for(j = 1; j <=vertices; j++)
Graph[i][j] = 0;
// Input Graph
printf("Enter (u v w):\n");
for(i = 1; i<=edges; i++)
{
printf("edge-%d \t",i);
scanf("%d%d%d", &u, &v, &w);
// Here w is stored as the weight of directed edge (u,v)
Graph[u][v] = w;
}
}
//Function for printing the adjacency matrix of graph
void PrintGraph()
{
// Print the current Graph
printf("\n");
printf("Adjacency matrix of Directed Weighted Graph :\n");
for(i = 1; i<=vertices; i++)
{
for(j = 1; j <=vertices; j++)
printf("%d ", Graph[i][j]);
printf("\n");
}
}
//Main Function
void main()
{
clrscr();
printf("Directed Weighted Graph:\n");
printf("========================\n\n");
InputGraph();
PrintGraph();
getch();
}
OUTPUT:
RESULT:
Undirected and directed graphs are represented through an adjacency matrix. Input values are taken
from the user in the form of vertices and edges. The output is the adjacency matrix, where zero
shows that there is no connection between two vertices and a non-zero entry shows there is a
connection; in a weighted graph the non-zero value defines the weight between the two vertices. In
a directed graph, the direction of flow is from the row vertex to the column vertex. Vertices are
connected to each other through edges. The adjacency matrix is a 2D matrix whose rows and columns
are the set of vertices.
DISCUSSION:
EXPERIMENT No. – 3
AIM:
Write a program to compute shortest path for one source - one destination and one source –all
destination in C language.
THEORY:
The Shortest Path problem involves finding a path from a source vertex to a destination vertex which
has the least length among all such paths.
3(a) C code for calculating the shortest path for one source - all Destinations
#include<stdio.h>
#include<conio.h>
#define infinity 999
void dij(int n, int v, int cost[10][10], int dis[10]); /* defined under program 3(b) */
void main()
{
int v, n, i, j, cost[10][10], dis[10];
clrscr();
printf("enter the number of nodes \t");
scanf("%d",&n);
printf("enter the cost matrix \n");
for(i=1;i<=n;i++)
for(j=1;j<=n;j++)
{
printf("cost[%d][%d]=",i,j);
scanf("%d",&cost[i][j]);
}
for(i=1;i<=n;i++)
for(j=1;j<=n;j++)
if(cost[i][j]==0)
cost[i][j]=infinity;
printf("enter the source vertex \t");
scanf("%d",&v);
dij(n,v,cost,dis);
printf("shortest path");
for(i=1;i<=n;i++)
if(i!=v)
printf("%d->%d\ncost=%d\n",v,i,dis[i]);
getch();
}
OUTPUT:
No of vertices=5
3(b) C code for calculating the shortest path for one source - one Destination
#include<stdio.h>
#include<conio.h>
#define infinity 999
void dij(int n,int v,int cost[10][10],int dis[10]);
void main()
{
int v,n,m,i,j,cost[10][10],dis[10];
clrscr();
printf("enter the number of nodes \t");
scanf("%d",&n);
printf("enter the cost matrix \n");
for(i=1;i<=n;i++)
for(j=1;j<=n;j++)
{
printf("cost[%d][%d]=",i,j);
scanf("%d",&cost[i][j]);
}
printf("Adjacency Matrix for Undirected Weighted Graph:- \n");
for(i=1;i<=n;i++)
{
for(j=1;j<=n;j++)
printf("\t%d",cost[i][j]);
printf("\n");
}
for(i=1;i<=n;i++)
for(j=1;j<=n;j++)
if(cost[i][j]==0)
cost[i][j]=infinity;
printf("enter the source vertex \t");
scanf("%d",&v);
printf("enter the destination vertex \t");
scanf("%d",&m);
dij(n,v,cost,dis);
printf("shortest path %d->%d\ncost=%d\n",v,m,dis[m]);
getch();
}
//Function defination for shortest path using Dijkstra's Algo
void dij(int n,int v,int cost[10][10],int dis[10])
{
int i,u,count,w,flag[10],min;
for(i=1;i<=n;i++)
{
flag[i]=0;
dis[i]=cost[v][i];
}
count=2;
while(count<=n)
{
min=infinity;
for(w=1;w<=n;w++)
{
if(dis[w]<min && !flag[w])
{
min=dis[w];
u=w;
}
}
flag[u]=1;
count++;
for(w=1;w<=n;w++)
{
if(dis[u]+cost[u][w]<dis[w] && !flag[w])
{
dis[w]=dis[u]+cost[u][w];
}
}
}
}
OUTPUT:
No of vertices=5
RESULT:
The Shortest Path problem involves finding a path from a source vertex to a destination vertex which
has the least length among all such paths between source and destination. In computer network, there
are many shortest path algorithms. If all edge weights are positive we use Dijkstra's shortest path
algorithm; otherwise the Bellman-Ford algorithm is used. The algorithm repeatedly relaxes edges
until the shortest path is obtained, both for one source - one destination and for one source - all
destinations.
DISCUSSION:
Q1. What is Single source shortest path problem?
Q2. Data structure used in Dijkstra and bellman ford algorithm.
Q3. Write down the complexity for Dijkstra’s algorithm?
EXPERIMENT No. – 4(A)
AIM:
Write a program for simulation of M/M/1 and M/M/1/N queue network protocols.
THEORY:
In queueing theory, a discipline within the mathematical theory of probability, an M/M/1 queue
represents the queue length in a system having a single server, where arrivals are determined by a
Poisson process and job service times have an exponential distribution. The model name is written in
Kendall's notation.
An M/M/1 queue is a stochastic process whose state space is the set {0,1,2,3,...} where the value
corresponds to the number of customers in the system, including any currently in service.
Arrivals occur at rate λ according to a Poisson process and move the process from state i to i+1.
Service times have an exponential distribution with rate parameter μ in the M/M/1 queue, where
1/μ is the mean service time.
A single server serves customers one at a time from the front of the queue, according to a first-
come, first-served discipline. When the service is complete the customer leaves the queue and the
number of customers in the system reduces by one.
The buffer is of infinite size, so there is no limit on the number of customers it can contain.
In Fig. 4.1, λ represents the birth (creation) of a new process and μ represents the death (completion) of
an old process.
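Before simulating, the steady-state M/M/1 quantities can be computed in closed form: utilization ρ = λ/μ, mean number in system L = ρ/(1−ρ), and mean time in system W = 1/(μ−λ), all valid only for ρ < 1. A minimal C sketch (the function name is illustrative):

```c
/* Steady-state M/M/1 metrics for arrival rate lambda and service rate
   mu.  Valid only when rho = lambda/mu < 1 (otherwise the queue grows
   without bound).  Returns the utilisation; the mean number in system
   and mean time in system are written through the pointers. */
double mm1_metrics(double lambda, double mu, double *L, double *W)
{
    double rho = lambda / mu;      /* server utilisation           */
    *L = rho / (1.0 - rho);        /* mean number in system        */
    *W = 1.0 / (mu - lambda);      /* mean time in system (delay)  */
    return rho;
}
```

The simulated curves produced by the MATLAB code should approach these values as the number of simulated packets grows.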
arrival_mean_time(1:65)=0.01;
service_mean_time=0.01;
sim_packets=750; %number of clients to be simulated
util(1:65) = 0;
avg_num_in_queue(1:65) = 0;
avg_delay(1:65) = 0;
P(1:65) = 1;
for j=1:64 %loop for increasing the mean arrival time
arrival_mean_time(j+1)=arrival_mean_time(j) + 0.001;
num_events=2;
% initialization
sim_time = 0.0;
server_status=0;
queue_size=0;
time_last_event=0.0;
num_pack_insys=0;
total_delays=0.0;
time_in_queue=0.0;
time_in_server=0.0;
delay = 0.0;
time_next_event(1) = sim_time + exponential(arrival_mean_time(j+1));
time_next_event(2) = exp(30); % no departure scheduled yet (large sentinel value)
disp(['Launching Simulation...',num2str(j)]);
while(num_pack_insys<sim_packets)
min_time_next_event = exp(29);
type_of_event=0;
for i=1:num_events
if(time_next_event(i)<min_time_next_event)
min_time_next_event = time_next_event(i);
type_of_event = i;
end;
end
if(type_of_event == 0)
disp(['no event in time ',num2str(sim_time)]);
end
sim_time = min_time_next_event;
time_since_last_event = sim_time - time_last_event;
time_last_event = sim_time;
%results output
util(j+1) = time_in_server/sim_time;
avg_num_in_queue(j+1) = time_in_queue/sim_time;
avg_delay(j+1) = total_delays/num_pack_insys;
P(j+1) = service_mean_time./arrival_mean_time(j+1);
end
%----------------------graphs--------------------------------
figure('name','mean number of clients in system diagram(simulated)');
plot(P,avg_num_in_queue,'r');
xlabel('P');
ylabel('mean number of clients');
axis([0 0.92 0 15]);
Method 2:
M/M/1 Queuing System: This example shows how to model a single-queue single-server system with
a single traffic source and an infinite storage capacity. In the notation, the M stands for Markovian;
M/M/1means that the system has a Poisson arrival process, an exponential service time distribution, and
one server. Queuing theory provides exact theoretical results for some performance measures of an
M/M/1 queuing system and this model makes it easy to compare empirical results with the
corresponding theoretical results.
Structure
The model includes the components listed below:
Entity Generator block: Models a Poisson arrival process by generating entities (also known as customers in
queuing theory).
Simulink Function exponential Arrival Time (): Returns data representing the interarrival times for
the generated entities. The interarrival time of a Poisson arrival process is an exponential random
variable.
Entity Queue block: Stores entities that have yet to be served in FIFO order
Entity Server block: Models a server whose service time has an exponential distribution.
Figure 4.3: Mean number of clients versus ratio of service time and arrival time
Figure 4.4: Mean delay versus ratio of service time and arrival time
Figure 4.5: Utilisation versus ratio of service time and arrival time
In this queuing model, the arrivals occur one by one in accordance with a Poisson process with parameter
λ(1+θ), where θ represents the fractional change in the number of customers calculated from past or
observed data. For instance, if in the past an organization offered discounts and the change in the
number of customers was observed to be +50% or +120%, then θ = 0.5 or θ = 1.2 respectively.
Program
%%
gapt = exprnd(lambda,1,nv);
st = exprnd(mu,1,nv);
st = [st; exprnd(mu,1,nv)];
st = [st; exprnd(mu,1,nv)];
%%
%gapt = zeros(1,nv) + 10;
%st = zeros(3,nv) + 1;
%%
%initialize the indices of the three servers
at(1) = 0;
for i = 2:nv
at(i) = at(i-1) + gapt(i-1);
end
%%
for i = 2:k
ct(i) = at(i);
lt(i) = ct(i) + st(i,pointer(i));
pointer(i) = pointer(i) + 1;
index(i) = i;
end
%%
for i = (k+1):nv
if at(i) >= v
ct(i) = at(i);
else
prev = index(pos);
ct(i) = lt(prev);
end
%%
%length of the queue
lenat = zeros(nv,2);
lenlt = zeros(nv,2);
for i = 1:nv
lenat(i,1) = at(1,i);
lenat(i,2) = 1;
lenlt(i,1) = lt(1,i);
lenlt(i,2) = -1;
end
%%
len = [lenat; lenlt];
len = sortrows(len,1);
%%
leng = zeros(1,size(len,1));
for i = 2:size(len,1)
if len(i,2) > 0
leng(1,i) = leng(1,i-1) + 1;
else
if leng(1,i-1) > 0
leng(1,i) = leng(1,i-1)-1 ;
else
leng(1,i) = 0;
end
end
end
figure
hold on
subplot(2,1,1)
x = 1:size(len,1);
plot(x, leng);
title('queue length vs time');
xlabel('Time(ms)');
ylabel('Queue Length');
meanlen(x) = mean(leng);
subplot(2,1,2)
plot(x, meanlen)
Figure 4.9: Queue Length v/s time graph for M/M/1/N queuing model
RESULT:
The M/M/1 queuing model is simulated in the MATLAB environment. This experiment is done using two
methods: (i) MATLAB code and (ii) a Simulink representation. The outcome of this experiment is to calculate the
mean delay, the mean number of clients, and the utilization of the processor when processes are continuously
arriving and executing. This calculation is useful for finding the number of servers required to execute
the processes without delay.
The M/M/1/N queuing model is also simulated in the MATLAB environment. In this model one server is used for
the execution of processes, and the mean queue length is higher than in the other queuing model.
DISCUSSION:
Q1. How is queuing delay calculated?
Q2. What is utilization factor in Queuing theory?
Q3. What is queuing model in simulation?
Q4. What is steady state in Queuing theory?
AIM:
Write a program for Simulation of pure and slotted ALOHA.
THEORY:
Pure ALOHA and Slotted ALOHA are both random-access protocols implemented at the
Medium Access Control (MAC) layer, a sublayer of the Data Link Layer. The purpose of the ALOHA
protocol is to determine which competing station gets the next chance to access the multi-access
channel at the MAC layer. The main difference between Pure ALOHA and Slotted ALOHA is that
time in Pure ALOHA is continuous, whereas time in Slotted ALOHA is discrete.
Pure ALOHA was introduced by Norman Abramson and his associates at the University of Hawaii in
the early 1970s. Pure ALOHA simply allows every station to transmit data whenever it has data
to send. Because every station transmits without checking whether the channel is free,
there is always the possibility of collision of data frames. If an acknowledgment arrives for a
frame, the frame was received correctly; otherwise, if two frames collide (overlap), they are damaged.
After Pure ALOHA appeared in 1970, Roberts introduced another method to improve the capacity of Pure
ALOHA, called Slotted ALOHA. He proposed dividing time into discrete intervals called
time slots, each corresponding to the length of a frame. In contrast to Pure ALOHA,
Slotted ALOHA does not allow a station to transmit whenever it has data to send; the
station must wait until the next time slot begins, and each data frame is
transmitted in a new time slot.
Figure 4.10: Pure ALOHA protocol. Boxes indicate frames. Shaded boxes indicate frames which have
collided.
SLOTTED ALOHA
An improvement to the original ALOHA protocol was "Slotted ALOHA", which introduced discrete
timeslots and increased the maximum throughput. A station can start a transmission only at the
beginning of a timeslot, and thus collisions are reduced. In this case, only transmission-attempts within
1 frame-time and not 2 consecutive frame-times need to be considered, since collisions can only occur
during each timeslot.
Figure 4.11: Slotted ALOHA protocol. Boxes indicate frames. Shaded boxes indicate frames which are
in the same slots
MATLAB Code For efficiency of Pure Aloha and Slotted Aloha Protocol
G=0:0.1:3;
S=G.*exp(-G);
plot(G,S,'b+:');
text(1,.38,'MAX THROUGHPUT FOR SLOTTED ALOHA')
xlabel('load offered');
ylabel('throughput');
title('aloha protocol');
hold on;
S1=G.*exp(-2*G);
plot(G,S1,'rd:');
text(0.5,.2,'MAX THROUGHPUT FOR PURE ALOHA')
xlabel('load offered');
ylabel('throughput');
title('aloha protocol');
legend('Slotted ALOHA','Pure ALOHA','Location','NorthEast')
OUTPUT:
Figure 4.12: Aloha and Slotted Aloha Protocol throughput v/s load offered
RESULT:
Pure ALOHA and Slotted ALOHA are both random-access protocols implemented at the
Medium Access Control (MAC) layer, a sublayer of the Data Link Layer. The purpose of the ALOHA
protocol is to determine which competing station gets the next chance to access the multi-access
channel at the MAC layer. The throughput of pure and slotted ALOHA is analyzed as a function of
the offered load.
DISCUSSION:
Q1. What are the disadvantages of Pure Aloha?
Q2. State disadvantages of Slotted Aloha.
Q3. Compare Pure Aloha and Slotted Aloha.
Q4. What is G in slotted Aloha?
AIM:
Write a program for Simulation of link state routing algorithm in MATLAB.
THEORY:
Link state routing is a dynamic routing algorithm in which each router shares knowledge of its
neighbors with every other router in the network. A router sends information about its neighbors
to all other routers through flooding. Information sharing takes place only whenever there is a
change. The algorithm makes use of Dijkstra's algorithm for building routing tables.
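Once flooding has given each router the complete topology, every router runs Dijkstra's algorithm locally over the link cost matrix. A compact C sketch of that local computation (array sizes and names are our own; the lab program itself is in MATLAB):

```c
#include <limits.h>

#define LS_MAXN 16
#define LS_INF  INT_MAX

/* Fill dist[] with shortest path costs from router src, given the
   flooded topology in cost[][] (0 means "no direct link"). */
void ls_dijkstra(int n, int cost[LS_MAXN][LS_MAXN],
                 int src, int dist[LS_MAXN])
{
    int done[LS_MAXN] = {0};
    for (int i = 0; i < n; i++)
        dist[i] = LS_INF;
    dist[src] = 0;

    for (int iter = 0; iter < n; iter++) {
        int u = -1;
        for (int i = 0; i < n; i++)          /* closest unsettled node */
            if (!done[i] && (u < 0 || dist[i] < dist[u]))
                u = i;
        if (dist[u] == LS_INF)
            break;                           /* the rest is unreachable */
        done[u] = 1;
        for (int v = 0; v < n; v++)          /* relax edges out of u    */
            if (cost[u][v] > 0 && dist[u] + cost[u][v] < dist[v])
                dist[v] = dist[u] + cost[u][v];
    }
}
```

Recording, for each destination, which neighbor first settled it would turn these distances into a next-hop routing table.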
end
end
end
disp('matrix');
i=1;
x=1;
mat1=triu(matrix);
for i=1:node
for j=1:node
if(mat1(i,j)~=0)
mat(i,j)=x;
mat(j,i)=mat(i,j);
x=x+1;
end
end
end
view(biograph(triu(matrix),[],'showarrows','off','ShowWeights','on','EdgeTextColor',[0 0 1]));
for from=1:node
for via=1:node
for to=1:node
if(from~=via && from ~=to)
if(via==to && matrix(from,to)~=0)
go(to,via,from)=matrix(from,to);
else
go(to,via,from)=100;
end
else
go(to,via,from)=inf;
end
end
end
end
i=0;
while(i<2)
for from=1:node
for to=1:node
if(from~=to)
if(matrix(from,to)~=0)
for x=1:node
for y=1:node
temp(x,y)=matrix(from,to)+min(go(y,:,to));
if(temp(x,y)<go(y,to,from)&&go(y,to,from)<inf)
go(y,to,from)=temp(x,y);
end
end
end
end
end
end
end
i=i+1;
end
disp(go)
choice='y';
while(choice=='y')
source=input('Enter the source node: ');
dest=input('Enter the destination node: ');
disp('Path between Source and Destination is:- ');
trace(1)=source;
j=2;
while(source~=dest)
[row,col]=find(go(dest,:,source)==min(go(dest,:,source)));
trace(j)=col;
source=col;
j=j+1;
end
k=1:j-1;
disp(trace(k));
bg=biograph(triu(matrix),[],'showarrows','off','ShowWeights','on','EdgeTextColor',[0 0 1]);
for i=1:j-1;
set(bg.nodes(trace(i)), 'color', [1 0 0]);
set(bg.nodes(trace(1)),'color',[0 1 0]);
if(i<j-1)
set(bg.edges(mat(trace(i+1),trace(i))),'linecolor',[1 0 0]);
end
end
view(bg);
choice=input('Do you want to try again (y/n):-','s');
end
OUTPUT:
Input: Enter the no. of nodes: 7
Figure 4.13: Weighted graph (Weight is taken as a Random number between 0-9)
Enter the source node:4
Enter the destination node:7
Path between Source and Destination is- 4 3 1 7
Figure 4.14: Shortest path between source node (4) and destination node (7)
RESULT:
The link state routing algorithm is a dynamic routing algorithm. In this program a weighted graph is created
from the given number of nodes, with random weights on the edges. A shortest path algorithm then computes the
shortest path from the source vertex to the destination vertex for transferring information. Information is shared
with every neighbor, but the final path is computed with the shortest path algorithm.
DISCUSSION:
AIM: Observe the behavior & measure the throughput under various network load conditions
for following MAC layer Protocols: Ethernet LAN protocol to create scenario and Study the
performance of CSMA/CD (Carrier Sense Multiple Access with Collision Detection) Protocol
through simulation.
HARDWARE REQUIRED: LAN Trainer kit ST5002A, 2-CAT5 Cables, 42 mm Patch Cords
THEORY:
A network’s access method is its method of controlling how network nodes access the
communications channel. In comparing a network to a highway, the on-ramps would be one
part of the highway’s access method. A busy highway might use stoplights at each on-ramp to
allow only one person to merge into traffic every five seconds. After merging, cars are
restricted to lanes and each lane is limited as to how many cars it can hold at one time. All of
these highway controls are designed to avoid collisions and help drivers get to their
destinations. On networks, similar restrictions apply to the way in which multiple computers
share a finite amount of bandwidth on a network. These controls make up the network’s access
method.
The access method used in Ethernet is called CSMA/CD (Carrier Sense Multiple Access with
Collision Detection). All Ethernet networks, independent of their speed or frame type, rely on
CSMA/CD. To understand Ethernet, you must first understand CSMA/CD. Take a minute to
think about the full name “Carrier Sense Multiple Access with Collision Detection.” The term
“Carrier Sense” refers to the fact that Ethernet NICs listen on the network and wait until they
detect (or sense) that no other nodes are transmitting data over the signal (or carrier) on the
communications channel before they begin to transmit. The term “Multiple Access” refers to
the fact that several Ethernet nodes can be connected to a network and can monitor traffic, or
access the media, simultaneously.
In CSMA/CD, when a node wants to transmit data it must first access the transmission media
and determine whether the channel is free. If the channel is not free, it waits and checks again
after a very brief amount of time. If the channel is free, the node transmits its data. Any node
can transmit data after it determines that the channel is free. But what if two nodes
simultaneously check the channel, determine that it’s free, and begin to transmit? When this
happens, their two transmissions interfere with each other; this is known as a collision.
The last part of the term CSMA/CD, “collision detection,” refers to the way nodes respond to a
collision. In the event of a collision, the network performs a series of steps known as the
collision detection routine. If a node’s NIC determines that its data has been involved in a
collision, it immediately stops transmitting. Next, in a process called jamming, the NIC issues a
special 32-bit sequence that indicates to the rest of the network nodes that its previous
transmission was faulty and that those data frames are invalid. After waiting, the NIC
determines if the line is again available; if it is available, the NIC retransmits its data.
On heavily trafficked networks, collisions are fairly common. It is not surprising that the more
nodes there are transmitting data on a network, the more collisions that will take place.
(Although a collision rate greater than 5% of all traffic is unusual and may point to a
problematic NIC or poor cabling on the network.) When an Ethernet network grows to include
a particularly large number of nodes, you may see performance suffer as a result of collisions.
This “critical mass” number depends on the type and volume of data that the network regularly
transmits. Collisions can corrupt data or truncate data frames, so it is important that the network
detect and compensate for them.
On an Ethernet network, a collision domain is the portion of a network in which collisions
occur if two nodes transmit data at the same time. When designing an Ethernet network, it’s
important to note that because repeaters simply regenerate any signal they receive, they repeat
collisions just as they repeat data. Thus, connecting multiple parts of a network with repeaters
results in a larger collision domain. Higher-layer connectivity devices, such as switches and
routers, however, can separate collision domains.
Collision domains play a role in the Ethernet cabling distance limitations. For example, if there
is more than 100 meters distance between two nodes on a segment connected to the same
100BASE-TX network bus, data propagation delays will be too long for CSMA/CD to be
effective. A data propagation delay is the length of time data takes to travel from one point on
the segment to another point. When data takes a long time, CSMA/CD’s collision detection
routine cannot identify collisions accurately. In other words, one node on the segment might
begin its CSMA/CD routine and determine that the channel is free even though a second node
has begun transmitting, because the second node’s data is taking so long to reach the first node.
At rates of 100 or 1000 Mbps, data travels so quickly that NICs can’t always keep up with the
collision detection and retransmission routines. For example, because of the speed employed on
a 100BASE-TX network, the window of time for the NIC to both detect and compensate for the
error is much less than that of a 10BASE-T network. To minimize undetected collisions,
100BASE-TX networks can support only a maximum of three network segments connected
with two hubs, whereas 10BASE-T buses can support a maximum of five network segments
connected with four hubs. This shorter path reduces the highest potential propagation delay
between nodes.
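After detecting a collision, Ethernet stations separate their retransmissions using truncated binary exponential backoff: after the nth consecutive collision a station waits a random number of slot times drawn from 0 to 2^min(n,10) − 1, and gives up after 16 collisions. A C sketch of that rule (the function name is ours):

```c
#include <stdlib.h>

/* Truncated binary exponential backoff (Ethernet CSMA/CD): after the
   nth consecutive collision, wait a random number of slot times drawn
   uniformly from 0 .. 2^min(n,10) - 1; after 16 collisions the frame
   is dropped.  Returns the slot count, or -1 for "give up". */
int csma_cd_backoff_slots(int n)
{
    if (n >= 16)
        return -1;                  /* excessive collisions: abort    */
    int cap = n < 10 ? n : 10;      /* back-off ceiling at 2^10 slots */
    int range = 1 << cap;           /* 2^min(n,10) possible values    */
    return rand() % range;          /* uniform in 0 .. range-1        */
}
```

Doubling the range on each collision spreads retransmissions out as the load rises, which is why throughput degrades gracefully rather than collapsing.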
PROCEDURE:
the ST5002A)
24. Right click with the mouse and the user will get the listing 'Plot All'
Figure 5(A).17: Step – 23
25. Select 'Plot All' and a graph will be displayed; the user can also zoom into a particular area by
selecting that area.
Output:
RESULT: We have observed the behavior and measured the throughput (under various network
load conditions) of the Ethernet LAN protocol, created the scenario, and studied the performance of
the CSMA/CD MAC layer protocol through simulation.
DISCUSSION:
Q1. What is the difference between MAC sublayer and LLC sublayer?
Q2. What is the difference between repeater and hub?
Q3. List out advantage of token passing protocol over CSMA/CD protocol?
Q4. Define CSMA and CDMA?
AIM: Observe the behavior & measure the throughput under various network load conditions for
following MAC layer Protocols: Implementation of Wireless LAN Protocol. To create scenario and
study the performance network with CSMA/CA (Carrier Sense Multiple Access with Collision
Avoidance) protocol and compare with CSAMA/CD protocol.
HARDWARE REQUIRED: LAN Trainer kit ST5002A, 2-CAT5 Cables, 42 mm Patch Cords,
Wireless 150 Router, Wireless 150 USB Adapters
Theory: Carrier-sense multiple access with collision avoidance (CSMA/CA), in computer networking, is
a network multiple access method in which carrier sensing is used, but nodes attempt to avoid collisions
by beginning transmission only after the channel is sensed to be "idle". CSMA/CA (Carrier Sense
Multiple Access/Collision Avoidance) is a protocol for carrier transmission in 802.11 networks. Unlike
CSMA/CD (Carrier Sense Multiple Access/Collision Detect) which deals with transmissions after a
collision has occurred, CSMA/CA acts to prevent collisions before they happen.
It is particularly important for wireless networks, where the collision detection of the alternative
CSMA/CD is not possible due to wireless transmitters de-sensing their receivers during packet
transmission. CSMA/CA is unreliable due to the hidden node problem. CSMA/CA is a protocol that
operates in the Data Link Layer (Layer 2) of the OSI model.
Collision avoidance is used to improve the performance of the CSMA method by attempting to divide
the channel somewhat equally among all transmitting nodes within the collision domain.
1. Carrier Sense: prior to transmitting, a node first listens to the shared medium (such as listening
for wireless signals in a wireless network) to determine whether another node is transmitting or
not. Note that the hidden node problem means another node may be transmitting which goes
undetected at this stage.
2. Collision Avoidance: if another node was heard, we wait for a period of time (usually random)
for the node to stop transmitting before listening again for a free communications channel.
3. RTS/CTS exchange (optional): the node may first exchange short Request to Send (RTS) and
Clear to Send (CTS) frames with the receiver; many implementations do not use this exchange, or
at least not use it for small packets (the overhead of RTS, CTS and transmission is too
great for small data transfers).
4. Transmission: if the medium was identified as being clear or the node received a CTS
to explicitly indicate it can send, it sends the frame in its entirety. Unlike CSMA/CD, it
is very challenging for a wireless node to listen at the same time as it transmits (its
transmission will dwarf any attempt to listen). Continuing the wireless example, the node
awaits receipt of an acknowledgement packet from the Access Point to indicate the
packet was received and check summed correctly. If such acknowledgement does not
arrive in a timely manner, it assumes the packet collided with some other transmission,
causing the node to enter a period of binary exponential back off prior to attempting to
re-transmit.
Although CSMA/CA has been used in a variety of wired communication systems, it is particularly
beneficial in a wireless LAN due to a common problem of multiple stations being able to see the Access
Point, but not each other. This is due to differences in transmit power, and receive sensitivity, as well as
distance, and location with respect to the AP. This will cause a station to not be able to 'hear' another
station's broadcast. This is the so-called 'hidden node', or 'hidden station' problem. Devices utilizing
802.11 based standards can enjoy the benefits of collision avoidance (RTS / CTS handshake, also Point
coordination function), although they do not do so by default. By default they use a Carrier sensing
mechanism called 'exponential back-off', or (Distributed coordination function) that relies upon a
station attempting to 'listen' for another station's broadcast before sending. CA, or PCF relies upon the
AP (or the 'receiver' for Ad hoc networks) granting a station the exclusive right to transmit for a given
period of time after requesting it (Request to Send / Clear to Send).
1. Insert power adapter in wireless LAN access point module and switch on the supply.
2. Inserting Router software CD in CD-ROM. Open folder and run Setup wizard
3. Double click on DIR-600 a window appears as shown
9. Select the second option if you don't have an internet connection on the PC on which you are planning
to connect the USB Adapter
10. Finally 'Finish' will appear, confirming that the setup is completed
Figure5(B).4: Step – 10
11. Now take the Wireless USB Adapter and insert it into the PC where the Router is connected
12. It asks for a driver; insert the Wireless USB Adapter CD into the CD-ROM and run Setup
13. Double click on the autorun D icon and a window appears as shown
RESULT: We have observed the behavior and measured the throughput (under various network
load conditions) of the Wireless LAN protocol, created the scenario, and studied the performance of
the CSMA/CA MAC layer protocol through simulation and compared it with CSMA/CD.
DISCUSSION:
Q1. Write the advantages and disadvantages of star topologies.
Q2. How CSMA/CA differs from CSMA/CD. Explain in brief?
Q3. What do you mean by topology? What are the most popular topologies?
Q4. Discuss the MAC layer functions of IEEE 802.11.
AIM: Observe the behavior & measure the throughput of reliable data transfer protocols under various
Bit error rates for following DLL layer protocols: Implementation and study of Stop and Wait protocol
HARDWARE REQUIRED: LAN Trainer kit ST5002A, 2-CAT5 Cables, 42 mm Patch Cords
THEORY:
The very basic need of communication is that there should be at least one sender and one
receiver. They should have good coordination and intelligence while communicating with each
other; this is achieved by the Flow Control and Error Control functions. Of these two,
the most important is Flow Control.
Flow Control:
Flow Control coordinates the amount of data that can be sent before receiving acknowledgment
and is one of the most important duties of data link layer. In most protocols, flow control is set
of procedures that tell the sender how much data it can transmit before it must wait for an
acknowledgment from receiver. The flow of data must not be allowed to overwhelm receiver.
Any receiving device has a limited receiving speed at which it can process incoming data & a
limited amount of memory in which to store the incoming data. The receiver must be able to
inform the sender before those limits are reached & to request that the sender send fewer frames
or stop temporarily. Incoming data must be checked & processed before they can be used. The
rate of such processing is often slower than the rate of transmission. For this reason, each
receiver has block of memory, called buffer, reserved for storing incoming data until they are
processed. If the buffer begins to fill up, the receiver must be able to tell the sender to halt
transmission until it is once again able to receive.
Stop-N-wait Protocol:
This is one of the simplest flow and error control protocols. The sender keeps a copy of the last frame
transmitted until it receives an acknowledgment for that frame. Keeping a copy of the last frame
transmitted allows the sender to retransmit lost or damaged frames until they are received
correctly. For identification purposes, both data frames and ACK frames are numbered. This
numbering allows identification of data frames in case of duplicate transmission. A
damaged frame and a lost frame are treated in the same manner by the receiver. If the receiver detects an
error in the received frame, it simply discards the frame and sends no acknowledgment. If the
receiver receives a frame that is out of order, it discards the out-of-order frame. The
sender starts a timer when it sends a frame. If an acknowledgment is not received within the
allotted time period, the sender assumes that the frame was lost or damaged and resends it.
Case 1: Normal operation
Sender sends frame & waits for the next acknowledgment, when it gets the acknowledgment it
sends the ACK number frame. The acknowledgment must be received before the timer set for
each frame expires.
Case 2: Lost frames or damaged frames
A lost frame is handled by the receiver; when the receiver receives a damaged frame it simply
discards it, remains silent, and keeps the ACK number value as it is.
Case 3: Acknowledgment Lost:
A lost or damaged ACK frame is handled by the sender; if the sender receives a damaged ACK it
simply discards it, and when the timer expires it retransmits the frame. The same happens in the
case of a lost ACK.
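The cost of stop-and-wait's one-frame-at-a-time operation can be quantified: in the error-free case the sender transmits for one frame time Tf and then idles for a round trip 2Tp awaiting the ACK, so the link utilization is U = Tf/(Tf + 2Tp). A C sketch (the function name is illustrative):

```c
/* Best-case stop-and-wait link utilisation: the sender transmits for
   one frame time t_frame, then sits idle for a round trip (2 * t_prop,
   the one-way propagation delay) waiting for the ACK.
   U = Tf / (Tf + 2*Tp) = 1 / (1 + 2a), with a = Tp/Tf. */
double stop_and_wait_utilisation(double t_frame, double t_prop)
{
    return t_frame / (t_frame + 2.0 * t_prop);
}
```

For example, with Tf = 1 ms and Tp = 0.5 ms the link is busy only half the time, which is the inefficiency that the sliding window protocols of the next experiment address.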
Flowchart (receiver): if the received frame is damaged or lost, discard it; otherwise accept it.
Flowchart (sender): send a frame, start the timer, and retransmit on timeout.
PROCEDURE:
1. Connect the computers to the ST5002A using RJ45cables
2. Switch on the power supply ofST5002A.
3. Open ST5002Asoftware
4. Click on ‘Star Topology’
Figure5(C).4: Step – 5
6. You will find IP address and Computer name of each connected node.
Figure5(C).5: Step – 6
Figure5(C).6: Step – 7
8. Select all the other settings, such as frame size, inter-frame delay, and bit error rate
Figure5(C).7: Step – 8
Figure5(C).8: Step – 8
9. Repeat Step-6 on each node connected toST5002A
10. Now enter the destination node Name/IP Address.
11. Now choose the parameters (The parameters should be same on each node connected to
the ST5002A)
12. Click on ‘Save Parameters’
13. ‘Ready for communication’ message will be displayed in status window of both source node
& destination node
14. Click on ‘Open’ to open a .txt file to transmit
15. Now click on open
16. Now click on send data
Figure5(C).9: Step – 16
17. You can see a frame status with blue color and Acknowledge status with red color
Figure5(C).10: Step – 17
Figure5(C).12: Step – 19
Figure5(C).13: Step – 20
21. To start continuous file transfer, check the 'Ack. Send enable' button on the receiver side
25. Right click with the mouse and the user will get the listing 'Plot All'
Figure 5(C).16: Step – 24
26. Select 'Plot All' and a graph will be displayed; the user can also zoom into a particular area by
selecting that area
RESULT: We have observed the behavior and measured the throughput of reliable data transfer protocols
under various bit error rates for the DLL layer Stop and Wait protocol.
DISCUSSION:
Q1. List the four basic network topologies and explain them giving all the relevant features.
Q2. Explain why collision is an issue in a random-access protocol but not in controlled access or
channelizing protocols?
Q3. List the responsibilities of the data link layer in the Internet model.
AIM: Observe the behavior & measure the throughput under various network load conditions
for following MAC layer Protocols: Sliding Window: Go-Back-N and Selective Repeat
HARDWARE REQUIRED: LAN Trainer kit ST5002A, 2-CAT5 Cables, 42 mm Patch Cords
THEORY:
Sliding window:
Sliding window algorithms are a method of flow control for network data transfers. TCP, the
Internet's stream transfer protocol, uses a sliding window algorithm.
A sliding window algorithm places a buffer between the application program and the network
data flow. For TCP, the buffer is typically in the operating system kernel, but this is more of an
implementation detail than a hard-and-fast requirement.
Data received from the network is stored in the buffer, from whence the application can read at
its own pace. As the application reads data, buffer space is freed up to accept more input from
the network. The window is the amount of data that can be "read ahead" - the size of the buffer,
less the amount of valid data stored in it. Window announcements are used to inform the remote
host of the current window size.
If the local application can't process data fast enough, the window size will drop to zero and the
remote host will stop sending data. After the local application has processed some of the
queued data, the window size rises, and the remote host starts transmitting again.
On the other hand, if the local application can process data at the rate it's being transferred,
sliding window still gives us an advantage. If the window size is larger than the packet size,
then multiple packets can be outstanding in the network, since the sender knows that buffer
space is available on the receiver to hold all of them. Ideally, a steady-state condition can be
reached where a series of packets (in the forward direction) and window announcements (in the
reverse direction) are constantly in transit. As each new window announcement is received by
the sender, more data packets are transmitted. As the application reads data from the buffer
(remember, we're assuming the application can keep up with the network), more window
announcements are generated. Keeping a series of data packets in transit ensures the efficient
use of network resources.
The next two protocols are bidirectional protocols that belong to a class called sliding window
protocols. The two differ among themselves in terms of efficiency, complexity, and buffer
requirements, as discussed later. In these, as in all sliding window protocols, each outbound
frame contains a sequence number, ranging from 0 up to some maximum. The maximum is
usually 2^n - 1, so the sequence number fits exactly in an n-bit field. The stop-and-wait sliding
window protocol uses n = 1, restricting the sequence numbers to 0 and 1, but more sophisticated
versions can use an arbitrary n. The essence of all sliding window protocols is that at any instant of
time, the sender maintains a set of sequence numbers corresponding to frames it is permitted to
send. These frames are said to fall within the sending window.
Similarly, the receiver also maintains a receiving window corresponding to the set of frames it
is permitted to accept. The sender’s window and the receiver’s window need not have the same
lower and upper limits or even have the same size. In some protocols they are fixed in size, but
in others they can grow or shrink over the course of time as frames are sent and received.
Although these protocols give the data link layer more freedom about the order in which it may
send and receive frames, we have definitely not dropped the requirement that the protocol must
deliver packets to the destination network layer in the same order they were passed to the data
link layer on the sending machine. Nor have we changed the requirement that the physical
communication channel is ‘‘wire-like,’’ that is, it must deliver all frames in the order sent. The
sequence numbers within the sender’s window represent frames that have been sent or can be
sent but are as yet not acknowledged. Whenever a new packet arrives from the network layer, it
is given the next highest sequence number, and the upper edge of the window is advanced by
one. When an acknowledgement comes in, the lower edge is advanced by one. In this way the
window continuously maintains a list of unacknowledged frames. Figure 113 shows an
example. Since frames currently within the sender’s window may ultimately be lost or damaged
in transit, the sender must keep all these frames in its memory for possible retransmission.
Thus, if the maximum window size is n, the sender needs n buffers to hold the unacknowledged
frames. If the window ever grows to its maximum size, the sending data link layer must
forcibly shut off the network layer until another buffer becomes free. The receiving data link
layer’s window corresponds to the frames it may accept. Any frame falling outside the window
is discarded without comment. When a frame whose sequence number is equal to the lower
edge of the window is received, it is passed to the network layer, an acknowledgement is
generated, and the window is rotated by one. Unlike the sender’s window, the receiver’s
window always remains at its initial size.
Go-Back-N Protocol:
The go-back-N protocol is used for reliable data transfer. In this protocol, packets to be
transmitted from A to B are numbered sequentially. This sequence number (SN) is sent in the
packet header and it is checked by the receiver.
Our model is a simple version of the go back n protocol. It consists of a Sender object that
transmits packets to another object and receives acknowledgments for packets correctly
received. A Receiver object accepts packets from the Sender object and transmits
acknowledgments for packets that are received correctly.
In order to simplify our model, we assume that the round trip time (RTT) measured from the
time the sender transmitted a packet until it gets an acknowledgment back (assuming the packet
was correctly received by the receiver and also that the acknowledgment was received correctly
by the transmitter) is included in the ``Channel’’ object. This assumption is useful to keep the
cardinality of the state space at a reasonable size. (The user is encouraged to relax this
assumption and see what happens.) We also assume that the ACK packets are small enough so
that the transmission delay is negligible. Therefore only the propagation delays are included in
the channel object. Furthermore, no sender timeout is modeled; when the sender has transmitted
all packets in a window it ``goes back’’ and retransmits from the beginning of the window. In
order to obtain a Markov model, all random variables are assumed to be exponentially
distributed.
In the first model we present we assume that both packets and acknowledgments may be lost.
However, the receiver only sends ACK packets back to the transmitter when it accepts a packet.
The Sender object has two state variables: one (Win_begin) indicates the beginning of the
transmitter window, and the other (SN) points to the sequence number of the next packet to be
transmitted. The Receiver object has only one state variable (RN) that indicates the sequence
number the receiver is expecting. That is, a packet with sequence number equal to RN is
accepted, if received correctly. The Channel object has two variables. The first (ACK_RN)
indicates the sequence number of the last ACK sent by the receiver. (Note that this last
ACK could have been lost.) The second variable (N_acks) indicates the number of ACK
packets that are in transit in the channel. The model is shown in figure 114.
When pipelining frames over an unreliable channel, many frames may be in transit before the
sender even finds out that anything has gone wrong. When a damaged frame arrives at the
receiver, it obviously should be discarded, but
what should the receiver do with all the correct frames following it? Remember that the
receiving data link layer is obligated to hand packets to the network layer in sequence. In figure
we see the effects of pipelining on error recovery. We will now examine it in some detail.
Two basic approaches are available for dealing with errors in the presence of pipelining. One
way, called go back n, is for the receiver simply to discard all subsequent frames, sending no
acknowledgements for the discarded frames. This strategy corresponds to a receive window of
size 1. In other words, the data link layer refuses to accept any frame except the next one it must
give to the network layer. If the sender’s window fills up before the timer runs out, the pipeline
will begin to empty. Eventually, the sender will time out and retransmit all unacknowledged
frames in order, starting with the damaged or lost one. This approach can waste a lot of
bandwidth if the error rate is high. In figure we see go back n for the case in which the
receiver’s window is large. Frames 0 and 1 are correctly received and acknowledged. Frame 2,
however, is damaged or lost. The sender, unaware of this problem, continues to send frames
until the timer for frame 2 expires. Then it backs up to frame 2 and starts all over with it,
sending 2, 3, 4, etc. all over again.
When this frame arrives at the receiver, a check is made to see whether it falls within the receiver’s
window. Unfortunately, in figure 3-20(b) frame 0 is within the new window, so it will be
accepted. The receiver sends a piggybacked acknowledgement for frame 6, since 0 through 6
have been received. The sender is happy to learn that all its transmitted frames did actually
arrive correctly, so it advances its window and immediately sends frames 7, 0, 1, 2, 3, 4, and 5.
Frame 7 will be accepted by the receiver and its packet will be passed directly to the network
layer. Immediately thereafter, the receiving data link layer checks to see if it has a valid frame
0 already, discovers that it does, and passes the embedded packet to the network layer.
Consequently, the network layer gets an incorrect packet, and the protocol fails.
PROCEDURE:
1. Implement Star topology with the help of ST5002A.
2. Install the ST5002A software on each node or system connected on ST5002A as per the
steps given in the ‘installation procedure’, in case it is not installed
3. Run the ST5002A software on each node connected to the ST5002A
4. Click on ‘Star Topology’
5. Enter the remote node’s IP address or computer name in the ‘Destination IP / Name’ text
window.
6. Select the flow control, either Go-Back-N or ‘Selective Repeat’ DLL protocol
Figure 5(D).5: Step 6
7. Select all other settings like frame size, inter-frame delay, & bit error rate
8. Save all parameters by clicking on ‘Save Parameters’ button
9. Browse a file to transmit by clicking on ‘Open’ button
10. Click ‘Send’ button to transmit the file.
RESULT: We have observed the behavior & measured the throughput (under various network load
conditions) for the Sliding Window: Go-Back-N and Selective Repeat MAC layer protocols.
DISCUSSION:
Q3. How is the Go-Back-N ARQ protocol different from the selective repeat sliding window protocol?
AIM: To create the scenario and study the performance of token bus protocols through
simulation.
HARDWARE REQUIRED: LAN Trainer kit ST5002A, 2-CAT5 Cables, 42 mm Patch Cords,
THEORY:
SPX can detect whether a packet was not received in its entire form. If it discovers a packet has been lost or
corrupted, SPX will resend the packet.
The SPX information is encapsulated by IPX. That is, its fields sit inside the data field of the
IPX datagram. The SPX packet, like the TCP segment, contains a number of fields to ensure
data reliability. An SPX packet consists of a 42-byte header followed by 0 to 534 bytes of data.
An SPX packet can be as small as 42 bytes (the size of its header) or as large as 576 bytes.
Addressing in IPX/SPX:
Just as with TCP/IP-based networks, IPX/SPX-based networks require that each node on a
network be assigned a unique address to avoid communication conflicts. Because IPX is the
component of the protocol that handles addressing, addresses on an IPX/SPX network are
called IPX addresses. IPX addresses contain two parts: the network address (also known as the
external network number) and the node address.
Maintaining network addresses for clients running IPX/SPX is somewhat easier than
maintaining addresses for TCP/IP-based networks, because IPX/SPX-based networks primarily
rely on the MAC address for each workstation. To begin, the network administrator chooses a
network address when installing the (older) NetWare operating system on a server. The
network address must be an 8-character hexadecimal address, which means that each of its
characters can have a value of either 0–9 or A–F. An example of a valid network address is 000008A2. The
network address then becomes the first part of the IPX address on all nodes that use the
particular server as their primary server.
The second part of an IPX address, the node address, is by default equal to the network
device’s MAC address. Because every network interface card should have a unique MAC
address, no possibility of duplicating IPX addresses exists under this system (unless MAC
addresses have been manually altered). In addition, the use of MAC addresses means that you
need not configure addresses for the IPX/SPX protocol on each client workstation. Instead,
they are already defined by the NIC. Adding a MAC address to the network address example
used previously, a complete IPX address for a workstation on the network might be
000008A2:0060973E97F3.
Core Network and Transport layer protocols are normally included with your computer’s
operating system. When enabled, these protocols attempt to bind with the network interfaces on
your computer. Binding is the process of assigning one network component to work with
another. You can manually bind protocols that are not already associated with a network
interface. For optimal network performance, you should bind only those protocols that you
absolutely need. For example, a Windows Server 2003 server will attempt to use bound
protocols in the order in which they appear in the protocol listing until it finds the correct one
for the response at hand. If not all bound protocols are necessary, this approach wastes
processing time.
Normally, a workstation running the Windows XP operating system would, by default, have the
TCP/IP protocol bound to its network interfaces. The following exercise shows you how to
install the NW Link IPX/SPX/NetBIOS Compatible Transport protocol (which is not, by
default, bound to interfaces) on a Windows XP workstation:
Log on to the workstation as an Administrator.
1. Click Start, then click My Network Places. The My Network Places window appears.
2. From the Network Tasks list, click View network connections. The Network
Connections window appears.
3. Right-click the icon that represents your network adapter, and click Properties in the
shortcut menu. The network adapter’s Properties dialog box appears.
4. Click Install…. The Select Network Component Type dialog box appears.
5. From the list of network components, select Protocol, and then click Add…. The Select
Network Protocol dialog box appears, as shown in figure133.
7. Wait a moment while Windows XP adds the protocol to the network components
already bound to your NIC. Your network adapter Properties dialog box appears, now
with the NW Link NetBIOS and the NW Link IPX/SPX/ NetBIOS Compatible
Transport protocols listed under the “This connection uses the following items:”
heading.
8. Click Close to save your changes, and then close the Network Connections window. On
a Windows XP workstation, you can install any other protocol in the same manner as
you installed the NW Link protocol.
It is possible to bind multiple protocols to the same network adapter. In fact, this is necessary
on networks that use more than one type of protocol. In addition, a workstation may have
multiple NICs, in which case several different protocols might be bound to each NIC. What’s
more, the same protocol may be configured differently on different NICs. For example, let’s
say you managed a NetWare server that contained two NICs and provided both TCP/IP and
IPX/SPX communications to many clients. Using the network operating system’s protocol
configuration utility, you would need to configure TCP/IP separately for each NIC. Similarly,
you would need to configure IPX/SPX separately for each NIC. If you did not configure the
protocols for each NIC separately, clients would not know which NIC to address when sending
and receiving information to and from the server.
PROCEDURE:
1. Connect nodes to the bus topology section on ST5002A.
2. Connect End Terminators on ST5002A in the Bus topology section.
3. Check whether the SPX/IPX protocol is installed on all the systems before
performing the experiment. (Refer to ‘How to Install SPX/IPX protocol’.)
4. Open the ST5002A software.
5. Click on ‘Bus Topology’
11. Now you can use both protocols, IPX (the data will be broadcast to all nodes)
and SPX (the data will be sent to the destination node), for transmitting the data
RESULT: We have studied the performance of token bus protocols through simulation.
DISCUSSION:
Q1. List out the advantages of the token passing protocol over the CSMA/CD protocol.
AIM: To create the scenario and study the performance of token ring protocols through
simulation.
SOFTWARE REQUIRED: ST5002A Software
HARDWARE REQUIRED: LAN Trainer kit ST5002A, 2-CAT5 Cables, 42 mm Patch Cords,
4- DB9 Cables
THEORY:
Token Ring is a network technology first developed by IBM in the 1980s. In the early 1990s,
the Token Ring architecture competed strongly with Ethernet to be the most popular access
method. Since that time, the economics, speed, and reliability of Ethernet have improved,
leaving Token Ring behind. Because IBM developed Token Ring, a few IBM-centric IT
Departments continue to use it. Other network managers have changed their former Token Ring
networks into Ethernet networks.
Token Ring networks have traditionally been more expensive to implement than Ethernet
networks. Proponents of the Token Ring technology argue that, although some of its
connectivity hardware is more expensive, its reliability results in less downtime and lower
network management costs than Ethernet. On a practical level, Token Ring has probably lost
the battle for superiority because its developers were slower to develop high-speed standards.
Token Ring networks can run at 4, 16, or 100 Mbps. The 100-Mbps Token Ring
standard, finalized in 1999, is known as HSTR (High-Speed Token Ring). HSTR can use either
twisted-pair or fiber-optic cable as its transmission medium. Although it is as reliable and
efficient, it is still less common than Ethernet because of its higher cost and lagging speed.
Token Ring networks use the token-passing routine and a star-ring hybrid physical topology. In
token passing, a 3-byte packet, called a token, is transmitted from one node to another in a
circular fashion around the ring. When a station has something to send, it picks up the token,
changes it to a frame, and then adds the header, information, and trailer fields. The header
includes the address of the destination node. All nodes read the frame as it traverses the ring to
determine whether they are the intended recipient of the message. If they are, they pick up the
data, and then retransmit the frame to the next station on the ring. When the frame finally
reaches the originating station, the originating workstation reissues a free token that can then be
used by another station. The token-passing control scheme avoids the possibility for collisions.
This fact makes Token Ring more reliable and efficient than Ethernet. It also does not impose
distance limitations on the length of a LAN segment, unlike CSMA/CD.
On a Token Ring network, one workstation, called the active monitor, acts as the
controller for token passing. Specifically, the active monitor maintains the timing for
ring passing, monitors token and frame transmission, detects lost tokens, and corrects
errors when a timing error or other disruption occurs. Only one workstation on the
ring can act as the active monitor at any given time.
Token Ring Operation:
Token Ring and IEEE 802.5 are two principal examples of token-passing networks
(FDDI is the other). Token-passing networks move a small frame, called a token,
around the network. Possession of the token grants the right to transmit. If a node
receiving the token has no information to send, it passes the token to the next end
station. Each station can hold the token for a maximum period of time.
If a station possessing the token does have information to transmit, it seizes the
token, alters 1 bit of the token (which turns the token into a start-of-frame sequence),
appends the information that it wants to transmit, and sends this information to the
next station on the ring. While the information frame is circling the ring, no token is
on the network (unless the ring supports early token release), which means that other
stations wanting to transmit must wait. Therefore, collisions cannot occur in Token
Ring networks. If early token release is supported, a new token can be released when
frame transmission is complete.
The information frame circulates the ring until it reaches the intended destination
station, which copies the information for further processing. The information frame
continues to circle the ring and is finally removed when it reaches the sending station.
The sending station can check the returning frame to see whether the frame was seen
and subsequently copied by the destination.
Unlike CSMA/CD networks (such as Ethernet), token-passing networks are
deterministic, which means that it is possible to calculate the maximum time that will
pass before any end station will be capable of transmitting. This feature and several
reliability features, which are discussed in the section "Fault-Management
Mechanisms," later in this chapter, make Token Ring networks ideal for applications
in which delay must be predictable and robust network operation is important.
Factory automation environments are examples of such applications.
Priority System:
Token Ring networks use a sophisticated priority system that permits certain user-
designated, high-priority stations to use the network more frequently. Token Ring
frames have two fields that control priority: the priority field and the reservation
field.
Only stations with a priority equal to or higher than the priority value contained in a
token can seize that token. After the token is seized and changed to an information
frame, only stations with a priority value higher than that of the transmitting station
can reserve the token for the next pass around the network. When the next token is
generated, it includes the higher priority of the reserving station. Stations that raise a
token's priority level must reinstate the previous priority after their transmission is
complete.
Fault-Management Mechanisms:
Token Ring networks employ several mechanisms for detecting and compensating
for network faults. For example, one station in the Token Ring network is selected to
be the active monitor. This station, which potentially can be any station on the
network, acts as a centralized source of timing information for other ring stations and
performs a variety of ring- maintenance functions. One of these functions is the
removal of continuously circulating frames from the ring. When a sending device
fails, its frame may continue to circle the ring. This can prevent other stations from
transmitting their own frames and essentially can lock up the network. The active
monitor can detect such frames, remove them from the ring, and generate a new
token.
The IBM Token Ring network's star topology also contributes to overall network
reliability. Because all information in a Token Ring network is seen by active
MSAUs, these devices can be programmed to check for problems and selectively
remove stations from the ring, if necessary.
A Token Ring algorithm called beaconing detects and tries to repair certain network
faults. Whenever a station detects a serious problem with the network (such as a
cable break), it sends a beacon frame, which defines a failure domain. This domain
includes the station reporting the failure, its nearest active upstream neighbour
(NAUN), and everything in between. Beaconing initiates a process called auto
reconfiguration, in which nodes within the failure domain automatically perform
diagnostics in an attempt to reconfigure the network around the failed areas.
Physically, the MSAU can accomplish this through electrical reconfiguration.
Frame Format:
Token Ring and IEEE 802.5 support two basic frame types: tokens and
data/command frames. Tokens are 3 bytes in length and consist of a start delimiter,
an access control byte, and an end delimiter. Data/command frames vary in size,
depending on the size of the Information field. Data frames carry information for
upper-layer protocols, while command frames contain control information and have
no data for upper-layer protocols.
PROCEDURE:
1. Connect DB9 Cables to ST5002A as well as each computer node connected
to the ST5002A
2. Connect patch cords to ‘Ring In’ & ‘Ring Out’ terminals on the ST5002A to
form a RING
3. Run the ST5002A software on each computer connected to the ST5002A and
follow the steps
4. Click on “Ring Topology”
5. Select a node to treat it as NODE1 by checking the ‘Decide this as NODE1’
checkbox; the other nodes are automatically decided as NODE2, 3, & 4 according
to the connections
RESULT: We have studied the performance of token ring protocols through simulation.
DISCUSSION:
Q1. How is the token ring protocol different from the token bus protocol?
SOFTWARE/HARDWARE USED:
THEORY:
In telecommunication and data storage, Manchester code (also known as phase encoding,
or PE) is a line code in which the encoding of each data bit is either low then high, or high
then low, for equal time.
Manchester code always has a transition at the middle of each bit period and may
(depending on the information to be transmitted) have a transition at the start of the period
also. The direction of the mid-bit transition indicates the data. Transitions at the period
boundaries do not carry information. They exist only to place the signal in the correct state to
allow the mid-bit transition.
PROCEDURE:
Creating a project in Vivado 2017.4
Step1: Open Vivado 2017.4
Step5: Select RTL project and also check do not specify sources at this time check box and
click next.
Step6: Select the device on which the design will be implemented and click next.
Step9: Click on create file. Type the file name, select the file type as VHDL, and click OK. Now,
click on Finish. We can add inputs and outputs in the next dialog box or, if we want to do it
later, click on OK.
Step10: Under the Source section, double click on the newly created file.
Step11: Type the code for Manchester encoder and save the file.
Step12: Under Flow Navigator section, Click on Run Simulation then Run Behavioral
Simulation.
Step13: Under Flow Navigator section, Click on Run Synthesis and click OK.
After completion of the synthesis task, a dialog box will appear; click on Open Synthesis Design.
Here, under the I/O Ports section, we have to list out the pins on the port for the inputs and outputs.
This can also be done by writing a constraint file and adding it as a source for constraints.
Step14: Under Flow Navigator section, Click on Run Implementation and click OK.
Step15: Under Flow Navigator section, Click on Generate Bit-stream and click OK.
Step16: Under Flow Navigator section, Click on Open Hardware Manager.
Now connect the FPGA board with the computer through USB cable. Then click on Open
Target. The computer will then search for the connected board and will show the device. We
also have to load the bit stream file on the board (by default it is already selected), then click
OK.
library IEEE;
use IEEE.STD_LOGIC_1164.ALL;
entity Manchester_encoder_clk_dvdr is
Port ( clk : in STD_LOGIC;
rst : in STD_LOGIC;
x : in STD_LOGIC;
y : out STD_LOGIC_VECTOR (1 downto 0));
end Manchester_encoder_clk_dvdr;
-- Architecture body assumed for illustration (the original listing omits it);
-- '0' is encoded high-then-low and '1' low-then-high on the two half-bit
-- outputs (IEEE 802.3 convention assumed):
architecture Behavioral of Manchester_encoder_clk_dvdr is
begin
y <= "00" when rst = '1' else
     "10" when x = '0' else    -- high then low
     "01";                     -- low then high
end Behavioral;
library IEEE;
use IEEE.STD_LOGIC_1164.ALL;
entity clk_dvdr is
Port ( clk : in STD_LOGIC;
fpga_clk : out STD_LOGIC);
end clk_dvdr;
-- Architecture header and declarations assumed (the original listing omits
-- them); clk_const sets the division ratio:
architecture Behavioral of clk_dvdr is
constant clk_const : integer := 50000000;
signal s : integer range 0 to clk_const + 1 := 0;
signal t_clk : std_logic := '0';
begin
CLK_division : process(clk)
begin
if(clk'event and clk='1')then
if (s<=clk_const)then
s <= s + 1;
else
s <= 0;
t_clk<= not(t_clk);
end if;
end if;
end process CLK_division;
fpga_clk<= t_clk;
end Behavioral;
LIBRARY ieee;
USE ieee.std_logic_1164.ALL;
ENTITY tb_manchester_encoder IS
END tb_manchester_encoder;
-- Architecture header, component declaration, input signals, instantiation,
-- and clock generation assumed (not shown in the original listing):
ARCHITECTURE behavior OF tb_manchester_encoder IS
COMPONENT Manchester_encoder_clk_dvdr
Port ( clk : in STD_LOGIC;
rst : in STD_LOGIC;
x : in STD_LOGIC;
y : out STD_LOGIC_VECTOR (1 downto 0));
END COMPONENT;
--Inputs
signal clk : std_logic := '1';
signal rst : std_logic := '0';
signal x : std_logic := '0';
--Outputs
signal y : std_logic_vector(1 downto 0);
BEGIN
uut: Manchester_encoder_clk_dvdr PORT MAP (clk => clk, rst => rst, x => x, y => y);
clk <= not clk after 10 ns;
-- Stimulus process
stim_proc: process
begin
-- hold reset state for 100 ns.
rst<='1'; x <= '0';
wait for 200 ns;
rst<= '0';
wait for 200 ns;
x <= '1';
wait for 200 ns;
x <= '0';
wait for 400 ns;
x <= '1';
wait for 600 ns;
x <= '0';
wait for 200 ns;
x <= '1';
wait for 200 ns;
x <= '0';
wait for 400 ns;
x <= '1';
wait for 400 ns;
x <= '0';
wait for 200 ns;
wait;
end process;
END;
OUTPUT:
RESULT:
Manchester code always has a transition at the middle of each bit period and may (depending
on the information to be transmitted) have a transition at the start of the period also. For this,
software and hardware realisation is done through Xilinx VIVADO and the NEXYS 4 DDR.
VHDL code simulation results are shown in the form of waveforms; values are forced by the
user. Functionality of the Manchester encoder is verified through the simulation result.
The simulation result is verified on hardware (NEXYS 4 DDR kit), where the output is shown
in the form of LEDs after applying a particular input on a button of the NEXYS 4 DDR
DISCUSSION:
AIM:
Software and hardware realization of the NRZ Encoding schemes
SOFTWARE/HARDWARE USED:
VIVADO 2017.4, NEXYS4 DDR FPGA Trainer Kit
THEORY:
It is a unipolar line coding scheme in which a positive voltage defines bit 1 and zero voltage
defines bit 0. The signal does not return to zero at the middle of the bit; thus it is called NRZ. The
pulses in NRZ have more energy than a return-to-zero (RZ) code, which also has an
additional rest state beside the conditions for ones and zeros. NRZ is not inherently a self-
clocking signal, so some additional synchronization technique must be used for avoiding bit
slips; examples of such techniques are a run-length-limited constraint and a parallel
synchronization signal.
library IEEE;
use IEEE.STD_LOGIC_1164.ALL;
entity NRZ_line_coding is
Port ( rst : in STD_LOGIC;
clk : in STD_LOGIC;
x : in STD_LOGIC;
unipolar_NRZ : out STD_LOGIC_VECTOR (1 downto 0);
polar_NRZ : out STD_LOGIC_VECTOR (1 downto 0);
bipolar_NRZ : out STD_LOGIC_VECTOR (1 downto 0));
end NRZ_line_coding;
-- Architecture header and declarations assumed (the original listing omits them):
architecture Behavioral of NRZ_line_coding is
constant clk_const : integer := 50000000;
signal s : integer range 0 to clk_const + 1 := 0;
signal t_clk : std_logic := '0';
type state is (s0, s1, s2, s3, s4, s5);
signal next_st : state := s0;
begin
CLK_division : process(clk)
begin
if(clk'event and clk='1')then
if (s<=clk_const)then
s <= s + 1;
else
s <= 0;
t_clk<= not(t_clk);
end if;
end if;
end process CLK_division;
process(rst, t_clk)
begin
if(rst='1')then
unipolar_NRZ<= "00";
polar_NRZ<= "11";
bipolar_NRZ<= "00";
next_st<= s0;
elsif(t_clk'event and t_clk = '1')then
case next_st is
when s0 =>
if(x='0')then
unipolar_NRZ<= "00";
polar_NRZ<= "11";
bipolar_NRZ<= "00";
next_st<= s1;
else
unipolar_NRZ<= "01";
polar_NRZ<= "01";
bipolar_NRZ<= "01";
next_st<= s2;
end if;
when s1 =>
if(x='0')then
unipolar_NRZ<= "00";
polar_NRZ<= "11";
bipolar_NRZ<= "00";
next_st<= s4;
else
unipolar_NRZ<= "01";
polar_NRZ<= "01";
bipolar_NRZ<= "01";
next_st<= s2;
end if;
when s2 =>
if(x='0')then
unipolar_NRZ<= "00";
polar_NRZ<= "11";
bipolar_NRZ<= "00";
next_st<= s4;
else
unipolar_NRZ<= "01";
polar_NRZ<= "01";
bipolar_NRZ<= "11";
next_st<= s3;
end if;
when s3 =>
if(x='0')then
unipolar_NRZ<= "00";
polar_NRZ<= "11";
bipolar_NRZ<= "00";
next_st<= s5;
else
unipolar_NRZ<= "01";
polar_NRZ<= "01";
bipolar_NRZ<= "01";
next_st<= s2;
end if;
when s4 =>
if(x='0')then
unipolar_NRZ<= "00";
polar_NRZ<= "11";
bipolar_NRZ<= "00";
next_st<= s4;
else
unipolar_NRZ<= "01";
polar_NRZ<= "01";
bipolar_NRZ<= "11";
next_st<= s3;
end if;
when s5 =>
if(x='0')then
unipolar_NRZ<= "00";
polar_NRZ<= "11";
bipolar_NRZ<= "00";
next_st<= s5;
else
unipolar_NRZ<= "01";
polar_NRZ<= "01";
bipolar_NRZ<= "01";
next_st<= s2;
end if;
end case;
end if;
end process;
end Behavioral;
UCF File for Hardware realization through FPGA (Nexys 4 DDR, Artix-7 family):
OUTPUT:
library IEEE;
use IEEE.STD_LOGIC_1164.ALL;
entity RZ_Encoder is
Port ( clk : in STD_LOGIC;
rst : in STD_LOGIC;
x : in STD_LOGIC;
unp_rz : out STD_LOGIC_VECTOR (1 downto 0);
bpl_rz : out STD_LOGIC_VECTOR (1 downto 0));
end RZ_Encoder;
-- Architecture header and declarations assumed (the original listing omits them):
architecture Behavioral of RZ_Encoder is
type state is (s0, s1, s2, s3);
signal next_st : state := s0;
signal t_clk : std_logic := '0';
begin
process(rst, t_clk)
begin
if(rst='1')then
unp_rz<= "00";
bpl_rz<= "00";
next_st<= s0;
elsif(t_clk'event and t_clk = '1')then
case next_st is
when s0 =>
if(x='0')then
unp_rz<= "00";
bpl_rz<= "11";
next_st<= s1;
else
unp_rz<= "01";
bpl_rz<= "01";
next_st<= s2;
end if;
when s1 =>
unp_rz<= "00";
bpl_rz<= "00";
next_st<= s3;
when s2 =>
unp_rz<= "00";
bpl_rz<= "00";
next_st<= s3;
when s3 =>
if(x='0')then
unp_rz<= "00";
bpl_rz<= "11";
next_st<= s1;
else
unp_rz<= "01";
bpl_rz<= "01";
next_st<= s2;
end if;
when others => null;
end case;
end if;
end process;
end Behavioral;
LIBRARY ieee;
USE ieee.std_logic_1164.ALL;
ENTITY tb_rz_encoder IS
END tb_rz_encoder;
-- Architecture header assumed (not shown in the original listing):
ARCHITECTURE behavior OF tb_rz_encoder IS
COMPONENT RZ_Encoder is
Port ( clk : in STD_LOGIC;
rst : in STD_LOGIC;
x : in STD_LOGIC;
unp_rz : out STD_LOGIC_VECTOR (1 downto 0);
bpl_rz : out STD_LOGIC_VECTOR (1 downto 0));
end COMPONENT;
--Inputs
signal clk : std_logic := '1';
signal rst : std_logic := '0';
signal x : std_logic := '0';
--Outputs
signal unp_rz : std_logic_vector(1 downto 0);
signal bpl_rz : std_logic_vector(1 downto 0);
BEGIN
-- uut instantiation and clock generation assumed:
uut: RZ_Encoder PORT MAP (clk => clk, rst => rst, x => x, unp_rz => unp_rz, bpl_rz => bpl_rz);
clk <= not clk after 10 ns;
-- Stimulus process
stim_proc: process
begin
-- hold reset state for 100 ns.
rst<='1'; x <= '0';
wait for 200 ns;
UCF File for Hardware realization through FPGA (Nexys 4 DDR, Artix-7 family):
OUTPUT:
RESULT:
It is a unipolar line coding scheme in which positive voltage defines bit 1 and zero voltage
defines bit 0. The signal does not return to zero at the middle of the bit; thus it is called NRZ. For
this, software and hardware realisation is done through Xilinx VIVADO and the NEXYS 4 DDR.
VHDL code simulation results are shown in the form of waveforms; values are forced by the
user. Functionality of the NRZ encoder is verified through the simulation result.
The simulation result is verified on hardware (NEXYS 4 DDR kit), where the output is shown
in the form of LEDs after applying a particular input on a button of the NEXYS 4 DDR.
DISCUSSION:
Q.1 Draw the diagram for signal 011110100 using NRZ encoding scheme.
Q.2 Write down different encoding scheme.
Q.3 What is digital data to digital signal conversion? Specify the names of the techniques that fall in this
category.
Q.4 What would be minimum bandwidth of Manchester and differential Manchester?
AIM:
Software and hardware realization of the CRC Error control schemes.
SOFTWARE/HARDWARE USED:
VIVADO 2017.4, NEXYS4 DDR FPGA Trainer Kit
THEORY:
Cyclic Redundancy Check (CRC) is a block code used to detect accidental changes to data
transmitted via telecommunications networks and storage devices. CRC involves binary
division of the data bits being sent by a predetermined divisor agreed upon by the
communicating systems. The divisor is generated using polynomials, so CRC is also called a
polynomial code checksum. The codes used for cyclic redundancy check, and thereby error
detection, are known as CRC codes (cyclic redundancy check codes). Cyclic redundancy
check codes are shortened cyclic codes. These types of codes are used for error detection and
encoding.
CRC is the most widely used error-detecting method, an alternative to the simple parity
check codes. Instead of adding a number of bits to obtain the desired parity, in CRC a
sequence of 'extra' redundant bits is appended at the end of the data. These bits are known
as CRC bits. The CRC bits are derived from the original data bits. The method
of deriving the CRC bits at the sending side is given below:
Step 1: A sequence of bit stream is formed by appending n '0' bits to the data at the end.
Step 2: A predetermined divisor of length n + 1 bits is used to divide the sequence and the
remainder is calculated. The remainder is known as the CRC.
Step 3: The remainder replaces the n '0' bits appended to the data in step 1.
Step 4: The combined sequence of data plus CRC is transmitted by the sender.
At the receiving end the received data plus CRC is again divided by the same divisor as used
at the sending side. If the remainder is zero then it is presumed that the data is error - free and
the receiver accepts the data, on the other hand if the remainder is non-zero, the data is
considered as corrupted and the received data is discarded.
For example, consider the 6-bit data sequence "100110". Let us choose the 3-bit divisor 110 at
the sending side. As per step 1, two 0s are added to the data sequence and the new
sequence is "10011000". As per step 2, the new sequence is divided by 110
(Modulo-2 division is used), and produces a remainder of 10. This is the CRC. As stated in
step 3, this CRC code is added to the data sequence to produce a sequence "10011010" and
then transmitted.
At the receiver side, if the received sequence does not contain an error, the sequence
"10011010" is again divided by the same divisor 110 and the remainder is 00. If an error is
made in one or two bits (corrupted), then the remainder will not be a 00, hence, the receiver
rejects the data.
VHDL CODE FOR CRC ERROR (UNIVERSAL AND GENERIC) CONTROL
SCHEME:
library IEEE;
use IEEE.STD_LOGIC_1164.ALL;
entity univ_crc_encoder is
generic (m : integer := 8;  -- number of input data bits
         n : integer := 4); -- number of bits in the divisor polynomial
Port ( a : in  STD_LOGIC_VECTOR (m-1 downto 0);    -- data word
       b : in  STD_LOGIC_VECTOR (n-1 downto 0);    -- divisor polynomial
       x : out STD_LOGIC_VECTOR (n-2 downto 0);    -- CRC (redundant) bits
       t : out STD_LOGIC_VECTOR (m+n-2 downto 0)); -- transmitted codeword
end univ_crc_encoder;
architecture Behavioral of univ_crc_encoder is
begin
process(a, b)
variable v : std_logic_vector(m+n-2 downto 0); -- data with appended zeros
variable u : std_logic_vector(n-1 downto 0);   -- divisor copy
variable w : std_logic_vector(n-1 downto 0);   -- working remainder
variable y : std_logic_vector(n-1 downto 0);   -- shift temporary
begin
-- step 1: append n-1 zero bits to the data
v(m+n-2 downto n-1) := a(m-1 downto 0);
for j in n-2 downto 0 loop
v(j) := '0';
end loop;
-- step 2: modulo-2 long division
u := b;
w := v(m+n-2 downto m-1);
for i in m-1 downto 0 loop
if (w(n-1) = '1') then
w := w xor u;
end if;
-- shift left and bring in the next dividend bit
y := w;
w(n-1 downto 1) := y(n-2 downto 0);
if (i = 0) then
w(0) := '0';
else
w(0) := v(i-1);
end if;
end loop;
x <= w(n-1 downto 1); -- redundant (CRC) bits
-- codeword = data followed by the CRC
t(m+n-2 downto n-1) <= a;
t(n-2 downto 0) <= w(n-1 downto 1);
end process;
end Behavioral;
VHDL CODE FOR CRC ERROR (8-DATA BITS AND 4-BIT POLYNOMIAL)
CONTROL SCHEME:
library IEEE;
use IEEE.STD_LOGIC_1164.ALL;
entity crc_encoder_8_4 is
Port ( a : in  STD_LOGIC_VECTOR (7 downto 0);   -- 8 data bits
       b : in  STD_LOGIC_VECTOR (3 downto 0);   -- 4-bit divisor
       x : out STD_LOGIC_VECTOR (2 downto 0);   -- 3 CRC bits
       t : out STD_LOGIC_VECTOR (10 downto 0)); -- 11-bit codeword
end crc_encoder_8_4;
architecture Behavioral of crc_encoder_8_4 is
begin
process(a, b)
variable v : std_logic_vector(10 downto 0);
variable u : std_logic_vector(3 downto 0);
variable w : std_logic_vector(3 downto 0);
variable y : std_logic_vector(3 downto 0);
begin
v(10 downto 3) := a(7 downto 0);  -- data in the upper bits
for j in 2 downto 0 loop
v(j) := '0';                      -- append three zero bits
end loop;
u := b;
w := v(10 downto 7);              -- modulo-2 long division
for i in 7 downto 0 loop
if (w(3) = '1') then
w := w xor u;
end if;
y := w;
w(3 downto 1) := y(2 downto 0);
if (i = 0) then
w(0) := '0';
else
w(0) := v(i-1);
end if;
end loop;
x <= w(3 downto 1);   -- CRC bits
t(10 downto 3) <= a;  -- codeword = data followed by the CRC
t(2 downto 0) <= w(3 downto 1);
end process;
end Behavioral;
OUTPUT:
Figure 6(C).1: Simulation waveform result for CRC error control Encoder
Figure 6(C).2: RTL Schematic (top view) for CRC error control Encoder
Figure 6(C).3: RTL Schematic (Internal view) for CRC error control Encoder
RESULT:
Cyclic Redundancy Check (CRC) is a block code used to detect accidental changes to data
transmitted over telecommunication networks or held on storage devices. CRC involves binary
division of the data bits being sent by a predetermined divisor agreed upon by the
communicating systems. Software and hardware realisation is done through Xilinx VIVADO and
the NEXYS 4 DDR board. The VHDL simulation results are shown in the form of waveforms, with
input values forced by the user. The functionality of the CRC encoder is verified through
the simulation result, and the simulation result is verified on hardware (NEXYS 4 DDR kit),
where the output is shown on LEDs after applying the particular inputs on the buttons of
the NEXYS 4 DDR.
DISCUSSION:
AIM:
Software and hardware realization of the Hamming code error control scheme.
SOFTWARE/HARDWARE USED:
THEORY:
Hamming code is a set of error-correction codes used to detect and correct the errors that
can occur when data is transmitted from the sender to the receiver or stored. Hamming code
is a block code capable of detecting up to two simultaneous bit errors and correcting
single-bit errors. In this coding method, the source encodes the message by inserting
redundant bits within the message. These redundant bits are extra bits generated and
inserted at specific positions in the message itself to enable error detection and
correction. When the destination receives this message, it recalculates the parity bits to
detect errors and to find the bit position that is in error.
-- even-parity bits at positions 1, 2, 4 and 8 of the 11-bit codeword
P(1) <= P(3) xor P(5) xor P(7) xor P(9) xor P(11);
P(2) <= P(3) xor P(6) xor P(7) xor P(10) xor P(11);
P(4) <= P(5) xor P(6) xor P(7);
P(8) <= P(9) xor P(10) xor P(11);
Hout <= P;
end data_flow;
OUTPUT:
RESULT:
Hamming code is a block code capable of detecting up to two simultaneous bit errors and
correcting single-bit errors. Software and hardware realisation is done through Xilinx
VIVADO and the NEXYS 4 DDR board. The VHDL simulation results are shown in the form of
waveforms, with input values forced by the user. The functionality of the Hamming code is
verified through the simulation result, and the simulation result is verified on hardware
(NEXYS 4 DDR kit), where the output is shown on LEDs after applying the particular inputs
on the buttons of the NEXYS 4 DDR.
DISCUSSION:
THEORY:
The Shortest Path problem involves finding a path from a source vertex to a destination
vertex which has the least length among all such paths.
Algorithm
Following are the detailed steps.
1) Initialize distances: set the distance from the source to every vertex to infinite and
the distance to the source itself to 0. Create an array dist[] of size |V| with all values
infinite except dist[src], where src is the source vertex.
2) Calculate shortest distances. Do the following |V|-1 times, where |V| is the number of
vertices in the given graph:
a) For each edge u-v:
If dist[v] > dist[u] + weight of edge u-v, then update
dist[v] = dist[u] + weight of edge u-v
3) Report whether there is a negative weight cycle in the graph. For each edge u-v:
If dist[v] > dist[u] + weight of edge u-v, then "Graph contains negative weight cycle"
The idea of step 3 is that step 2 guarantees the shortest distances if the graph does not
contain a negative weight cycle. If iterating through all edges one more time still gives a
shorter path for some vertex, then there is a negative weight cycle. This is the
Bellman-Ford algorithm.
#include <iostream>
#include <list>
using namespace std;

// A directed graph using adjacency-list representation
class Graph
{
    int V;                  // number of vertices
    list<int> *adj;         // adjacency lists
    void printAllPathsUtil(int u, int d, bool visited[],
                           int path[], int &path_index);
public:
    Graph(int V); // Constructor
    void addEdge(int u, int v);
    void printAllPaths(int s, int d);
};

Graph::Graph(int V)
{
    this->V = V;
    adj = new list<int>[V];
}

void Graph::addEdge(int u, int v)
{
    adj[u].push_back(v); // add v to u's list
}

// Prints all paths from s to d
void Graph::printAllPaths(int s, int d)
{
    bool *visited = new bool[V];
    int *path = new int[V];
    int path_index = 0;
    for (int i = 0; i < V; i++)
        visited[i] = false;
    printAllPathsUtil(s, d, visited, path, path_index);
    delete[] visited;
    delete[] path;
}

// Recursive helper: u is the current vertex, d the destination
void Graph::printAllPathsUtil(int u, int d, bool visited[],
                              int path[], int &path_index)
{
    visited[u] = true;
    path[path_index++] = u;
    if (u == d) // If current vertex is destination, print the path
    {
        for (int i = 0; i < path_index; i++)
            cout << path[i] << " ";
        cout << endl;
    }
    else // If current vertex is not destination
    {
        // Recur for all the vertices adjacent to current vertex
        list<int>::iterator i;
        for (i = adj[u].begin(); i != adj[u].end(); ++i)
            if (!visited[*i])
                printAllPathsUtil(*i, d, visited, path, path_index);
    }
    // Backtrack: remove current vertex from path, mark it unvisited
    path_index--;
    visited[u] = false;
}

// Driver program
int main()
{
    // Create a graph given in the above diagram
    Graph g(4);
    g.addEdge(0, 1);
    g.addEdge(0, 2);
    g.addEdge(0, 3);
    g.addEdge(2, 0);
    g.addEdge(2, 1);
    g.addEdge(1, 3);
    int s = 2, d = 3;
    cout << "Following are all different paths from " << s
         << " to " << d << endl;
    g.printAllPaths(s, d);
    return 0;
}
OUTPUT:
Following are all different paths from 2 to 3
2 0 1 3
2 0 3
2 1 3
RESULT:
The Shortest Path problem involves finding a path from a source vertex to a destination
vertex which has the least length among all such paths between source and destination. In
computer networks there are many shortest path algorithms: if edge weights can be negative,
we use the Bellman-Ford algorithm; otherwise Dijkstra's shortest path algorithm. In the
Bellman-Ford algorithm, edge weights may be positive or negative. The distances are relaxed
repeatedly from the one source to all destination nodes (all nodes except the source) until
the final shortest paths are achieved.
DISCUSSION:
THEORY: The ISP uses the TCP/IP protocols to make computer-to-computer connections
possible and transmit data between them. When connected to an ISP, you're assigned an IP
address, which is a unique address given to your computer or network to communicate on
the Internet.
Using the Internet, computers connect and communicate with one another, primarily using
TCP/IP (Transmission Control Protocol / Internet Protocol). Think of TCP/IP as a book of
rules, a step-by-step guide that each computer uses to know how to talk to another
computer. This book of rules dictates what each computer must do to transmit data, when to
transmit it, and how to transmit it; it also states how to receive data in the same manner.
If the rules are not followed, a computer cannot connect to another computer, nor send and
receive data between other computers.
To connect to the Internet and other computers on a network, a computer must have a NIC
(network interface card) installed. A network cable plugged into the NIC on one end and
plugged into a cable modem, DSL modem, router, or switch can allow a computer to access
the Internet and connect to other computers.
PROCEDURE:
RESULT: