Acn Notes

ARP is used to map IP addresses to MAC addresses on a local area network (LAN). It works between the data link and network layers of the OSI model. When a device needs to send data to another device on the LAN, it checks its ARP cache to see if it has the MAC address associated with the IP address. If not, it broadcasts an ARP request to get the MAC address. ICMP is used for reporting errors and control messages in IP networks. It is used by routers to send error messages when packets cannot be delivered, such as when the destination is unreachable. IPv4 is the most widely used version of IP and provides a way to uniquely identify hosts using logical IP addresses and route data between them over networks.


Chapter 1

1.1 Address Resolution Protocol (ARP) & RARP

Address Resolution Protocol (ARP) is a procedure for mapping a dynamic Internet Protocol
address (IP address) to a permanent physical machine address in a local area network (LAN).
The physical machine address is also known as a Media Access Control or MAC address.

The job of ARP is essentially to translate 32-bit addresses to 48-bit addresses and vice versa. This is necessary because in IP version 4 (IPv4), the most common version of the Internet Protocol (IP) in use today, an IP address is 32 bits long, but MAC addresses are 48 bits long.

ARP works between Layers 2 and 3 of the Open Systems Interconnection model (OSI model). The MAC address exists on Layer 2 of the OSI model, the data link layer, while the IP address exists on Layer 3, the network layer.

ARP can also be used for IP over other LAN technologies, such as token ring, fiber distributed
data interface (FDDI) and IP over ATM.

In IPv6, which uses 128-bit addresses, ARP has been replaced by the Neighbor Discovery
protocol.

How ARP works

When a new computer joins a LAN, it is assigned a unique IP address to use for identification
and communication. When an incoming packet destined for a host machine on a particular LAN
arrives at a gateway, the gateway asks the ARP program to find a MAC address that matches
the IP address. A table called the ARP cache maintains a record of each IP address and its
corresponding MAC address.
All operating systems in an IPv4 Ethernet network keep an ARP cache. Every time a host
requests a MAC address in order to send a packet to another host in the LAN, it checks its ARP
cache to see if the IP to MAC address translation already exists. If it does, then a new ARP
request is unnecessary. If the translation does not already exist, then the request for network
addresses is sent and ARP is performed.

ARP broadcasts a request packet to all the machines on the LAN, asking which machine is using that particular IP address. When a machine recognizes the IP address as its own, it sends a reply so ARP can update the cache for future reference and proceed with the communication.
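The lookup-then-broadcast flow can be sketched as a toy Python simulation (the cache, the `hosts` map standing in for the LAN segment, and the function names are all illustrative, not a real ARP implementation):

```python
# Toy simulation of the ARP lookup flow: check the cache first,
# broadcast a request only on a miss, then cache the reply.
arp_cache = {}  # IP address -> MAC address

def broadcast_arp_request(ip, hosts):
    """Simulated LAN broadcast: every host sees the request, but only
    the owner of the IP replies with its MAC address. `hosts` is a
    hypothetical {ip: mac} map standing in for the real segment."""
    return hosts.get(ip)  # None if no host on the segment owns this IP

def resolve(ip, hosts):
    """Resolve an IP to a MAC, using the cache before broadcasting."""
    if ip in arp_cache:               # cache hit: no new ARP request needed
        return arp_cache[ip]
    mac = broadcast_arp_request(ip, hosts)
    if mac is not None:
        arp_cache[ip] = mac           # cache the reply for future use
    return mac
```

A second `resolve` for the same IP returns straight from the cache, which is exactly why the text says a new ARP request is unnecessary on a cache hit.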

Host machines that don't know their own IP address can use the Reverse ARP (RARP) protocol
for discovery.

The ARP cache is limited in size and is periodically cleansed of entries to free up space; in fact, addresses tend to stay in the cache for only a few minutes. Frequent updates allow other devices on the network to see when a physical host changes its requested IP address. During cleaning, unused entries are deleted, along with any records of unsuccessful attempts to communicate with computers that are not currently powered on.
Proxy ARP

Proxy ARP enables a network proxy to answer ARP queries for IP addresses that are outside the network. This allows packets to be successfully transferred from one subnetwork to another.

When an ARP inquiry packet is broadcast, the routing table is examined to find which device on
the LAN can reach the destination fastest. This device, which is often a router, becomes
a gateway for forwarding packets outside the network to their intended destinations.

ARP spoofing and ARP cache poisoning

Any LAN that uses ARP must be wary of ARP spoofing, also referred to as ARP poison routing or ARP cache poisoning. ARP spoofing is an attack in which a hacker broadcasts false ARP messages over a LAN in order to link the attacker's MAC address with the IP address of a legitimate computer or server on the network. Once that link has been established, frames meant for the legitimate IP address, including any data intended for it, are delivered to the hacker's computer first.

ARP spoofing can have serious impacts on enterprises. When used in their simplest form, ARP
spoofing attacks can steal sensitive information. However, the attacks can also go beyond this
and facilitate other malicious attacks, including:

 man-in-the-middle attacks

 denial-of-service attacks

 session hijacking

Reverse Address Resolution Protocol (RARP)

RARP (Reverse Address Resolution Protocol) is a protocol by which a physical machine in a local area network can request its IP address from a gateway server's Address Resolution Protocol (ARP) table or cache. A network administrator creates a table in a local area network's gateway router that maps physical machine (Media Access Control, or MAC) addresses to corresponding Internet Protocol addresses. When a new machine is set up, its RARP client program requests its IP address from the RARP server on the router. Assuming an entry has been set up in the router table, the RARP server returns the IP address to the machine, which can store it for future use.

RARP is available for Ethernet, Fiber Distributed-Data Interface, and token ring LANs.
Difference between ARP and RARP:

ARP and RARP are both network layer protocols. Whenever a host needs to send an IP datagram to another host, the sender requires both the logical address and the physical address of the receiver. Dynamic mapping provides two protocols, ARP and RARP. The basic difference between them is that ARP, when provided with the logical address of the receiver, obtains the physical address of the receiver, whereas RARP, when provided with the physical address of a host, obtains the logical address of that host from the server.
Comparison chart:

BASIS FOR COMPARISON | ARP | RARP
Full Form | Address Resolution Protocol | Reverse Address Resolution Protocol
Basic | Retrieves the physical address of the receiver | Retrieves the logical address for a computer from the server
Mapping | Maps 32-bit logical (IP) address to 48-bit physical address | Maps 48-bit physical address to 32-bit logical (IP) address
1.2 Internet Control Message Protocol (ICMP)

Since IP does not have an inbuilt mechanism for sending error and control messages, it depends on the Internet Control Message Protocol (ICMP) to provide error control. ICMP is used for reporting errors and management queries. It is a supporting protocol used by network devices such as routers for sending error messages and operational information, e.g. that a requested service is not available or that a host or router could not be reached.

Source quench message :

A source quench message is a request to decrease the rate at which messages are sent to a host (the destination). In other words, when the receiving host detects that packets are arriving too fast, it sends a source quench message to the source, asking it to slow down so that packets are not lost.

ICMP takes the source IP from the discarded packet and informs the source by sending a source quench message.

The source then reduces its transmission speed so that the router can recover from congestion. When the congested router is far from the source, ICMP sends the source quench message hop by hop so that every router along the path reduces its transmission speed.

Parameter problem:

Whenever a packet arrives at a router, the router recomputes the header checksum; the packet is accepted only if the computed checksum equals the received checksum.

If there is a mismatch, the packet is dropped by the router.

ICMP takes the source IP from the discarded packet and informs the source by sending a parameter problem message.
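The header checksum being compared here is the standard Internet checksum (RFC 1071): the header is summed as 16-bit words in one's-complement arithmetic and the result is complemented. A minimal Python sketch:

```python
def internet_checksum(data: bytes) -> int:
    """RFC 1071 Internet checksum: one's-complement sum of 16-bit
    big-endian words, folded and complemented."""
    if len(data) % 2:                 # pad odd-length data with a zero byte
        data += b'\x00'
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]       # next 16-bit word
        total = (total & 0xFFFF) + (total >> 16)    # fold carry back in
    return ~total & 0xFFFF
```

Recomputing the checksum over data that already includes a correct checksum field yields zero, which is exactly the acceptance test a router performs.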

Time exceeded message:

When the time-to-live field of a datagram reaches zero, or when some fragments are lost in the network and the router drops the fragments it was holding, ICMP takes the source IP from the discarded packet and informs the source by sending a time exceeded message.

Destination un-reachable :
Destination unreachable is generated by the host or its inbound gateway to inform the client
that the destination is unreachable for some reason.

Routers are not the only devices that generate ICMP error messages; the destination host may also send one when some type of failure (link failure, hardware failure, port failure, etc.) occurs in the network.

Redirection message:

A redirect requests that data packets be sent on an alternate route. The message informs a host to update its routing information (to send packets on an alternate route).

For example, suppose a host sends data through router R1, R1 forwards the data to router R2, and there is a direct path from the host to R2. Then R1 sends a redirect message to inform the host that the better route to the destination is directly through R2. The host then sends data packets for that destination directly to R2, and R2 forwards them to the intended destination.

If the datagram carries its own routing information, this message is not sent even if a better route is available, since redirects should only be sent by gateways and not by Internet hosts. In short, whenever a packet is forwarded in a wrong direction and later redirected onto the correct path, ICMP sends a redirect message.

1.4 Internet Protocol Version 4 (IPv4)


The Internet Protocol is one of the major protocols in the TCP/IP protocol suite. It works at the network layer of the OSI model and at the Internet layer of the TCP/IP model. Thus this protocol has the responsibility of identifying hosts based upon their logical addresses and routing data among them over the underlying network.

IP provides a mechanism to uniquely identify a host by an IP addressing scheme. IP uses best-effort delivery, i.e. it does not guarantee that packets will be delivered to the destined host, but it will do its best to reach the destination. Internet Protocol version 4 uses 32-bit logical addresses.

IPv4 - Packet Structure

The Internet Protocol, being a Layer-3 (OSI) protocol, takes data segments from Layer 4 (transport) and divides them into what are called packets. An IP packet encapsulates the data unit received from the layer above and adds its own header information.

The encapsulated data is referred to as IP Payload. IP header contains all the necessary
information to deliver the packet at the other end.

IP header includes many relevant information including Version Number, which, in this
context, is 4. Other details are as follows:

 Version: Version no. of Internet Protocol used (e.g. IPv4)


 IHL: Internet Header Length, Length of entire IP header

 DSCP: Differentiated Services Code Point, This is Type of Service.

 ECN: Explicit Congestion Notification, carries information about the congestion seen in
the route.

 Total Length: Length of entire IP Packet (including IP header and IP Payload)

 Identification: If an IP packet is fragmented during transmission, all the fragments contain the same identification number, identifying the original IP packet they belong to.

 Flags: As required by network resources, if the IP packet is too large to handle, these ‘flags’ tell whether it can be fragmented or not. In this 3-bit field, the MSB is always set to ‘0’.

 Fragment Offset: This offset tells the exact position of the fragment in the original IP
Packet.

 Time to Live: To avoid looping in the network, every packet is sent with some TTL value
set, which tells the network how many routers (hops) this packet can cross. At each
hop, its value is decremented by one and when the value reaches zero, the packet is
discarded.

 Protocol: Tells the Network layer at the destination host, to which Protocol this packet
belongs to, i.e. the next level Protocol. For example protocol number of ICMP is 1, TCP
is 6 and UDP is 17.

 Header Checksum: This field is used to keep checksum value of entire header which is
then used to check if the packet is received error-free.

 Source Address: 32-bit address of the Sender (or source) of the packet.

 Destination Address: 32-bit address of the Receiver (or destination) of the packet.

 Options: This is an optional field, used if the value of IHL is greater than 5. These options may contain values for features such as Security, Record Route, Time Stamp, etc.
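The fixed 20-byte portion of the header described above can be unpacked with Python's standard struct and socket modules (a sketch; options beyond IHL = 5 are not handled, and the sample values used below are made up for illustration):

```python
import struct
import socket

def parse_ipv4_header(raw: bytes) -> dict:
    """Unpack the fixed 20-byte IPv4 header (options not handled)."""
    (ver_ihl, tos, total_len, ident, flags_frag,
     ttl, proto, checksum, src, dst) = struct.unpack('!BBHHHBBH4s4s', raw[:20])
    return {
        'version': ver_ihl >> 4,      # high nibble of the first byte
        'ihl': ver_ihl & 0x0F,        # header length in 32-bit words
        'total_length': total_len,    # header + payload, in bytes
        'ttl': ttl,
        'protocol': proto,            # 1 = ICMP, 6 = TCP, 17 = UDP
        'src': socket.inet_ntoa(src),
        'dst': socket.inet_ntoa(dst),
    }
```

With no options present, IHL is 5 (5 × 32-bit words = 20 bytes), matching the text's note that the Options field is used only when IHL exceeds 5.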
IPv4 – Addressing:

IPv4 supports three different type of addressing modes:

Unicast Addressing Mode:

In this mode, data is sent only to one destined host. The Destination Address field contains 32-
bit IP address of the destination host. Here client sends data to the targeted server:

Broadcast Addressing Mode:

In this mode the packet is addressed to all hosts in a network segment. The Destination
Address field contains special broadcast address i.e. 255.255.255.255. When a host sees this
packet on the network, it is bound to process it. Here client sends packet, which is entertained
by all the Servers:
Multicast Addressing Mode:

This mode is a mix of the previous two, i.e. the packet sent is destined neither to a single host nor to all hosts on the segment. In this packet, the Destination Address contains a special address which starts with 224.x.x.x and can be entertained by more than one host.

Here a server sends packets which are entertained by more than one receiver. Every network has one IP address reserved for the network number, which represents the network, and one IP address reserved for the broadcast address, which represents all the hosts in that network.
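Python's standard ipaddress module can tell these destination types apart; a small illustrative helper (the limited-broadcast address 255.255.255.255 is checked explicitly, per the Broadcast Addressing Mode above):

```python
import ipaddress

def addressing_mode(addr: str) -> str:
    """Classify an IPv4 destination as broadcast, multicast, or unicast."""
    ip = ipaddress.IPv4Address(addr)
    if ip == ipaddress.IPv4Address('255.255.255.255'):
        return 'broadcast'      # limited broadcast: every host must process it
    if ip.is_multicast:         # 224.0.0.0 - 239.255.255.255 (Class D)
        return 'multicast'
    return 'unicast'            # a single destined host
```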

Hierarchical Addressing Scheme

IPv4 uses hierarchical addressing scheme. An IP address which is 32-bits in length, is divided
into two or three parts as depicted:

A single IP address can contain information about the network and its sub-network and
ultimately the host. This scheme enables IP Address to be hierarchical where a network can
have many sub-networks which in turn can have many hosts.

Subnet Mask

The 32-bit IP address contains information about the host and its network, and it is necessary to distinguish the two. For this, routers use a subnet mask, which is as long as the network-address portion of the IP address; the subnet mask is also 32 bits long. If the IP address in binary is ANDed with its subnet mask, the result yields the network address. For example, with the IP address 192.168.1.152 and the subnet mask 255.255.255.0, the result of the AND is 192.168.1.0.

In this way the subnet mask helps extract the network ID and host from an IP address: 192.168.1.0 is the network number and 192.168.1.152 is the host on that network.
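The AND operation can be reproduced directly on the 32-bit integer values; a small Python sketch using the example addresses above:

```python
import ipaddress

def network_id(ip: str, mask: str) -> str:
    """Bitwise-AND an IP address with its subnet mask to get the network ID."""
    ip_int = int(ipaddress.IPv4Address(ip))      # 32-bit integer form
    mask_int = int(ipaddress.IPv4Address(mask))
    return str(ipaddress.IPv4Address(ip_int & mask_int))
```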

Binary Representation
The positional value method is the simplest way of converting between binary and decimal values. An IP address is a 32-bit value divided into 4 octets. A binary octet contains 8 bits, and the value of each bit can be determined by the position of the bit value '1' in the octet.

The positional value of a bit is 2 raised to the power (position − 1); for example, the value of a bit '1' at position 6 (counting from the right) is 2^(6−1) = 2^5 = 32. The total value of the octet is determined by adding up the positional values of its bits. For example, the value of 11000000 is 128 + 64 = 192. Some examples are shown in the table below:
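The positional-value rule can be checked in a few lines of Python (illustrative only; the built-in `int(bits, 2)` gives the same result):

```python
def octet_value(bits: str) -> int:
    """Sum positional values: a '1' at position p (counting from the
    right, starting at 1) contributes 2 ** (p - 1)."""
    return sum(2 ** (pos - 1)
               for pos, b in enumerate(reversed(bits), start=1) if b == '1')
```

For example, `octet_value('11000000')` returns 128 + 64 = 192, matching the worked example above.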
IPv4 - Address Classes

The Internet Protocol hierarchy contains several classes of IP addresses to be used efficiently in various situations, as per the requirement of hosts per network. Broadly, the IPv4 addressing system is divided into 5 classes of IP addresses. All 5 classes are identified by the first octet of the IP address.

The Internet Corporation for Assigned Names and Numbers (ICANN) is responsible for assigning IP addresses. The first octet referred to here is the leftmost of all. The octets are numbered as follows in the dotted decimal notation of an IP address:

The number of networks and the number of hosts per class can be derived from this formula: number of networks = 2^(network bits), and number of usable hosts per network = 2^(host bits) − 2.

When calculating host IP addresses, 2 addresses are subtracted because they cannot be assigned to hosts: the first IP of a network is the network number and the last IP is reserved for the broadcast address.

Class A Address

The first bit of the first octet is always set to 0 (zero). Thus the first octet ranges from 1 – 127,
i.e.

Class A addresses only include IP starting from 1.x.x.x to 126.x.x.x only. The IP range 127.x.x.x
is reserved for loopback IP addresses.

The default subnet mask for a Class A IP address is 255.0.0.0, which implies that Class A addressing can have 126 networks (2^7 − 2) and 16,777,214 hosts (2^24 − 2).

Class A IP address format thus, is 0NNNNNNN.HHHHHHHH.HHHHHHHH.HHHHHHHH

Class B Address

An IP address which belongs to class B has the first two bits in the first octet set to 10, i.e.
Class B IP addresses range from 128.0.x.x to 191.255.x.x. The default subnet mask for Class B is 255.255.0.0.

Class B has 16384 (2^14) network addresses and 65534 (2^16 − 2) host addresses.

Class B IP address format is, 10NNNNNN.NNNNNNNN.HHHHHHHH.HHHHHHHH

Class C Address

The first octet of Class C IP address has its first 3 bits set to 110, that is

Class C IP addresses range from 192.0.0.x to 223.255.255.x. The default subnet mask for Class C is 255.255.255.0.

Class C gives 2097152 (2^21) network addresses and 254 (2^8 − 2) host addresses.

Class C IP address format is 110NNNNN.NNNNNNNN.NNNNNNNN.HHHHHHHH

Class D Address

The very first four bits of the first octet in Class D IP addresses are set to 1110.
Class D has IP address range from 224.0.0.0 to 239.255.255.255. Class D is reserved for
Multicasting. In multicasting data is not destined for a particular host, that's why there is no
need to extract host address from the IP address, and Class D does not have any subnet mask.

Class E Address

This IP class is reserved for experimental purposes only, such as R&D or study. IP addresses in this class range from 240.0.0.0 to 255.255.255.254. Like Class D, this class too is not equipped with any subnet mask.
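Since all five classes are identified by the first octet, class detection is a simple range check; an illustrative Python helper (the separate 'loopback' label for 127.x.x.x is a convention chosen here, reflecting the reserved range noted under Class A):

```python
def address_class(ip: str) -> str:
    """Determine the classful address class from the first octet."""
    first = int(ip.split('.')[0])
    if first == 127:
        return 'loopback'        # reserved range, excluded from Class A
    if first <= 126:
        return 'A'               # leading bit 0
    if first <= 191:
        return 'B'               # leading bits 10
    if first <= 223:
        return 'C'               # leading bits 110
    if first <= 239:
        return 'D'               # leading bits 1110 (multicast)
    return 'E'                   # leading bits 1111 (experimental)
```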
IPv4 - Subnetting (CIDR)

Each IP class is equipped with its own default subnet mask, which bounds that IP class to a fixed number of networks and a fixed number of hosts per network. Classful IP addressing does not provide any flexibility to have fewer hosts per network or more networks per IP class.

CIDR, or Classless Inter-Domain Routing, provides the flexibility of borrowing bits from the host part of the IP address and using them as a network within the network, called a subnet. By using subnetting, a single Class A IP address range can be used to create smaller sub-networks, which provides better network management capabilities.

Class A Subnets

In Class A, only the first octet is used as the network identifier, and the remaining three octets are assigned to hosts (i.e. 16,777,214 hosts per network). To make more subnets in Class A, bits from the host part are borrowed and the subnet mask is changed accordingly.

For example, if one MSB (Most Significant Bit) is borrowed from the host bits of the second octet and added to the network address, it creates two subnets (2^1 = 2) with 8,388,606 (2^23 − 2) hosts per subnet.

The Subnet mask is changed accordingly to reflect subnetting. Given below is a list of all
possible combination of Class A subnets:
In case of subnetting too, the very first and last IP address of every subnet is used for Subnet
Number and Subnet Broadcast IP address respectively. Because these two IP addresses cannot
be assigned to hosts, Sub-netting cannot be implemented by using more than 30 bits as
Network Bits which provides less than two hosts per subnet.

Class B Subnets

By default, using classful networking, 14 bits are used as network bits, providing 16384 (2^14) networks and 65534 (2^16 − 2) hosts. Class B IP addresses can be subnetted the same way as Class A addresses, by borrowing bits from the host bits. Below are given all possible combinations of Class B subnetting:

Class C Subnets

Class C IP addresses are normally assigned to very small networks, because a Class C network can only have 254 hosts. Given below is a list of all possible combinations of subnetted Class C IP addresses:
IPv4 - Variable Length Subnet Masking (VLSM)

Internet Service Providers may face a situation where they need to allocate IP subnets of different sizes as per the requirements of customers. One customer may ask for a Class C subnet of 3 IP addresses and another may ask for 10 IPs. For an ISP, it is not feasible to divide the IP addresses into fixed-size subnets; rather, it may want to subnet the subnets in a way that results in minimum wastage of IP addresses.

For example, suppose an administrator has the 192.168.1.0/24 network. The suffix /24 (pronounced "slash 24") tells the number of bits used for the network address. The administrator has four departments with different numbers of hosts: Sales has 100 computers, Purchase has 50 computers, Accounts has 25 computers and Management has 5 computers. In CIDR, the subnets are of fixed size; using that methodology, the administrator cannot fulfill all the requirements of the network.

The following procedure shows how VLSM can be used in order to allocate department-wise IP
addresses as mentioned in the example.

Step - 1

Make a list of Subnets possible.

Step - 2

Sort the requirements of IPs in descending order (Highest to Lowest).

 Sales 100
 Purchase 50

 Accounts 25

 Management 5

Step - 3

Allocate the highest range of IPs to the highest requirement, so let's assign 192.168.1.0 /25
(255.255.255.128) to Sales department. This IP subnet with Network number 192.168.1.0 has
126 valid Host IP addresses which satisfy the requirement of Sales Department. The subnet
Mask used for this subnet has 10000000 as the last octet.

Step - 4

Allocate the next highest range, so let's assign 192.168.1.128 /26 (255.255.255.192) to
Purchase department. This IP subnet with Network number 192.168.1.128 has 62 valid Host IP
Addresses which can be easily assigned to all Purchase department's PCs. The subnet mask
used has 11000000 in the last octet.

Step - 5

Allocate the next highest range, i.e. Accounts. The requirement of 25 IPs can be fulfilled with
192.168.1.192 /27 (255.255.255.224) IP subnet, which contains 30 valid host IPs. The network
number of Accounts department will be 192.168.1.192. The last octet of subnet mask is
11100000.

Step - 6

Allocate next highest range to Management. The Management department contains only 5
computers. The subnet 192.168.1.224 /29 with Mask 255.255.255.248 has exactly 6 valid host
IP addresses. So this can be assigned to Management. The last octet of subnet mask will
contain 11111000.

By using VLSM, the administrator can subnet the IP subnet in such a way that the least number of IP addresses are wasted. Even after assigning IPs to every department, the administrator in this example is still left with plenty of IP addresses, which would not have been possible with fixed-size subnets.
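The six steps above amount to a greedy largest-first allocation, which can be sketched with Python's standard ipaddress module (the function name and demand table are hypothetical, and a real allocator would also have to handle alignment constraints and pool exhaustion):

```python
import math
import ipaddress

def vlsm_allocate(base, demands):
    """Greedy VLSM: serve the largest host demands first from `base`.

    demands is a {department: host_count} map; returns (name, subnet) pairs.
    """
    net = ipaddress.ip_network(base)
    allocations = []
    cursor = int(net.network_address)          # next free address as an int
    for name, hosts in sorted(demands.items(), key=lambda kv: -kv[1]):
        # Smallest block with `hosts` usable addresses: +2 covers the
        # network number and the broadcast address of the subnet.
        prefix = 32 - math.ceil(math.log2(hosts + 2))
        subnet = ipaddress.ip_network((cursor, prefix))
        allocations.append((name, subnet))
        cursor += subnet.num_addresses          # advance past this block
    return allocations
```

Run on the example demands (Sales 100, Purchase 50, Accounts 25, Management 5), this reproduces the step-by-step result: /25, /26, /27 and /29 carved out of 192.168.1.0/24 in order.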
IPv4 - Reserved Addresses

There are a few reserved IPv4 address spaces which cannot be used on the Internet. These addresses serve special purposes and cannot be routed outside the local area network.

Private IP Addresses

Every class of IP, (A, B & C) has some addresses reserved as Private IP addresses. These IPs can
be used within a network, campus, company and are private to it. These addresses cannot be
routed on Internet so packets containing these private addresses are dropped by the Routers.

In order to communicate with the outside world (the Internet), these IP addresses must be translated to public IP addresses using the NAT process, or a Web proxy server can be used.

The sole purpose of creating a separate range of private addresses is to control assignment of the already-limited IPv4 address pool. By using a private address range within the LAN, the global requirement for IPv4 addresses has decreased significantly. It has also helped delay IPv4 address exhaustion.

While using the private address range, the IP class can be chosen as per the size and requirements of the organization. Larger organizations may choose a Class A private IP address range, whereas smaller ones may opt for Class C. These IP addresses can be further subnetted and assigned to departments within an organization.

Loopback IP Addresses

The IP address range 127.0.0.0 – 127.255.255.255 is reserved for loopback, i.e. a host’s self-address, also known as the localhost address. This loopback IP address is managed entirely by and within the operating system. Using loopback addresses enables server and client processes on a single system to communicate with each other. When a process creates a packet with a loopback destination address, the operating system loops it back to itself without any involvement of the NIC.

Data sent on loopback is forwarded by the operating system to a virtual network interface within the operating system. This address is mostly used for testing purposes, such as client-server architecture on a single machine. Additionally, if a host machine can successfully ping 127.0.0.1 or any IP from the loopback range, it implies that the TCP/IP software stack on the machine is successfully loaded and working.

Link-local Addresses

If a host is not able to acquire an IP address from a DHCP server and has not been assigned an IP address manually, the host can assign itself an IP address from a range of reserved link-local addresses. The link-local address range is 169.254.0.0 - 169.254.255.255.

Assume a network segment where all systems are configured to acquire IP addresses from a DHCP server connected to the same network segment. If the DHCP server is not available, no host on the segment will be able to communicate with any other. Windows (98 or later) and Mac OS (8.0 or later) support this self-configuration of link-local IP addresses. In the absence of a DHCP server, every host machine randomly chooses an IP address from the above-mentioned range and then checks, by means of ARP, that no other host has configured itself with the same IP address. Once all hosts are using link-local addresses of the same range, they can communicate with each other.

These IP addresses cannot help systems communicate when they do not belong to the same physical or logical segment. These IPs are also not routable.
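All three reserved ranges described in this section are recognized by ready-made predicates on Python's standard ipaddress objects:

```python
import ipaddress

# Each reserved range described above has a ready-made predicate.
addr = ipaddress.IPv4Address
assert addr('10.1.2.3').is_private          # private range (Class A)
assert addr('192.168.1.1').is_private       # private range (Class C)
assert addr('127.0.0.1').is_loopback        # loopback / localhost
assert addr('169.254.10.20').is_link_local  # self-assigned link-local
assert not addr('8.8.8.8').is_private       # publicly routable
```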

IPv4 - Example

This section tells how actual communication happens on the Network using Internet Protocol
version 4.

Packet flow in network

All hosts in an IPv4 environment are assigned unique logical IP addresses. When a host wants to send some data to another host on the network, it needs the physical (MAC) address of the destination host. To get the MAC address, the host broadcasts an ARP message asking the owner of the destination IP address to give its MAC address. All hosts on that segment receive the ARP packet, but only the host whose IP matches the one in the ARP message replies with its MAC address. Once the sender receives the MAC address of the receiving station, data is sent on the physical media.

If the IP does not belong to the local subnet, the data is sent to the destination by means of the gateway of the subnet. To understand the packet flow, we must first understand the following components:

 MAC Address: The Media Access Control address is a 48-bit, factory hard-coded physical address by which a network device can be uniquely identified. This address is assigned by the device manufacturer.

 Address Resolution Protocol: ARP is used to acquire the MAC address of a host whose IP address is known. The ARP request is a broadcast packet received by all hosts in the network segment, but only the host whose IP is mentioned in the request responds, providing its MAC address.

 Proxy Server: To access the Internet, networks use a proxy server which has a public IP assigned. PCs send their requests for a server on the Internet to the proxy server; the proxy server sends the request to that server on behalf of the PC, and when it receives the response, it forwards it to the client PC. This is a way to control Internet access in computer networks, and it helps implement web-based policies.

 Dynamic Host Configuration Protocol: DHCP is a service by which a host is assigned an IP address from a pre-defined address pool. The DHCP server also provides necessary information such as the gateway IP, the DNS server address, and the lease assigned with the IP. Using DHCP services, a network administrator can manage the assignment of IP addresses with ease.

 Domain Name System: It is very likely that a user does not know the IP address of a remote server he wants to connect to, but he knows the name assigned to it, for example tutorialpoints.com. When the user types in the name of the remote server, the localhost sends a DNS query behind the scenes. The Domain Name System is a method to acquire the IP address of a host whose domain name is known.

 Network Address Translation: Almost all PCs in a computer network are assigned private IP addresses, which are not routable on the Internet; as soon as a router receives an IP packet with a private source or destination address, it drops it. In order to reach servers on public addresses from a private network, computer networks use an address translation service, which translates between public and private addresses, called Network Address Translation. When a PC sends an IP packet out of a private network, NAT replaces the private IP address with a public IP address, and vice versa for replies.
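A toy model of the translation table NAT maintains can make the idea concrete (illustrative only; the class name, port range, and flow key are assumptions, and a real NAT also rewrites checksums and expires idle mappings):

```python
class SimpleNAT:
    """Toy source-NAT: map each private (ip, port) flow to a unique
    port on one public IP, so replies can be mapped back."""

    def __init__(self, public_ip):
        self.public_ip = public_ip
        self.next_port = 40000    # arbitrary starting port for translations
        self.out_map = {}         # (private_ip, private_port) -> public_port
        self.in_map = {}          # public_port -> (private_ip, private_port)

    def outbound(self, src_ip, src_port):
        """Rewrite an outgoing flow's source to (public_ip, public_port)."""
        key = (src_ip, src_port)
        if key not in self.out_map:            # first packet of this flow
            self.out_map[key] = self.next_port
            self.in_map[self.next_port] = key
            self.next_port += 1
        return self.public_ip, self.out_map[key]

    def inbound(self, public_port):
        """Map a reply arriving on a public port back to the private flow."""
        return self.in_map.get(public_port)
```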

Internet Protocol v6 (IPv6)


The IETF (Internet Engineering Task Force) has redesigned IP addressing to mitigate the drawbacks of IPv4. The new version of IP is version 6, with a 128-bit address, by which every single inch of the earth could be given millions of IP addresses.

Today the majority of devices on the Internet use IPv4, and it is not possible to shift them all to IPv6 in the near future. IPv6 therefore provides mechanisms by which IPv4 and IPv6 can coexist until the Internet entirely shifts to IPv6:

 Dual IP Stack

 Tunneling (6to4 and 4to6)

 NAT Protocol Translation

IPV4 vs IPV6:

 IPv4 is a 32-bit logical address of a device containing 4 octets, while IPv6 is a 128-bit logical address containing 16 octets (8 bits each).
 IPv4 requires checksum while IPv6 requires no checksum.
 Broadcast is present in IPv4 while no broadcast in IPv6.
 No anycast in IPv4 while concept of anycast is present in IPv6.
 Address configuration is done manually or by DHCP in IPv4 while in IPv6, stateless auto
configuration or DHCPv6 are used for address configuration.
 Packet flow is not identified by IPv4 while in IPv6, flow label field (of IPv6 header) can be
used for identifying packet flow for QoS handling.
 IPsec is optional in IPv4 while it is required for IPv6.
 IPv6 has simpler header than IPv4.
 Option field is present in IPv4 while in IPv6, separate extension header is defined for
optional data.
 Packet size of IPv4 is 576 bytes while of IPv6 is 1280 bytes (without fragmentation).
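The 32-bit vs 128-bit point can be checked directly with Python's `ipaddress` module (the sample addresses are arbitrary):

```python
import ipaddress

v4 = ipaddress.ip_address("192.168.10.1")
v6 = ipaddress.ip_address("2001:db8::1")

print(v4.version, v4.max_prefixlen)  # 4 32  -> a 32-bit address
print(v6.version, v6.max_prefixlen)  # 6 128 -> a 128-bit address
print(v4.packed.hex())               # c0a80a01, the 4 octets in hex
print(len(v6.packed))                # 16 octets
```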

1.4 Unicast Routing:


Unicast – Unicast means transmission from a single sender to a single receiver. It is point-
to-point communication between sender and receiver. Various protocols, such as TCP and
HTTP, operate over unicast.
 TCP is the most commonly used unicast transport protocol. It is a connection-oriented
protocol that relies on acknowledgements from the receiver side.
 HTTP stands for Hyper Text Transfer Protocol. It is an application protocol for
communication between a client and a server.
There are three major protocols for unicast routing:
1. Distance Vector Routing
2. Link State Routing
3. Path-Vector Routing
Distance Vector Routing (DVR):
A distance-vector routing (DVR) protocol requires that a router inform its neighbors of
topology changes periodically. Historically it is known as the old ARPANET routing algorithm
(also known as the Bellman-Ford algorithm).
Bellman-Ford basics – Each router maintains a distance vector table containing the distance
between itself and ALL possible destination nodes. Distances, based on a chosen metric, are
computed using information from the neighbors’ distance vectors.

Information kept by a DV router -


 Each router has an ID.
 Each link connected to a router has an associated link cost (static or dynamic).
 The intermediate hops.
Distance Vector Table Initialization -
 Distance to itself = 0
 Distance to ALL other routers = infinity

Distance Vector Algorithm –


1. A router transmits its distance vector to each of its neighbors in a routing packet.
2. Each router receives and saves the most recently received distance vector from each of its
neighbors.
3. A router recalculates its distance vector when:
 It receives a distance vector from a neighbor containing different information than
before.
 It discovers that a link to a neighbor has gone down.
The DV calculation is based on minimizing the cost to each destination
Dx(y) = Estimate of least cost from x to y
C(x,v) = Node x knows cost to each neighbor v
Dx = [Dx(y): y ∈ N ] = Node x maintains distance vector
Node x also maintains its neighbors' distance vectors
– For each neighbor v, x maintains Dv = [Dv(y): y ∈ N ]
Note –
 From time to time, each node sends its own distance vector estimate to its neighbors.
 When a node x receives a new DV estimate from any neighbor v, it saves v’s distance vector
and updates its own DV using the Bellman-Ford equation:
 Dx(y) = min { C(x,v) + Dv(y) } for each node y ∈ N
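The Bellman-Ford update above can be sketched in a few lines of Python (a toy model; the router names and link costs below are made up for illustration):

```python
INF = float("inf")

def dv_update(cost_to_neighbor, neighbor_vectors):
    """One Bellman-Ford relaxation: Dx(y) = min over neighbors v of C(x,v) + Dv(y)."""
    dests = set()
    for vec in neighbor_vectors.values():
        dests.update(vec)
    dx = {}
    for y in dests:
        dx[y] = min(cost_to_neighbor[v] + vec.get(y, INF)
                    for v, vec in neighbor_vectors.items())
    return dx

# Node x has neighbors y (link cost 2) and z (link cost 7);
# each neighbor has advertised its own distance vector.
cost = {"y": 2, "z": 7}
vectors = {"y": {"y": 0, "z": 1}, "z": {"y": 1, "z": 0}}
print(sorted(dv_update(cost, vectors).items()))  # [('y', 2), ('z', 3)]
```

Note how the distance to z comes out as 3 (via y) rather than the direct cost of 7, exactly the situation described in the example below.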

Example – Consider three routers X, Y and Z as shown in the figure. Each router has its own
routing table, and every routing table contains the distance to the destination nodes.

Consider router X: X shares its routing table with its neighbors, and the neighbors share their
routing tables with X; the distance from node X to each destination is then calculated using
the Bellman-Ford equation.
Dx(y) = min { C(x,v) + Dv(y) } for each node y ∈ N
As we can see, the distance from X to Z is less when Y is the intermediate node (hop), so this
is updated in X's routing table.

Similarly for Z also –

Finally the routing table for all –


Advantages of Distance Vector routing –
 It is simpler to configure and maintain than link state routing.

Disadvantages of Distance Vector routing –


 It is slower to converge than link state routing.
 It is at risk from the count-to-infinity problem.
 It creates more traffic than link state routing, since a hop-count change must be
propagated to all routers and processed on each router. Hop-count updates take place on
a periodic basis even if there are no changes in the network topology, so bandwidth-
wasting broadcasts still occur.
 For larger networks, distance vector routing results in larger routing tables than link
state routing, since each router must know about all other routers. This can also lead to
congestion on WAN links.
Note – Distance vector routing protocols (such as RIP) use UDP (User Datagram Protocol) for transport.

Link State Routing(LSR):


Link State Routing –
Link-state routing is the second family of routing protocols. While distance-vector routers
use a distributed algorithm to compute their routing tables, link-state routers exchange
messages that allow each router to learn the entire network topology. Based on
this learned topology, each router is then able to compute its routing table by using a
shortest-path computation.

Features of link state routing protocols –


 Link-state packet – A small packet that contains routing information.
 Link-state database – A collection of information gathered from link-state packets.
 Shortest path first algorithm (Dijkstra's algorithm) – A calculation performed on the
database that results in the shortest-path tree.
 Routing table – A list of known paths and interfaces.

Calculation of shortest path –


To find the shortest paths, each node needs to run the famous Dijkstra algorithm, which
uses the following steps:
 Step 1: The node is chosen as the root of the tree; this creates a tree with a single
node. The tentative cost of every other node is set to some value based on the
information in the link-state database.
 Step 2: The node selects, among all nodes not yet in the tree, the one nearest to the
root and adds it to the tree; the shape of the tree changes.
 Step 3: After this node is added to the tree, the cost of all nodes not in the tree must
be updated, because the best paths may have changed.
 Step 4: The node repeats Steps 2 and 3 until all nodes are added to the tree.
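The steps above can be sketched as a small Python function using a priority queue (the graph is a made-up example; a real router runs this over its link-state database):

```python
import heapq

def dijkstra(graph, root):
    """Shortest-path-first computation, as each link-state router runs it."""
    dist = {root: 0}
    heap = [(0, root)]              # the "tree" grows from the root node
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                # stale heap entry, node already settled
        for v, w in graph[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd        # step 3: update costs of nodes not in tree
                heapq.heappush(heap, (nd, v))
    return dist

graph = {
    "A": {"B": 1, "C": 4},
    "B": {"A": 1, "C": 2, "D": 5},
    "C": {"A": 4, "B": 2, "D": 1},
    "D": {"B": 5, "C": 1},
}
print(dijkstra(graph, "A"))  # {'A': 0, 'B': 1, 'C': 3, 'D': 4}
```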
Link-state protocols, in comparison to distance-vector protocols:
1. Require a large amount of memory.
2. Require many CPU cycles for shortest-path computations.
3. Use little bandwidth and react quickly to topology changes.
4. Must send all items in the database to neighbors to form link-state packets.
5. Must trust all neighbors in the topology.
6. Can use authentication mechanisms to avoid undesired adjacencies and problems.
7. Need no split-horizon techniques, which are not applicable in link-state routing.

Open shortest path first (OSPF) routing protocol –


 Open Shortest Path First (OSPF) is a unicast routing protocol developed by a working
group of the Internet Engineering Task Force (IETF).
 It is an intradomain routing protocol.
 It is an open (non-proprietary) standard.
 It is similar to the Routing Information Protocol (RIP).
 OSPF is a classless routing protocol, which means that its updates include the
subnet mask of each route it knows about, thus enabling variable-length subnet masks.
With variable-length subnet masks, an IP network can be broken into many subnets of
various sizes. This provides network administrators with extra network-configuration
flexibility. These updates are multicast to specific addresses (224.0.0.5 and 224.0.0.6).
 OSPF is implemented as a program in the network layer, using the services provided by
the Internet Protocol.
 An IP datagram that carries OSPF messages sets the value of its protocol field to 89.
 OSPF is based on the SPF algorithm, which is sometimes referred to as the Dijkstra
algorithm.
 OSPF has two versions – version 1 and version 2. Version 2 is the one mostly used.

OSPF Messages – OSPF is a very complex protocol. It uses five different types of messages.
These are as follows:
1. Hello message (Type 1) – Used by a router to introduce itself to the other routers.
2. Database description message (Type 2) – Normally sent in response to the Hello
message.
3. Link-state request message (Type 3) – Used by routers that need information
about a specific link-state packet.
4. Link-state update message (Type 4) – The main OSPF message for building the link-
state database.
5. Link-state acknowledgement message (Type 5) – Used to make the OSPF protocol
reliable.
Distance vector routing v/s Link state routing :
Distance Vector Routing –
 It is a dynamic routing algorithm in which each router computes the distance between
itself and each possible destination via its immediate neighbors.
 Each router shares its knowledge about the whole network with its neighbors and
updates its table based on what its neighbors report.
 The sharing of information with the neighbors takes place at regular intervals.
 It makes use of the Bellman-Ford algorithm for building routing tables.
 Problems – the count-to-infinity problem, which can be mitigated by split horizon.
– Good news spreads fast and bad news spreads slowly.
– Persistent looping problem, i.e. a loop can remain forever.
Link State Routing –
 It is a dynamic routing algorithm in which each router shares knowledge of its neighbors
with every other router in the network.
 A router sends information about its neighbors only, to all routers, through flooding.
 Information sharing takes place only when there is a change.
 It makes use of Dijkstra's algorithm for building routing tables.
 Problems – heavy traffic due to flooding of packets.
– Flooding can result in infinite looping, which can be solved by using the Time to
Live (TTL) field.
Comparison between Distance Vector Routing and Link State Routing:
1.5 Interior Gateway Routing Protocol (IGRP) & EIGRP :

Interior Gateway Routing Protocol (IGRP) is a proprietary distance vector routing protocol used
to communicate routing information within a host network. It was invented by Cisco.

IGRP manages the flow of routing information within connected routers in the host network or
autonomous system. The protocol ensures that every router has routing tables updated with
the best available path. IGRP also avoids routing loops by updating itself with the changes
occurring over the network and by error management.

Cisco created Interior Gateway Routing Protocol (IGRP) in response to the limitations of
Routing Information Protocol (RIP), which handles a maximum hop count of 15. IGRP
supports a maximum hop count of up to 255. The two primary purposes of IGRP are to:
 Communicate routing information to all connected routers within its boundary or
autonomous system
 Continue updating whenever there is a topological, network or path change that occurs

IGRP sends a notification of any new changes, and information about its status, to its neighbors
every 90 seconds.

IGRP manages a routing table with the most optimal path to respective nodes and to networks
within the parent network. Because it is a distance vector protocol, IGRP uses several
parameters to calculate the metric for the best path to a specific destination. These parameters
include delay, bandwidth, reliability, load and maximum transmission unit (MTU).

Goals for IGRP

The IGRP protocol allows a number of gateways to coordinate their routing. Its goals are the
following:

 Stable routing even in very large or complex networks. No routing loops should occur, even
as transients.

 Fast response to changes in network topology.

 Low overhead. That is, IGRP itself should not use more bandwidth than what is actually
needed for its task.

 Splitting traffic among several parallel routes when they are of roughly equal desirability.

 Taking into account error rates and level of traffic on different paths.

EIGRP (Enhanced Interior Gateway Routing Protocol)


EIGRP (Enhanced Interior Gateway Routing Protocol) is a network protocol that lets routers
exchange information more efficiently than with earlier network protocols. EIGRP evolved
from IGRP (Interior Gateway Routing Protocol), and routers using either EIGRP or IGRP can
interoperate, because the metric (the criteria used for selecting a route) used by one protocol
can be translated into the metric of the other protocol. EIGRP can be used not only for
Internet Protocol (IP) networks but also for AppleTalk and Novell NetWare networks.

Using EIGRP, a router keeps a copy of its neighbors' routing tables. If it can't find a route to a
destination in one of these tables, it queries its neighbors for a route, and they in turn query
their neighbors until a route is found. When a routing table entry changes in one of the routers,
it notifies its neighbors of the change only (some earlier protocols required sending the entire
table). To keep all routers aware of the state of their neighbors, each router sends out a
periodic "hello" packet. A router from which no "hello" packet has been received in a certain
period of time is assumed to be inoperative.

EIGRP uses the Diffusing-Update Algorithm (DUAL) to determine the most efficient (least cost)
route to a destination. A DUAL finite state machine contains decision information used by the
algorithm to determine the least-cost route (which considers distance and whether a
destination path is loop-free).

Difference between IGRP and EIGRP

IGRP vs EIGRP
IGRP, which stands for Interior Gateway Routing Protocol, is a relatively old routing protocol
that was invented by Cisco. It has been largely replaced since 1993 by the newer and superior
Enhanced IGRP, more commonly known as EIGRP. Even in the Cisco curriculum, IGRP is only
discussed as an obsolete protocol, as an introduction to EIGRP.
The main reason behind the advent of EIGRP was to move away from classful routing protocols
like IGRP because of the rapidly depleting IPv4 addresses. IGRP simply assumes that
all addresses in a given class belong to the same subnet. EIGRP utilizes variable-length subnet
masks (VLSM) to make more efficient use of the short supply of IPv4 addresses, prior to the
advent of IPv6.

Along with the shift away from classful routing, a few improvements to the algorithm used
to discover the best way around the network were introduced with EIGRP. It now uses the
Diffusing Update Algorithm, better known as DUAL, to calculate paths while ensuring that
no loops exist in the system, since loops are detrimental to the performance of the network.

EIGRP routers periodically send a 'hello' packet to inform other routers that they are present
and working well in the network. Updates, on the other hand, are no longer broadcast to
the entire network; they are sent only to the routers that need the information. Updates are
also no longer periodic: only when changes in the metric are observed are the corresponding
updates sent out to other routers. These partial updates reduce network traffic compared
with the full updates used by IGRP.

Metrics, which are used to measure the efficiency of a given route, have also changed in
EIGRP. Instead of using a 24-bit value in the calculation of the metric, EIGRP now utilizes 32
bits. To maintain compatibility, the older IGRP metrics are multiplied by 256, thereby bit-
shifting the value 8 bits to the left and conforming to the 32-bit metric of EIGRP.
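The conversion described above is easy to verify, since multiplying by 256 is exactly an 8-bit left shift (the sample metric value is arbitrary):

```python
def igrp_to_eigrp(igrp_metric):
    """Convert a 24-bit IGRP metric to EIGRP's 32-bit scale: shift left 8 bits."""
    return igrp_metric << 8  # identical to igrp_metric * 256

m = 8976
print(igrp_to_eigrp(m))             # 2297856
print(igrp_to_eigrp(m) == m * 256)  # True
```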

Summary:
1. EIGRP has totally replaced the obsolete IGRP
2. EIGRP is a classless routing protocol while IGRP is a classful routing protocol
3. EIGRP uses the DUAL while IGRP does not
4. EIGRP consumes much less bandwidth compared to IGRP
5. EIGRP expresses the metric as a 32 bit value while IGRP uses a 24 bit value
Difference between IGRP and EIGRP

S.NO | IGRP                                         | EIGRP
1.   | IGRP stands for Interior Gateway Routing     | EIGRP stands for Enhanced Interior Gateway
     | Protocol.                                    | Routing Protocol.
2.   | Interior Gateway Routing Protocol is a       | Enhanced Interior Gateway Routing Protocol
     | classful routing technique.                  | is a classless routing technique.
3.   | IGRP has slow convergence.                   | EIGRP has fast convergence.
4.   | In IGRP, the Bellman-Ford algorithm is used. | In EIGRP, the DUAL algorithm is used.
5.   | IGRP needs more (high) bandwidth.            | EIGRP needs less (low) bandwidth.
6.   | The maximum hop count in IGRP is 255.        | The maximum hop count in EIGRP is 256.
7.   | It provides 24 bits for delay.               | It provides 32 bits for delay.


1.6 Border Gateway Protocol (BGP)

Border Gateway Protocol (BGP) is a routing protocol used to transfer data and information
between different host gateways, the Internet or autonomous systems. BGP is a Path Vector
Protocol (PVP), which maintains paths to different hosts, networks and gateway routers and
determines the routing decision based on that. It does not use Interior Gateway Protocol (IGP)
metrics for routing decisions, but only decides the route based on path, network policies and
rule sets.

Border Gateway Protocol (BGP) is used to exchange routing information for the Internet and
is the protocol used between ISPs, which are in different ASes.
The protocol can connect together any internetwork of autonomous systems using an arbitrary
topology. The only requirement is that each AS have at least one router that is able to run BGP,
and that this router connects to at least one other AS's BGP router. BGP's main function is to
exchange network reachability information with other BGP systems. Border Gateway Protocol
constructs a graph of autonomous systems based on the information exchanged between BGP
routers.

Characteristics of Border Gateway Protocol (BGP):


 Inter-Autonomous System Configuration: The main role of BGP is to provide
communication between two autonomous systems.
 BGP supports Next-Hop Paradigm.
 Coordination among multiple BGP speakers within the AS (Autonomous System).
 Path Information: BGP advertisement also includes path information, along with the
reachable destination and next destination pair.
 Policy Support: BGP can implement policies configured by the administrator. For
example, a router running BGP can be configured to distinguish between routes that
are known within the AS and those learned from outside the AS.
 Runs over TCP.
 BGP conserves network bandwidth.
 BGP supports CIDR.
 BGP also supports Security.
Functionality of Border Gateway Protocol (BGP):
BGP peers perform three functions, which are given below.
1. The first function consists of initial peer acquisition and authentication. Both peers
establish a TCP connection and perform a message exchange that guarantees both sides
have agreed to communicate.
2. The second function mainly focuses on sending negative or positive reachability
information.
3. The third function verifies that the peers, and the network connection between them, are
functioning correctly.
BGP Route Information Management Functions:
 Route Storage:
Each BGP speaker stores information about how to reach other networks.
 Route Update:
Special techniques determine when and how to use the information received from
peers to properly update the routes.
 Route Selection:
Each BGP speaker uses the information in its route databases to select good routes to
each network on the internetwork.
 Route Advertisement:
Each BGP speaker regularly tells its peers what it knows about various networks and
the methods to reach them.
BGP roles include:

 Because it is a PVP, BGP communicates the entire autonomous system/network path
topology to other networks.
 It maintains its routing table with the topologies of all externally connected networks.
 It supports classless interdomain routing (CIDR), which allocates Internet Protocol (IP)
addresses to connected Internet devices.
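As a small illustration of the path-vector idea underlying BGP, a speaker can detect a routing loop by checking whether its own AS number already appears in the advertised AS path (a simplified sketch; the AS numbers below are made up):

```python
def accept_route(my_as, as_path):
    """A BGP speaker rejects any advertisement whose AS_PATH already contains
    its own AS number -- this is how path-vector routing stays loop-free."""
    return my_as not in as_path

print(accept_route(65001, [65003, 65002]))         # True  - usable route
print(accept_route(65001, [65003, 65001, 65000]))  # False - loop detected
```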
1.7 Routing Information Protocol (RIP)
Routing Information Protocol (RIP) is a distance vector protocol that uses hop count as its
primary metric. RIP defines how routers should share information when moving traffic among
an interconnected group of local area networks (LANs).

Routing Information Protocol (RIP) was originally designed for the Xerox PARC Universal
Protocol and was called GWINFO in the Xerox Network Systems (XNS) protocol suite in 1981.
RIP, which was defined in RFC 1058 in 1988, is known for being easy to configure and easy to
use in small networks.

In the enterprise, Open Shortest Path First (OSPF) routing has largely replaced RIP as the most
widely used Interior Gateway Protocol (IGP). RIP has been supplanted mainly because of its
inability to scale to very large and complex networks, despite its simplicity.

How Routing Information Protocol (RIP) works

RIP uses a distance vector algorithm to decide which path to put a packet on to get to its
destination. Each RIP router maintains a routing table, which is a list of all the destinations the
router knows how to reach. Each router broadcasts its entire routing table to its closest
neighbors every 30 seconds (RIPv2 instead multicasts it to 224.0.0.9). In this context, neighbors are the other
routers to which a router is connected directly -- that is, the other routers on the same network
segments this router is on. The neighbors, in turn, pass the information on to their nearest
neighbors, and so on, until all RIP hosts within the network have the same knowledge of routing
paths. This shared knowledge is known as convergence.

If a router receives an update on a route, and the new path is shorter, it will update its table
entry with the length and next-hop address of the shorter path. If the new path is longer, it will
wait through a "hold-down" period to see if later updates reflect the higher value as well. It will
only update the table entry if the new, longer path has been determined to be stable.
If a router crashes or a network connection is severed, the network discovers this because that
router stops sending updates to its neighbors, or stops sending and receiving updates along the
severed connection. If a given route in the routing table isn't updated across six successive
update cycles (that is, for 180 seconds) a RIP router will drop that route and let the rest of the
network know about the problem through its own periodic updates.

Features of RIP

RIP uses a modified hop count as a way to determine network distance. Modified reflects the
fact that network engineers can assign paths a higher cost. By default, if a router's neighbor
owns a destination network and can deliver packets directly to the destination network without
using any other routers, that route has one hop. In network management terminology, this is
described as a cost of 1.

RIP allows only 15 hops in a path. If a packet can't reach a destination in 15 hops, the
destination is considered unreachable. Paths can be assigned a higher cost (as if they involved
extra hops) if the enterprise wants to limit or discourage their use. For example, a satellite
backup link might be assigned a cost of 10 to force traffic to follow other routes when available.
Timers in RIP help regulate performance. They include:

Update timer—Frequency of routing updates. Every 30 seconds IP RIP sends a complete copy
of its routing table, subject to split horizon. (IPX RIP does this every 60 seconds.)

Invalid timer—Marks the absence of refreshed content in a routing update. RIP waits 180
seconds before marking a route as invalid and immediately putting it into holddown.

Hold-down timers and triggered updates—Assist with stability of routes in the Cisco
environment. Holddowns ensure that regular update messages do not inappropriately cause a
routing loop. The router doesn't act on non-superior new information for a certain period of
time. RIP's hold-down time is 180 seconds.

Flush timer—RIP waits an additional 240 seconds after hold down before it actually removes
the route from the table.

Other stability features that help prevent routing loops include poison reverse. Poison reverse
is a way for a gateway node to tell its neighbor gateways that one of the gateways is no longer
connected. To do this, the notifying gateway sets the number of hops to the unconnected
gateway to a number that indicates 'infinite', which in layman's terms simply means 'you can't
get there.' Since RIP allows up to 15 hops to another gateway, setting the hop count to 16 is
the equivalent of "infinite."
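Poison reverse can be sketched as follows (a toy routing table; the router names and prefixes are illustrative):

```python
RIP_INFINITY = 16  # RIP's "unreachable" hop count

def poison(routing_table, dead_next_hop):
    """Poison reverse: advertise routes learned via a failed neighbour with
    metric 16 so that other routers drop them quickly."""
    advert = {}
    for dest, (next_hop, hops) in routing_table.items():
        advert[dest] = RIP_INFINITY if next_hop == dead_next_hop else hops
    return advert

# Routes: destination -> (next hop, hop count). Neighbour R2 has just failed.
table = {"10.0.0.0/24": ("R2", 2), "10.0.1.0/24": ("R3", 1)}
print(poison(table, "R2"))  # {'10.0.0.0/24': 16, '10.0.1.0/24': 1}
```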

Routing Information Protocol (RIP) is a dynamic routing protocol which uses hop count as a
routing metric to find the best path between the source and the destination network. It is a
distance vector routing protocol which has AD value 120 and works on the application layer of
OSI model. RIP uses port number 520.
Hop Count:
Hop count is the number of routers occurring between the source and destination network.
The path with the lowest hop count is considered the best route to reach a network and is
therefore placed in the routing table. RIP prevents routing loops by limiting the number of
hops allowed in a path between source and destination. The maximum hop count allowed for
RIP is 15, and a hop count of 16 is considered network unreachable.
Features of RIP:
1. Updates of the network are exchanged periodically.
2. Updates (routing information) are always broadcast.
3. Full routing tables are sent in updates.
4. Routers always trust routing information received from neighbor routers. This is also
known as routing on rumours.
RIP versions:
There are three versions of the Routing Information Protocol – RIP version 1, RIP version 2
and RIPng.

RIP v1                         | RIP v2                        | RIPng
Sends updates as broadcast     | Sends updates as multicast    | Sends updates as multicast
Broadcast at 255.255.255.255   | Multicast at 224.0.0.9        | Multicast at FF02::9 (RIPng can
                               |                               | only run on IPv6 networks)
Doesn't support authentication | Supports authentication of    | –
of update messages             | RIPv2 update messages         |
Classful routing protocol      | Classless protocol, supports  | Classless updates are sent
                               | classful                      |
RIP v1 is known as a classful routing protocol because it doesn't send subnet mask
information in its routing updates.
RIP v2 is known as a classless routing protocol because it sends subnet mask information in
its routing updates.

Use debug command to get the details:

# debug ip rip
>> Use this command to show all routes configured in router, say for router R1 :

R1# show ip route


>> Use this command to show all protocols configured in router, say for router R1 :

R1# show ip protocols

Configuration:
Consider the above given topology which has 3-routers R1, R2, R3. R1 has IP address
172.16.10.6/30 on s0/0/1, 192.168.20.1/24 on fa0/0. R2 has IP address 172.16.10.2/30 on
s0/0/0, 192.168.10.1/24 on fa0/0. R3 has IP address 172.16.10.5/30 on s0/1, 172.16.10.1/30 on
s0/0, 10.10.10.1/24 on fa0/0.

Configure RIP for R1 :

R1(config)# router rip


R1(config-router)# network 192.168.20.0
R1(config-router)# network 172.16.10.4
R1(config-router)# version 2
R1(config-router)# no auto-summary
Note : the no auto-summary command disables auto-summarisation. Without it, routes are
summarised at their classful boundaries, as in version 1.

Configure RIP for R2 :

R2(config)# router rip


R2(config-router)# network 192.168.10.0
R2(config-router)# network 172.16.10.0
R2(config-router)# version 2
R2(config-router)# no auto-summary

Similarly, Configure RIP for R3 :

R3(config)# router rip


R3(config-router)# network 10.10.10.0
R3(config-router)# network 172.16.10.4
R3(config-router)# network 172.16.10.0
R3(config-router)# version 2
R3(config-router)# no auto-summary

RIP timers :

 Update timer : The default interval at which routers running RIP exchange routing
information is 30 seconds. Using the update timer, the routers exchange their routing
tables periodically.
 Invalid timer : If no update arrives within 180 seconds, the router considers the route
invalid. In this scenario, the router marks the hop count as 16 for that route.
 Hold-down timer : This is the time for which the router waits for a neighbour router to
respond. If the neighbour doesn't respond within this time, it is declared dead. It is 180
seconds by default.
 Flush timer : This is the time after which the route entry is flushed from the table. It is
60 seconds by default. This timer starts after the route has been declared invalid, so the
route is removed after 180 + 60 = 240 seconds.
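Using the default values given in this section (update every 30 s, invalid at 180 s, flush at 180 + 60 = 240 s), the lifecycle of a route entry can be sketched as:

```python
# Illustrative thresholds taken from the text above, in seconds.
UPDATE, INVALID, FLUSH = 30, 180, 240

def route_state(seconds_since_last_update):
    """Classify a RIP route by how long ago its last update arrived."""
    if seconds_since_last_update < INVALID:
        return "valid"
    if seconds_since_last_update < FLUSH:
        return "invalid (advertised with hop count 16)"
    return "flushed from the routing table"

for t in (29, 180, 240):
    print(t, "->", route_state(t))
```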

Note that all these timers are adjustable. Use this command to change the timers :

R1(config-router)# timers basic


R1(config-router)# timers basic 20 80 80 90
1.8 Open shortest path first (OSPF) protocol States

Open Shortest Path First (OSPF) is a link-state routing protocol which is used to find the best
path between the source and the destination router using its own Shortest Path First (SPF)
algorithm. OSPF was developed by the Internet Engineering Task Force (IETF) as one of the
Interior Gateway Protocols (IGP), i.e. protocols which aim at moving packets within a large
autonomous system or routing domain. It is a network-layer protocol which works on protocol
number 89 and uses AD value 110. OSPF uses multicast address 224.0.0.5 for normal
communication and 224.0.0.6 for updates to the designated router (DR)/backup designated
router (BDR).
OSPF terms –
1. Router ID – The highest active IP address on the router. First, the highest loopback
address is considered; if no loopback is configured, the highest active IP address on an
interface of the router is considered.
2. Router priority – An 8-bit value assigned to a router operating OSPF, used to elect the DR
and BDR in a broadcast network.
3. Designated Router (DR) – Elected to minimize the number of adjacencies formed. The DR
distributes the LSAs to all the other routers. The DR is elected in a broadcast network, to
which all the other routers send their DBDs. In a broadcast network, a router requests an
update from the DR, and the DR responds to that request with the update.
4. Backup Designated Router (BDR) – The BDR is a backup to the DR in a broadcast network.
When the DR goes down, the BDR becomes DR and performs its functions.
DR and BDR election – The DR and BDR election takes place in a broadcast or multi-access
network. The criteria for the election are:
1. The router having the highest router priority is declared the DR.
2. If there is a tie in router priority, the highest router ID is considered. First, the highest
loopback address is considered; if no loopback is configured, the highest active IP
address on an interface of the router is considered.
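The election criteria above can be sketched in Python (a simplified model: it ignores, for example, that a priority of 0 makes a router ineligible in real OSPF, and it compares router IDs as strings rather than as 32-bit values):

```python
def elect_dr_bdr(routers):
    """Pick DR and BDR: highest priority wins, ties broken by highest router ID.
    Routers are (priority, router_id) pairs; the values below are made up."""
    ranked = sorted(routers, key=lambda r: (r[0], r[1]), reverse=True)
    return ranked[0], ranked[1]

routers = [(1, "1.1.1.1"), (2, "2.2.2.2"), (1, "3.3.3.3")]
dr, bdr = elect_dr_bdr(routers)
print("DR:", dr)    # (2, '2.2.2.2') - highest priority
print("BDR:", bdr)  # (1, '3.3.3.3') - highest router ID among the rest
```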
OSPF states – A device operating OSPF goes through certain states. These states are:
1. Down – In this state, no hello packets have been received on the interface.
Note – The Down state doesn't mean that the interface is physically down. Here, it means
that the OSPF adjacency process has not started yet.
2. Init – In this state, a hello packet has been received from the other router.
3. 2-Way – In the 2-Way state, both routers have received hello packets from the other
router. Bidirectional connectivity has been established.
Note – Between the 2-Way state and the ExStart state, the DR and BDR election takes place.
4. ExStart – In this state, NULL DBDs are exchanged, and the master/slave election takes
place. The router with the higher router ID becomes the master; the other becomes the
slave. This election decides which router will send its DBD first (only routers that have
formed a neighbourship take part in this election).
5. Exchange – In this state, the actual DBDs are exchanged.
6. Loading – In this state, LSR, LSU and LSAck (link-state acknowledgement) packets are
exchanged.
Important – When a router receives a DBD from another router, it compares it with its
own DBD. If the received DBD is more up to date than its own, the router sends an LSR to
the other router stating which links it needs. The other router replies with an LSU
containing the updates needed. In return, the router replies with a link-state
acknowledgement.
7. Full – In this state, synchronization of all the information takes place. OSPF routing can
begin only after the Full state.
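The progression through these states, for a successful adjacency, can be modelled as a simple ordered list (an illustrative sketch; real OSPF can also fall back to earlier states on errors):

```python
# The OSPF adjacency states, in the order a successful adjacency passes through them.
OSPF_STATES = ["Down", "Init", "2-Way", "ExStart", "Exchange", "Loading", "Full"]

def next_state(current):
    """Return the state that follows `current`; Full is terminal."""
    i = OSPF_STATES.index(current)
    return OSPF_STATES[i + 1] if i + 1 < len(OSPF_STATES) else "Full"

print(next_state("2-Way"))    # ExStart (DR/BDR election happens at this boundary)
print(next_state("Loading"))  # Full
```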

Open shortest path first (OSPF) is a link-state routing protocol which is used to find the best
path between the source and the destination router using its own shortest path first (SPF)
algorithm. A link-state routing protocol is a protocol which uses the concept of triggered
updates, i.e., updates are sent only when a change is observed in the learned routing table,
unlike a distance-vector routing protocol, where routing tables are exchanged periodically.
Criteria –
To form a neighbourship in OSPF, both routers must satisfy the following criteria:
1. They should be present in the same area
2. Router IDs must be unique
3. The subnet mask should be the same
4. The hello and dead timers should be the same
5. The stub flag must match
6. Authentication must match
OSPF supports NULL, plain text, MD5 authentication.

Note – Both routers (neighbors) should have the same type of authentication enabled, e.g., if
one neighbor has MD5 authentication enabled then the other should also have MD5
authentication enabled.
OSPF messages –
OSPF uses certain messages for the communication between the routers operating OSPF.
 Hello message – These are keep-alive messages used for neighbor discovery/recovery.
They are exchanged every 10 seconds and include the following information: Router ID,
Hello/dead interval, Area ID, Router priority, DR and BDR IP addresses, and authentication data.
 Database Description (DBD) – It is the OSPF routes of the router. This contains topology
of an AS or an area (routing domain).
 Link state request (LSR) – When a router receives a DBD, it compares it with its own DBD.
If the received DBD has some updates that its own lacks, an LSR is sent to the
neighbor.
 Link state update (LSU) – When a router receives an LSR, it responds with an LSU message
containing the details requested.
 Link state acknowledgement – This provides reliability to the link state exchange process.
It is sent as the acknowledgement of LSU.
 Link state advertisement (LSA) – It is an OSPF data packet that contains link-state routing
information, shared only with the routers to which adjacency has been formed.
Note – Link State Advertisement and Link State Acknowledgement both are different messages.
Timers –
 Hello timer – The interval at which an OSPF router sends hello messages on an interface. It is
10 seconds by default.
 Dead timer – The interval after which a neighbor is declared dead if it fails to send
hello packets. It is 40 seconds by default, usually 4 times the hello interval, but
can be configured manually as needed.
OSPF supports/provides/advantages –
 Both IPv4 and IPv6 routed protocols
 Load balancing with equal cost routes for same destination
 VLSM and route summarization
 Unlimited hop counts
 Trigger updates for fast convergence
 A loop free topology using SPF algorithm
 Run on most routers
 Classless protocol
OSPF also has some disadvantages: it requires extra CPU processing to run the SPF algorithm,
requires more RAM to store the adjacency topology, and is more complex to set up and harder to
troubleshoot.
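The SPF algorithm referred to above is Dijkstra's shortest-path algorithm. A minimal sketch in Python, using a hypothetical three-router topology with made-up link costs:

```python
import heapq

def spf(graph, root):
    """Dijkstra's shortest path first: cost from root to every other router."""
    dist = {root: 0}
    pq = [(0, root)]
    while pq:
        cost, node = heapq.heappop(pq)
        if cost > dist.get(node, float("inf")):
            continue  # stale queue entry, a cheaper path was already found
        for neigh, link_cost in graph[node].items():
            new_cost = cost + link_cost
            if new_cost < dist.get(neigh, float("inf")):
                dist[neigh] = new_cost
                heapq.heappush(pq, (new_cost, neigh))
    return dist

# Hypothetical topology: OSPF-style link costs between three routers
topo = {
    "R1": {"R2": 10, "R3": 1},
    "R2": {"R1": 10, "R3": 1},
    "R3": {"R1": 1, "R2": 1},
}
print(spf(topo, "R1"))  # R1 reaches R2 via R3 at cost 2, not directly at 10
```

Because each router computes paths over the same full topology map, the resulting tree is loop-free by construction, which is the advantage listed above.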

Open shortest path first (OSPF) router roles –


An area is a group of contiguous networks and routers. Routers belonging to the same area share a
common topology table and Area ID. The Area ID is associated with the router’s interface, as a
router can belong to more than one area. There are several roles a router can play in OSPF:
1. Backbone router – Area 0 is known as the backbone area and the routers in area 0 are
known as backbone routers. If a router exists only partially in area 0, it is still a
backbone router.
2. Internal router – An internal router is a router which has all of its interfaces in a single
area.
3. Area Boundary Router (ABR) – The router which connects backbone area with another
area is called Area Boundary Router. It belongs to more than one area. The ABRs
therefore maintain multiple link-state databases that describe both the backbone
topology and the topology of the other areas.
4. Autonomous System Boundary Router (ASBR) – When an OSPF router is connected to a
routing domain running a different protocol, like EIGRP, Border Gateway Protocol, or any other
routing protocol, that domain is a separate Autonomous System (AS). The router which connects
two different ASes (where one of the interfaces is running OSPF) is known as an Autonomous
System Boundary Router. These routers perform redistribution: ASBRs run both OSPF and
another routing protocol, such as RIP or BGP, and advertise the exchanged external routing
information throughout their AS.
Note – A router can be a backbone router and an Area Boundary Router at the same time, i.e., a
router can perform more than one role at a time.
Configuration –

There is a small topology in which 3 routers, R1, R2 and R3, are connected. R1 is
connected to networks 10.255.255.80/30 (interface fa0/1), 192.168.10.48/29 (interface fa0/0)
and 10.255.255.8/30 (interface gi0/0).
Note – In the figure, IP addresses are written with their respective interfaces, but since
networks have to be advertised, you have to write the network ID. R2 is connected to networks
192.168.10.64/29 (interface fa0/0) and 10.255.255.80/30 (interface fa0/1). R3 is connected to
networks 10.255.255.8/30 (int fa0/1) and 192.168.10.16/29 (int fa0/0).
Now, configuring OSPF for R1.

R1(config)#router ospf 1

R1(config-router)#network 192.168.10.48 0.0.0.7 area 1

R1(config-router)#network 10.255.255.80 0.0.0.3 area 1

R1(config-router)#network 10.255.255.8 0.0.0.3 area 1

Here, 1 is the OSPF instance or process ID. It is locally significant, so it can be the same or
different on each router. You have to use a wildcard mask here, and the area used is 1.
Now, configuring R2
R2(config)#router ospf 1

R2(config-router)#network 192.168.10.64 0.0.0.7 area 1

R2(config-router)#network 10.255.255.80 0.0.0.3 area 1

Similarly, for R3

R3(config)#router ospf 1

R3(config-router)#network 192.168.10.16 0.0.0.7 area 1

R3(config-router)#network 10.255.255.8 0.0.0.3 area 1

You can check the configuration by typing the command

R3#show ip protocols
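The wildcard masks used in the network statements above (0.0.0.7 for a /29, 0.0.0.3 for a /30) are simply the bitwise inverse of the subnet mask. A small Python sketch to compute them:

```python
def wildcard(prefix_len):
    """Wildcard mask = bitwise inverse of the subnet mask for a prefix length."""
    mask = (0xFFFFFFFF << (32 - prefix_len)) & 0xFFFFFFFF
    inverse = mask ^ 0xFFFFFFFF
    return ".".join(str((inverse >> shift) & 0xFF) for shift in (24, 16, 8, 0))

print(wildcard(29))  # 0.0.0.7 -> matches network 192.168.10.48/29 above
print(wildcard(30))  # 0.0.0.3 -> matches the /30 point-to-point links
```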
1.9 Difference between Unicast, Broadcast and Multicast

The "cast" term here signifies that some data (a stream of packets) is being transmitted from the
sender(s) to the recipient(s) over the communication channel that helps them
communicate. Let’s see some of the "cast" concepts that are prevailing in the computer
networking field.

1. Unicast – This type of information transfer is useful when there is a single sender and a single
recipient, so, in short, you can term it a one-to-one transmission. For example, if a device with IP
address 10.1.2.0 in one network wants to send a traffic stream (data packets) to the device with
IP address 20.12.4.2 in another network, then unicast comes into the picture. This is the most
common form of data transfer over networks.

2. Broadcast – Broadcasting (one-to-all) transfer techniques can be classified into two types:

 Limited Broadcasting –
Suppose you have to send a stream of packets to all the devices on the network you
reside in; this kind of broadcasting comes in handy. To achieve this, the sender puts
255.255.255.255 (all 32 bits of the IP address set to 1), called the Limited Broadcast
Address, in the destination address of the datagram (packet) header. This address is
reserved for information transfer to all recipients from a single sender on the network.

 Direct Broadcasting –
This is useful when a device in one network wants to transfer a packet stream to all the
devices of another network. This is achieved by setting all the Host ID bits of the
destination address to 1, referred to as the Direct Broadcast Address in the datagram
header.

This mode is mainly utilized by television networks for video and audio distribution.
One important protocol of this class in computer networks is the Address Resolution Protocol
(ARP), which resolves an IP address into a physical address, something necessary for the
underlying communication.
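The direct broadcast address computation described above (all Host ID bits set to 1) can be sketched with Python's standard ipaddress module:

```python
import ipaddress

def directed_broadcast(network_cidr):
    """Directed broadcast address: all Host ID bits of the network set to 1."""
    return str(ipaddress.ip_network(network_cidr).broadcast_address)

print(directed_broadcast("192.168.10.48/29"))  # 192.168.10.55
print(directed_broadcast("10.0.0.0/8"))        # 10.255.255.255
```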

3. Multicast – In multicasting, one or more senders and one or more recipients participate in the
data transfer. Here the traffic lies between the boundaries of unicast (one-to-one) and broadcast
(one-to-all). Multicast lets servers direct single copies of data streams that are then replicated
and routed to the hosts that request them. IP multicast requires the support of other protocols
such as IGMP (Internet Group Management Protocol) and multicast routing for its working.
Also, in classful IP addressing, Class D is reserved for multicast groups.
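Since Class D (224.0.0.0/4) is reserved for multicast groups, checking whether a given IPv4 address is a multicast group address is a short sketch:

```python
import ipaddress

def is_multicast(addr):
    """True if addr falls in Class D, 224.0.0.0 - 239.255.255.255."""
    return ipaddress.ip_address(addr) in ipaddress.ip_network("224.0.0.0/4")

print(is_multicast("239.1.2.3"))    # True: a Class D group address
print(is_multicast("192.168.1.1"))  # False: an ordinary unicast address
```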

1.10 Internet Group Management Protocol (IGMP)

The Internet Group Management Protocol (IGMP) is an Internet protocol that provides a way
for an Internet computer to report its multicast group membership to adjacent routers.
Multicasting allows one computer on the Internet to send content to multiple other computers
that have identified themselves as interested in receiving the originating computer's content.
Multicasting can be used for such applications as updating the address books of mobile
computer users in the field, sending out company newsletters to a distribution list, and
"broadcasting" high-bandwidth programs of streaming media to an audience that has "tuned
in" by setting up a multicast group membership.

Using the Open Systems Interconnection (OSI) communication model, IGMP is part of
the Network layer. IGMP is formally described in the Internet Engineering Task Force (IETF)
Request for Comments (RFC) 2236.
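A host reports its group membership simply by joining a group through the sockets API; the operating system then emits the IGMP membership report on its behalf. A minimal Python sketch (the group address and port are hypothetical, and the join may fail on a machine without a multicast-capable interface):

```python
import socket
import struct

GROUP = "239.1.2.3"  # hypothetical Class D group address
PORT = 5007          # hypothetical port

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# ip_mreq structure: 4-byte group address + 4-byte local interface (any)
mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
try:
    # This call makes the kernel send an IGMP membership report
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
except OSError:
    pass  # no multicast-capable interface available
# ... datagrams sent to GROUP:PORT would now arrive via sock.recvfrom() ...
sock.close()
```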

Difference between ICMP and IGMP:

ICMP stands for Internet Control Message Protocol and IGMP stands for Internet Group
Management Protocol. Both are important protocols in networking.
ICMP is used to check reachability to a network or host; it is what is used when you PING an IP
address to check whether there is connectivity or not. IGMP, on the other hand, is used for
group (multicast) packet transfer, for example where clients watch TV through a satellite
connection.
The major distinction between ICMP and IGMP is that IGMP is used to form groups of hosts,
whereas ICMP is used to send error messages and operational information between hosts.
Let’s see the difference between ICMP and IGMP:

S.NO | ICMP | IGMP
1. | ICMP stands for Internet Control Message Protocol. | IGMP stands for Internet Group Management Protocol.
2. | ICMP has the PING feature. | IGMP has the multicast feature.
3. | ICMP is used for unicasting. | IGMP is used for multicasting.
4. | ICMP can operate between host and host, host and router, or router and router. | IGMP is used between a client and a multicast router.
5. | ICMP is a layer-3 (network layer) protocol. | IGMP is also a layer-3 (network layer) protocol.
6. | ICMP controls unicast communication and is used for error reporting. | IGMP controls multicast communication.
7. | ICMP is a mechanism used by hosts and gateways to send notifications of datagram problems back to the sender. | IGMP is used to facilitate the simultaneous transmission of a message to a group of recipients.
1.11 Multicast Listener Discovery (MLD):

MLD protocol enables IPv6 routers to discover multicast listeners, the nodes that are
configured to receive multicast data packets, on its directly attached interfaces. The protocol
specifically discovers which multicast addresses are of interest to its neighboring nodes and
provides this information to the active multicast routing protocol that makes decisions on the
flow of multicast data packets. Periodically, the multicast router sends general queries
requesting multicast address listener information from systems on the attached networks. These
queries are used to build and refresh the multicast address listener state on the attached
networks. In response to the queries, multicast listeners reply with membership reports. These
membership reports specify their multicast address listener state and their desired set of
sources with current-state multicast address records. The multicast router also processes
unsolicited filter-mode-change records and source-list-change records from systems that want
to indicate interest in receiving or not receiving traffic from particular sources.

IPv6 Multicast Overview

An IPv6 multicast group is an arbitrary group of receivers that want to receive a particular data
stream. This group has no physical or geographical boundaries--receivers can be located
anywhere on the Internet or in any private network. Receivers that are interested in receiving
data flowing to a particular group must join the group by signaling their local device. This
signaling is achieved with the MLD protocol.

Devices use the MLD protocol to learn whether members of a group are present on their
directly attached subnets. Hosts join multicast groups by sending MLD report messages. The
network then delivers data to a potentially unlimited number of receivers, using only one copy
of the multicast data on each subnet. IPv6 hosts that wish to receive the traffic are known as
group members.
Packets delivered to group members are identified by a single multicast group address.
Multicast packets are delivered to a group using best-effort reliability, just like IPv6 unicast
packets.

The multicast environment consists of senders and receivers. Any host, regardless of whether it
is a member of a group, can send to a group. However, only the members of a group receive
the message.

A multicast address is chosen for the receivers in a multicast group. Senders use that address as
the destination address of a datagram to reach all members of the group.

Membership in a multicast group is dynamic; hosts can join and leave at any time. There is no
restriction on the location or number of members in a multicast group. A host can be a member
of more than one multicast group at a time.

How active a multicast group is, its duration, and its membership can vary from group to group
and from time to time. A group that has members may have no activity.

Multicast Listener Discovery Protocol for IPv6

To start implementing multicasting in the campus network, users must first define who receives
the multicast. The MLD protocol is used by IPv6 devices to discover the presence of multicast
listeners (for example, nodes that want to receive multicast packets) on their directly attached
links, and to discover specifically which multicast addresses are of interest to those neighboring
nodes. It is used for discovering local group and source-specific group membership. The MLD
protocol provides a means to automatically control and limit the flow of multicast traffic
throughout your network with the use of special multicast queriers and hosts.

The difference between multicast queriers and hosts is as follows:

 A querier is a network device, such as a router, that sends query messages to discover which
network devices are members of a given multicast group.
 A host is a receiver, including devices that send report messages to inform the querier of a host
membership.

A set of queriers and hosts that receive multicast data streams from the same source is called a
multicast group. Queriers and hosts use MLD reports to join and leave multicast groups and to
begin receiving group traffic.

MLD uses the Internet Control Message Protocol (ICMP) to carry its messages. All MLD
messages are link-local with a hop limit of 1, and they all have the alert option set. The alert
option implies an implementation of the hop-by-hop option header.

MLD has three types of messages:

 Query--General, group-specific, and multicast-address-specific. In a query message, the
multicast address field is set to 0 when MLD sends a general query. The general query learns
which multicast addresses have listeners on an attached link. Group-specific and
multicast-address-specific queries are the same; a group address is a multicast address.

 Report--In a report message, the multicast address field is that of the specific IPv6 multicast
address to which the sender is listening.
 Done--In a done message, the multicast address field is that of the specific IPv6 multicast
address to which the source of the MLD message is no longer listening.

An MLD report must be sent with a valid IPv6 link-local source address, or the unspecified
address (::), if the sending interface has not yet acquired a valid link-local address. Sending
reports with the unspecified address is allowed to support the use of IPv6 multicast in the
Neighbor Discovery Protocol.
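As with IGMP on IPv4, an IPv6 host triggers an MLD report simply by joining a group through the sockets API. A sketch in Python (the group address is a hypothetical link-local multicast address; the join may fail without a suitable IPv6 interface, so it is guarded here):

```python
import socket
import struct

GROUP = "ff02::db8:1"  # hypothetical link-local multicast group address

sock = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
# ipv6_mreq structure: 16-byte group address + 4-byte interface index (0 = any)
mreq = socket.inet_pton(socket.AF_INET6, GROUP) + struct.pack("@I", 0)
try:
    # This call makes the kernel send an MLD report for the group
    sock.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_JOIN_GROUP, mreq)
except OSError:
    pass  # no IPv6 multicast-capable interface available
sock.close()
```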

For stateless auto configuration, a node is required to join several IPv6 multicast groups in
order to perform duplicate address detection (DAD). Prior to DAD, the only address the
reporting node has for the sending interface is a tentative one, which cannot be used for
communication. Therefore, the unspecified address must be used.
MLD states that result from MLD version 2 or MLD version 1 membership reports can be limited
globally or by interface. The MLD group limits feature provides protection against denial of
service (DoS) attacks caused by MLD packets. Membership reports in excess of the configured
limits will not be entered in the MLD cache, and traffic for those excess membership reports
will not be forwarded.

MLD provides support for source filtering. Source filtering allows a node to report interest in
listening to packets only from specific source addresses (as required to support SSM), or from
all addresses except specific source addresses sent to a particular multicast address.

When a host using MLD version 1 sends a leave message, the device needs to send query
messages to reconfirm that this host was the last MLD version 1 host joined to the group before
it can stop forwarding traffic. This function takes about 2 seconds. This "leave latency" is also
present in IGMP version 2 for IPv4 multicast.

1.12 Protocol-Independent Multicast (PIM):

The term "protocol independent" means that PIM can function by making use of routing
information supplied by a variety of communications protocols. In information technology, a
protocol is a defined set of rules that end points in a circuit or network employ to facilitate
communication.

Protocol-independent multicast is a family of protocols covering the different modes of
internet multicasting for successful transmission of information in one-to-many and many-to-many
modes. All protocol-independent multicast protocols have a similar format for control
messages. Using the routing information made available by different communication
protocols, protocol-independent multicast can function without depending on any
specific routing protocol. In other words, protocol-independent multicast does not use its own
mechanism of topology discovery.

The four modes are:


 sparse mode (SM)

 dense mode (DM)

 source-specific multicast (SSM)

 bidirectional

 Sparse mode: This protocol makes use of the assumption that in a multicast group, all
the receivers will be sparsely distributed in the environment. It is largely for wide area
usage. The protocol supports the usage of shared trees, which are nothing but multicast
distribution trees rooted at a specific node. It also supports the usage of source-based trees,
where there is a separate multicast distribution tree for every source transmitting data to a
multicast group. In sparse mode, it’s important to have a mechanism to discover the root
node, or rendezvous point.
 Dense mode: This protocol makes the opposite assumption to sparse mode. It
assumes that in a multicast group, all receivers are densely distributed in the
environment. By flooding the multicast traffic, it builds shortest-path trees and
prunes back tree branches where no receivers are present. The protocol is based only on
source-based trees and as a result does not depend on rendezvous points, unlike sparse
mode. This makes dense mode easier to implement and deploy; however, its scaling
properties are poor.
 Source-specific multicast: This protocol focuses on just one node which acts as a root
and the trees are built based on the same. It offers a reliable, scalable and secure model
for broadcasting information.
 Bidirectional protocol independent multicast: It is similar to sparse mode, with the
difference being in the method of data transmission. In bidirectional PIM, the data flow is
bidirectional, i.e., data flows in both directions along a branch of a tree. The data is not
encapsulated. Bidirectional PIM does not use source-based trees at all, and there is also
no designated router. The protocol has great scaling properties, especially when there is
a large set of sources for each group.
1.13 Distance Vector Multicast Routing Protocol (DVMRP):

Distance Vector Multicast Routing Protocol is the oldest routing protocol that has been used to
support multicast data transmission over networks. The protocol sends multicast data in the
form of unicast packets that are reassembled into multicast data at the destination.

DVMRP can run over various types of networks, including Ethernet local area networks (LANs).
It can even run through routers that are not multicast-capable. It has been considered as an
intermediate solution while "real" multicast Internet Protocol (IP) routing evolves.

The DVMRP is used for multicasting over IP networks whose routing protocols do not support
multicast. The DVMRP is based on the RIP protocol but is more complicated than RIP. DVMRP
maintains its own routing database to keep track of the return paths to the source of multicast
packets.

The DVMRP operates as follows:

 The first message for any source-group pair is forwarded to the entire multicast network,
with respect to the time-to-live (TTL) of the packet.

 TTL restricts the area to be flooded by the message.

 All the leaf routers that do not have members on directly attached subnetworks send back
prune messages to the upstream router.

 The branch that transmitted a prune message is deleted from the delivery tree.

 The delivery tree, which is spanning to all the members in the multicast group, is
constructed.

In the figure below, DVMRP is running on switches A, B, and C. IGMP is also running on switch
C, which is connected to the host directly. After the host sends an IGMP report to switch C,
multicast streams are sent from the multicast source to the host along the path built by
DVMRP.
DVMRP’s main tasks include:

 Tracks multicast datagram source paths


 Encapsulates packets as Internet Protocol (IP) datagrams
 Supports multicast IP datagram tunneling via unsupported encapsulated and addressed
unicast packet routers
 Generates dynamic multicast IP delivery trees via reverse path multicasting and a
distributed routing algorithm
 Exchanges routing datagrams made up of small, fixed-length headers and tagged data
streams via Internet Group Management Protocol
 Handles tunnel and physical interfacing according to broadcast routing exchange source
trees produced during truncated tree branch removal
 Manages reverse path forwarding for multicast traffic forwarding to downstream
interfaces
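The reverse path forwarding that underlies DVMRP's delivery-tree construction can be illustrated with a small sketch: a router accepts a multicast packet only if it arrives on the interface the router itself would use to reach the source (the table and interface names below are hypothetical):

```python
def rpf_check(unicast_routes, source, arrival_iface):
    """Reverse path forwarding: accept only if the packet came in on the
    interface this router would use to send traffic back toward the source."""
    return unicast_routes.get(source) == arrival_iface

# hypothetical unicast routing table: source prefix -> upstream interface
routes = {"10.1.0.0/16": "eth0", "10.2.0.0/16": "eth1"}
print(rpf_check(routes, "10.1.0.0/16", "eth0"))  # True: forward downstream
print(rpf_check(routes, "10.1.0.0/16", "eth1"))  # False: drop to avoid loops
```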

DVMRP header components are as follows:

 Version
 Type
 Subtype: Response, request, non-membership report or non-membership cancellations
 Checksum: The 16-bit one's complement sum of the complete message, not including the
IP header, with 16-bit alignment (padding if required). The checksum field itself is set to
zero during the computation.

1.14 MOSPF (Multicast Open Shortest Path First)

MOSPF (Multicast Open Shortest Path First) is an extension to the OSPF (Open Shortest Path
First) protocol that facilitates interoperation between unicast and multicast routers. MOSPF is
becoming popular for proprietary network multicasting and may eventually supersede RIP
(Routing Information Protocol).

Here's a brief explanation of how MOSPF works: multicast information goes out in OSPF link
state advertisements (LSAs). That information allows a MOSPF router to identify
active multicast groups and the associated local area networks (LANs). MOSPF creates a
distribution tree for each multicast source and group and another tree for active sources
sending to the group. The current state of the tree is cached; each time a link state changes or
the cache times out, the tree must be recomputed to accommodate the changes.

MOSPF uses both the source and the destination to route a datagram, based on information in the
OSPF link state database about the autonomous system's topology. A group-membership LSA makes
it possible to identify the location of each group member. The shortest path for the datagram is
calculated from that information.

MOSPF was designed to be backwards-compatible with non-multicast OSPF routers for


forwarding regular unicast traffic.

This isn't really a category, but a specific instance of a protocol. MOSPF is the multicast
extension to OSPF (Open Shortest Path First) which is a unicast link-state routing protocol.
Link-state routing protocols work by having each router periodically send a routing message
listing its neighbors and how far away they are. These routing messages are flooded throughout
the entire network, so every router can build up a map of the network, which it can then
use to build forwarding tables (using a Dijkstra algorithm) to decide quickly the correct
next hop for sending a particular packet.

Extending this to multicast is achieved simply by having each router also list in a routing
message the groups for which it has local receivers. Thus given the map and the locations of the
receivers, a router can also build a multicast forwarding table for each group.

MOSPF also suffers from poor scaling. With flood-and-prune protocols, data traffic is
an implicit message about where there are senders, and so routers need to store unwanted
state where there are no receivers. With MOSPF there are explicit messages about where all
the receivers are, and so routers need to store unwanted state where there are no senders.
However, both types of protocol build very efficient distribution trees.
CHAPTER 2.

2.1 Transport Layer responsibilities

The Transport Layer is the second layer of the TCP/IP model. It is an end-to-end layer used to deliver
messages to a host. It is termed an end-to-end layer because it provides a point-to-point
connection, rather than hop-to-hop, between the source host and destination host to deliver
the services reliably. The unit of data encapsulation in the Transport Layer is a segment.
The standard protocols used by the Transport Layer to provide these functionalities are
TCP (Transmission Control Protocol), UDP (User Datagram Protocol), DCCP (Datagram
Congestion Control Protocol), etc.
Various responsibilities of a Transport Layer –

 Process to process delivery –


While the Data Link Layer requires the MAC address (the 48-bit address contained inside the
Network Interface Card of every host machine) of the source and destination hosts to correctly
deliver a frame, and the Network Layer requires the IP address for appropriate routing of
packets, in a similar way the Transport Layer requires a port number to correctly deliver the
segments of data to the correct process among the multiple processes running on a
particular host. A port number is a 16-bit address used to uniquely identify any client-server
program.
 End-to-end Connection between hosts –
The transport layer is also responsible for creating the end-to-end connection between hosts,
for which it mainly uses TCP and UDP. TCP is a secure, connection-oriented protocol
which uses a handshake to establish a robust connection between two end
hosts. TCP ensures reliable delivery of messages and is used in various applications. UDP,
on the other hand, is a stateless and unreliable protocol which provides only best-effort
delivery. It is suitable for applications which have little concern for flow or error
control and need to send bulk data, like video conferencing. It is often used in
multicasting protocols.
 Multiplexing and Demultiplexing –
Multiplexing allows simultaneous use of different applications over a network which is
running on a host. Transport layer provides this mechanism which enables us to send
packet streams from various applications simultaneously over a network. Transport layer
accepts these packets from different processes differentiated by their port numbers and
passes them to the network layer after adding proper headers. Similarly, demultiplexing is
required at the receiver side to obtain the data coming from various processes. The transport
layer receives the segments of data from the network layer and delivers them to the appropriate
process running on the receiver’s machine.
 Congestion Control –
Congestion is a situation in which too many sources over a network attempt to send data
and the router buffers start overflowing due to which loss of packets occur. As a result
retransmission of packets from the sources increases the congestion further. In this
situation the transport layer provides congestion control in different ways. It uses open
loop congestion control to prevent congestion and closed loop congestion control to
remove congestion in a network once it has occurred. TCP provides AIMD (additive
increase, multiplicative decrease) and the leaky bucket technique for congestion control.
 Data integrity and Error correction –
The transport layer checks for errors in the messages coming from the application layer by using
error detection codes and computing checksums: it checks whether the received data is
corrupted, and uses ACK and NACK services to inform the sender whether the data has arrived
or not, thereby checking the integrity of the data.
 Flow control –
The transport layer provides a flow control mechanism between the adjacent layers of the
TCP/IP model. TCP also prevents data loss due to a fast sender and a slow receiver by
imposing flow control techniques. It uses the sliding window protocol, in which the
receiver sends a window advertisement back to the sender informing it of the amount of
data it can receive.
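Process-to-process delivery and demultiplexing by port number can be demonstrated with two UDP sockets on the same host; the port numbers and payloads below are arbitrary choices for the sketch:

```python
import socket
import threading

def make_server(port, results):
    """Bind a UDP socket to a port and collect one datagram in a thread."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.bind(("127.0.0.1", port))
    def run():
        data, _ = s.recvfrom(1024)
        results[port] = data.decode()
        s.close()
    t = threading.Thread(target=run)
    t.start()
    return t

results = {}
threads = [make_server(p, results) for p in (6001, 6002)]

client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.sendto(b"for process A", ("127.0.0.1", 6001))
client.sendto(b"for process B", ("127.0.0.1", 6002))
for t in threads:
    t.join()
client.close()
print(results)  # each datagram reached only the process bound to its port
```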

2.2 User Datagram Protocol (UDP)

User Datagram Protocol (UDP) is a Transport Layer protocol. UDP is a part of the Internet Protocol
suite, referred to as the UDP/IP suite. Unlike TCP, it is an unreliable and connectionless protocol, so
there is no need to establish a connection prior to data transfer.
Though Transmission Control Protocol (TCP) is the dominant transport layer protocol used with
most Internet services, providing assured delivery, reliability and much more, all these
services cost us additional overhead and latency. Here, UDP comes into the picture. For
real-time services like computer gaming, voice or video communication and live conferences, we
need UDP. Since high performance is needed, UDP permits packets to be dropped instead of
processing delayed packets. There is no error checking in UDP, so it also saves bandwidth.
User Datagram Protocol (UDP) is more efficient in terms of both latency and bandwidth.

UDP Header –
The UDP header is a fixed, simple 8-byte header, while the TCP header may vary from 20 bytes to 60
bytes. The first 8 bytes contain all the necessary header information and the remaining part consists of
data. UDP port number fields are each 16 bits long, so the range for port numbers is defined
from 0 to 65535; port number 0 is reserved. Port numbers help to distinguish different user
requests or processes.
1. Source Port : Source Port is 2 Byte long field used to identify port number of source.
2. Destination Port : It is 2 Byte long field, used to identify the port of destined packet.
3. Length : Length is the length of UDP including header and the data. It is 16-bits field.
4. Checksum : Checksum is 2 Bytes long field. It is the 16-bit one’s complement of the one’s
complement sum of the UDP header, pseudo header of information from the IP header
and the data, padded with zero octets at the end (if necessary) to make a multiple of two
octets.
Notes – Unlike TCP, checksum calculation is not mandatory in UDP over IPv4. No error control or
flow control is provided by UDP. Hence UDP depends on IP and ICMP for error reporting.
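The checksum procedure above can be sketched in Python. This is an illustrative implementation of the RFC 768 calculation for IPv4; the addresses, ports and payload below are made-up example values, not from any real exchange.

```python
import struct

def ones_complement_sum16(data: bytes) -> int:
    """Sum 16-bit words with end-around carry (one's complement arithmetic)."""
    if len(data) % 2:
        data += b"\x00"                # pad with a zero octet to an even length
    total = 0
    for (word,) in struct.iter_unpack("!H", data):
        total += word
        total = (total & 0xFFFF) + (total >> 16)   # fold any carry back in
    return total

def udp_checksum(src_ip: bytes, dst_ip: bytes, udp_segment: bytes) -> int:
    """Checksum over the IPv4 pseudo-header plus the UDP header and data.

    The pseudo-header is: source IP, destination IP, a zero byte,
    the protocol number (17 for UDP) and the UDP length.
    Note: a computed checksum of 0 is transmitted as 0xFFFF on the wire;
    that special case is not handled in this sketch.
    """
    pseudo = src_ip + dst_ip + struct.pack("!BBH", 0, 17, len(udp_segment))
    return ~ones_complement_sum16(pseudo + udp_segment) & 0xFFFF

# Build a UDP segment (checksum field zeroed while computing) from
# port 1024 to port 53, carrying a 5-byte payload.
payload = b"hello"
header = struct.pack("!HHHH", 1024, 53, 8 + len(payload), 0)
csum = udp_checksum(bytes([192, 168, 0, 1]), bytes([192, 168, 0, 2]),
                    header + payload)
```

A receiver verifies by running the same sum over the segment with the checksum field filled in: the complement of the result must be 0.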
Applications of UDP:
 Used for simple request-response communication when the amount of data is small, so
there is less concern about flow and error control.
 It is a suitable protocol for multicasting, since connectionless datagrams can be sent to
many receivers without per-connection state.
 UDP is used by some routing update protocols such as RIP (Routing Information Protocol).
 Normally used for real-time applications which cannot tolerate uneven delays between
sections of a received message.
 The following implementations use UDP as a transport-layer protocol:
 NTP (Network Time Protocol)
 DNS (Domain Name System)
 BOOTP, DHCP
 NNP (Network News Protocol)
 Quote of the Day protocol
 TFTP, RTSP, RIP, OSPF
 The application layer can perform some tasks through UDP:
 Trace Route
 Record Route
 Timestamp
 On sending, UDP takes a message from the application, attaches its header and hands it
to the network layer; on receiving, it strips the header and delivers the data to the user
process. Because it does so little, it works fast.
 In fact, UDP is a null protocol if you remove the checksum field.
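The connectionless request-response pattern described above can be seen with Python's socket module. This is a toy loopback exchange (the message text and the use of an OS-chosen port are arbitrary choices for illustration), not a real service:

```python
import socket

# "Server": bind a datagram socket to a loopback port chosen by the OS.
# There is no listen()/accept() step as there would be with TCP.
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))          # port 0 = let the OS pick a free port
server_addr = server.getsockname()
server.settimeout(5)

# "Client": no handshake -- the very first datagram carries the request.
client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.settimeout(5)
client.sendto(b"what time is it?", server_addr)

request, client_addr = server.recvfrom(2048)   # one datagram per recvfrom()
server.sendto(b"reply: " + request, client_addr)

reply, _ = client.recvfrom(2048)
client.close()
server.close()
```

Note that nothing here guarantees delivery: if either datagram were lost, `recvfrom` would simply time out, and any retry logic would be the application's job.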

Differences between TCP and UDP

1. TCP is a connection-oriented protocol: the communicating devices must establish a
   connection before transmitting data and should close the connection after transmitting
   it. UDP is a datagram-oriented protocol: there is no overhead for opening, maintaining
   or terminating a connection, which makes UDP efficient for broadcast and multicast
   transmission.

2. TCP is reliable, as it guarantees delivery of data to the destination host. The delivery
   of data to the destination cannot be guaranteed in UDP.

3. TCP provides extensive error-checking mechanisms, because it provides flow control and
   acknowledgement of data. UDP has only a basic error-checking mechanism using
   checksums.

4. Sequencing of data is a feature of TCP: packets arrive in order at the receiver. There
   is no sequencing of data in UDP; if ordering is required, it has to be managed by the
   application layer.

5. TCP is comparatively slower than UDP. UDP is faster, simpler and more efficient than
   TCP.

6. Retransmission of lost packets is possible in TCP; there is no retransmission of lost
   packets in UDP.

7. The TCP header is 20 bytes at minimum (up to 60 bytes with options); the UDP header is
   a fixed 8 bytes.

8. TCP is heavy-weight; UDP is lightweight.

9. TCP is used by HTTP, HTTPS, FTP, SMTP and Telnet. UDP is used by DNS, DHCP, TFTP,
   SNMP, RIP and VoIP.

2.3 Working of TCP Tahoe, New Reno and Vegas

TCP Tahoe:

Before TCP Tahoe, TCP used a go-back-n model to control network congestion; Tahoe then added

slow start, congestion avoidance and fast retransmit.

Slow Start:

The sender's congestion window increases exponentially, growing with every acknowledgement.

Once a threshold (ssthresh) is reached, the congestion window increases linearly, i.e. by one

segment per RTT, and congestion avoidance begins.


A packet loss is taken as a sign of congestion. As soon as congestion occurs, the congestion

window is reduced to 1 and the sender starts over (running slow start again until the threshold

is reached).

Problem:

Tahoe is not very efficient, because every time a packet is lost it waits for

the retransmission timeout and only then retransmits the packet. It reduces the congestion

window to 1 because of a single packet loss; this inefficiency is costly on links with a high

bandwidth-delay product.
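Tahoe's behaviour can be sketched as a simple round-trip-granularity simulation. The window is counted in MSS units; the loss round, the initial threshold and the per-RTT doubling are illustrative simplifications (a real stack grows the window per ACK), not part of any actual implementation:

```python
def tahoe_cwnd(rtts, loss_at, initial_ssthresh=16):
    """Track Tahoe's congestion window (in MSS) round trip by round trip."""
    cwnd, ssthresh, trace = 1, initial_ssthresh, []
    for rtt in range(rtts):
        trace.append(cwnd)
        if rtt in loss_at:                       # loss detected this round
            ssthresh = max(cwnd // 2, 2)         # remember half the window
            cwnd = 1                             # restart from slow start
        elif cwnd < ssthresh:
            cwnd = min(cwnd * 2, ssthresh)       # slow start: exponential growth
        else:
            cwnd += 1                            # congestion avoidance: linear
    return trace

# One loss in round 6: window climbs to 18, collapses to 1, climbs again.
trace = tahoe_cwnd(rtts=12, loss_at={6})
```

The trace shows the characteristic sawtooth: exponential growth to the threshold, a linear climb, then a full collapse to 1 on loss.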

TCP Reno:

TCP Reno adds two algorithms, Fast Retransmit and Fast Recovery, for congestion control.

Fast Retransmit:

When the sender receives three duplicate acknowledgements for a sent packet, it retransmits

that packet without waiting for the timeout.

Fast Recovery:

After the retransmission, Reno enters Fast Recovery. In Fast Recovery, after the packet

loss, the congestion window is not reduced to 1; instead it is reduced to half the current

window size.

Problem:

TCP Reno helps when only one packet is lost; in the case of multiple packet losses it behaves like Tahoe.

TCP New Reno, a modification of TCP Reno, then evolved to deal with multiple
packet losses.

TCP New Reno:

TCP New Reno is more efficient than Tahoe and Reno. In Reno, Fast Recovery is entered when

three duplicate acknowledgements are received and exits as soon as the first new acknowledgement

arrives; in New Reno, Fast Recovery is not exited until all data outstanding when it was entered

has been acknowledged, or until a retransmission timeout occurs. In this way it

avoids unnecessary multiple fast retransmits for a single window of data.
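The exit condition can be sketched as follows. Here `recover` marks the highest sequence number outstanding when loss was detected, and the cumulative ACK numbers are made-up values; this is a hypothetical helper for illustration, not code from any TCP stack:

```python
def newreno_fast_recovery(recover, acks):
    """Stay in fast recovery until an ACK covers `recover`.

    `acks` is the sequence of cumulative ACK numbers received. Each
    partial ACK (one that advances but does not reach `recover`)
    triggers one retransmission of the next missing segment instead
    of a full exit from fast recovery.
    """
    retransmits = 0
    for ack in acks:
        if ack >= recover:
            return retransmits, ack    # full ACK: leave fast recovery
        retransmits += 1               # partial ACK: retransmit the next hole
    return retransmits, None           # still in fast recovery

# Two holes filled by partial ACKs, then a full ACK covering `recover`.
result = newreno_fast_recovery(recover=1000, acks=[400, 700, 1000])
```

Under Reno, each of those holes could instead have triggered a separate fast retransmit with its own window halving; New Reno handles them within one fast-recovery episode.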

TCP Vegas:

TCP Tahoe, Reno and New Reno detect and control congestion only after congestion occurs, but

there is a better way to overcome the congestion problem: TCP Vegas. TCP Vegas detects

congestion without causing congestion.

It differs from TCP Reno concerning:

· Slow start
· Packet loss detection
· Detection of available bandwidth
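Vegas's bandwidth detection can be sketched as a single congestion-avoidance step: expected throughput is cwnd/BaseRTT (BaseRTT being the smallest RTT observed), actual throughput is cwnd/RTT, and their difference scaled by BaseRTT estimates how many packets the connection has queued in the network. The alpha/beta thresholds of 1 and 3 packets are commonly cited defaults; this is an illustration of the idea, not the full algorithm:

```python
def vegas_adjust(cwnd, base_rtt, current_rtt, alpha=1, beta=3):
    """One congestion-avoidance step of TCP Vegas (window in packets)."""
    expected = cwnd / base_rtt             # throughput if queues were empty
    actual = cwnd / current_rtt            # throughput actually observed
    diff = (expected - actual) * base_rtt  # ~ packets buffered in the path
    if diff < alpha:
        return cwnd + 1                    # too little data in flight: grow
    if diff > beta:
        return cwnd - 1                    # queues building up: back off
    return cwnd                            # within the target band: hold

# RTT still near BaseRTT -> grow; RTT inflated by queueing delay -> shrink.
grow = vegas_adjust(10, base_rtt=0.1, current_rtt=0.1)
shrink = vegas_adjust(10, base_rtt=0.1, current_rtt=0.2)
```

Because the signal is rising delay rather than packet loss, Vegas can back off before any packet is actually dropped, which is what "detecting congestion without causing congestion" means.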

TCP Tahoe (summary):

When a loss occurs, a fast retransmit is sent, half of the current CWND is saved as ssthresh, and
slow start begins again from the initial CWND. Once the CWND reaches ssthresh, TCP switches to the
congestion avoidance algorithm, where each new ACK increases the CWND by MSS × MSS /
CWND (roughly one MSS per RTT). This results in a linear increase of the CWND.

TCP Reno (summary):
A fast retransmit is sent, half of the current CWND is saved as ssthresh and also used as
the new CWND, thus skipping slow start and going directly to the congestion avoidance
algorithm. The overall algorithm here is called fast recovery.
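The contrast between the two loss reactions above can be stated in a few lines of Python (a sketch: windows in MSS units, with a minimum ssthresh of 2 assumed):

```python
def on_loss_tahoe(cwnd):
    """Tahoe: halve ssthresh, collapse the window to 1, redo slow start."""
    ssthresh = max(cwnd // 2, 2)
    return 1, ssthresh            # (new cwnd, new ssthresh)

def on_loss_reno(cwnd):
    """Reno fast recovery: resume congestion avoidance at half the window."""
    ssthresh = max(cwnd // 2, 2)
    return ssthresh, ssthresh     # (new cwnd, new ssthresh)

tahoe_after = on_loss_tahoe(32)   # window falls all the way to 1
reno_after = on_loss_reno(32)     # window falls only to 16
```

Starting from a 32-segment window, Tahoe restarts at 1 and must slow-start back up, while Reno continues at 16, which is exactly why Reno recovers faster from an isolated loss.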
