Acn Notes
Address Resolution Protocol (ARP) is a procedure for mapping a dynamic Internet Protocol
address (IP address) to a permanent physical machine address in a local area network (LAN).
The physical machine address is also known as a Media Access Control or MAC address.
The job of the ARP is essentially to translate 32-bit addresses to 48-bit addresses and vice-
versa. This is necessary because in IP Version 4 (IPv4), the most common level of Internet
Protocol (IP) in use today, an IP address is 32-bits long, but MAC addresses are 48-bits long.
ARP works between layers 2 and 3 of the Open Systems Interconnection model (OSI
model). The MAC address exists on layer 2 of the OSI model, the data link layer, while the IP
address exists on layer 3, the network layer.
ARP can also be used for IP over other LAN technologies, such as token ring, fiber distributed
data interface (FDDI) and IP over ATM.
In IPv6, which uses 128-bit addresses, ARP has been replaced by the Neighbor Discovery
protocol.
When a new computer joins a LAN, it is assigned a unique IP address to use for identification
and communication. When an incoming packet destined for a host machine on a particular LAN
arrives at a gateway, the gateway asks the ARP program to find a MAC address that matches
the IP address. A table called the ARP cache maintains a record of each IP address and its
corresponding MAC address.
All operating systems in an IPv4 Ethernet network keep an ARP cache. Every time a host
requests a MAC address in order to send a packet to another host in the LAN, it checks its ARP
cache to see if the IP to MAC address translation already exists. If it does, then a new ARP
request is unnecessary. If the translation does not already exist, then the request for network
addresses is sent and ARP is performed.
ARP broadcasts a request packet to all the machines on the LAN, asking whether any of them
is using that particular IP address. When a machine recognizes the IP address as its own, it
sends a reply so ARP can update the cache for future reference and the communication can
proceed.
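The cache-then-broadcast flow described above can be sketched in Python. This is a minimal illustration, not a real ARP implementation; `broadcast_arp_request` is a hypothetical stand-in for the actual layer-2 broadcast.

```python
# Hypothetical sketch of ARP resolution: consult the cache first,
# broadcast only when the IP-to-MAC translation is unknown.
arp_cache = {}  # maps IP address -> MAC address

def resolve(ip, broadcast_arp_request):
    """Return the MAC address for `ip`, consulting the ARP cache first."""
    if ip in arp_cache:              # translation already exists: no new request needed
        return arp_cache[ip]
    mac = broadcast_arp_request(ip)  # ask every host on the LAN "who has this IP?"
    if mac is not None:
        arp_cache[ip] = mac          # record the reply for future reference
    return mac

# Simulate a LAN where one host owns 192.168.1.7:
lan = {"192.168.1.7": "aa:bb:cc:dd:ee:ff"}
print(resolve("192.168.1.7", lan.get))  # broadcast answered by the owner
print(resolve("192.168.1.7", lan.get))  # second call served from the cache
```

In a real stack the cached entry would also carry a timestamp so it can be aged out after a few minutes, as described below.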
Host machines that don't know their own IP address can use the Reverse ARP (RARP) protocol
for discovery.
The ARP cache is limited in size and is periodically cleansed of entries to free up space; in fact,
addresses tend to stay in the cache for only a few minutes. Frequent updates let other
devices on the network see when a physical host changes its IP address. During the
cleaning process, unused entries are deleted, along with any records of unsuccessful attempts
to communicate with computers that are not currently powered on.
Proxy ARP
Proxy ARP enables a network proxy to answer ARP queries for IP addresses that are outside the
network. This allows packets to be successfully transferred from one subnetwork to another.
When an ARP inquiry packet is broadcast, the routing table is examined to find which device on
the LAN can reach the destination fastest. This device, which is often a router, becomes
a gateway for forwarding packets outside the network to their intended destinations.
Any LAN that uses ARP must be wary of ARP spoofing, also referred to as ARP poison routing or
ARP cache poisoning. ARP spoofing is an attack in which a hacker broadcasts false ARP
messages over a LAN in order to link the attacker's MAC address with the IP address of a
legitimate computer or server within the network. Once that link has been established, frames
meant for the legitimate IP address are sent to the hacker's computer first, giving the attacker
access to any data intended for that address.
ARP spoofing can have serious impacts on enterprises. When used in their simplest form, ARP
spoofing attacks can steal sensitive information. However, the attacks can also go beyond this
and facilitate other malicious attacks, including:
man-in-the-middle attacks
denial-of-service attacks
session hijacking
RARP (Reverse Address Resolution Protocol) is a protocol by which a physical machine in a local
area network can request to learn its IP address from a gateway server's Address Resolution
Protocol (ARP) table or cache. A network administrator creates a table in a local area network's
gateway router that maps physical machine (Media Access Control, or MAC) addresses to
corresponding Internet Protocol addresses. When a new machine is set up, its RARP client
program asks the RARP server on the router to send it its IP address. Assuming an entry has
been set up in the router table, the RARP server returns the IP address to the machine, which
can store it for future use.
RARP is available for Ethernet, Fiber Distributed-Data Interface, and token ring LANs.
Difference between ARP and RARP:
ARP and RARP are both network layer protocols. Whenever a host needs to send an IP
datagram to another host, the sender requires both the logical address and the physical
address of the receiver. Dynamic mapping provides two protocols: ARP and RARP. The basic
difference between them is that ARP, when provided with the logical address of the receiver,
obtains the receiver's physical address, whereas RARP, when provided with the physical
address of a host, obtains the host's logical address from a server.
Comparison chart:
BASIS OF COMPARISON | ARP | RARP
Basic | Retrieves the physical address of the receiver. | Retrieves the logical address for a computer from the server.
Mapping | Maps a 32-bit logical (IP) address to a 48-bit physical address. | Maps a 48-bit physical address to a 32-bit logical (IP) address.
1.2 Internet Control Message Protocol (ICMP)
IP does not have an inbuilt mechanism for sending error and control messages; it depends
on the Internet Control Message Protocol (ICMP) to provide error control. ICMP is used for
reporting errors and management queries. It is a supporting protocol used by network devices
such as routers to send error messages and operational information, e.g. that a requested
service is not available or that a host or router could not be reached.
Source quench message:
A source quench message is a request to decrease the traffic rate of messages being sent to a
host (the destination). In other words, when the receiving host detects that the rate at which
packets are being sent to it is too fast, it sends a source quench message to the source asking
it to slow the pace down so that no packets are lost.
ICMP takes the source IP from the discarded packet and informs the source by sending a
source quench message. The source then reduces its transmission speed so that the router is
relieved of congestion.
When the congested router is far away from the source, ICMP sends the source quench
message hop by hop so that every router along the path reduces its transmission speed.
Parameter problem:
Whenever a packet arrives at a router, the calculated header checksum must equal the
received header checksum; only then is the packet accepted by the router.
Time exceeded message:
When some fragments are lost in a network, the fragments held by the router are dropped.
ICMP then takes the source IP from the discarded packet and informs the source that the
datagram was discarded, e.g. because its time-to-live field reached zero, by sending a time
exceeded message.
Destination unreachable:
A destination unreachable message is generated by the host or its inbound gateway to inform
the client that the destination is unreachable for some reason.
It is not only routers that issue ICMP error messages; the destination host may also send one
when some type of failure (link failure, hardware failure, port failure, etc.) occurs in the
network.
Redirection message:
A redirect message requests that data packets be sent on an alternate route; it informs a host
to update its routing information.
For example, if a host tries to send data through router R1, R1 forwards the data to router R2,
and there is a direct path from the host to R2, then R1 sends a redirect message to inform the
host that the better route to the destination is directly through R2. The host then sends data
packets for that destination directly to R2, and R2 forwards the original datagram to the
intended destination.
If the datagram contains routing information, however, this message will not be sent even if a
better route is available, as redirects should only be sent by gateways and should not be sent
by Internet hosts.
In short, whenever a packet is forwarded in the wrong direction and later redirected onto the
correct path, ICMP sends a redirect message.
Internet Protocol, being a layer-3 (network layer) protocol in the OSI model, takes data
segments from layer 4 (the transport layer) and divides them into what are called packets. An
IP packet encapsulates the data unit received from the layer above and adds its own header
information.
The encapsulated data is referred to as IP Payload. IP header contains all the necessary
information to deliver the packet at the other end.
The IP header includes many relevant fields, including the Version Number, which in this
context is 4. Other fields are as follows:
ECN: Explicit Congestion Notification, carries information about congestion seen on
the route.
Flags: If an IP packet is too large for network resources to handle, these 3 flag bits tell
whether it can be fragmented or not. The MSB of this 3-bit field is always set to '0'.
Fragment Offset: This offset tells the exact position of the fragment in the original IP
Packet.
Time to Live: To avoid looping in the network, every packet is sent with some TTL value
set, which tells the network how many routers (hops) this packet can cross. At each
hop, its value is decremented by one and when the value reaches zero, the packet is
discarded.
Protocol: Tells the network layer at the destination host to which protocol this packet
belongs, i.e. the next-level protocol. For example, the protocol number of ICMP is 1, of
TCP is 6 and of UDP is 17.
Header Checksum: This field is used to keep checksum value of entire header which is
then used to check if the packet is received error-free.
Source Address: 32-bit address of the Sender (or source) of the packet.
Destination Address: 32-bit address of the Receiver (or destination) of the packet.
Options: This is an optional field, used if the value of IHL is greater than 5. It may
contain values for options such as Security, Record Route, Time Stamp, etc.
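To make the field layout concrete, here is a short Python sketch that packs and unpacks the 20-byte fixed IPv4 header using the standard `struct` module. The sample addresses and values are illustrative only.

```python
import struct
import socket

def parse_ipv4_header(raw):
    """Unpack the 20-byte fixed IPv4 header fields described above."""
    (ver_ihl, tos, total_len, ident, flags_frag,
     ttl, proto, checksum, src, dst) = struct.unpack("!BBHHHBBH4s4s", raw[:20])
    return {
        "version": ver_ihl >> 4,      # 4 for IPv4
        "ihl": ver_ihl & 0x0F,        # header length in 32-bit words (5 if no Options)
        "ttl": ttl,                   # decremented at each hop
        "protocol": proto,            # 1 = ICMP, 6 = TCP, 17 = UDP
        "src": socket.inet_ntoa(src),
        "dst": socket.inet_ntoa(dst),
    }

# A hand-built sample header: version 4, IHL 5, TTL 64, protocol 6 (TCP)
hdr = struct.pack("!BBHHHBBH4s4s", 0x45, 0, 40, 1, 0, 64, 6, 0,
                  socket.inet_aton("192.168.1.152"), socket.inet_aton("8.8.8.8"))
print(parse_ipv4_header(hdr))
```

Note the `!` in the format string: IP header fields are transmitted in network (big-endian) byte order.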
IPv4 – Addressing:
Unicast Addressing Mode:
In this mode, data is sent only to one destined host. The Destination Address field contains the
32-bit IP address of the destination host. Here the client sends data to the targeted server.
Broadcast Addressing Mode:
In this mode the packet is addressed to all hosts in a network segment. The Destination
Address field contains the special broadcast address, i.e. 255.255.255.255. When a host sees
this packet on the network, it is bound to process it. Here the client sends a packet which is
entertained by all the servers.
Multicast Addressing Mode:
This mode is a mix of the previous two modes, i.e. the packet sent is destined neither to a
single host nor to all the hosts on the segment. In this packet, the Destination Address
contains a special address which starts with 224.x.x.x and can be entertained by more than
one host.
Here a server sends packets which is entertained by more than one Servers. Every network has
one IP address reserved for network number which represents the network and one IP address
reserved for Broadcast Address, which represents all the host in that network.
IPv4 uses hierarchical addressing scheme. An IP address which is 32-bits in length, is divided
into two or three parts as depicted:
A single IP address can contain information about the network and its sub-network and
ultimately the host. This scheme enables IP Address to be hierarchical where a network can
have many sub-networks which in turn can have many hosts.
Subnet Mask
The 32-bit IP address contains information about the host and its network, and it is necessary
to distinguish between the two. For this, routers use the Subnet Mask, which is as long as the
network part of the IP address; the Subnet Mask itself is also 32 bits long. If the IP address in
binary is ANDed with its Subnet Mask, the result yields the network address. For example, if
the IP address is 192.168.1.152 and the Subnet Mask is 255.255.255.0, then ANDing the two
gives 192.168.1.0.
This way the Subnet Mask helps extract the Network ID and the Host ID from an IP address: it
can now be identified that 192.168.1.0 is the network number and 192.168.1.152 is the host
on that network.
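The AND operation can be verified with Python's standard `ipaddress` module:

```python
import ipaddress

ip = ipaddress.ip_address("192.168.1.152")
mask = ipaddress.ip_address("255.255.255.0")

# ANDing the 32-bit address with the 32-bit mask yields the network number
network = ipaddress.ip_address(int(ip) & int(mask))
print(network)  # 192.168.1.0

# The host part is what the mask excludes
host = int(ip) & ~int(mask) & 0xFFFFFFFF
print(host)  # 152
```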
Binary Representation
The positional value method is the simplest way of converting binary to decimal. An IP
address is a 32-bit value divided into 4 octets. A binary octet contains 8 bits, and the value of
each bit is determined by the position of the bit value '1' in the octet.
The positional value of a bit is 2 raised to the power (position − 1); that is, the value of a
bit 1 at position 6 is 2^(6−1) = 2^5 = 32. The total value of the octet is found by adding up the
positional values of its bits. The value of 11000000 is 128 + 64 = 192. Some examples are
shown in the table below:
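The positional-value method can be sketched in a few lines of Python:

```python
def octet_to_decimal(bits):
    """Sum 2**(position - 1) for each '1' bit, positions counted from the right."""
    return sum(2 ** (len(bits) - 1 - i) for i, ch in enumerate(bits) if ch == "1")

print(octet_to_decimal("11000000"))  # 128 + 64 = 192
print(octet_to_decimal("10101000"))  # 128 + 32 + 8 = 168
print(octet_to_decimal("11111111"))  # all positional values summed = 255
```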
IPv4 - Address Classes
The Internet Corporation for Assigned Names and Numbers (ICANN) is responsible for assigning IP addresses.
The first octet referred here is the left most of all. The octets numbered as follows depicting
dotted decimal notation of IP Address:
The number of networks and the number of hosts per class can be derived from the number
of network and host bits: 2^(network bits) networks and 2^(host bits) − 2 hosts per network.
When calculating host IP addresses, 2 addresses are subtracted because they cannot be
assigned to hosts: the first IP of a network is the network number and the last IP is reserved
for the broadcast address.
Class A Address
The first bit of the first octet is always set to 0 (zero). Thus the first octet ranges from 1 – 127,
i.e.
Class A addresses include only IPs from 1.x.x.x to 126.x.x.x. The IP range 127.x.x.x is reserved
for loopback IP addresses.
The default subnet mask for a Class A IP address is 255.0.0.0, which implies that Class A
addressing can have 126 networks (2^7 − 2) and 16777214 hosts (2^24 − 2).
Class B Address
An IP address which belongs to class B has the first two bits in the first octet set to 10, i.e.
Class B IP Addresses range from 128.0.x.x to 191.255.x.x. The default subnet mask for Class B is
255.255.x.x.
Class B has 16384 (2^14) network addresses and 65534 (2^16 − 2) host addresses.
Class C Address
The first octet of Class C IP address has its first 3 bits set to 110, that is
Class C IP addresses range from 192.0.0.x to 223.255.255.x. The default subnet mask for Class
C is 255.255.255.x.
Class C gives 2097152 (2^21) network addresses and 254 (2^8 − 2) host addresses.
Class D Address
Very first four bits of the first octet in Class D IP addresses are set to 1110, giving a range of
Class D has IP address range from 224.0.0.0 to 239.255.255.255. Class D is reserved for
Multicasting. In multicasting data is not destined for a particular host, that's why there is no
need to extract host address from the IP address, and Class D does not have any subnet mask.
Class E Address
This IP class is reserved for experimental purposes only, e.g. for R&D or study. IP addresses in
this class range from 240.0.0.0 to 255.255.255.254. Like Class D, this class too is not equipped
with any subnet mask.
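The five class ranges above can be summarized in a small Python function. This is a simplification that keys off the first octet only, which is sufficient because each class is defined by the leading bits of that octet.

```python
def ipv4_class(ip):
    """Classify an IPv4 address by the leading bits of its first octet."""
    first = int(ip.split(".")[0])
    if first == 127:
        return "loopback"
    if 1 <= first <= 126:
        return "A"  # leading bit 0
    if 128 <= first <= 191:
        return "B"  # leading bits 10
    if 192 <= first <= 223:
        return "C"  # leading bits 110
    if 224 <= first <= 239:
        return "D"  # leading bits 1110, multicast
    return "E"      # 240-255, experimental

print(ipv4_class("10.0.0.1"))    # A
print(ipv4_class("172.16.5.9"))  # B
print(ipv4_class("224.0.0.5"))   # D
```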
IPv4 - Subnetting (CIDR)
Each IP class is equipped with its own default subnet mask, which bounds that IP class to a
fixed number of networks and a fixed number of hosts per network. Classful IP addressing
does not provide the flexibility of having fewer hosts per network or more networks per IP
class.
CIDR, or Classless Inter-Domain Routing, provides the flexibility of borrowing bits from the
host part of the IP address and using them as a network within the network, called a subnet.
By using subnetting, a single Class A IP address can be used to create smaller sub-networks,
which provides better network management capabilities.
Class A Subnets
In Class A, only the first octet is used as the network identifier and the remaining three octets
are assigned to hosts (i.e. 16777214 hosts per network). To make more subnets in Class A, bits
from the host part are borrowed and the subnet mask is changed accordingly.
For example, if one MSB (Most Significant Bit) is borrowed from the host bits of the second
octet and added to the network address, it creates two subnets (2^1 = 2) with 8388606
(2^23 − 2) hosts per subnet.
The Subnet mask is changed accordingly to reflect subnetting. Given below is a list of all
possible combination of Class A subnets:
In the case of subnetting too, the very first and last IP addresses of every subnet are used for
the subnet number and the subnet broadcast address respectively. Because these two IP
addresses cannot be assigned to hosts, subnetting cannot be implemented using more than 30
network bits, as that would provide fewer than two hosts per subnet.
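The pattern behind the table of Class A subnet combinations can be generated with a short loop; the first row reproduces the one-borrowed-MSB example above (2 subnets of 8388606 hosts).

```python
# Class A has 24 host bits; borrowing n of them creates 2**n subnets,
# each with 2**(24 - n) - 2 usable hosts (subnet number and broadcast excluded).
def class_a_subnets(borrowed):
    return 2 ** borrowed, 2 ** (24 - borrowed) - 2

for n in range(1, 4):
    subnets, hosts = class_a_subnets(n)
    print(f"/{8 + n}: {subnets} subnets, {hosts} hosts per subnet")
```

The same formula applies to Class B (16 host bits) and Class C (8 host bits) by replacing the 24.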
Class B Subnets
By default, using classful networking, 14 bits are used as network bits, providing 16384 (2^14)
networks and 65534 (2^16 − 2) hosts. Class B IP addresses can be subnetted the same way as
Class A addresses, by borrowing bits from the host bits. Given below are all possible
combinations of Class B subnetting:
Class C Subnets
Class C IP addresses are normally assigned to very small networks, because a Class C network
can only have 254 hosts. Given below is a list of all possible combinations of subnetted Class C
IP addresses:
IPv4 - Variable Length Subnet Masking (VLSM)
Internet Service Providers may face a situation where they need to allocate IP subnets of
different sizes as per customer requirements. One customer may ask for a Class C subnet of 3
IP addresses and another may ask for 10 IPs. For an ISP it is not feasible to divide the IP
addresses into fixed-size subnets; rather, it may want to subnet the subnets in a way that
results in minimum wastage of IP addresses.
For example, say an administrator has the 192.168.1.0/24 network. The suffix /24 (pronounced
"slash 24") tells the number of bits used for the network address. The administrator has four
departments with different numbers of hosts: Sales has 100 computers, Purchase has 50,
Accounts has 25 and Management has 5. In CIDR, the subnets are of fixed size; using that
methodology alone, the administrator cannot fulfill all the requirements of the network.
The following procedure shows how VLSM can be used in order to allocate department-wise IP
addresses as mentioned in the example.
Step - 1
Make a list of the possible subnets.
Step - 2
Sort the IP requirements in descending order (highest to lowest):
Sales 100
Purchase 50
Accounts 25
Management 5
Step - 3
Allocate the highest range of IPs to the highest requirement, so let's assign 192.168.1.0 /25
(255.255.255.128) to Sales department. This IP subnet with Network number 192.168.1.0 has
126 valid Host IP addresses which satisfy the requirement of Sales Department. The subnet
Mask used for this subnet has 10000000 as the last octet.
Step - 4
Allocate the next highest range, so let's assign 192.168.1.128 /26 (255.255.255.192) to
Purchase department. This IP subnet with Network number 192.168.1.128 has 62 valid Host IP
Addresses which can be easily assigned to all Purchase department's PCs. The subnet mask
used has 11000000 in the last octet.
Step - 5
Allocate the next highest range, i.e. Accounts. The requirement of 25 IPs can be fulfilled with
192.168.1.192 /27 (255.255.255.224) IP subnet, which contains 30 valid host IPs. The network
number of Accounts department will be 192.168.1.192. The last octet of subnet mask is
11100000.
Step - 6
Allocate next highest range to Management. The Management department contains only 5
computers. The subnet 192.168.1.224 /29 with Mask 255.255.255.248 has exactly 6 valid host
IP addresses. So this can be assigned to Management. The last octet of subnet mask will
contain 11111000.
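The allocation steps above can be reproduced with Python's standard `ipaddress` module: for each requirement (largest first) pick the smallest subnet whose usable host count covers it, then continue from the next free address.

```python
import ipaddress

# Requirements sorted largest-first, as in Step 2
requirements = [("Sales", 100), ("Purchase", 50), ("Accounts", 25), ("Management", 5)]

next_addr = int(ipaddress.ip_address("192.168.1.0"))
allocations = []
for dept, hosts in requirements:
    # Smallest prefix whose usable hosts (2**(32 - p) - 2) cover the need
    prefix = 32
    while 2 ** (32 - prefix) - 2 < hosts:
        prefix -= 1
    subnet = ipaddress.ip_network((next_addr, prefix))
    allocations.append((dept, str(subnet)))
    next_addr += subnet.num_addresses  # continue from the next free address

for dept, subnet in allocations:
    print(dept, subnet)
```

Running this yields 192.168.1.0/25 for Sales, 192.168.1.128/26 for Purchase, 192.168.1.192/27 for Accounts and 192.168.1.224/29 for Management, matching the hand-worked allocation.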
By using VLSM, the administrator can subnet the IP range in such a way that the fewest IP
addresses are wasted. Even after assigning IPs to every department, the administrator in this
example is still left with plenty of IP addresses, which would not have been possible with
fixed-size CIDR subnets.
IPv4 - Reserved Addresses
There are few Reserved IPv4 address spaces which cannot be used on the internet. These
addresses serve special purpose and cannot be routed outside Local Area Network.
Private IP Addresses
Every class of IP (A, B and C) has some addresses reserved as private IP addresses. These IPs
can be used within a network, campus or company and are private to it. These addresses
cannot be routed on the Internet, so packets containing these private addresses are dropped
by routers. In order to communicate with the outside world (the Internet), these IP addresses
must be translated to public IP addresses using the NAT process, or a Web Proxy server can
be used.
The sole purpose of creating a separate range of private addresses is to control assignment of
the already-limited IPv4 address pool. By using a private address range within the LAN, the
global requirement for IPv4 addresses has decreased significantly. It has also helped delay
IPv4 address exhaustion.
The IP class used for the private address range can be chosen as per the size and requirements
of the organization. A larger organization may choose the Class A private range, while a
smaller one may opt for Class C. These IP addresses can be further subnetted and assigned to
departments within an organization.
Loopback IP Addresses
The IP address range 127.0.0.0 – 127.255.255.255 is reserved for loopback, i.e. a host's self-
address, also known as the localhost address. This loopback IP address is managed entirely by
and within the operating system. Using loopback addresses enables server and client
processes on a single system to communicate with each other. When a process creates a
packet with a loopback destination address, the operating system loops it back to itself
without any involvement of the NIC.
Data sent to a loopback address is forwarded by the operating system to a virtual network
interface within the operating system. This address is mostly used for testing purposes, such
as running a client-server architecture on a single machine. Beyond that, if a host machine can
successfully ping 127.0.0.1 or any IP from the loopback range, it implies that the TCP/IP
software stack on the machine is successfully loaded and working.
Link-local Addresses
If a host is not able to acquire an IP address from a DHCP server and has not been assigned
any IP address manually, it can assign itself an IP address from a range of reserved link-local
addresses. The link-local address range is 169.254.0.0 – 169.254.255.255.
Assume a network segment where all systems are configured to acquire IP addresses from a
DHCP server connected to the same network segment. If the DHCP server is not available, no
host on the segment would otherwise be able to communicate with any other. Windows (98
or later) and Mac OS (8.0 or later) support this self-configuration of link-local IP addresses. In
the absence of a DHCP server, every host machine randomly chooses an IP address from the
above range and then checks, by means of ARP, that no other host has configured itself with
the same IP address. Once all hosts are using link-local addresses from the same range, they
can communicate with each other.
These IP addresses cannot help system to communicate when they do not belong to the same
physical or logical segment. These IPs are also not routable.
IPv4 - Example
This section tells how actual communication happens on the Network using Internet Protocol
version 4.
All hosts in an IPv4 environment are assigned unique logical IP addresses. When a host wants
to send some data to another host on the network, it needs the physical (MAC) address of the
destination host. To get the MAC address, the host broadcasts an ARP message asking the
owner of the destination IP address to give its MAC address. All hosts on that segment receive
the ARP packet, but only the host whose IP matches the one in the ARP message replies with
its MAC address. Once the sender receives the MAC address of the receiving station, data is
sent on the physical media.
If the IP does not belong to the local subnet, the data is sent to the destination by means of
the subnet's gateway. To understand the packet flow we must first understand the following
components:
MAC Address: The Media Access Control address is the 48-bit, factory hard-coded
physical address of a network device, by which the device can be uniquely identified.
This address is assigned by the device manufacturer.
Address Resolution Protocol: ARP is used to acquire the MAC address of a host whose
IP address is known. The ARP request is a broadcast packet received by all hosts in the
network segment, but only the host whose IP is mentioned in the request responds,
providing its MAC address.
Proxy Server: To access the Internet, networks use a proxy server which has a public IP
assigned. All PCs send their requests for a server on the Internet to the proxy server;
the proxy server sends the request to that server on the PC's behalf, and when it
receives the response, it forwards the response to the client PC. This is a way to control
Internet access in computer networks and helps implement web-based policies.
Dynamic Host Configuration Protocol: DHCP is a service by which a host is assigned an
IP address from a pre-defined address pool. The DHCP server also provides necessary
information such as the gateway IP, DNS server address, the lease assigned with the IP,
etc. Using DHCP, a network administrator can manage the assignment of IP addresses
with ease.
Domain Name System: It is very likely that a user does not know the IP address of the
remote server he wants to connect to, but he knows the name assigned to it, for
example tutorialpoints.com. When the user types in the name of the remote server,
the local host behind the scenes sends a DNS query. The Domain Name System is a
method to acquire the IP address of a host whose domain name is known.
Network Address Translation: Almost all PCs in a computer network are assigned
private IP addresses, which are not routable on the Internet; as soon as a router
receives an IP packet with a private address it drops it. In order to reach servers on
public addresses from hosts with private addresses, computer networks use an
address translation service, called Network Address Translation, which translates
between public and private addresses. When a PC sends an IP packet out of a private
network, NAT replaces the private IP address with a public IP address, and vice versa
for replies.
Today the majority of devices on the Internet use IPv4, and it is not possible to shift them all
to IPv6 overnight. IPv6 provides mechanisms by which IPv4 and IPv6 can coexist until the
Internet entirely shifts to IPv6:
Dual IP Stack
IPV4 vs IPV6:
IPv4 is a 32-bit logical device address containing 4 octets, while IPv6 is a 128-bit logical
device address containing 16 octets (8 bits each).
IPv4 requires checksum while IPv6 requires no checksum.
Broadcast is present in IPv4 while no broadcast in IPv6.
No anycast in IPv4 while concept of anycast is present in IPv6.
Address configuration is done manually or by DHCP in IPv4 while in IPv6, stateless auto
configuration or DHCPv6 are used for address configuration.
Packet flow is not identified by IPv4 while in IPv6, flow label field (of IPv6 header) can be
used for identifying packet flow for QoS handling.
IPsec is optional in IPv4 while it is required for IPv6.
IPv6 has simpler header than IPv4.
Option field is present in IPv4 while in IPv6, separate extension header is defined for
optional data.
The minimum packet size every host must handle is 576 bytes for IPv4, while IPv6
requires a minimum MTU of 1280 bytes (without fragmentation).
Example – Consider 3 routers X, Y and Z as shown in the figure. Each router has its own routing
table, and every routing table contains the distance to the destination nodes.
Consider router X: X shares its routing table with its neighbors, and the neighbors share their
routing tables with X. The distance from node X to each destination is then calculated using
the Bellman-Ford equation:
Dx(y) = min over v { C(x,v) + Dv(y) } for each node y ∈ N
As the distance from X to Z is smaller when Y is the intermediate node (hop), the routing table
of X is updated accordingly.
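One iteration of this update at router X can be sketched as follows. The link costs are illustrative, chosen so that the path via Y beats the direct link to Z, as in the example above.

```python
# One distance-vector update at router X using the Bellman-Ford equation:
# Dx(y) = min over neighbours v of { C(x, v) + Dv(y) }
INF = float("inf")

cost = {"Y": 1, "Z": 5}       # link costs C(X, v) to X's neighbours
neighbour_tables = {          # distance vectors Dv(y) received from Y and Z
    "Y": {"X": 1, "Y": 0, "Z": 2},
    "Z": {"X": 5, "Y": 2, "Z": 0},
}

def update(dest):
    """Recompute X's distance to `dest` from its neighbours' advertised vectors."""
    return min(cost[v] + neighbour_tables[v].get(dest, INF) for v in cost)

# Going via Y (1 + 2 = 3) beats the direct link to Z (cost 5), so X updates:
print(update("Z"))  # 3
```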
OSPF Messages – OSPF is a very complex protocol. It uses five different types of messages.
These are as follows:
1. Hello message (Type 1) – Used by routers to introduce themselves to other routers.
2. Database description message (Type 2) – Normally sent in response to the Hello
message.
3. Link-state request message (Type 3) – Used by routers that need information about a
specific Link-State packet.
4. Link-state update message (Type 4) – The main OSPF message for building the Link-State
Database.
5. Link-state acknowledgement message (Type 5) – Used to provide reliability in the OSPF
protocol.
Distance vector routing v/s Link state routing :
Distance Vector Routing –
It is a dynamic routing algorithm in which each router computes the distance between
itself and each possible destination via its immediate neighbors.
Each router shares its knowledge about the whole network with its neighbors and
updates its table accordingly, based on its neighbors' tables.
The sharing of information with the neighbors takes place at regular intervals.
It makes use of the Bellman-Ford algorithm for making routing tables.
Problems – The count-to-infinity problem, which can be mitigated by split horizon.
– Good news spreads fast while bad news spreads slowly.
– Persistent looping problem, i.e. a loop can remain forever.
Link State Routing –
It is a dynamic routing algorithm in which each router shares knowledge of its neighbors
with every other router in the network.
A router sends information about its neighbors only, to all the routers, through flooding.
Information sharing takes place only whenever there is a change.
It makes use of Dijkstra's algorithm for making routing tables.
Problems – Heavy traffic due to flooding of packets.
– Flooding can result in infinite looping, which can be solved by using the Time to Live
(TTL) field.
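Since every router receives the full topology through flooding, each can run Dijkstra's algorithm locally. A minimal sketch, with an illustrative three-router topology:

```python
import heapq

def dijkstra(graph, source):
    """Shortest-path distances from `source` over the flooded link-state map."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry, a shorter path was already found
        for v, w in graph[u].items():
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return dist

# Each router runs this on the same full topology learned via flooding
graph = {"A": {"B": 2, "C": 5}, "B": {"A": 2, "C": 1}, "C": {"A": 5, "B": 1}}
print(dijkstra(graph, "A"))  # A reaches C via B at cost 3, not directly at cost 5
```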
Comparison between Distance Vector Routing and Link State Routing:
1.5 Interior Gateway Routing Protocol (IGRP) & EIGRP :
Interior Gateway Routing Protocol (IGRP) is a proprietary distance vector routing protocol used
to communicate routing information within a host network. It was invented by Cisco.
IGRP manages the flow of routing information within connected routers in the host network or
autonomous system. The protocol ensures that every router has routing tables updated with
the best available path. IGRP also avoids routing loops by updating itself with the changes
occurring over the network and by error management.
Cisco created Interior Gateway Routing Protocol (IGRP) in response to the limitations in Routing
Information Protocol (RIP), which handles a maximum hop count of 15. IGRP supports a
maximum hop count of up to 255. The primary two purposes of IGRP are to:
Communicate routing information to all connected routers within its boundary or
autonomous system
Continue updating whenever there is a topological, network or path change that occurs
IGRP sends a notification of any new changes, and information about its status, to its neighbors
every 90 seconds.
IGRP manages a routing table with the most optimal path to respective nodes and to networks
within the parent network. Because it is a distance vector protocol, IGRP uses several
parameters to calculate the metric for the best path to a specific destination. These parameters
include delay, bandwidth, reliability, load and maximum transmission unit (MTU).
The IGRP protocol allows a number of gateways to coordinate their routing. Its goals are the
following:
Stable routing even in very large or complex networks. No routing loops should occur, even
as transients.
Low overhead. That is, IGRP itself should not use more bandwidth than what is actually
needed for its task.
Splitting traffic among several parallel routes when they are of roughly equal desirability.
Taking into account error rates and level of traffic on different paths.
Using EIGRP, a router keeps a copy of its neighbor's routing tables. If it can't find a route to a
destination in one of these tables, it queries its neighbors for a route and they in turn query
their neighbors until a route is found. When a routing table entry changes in one of the routers,
it notifies its neighbors of the change only (some earlier protocols require sending the entire
table). To keep all routers aware of the state of neighbors, each router sends out a periodic
"hello" packet. A router from which no "hello" packet has been received in a certain period of
time is assumed to be inoperative.
EIGRP uses the Diffusing-Update Algorithm (DUAL) to determine the most efficient (least cost)
route to a destination. A DUAL finite state machine contains decision information used by the
algorithm to determine the least-cost route (which considers distance and whether a
destination path is loop-free).
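The loop-freedom test that DUAL applies can be sketched like this (the names are illustrative, not Cisco's implementation): a neighbor is a safe backup path, a feasible successor, only if the distance it reports is strictly less than our current feasible distance to the destination.

```python
# Toy sketch of DUAL's feasibility condition: a neighbor is a loop-free
# feasible successor for a destination if its reported distance is strictly
# less than our current feasible distance to that destination.

def feasible_successors(feasible_distance, neighbors):
    """neighbors: {name: (reported_distance, cost_to_neighbor)}"""
    return sorted(
        (rd + cost, name)
        for name, (rd, cost) in neighbors.items()
        if rd < feasible_distance           # the feasibility condition
    )

# Our best (successor) path to X costs 20; which neighbors are safe backups?
fs = feasible_successors(20, {"R2": (15, 10), "R3": (25, 2), "R4": (10, 30)})
print(fs)   # R3 is excluded: its reported distance 25 >= 20 could loop
```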
IGRP vs EIGRP
IGRP, which stands for Interior Gateway Routing Protocol, is a relatively old routing protocol
that was invented by Cisco. It has largely been replaced by the newer and superior
Enhanced IGRP, more commonly known as EIGRP, since 1993. Even in the Cisco
curriculum, IGRP is discussed only as an obsolete protocol, as an introduction to EIGRP.
The main reason behind the advent of EIGRP is to move away from classful routing protocols
like IGRP because of the rapidly depleting IPv4 addresses. IGRP simply assumes that
all elements in a given class belong to the same subnet. EIGRP utilizes variable length subnet
masks (VLSM) to make more efficient use of the short supply of IPv4 addresses, prior to the
advent of IPv6 .
Along with the shift away from classful routing, EIGRP introduced a few improvements to the
algorithm used to discover the best path through the network. It now uses the Diffusing
Update Algorithm, better known as DUAL, to calculate paths while ensuring that no loops
exist in the system, since loops are detrimental to the performance of the network.
EIGRP routers periodically broadcast a 'hello' packet to inform other routers that they are
present and working well in the network. Updates, on the other hand, are no longer broadcast
to the entire network; they are sent only to the routers that need the information. Updates are
also no longer periodic: only when changes in the metric are observed are the corresponding
updates sent out to other routers. These partial updates reduce network traffic compared to
the full updates used by IGRP.
Metrics, which are used to measure the efficiency of a given route, have also changed in EIGRP.
Instead of using a 24-bit value in the calculation of the metric, EIGRP now utilizes 32 bits. To
maintain compatibility, the older IGRP metrics are multiplied by 256, thereby bit-shifting the
value 8 bits to the left and conforming to the 32-bit metric of EIGRP.
Summary:
1. EIGRP has totally replaced the obsolete IGRP
2. EIGRP is a classless routing protocol while IGRP is a classful routing protocol
3. EIGRP uses DUAL while IGRP does not
4. EIGRP consumes much less bandwidth compared to IGRP
5. EIGRP expresses the metric as a 32-bit value while IGRP uses a 24-bit value
Difference between IGRP and EIGRP
6. The maximum hop count in IGRP is 255, while the maximum hop count in EIGRP is 256.
Border Gateway Protocol (BGP) is a routing protocol used to transfer data and information
between different host gateways, the Internet or autonomous systems. BGP is a Path Vector
Protocol (PVP), which maintains paths to different hosts, networks and gateway routers and
determines the routing decision based on that. It does not use Interior Gateway Protocol (IGP)
metrics for routing decisions, but only decides the route based on path, network policies and
rule sets.
Border Gateway Protocol (BGP) is used to exchange routing information for the Internet and is
the protocol used between ISPs, which are different ASes.
The protocol can connect together any internetwork of autonomous systems using an arbitrary
topology. The only requirement is that each AS have at least one router that is able to run BGP,
and that this router connects to at least one other AS's BGP router. BGP's main function is to
exchange network reachability information with other BGP systems. Border Gateway Protocol
constructs a graph of autonomous systems based on the information exchanged between BGP
routers.
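The path-vector idea can be sketched minimally as follows (the AS numbers and helper names are made up for illustration): each advertisement carries the list of autonomous systems it has traversed (the AS_PATH), and a router rejects any route whose path already contains its own AS number, which is how BGP avoids inter-AS loops without IGP metrics.

```python
# Toy sketch of BGP's path-vector loop prevention.

def accept_route(my_asn, as_path):
    # Reject any advertisement that has already passed through our AS.
    return my_asn not in as_path

def advertise(my_asn, as_path):
    # Prepend our AS number before passing the route on to a neighbor AS.
    return [my_asn] + as_path

path = advertise(64512, [64513, 64514])
print(accept_route(64515, path))   # True: 64515 is not in the path
print(accept_route(64513, path))   # False: the route has looped back
```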
Routing Information Protocol (RIP) was originally designed for the Xerox PARC Universal Protocol
and was called GWINFO in the Xerox Network Systems (XNS) protocol suite in 1981. RIP, which
was defined in RFC 1058 in 1988, is known for being easy to configure and easy to use in small
networks.
In the enterprise, Open Shortest Path First (OSPF) routing has largely replaced RIP as the most
widely used Interior Gateway Protocol (IGP). RIP has been supplanted mainly because, despite
its simplicity, it is unable to scale to very large and complex networks.
RIP uses a distance vector algorithm to decide which path to put a packet on to get to its
destination. Each RIP router maintains a routing table, which is a list of all the destinations the
router knows how to reach. Each router broadcasts its entire routing table to its closest
neighbors every 30 seconds (RIPv2 multicasts it to 224.0.0.9). In this context, neighbors are the other
routers to which a router is connected directly -- that is, the other routers on the same network
segments this router is on. The neighbors, in turn, pass the information on to their nearest
neighbors, and so on, until all RIP hosts within the network have the same knowledge of routing
paths. This shared knowledge is known as convergence.
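A single round of that table exchange can be sketched as follows (a toy model, not a real RIP implementation; router and network names are made up): each destination a neighbor advertises is reachable at the neighbor's hop count plus one, the router keeps the smaller value, and 16 stands in for "unreachable".

```python
# Toy sketch of one RIP distance-vector table update.

INFINITY = 16   # RIP's "unreachable" hop count

def rip_update(table, neighbor, neighbor_table):
    """table/neighbor_table: {destination: (hops, next_hop)}"""
    for dest, (hops, _) in neighbor_table.items():
        new_hops = min(hops + 1, INFINITY)
        # Keep the route only if it is new or strictly shorter.
        if dest not in table or new_hops < table[dest][0]:
            table[dest] = (new_hops, neighbor)
    return table

table = {"net-A": (3, "R2")}
rip_update(table, "R3", {"net-A": (1, "R9"), "net-B": (4, "R9")})
print(table)   # net-A improves to 2 hops via R3; net-B is learned at 5
```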
If a router receives an update on a route, and the new path is shorter, it will update its table
entry with the length and next-hop address of the shorter path. If the new path is longer, it will
wait through a "hold-down" period to see if later updates reflect the higher value as well. It will
only update the table entry if the new, longer path has been determined to be stable.
If a router crashes or a network connection is severed, the network discovers this because that
router stops sending updates to its neighbors, or stops sending and receiving updates along the
severed connection. If a given route in the routing table isn't updated across six successive
update cycles (that is, for 180 seconds) a RIP router will drop that route and let the rest of the
network know about the problem through its own periodic updates.
Features of RIP
RIP uses a modified hop count as a way to determine network distance. Modified reflects the
fact that network engineers can assign paths a higher cost. By default, if a router's neighbor
owns a destination network and can deliver packets directly to the destination network without
using any other routers, that route has one hop. In network management terminology, this is
described as a cost of 1.
RIP allows only 15 hops in a path. If a packet can't reach a destination in 15 hops, the
destination is considered unreachable. Paths can be assigned a higher cost (as if they involved
extra hops) if the enterprise wants to limit or discourage their use. For example, a satellite
backup link might be assigned a cost of 10 to force traffic to follow other routes when available.
Timers in RIP help regulate performance. They include:
Update timer—Frequency of routing updates. Every 30 seconds IP RIP sends a complete copy
of its routing table, subject to split horizon. (IPX RIP does this every 60 seconds.)
Invalid timer—Absence of refreshed content in a routing update. RIP waits 180 seconds to
mark a route as invalid and immediately puts it into holddown.
Hold-down timers and triggered updates—Assist with stability of routes in the Cisco
environment. Holddowns ensure that regular update messages do not inappropriately cause a
routing loop. The router doesn't act on non-superior new information for a certain period of
time. RIP's hold-down time is 180 seconds.
Flush timer—RIP waits an additional 240 seconds after hold down before it actually removes
the route from the table.
Other stability features to assist with routing loops include poison reverse. A poison reverse is a
way in which a gateway node tells its neighbor gateways that one of the gateways is no longer
connected. To do this, the notifying gateway sets the number of hops to the unconnected
gateway to a number that indicates infinite, which in layman's terms simply means 'You can't
get there.' Since RIP allows up to 15 hops to another gateway, setting the hop count to 16 is the
equivalent of "infinite."
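Poison reverse can be sketched on top of a simple hop-count table (a toy model; router and network names are made up): routes learned from a neighbor are advertised back to that same neighbor with a hop count of 16, so the neighbor can never be tricked into routing through us to reach them.

```python
# Toy sketch of poison reverse for a distance-vector router.

INFINITY = 16   # "you can't get there"

def advertisement_for(neighbor, table):
    """table: {destination: (hops, next_hop)} -> hop counts to advertise."""
    return {
        dest: (INFINITY if next_hop == neighbor else hops)
        for dest, (hops, next_hop) in table.items()
    }

table = {"net-A": (2, "R3"), "net-B": (5, "R7")}
print(advertisement_for("R3", table))   # net-A poisoned to 16 toward R3
```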
Routing Information Protocol (RIP) is a dynamic routing protocol which uses hop count as a
routing metric to find the best path between the source and the destination network. It is a
distance vector routing protocol which has AD value 120 and works on the application layer of
OSI model. RIP uses port number 520.
Hop Count:
Hop count is the number of routers occurring in between the source and destination network.
The path with the lowest hop count is considered as the best route to reach a network and
therefore placed in the routing table. RIP prevents routing loops by limiting the number of
hops allowed in a path from source to destination. The maximum hop count allowed for RIP
is 15, and a hop count of 16 is considered as network unreachable.
Features of RIP:
1. Updates of the network are exchanged periodically.
2. Updates (routing information) are always broadcast.
3. Full routing tables are sent in updates.
4. Routers always trust routing information received from neighbor routers. This is also
known as routing on rumors.
RIP versions:
There are three versions of the Routing Information Protocol: RIP Version 1, RIP Version 2, and
RIPng. RIPv1 doesn't support authentication of update messages, while RIPv2 supports
authentication of update messages.
RIP v1 is known as Classful Routing Protocol because it doesn’t send information of subnet
mask in its routing update.
RIP v2 is known as Classless Routing Protocol because it sends information of subnet mask in its
routing update.
# debug ip rip
>> Use this command to display RIP routing updates in real time on a router, say router R1.
Configuration:
Consider the above given topology which has 3-routers R1, R2, R3. R1 has IP address
172.16.10.6/30 on s0/0/1, 192.168.20.1/24 on fa0/0. R2 has IP address 172.16.10.2/30 on
s0/0/0, 192.168.10.1/24 on fa0/0. R3 has IP address 172.16.10.5/30 on s0/1, 172.16.10.1/30 on
s0/0, 10.10.10.1/24 on fa0/0.
RIP timers :
Update timer : The default timing for routing information being exchanged by the routers
operating RIP is 30 seconds. Using Update timer, the routers exchange their routing table
periodically.
Invalid timer: If no update arrives within 180 seconds, the destination router considers
the route invalid. In this scenario, the destination router marks the hop count as 16 for that
route.
Hold down timer : This is the time for which the router waits for a neighbour router to
respond. If the neighbour isn't able to respond within the given time, it is declared dead. It
is 180 seconds by default.
Flush time : It is the time after which the entry of the route will be flushed if no further
update arrives. It is 60 seconds by default. This timer starts after the route has been
declared invalid, so the total time will be 180 + 60 = 240 seconds.
Note that all these timers are adjustable; on Cisco routers they are changed with the timers
basic command under the RIP configuration.
Open Shortest Path First (OSPF) is a link-state routing protocol which is used to find the best
path between the source and the destination router using its own Shortest Path First (SPF)
algorithm. OSPF was developed by the Internet Engineering Task Force (IETF) as one of the
Interior Gateway Protocols (IGP), i.e., protocols which aim at moving the packet within a large
autonomous system or routing domain. It is a network layer protocol which works on protocol
number 89 and uses AD value 110. OSPF uses multicast address 224.0.0.5 for normal
communication and 224.0.0.6 for updates to the designated router (DR)/backup designated
router (BDR).
OSPF terms –
1. Router ID – It is the highest active IP address present on the router. First, the highest
loopback address is considered. If no loopback is configured, then the highest active IP
address on an interface of the router is considered.
2. Router priority – It is an 8-bit value assigned to a router operating OSPF, used to elect the
DR and BDR in a broadcast network.
3. Designated Router (DR) – It is elected to minimize the number of adjacencies formed. The
DR distributes the LSAs to all the other routers. A DR is elected in a broadcast network, to
which all the other routers share their DBDs. In a broadcast network, a router requests an
update from the DR, and the DR will respond to that request with an update.
4. Backup Designated Router (BDR) – BDR is backup to DR in a broadcast network. When DR
goes down, BDR becomes DR and performs its functions.
DR and BDR election – DR and BDR election takes place in broadcast network or multi access
network. Here is the criteria for the election:
1. The router having the highest router priority will be declared the DR.
2. If there is a tie in router priority, then the highest router ID will be considered. First, the
highest loopback address is considered. If no loopback is configured, then the highest active IP
address on an interface of the router is considered.
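The two election rules above can be sketched in a few lines (illustrative only; priority-0 routers are excluded since they never become DR or BDR, and dotted router IDs are compared numerically, octet by octet):

```python
# Toy sketch of OSPF DR/BDR election on a broadcast segment:
# highest router priority wins; ties are broken by the highest router ID.

def rid_key(rid):
    # Compare dotted router IDs numerically, octet by octet.
    return tuple(int(octet) for octet in rid.split("."))

def elect_dr_bdr(routers):
    """routers: {router_id: priority}; returns (DR, BDR)."""
    eligible = sorted(
        (rid for rid, prio in routers.items() if prio > 0),
        key=lambda rid: (routers[rid], rid_key(rid)),
        reverse=True,
    )
    dr = eligible[0] if eligible else None
    bdr = eligible[1] if len(eligible) > 1 else None
    return dr, bdr

segment = {"1.1.1.1": 1, "2.2.2.2": 1, "3.3.3.3": 10, "4.4.4.4": 0}
print(elect_dr_bdr(segment))   # ('3.3.3.3', '2.2.2.2')
```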
OSPF states – The device operating OSPF goes through certain states. These states are:
1. Down – In this state, no hello packet has been received on the interface.
Note – The Down state doesn't mean that the interface is physically down. Here, it means
that the OSPF adjacency process has not started yet.
2. INIT – In this state, a hello packet has been received from the other router.
3. 2WAY – In the 2WAY state, both the routers have received the hello packets from other
routers. Bidirectional connectivity has been established.
Note – In between the 2WAY state and Exstart state, the DR and BDR election takes place.
4. Exstart – In this state, NULL DBDs are exchanged. In this state, the master and slave
election takes place. The router having the higher router ID becomes the master while the
other becomes the slave. This election decides which router will send its DBD first (routers
which have formed neighbourship take part in this election).
5. Exchange – In this state, the actual DBDs are exchanged.
6. Loading – In this state, LSR, LSU and LSA (Link State Acknowledgement) are exchanged.
Important – When a router receives a DBD from another router, it compares its own DBD
with the other router's DBD. If the received DBD is more up to date than its own DBD, then
the router will send an LSR to the other router stating what links are needed. The other
router replies with an LSU containing the updates that are needed. In return, the
router replies with a Link State Acknowledgement.
7. Full – In this state, synchronization of all the information takes place. OSPF routing can
begin only after the Full state.
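The adjacency states above can be sketched as an ordered progression (a real OSPF implementation is event-driven; this toy model only encodes the order of the states):

```python
# Ordered OSPF neighbor states, as listed above.
OSPF_STATES = ["Down", "INIT", "2WAY", "Exstart", "Exchange", "Loading", "Full"]

def next_state(current):
    i = OSPF_STATES.index(current)
    # Stay in Full once the databases are fully synchronized.
    return OSPF_STATES[i + 1] if i + 1 < len(OSPF_STATES) else current

state = "Down"
for _ in range(6):
    state = next_state(state)
print(state)   # Full: routing can begin only once this state is reached
```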
OSPF is a link-state routing protocol: it uses the concept of triggered updates, i.e., updates
are sent only when a change is observed in the learned routing table, unlike distance-vector
routing protocols, where the routing tables are exchanged periodically.
Criteria –
To form neighbourship in OSPF, both routers must meet these criteria:
1. They should be present in the same area
2. Router IDs must be unique
3. Subnet masks should be the same
4. Hello and dead timers should be the same
5. Stub flags must match
6. Authentication must match
OSPF supports NULL, plain text, MD5 authentication.
Note – Both the routers (neighbors) should have the same type of authentication enabled;
e.g., if one neighbor has MD5 authentication enabled, then the other should also have MD5
authentication enabled.
OSPF messages –
OSPF uses certain messages for the communication between the routers operating OSPF.
Hello message – These are keepalive messages used for neighbor discovery/recovery.
They are exchanged every 10 seconds. They include the following information: Router ID,
hello/dead interval, area ID, router priority, DR and BDR IP addresses, and authentication data.
Database Description (DBD) – It carries the OSPF routes of the router. It contains the topology
of an AS or an area (routing domain).
Link state request (LSR) – When a router receives a DBD, it compares it with its own DBD. If
the received DBD has more updates than its own DBD, then an LSR is sent to its
neighbor.
Link state update (LSU) – When a router receives an LSR, it responds with an LSU message
containing the details requested.
Link state acknowledgement – This provides reliability to the link state exchange process.
It is sent as the acknowledgement of LSU.
Link state advertisement (LSA) – It is an OSPF data packet that contains link-state routing
information, shared only with the routers to which adjacency has been formed.
Note – Link State Advertisement and Link State Acknowledgement both are different messages.
Timers –
Hello timer – The interval at which an OSPF router sends a hello message on an interface. It
is 10 seconds by default.
Dead timer – The interval after which a neighbor will be declared dead if it has not been able
to send a hello packet. It is 40 seconds by default. It is usually 4 times the hello interval but
can be configured manually according to need.
OSPF supports/provides the following advantages –
Both IPv4 and IPv6 routed protocols
Load balancing with equal cost routes for same destination
VLSM and route summarization
Unlimited hop counts
Trigger updates for fast convergence
A loop free topology using SPF algorithm
Run on most routers
Classless protocol
There are some disadvantages of OSPF: it requires extra CPU processing to run the SPF
algorithm, it requires more RAM to store the adjacency topology, and it is more complex to
set up and harder to troubleshoot.
There is a small topology in which there are 3 routers namely R1, R2, R3 are connected. R1 is
connected to networks 10.255.255.80/30 (interface fa0/1), 192.168.10.48/29 (interface fa0/0)
and 10.255.255.8/30 (interface gi0/0)
Note – In the figure, IP addresses are written with their respective interfaces, but since
networks have to be advertised, you have to write the network ID. R2 is connected to networks
192.168.10.64/29 (interface fa0/0), 10.255.255.80/30 (interface fa0/1). R3 is connected to
networks 10.255.255.8/30 (int fa0/1), 192.168.10.16/29 (int fa0/0).
Now, configuring OSPF for R1.
R1(config)#router ospf 1
Here, 1 is the OSPF instance or process ID. It can be the same or different on each router
(it doesn't matter). You have to use a wildcard mask here, and the area used is area 1.
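The wildcard mask used in OSPF network statements is simply the bitwise inverse of the subnet mask. A small stdlib sketch (illustrative only) shows the conversion for the /30 and /24 prefixes that appear in this topology:

```python
# Derive an OSPF wildcard mask (the inverted subnet mask) for a prefix length.
import ipaddress

def wildcard(prefix_len):
    mask = ipaddress.ip_network(f"0.0.0.0/{prefix_len}").netmask
    # XOR against all-ones flips every bit of the subnet mask.
    return ipaddress.ip_address(int(mask) ^ 0xFFFFFFFF)

print(wildcard(30))   # 0.0.0.3   -> e.g. a /30 serial link between routers
print(wildcard(24))   # 0.0.0.255 -> e.g. a /24 LAN segment
```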
Now, configuring R2
R2(config)#router ospf 1
Similarly, for R3
R3(config)#router ospf 1
R3#show ip protocols
1.9 Difference between Unicast, Broadcast and Multicast
The cast term here signifies that some data (a stream of packets) is being transmitted to the
recipient(s) from the client(s) side over the communication channel that helps them to
communicate. Let's see some of the "cast" concepts that are prevailing in the computer
networks field.
1. Unicast – This type of information transfer is useful when there is participation of a single
sender and a single recipient. So, in short, you can term it as a one-to-one transmission. For
example, if a device having IP address 10.1.2.0 in a network wants to send a traffic stream
(data packets) to the device with IP address 20.12.4.2 in another network, then unicast comes
into the picture. This is the most common form of data transfer over networks.
2. Broadcast – Broadcasting transfer (one-to-all) techniques can be classified into two types:
Limited Broadcasting –
Suppose you have to send a stream of packets to all the devices over the network that you
reside on; this broadcasting comes in handy. To achieve this, the sender puts 255.255.255.255
(all 32 bits of the IP address set to 1), called the Limited Broadcast Address, in the
destination address of the datagram (packet) header, which is reserved for information
transfer to all the recipients from a single client (sender) over the network.
Direct Broadcasting –
This is useful when a device in one network wants to transfer a packet stream to all the
devices over another network. This is achieved by setting all the Host ID bits of the
destination address to 1, referred to as the Direct Broadcast Address in the datagram header
for information transfer.
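Both broadcast addresses described above can be derived with Python's standard library; the network 192.168.10.0/24 is just an illustration:

```python
# Directed broadcast address: all host bits of the destination set to 1.
import ipaddress

net = ipaddress.ip_network("192.168.10.0/24")   # an illustrative network
print(net.broadcast_address)                    # 192.168.10.255

# The limited broadcast address is the same for every network:
print(ipaddress.ip_address("255.255.255.255"))
```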
One important protocol of this class in Computer Networks is the Address Resolution Protocol
(ARP), which is used for resolving an IP address into a physical address, which is necessary for
underlying communication.
3. Multicast – In multicasting, data is transferred from one or more senders to a group of
recipients. This mode is mainly utilized by television networks for video and audio distribution.
The Internet Group Management Protocol (IGMP) is an Internet protocol that provides a way
for an Internet computer to report its multicast group membership to adjacent routers.
Multicasting allows one computer on the Internet to send content to multiple other computers
that have identified themselves as interested in receiving the originating computer's content.
Multicasting can be used for such applications as updating the address books of mobile
computer users in the field, sending out company newsletters to a distribution list, and
"broadcasting" high-bandwidth programs of streaming media to an audience that has "tuned
in" by setting up a multicast group membership.
Using the Open Systems Interconnection (OSI) communication model, IGMP is part of
the Network layer. IGMP is formally described in the Internet Engineering Task Force (IETF)
Request for Comments (RFC) 2236.
ICMP stands for Internet Control Message Protocol and IGMP stands for Internet Group
Management Protocol. Both are important terms in networking.
ICMP (Internet Control Message Protocol) is used to check reachability to a network or a
host, and it is also used to PING an IP address to check whether there is connectivity or not.
IGMP (Internet Group Management Protocol), on the other hand, is used for group packet
transfer, for example where clients watch TV through a satellite connection.
The major distinction between ICMP and IGMP is that IGMP is used to form groups of hosts,
whereas ICMP is used to send error messages and operational information between hosts.
Let's see the difference between ICMP and IGMP:
1. ICMP stands for Internet Control Message Protocol, while IGMP stands for Internet Group
Management Protocol.
MLD protocol enables IPv6 routers to discover multicast listeners, the nodes that are
configured to receive multicast data packets, on its directly attached interfaces. The protocol
specifically discovers which multicast addresses are of interest to its neighboring nodes and
provides this information to the active multicast routing protocol that makes decisions on the
flow of multicast data packets. Periodically, the multicast router sends general queries
requesting multicast address listener information from systems on an attached network. These
queries are used to build and refresh the multicast address listener state on the attached
networks. In response to the queries, multicast listeners reply with membership reports. These
membership reports specify their multicast addresses listener state and their desired set of
sources with current-state multicast address records. The multicast router also processes
unsolicited filter-mode-change records and source-list-change records from systems that want
to indicate interest in receiving or not receiving traffic from particular sources.
An IPv6 multicast group is an arbitrary group of receivers that want to receive a particular data
stream. This group has no physical or geographical boundaries--receivers can be located
anywhere on the Internet or in any private network. Receivers that are interested in receiving
data flowing to a particular group must join the group by signaling their local device. This
signaling is achieved with the MLD protocol.
Devices use the MLD protocol to learn whether members of a group are present on their
directly attached subnets. Hosts join multicast groups by sending MLD report messages. The
network then delivers data to a potentially unlimited number of receivers, using only one copy
of the multicast data on each subnet. IPv6 hosts that wish to receive the traffic are known as
group members.
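The host-side signaling that triggers an MLD report can be sketched as joining an IPv6 multicast group via setsockopt. The group address ff15::42 below is an arbitrary illustration, and the kernel, not the application, actually emits the MLD membership report:

```python
# Sketch of joining an IPv6 multicast group (which makes the kernel send an
# MLD membership report on the chosen interface).
import socket
import struct

def build_ipv6_mreq(group, ifindex=0):
    # struct ipv6_mreq: 16-byte group address + 32-bit interface index
    # (index 0 lets the kernel pick the interface).
    return socket.inet_pton(socket.AF_INET6, group) + struct.pack("@I", ifindex)

def join_group(sock, group, ifindex=0):
    sock.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_JOIN_GROUP,
                    build_ipv6_mreq(group, ifindex))

mreq = build_ipv6_mreq("ff15::42")
print(len(mreq))   # 20 bytes: 16 for the address, 4 for the index
```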
Packets delivered to group members are identified by a single multicast group address.
Multicast packets are delivered to a group using best-effort reliability, just like IPv6 unicast
packets.
The multicast environment consists of senders and receivers. Any host, regardless of whether it
is a member of a group, can send to a group. However, only the members of a group receive
the message.
A multicast address is chosen for the receivers in a multicast group. Senders use that address as
the destination address of a datagram to reach all members of the group.
Membership in a multicast group is dynamic; hosts can join and leave at any time. There is no
restriction on the location or number of members in a multicast group. A host can be a member
of more than one multicast group at a time.
How active a multicast group is, its duration, and its membership can vary from group to group
and from time to time. A group that has members may have no activity.
To start implementing multicasting in the campus network, users must first define who receives
the multicast. The MLD protocol is used by IPv6 devices to discover the presence of multicast
listeners (for example, nodes that want to receive multicast packets) on their directly attached
links, and to discover specifically which multicast addresses are of interest to those neighboring
nodes. It is used for discovering local group and source-specific group membership. The MLD
protocol provides a means to automatically control and limit the flow of multicast traffic
throughout your network with the use of special multicast queriers and hosts.
A querier is a network device, such as a router, that sends query messages to discover which
network devices are members of a given multicast group.
A host is a receiver, including devices that send report messages to inform the querier of a host
membership.
A set of queriers and hosts that receive multicast data streams from the same source is called a
multicast group. Queriers and hosts use MLD reports to join and leave multicast groups and to
begin receiving group traffic.
MLD uses the Internet Control Message Protocol (ICMP) to carry its messages. All MLD
messages are link-local with a hop limit of 1, and they all have the Router Alert option set.
The Router Alert option implies an implementation of the hop-by-hop option header.
Report--In a report message, the multicast address field is that of the specific IPv6 multicast
address to which the sender is listening.
Done--In a done message, the multicast address field is that of the specific IPv6 multicast
address to which the source of the MLD message is no longer listening.
An MLD report must be sent with a valid IPv6 link-local source address, or the unspecified
address (::), if the sending interface has not yet acquired a valid link-local address. Sending
reports with the unspecified address is allowed to support the use of IPv6 multicast in the
Neighbor Discovery Protocol.
For stateless auto configuration, a node is required to join several IPv6 multicast groups in
order to perform duplicate address detection (DAD). Prior to DAD, the only address the
reporting node has for the sending interface is a tentative one, which cannot be used for
communication. Therefore, the unspecified address must be used.
MLD states that result from MLD version 2 or MLD version 1 membership reports can be limited
globally or by interface. The MLD group limits feature provides protection against denial of
service (DoS) attacks caused by MLD packets. Membership reports in excess of the configured
limits will not be entered in the MLD cache, and traffic for those excess membership reports
will not be forwarded.
MLD provides support for source filtering. Source filtering allows a node to report interest in
listening to packets only from specific source addresses (as required to support SSM), or from
all addresses except specific source addresses sent to a particular multicast address.
When a host using MLD version 1 sends a leave message, the device needs to send query
messages to reconfirm that this host was the last MLD version 1 host joined to the group before
it can stop forwarding traffic. This function takes about 2 seconds. This "leave latency" is also
present in IGMP version 2 for IPv4 multicast.
The term "protocol independent" means that PIM can function by making use of routing
information supplied by a variety of communications protocols. In information technology, a
protocol is a defined set of rules that end points in a circuit or network employ to facilitate
communication.
PIM comes in four main variants: sparse mode, dense mode, source-specific multicast, and
bidirectional.
Sparse mode: This protocol makes use of the assumption that in a multicast group, all
the receivers will be sparsely distributed in the environment. It is largely for wide area
usage. The protocol supports the usage of shared trees, which are nothing but multicast
distribution trees rooted at a specific node. It also supports usage of source based trees,
which has a separate multicast distribution tree for every source transmitting data to a
multicast group. In sparse mode, it's important to have a mechanism to discover the root
node or rendezvous point.
Dense mode: This protocol makes the opposite assumption of the sparse mode. It
assumes that in a multicast group, all receivers are densely distributed in the
environment. By flooding the multicast traffic, it builds the shortest path trees and also
prunes back the tree branches where no receivers are present. The protocol
is based on only source based trees and as a result does not depend on rendezvous
points, unlike sparse mode. This makes the dense mode easier to implement and
deploy. However, the scaling property of the dense mode is poor.
Source-specific multicast: This protocol focuses on just one node which acts as a root
and the trees are built based on the same. It offers a reliable, scalable and secure model
for broadcasting information.
Bidirectional protocol independent multicast: It is similar to sparse mode, with the
difference being in the method of data transmission. In bidirectional PIM, the data flow is
bidirectional, i.e., data flows through both directions in a branch of a tree. The data is not
encapsulated. Again, bidirectional PIM does not use source based trees at all, and also there
is no designated router in the case of bidirectional protocol. The protocol has great
scalable properties especially when there are a large set of sources for each group.
1.13 Distance Vector Multicast Routing Protocol (DVMRP):
Distance Vector Multicast Routing Protocol is the oldest routing protocol that has been used to
support multicast data transmission over networks. The protocol sends multicast data in the
form of unicast packets that are reassembled into multicast data at the destination.
DVMRP can run over various types of networks, including Ethernet local area networks (LANs).
It can even run through routers that are not multicast-capable. It has been considered as an
intermediate solution while "real" multicast Internet Protocol (IP) routing evolves.
DVMRP is used for multicasting over IP networks whose routing protocols do not support
multicast. DVMRP is based on the RIP protocol but is more complicated than RIP. DVMRP
maintains its own routing table to keep track of the return paths to the source of multicast
packets.
The first message for any source-group pair is forwarded to the entire multicast network,
with respect to the time-to-live (TTL) of the packet.
All the leaf routers that do not have members on directly attached subnetworks send back
prune messages to the upstream router.
The branch that transmitted a prune message is deleted from the delivery tree.
The delivery tree, which is spanning to all the members in the multicast group, is
constructed.
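The flood-and-prune steps above can be sketched in a few lines of Python (a toy model with hypothetical router names, not a real DVMRP implementation):

```python
# parent -> children in the (unicast) shortest-path tree rooted at the source
tree = {"S": ["A", "B"], "A": ["L1", "L2"], "B": ["L3"]}
members = {"L2"}  # leaf routers with group members on attached subnetworks

def delivery_tree(node):
    """Return the pruned delivery tree below `node`, or None if the
    whole branch has no receivers and is pruned away."""
    children = tree.get(node, [])
    kept = {c: sub for c in children if (sub := delivery_tree(c)) is not None}
    if not kept and node not in members:
        return None          # leaf with no members: send prune upstream
    return kept

print(delivery_tree("S"))    # only the branch toward L2 survives
```

The first flood reaches every router; the prunes from L1 and L3 then remove the branches with no receivers, leaving the delivery tree that spans exactly the group members.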
In the figure below, DVMRP is running on switches A, B, and C. IGMP is also running on Switch
C, which is connected to the host directly. After the host sends an IGMP report to switch C,
multicast streams are sent from the multicast resource to the host along the path built by
DVMRP.
The DVMRP message header fields include:
Version
Type
Subtype: Response, request, non-membership report or non-membership cancellation
Checksum: The 16-bit one's complement sum of the entire message, excluding the IP
header. The message is padded to 16-bit alignment, and the checksum field is set to
zero while the checksum is computed.
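The checksum rule above is the standard 16-bit one's complement sum; a small sketch (the helper name is mine):

```python
def ones_complement_checksum(data: bytes) -> int:
    """16-bit one's complement sum of `data`, padded to 16-bit alignment.
    The caller zeroes the checksum field before computing."""
    if len(data) % 2:
        data += b"\x00"          # pad to 16-bit alignment
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # fold carry back in
    return ~total & 0xFFFF       # one's complement of the sum

print(hex(ones_complement_checksum(b"\x13\x01\x00\x00")))  # 0xecfe
```

The same routine reappears later for the UDP checksum; only the data being summed differs.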
MOSPF (Multicast Open Shortest Path First) is an extension to the OSPF (Open Shortest Path
First) protocol that facilitates interoperation between unicast and multicast routers. MOSPF is
becoming popular for proprietary network multicasting and may eventually supersede RIP
(Routing Information Protocol).
Here's a brief explanation of how MOSPF works: Multicast information goes out in OSPF link
state advertisements (LSAs). That information allows a MOSPF router to identify
active multicast groups and the associated local area networks (LANs). MOSPF creates a
distribution tree for each multicast source and group and another tree for active sources
sending to the group. The current state of the tree is cached. Each time a link state changes or
the cache times out, the tree must be recomputed to accommodate the new changes.
MOSPF uses both source and destination to send a datagram, based on information in the OSPF
link state database about the autonomous system's topology. A group-membership-LSA makes
it possible to identify the location of each group member. The shortest path for the datagram is
calculated from that information.
This isn't really a category, but a specific instance of a protocol. MOSPF is the multicast
extension to OSPF (Open Shortest Path First) which is a unicast link-state routing protocol.
Link-state routing protocols work by having each router send a routing message periodically
listing its neighbors and how far away they are. These routing messages are flooded throughout
the entire network, and so every router can build up a map of the network which it can then
use to build forwarding tables (using a Dijkstra algorithm) to decide quickly which is the correct
next hop for sending a particular packet.
Extending this to multicast is achieved simply by having each router also list in a routing
message the groups for which it has local receivers. Thus given the map and the locations of the
receivers, a router can also build a multicast forwarding table for each group.
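That idea can be sketched as follows: run Dijkstra over the link-state map, then keep only the tree edges that lead toward routers advertising local receivers (a toy topology with made-up names, not real MOSPF):

```python
import heapq

# Every router holds the same link-state map of the network.
graph = {"S": {"A": 1, "B": 4}, "A": {"S": 1, "B": 1, "C": 3},
         "B": {"S": 4, "A": 1, "C": 1}, "C": {"A": 3, "B": 1}}
members = {"C"}  # routers that advertised local receivers in their LSAs

def shortest_path_tree(src):
    """Dijkstra: return parent pointers of the shortest-path tree."""
    dist, parent = {src: 0}, {}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph[u].items():
            if d + w < dist.get(v, float("inf")):
                dist[v], parent[v] = d + w, u
                heapq.heappush(pq, (d + w, v))
    return parent

def multicast_edges(src):
    """Keep only tree edges on a path from the source to some member."""
    parent = shortest_path_tree(src)
    edges = set()
    for m in members:            # walk back from each member to the source
        while m != src:
            edges.add((parent[m], m))
            m = parent[m]
    return edges

print(multicast_edges("S"))      # per-(source, group) forwarding state
```

Each router runs this computation independently from the shared map, so all routers agree on the same tree without exchanging any extra tree-building messages.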
MOSPF also suffers from poor scaling. With flood-and-prune protocols, data traffic is
an implicit message about where there are senders, and so routers need to store unwanted
state where there are no receivers. With MOSPF there are explicit messages about where all
the receivers are, and so routers need to store unwanted state where there are no senders.
However, both types of protocol build very efficient distribution trees.
CHAPTER 2.
Transport Layer is the second layer of the TCP/IP model. It is an end-to-end layer used to deliver
messages to a host. It is termed an end-to-end layer because it provides a point-to-point
connection between the source host and destination host, rather than hop-to-hop, to deliver
the services reliably. The unit of data encapsulation in the Transport Layer is a segment.
The standard protocols used by the Transport Layer to enhance its functionality are:
TCP (Transmission Control Protocol), UDP (User Datagram Protocol), DCCP (Datagram
Congestion Control Protocol), etc.
Various responsibilities of a Transport Layer –
User Datagram Protocol (UDP) is a Transport Layer protocol. UDP is a part of the Internet Protocol
suite, referred to as the UDP/IP suite. Unlike TCP, it is an unreliable and connectionless protocol, so
there is no need to establish a connection prior to data transfer.
Though Transmission Control Protocol (TCP) is the dominant transport layer protocol used with
most Internet services and provides assured delivery, reliability and much more, all these
services cost us additional overhead and latency. Here, UDP comes into the picture. For
real-time services like computer gaming, voice or video communication and live conferences, we
need UDP. Since high performance is needed, UDP permits packets to be dropped instead of
processing delayed packets. There is no error checking in UDP, so it also saves bandwidth.
User Datagram Protocol (UDP) is more efficient in terms of both latency and bandwidth.
UDP Header –
The UDP header is a fixed, simple 8-byte header, while the TCP header may vary from 20 bytes
to 60 bytes. The first 8 bytes contain all the necessary header information and the remaining
part consists of data. UDP port number fields are each 16 bits long, so the range for port
numbers is 0 to 65535; port number 0 is reserved. Port numbers help to distinguish different
user requests or processes.
1. Source Port : Source Port is 2 Byte long field used to identify port number of source.
2. Destination Port : It is 2 Byte long field, used to identify the port of destined packet.
3. Length : Length is the length of UDP including header and the data. It is a 16-bit field.
4. Checksum : Checksum is 2 Bytes long field. It is the 16-bit one’s complement of the one’s
complement sum of the UDP header, pseudo header of information from the IP header
and the data, padded with zero octets at the end (if necessary) to make a multiple of two
octets.
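The 8-byte layout above maps directly onto four 16-bit fields; a small sketch using Python's struct module (the helper name is mine; checksum left at 0, which UDP over IPv4 permits):

```python
import struct

def build_udp_header(src_port, dst_port, payload, checksum=0):
    """Pack the four 16-bit UDP header fields in network byte order."""
    length = 8 + len(payload)          # Length covers header plus data
    return struct.pack("!HHHH", src_port, dst_port, length, checksum)

hdr = build_udp_header(5000, 53, b"hello")
print(struct.unpack("!HHHH", hdr))     # (5000, 53, 13, 0)
```

Parsing an incoming datagram is the mirror operation: unpack the first 8 bytes with the same `"!HHHH"` format and treat the rest as data.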
Notes – Unlike TCP, Checksum calculation is not mandatory in UDP. No Error control or flow
control is provided by UDP. Hence UDP depends on IP and ICMP for error reporting.
Applications of UDP:
Used for simple request-response communication when the size of data is small and hence
there is less concern about flow and error control.
It is a suitable protocol for multicasting as UDP supports packet switching.
UDP is used for some routing update protocols like RIP (Routing Information Protocol).
Normally used for real-time applications which cannot tolerate uneven delays between
sections of a received message.
The following implementations use UDP as a transport layer protocol:
NTP (Network Time Protocol)
DNS (Domain Name Service)
BOOTP, DHCP.
NNP (Network News Protocol)
Quote of the day protocol
TFTP, RTSP, RIP, OSPF.
Application layer can do some of the tasks through UDP-
Trace Route
Record Route
Time stamp
On the sending side, UDP takes the message, attaches its header and hands it to the Network
Layer; on the receiving side it strips the header and delivers the data to the user. So, it
works fast.
In fact, UDP is practically a null protocol: if you remove the checksum field, all that remains is
port multiplexing.
TCP is used by HTTP, HTTPs, FTP, SMTP and UDP is used by DNS, DHCP, TFTP, SNMP, RIP,
Telnet and VoIP.
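UDP's connectionless model is easy to see with sockets: no handshake, just sendto/recvfrom on datagram sockets (a minimal loopback sketch):

```python
import socket

# Receiver: bind a datagram socket; the OS picks a free port.
recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv.bind(("127.0.0.1", 0))
addr = recv.getsockname()

# Sender: no connection is established before sending.
send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send.sendto(b"ping", addr)

data, peer = recv.recvfrom(1024)
print(data)                      # b'ping'
send.close()
recv.close()
```

Compare this with TCP, where a `connect()`/`accept()` handshake must complete before any data can flow.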
TCP Tahoe:
Before TCP Tahoe, TCP used the go-back-n model to control network congestion; Tahoe then added
Slow Start:
The congestion window grows exponentially from 1 until a threshold (ssthresh) is reached, after
which it increases linearly. On a timeout the congestion window is reduced to 1 and slow start
begins over again, until the threshold is reached.
Problem:
Tahoe is not very efficient because every time a packet is lost it waits for
the timeout and then retransmits the packet. It reduces the size of the congestion window to
1 because of a single packet loss; this inefficiency is costly on high bandwidth-delay product links.
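Tahoe's behaviour can be traced with a toy simulation of the congestion window in MSS units (a sketch, not the full algorithm: one "round" per RTT, with a single timeout injected at a chosen round):

```python
def tahoe_trace(rounds, loss_at, ssthresh=8):
    """Congestion window per round under Tahoe: slow start up to
    ssthresh, linear growth after, reset to 1 on a timeout."""
    cwnd, trace = 1, []
    for r in range(rounds):
        trace.append(cwnd)
        if r == loss_at:                      # timeout detected this round
            ssthresh = max(cwnd // 2, 2)      # halve the threshold
            cwnd = 1                          # restart from slow start
        elif cwnd < ssthresh:
            cwnd = min(cwnd * 2, ssthresh)    # slow start (exponential)
        else:
            cwnd += 1                         # congestion avoidance (linear)
    return trace

print(tahoe_trace(10, loss_at=5))   # [1, 2, 4, 8, 9, 10, 1, 2, 4, 5]
```

The trace shows the cost described above: one loss at round 5 throws the window all the way back to 1, and several rounds are spent climbing back.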
TCP Reno:
TCP Reno includes algorithms named Fast Retransmit and Fast Recovery for congestion control.
Fast Retransmit:
When the sender receives three duplicate acknowledgements for a sent packet, the sender
retransmits that packet immediately, without waiting for the retransmission timeout.
Fast Recovery:
After the retransmission, Reno enters Fast Recovery. In Fast Recovery, after the packet
loss the congestion window size is not reduced to 1; instead it is reduced to half of the current
window size.
Problem:
TCP Reno is helpful when only 1 packet is lost; in case of multiple packet losses it acts like Tahoe.
Then evolved TCP New Reno, which is a modification of TCP Reno that deals with multiple
packet losses.
TCP New Reno is efficient as compared to Tahoe and Reno. In Reno, Fast Recovery exits when
three duplicate acknowledgements are received, but in New Reno it does not exit until all
outstanding data is acknowledged or a retransmission timeout occurs. In this way it can recover
from multiple packet losses in a single window without repeatedly halving the congestion window.
TCP Vegas:
TCP Tahoe, Reno and New Reno detect and control congestion only after congestion occurs, but
there is a better way to overcome the congestion problem, i.e. TCP Vegas. TCP Vegas detects
congestion before any packet loss happens, by observing the round-trip time, and adjusts its
sending rate accordingly. Its main mechanisms are:
· Slow Start
· Packet loss detection
· Detection of available bandwidth
TCP Tahoe
When a loss occurs, fast retransmit is sent, half of the current CWND is saved as ssthresh and
slow start begins again from its initial CWND. Once the CWND reaches ssthresh, TCP changes to
congestion avoidance algorithm where each new ACK increases the CWND by MSS /
CWND. This results in a linear increase of the CWND.
TCP Reno
A fast retransmit is sent, half of the current CWND is saved as ssthresh and as the new
CWND, thus skipping slow start and going directly to the congestion avoidance
algorithm. The overall algorithm here is called fast recovery.
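The same style of toy trace shows Reno's difference: on three duplicate ACKs the window drops to half (the new ssthresh) rather than to 1 (a sketch with one round per RTT and a single loss injected):

```python
def reno_trace(rounds, loss_at, ssthresh=8):
    """Congestion window per round under Reno: on 3 dup ACKs, halve
    cwnd and continue in congestion avoidance (fast recovery)."""
    cwnd, trace = 1, []
    for r in range(rounds):
        trace.append(cwnd)
        if r == loss_at:                      # 3 dup ACKs this round
            ssthresh = max(cwnd // 2, 2)
            cwnd = ssthresh                   # halve, skip slow start
        elif cwnd < ssthresh:
            cwnd = min(cwnd * 2, ssthresh)    # slow start (exponential)
        else:
            cwnd += 1                         # congestion avoidance (linear)
    return trace

print(reno_trace(10, loss_at=5))    # [1, 2, 4, 8, 9, 10, 5, 6, 7, 8]
```

Against the Tahoe trace for the same loss pattern, Reno recovers its window several rounds sooner because it never falls back to a window of 1.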