
NUTAN MAHARASHTRA VIDYA PRASARAK MANDAL’S

NUTAN COLLEGE OF ENGINEERING & RESEARCH (NCER)


Department of Computer Science and Engineering

UNIT 4

[Unit 4] 6 Hrs
Network Layer and Congestion Control: IPv4/IPv6, Routers and Routing Algorithms: distance vector, link state. TCP, UDP and sockets.
Congestion Control and QoS: General principles, Congestion prevention policies, Load shedding, Jitter control, Quality of service: Packet scheduling, Traffic shaping, Integrated Services.

IPV4
One of the most important topics in any discussion of TCP/IP is IP addressing. An IP address is a
numeric identifier assigned to each machine on an IP network. It designates the specific location of
a device on the network.
An IP address is a software address, not a hardware address—the latter is hard-coded on a
network interface card (NIC) and used for finding hosts on a local network. IP addressing was
designed to allow hosts on one network to communicate with a host on a different network
regardless of the type of LANs the hosts are participating in.
Before we get into the more complicated aspects of IP addressing, you need to understand
some of the basics. First, I’m going to explain some of the fundamentals of IP addressing and its
terminology. Then you’ll learn about the hierarchical IP addressing scheme and private IP
addresses.
IP Terminology
Throughout this chapter you’re being introduced to several important terms that are vital
to understanding the Internet Protocol. Here are a few to get you started:
Bit A bit is one digit, either a 1 or a 0.
Byte A byte is 7 or 8 bits, depending on whether parity is used. For the rest of this chapter, always
assume a byte is 8 bits.
Octet An octet, made up of 8 bits, is just an ordinary 8-bit binary number. In this chapter, the terms
byte and octet are completely interchangeable.
Network address This is the designation used in routing to send packets to a remote network—for
example, 10.0.0.0, 172.16.0.0, and 192.168.10.0.
Broadcast address The address used by applications and hosts to send information to all nodes on
a network is called the broadcast address. Examples of layer 3 broadcasts include 255.255.255.255,
which is any network, all nodes; 172.16.255.255, which is all subnets and hosts on network
172.16.0.0; and 10.255.255.255, which broadcasts to all subnets and hosts on network 10.0.0.0.
IP Addressing

 The IPv4 addresses are unique and universal.


 Every host and router on the Internet has an IP address, which encodes its network
number and host number.
 All IP addresses are 32 bits long. It is important to note that an IP address does not
actually refer to a host. It really refers to a network interface, so if a host is on two
networks, it must have two IP addresses.
 The address space of IPv4 is 2^32, or 4,294,967,296 addresses.

Classful Addressing
 IPv4 addressing, at its inception, used the concept of classes. This architecture is
called classful addressing. Although this scheme is becoming obsolete, we briefly
discuss it here to show the rationale behind classless addressing.

 IP addresses were divided into five categories.


 In classful addressing, the address space is divided into five classes: A, B, C, D, and
E. Each class occupies some part of the address space.

Networkid and Hostid


In classful addressing, an IP address in class A, B, or C is divided into netid and hostid. These parts are of varying lengths, depending on the class of the address.

Find the class of each address.


a. 00000001 00001011 00001011 11101111
b. 11000001 10000011 00011011 11111111
c. 14.23.120.8
d. 252.5.15.111
Solution
a. The first bit is 0. This is a class A address.
b. The first 2 bits are 1; the third bit is 0. This is a class C address.
c. The first byte is 14; the class is A.
d. The first byte is 252; the class is E.
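
To make the classification rule concrete, here is a minimal Python sketch (the helper name ipv4_class is ours, not from the text) that maps the first octet to its class:

def ipv4_class(address: str) -> str:
    # Classify a dotted-decimal IPv4 address by its first octet (classful rules).
    first_octet = int(address.split(".")[0])
    if first_octet < 128:      # leading bit 0
        return "A"
    elif first_octet < 192:    # leading bits 10
        return "B"
    elif first_octet < 224:    # leading bits 110
        return "C"
    elif first_octet < 240:    # leading bits 1110
        return "D"
    else:                      # leading bits 1111
        return "E"

print(ipv4_class("14.23.120.8"))   # A
print(ipv4_class("252.5.15.111"))  # E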

Number of blocks and block size in classful IPv4 addressing

Mask
Although the length of the netid and hostid (in bits) is predetermined in classful addressing, we can also use a mask (also called the default mask), a 32-bit number made of contiguous 1s followed by contiguous 0s. The masks for classes A, B, and C are shown in the following table.
The concept does not apply to classes D and E.
Default masks for classful addressing

Class A: 11111111 00000000 00000000 00000000 = 255.0.0.0 (/8)
Class B: 11111111 11111111 00000000 00000000 = 255.255.0.0 (/16)
Class C: 11111111 11111111 11111111 00000000 = 255.255.255.0 (/24)

Subnetting
If an organization was granted a large block in class A or B, it could divide the
addresses into several contiguous groups and assign each group to smaller networks (called
subnets).

Supernetting
In supernetting, an organization can combine several class C blocks to create a larger
range of addresses. In other words, several networks are combined to create a supernetwork
or a supernet. An organization can apply for a set of class C blocks instead of just one.

Address Depletion
The flaws in the classful addressing scheme, combined with the fast growth of the Internet, led to the near depletion of the available addresses. Yet the number of devices on the Internet is much less than the 2^32 address space.

Classful addressing, which is almost obsolete, is replaced with classless addressing.


Classless Addressing (CIDR)
To overcome address depletion and give more organizations access to the Internet,
classless addressing was designed and implemented. In this scheme, there are no classes, but
the addresses are still granted in blocks.
Address Blocks
In classless addressing, when an entity, small or large, needs to be connected to the
Internet, it is granted a block (range) of addresses. The size of the block (the number of
addresses) varies based on the nature and size of the entity. For example, a household may be
given only two addresses; a large organization may be given thousands of addresses. An ISP,
as the Internet service provider, may be given thousands or hundreds of thousands based on the
number of customers it may serve.

Restrictions: To simplify the handling of addresses, the Internet authorities impose three restrictions on classless address blocks:
1. The addresses in a block must be contiguous, one after another.
2. The number of addresses in a block must be a power of 2 (1, 2, 4, 8, ...).
3. The first address must be evenly divisible by the number of addresses.

Following figure shows a block of addresses, in both binary and dotted-decimal notation,
granted to a small business that needs 16 addresses.
We can see that the restrictions are applied to this block. The addresses are contiguous.
The number of addresses is a power of 2 (16 = 2^4), and the first address is divisible by 16. The
first address, when converted to a decimal number, is 3,440,387,360, which when divided by
16 results in 215,024,210.

In IPv4 addressing, a block of addresses can be defined as x.y.z.t /n in which x.y.z.t defines
one of the addresses and the /n defines the mask.

The first address in the block can be found by setting the rightmost 32 − n bits to 0s.

A block of addresses is granted to a small organization. We know that one of the addresses
is 205.16.37.39/28. What is the first address in the block?

Solution
The binary representation of the given address is
11001101 00010000 00100101 00100111

If we set the 32 − 28 rightmost bits to 0s, we get

11001101 00010000 00100101 00100000
or
205.16.37.32.

The last address in the block can be found by setting the rightmost 32 − n bits to 1s.
A block of addresses is granted to a small organization. We know that one of the addresses
is 205.16.37.39/28. What is the last address in the block?

Solution
The binary representation of the given address is
11001101 00010000 00100101 00100111
If we set 32 − 28 rightmost bits to 1, we get
11001101 00010000 00100101 00101111
or
205.16.37.47

The number of addresses in the block can be found by using the formula 2^(32−n).
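
These first/last-address rules can be checked with a short Python sketch using plain bit operations (block_info is a name chosen here for illustration):

def block_info(address: str, n: int):
    # Return (first address, last address, count) of the block containing address/n.
    octets = [int(o) for o in address.split(".")]
    value = (octets[0] << 24) | (octets[1] << 16) | (octets[2] << 8) | octets[3]
    host_bits = 32 - n
    first = value & ~((1 << host_bits) - 1)   # rightmost 32 - n bits set to 0s
    last = first | ((1 << host_bits) - 1)     # rightmost 32 - n bits set to 1s
    dotted = lambda v: ".".join(str((v >> s) & 0xFF) for s in (24, 16, 8, 0))
    return dotted(first), dotted(last), 1 << host_bits  # count = 2^(32 - n)

print(block_info("205.16.37.39", 28))  # ('205.16.37.32', '205.16.37.47', 16)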

CIDR – Classless InterDomain Routing


 Classless Addressing is an improved IP Addressing system.
 It makes the allocation of IP Addresses more efficient.
 It replaces the older classful addressing system based on classes.
 It is also known as Classless Inter Domain Routing (CIDR)
CIDR Block-
When a user asks for a specific number of IP Addresses,
 CIDR dynamically assigns a block of IP Addresses based on certain rules.
 This block contains the required number of IP Addresses as demanded by the user.
 This block of IP Addresses is called as a CIDR block.
Rules for Creating CIDR Block
 All the IP Addresses in the CIDR block must be contiguous.
 The size of the block must be representable as a power of 2.
 Size of the block is the total number of IP Addresses contained in the block.
 Size of any CIDR block will always be of the form 2^1, 2^2, 2^3, 2^4, 2^5 and so on.
 First IP Address of the block must be divisible by the size of the block.
Examples-
Consider a binary pattern-
01100100.00000001.00000010.01000000
(represented as 100.1.2.64)
 It is divisible by 2^5 since its least significant 5 bits are zero.
 It is divisible by 2^6 since its least significant 6 bits are zero.
 It is not divisible by 2^7 since its least significant 7 bits are not zero.
CIDR Notation-
CIDR IP Addresses look like-
a.b.c.d / n
 They end with a slash followed by a number called the IP network prefix.
 IP network prefix tells the number of bits used for the identification of network.
 Remaining bits are used for the identification of hosts in the network.
Problem-01:
Given the CIDR representation 20.10.30.35 / 27. Find the range of IP Addresses in the CIDR
block.
Solution-
Given CIDR representation is 20.10.30.35 / 27.
It suggests-
 27 bits are used for the identification of network.
 Remaining 5 bits are used for the identification of hosts in the network.
Given CIDR IP Address may be represented as-
00010100.00001010.00011110.00100011 / 27
So,
 First IP Address = 00010100.00001010.00011110.00100000 = 20.10.30.32
 Last IP Address = 00010100.00001010.00011110.00111111 = 20.10.30.63
Thus, Range of IP Addresses = [ 20.10.30.32 - 20.10.30.63]
Problem-02:
Given the CIDR representation 100.1.2.35 / 20. Find the range of IP Addresses in the CIDR
block.
Solution-
Given CIDR representation is 100.1.2.35 / 20.
It suggests-
 20 bits are used for the identification of network.
 Remaining 12 bits are used for the identification of hosts in the network.

Given CIDR IP Address may be represented as-


01100100.00000001.00000010.00100011 / 20
So,
 First IP Address = 01100100.00000001.00000000.00000000 = 100.1.0.0
 Last IP Address = 01100100.00000001.00001111.11111111 = 100.1.15.255
Thus, Range of IP Addresses = [ 100.1.0.0 - 100.1.15.255]
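
Both answers can be verified with Python's standard ipaddress module; strict=False lets a host address such as .35 stand in for its block:

import ipaddress

net1 = ipaddress.ip_network("20.10.30.35/27", strict=False)
print(net1.network_address, net1.broadcast_address)  # 20.10.30.32 20.10.30.63

net2 = ipaddress.ip_network("100.1.2.35/20", strict=False)
print(net2.network_address, net2.broadcast_address)  # 100.1.0.0 100.1.15.255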

Sub-netting
Subnetting is the strategy used to partition a single physical network into more than one smaller logical sub-network (subnet).
An IP address includes a network segment and a host segment. Subnets are designed by borrowing bits from the IP address's host part and using these bits to assign a number of smaller sub-networks inside the original network.
Subnetting allows an organization to add sub-networks without the need to acquire a new network number via the Internet service provider (ISP). Subnetting helps to reduce the network traffic and conceals network complexity.
Subnetting is essential when a single network number has to be allocated over numerous segments of a local area network (LAN).
Subnets were initially designed for solving the shortage of IP addresses over the Internet. Each IP address is associated with a subnet mask. All the class types, such as Class A, Class B and Class C, include a default subnet mask.
The subnet mask is intended for determining the type and number of IP addresses required for a given local network. The firewall or router is called the default gateway. The default subnet masks are as follows:

 Class A: 255.0.0.0
 Class B: 255.255.0.0
 Class C: 255.255.255.0

The subnetting process allows the administrator to divide a single Class A, Class B, or Class C network number into smaller portions. The subnets can be subnetted again into sub-subnets.
Dividing the network into a number of subnets provides the following benefits:

 Reduces the network traffic by reducing the volume of broadcasts


 Helps to overcome the constraints of a local area network (LAN), for example, the
maximum number of permitted hosts.
 Enables users to access a work network from their homes; there is no need to open the
complete network.

Table 3.5 Reserved IP address space (private addresses)

Address Class   Reserved Address Space
Class A         10.0.0.0 through 10.255.255.255
Class B         172.16.0.0 through 172.31.255.255
Class C         192.168.0.0 through 192.168.255.255

*Detailed handwritten solved examples are provided on Google Classroom.
IPV6

IPv6 addresses

An IPv6 address uses 128 bits, as opposed to 32 bits in IPv4.

IPv6 addresses are written using hexadecimal, as opposed to dotted decimal in IPv4.

Because a hexadecimal digit represents 4 bits, an IPv6 address consists of 32 hexadecimal digits.

These digits are grouped in fours, giving 8 groups or blocks. The groups are written with a : (colon) as a separator.

group1:group2: ……etc…. :group8

Here is an IPv6 address example (an illustrative address):

2001:0db8:0000:0000:0008:0800:200c:417a

Note: Because of the length of IPv6 addresses, various shortening techniques are employed.

The main technique is to omit leading zeros and replace one run of all-zero groups with ::, so the address above shortens to 2001:db8::8:800:200c:417a.
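
Python's ipaddress module applies exactly these shortening rules, so it is handy for checking a compression done by hand (the address is the illustrative one from above):

import ipaddress

addr = ipaddress.IPv6Address("2001:0db8:0000:0000:0008:0800:200c:417a")
print(addr.compressed)  # 2001:db8::8:800:200c:417a
print(addr.exploded)    # 2001:0db8:0000:0000:0008:0800:200c:417a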

Network And Node Addresses

In IPv4 an address is split into two components: a network component and a node component.

This was done initially using Address classes and later using subnet masking.

In IPv6 we do the same. The first step is to split the address into two parts.

The address is split into two 64-bit segments: the top 64 bits are the network part and the lower 64 bits the node part.
The upper 64 bits are used for routing.

The lower 64 bits identify the address of the interface or node, and are derived from the actual physical or MAC address using IEEE's Extended Unique Identifier (EUI-64) format.

If we look at the upper 64 bits in more detail, we can see that they are split into 2 blocks of 48 and 16 bits respectively. The lower 16 bits are used for subnets on an internal network, and are controlled by a network administrator.

The upper 48 bits are used for the global network addresses and are for routing over the internet.

Address Types and Scope

IPv6 addresses have three types:

 Global Unicast Address – scope: Internet – routed on the Internet
 Unique Local – scope: internal network or VPN – internally routable, but not routed on the Internet
 Link Local – scope: network link – not routed internally or externally.
Global and Public Addresses

Global addresses are routable on the internet and start with 2001:

These addresses are known as global Unicast addresses and are the equivalent of the public addresses of
IPv4 networks.

The Internet authorities allocate address blocks to ISPs who in turn allocate them to their customers.

Internal Addresses- Link Local and Unique Local

In IPv4, internal addresses use the reserved number ranges 10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16, and 169.254.0.0/16.

These addresses are not routed on the Internet and are reserved for internal networks.

IPv6 also has two Internal address types.

 Link Local
 Unique Local

Link Local

These are meant to be used inside an internal network, and again they are not routed on the Internet.

It is equivalent to the IPv4 address 169.254.0.0/16 which is allocated on an IPv4 network when no DHCP
server is found.

Link local addresses start with fe80

They are restricted to a link and are not routed on the internal network or the Internet.
Link local addresses are self-assigned, i.e. they do not require a DHCP server.

A link local address is required on every IPv6 interface even if no routing is present.

Unique Local

Unique Local are meant to be used inside an internal network.

They are routed on the Internal network but not routed on the Internet.

They are equivalent to the IPv4 ranges 10.0.0.0/8, 172.16.0.0/12 and 192.168.0.0/16.

The address space is divided into two /8 spaces: fc00::/8 for globally assigned addressing, and fd00::/8 for
locally assigned addressing.

For manual assignment by an organisation, use the fd00 prefix.

Using IPv6 Addresses in URLs

On IPv4 networks you can access a network resource, e.g. a web page, using the format

http://192.168.1.21/webpage

However, IPv6 addresses contain colons as separators and so must be enclosed in square brackets.

http://[IPv6 address]/webpage.

IPv6 Loop Back

The IPv6 loopback address is ::1. You can ping it as follows:

ping ::1
Advantages of IPv6

The various advantages of IPv6 over IPv4 are:


1. Larger address space: An IPv6 address is 128 bits long as compared to the 32-bit address of IPv4, a 2^96-fold increase in the address space.
2. Allowance for extension: IPv6 is designed to allow the extension of the protocol if required by new
technologies or applications.
3. Better header format: IPv6 uses a new header format in which options are separated from the base
header and inserted, when needed, between the base header and upper-layer data.
4. New options: IPv6 has new options to allow for additional functionalities.
5. Support for more security: The encryption and authentication options in IPv6 provide confidentiality
and integrity of the packet.
6. Support for resource allocation: In IPv6, the source can request special handling of the packet with the help of the flow label field. This mechanism can be used to support traffic such as real-time audio and video.

Comparison between IPv4 and IPv6

The major difference in IPv4 and IPv6 packet formats are as follows:
1. The IPv6 packet format does not contain a header length field, as the IPv6 base header has a fixed length of 40 bytes. The IPv4 header is variable in length, so a header length field is required.
2. The header checksum field is not present in IPv6. As a result, error detection is not done on the header; checksumming is provided by upper-layer protocols. This reduces the processing time of an IP packet.
3. In IPv6, a hop limit field is used, whereas in IPv4 the Time to Live (TTL) field is used.
4. In IPv6, the size of payload (excluding header) is specified whereas in IPv4 total length field is used
that specifies the total size of IP packet including header.
5. There is no fragmentation field in the base header in IPv6. It has been moved to the extension header.
6. The identification, flag, and offset fields are eliminated from the base header in IPv6. They are included in the fragmentation extension header.
7. The options field is moved under extension headers in IPv6.
8. The source and destination address sizes in IPv6 are 128 bits as against 32 bits in IPv4.
9. The service type field is eliminated in IPv6. The priority and flow label fields together take over the
function of the service type field.
ROUTING ALGORITHMS

The main function of NL (Network Layer) is routing packets from the source machine to the
destination machine.
There are two processes inside router:
a) One of them handles each packet as it arrives, looking up the outgoing line to use for it in
the routing table. This process is forwarding.
b) The other process is responsible for filling in and updating the routing tables.

That is where the routing algorithm comes into play. This process is routing.

The routing algorithm is that part of the network layer software responsible for deciding which
output line an incoming packet should be transmitted on.

PROPERTIES OF ROUTING ALGORITHM:


Certain properties are desirable in a routing algorithm: correctness, simplicity, robustness, stability, fairness, and optimality.

Routing algorithms can be grouped into two major classes:


1) Non-adaptive (static routing)
2) Adaptive (dynamic routing)

Non adaptive algorithm


Non-adaptive algorithms do not base their routing decisions on measurements or estimates of the current traffic and topology. Instead, the choice of the route to use to get from I to J is computed in advance, offline, and downloaded to the routers when the network is booted. This procedure is sometimes called static routing.

Adaptive algorithm
Adaptive algorithms, in contrast, change their routing decisions to reflect changes in the topology, and usually the traffic as well.

Adaptive algorithms differ in


1) Where they get their information (e.g., locally, from adjacent routers, or from all routers),
2) When they change the routes (e.g., every ΔT sec, when the load changes or when the topology changes), and
3) What metric is used for optimization (e.g., distance, number of hops, or estimated transit time).
This procedure is called dynamic routing.

BASIS FOR COMPARISON    STATIC ROUTING                          DYNAMIC ROUTING
Configuration           Manual                                  Automatic
Routing table building  Routing locations are hand-typed        Locations are dynamically filled in the table
Routes                  User defined                            Updated according to changes in topology
Routing algorithms      No complex routing algorithms           Uses complex routing algorithms
Implemented in          Small networks                          Large networks
Link failure            Link failure obstructs rerouting        Link failure does not affect rerouting
Security                Provides high security                  Less secure due to broadcasts and multicasts
Routing protocols       No routing protocols are involved       Protocols such as RIP, EIGRP, etc. are involved
Additional resources    Not required                            Needed to store the information

Fairness and optimality


Fairness and optimality may sound obvious, but as it turns out, they are often contradictory goals. Suppose there is enough traffic between A and A', between B and B', and between C and C' to saturate the horizontal links. To maximize the total flow, the X to X' traffic should be shut off altogether. Unfortunately, X and X' may not see it that way. Evidently, some compromise between global efficiency and fairness to individual connections is needed.

Network with a conflict between fairness and efficiency

THE OPTIMALITY PRINCIPLE

One can make a general statement about optimal routes without regard to network topology or traffic. This statement is known as the optimality principle.

If router J is on the optimal path from router I to router K, then the optimal path from J to K also falls along the same route.

As a direct consequence of the optimality principle, the set of optimal routes from all sources to a given destination forms a tree rooted at the destination. Such a tree is called a sink tree, where the distance metric is the number of hops. Note that a sink tree is not necessarily unique; other trees with the same path lengths may exist.
The goal of all routing algorithms is to discover and use the sink trees for all routers.

(a) A network. (b) A sink tree for router B.

Different Routing Algorithms


Shortest Path Algorithm
Flooding
Distance Vector Routing
Link State Routing
RIP

OSPF
Routing for Mobile Hosts
Routing in Ad Hoc Networks

SHORTEST PATH ROUTING

The idea is to build a graph of the subnet, with each node of the graph representing a router and each arc of the graph representing a communication line (often called a link). To choose a route between a given pair of routers, the algorithm just finds the shortest path between them on the graph.
1. Start with the local node (router) as the root of the tree. Assign a cost of 0 to this node and make it the first permanent node.
2. Examine each neighbor of the node that was the last permanent node.
3. Assign a cumulative cost to each node and make it tentative.
4. Among the list of tentative nodes:
a. Find the node with the smallest cost and make it permanent.
b. If a node can be reached from more than one route, then select the route with the shortest cumulative cost.
5. Repeat steps 2 to 4 until every node becomes permanent.

Fig. 5-7 The first six steps used in computing the shortest path from A to D. The arrows indicate the working node.

To illustrate how the labelling algorithm works, look at the weighted, undirected graph of Fig. 5-7(a), where the weights represent, for example, distance.
We want to find the shortest path from A to D. We start out by marking node A as permanent, indicated by a filled-in circle.
Then we examine, in turn, each of the nodes adjacent to A (the working node), relabeling each one with the distance to A.
Whenever a node is relabelled, we also label it with the node from which the probe was made so that we can reconstruct the final path later.

Having examined each of the nodes adjacent to A, we examine all the tentatively labelled nodes in the whole graph and make the one with the smallest label permanent, as shown in Fig. 5-7(b).
This one becomes the new working node.

We now start at B and examine all nodes adjacent to it. If the sum of the label on B and the distance from B to the node being considered is less than the label on that node, we have a shorter path, so the node is relabeled.
After all the nodes adjacent to the working node have been inspected and the tentative labels changed if possible, the entire graph is searched for the tentatively labelled node with the smallest label. This node is made permanent and becomes the working node for the next round.
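
The labelling procedure described above is Dijkstra's shortest path algorithm. Here is a minimal Python sketch of it; since the weights of Fig. 5-7 are not reproduced in these notes, the example graph below is hypothetical:

import heapq

def dijkstra(graph, source):
    # graph maps node -> {neighbor: line cost}; returns distances and predecessors.
    dist = {source: 0}
    prev = {}
    tentative = [(0, source)]
    permanent = set()
    while tentative:
        d, node = heapq.heappop(tentative)  # smallest tentative label...
        if node in permanent:
            continue
        permanent.add(node)                 # ...becomes permanent
        for neighbor, cost in graph[node].items():
            new_d = d + cost                # cumulative cost via the working node
            if new_d < dist.get(neighbor, float("inf")):
                dist[neighbor] = new_d      # relabel with the shorter route
                prev[neighbor] = node       # remember where the probe came from
                heapq.heappush(tentative, (new_d, neighbor))
    return dist, prev

g = {"A": {"B": 2, "G": 6}, "B": {"A": 2, "C": 7, "E": 2},
     "C": {"B": 7, "D": 3}, "D": {"C": 3, "E": 3},
     "E": {"B": 2, "D": 3, "F": 2}, "F": {"E": 2, "G": 1},
     "G": {"A": 6, "F": 1}}
dist, prev = dijkstra(g, "A")
print(dist["D"])  # 7 for this hypothetical graph (A-B-E-D)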

Flooding

When a routing algorithm is implemented, each router must make decisions based on local knowledge, not the complete picture of the network. A simple local (and static) technique is flooding, in which every incoming packet is sent out on every outgoing line except the one it arrived on.

Flooding obviously generates vast numbers of duplicate packets, in fact, an infinite number unless some measures are taken to damp the process.
One such measure is to have a hop counter contained in the header of each packet, which is decremented at each hop, with the packet being discarded when the counter reaches zero.
Ideally, the hop counter should be initialized to the length of the path from source to destination. If the sender does not know how long the path is, it can initialize the counter to the worst case, namely, the full diameter of the subnet.
A variation of flooding that is slightly more practical is selective flooding. In this algorithm the routers do not send every incoming packet out on every line, only on those lines that are going approximately in the right direction.
Flooding is not practical in most applications.
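
To see why damping is needed, here is a small Python simulation of flooding with a hop counter on a hypothetical four-node topology; the returned count shows how fast duplicate copies multiply:

from collections import deque

def flood(graph, source, max_hops):
    # Forward each copy on every outgoing line except the one it arrived on.
    copies = 0
    queue = deque([(source, None, max_hops)])  # (node, arrived-from, hops left)
    while queue:
        node, came_from, hops = queue.popleft()
        copies += 1                  # each arrival is one packet copy
        if hops == 0:
            continue                 # hop counter exhausted: discard, do not forward
        for neighbor in graph[node]:
            if neighbor != came_from:
                queue.append((neighbor, node, hops - 1))
    return copies

g = {"A": ["B", "C"], "B": ["A", "C"], "C": ["A", "B", "D"], "D": ["C"]}
print(flood(g, "A", 3))  # 9 copies in a graph of only 4 nodes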

Intra - and Inter domain Routing

An autonomous system (AS) is a group of networks and routers under the authority of a single administration.
 Routing inside an autonomous system is referred to as intra domain routing (DISTANCE VECTOR, LINK STATE).
 Routing between autonomous systems is referred to as inter domain routing (PATH VECTOR).
Each autonomous system can choose one or more intra domain routing protocols to handle routing inside the autonomous system. However, only one inter domain routing protocol handles routing between autonomous systems.

Distance Vector Routing


Distance vector routing algorithms operate by having each router maintain a table (i.e., a vector) giving the best known distance to each destination and which line to use to get there.


These tables are updated by exchanging information with the neighbors.
The distance vector routing algorithm is sometimes called by other names, most
commonly the distributed Bellman-Ford routing algorithm and the Ford-
Fulkerson algorithm, after the researchers who developed it (Bellman, 1957; and
Ford and Fulkerson, 1962).
It was the original ARPANET routing algorithm and was also used in the Internet
under the name RIP.

Figure 5-9. (a) A network. (b) Input from A, I, H, K, and the new routing table for J.
Part (a) shows a subnet. The first four columns of part (b) show the delay vectors received from the neighbors of router J.
A claims to have a 12-msec delay to B, a 25-msec delay to C, a 40-msec delay to D, etc. Suppose that J has measured or estimated its delay to its neighbors A, I, H, and K as 8, 10, 12, and 6 msec, respectively.

Example 1

Each node constructs a one-dimensional array containing the "distances" (costs) to all other nodes and distributes that vector to its immediate neighbors.

1. The starting assumption for distance-vector routing is that each node knows the
cost of the link to each of its directly connected neighbors.
2. A link that is down is assigned an infinite cost.

Information      Distance to Reach Node
Stored at Node   A  B  C  D  E  F  G
A                0  1  1  ∞  1  1  ∞
B                1  0  1  ∞  ∞  ∞  ∞
C                1  1  0  1  ∞  ∞  ∞
D                ∞  ∞  1  0  ∞  ∞  1
E                1  ∞  ∞  ∞  0  ∞  ∞
F                1  ∞  ∞  ∞  ∞  0  1
G                ∞  ∞  ∞  1  ∞  1  0

Table 1. Initial distances stored at each node (global view).

We can represent each node's knowledge about the distances to all other nodes as a table like the one given in Table 1.
Note that each node only knows the information in one row of the table.

1. Every node sends a message to its directly connected neighbors containing its personal list of distances. (For example, A sends its information to its neighbors B, C, E, and F.)
2. If any of the recipients of the information from A find that A is advertising a path shorter than the one they currently know about, they update their list to give the new path length and note that they should send packets for that destination through A. (Node B learns from A that node E can be reached at a cost of 1; B also knows it can reach A at a cost of 1, so it adds these to get the cost of reaching E by means of A. B records that it can reach E at a cost of 2 by going through A.)
3. After every node has exchanged a few updates with its directly connected neighbors, all nodes will know the least-cost path to all the other nodes.
4. In addition to updating their list of distances when they receive updates, the nodes need to keep track of which node told them about the path that they used to calculate the cost, so that they can create their forwarding table. (For example, B knows that it was A who said "I can reach E in one hop," and so B puts an entry in its table that says "To reach E, use the link to A.")

Information      Distance to Reach Node
Stored at Node   A  B  C  D  E  F  G
A                0  1  1  2  1  1  2
B                1  0  1  2  2  2  3
C                1  1  0  1  2  2  2
D                2  2  1  0  3  2  1
E                1  2  2  3  0  2  3
F                1  2  2  2  2  0  1
G                2  3  2  1  3  1  0

Table 2. Final distances stored at each node (global view).

In practice, each node's forwarding table consists of a set of triples of the form: (
Destination, Cost, Next Hop).

For example, Table 3 shows the complete routing table maintained at node B for the network in Figure 1.

Destination  Cost  Next Hop
A            1     A
C            1     C
D            2     C
E            2     A
F            2     A
G            3     A

Table 3. Routing table maintained at node B.
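
The exchange-and-update process in steps 1 to 4 can be simulated in a few lines of Python. The sketch below uses the link costs implied by Table 1 and repeats the exchange until no table changes; note that ties (such as B's route to G) may resolve to a different but equally cheap next hop depending on iteration order:

INF = float("inf")

# Direct links and their costs, as in Table 1 (all links have cost 1)
links = {"A": {"B": 1, "C": 1, "E": 1, "F": 1}, "B": {"A": 1, "C": 1},
         "C": {"A": 1, "B": 1, "D": 1}, "D": {"C": 1, "G": 1},
         "E": {"A": 1}, "F": {"A": 1, "G": 1}, "G": {"D": 1, "F": 1}}
nodes = sorted(links)

# Initial tables: 0 to self, the link cost to each neighbor, infinity elsewhere
dist = {n: {m: (0 if m == n else links[n].get(m, INF)) for m in nodes} for n in nodes}
nxt = {n: {m: (m if m in links[n] else None) for m in nodes} for n in nodes}

changed = True
while changed:                # keep exchanging vectors until convergence
    changed = False
    for n in nodes:
        for neighbor, cost in links[n].items():
            for dest in nodes:
                via = cost + dist[neighbor][dest]  # cost to dest through neighbor
                if via < dist[n][dest]:
                    dist[n][dest] = via
                    nxt[n][dest] = neighbor        # remember who advertised it
                    changed = True

print(dist["B"])  # B's row of Table 2
print(nxt["B"])   # B's next hops, as in Table 3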

Example 2

In distance vector routing, the least-cost route between any two nodes is the route with minimum distance.
In this protocol, as the name implies, each node maintains a vector (table) of minimum distances to every node.
The protocol involves three main phases:

Initialization
Sharing
Updating

Initialization
Each node can know only the distance between itself and its immediate neighbors, those directly connected to it. So for the moment, we assume that each node can send a message to its immediate neighbors and find the distance between itself and these neighbors. The figure below shows the initial tables for each node. The distance for any entry that is not a neighbor is marked as infinite (unreachable).

Initialization of tables in distance vector routing

Sharing
The whole idea of distance vector routing is the sharing of information between neighbors. Although node A does not know about node E, node C does. So if node C shares its routing table with A, node A can also know how to reach node E. On the other hand, node C does not know how to reach node D, but node A does. If node A shares its routing table with node C, node C also knows how to reach node D. In other words, nodes A and C, as immediate neighbors, can improve their routing tables if they help each other.

NOTE: In distance vector routing, each node shares its routing table with its immediate neighbors periodically and when there is a change.

Updating
When a node receives a two-column table from a neighbor, it needs to update its routing table. Updating takes three steps:
1. The receiving node needs to add the cost between itself and the sending node to each value in the second column. (x + y)
2. If the receiving node uses information from any row, the sending node is the next node in the route.
3. The receiving node needs to compare each row of its old table with the corresponding row of the modified version of the received table.
a. If the next-node entry is different, the receiving node chooses the row with the smaller cost. If there is a tie, the old one is kept.
b. If the next-node entry is the same, the receiving node chooses the new row.
For example, suppose node C has previously advertised a route to node X with distance 3. Suppose that now there is no path between C and X; node C now advertises this route with a distance of infinity. Node A must not ignore this value even though its old entry is smaller. The old route does not exist anymore. The new route has a distance of infinity.

Updating in distance vector routing

Distance vector routing tables



THE COUNT-TO-INFINITY PROBLEM

Counting to infinity is just another name for a routing loop, and it is one of the important issues in distance vector routing. Routing loops usually occur when an interface goes down. They can also occur when two routers send updates to each other at the same time.

Imagine a network with a graph as shown above in figure


As you can see in this graph, there is only one link between A and the other parts of the network.
Now imagine that the link between A and B is cut.
At this time, B corrects its table.
After a specific amount of time, routers exchange their tables, and so B receives C's routing table.
Since C doesn't know what has happened to the link between A and B, it says that it has a link to A with the weight of 2 (1 for C to B, and 1 for B to A -- it doesn't know B has no link to A).
B receives this table and thinks there is a separate link between C and A, so it corrects its table and changes infinity to 3 (1 for B to C, and 2 for C to A, as C said).
Once again, routers exchange their tables.
When C receives B's routing table, it sees that B has changed the weight of its link to A from 1 to 3, so C updates its table and changes the weight of the link to A to 4 (1 for C to B, and 3 for B to A, as B said).
This process loops until all nodes find out that the weight of the link to A is infinity. This situation is shown in the table below.
In this way, distance vector algorithms have a slow convergence rate.

One way to solve this problem is for routers to send information only to the neighbours that are not exclusive links to the destination.
For example, in this case, C shouldn't send any information to B about A, because B is the only way to A.

Example 2 by Andrew

Consider the five-node (linear) subnet of the following figure, where the delay metric is the number of hops. Suppose A is down initially and all the other routers know this. In other words, they have all recorded the delay to A as infinity.
Now let us consider the situation of Fig. (b), in which all the lines and routers are initially up. Routers B, C, D, and E have distances to A of 1, 2, 3, and 4, respectively. Suddenly A goes down, or alternatively, the line between A and B is cut, which is effectively the same thing from B's point of view.

At the first packet exchange, B does not hear anything from A. Fortunately, C says "Do not worry; I have a path to A of length 2." Little does B suspect that C's path runs through B itself. For all B knows, C might have ten links all with separate paths to A of length 2. As a result, B thinks it can reach A via C, with a path length of 3. D and E do not update their entries for A on the first exchange.

On the second exchange, C notices that each of its neighbors claims to have a path to A of length 3. It picks one of them at random and makes its new distance to A 4, as shown in the third row of Fig. (b). Subsequent exchanges produce the history shown in the rest of Fig. (b).

Fig: The count-to-infinity problem



RIP
The Routing Information Protocol (RIP) is an intradomain routing protocol used
inside an autonomous system. It is a very simple protocol based on distance vector
routing.

RIP Message Format

Requests and Responses

Timers in RIP

RIP Version 2

Link State Routing (OSPF)

The idea behind link state routing is simple and can be stated as five parts. Each router must do the following:
1. Discover its neighbors and learn their network addresses.
2. Measure the delay or cost to each of its neighbors.
3. Construct a packet telling all it has just learned.
4. Send this packet to all other routers.
5. Compute the shortest path to every other router
Learning about the Neighbours
When a router is booted, its first task is to learn who its neighbours are.
It accomplishes this goal by sending a special HELLO packet on each point-to-point line.
The router on the other end is expected to send back a reply telling who it is.

(a) Nine routers and a LAN. (b) A graph model of (a).

Measuring Line Cost


 The link state routing algorithm requires each router to know, or at least have a reasonable estimate of, the delay to each of its neighbors. The most direct way to determine this delay is to send over the line a special ECHO packet that the other side is required to send back immediately.
 By measuring the round-trip time and dividing it by two, the sending router can get a reasonable estimate of the delay.
 For even better results, the test can be conducted several times, and the average used. Of course, this method implicitly assumes the delays are symmetric, which may not always be the case.
 Unfortunately, there is also an argument against including the load in the delay calculation. Consider the subnet of Fig. 5-12, which is divided into two parts, East and West, connected by two lines, CF and EI.

Building Link State Packets

(a) A subnet. (b) The link state packets for this subnet.

Once the information needed for the exchange has been collected, the next step is for each router to build a packet containing all the data.
The packet starts with the identity of the sender, followed by a sequence number and age, and a list of neighbours.
For each neighbour, the delay to that neighbour is given.
An example subnet is given in the above Fig. (a) with delays shown as labels on the lines. The corresponding link state packets for all six routers are shown in the above Fig. (b).
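
A link state packet is essentially a small record. A minimal Python sketch of its construction is below; build_lsp is a name chosen here, and since the figure's delay labels are not reproduced in these notes, the numbers are illustrative:

def build_lsp(router, neighbors, seq, age):
    # Identity of the sender, then sequence number, age, and the neighbor list
    # with the measured delay to each neighbor.
    return {"sender": router, "seq": seq, "age": age, "links": dict(neighbors)}

lsp_a = build_lsp("A", {"B": 4, "E": 5}, seq=1, age=60)
print(lsp_a)  # {'sender': 'A', 'seq': 1, 'age': 60, 'links': {'B': 4, 'E': 5}}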

Distributing the Link State Packets

The packet buffer for router B


In the figure above, the link state packet from A arrives directly, so it must be sent to C and F and acknowledged to A, as indicated by the flag bits.
Similarly, the packet from F has to be forwarded to A and C and acknowledged to F.

OSPF

The Open Shortest Path First (OSPF) protocol is an intradomain routing protocol based on link state routing. Its domain is also an autonomous system.

OSPF packets

OSPF common header

Link state update packet

LSA general header



DIFFERENCE BETWEEN DISTANCE VECTOR ROUTING AND LINK STATE ROUTING

DIFFERENCE BETWEEN RIP AND OSPF PROTOCOL

1. RIP stands for Routing Information Protocol; OSPF stands for Open Shortest Path First.
2. RIP works on the Bellman-Ford algorithm; OSPF works on the Dijkstra algorithm.
3. RIP is a distance vector protocol that uses the distance or hop count to determine the transmission path; OSPF is a link state protocol that analyzes different sources such as speed, cost and path congestion while identifying the shortest path.
4. RIP is basically used for smaller organizations; OSPF is basically used for larger organizations in the network.
5. RIP allows a maximum of 15 hops; OSPF has no such restriction on the hop count.
6. RIP is not a very intelligent dynamic routing protocol; OSPF is a more intelligent routing protocol than RIP.
7. In RIP, networks are classified as areas and tables; in OSPF, networks are classified as areas, sub areas, autonomous systems and backbone areas.
8. RIP's administrative distance is 120; OSPF's administrative distance is 110.
9. RIP uses UDP (User Datagram Protocol); OSPF works directly over IP (Internet Protocol).
10. RIP calculates the metric in terms of hop count; OSPF calculates the metric in terms of bandwidth.

TCP /UDP SOCKETS

A TCP /UDP socket is an endpoint instance defined by an IP address and a port in the context of
either a particular TCP connection or the listening state.

A port is a virtualisation identifier defining a service endpoint (as distinct from a


service instance endpoint aka session identifier).

A TCP socket is not a connection, it is the endpoint of a specific connection.

There can be concurrent connections to a service endpoint, because a connection is identified


by both its local and remote endpoints, allowing traffic to be routed to a specific service instance.

There can only be one listener socket for a given address/port combination.

The use of ports allows computers/devices to run multiple services/applications.

The IANA (Internet Assigned Numbers Authority) has divided the port numbers into three ranges: well known, registered, and dynamic (or private), as shown in Figure 23.4.
 Well-known ports. The ports ranging from 0 to 1023 are assigned and controlled by IANA. These are the well-known ports.
 Registered ports. The ports ranging from 1024 to 49,151 are not assigned or controlled by IANA. They can only be registered with IANA to prevent duplication.
 Dynamic ports. The ports ranging from 49,152 to 65,535 are neither controlled nor registered. They can be used by any process. These are the ephemeral ports.

Socket Addresses
Process-to-process delivery needs two identifiers, IP address and the port number, at
each end to make a connection. The combination of an IP address and a port number is
called a socket address. The client socket address defines the client process uniquely
just as the server socket address defines the server process uniquely (see Figure 23.5).
A transport layer protocol needs a pair of socket addresses: the client socket address
and the server socket address. These four pieces of information are part of the IP header
and the transport layer protocol header. The IP header contains the IP addresses; the
UDP or TCP header contains the port numbers.

Imagine sitting on your PC at home, and you have two browser windows open.
One looking at the Google website, and the other at the Yahoo website.
The connection to Google would be:
Your PC – IP1+port 60200 ——– Google IP2 +port 80 (standard port)
The combination IP1+60200 = the socket on the client computer and IP2 + port 80 = destination
socket on the Google server.
The connection to Yahoo would be:
your PC – IP1+port 60401 ——–Yahoo IP3 +port 80 (standard port)
The combination IP1+60401 = the socket on the client computer and IP3 + port 80 = destination
socket on the Yahoo server.
Notes: IP1 is the IP address of your PC. Client port numbers are dynamically assigned, and can be
reused once the session is closed.
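
These client and server socket addresses map directly onto the standard socket calls. Below is a minimal Python sketch of a TCP echo server and client on the loopback address; the port 60200 is arbitrary, echoing the example above, and the two functions would run in separate processes:

import socket

def server():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.bind(("127.0.0.1", 60200))     # server socket address: IP + port
        s.listen(1)                      # one listener per address/port combination
        conn, addr = s.accept()          # addr is the client's socket address
        with conn:
            conn.sendall(conn.recv(1024))  # echo the request back

def client():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.connect(("127.0.0.1", 60200))  # the OS assigns an ephemeral client port
        s.sendall(b"hello")
        print(s.recv(1024))              # b'hello'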

*SEE THE PPT ADDED IN THE GOOGLE CLASSROOM RELATED TO TCP SOCKETS

Congestion control algorithms


Too many packets present in (a part of) the network causes packet delay and loss that degrades
performance. This situation is called congestion.

The network and transport layers share the responsibility for handling congestion. Since
congestion occurs within the network, it is the network layer that directly experiences it and
must ultimately determine what to do with the excess packets.

However, the most effective way to control congestion is to reduce the load that the transport
layer is placing on the network. This requires the network and transport layers to work together

When too much traffic is offered, congestion sets in and performance degrades sharply. The figure above depicts the onset of congestion. When the number of packets hosts send into the network is well within its carrying capacity, the number delivered is proportional to the number sent. If twice as many are sent, twice as many are delivered.

However, as the offered load approaches the carrying capacity, bursts of traffic occasionally
fill up the buffers inside routers and some packets are lost. These lost packets consume some
of the capacity, so the number of delivered packets falls below the ideal curve. The network is
now congested. Unless the network is well designed, it may experience a congestion collapse.

Difference between congestion control and flow control.

Congestion control has to do with making sure the network is able to carry the offered traffic.
It is a global issue, involving the behavior of all the hosts and routers.

Flow control, in contrast, relates to the traffic between a particular sender and a particular
receiver. Its job is to make sure that a fast sender cannot continually transmit data faster than
the receiver is able to absorb it.

Congestion Control techniques in Computer Networks

Congestion control refers to the techniques used to control or prevent congestion. Congestion
control techniques can be broadly classified into two categories:

Open Loop Congestion Control


Open loop congestion control policies are applied to prevent congestion before it happens.
The congestion control is handled either by the source or the destination.
Policies adopted by open loop congestion control –
1. Retransmission Policy:
This policy governs how retransmission of packets is handled. If the sender feels that a sent packet is lost or corrupted, the packet needs to be retransmitted. This retransmission may increase the congestion in the network.
To prevent congestion, retransmission timers must be designed to prevent congestion while still optimizing efficiency.

2. Window Policy:
The type of window at the sender side may also affect congestion. Several packets in the Go-back-n window are resent, although some may have been received successfully at the receiver side. This duplication may increase congestion in the network and make it worse.
Therefore, the Selective Repeat window should be adopted, as it resends only the specific packet that may have been lost.

3. Discarding Policy:
A good discarding policy adopted by the routers is one in which the routers prevent congestion and at the same time partially discard corrupted or less sensitive packets while maintaining the quality of the message.
In case of audio file transmission, routers can discard less sensitive packets to prevent congestion and also maintain the quality of the audio file.

4. Acknowledgment Policy:
Since acknowledgements are also part of the load on the network, the acknowledgment policy imposed by the receiver may also affect congestion. Several approaches can be used to prevent congestion related to acknowledgments.
The receiver should send an acknowledgement for N packets rather than sending an acknowledgement for a single packet. The receiver should send an acknowledgment only if it has to send a packet or a timer expires.

5. Admission Policy:
In the admission policy, a mechanism is used to prevent congestion. Switches in a flow should first check the resource requirement of a network flow before transmitting it further.
If there is a chance of congestion or there is congestion in the network, the router should deny establishing a virtual network connection to prevent further congestion.

Closed Loop Congestion Control


Closed loop congestion control technique is used to treat or alleviate congestion after it
happens. Several techniques are used by different protocols; some of them are:
1. Backpressure:
Backpressure is a technique in which a congested node stops receiving packets from the upstream node. This may cause the upstream node or nodes to become congested and reject data from their own upstream nodes. Backpressure is a node-to-node congestion control technique that propagates in the opposite direction of data flow. The backpressure technique can be applied only to virtual circuits, where each node has information about its upstream node.

In the above diagram, the 3rd node is congested and stops receiving packets; as a result, the 2nd node may get congested due to the slowing down of the output data flow. Similarly, the 1st node may get congested and inform the source to slow down.
2. Choke Packet Technique:
Choke packet technique is applicable to both virtual networks as well as datagram
subnets. A choke packet is a packet sent by a node to the source to inform it of
congestion. Each router monitors its resources and the utilization at each of its output
lines. Whenever the resource utilization exceeds the threshold value which is set by the
administrator, the router directly sends a choke packet to the source giving it a feedback
to reduce the traffic. The intermediate nodes through which the packets have traveled
are not warned about congestion.

3. Implicit Signaling:
In implicit signaling, there is no communication between the congested nodes and the source. The source guesses that there is congestion in the network. For example, when a sender sends several packets and there is no acknowledgment for a while, the sender assumes that there is congestion.

4. Explicit Signaling:
In explicit signaling, if a node experiences congestion it can explicitly send a packet to the source or destination to inform it about the congestion. The difference between the choke packet and explicit signaling is that the signal is included in the packets that carry data, rather than in a separate packet as in the choke packet technique.
Explicit signaling can occur in either the forward or the backward direction.
 Forward Signaling: In forward signaling, the signal is sent in the direction of the congestion. The destination is warned about congestion, and the receiver in this case adopts policies to prevent further congestion.
 Backward Signaling: In backward signaling, the signal is sent in the opposite direction of the congestion. The source is warned about congestion and needs to slow down.

We will start our study of congestion control by looking at the approaches that can be used at different time scales. Then we will look at approaches to preventing congestion from occurring in the first place, followed by approaches for dealing with congestion once it has set in:

a. General Principles of Congestion Control


b. Congestion Prevention Policies
c. Congestion Control in Virtual-Circuit Subnets
d. Congestion Control in Datagram Subnets
i. The Warning Bit
ii. Choke Packets
iii. Hop-by-Hop Choke Packets
e. Load Shedding
i. Random Early Detection
f. Jitter Control

a. General Principles of Congestion Control


Many problems in complex systems, such as computer networks, can be viewed from a control theory point of view. This approach leads to dividing all solutions into two groups: open loop and closed loop.

Open loop solutions attempt to solve the problem by good design.


Tools for doing open-loop control include deciding when to accept new traffic, deciding when
to discard packets and which ones, and making scheduling decisions at various points in the
network.

Closed loop solutions are based on the concept of a feedback loop.


This approach has three parts when applied to congestion control:
1. Monitor the system to detect when and where congestion occurs.
2. Pass this information to places where action can be taken.
3. Adjust system operation to correct the problem.
A variety of metrics can be used to monitor the subnet for congestion. Chief among these are
the percentage of all packets discarded for lack of buffer space, the average queue lengths, the
number of packets that time out and are retransmitted, the average packet delay, and the
standard deviation of packet delay. In all cases, rising numbers indicate growing congestion.
The second step in the feedback loop is to transfer the information about the congestion from
the point where it is detected to the point where something can be done about it.

In all feedback schemes, the hope is that knowledge of congestion will cause the hosts to take
appropriate action to reduce the congestion.
The presence of congestion means that the load is (temporarily) greater than the resources (in
part of the system) can handle. Two solutions come to mind: increase the resources or decrease
the load.

b. Congestion Prevention Policies

The methods to control congestion by looking at open loop systems. These systems are
designed to minimize congestion in the first place, rather than letting it happen and reacting
after the fact. They try to achieve their goal by using appropriate policies at various levels. In
Fig. 5-26 we see different data link, network, and transport policies that can affect congestion.

The data link layer Policies.


The retransmission policy is concerned with how fast a sender times out and what it
transmits upon timeout. A jumpy sender that times out quickly and retransmits all
outstanding packets using go back n will put a heavier load on the system than will a
leisurely sender that uses selective repeat.
Closely related to this is the buffering policy. If receivers routinely discard all out-of-order packets, these packets will have to be transmitted again later, creating extra load. With respect to congestion control, selective repeat is clearly better than go back n.

Acknowledgement policy also affects congestion. If each packet is acknowledged


immediately, the acknowledgement packets generate extra traffic. However, if
acknowledgements are saved up to piggyback onto reverse traffic, extra timeouts and
retransmissions may result. A tight flow control scheme (e.g., a small window) reduces
the data rate and thus helps fight congestion.
The network layer Policies.
The choice between using virtual circuits and using datagrams affects congestion
since many congestion control algorithms work only with virtual-circuit subnets.
Packet queuing and service policy relates to whether routers have one queue per
input line, one queue per output line, or both. It also relates to the order in which packets
are processed (e.g., round robin or priority based).

Discard policy is the rule telling which packet is dropped when there is no space.
A good routing algorithm can help avoid congestion by spreading the traffic over all
the lines, whereas a bad one can send too much traffic over already congested lines.
Packet lifetime management deals with how long a packet may live before being
discarded. If it is too long, lost packets may clog up the works for a long time, but if it
is too short, packets may sometimes time out before reaching their destination, thus
inducing retransmissions.

Transport layer policies:
The same issues occur as in the data link layer, but in addition, determining the
timeout interval is harder because the transit time across the network is less predictable
than the transit time over a wire between two routers. If the timeout interval is too short,
extra packets will be sent unnecessarily. If it is too long, congestion will be reduced but
the response time will suffer whenever a packet is lost.

c. Congestion Control in Virtual-Circuit Subnets

 One technique that is widely used to keep congestion that has already started from
getting worse is admission control.
 Once congestion has been signaled, no more virtual circuits are set up until the
problem has gone away.
 An alternative approach is to allow new virtual circuits but carefully route all new
virtual circuits around problem areas. For example, consider the subnet of Fig. 5-
27(a), in which two routers are congested, as indicated.

d. Congestion Control in Datagram Subnets


Each router can easily monitor the utilization of its output lines and other resources. For
example, it can associate with each line a real variable, u, whose value, between 0.0 and
1.0, reflects the recent utilization of that line. To maintain a good estimate of u, a sample
of the instantaneous line utilization, f (either 0 or 1), can be made periodically and u updated
according to

u_new = a · u_old + (1 − a) · f

where the constant a determines how fast the router forgets recent history.
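To make this concrete, here is a minimal sketch in Python of the utilization estimator just described. The constant a and the warning threshold are illustrative assumptions, not values prescribed by the algorithm.

```python
# Minimal sketch of the exponentially weighted utilization estimate u.
# The forgetting factor A and THRESHOLD are illustrative assumptions.

A = 0.9            # forgetting factor: closer to 1.0 means longer memory
THRESHOLD = 0.8    # utilization above this puts the line in "warning" state

class OutputLine:
    def __init__(self):
        self.u = 0.0          # estimated utilization, between 0.0 and 1.0
        self.warning = False

    def sample(self, f):
        """Update u from an instantaneous utilization sample f (0 or 1)."""
        self.u = A * self.u + (1 - A) * f
        self.warning = self.u > THRESHOLD

# Example: a line busy on 9 of 10 sampling instants; u climbs toward the
# long-run busy fraction (0.9) as more samples accumulate.
line = OutputLine()
for f in [1, 1, 1, 0, 1, 1, 1, 1, 1, 1]:
    line.sample(f)
print(round(line.u, 3), line.warning)
```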

Whenever u moves above the threshold, the output line enters a "warning" state. Each newly
arriving packet is checked to see if its output line is in warning state. If it is, some action is
taken. The action taken can be one of several alternatives, which we will now discuss.

i. The Warning Bit


ii. Choke Packets
iii. Hop-by-Hop Choke Packets

The Warning Bit

The old DECNET architecture signaled the warning state by setting a special bit in the packet's
header.
When the packet arrived at its destination, the transport entity copied the bit into the next
acknowledgement sent back to the source. The source then cut back on traffic.
As long as the router was in the warning state, it continued to set the warning bit, which meant
that the source continued to get acknowledgements with it set.
The source monitored the fraction of acknowledgements with the bit set and adjusted its
transmission rate accordingly. As long as the warning bits continued to flow in, the source
continued to decrease its transmission rate. When they slowed to a trickle, it increased its
transmission rate.
Note that since every router along the path could set the warning bit, traffic increased only
when no router was in trouble.
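The sketch below illustrates the source-side behaviour just described, assuming the source keeps a window of recent acknowledgements. The function name, constants, and multiplicative-decrease/additive-increase policy are illustrative assumptions, not part of the DECNET specification.

```python
# Hypothetical source behaviour: watch the fraction of recent ACKs carrying
# the warning bit and adjust the sending rate accordingly.

def adjust_rate(rate, acks, cut=0.875, step=1.0, threshold=0.5):
    """acks: list of booleans, True if that ACK had the warning bit set."""
    if not acks:
        return rate
    fraction = sum(acks) / len(acks)
    if fraction > threshold:      # warning bits still flowing in: slow down
        return rate * cut
    return rate + step            # warnings slowed to a trickle: speed up

rate = 100.0                                          # packets/sec (illustrative)
rate = adjust_rate(rate, [True, True, False, True])   # congested -> 87.5
rate = adjust_rate(rate, [False, False, False, True]) # quiet     -> 88.5
```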

Choke Packets

In this approach, the router sends a choke packet back to the source host, giving it the
destination found in the packet.
The original packet is tagged (a header bit is turned on) so that it will not generate any more
choke packets farther along the path and is then forwarded in the usual way.
When the source host gets the choke packet, it is required to reduce the traffic sent to the
specified destination by X percent. Since other packets aimed at the same destination are
probably already under way and will generate yet more choke packets, the host should ignore
choke packets referring to that destination for a fixed time interval. After that period has
expired, the host listens for more choke packets for another interval. If one arrives, the line is
still congested, so the host reduces the flow still more and begins ignoring choke packets
again. If no choke packets arrive during the listening period, the host may increase the flow
again.

The feedback implicit in this protocol can help prevent congestion yet not throttle any flow
unless trouble occurs.
Hosts can reduce traffic by adjusting their policy parameters.
Increases are done in smaller increments to prevent congestion from reoccurring quickly.
Routers can maintain several thresholds. Depending on which threshold has been crossed, the
choke packet can contain a mild warning, a stern warning, or an ultimatum.
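As a sketch of the host-side rules above, the snippet below reduces the flow to a destination by X percent, ignores further choke packets for a fixed interval, and cautiously increases the rate after a quiet listening period. All constants and function names are illustrative assumptions.

```python
# Illustrative host-side choke-packet handling.
import time

X = 0.5                  # cut traffic by 50% on each effective choke packet
IGNORE_SECS = 2.0        # how long to ignore further choke packets
rate = {}                # destination -> current allowed rate (packets/sec)
deaf_until = {}          # destination -> time until which chokes are ignored

def on_choke_packet(dest, now=None):
    now = now if now is not None else time.monotonic()
    if now < deaf_until.get(dest, 0.0):
        return                                   # still in the ignore interval
    rate[dest] = rate.get(dest, 100.0) * (1 - X) # reduce flow to this destination
    deaf_until[dest] = now + IGNORE_SECS

def on_quiet_listening_period(dest, step=5.0):
    # No choke packets arrived while listening: cautiously increase the flow.
    rate[dest] = rate.get(dest, 100.0) + step
```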

HOP-BY-HOP BACK PRESSURE

At high speeds or over long distances, sending a choke packet to the source hosts does not
work well because the reaction is so slow.

Consider, for example, a host in San Francisco (router A in Fig. 5-28) that is sending traffic to
a host in New York (router D in Fig. 5-28) at 155 Mbps. If the New York host begins to run
out of buffers, it will take about 30 msec for a choke packet to get back to San Francisco to tell
it to slow down. The choke packet propagation is shown as the second, third, and fourth steps
in Fig. 5-28(a). In those 30 msec, another 4.6 megabits will have been sent. Even if the host in
San Francisco completely shuts down immediately, the 4.6 megabits in the pipe will continue
to pour in and have to be dealt with. Only in the seventh diagram in Fig. 5-28(a) will the New
York router notice a slower flow.

An alternative approach is to have the choke packet take effect at every hop it passes through,
as shown in the sequence of Fig. 5-28(b). Here, as soon as the choke packet reaches F, F is
required to reduce the flow to D. Doing so will require F to devote more buffers to the flow,
since the source is still sending away at full blast, but it gives D immediate relief, like a
headache remedy in a television commercial. In the next step, the choke packet reaches E,
which tells E to reduce the flow to F. This action puts a greater demand on E's buffers but gives
F immediate relief. Finally, the choke packet reaches A and the flow genuinely slows down.

The net effect of this hop-by-hop scheme is to provide quick relief at the point of congestion
at the price of using up more buffers upstream. In this way, congestion can be nipped in the
bud without losing any packets.
Figure 5-28. (a) A choke packet that affects only the source. (b) A choke packet that affects
each hop it passes through.


e. LOAD SHEDDING

When none of the above methods make the congestion disappear, routers can bring out
the heavy artillery: load shedding.

Load shedding is a fancy way of saying that when routers are being inundated by
packets that they cannot handle, they just throw them away.

A router drowning in packets can just pick packets at random to drop, but usually it can
do better than that.
Which packet to discard may depend on the applications running.
To implement an intelligent discard policy, applications must mark their packets in
priority classes to indicate how important they are. If they do this, then when packets
have to be discarded, routers can first drop packets from the lowest class, then the next
lowest class, and so on.
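A minimal sketch of such an intelligent discard policy is shown below, using a heap so the lowest-priority buffered packet is always the first candidate to shed. The priority encoding (larger = more important) and class structure are assumptions for illustration.

```python
# Priority-based load shedding: when the buffer is full, drop from the
# lowest priority class first.
import heapq, itertools

class Router:
    def __init__(self, capacity):
        self.capacity = capacity
        self.buffer = []                   # min-heap: lowest priority on top
        self._seq = itertools.count()      # tie-breaker for equal priorities

    def enqueue(self, priority, packet):
        entry = (priority, next(self._seq), packet)
        if len(self.buffer) < self.capacity:
            heapq.heappush(self.buffer, entry)
        elif self.buffer and self.buffer[0][0] < priority:
            heapq.heapreplace(self.buffer, entry)  # shed a lowest-class packet
        # else: the arriving packet is itself lowest class, so it is dropped
```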
RANDOM EARLY DETECTION

It is well known that dealing with congestion after it is first detected is more effective
than letting it gum up the works and then trying to deal with it. This observation leads
to the idea of discarding packets before all the buffer space is really exhausted. A
popular algorithm for doing this is called RED (Random Early Detection).
In some transport protocols (including TCP), the response to lost packets is for the
source to slow down. The reasoning behind this logic is that TCP was designed for
wired networks and wired networks are very reliable, so lost packets are mostly due to
buffer overruns rather than transmission errors. This fact can be exploited to help
reduce congestion.
By having routers drop packets before the situation has become hopeless (hence the
''early'' in the name), the idea is that there is time for action to be taken before it is too
late. To determine when to start discarding, routers maintain a running average of their
queue lengths. When the average queue length on some line exceeds a threshold, the
line is said to be congested and action is taken.
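The sketch below shows the core RED idea: a running average of the queue length, with probabilistic early discard once it crosses a threshold. The constants and the linear drop-probability ramp are illustrative assumptions; real RED implementations differ in detail.

```python
# Hedged sketch of Random Early Detection.
import random

W = 0.002                  # weight for the running average of queue length
MIN_TH, MAX_TH = 5, 15     # thresholds on the average queue length (packets)
MAX_P = 0.1                # drop probability as avg approaches MAX_TH

avg = 0.0

def should_drop(current_queue_len):
    global avg
    avg = (1 - W) * avg + W * current_queue_len
    if avg < MIN_TH:
        return False                        # no sign of congestion
    if avg >= MAX_TH:
        return True                         # congested: drop for sure
    p = MAX_P * (avg - MIN_TH) / (MAX_TH - MIN_TH)
    return random.random() < p              # drop with increasing probability
```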

f. JITTER CONTROL

The variation (i.e., standard deviation) in the packet arrival times is called jitter.
High jitter (for example, some packets taking 20 msec and others 30 msec to arrive)
will give an uneven quality to the sound or movie. Jitter is illustrated in Fig.
5-29. In contrast, an agreement that 99 percent of the packets be delivered with a delay
in the range of 24.5 msec to 25.5 msec might be acceptable.

The jitter can be bounded by computing the expected transit time for each hop along
the path. When a packet arrives at a router, the router checks to see how much the packet
is behind or ahead of its schedule. This information is stored in the packet and updated
at each hop. If the packet is ahead of schedule, it is held just long enough to get it back
on schedule. If it is behind schedule, the router tries to get it out the door quickly.
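The per-hop rule can be sketched as follows, assuming each packet carries a field recording how far it is ahead of (or behind) schedule; the class and function names are illustrative.

```python
# Per-hop jitter bounding: hold a packet that is ahead of schedule,
# expedite one that is behind.
import time

class Packet:
    def __init__(self):
        self.slack = 0.0     # seconds ahead (+) or behind (-) of schedule

def transmit(packet):
    pass                     # placeholder: hand the packet to the output line

def forward(packet, expected_transit, actual_transit):
    packet.slack += expected_transit - actual_transit  # update schedule info
    if packet.slack > 0:             # ahead of schedule
        time.sleep(packet.slack)     # hold just long enough to get back on schedule
        packet.slack = 0.0
    transmit(packet)                 # behind or on schedule: out the door quickly
```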

Quality of Service

Quality of service (QoS) refers to any technology that manages data traffic to reduce packet
loss, latency and jitter on the network. QoS controls and manages network resources by setting
priorities for specific types of data on the network.

 The techniques we looked at in the previous sections are designed to reduce
congestion and improve network performance.
 However, with the growth of multimedia networking, often these ad hoc measures are
not enough.
 Serious attempts at guaranteeing quality of service through network and protocol
design are needed.
In the following sections we will continue our study of network performance.

A. Requirements
B. Techniques for Achieving Good Quality of Service
a. Overprovisioning
b. Buffering
c. Traffic Shaping
d. The Leaky Bucket Algorithm
e. The Token Bucket Algorithm
f. Resource Reservation
g. Admission Control
h. Proportional Routing
i. Packet Scheduling

A. Requirements
A stream of packets from a source to a destination is called a flow.

In a connection-oriented network, all the packets belonging to a flow follow the same
route;

In a connectionless network, they may follow different routes.

The needs of each flow can be characterized by four primary parameters: reliability,
delay, jitter, and bandwidth.
Together these determine the QoS (Quality of Service) the flow requires.

Several common applications and the stringency of their requirements are listed in Fig. 5-27.

The first four applications have stringent requirements on reliability. No bits may be delivered
incorrectly. This goal is usually achieved by check summing each packet and verifying the
checksum at the destination. If a packet is damaged in transit, it is not acknowledged and will
be retransmitted eventually. This strategy gives high reliability.

The four final (audio/video) applications can tolerate errors, so no checksums are computed or
verified.

File transfer applications, including e-mail and video, are not delay sensitive. If all packets are
delayed uniformly by a few seconds, no harm is done.

Interactive applications, such as Web surfing and remote login, are more delay sensitive.

Real-time applications, such as telephony and videoconferencing, have strict delay
requirements.

B. Techniques for Achieving Good Quality of Service

Now that we know something about QoS requirements, how do we achieve them? We will now
examine some of the techniques system designers use to achieve QoS.

a. Overprovisioning
b. Buffering
c. Traffic Shaping
d. The Leaky Bucket Algorithm
e. The Token Bucket Algorithm
f. Resource Reservation
g. Admission Control
h. Proportional Routing

Overprovisioning
Overprovisioning means providing so much router capacity, buffer space, and bandwidth that the packets just
fly through easily. The trouble with this solution is that it is expensive.

The telephone system is overprovisioned. It is rare to pick up a telephone and not get a dial tone instantly.

Buffering
Flows can be buffered on the receiving side before being delivered. Buffering them does not affect the reliability
or bandwidth, and increases the delay, but it smooths out the jitter. For audio and video on demand, jitter is the
main problem, so this technique helps a lot.

Example:

The figure above shows a stream of packets being delivered with substantial jitter. Packet 1 is sent from
the server at t = 0 sec and arrives at the client at t = 1 sec. Packet 2 undergoes more delay and takes 2 sec to arrive.
As the packets arrive, they are buffered on the client machine.

At t = 10 sec, playback begins. At this time, packets 1 through 6 have been buffered so that they can be removed
from the buffer at uniform intervals for smooth play. Unfortunately, packet 8 has been delayed so much that it is
not available when its play slot comes up, so playback must stop until it arrives, creating an annoying gap in the
music or movie. This problem can be alleviated by delaying the starting time even more, although doing so also
requires a larger buffer. Commercial Web sites that contain streaming audio or video all use players that buffer for
about 10 seconds before starting to play.
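The sketch below simulates this client-side playback buffer with illustrative arrival times loosely following the example: playback starts at t = 10 s, packets are played at uniform intervals, and playback stalls when the next packet has not yet arrived.

```python
# Playback-buffer simulation; arrival times (seconds) are illustrative.
arrivals = {1: 1, 2: 3, 3: 4, 4: 6, 5: 7, 6: 9, 7: 12, 8: 20}  # pkt -> arrival

start, interval = 10, 1      # begin playback at t = 10 s, one packet per second
t = start
for pkt in range(1, 9):
    if arrivals[pkt] > t:    # packet not buffered yet: playback must stall
        print(f"gap: waiting for packet {pkt} until t={arrivals[pkt]}")
        t = arrivals[pkt]
    print(f"t={t}: play packet {pkt}")
    t += interval            # packet 8 arrives late, creating the annoying gap
```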

Traffic Shaping
1. Another method of congestion control is to “shape” the traffic before it enters the
network.
2. Traffic shaping controls the rate at which packets are sent (not just how many). Used in ATM and
Integrated Services networks.
3. At connection set - up time, the sender and carrier negotiate a traffic pattern (shape).

Two traffic shaping algorithms are: Leaky Bucket and Token Bucket.

Leaky Bucket Algorithm


Consider a Bucket with a small hole at the bottom, whatever may be the rate of water pouring into the bucket, the
rate at which water comes out from that small hole is constant. This scenario is depicted in figure 7.5.3(a).

Once the bucket is full, any additional water entering it spills over the sides and is lost (i.e. it
doesn’t appear in the output stream through the hole underneath).

The same idea of leaky bucket can be applied to packets, as shown in Fig. 7.5.3(b). Conceptually each network
interface contains a leaky bucket. And the following steps are performed:

• When the host has to send a packet, the packet is thrown into the bucket.
• The bucket leaks at a constant rate, meaning the network interface transmits packets at a constant rate.
• Bursty traffic is converted to a uniform traffic by the leaky bucket.
• In practice the bucket is a finite queue that outputs at a finite rate.
This arrangement can be simulated in the operating system or can be built into the hardware. Implementation of
this algorithm is easy and consists of a finite queue. Whenever a packet arrives, if there is room in the queue it is
queued up and if there is no room then the packet is discarded.
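A minimal sketch of this finite-queue implementation is shown below: arrivals that find the queue full are discarded, and one packet leaks out per clock tick at a constant rate.

```python
# Leaky bucket as a finite queue with a constant drain rate.
from collections import deque

class LeakyBucket:
    def __init__(self, capacity):
        self.queue = deque()
        self.capacity = capacity

    def arrive(self, packet):
        if len(self.queue) < self.capacity:
            self.queue.append(packet)    # room in the bucket: queue it up
            return True
        return False                     # bucket full: the packet spills and is lost

    def tick(self):
        """Called once per clock tick: leak (transmit) one packet."""
        if self.queue:
            return self.queue.popleft()
        return None
```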

Token Bucket Algorithm

The leaky bucket algorithm described above enforces a rigid pattern at the output stream, irrespective of the
pattern of the input. For many applications it is better to allow the output to speed up somewhat when a larger burst
arrives than to lose the data. The Token Bucket algorithm provides such a solution. In this algorithm the leaky bucket
holds tokens, generated at regular intervals. Main steps of this algorithm can be described as follows:
 At regular intervals tokens are thrown into the bucket.
 The bucket has a maximum capacity.
 If there is a ready packet, a token is removed from the bucket, and the packet is sent.
 If there is no token in the bucket, the packet cannot be sent.

Figure 7.5.4 shows the two scenarios before and after the tokens present in the bucket have been consumed. In Fig.
7.5.4(a) the bucket holds two tokens, and three packets are waiting to be sent out of the interface, in Fig. 7.5.4(b)
two packets have been sent out by consuming two tokens, and 1 packet is still left.

The token bucket algorithm is less restrictive than the leaky bucket algorithm, in a sense that it allows bursty
traffic. However, the limit of burst is restricted by the number of tokens available in the bucket at a particular
instant of time.

The implementation of basic token bucket algorithm is simple; a variable is used just to count the tokens. This
counter is incremented every t seconds and is decremented whenever a packet is sent. Whenever this counter
reaches zero, no further packet is sent out as shown in Fig. 7.5.5.
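This counter-based scheme can be sketched as below: the token counter is incremented every tick (up to the bucket's capacity) and decremented per packet sent, and the capacity bound is what limits the size of a burst.

```python
# Counter-based token bucket.
class TokenBucket:
    def __init__(self, capacity, tokens_per_tick=1):
        self.capacity = capacity
        self.tokens = 0
        self.rate = tokens_per_tick

    def tick(self):
        """Regular interval: throw tokens into the bucket, up to its capacity."""
        self.tokens = min(self.capacity, self.tokens + self.rate)

    def try_send(self):
        """Send one packet if a token is available; otherwise it must wait."""
        if self.tokens > 0:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(capacity=4)
for _ in range(10):
    bucket.tick()               # idle ticks let tokens accumulate (burst credit)
sent = sum(bucket.try_send() for _ in range(6))   # a burst of 6 packets arrives
print(sent)                     # 4: the saved-up tokens allow a burst of only 4
```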

Leaky Bucket vs. Token Bucket


1. LB discards packets; TB does not. TB discards tokens.
2. With TB, a packet can only be transmitted if there are enough tokens to cover its length in bytes.
3. LB sends packets at an average rate. TB allows for large bursts to be sent faster by speeding up the
output.
4. TB allows saving up tokens (permissions) to send large bursts. LB does not allow saving.

Resource Reservation
Once we have a specific route for a flow, it becomes possible to reserve resources along that route to make sure the
needed capacity is available. Three different kinds of resources can potentially be reserved:
1. Bandwidth.
2. Buffer space.
3. CPU cycles.

The first one, bandwidth, is the most obvious. If a flow requires 1 Mbps and the outgoing line has a capacity of 2
Mbps, trying to direct three flows through that line is not going to work. Thus, reserving bandwidth means not
oversubscribing any output line.

A second resource that is often in short supply is buffer space. When a packet arrives, it is usually deposited on the
network interface card by the hardware itself. The router software then has to copy it to a buffer in RAM and
queue that buffer for transmission on the chosen outgoing line. If no buffer is available, the packet has to be
discarded since there is no place to put it. For a good quality of service, some buffers can be reserved for a specific
flow so that flow does not have to compete for buffers with other flows. There will always be a buffer available
when the flow needs one, up to some maximum.

Finally, CPU cycles are also a scarce resource. It takes router CPU time to process a packet, so a router can process
only a certain number of packets per second. Making sure that the CPU is not overloaded is needed to ensure
timely processing of each packet.

Admission Control
The incoming traffic from some flow is well shaped and can potentially follow a single route in which capacity can
be reserved in advance on the routers along the path. When such a flow is offered to a router, it has to decide,
based on its capacity and how many commitments it has already made for other flows, whether to admit or reject
the flow.
The decision to accept or reject a flow is not a simple matter of comparing the (bandwidth, buffers, cycles)
requested by the flow with the router's excess capacity in those three dimensions.

(Figure: an example flow specification.)
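A deliberately simplified admission check is sketched below: the router compares a requested (bandwidth, buffers, cycles) triple against its remaining capacity in each dimension and commits the resources if the flow is admitted. As the text notes, real admission decisions are subtler than this; all names and numbers here are assumptions.

```python
# Simplified admission control over three reservable resources.
def admit(request, free):
    """request and free are dicts with 'bandwidth', 'buffers', 'cycles'."""
    return all(request[r] <= free[r] for r in ("bandwidth", "buffers", "cycles"))

free = {"bandwidth": 2_000_000, "buffers": 64, "cycles": 10_000}
flow = {"bandwidth": 1_000_000, "buffers": 16, "cycles": 4_000}
if admit(flow, free):
    for r in free:
        free[r] -= flow[r]      # commit the reserved resources to this flow
```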

Proportional Routing
Most routing algorithms try to find the best path for each destination and send all traffic to that
destination over the best path. A different approach that has been proposed to provide a higher quality of
service is to split the traffic for each destination over multiple paths.
Since routers generally do not have a complete overview of network-wide traffic, the only feasible
way to split traffic over multiple routes is to use locally-available information.
A simple method is to divide the traffic equally or in proportion to the capacity of the outgoing links.
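Splitting traffic in proportion to outgoing link capacity, using only locally available information, can be sketched as below; the link names and capacities are made up for illustration.

```python
# Proportional routing: pick an outgoing line with probability
# proportional to its capacity.
import random

links = {"line_1": 10, "line_2": 30, "line_3": 60}   # capacities in Mbps

def pick_outgoing_line():
    names, weights = zip(*links.items())
    return random.choices(names, weights=weights, k=1)[0]

# Over many packets, roughly 10% / 30% / 60% of the traffic uses each line.
```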

Packet Scheduling
If a router is handling multiple flows, there is a danger that one flow will hog too much of its
capacity and starve all the other flows. Processing packets in the order of their arrival means that an
aggressive sender can capture most of the capacity of the routers its packets traverse, reducing the quality of
service for others.

Fair queueing algorithm

The essence of the algorithm is that routers have separate queues for each output line, one for each
flow. When a line becomes idle, the router scans the queues round robin, taking the first packet on the next
queue. In this way, with n hosts competing for a given output line, each host gets to send one out of every n
packets. Sending more packets will not improve this fraction.
One problem with this algorithm is that it gives all hosts the same priority. In many situations, it is
desirable to give video servers more bandwidth than regular file servers so that they can be given two or
more bytes per tick. This modified algorithm is called weighted fair queueing and is widely used.
Sometimes the weight is equal to the number of flows coming out of a machine, so each process gets equal
bandwidth.
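A minimal round-robin sketch of (weighted) fair queueing follows: one queue per flow, and each scan sends up to "weight" packets from each flow, so a flow with weight 2 gets twice the share. This byte-blind version is a simplification; real WFQ also accounts for packet lengths.

```python
# Weighted fair queueing, simplified to a weighted round-robin scan.
from collections import deque

class FairQueueLine:
    def __init__(self):
        self.flows = {}                    # flow id -> (weight, queue)

    def enqueue(self, flow, packet, weight=1):
        if flow not in self.flows:
            self.flows[flow] = (weight, deque())
        self.flows[flow][1].append(packet)

    def scan(self):
        """One round-robin pass: emit up to 'weight' packets per flow."""
        out = []
        for weight, queue in self.flows.values():
            for _ in range(weight):
                if queue:
                    out.append(queue.popleft())
        return out

line = FairQueueLine()
for p in range(6):
    line.enqueue("video", f"v{p}", weight=2)   # video server gets weight 2
    line.enqueue("file", f"f{p}", weight=1)
print(line.scan())   # ['v0', 'v1', 'f0']: video gets two packets per round
```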

Integrated services:

QoS relates to the quality of service. It refers to a number of networking technologies that enable the network to deliver the
results it needs. QoS helps improve the performance of the network in terms of availability, error rate, latency
and throughput, and it supports the prioritization of network traffic. QoS can also be applied to a
specific router or server. Network monitoring systems are therefore usually provided as part of QoS to ensure that the
network is functioning at the desired level. Overall, QoS provides two types of services: integrated services and differentiated
services.
Integrated services refer to an architecture that ensures quality of service (QoS) in a network. These services
enable the recipient to watch and hear video and sound without interruption. Every router in the network implements
integrated services, and every application that requires a guarantee must make an individual
reservation.

It is possible to implement the integrated services architecture through a signaling protocol, an admission
control routine, a classifier, and a packet scheduler. These services require an explicit signaling mechanism to
convey information to the routers so that they can provide the requested resources.

Differentiated services:


Differentiated services refer to a multiple-service model that can meet many requirements. In other words, it
supports several business-critical applications. In addition, these services help to minimize the load on network
devices and also support the scaling of the network. Some important differentiated services functions are as follows.

Traffic Conditioning - Ensures that the traffic entering the DiffServ domain conforms to the agreed profile.

Packet Classification - Categorizes the packet within a specific group using the traffic descriptor.

Packet Marking - Marks a packet based on a specific traffic descriptor.

Congestion Management - Manages queues and schedules traffic.

Traffic congestion avoidance - Monitors traffic loads to minimize congestion; this includes dropping packets.

Difference between integrated services and differentiated services


Definition
Integrated services refer to an architecture that specifies the elements to ensure quality of service (QoS) on a
network, while differentiated services are a computer network architecture that provides a simple and scalable
mechanism for classifying and managing network traffic and providing QoS in modern IP networks.

Functionality

Integrated services involve reserving resources in advance before the required quality of service is achieved. On
the other hand, differentiated services mark the packets with a priority and send them into the network without prior
reservation. Hence, their functionality is the main difference between integrated services and differentiated services.

Scalability

In addition, integrated services are not scalable, while differentiated services are scalable.

Setup
Another difference between integrated services and differentiated services is that integrated services involve a
per-flow setup, while differentiated services involve a long-term setup.

Scope of services
In addition, integrated services include an end-to-end service scope, whereas differentiated services include a
domain service scope.
