UNIT 4
[Unit 4] 6 Hrs
Network Layer and Congestion Control: IPv4/IPv6, routers and routing algorithms (distance
vector, link state), TCP, UDP, and sockets.
Congestion Control and QoS: General principles, congestion prevention policies, load
shedding, jitter control; Quality of service: packet scheduling, traffic shaping, integrated services.
IPV4
One of the most important topics in any discussion of TCP/IP is IP addressing. An IP address is a
numeric identifier assigned to each machine on an IP network. It designates the specific location of
a device on the network.
An IP address is a software address, not a hardware address—the latter is hard-coded on a
network interface card (NIC) and used for finding hosts on a local network. IP addressing was
designed to allow hosts on one network to communicate with a host on a different network
regardless of the type of LANs the hosts are participating in.
Before we get into the more complicated aspects of IP addressing, you need to understand
some of the basics. First, I’m going to explain some of the fundamentals of IP addressing and its
terminology. Then you’ll learn about the hierarchical IP addressing scheme and private IP
addresses.
IP Terminology
Throughout this chapter you’re being introduced to several important terms that are vital
to understanding the Internet Protocol. Here are a few to get you started:
Bit A bit is one digit, either a 1 or a 0.
Byte A byte is 7 or 8 bits, depending on whether parity is used. For the rest of this chapter, always
assume a byte is 8 bits.
Octet An octet, made up of 8 bits, is just an ordinary 8-bit binary number. In this chapter, the terms
byte and octet are completely interchangeable.
Network address This is the designation used in routing to send packets to a remote network—for
example, 10.0.0.0, 172.16.0.0, and 192.168.10.0.
Broadcast address The address used by applications and hosts to send information to all nodes on
a network is called the broadcast address. Examples of layer 3 broadcasts include 255.255.255.255,
which is any network, all nodes; 172.16.255.255, which is all subnets and hosts on network
172.16.0.0; and 10.255.255.255, which
broadcasts to all subnets and hosts on network 10.0.0.0
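These two address types can be checked quickly with Python's standard-library ipaddress module; the sketch below uses the 172.16.0.0/16 network mentioned above.

```python
# Sketch: the network address (used in routing) and the broadcast
# address (all subnets and hosts) of the class B example above.
import ipaddress

net = ipaddress.ip_network("172.16.0.0/16")
print(net.network_address)    # 172.16.0.0
print(net.broadcast_address)  # 172.16.255.255
```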
IP Addressing
Classful Addressing
IPv4 addressing, at its inception, used the concept of classes. This architecture is
called classful addressing. Although this scheme is becoming obsolete, we briefly
discuss it here to show the rationale behind classless addressing.
Mask
In classful addressing, the length of the network id and host id (in bits) is predetermined.
We can also use a mask (also called the default mask), a 32-bit number made of contiguous 1s
followed by contiguous 0s. The masks for classes A, B, and C are shown in the following table.
The concept does not apply to classes D and E.
Default masks for classful addressing
Class A: 255.0.0.0 (/8)
Class B: 255.255.0.0 (/16)
Class C: 255.255.255.0 (/24)
Subnetting
If an organization was granted a large block in class A or B, it could divide the
addresses into several contiguous groups and assign each group to smaller networks (called
subnets)
Supernetting
In supernetting, an organization can combine several class C blocks to create a larger
range of addresses. In other words, several networks are combined to create a supernetwork
or a supernet. An organization can apply for a set of class C blocks instead of just one.
Address Depletion
The flaws in the classful addressing scheme, combined with the fast growth of the Internet,
led to the near depletion of the available addresses. Yet the number of devices on the Internet
is much less than the 2^32 address space.
Restrictions: To simplify the handling of addresses, the Internet authorities impose three
restrictions on classless address blocks:
1. The addresses in a block must be contiguous, one after another.
2. The number of addresses in a block must be a power of 2 (1, 2, 4, 8, ...).
3. The first address must be evenly divisible by the number of addresses.
Following figure shows a block of addresses, in both binary and dotted-decimal notation,
granted to a small business that needs 16 addresses.
We can see that the restrictions are applied to this block. The addresses are contiguous.
The number of addresses is a power of 2 (16 = 2^4), and the first address is divisible by 16. The
first address, when converted to a decimal number, is 3,440,387,360, which when divided by
16 results in 215,024,210.
In IPv4 addressing, a block of addresses can be defined as x.y.z.t /n in which x.y.z.t defines
one of the addresses and the /n defines the mask.
The first address in the block can be found by setting the rightmost 32 − n bits to 0s.
A block of addresses is granted to a small organization. We know that one of the addresses
is 205.16.37.39/28. What is the first address in the block?
Solution
The binary representation of the given address is
11001101 00010000 00100101 00100111
If we set the 32 − 28 rightmost bits to 0, we get
11001101 00010000 00100101 00100000
or
205.16.37.32
The last address in the block can be found by setting the rightmost 32 − n bits to 1s.
A block of addresses is granted to a small organization. We know that one of the addresses
is 205.16.37.39/28. What is the last address in the block?
Solution
The binary representation of the given address is
11001101 00010000 00100101 00100111
If we set 32 − 28 rightmost bits to 1, we get
11001101 00010000 00100101 00101111
or
205.16.37.47
The number of addresses in the block can be found by using the formula 2^(32−n).
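The first/last-address computations above can be sketched with the standard ipaddress module, using the 205.16.37.39/28 block from the examples:

```python
# Sketch: clear the rightmost 32-n bits for the first address, set them
# for the last address; the block holds 2**(32-n) addresses.
import ipaddress

def block_info(addr_with_prefix):
    # strict=False lets us pass any address in the block, not just the first
    net = ipaddress.ip_network(addr_with_prefix, strict=False)
    return str(net.network_address), str(net.broadcast_address), net.num_addresses

first, last, count = block_info("205.16.37.39/28")
print(first, last, count)  # 205.16.37.32 205.16.37.47 16
```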
Sub-netting
Subnetting is the strategy used to partition a single physical network into more than one smaller
logical sub-network (subnet).
An IP address includes a network segment and a host segment. Subnets are designed by
borrowing bits from the IP address's host part and using these bits to define a number of smaller
sub-networks inside the original network.
Subnetting allows an organization to add sub-networks without the need to acquire a new
network number via the Internet service provider (ISP). Subnetting helps to reduce network traffic
and conceals network complexity.
Subnetting is essential when a single network number has to be allocated over numerous segments of
a local area network (LAN).
Subnets were initially designed for solving the shortage of IP addresses over the Internet. Each IP
address is associated with a subnet mask. All the class types, such as Class A, Class B and
Class C, include a subnet mask known as the default subnet mask.
The subnet mask is intended for determining the type and number of IP addresses available on a
given local network. The router (or firewall) that connects the subnet to other networks is called
the default gateway. The default subnet masks are as follows:
Class A: 255.0.0.0
Class B: 255.255.0.0
Class C: 255.255.255.0
The subnetting process allows the administrator to divide a single Class A, Class B, or Class C
network number into smaller portions. The subnets can be subnetted again into sub-subnets.
Dividing the network into a number of subnets provides benefits such as reduced broadcast
traffic and simpler administration.
*Detailed hand-written solved examples are available on Google Classroom.
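As a minimal sketch of the subnetting idea above, the standard ipaddress module can split a network by borrowing host bits; the 192.168.10.0/24 network and the choice of borrowing 2 bits are hypothetical:

```python
# Sketch: borrowing 2 host bits from a /24 network yields four /26 subnets.
import ipaddress

net = ipaddress.ip_network("192.168.10.0/24")
subnets = list(net.subnets(prefixlen_diff=2))   # /24 -> four /26 subnets
for s in subnets:
    print(s)
# 192.168.10.0/26, 192.168.10.64/26, 192.168.10.128/26, 192.168.10.192/26
```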
IPV6
IPv6 addresses
IPv6 addresses are written using hexadecimal, as opposed to dotted decimal in IPv4.
Because a hexadecimal digit represents 4 bits, a 128-bit IPv6 address consists of 32 hexadecimal
digits.
These digits are grouped in fours, giving 8 groups or blocks. The groups are written with a : (colon)
as a separator, e.g. 2001:0db8:0000:0000:0000:0000:0000:0001.
Note: Because of the length of IPv6 addresses, various shortening techniques are employed.
The main technique is to omit repetitive 0s (the example above shortens to 2001:db8::1).
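The shortening rules can be demonstrated with the ipaddress module; the address below is the standard documentation prefix, used here only as an illustration:

```python
# Sketch: leading zeros in each block are dropped and one run of
# all-zero blocks collapses to "::".
import ipaddress

full = "2001:0db8:0000:0000:0000:0000:0000:0001"
addr = ipaddress.ip_address(full)
print(addr.compressed)  # 2001:db8::1
print(addr.exploded)    # back to the full 8-group form
```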
In IPv4 an address is split into two components a network component and a node component.
This was done initially using Address classes and later using subnet masking.
In IPv6 we do the same. The first step is to split the address into two parts.
The address is split into two 64-bit segments: the top 64 bits are the network part and the lower
64 bits the node part.
The upper 64 bits are used for routing.
The lower 64 bits identify the address of the interface or node, and are derived from the actual
physical (MAC) address using IEEE's Extended Unique Identifier (EUI-64) format.
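A minimal sketch (not a full implementation) of the EUI-64 derivation just mentioned: insert ff:fe in the middle of the 48-bit MAC address and flip the universal/local bit of the first byte. The MAC address used is hypothetical.

```python
# Sketch: 48-bit MAC -> 64-bit EUI-64 interface identifier.
def mac_to_eui64(mac):
    octets = [int(b, 16) for b in mac.split(":")]
    octets[0] ^= 0x02                      # flip the universal/local bit
    eui = octets[:3] + [0xFF, 0xFE] + octets[3:]   # insert ff:fe in the middle
    groups = ["%02x%02x" % (eui[i], eui[i + 1]) for i in range(0, 8, 2)]
    return ":".join(groups)

print(mac_to_eui64("00:1a:2b:3c:4d:5e"))  # 021a:2bff:fe3c:4d5e
```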
If we look at the upper 64 bits in more detail, we can see that they are split into two blocks of 48
and 16 bits respectively. The lower 16 bits are used for subnets on an internal network and are
controlled by a network administrator.
The upper 48 bits are used for the global network address and are for routing over the Internet.
Global addresses are routable on the internet and start with 2001:
These addresses are known as global Unicast addresses and are the equivalent of the public addresses of
IPv4 networks.
The Internet authorities allocate address blocks to ISPs who in turn allocate them to their customers.
In contrast, the following address types are not routed on the Internet and are reserved for
internal networks:
Link Local
Unique Local
Link Local
These are meant to be used inside an internal network, and again they are not routed on the Internet.
It is equivalent to the IPv4 address 169.254.0.0/16 which is allocated on an IPv4 network when no DHCP
server is found.
They are restricted to a link and are not routed on the Internal network or the Internet.
Link Local addresses are self-assigned, i.e. they do not require a DHCP server.
A link local address is required on every IPv6 interface even if no routing is present.
Unique Local
They are routed on the Internal network but not routed on the Internet.
They are equivalent to the IPv4 private addresses 10.0.0.0/8, 172.16.0.0/12 and 192.168.0.0/16.
The address space is divided into two /8 spaces: fc00::/8 for globally assigned addressing, and fd00::/8 for
locally assigned addressing.
On IPv4 networks you can access a network resource, e.g. a web page, using the format
http://192.168.1.21/webpage
However, IPv6 addresses contain a colon as separator and so must be enclosed in square brackets:
http://[IPv6 address]/webpage
Command-line tools take the address directly, e.g. to ping the loopback address: ping ::1
Differences Between IPv4 and IPv6 Packet Formats
The major differences between the IPv4 and IPv6 packet formats are as follows:
1. The IPv6 packet format does not contain a header length field, as the IPv6 base header has a fixed
length of 40 bytes. The IPv4 header is variable in length, so a header length field is required.
2. The header checksum field is not present in IPv6. As a result error detection is not done on the header,
checksum is provided by upper layer protocols. It reduces the processing time of an IP packet.
3. In IPv6, a hop limit field is used, whereas in IPv4 the Time to Live (TTL) field is used.
4. In IPv6, the size of payload (excluding header) is specified whereas in IPv4 total length field is used
that specifies the total size of IP packet including header.
5. There is no fragmentation field in the base header in IPv6. It has been moved to the extension header.
6. The identification, flag, and offset field are eliminated from the base header in IPv6. They are included
in the fragmentation extension header.
7. The options field is moved under extension headers in IPv6.
8. The source and destination address sizes in IPv6 are 128 bits as against 32 bits in IPv4.
9. The service type field is eliminated in IPv6. The priority and flow label fields together take over the
function of the service type field.
ROUTING ALGORITHMS
The main function of NL (Network Layer) is routing packets from the source machine to the
destination machine.
There are two processes inside router:
a) One of them handles each packet as it arrives, looking up the outgoing line to use for it in
the routing table. This process is forwarding.
b) The other process is responsible for filling in and updating the routing tables.
That is where the routing algorithm comes into play. This process is routing.
The routing algorithm is that part of the network layer software responsible for deciding which
output line an incoming packet should be transmitted on.
Adaptive algorithm
Adaptive algorithms, in contrast to static (nonadaptive) ones, change their routing decisions to
reflect changes in the topology, and usually the traffic as well.
Basis for comparison     Static routing                     Dynamic routing
Configuration            Manual                             Automatic
Routing table building   Routing locations are hand-typed   Locations are dynamically filled in the table
Routes                   User defined                       Updated according to changes in topology
Routing algorithms       No complex routing algorithms      Uses complex routing algorithms
Implemented in           Small networks                     Large networks
Link failure             Link failure obstructs rerouting   Link failure doesn't affect rerouting
Security                 Provides high security             Less secure due to broadcasts and multicasts
Routing protocols        No routing protocols involved      Protocols such as RIP, EIGRP, etc. are involved
Additional resources     Not required                       Needed to store the information
One can make a general statement about optimal routes without regard to network topology
or traffic. This statement is known as the optimality principle: if router J is on the optimal
path from router I to router K, then the optimal path from J to K also falls along the same route.
As a direct consequence of the optimality principle, the set of optimal routes from all sources
to a given destination forms a tree rooted at the destination. Such a tree is called a sink tree,
where the distance metric is the number of hops. Note that a sink tree is not necessarily unique;
other trees with the same path lengths may exist.
The goal of all routing algorithms is to discover and use the sink trees for all routers.
Shortest Path Routing
The idea is to build a graph of the subnet, with each node of the graph representing a
router and each arc of the graph representing a communication line (often called a link).
To choose a route between a given pair of routers, the algorithm just finds the shortest path
between them on the graph.
1. Start with the local node (router) as the root of the tree. Assign a cost of 0 to
this node and make it the first permanent node.
2. Examine each neighbor of the node that was the last permanent node.
3. Assign a cumulative cost to each node and make it tentative.
4. Among the list of tentative nodes:
a. Find the node with the smallest cost and make it permanent.
b. If a node can be reached from more than one route, then select the route
with the shortest cumulative cost.
5. Repeat steps 2 to 4 until every node becomes permanent.
Fig. 5-7. The first six steps used in computing the shortest path from A to D. The
arrows indicate the working node.
To illustrate how the labelling algorithm works, look at the weighted, undirected
graph of Fig. 5-7(a), where the weights represent, for example, distance.
We want to find the shortest path from A to D. We start out by marking
node A as permanent, indicated by a filled-in circle.
Then we examine, in turn, each of the nodes adjacent to A (the working node),
relabelling each one with the distance to A.
Whenever a node is relabelled, we also label it with the node from which the
probe was made so that we can reconstruct the final path later.
Having examined each of the nodes adjacent to A, we examine all the tentatively
labelled nodes in the whole graph and make the one with the smallest label
permanent, as shown in Fig. 5-7(b).
This one becomes the new working node.
We now start at B and examine all nodes adjacent to it. If the sum of the label on B
and the distance from B to the node being considered is less than the label on that
node, we have a shorter path, so the node is relabeled.
After all the nodes adjacent to the working node have been inspected and the tentative labels
changed if possible, the entire graph is searched for the tentatively labeled node with the
smallest label, which is made permanent.
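The labelling algorithm described above can be sketched in Python with a priority queue. The weighted graph below follows the style of Fig. 5-7 (eight routers A-H with distances on the links); treat the exact weights as an assumption, not necessarily the book's figure.

```python
# Sketch of the labelling (Dijkstra) algorithm above.
import heapq

def dijkstra(graph, src):
    dist = {src: 0}
    prev = {}
    pq = [(0, src)]
    permanent = set()
    while pq:
        d, u = heapq.heappop(pq)
        if u in permanent:
            continue
        permanent.add(u)           # smallest tentative label -> permanent
        for v, w in graph[u]:
            if v not in dist or d + w < dist[v]:
                dist[v] = d + w    # relabel with a shorter cumulative cost
                prev[v] = u        # remember where the probe came from
                heapq.heappush(pq, (dist[v], v))
    return dist, prev

# Hypothetical weighted, undirected graph in the style of Fig. 5-7(a).
g = {
    "A": [("B", 2), ("G", 6)],
    "B": [("A", 2), ("E", 2), ("C", 7)],
    "C": [("B", 7), ("D", 3), ("F", 3)],
    "D": [("C", 3), ("H", 2)],
    "E": [("B", 2), ("F", 2), ("G", 1)],
    "F": [("E", 2), ("H", 2), ("C", 3)],
    "G": [("A", 6), ("E", 1), ("H", 4)],
    "H": [("D", 2), ("F", 2), ("G", 4)],
}
dist, prev = dijkstra(g, "A")
print(dist["D"])  # 10, via the path A-B-E-F-H-D
```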
Flooding
When a routing algorithm is implemented, each router must make decisions based
on local knowledge, not the complete picture of the network. A simple local technique is
flooding, in which every incoming packet is sent out on every outgoing line except the one
it arrived on.
Flooding obviously generates vast numbers of duplicate packets, in fact, an
infinite number unless some measures are taken to damp the process.
One such measure is to have a hop counter contained in the header of each packet,
which is decremented at each hop, with the packet being discarded when the
counter reaches zero.
Ideally, the hop counter should be initialized to the length of the path from source
to destination. If the sender does not know how long the path is, it can initialize
the counter to the worst case, namely, the full diameter of the subnet.
A variation of flooding that is slightly more practical is selective flooding. In this
algorithm the routers do not send every incoming packet out on every line, only on
those lines that are going approximately in the right direction.
Flooding is not practical in most applications.
Distance Vector Routing
Figure 5-9. (a) A network. (b) Input from A, I, H, K, and the new routing table for J.
Part (a) shows a subnet. The first four columns of part (b) show the delay
vectors received from the neighbors of router J.
A claims to have a 12-msec delay to B, a 25-msec delay to C, a 40-msec delay to
D, etc. Suppose that J has measured or estimated its delay to its neighbors A, I,
H, and K as 8, 10, 12, and 6 msec, respectively.
Each node constructs a one-dimensional array containing the "distances" (costs)
to all other nodes and distributes that vector to its immediate neighbors.
1. The starting assumption for distance-vector routing is that each node knows the
cost of the link to each of its directly connected neighbors.
2. A link that is down is assigned an infinite cost.
Example 1
Stored at Node   A  B  C  D  E  F  G
A 0 1 1 ∞ 1 1 ∞
B 1 0 1 ∞ ∞ ∞ ∞
C 1 1 0 1 ∞ ∞ ∞
D ∞ ∞ 1 0 ∞ ∞ 1
E 1 ∞ ∞ ∞ 0 ∞ ∞
F 1 ∞ ∞ ∞ ∞ 0 1
G ∞ ∞ ∞ 1 ∞ 1 0
We can represent each node's knowledge about the distances to all other nodes as a table
like the one given in Table 1.
Note that each node only knows the information in one row of the table.
1. Every node sends a message to its directly connected neighbors containing its
personal list of distances. (For example, A sends its information to its neighbors
B, C, E, and F.)
2. If any of the recipients of the information from A find that A is advertising a path
shorter than the one they currently know about, they update their list to give the
new path length and note that they should send packets for that destination through
A. (Node B learns from A that node E can be reached at a cost of 1; B also knows
it can reach A at a cost of 1, so it adds these to get the cost of reaching E by means
of A. B records that it can reach E at a cost of 2 by going through A.)
3. After every node has exchanged a few updates with its directly connected
neighbors, all nodes will know the least-cost path to all the other nodes.
4. In addition to updating their list of distances when they receive updates, the nodes
need to keep track of which node told them about the path that they used to
calculate the cost, so that they can create their forwarding table. (For example, B
knows that it was A who said "I can reach E in one hop" and so B puts an entry in
its table that says "To reach E, use the link to A.")
Stored at Node   A  B  C  D  E  F  G
A                0  1  1  2  1  1  2
B                1  0  1  2  2  2  3
C                1  1  0  1  2  2  2
D                2  2  1  0  3  2  1
E                1  2  2  3  0  2  3
F                1  2  2  2  2  0  1
G                2  3  2  1  3  1  0
In practice, each node's forwarding table consists of a set of triples of the form: (
Destination, Cost, Next Hop).
For example, Table 3 shows the complete routing table maintained at node B for
the network in Figure 1:
Destination  Cost  Next Hop
A            1     A
C            1     C
D            2     C
E            2     A
F            2     A
G            3     A
Example 2
In distance vector routing, the least- cost route between any two nodes is the
route withminimum distance.
In this protocol, as the name implies, each node maintains a vector (table) of minimum
distancesto every node.
This protocol has mainly three phases:
1. Initialization
2. Sharing
3. Updating
Initialization
Each node can know only the distance between itself and its immediate neighbors, those
directly connected to it. So for the moment, we assume that each node can send a message to
the immediate neighbors and find the distance between itself and these neighbors. Below
fig shows the initial tables for each node. The distance for any entry that is not a neighbor
is marked as infinite (unreachable).
Sharing
The whole idea of distance vector routing is the sharing of information between neighbors.
Although node A does not know about node E, node C does. So if node C shares its
routing table with A, node A can also know how to reach node E. On the other hand, node
C does not know how to reach node D, but node A does. If node A shares its routing table
with node C, node C also knows how to reach node D. In other words, nodes A and C, as
immediate neighbors, can improve their routing tables if they help each other.
NOTE: In distance vector routing, each node shares its routing table with its immediate
neighbors periodically and when there is a change.
Updating
When a node receives a two-column table from a neighbor, it needs to update its routing
table. Updating takes three steps:
1. The receiving node needs to add the cost between itself and the sending node
to each value in the second column (x + y).
2. If the receiving node uses information from any row, the sending node is
the next node in the route.
3. The receiving node needs to compare each row of its old table with the
corresponding row of the modified version of the received table.
a. If the next-node entry is different, the receiving node chooses the row
with the smaller cost. If there is a tie, the old one is kept.
b. If the next-node entry is the same, the receiving node chooses the new
row. For example, suppose node C has previously advertised a route to node X with
distance 3. Suppose that now there is no path between C and X; node C now advertises
this route with a distance of infinity. Node A must not ignore this value even though its old
entry is smaller. The old route does not exist anymore. The new route has a distance of
infinity.
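The three-step update rule above can be sketched as follows; the tables for A and C and the A-C link cost are hypothetical values chosen for illustration.

```python
# Sketch of the distance-vector update rule: add the link cost to the
# neighbour's advertised column, then keep the better (or refreshed) row.
INF = float("inf")

def dv_update(my_table, link_cost, neighbour, nbr_table):
    # my_table / nbr_table map destination -> (cost, next_hop)
    for dest, (adv_cost, _) in nbr_table.items():
        new_cost = link_cost + adv_cost             # step 1: x + y
        old_cost, old_next = my_table.get(dest, (INF, None))
        if old_next == neighbour:
            my_table[dest] = (new_cost, neighbour)  # step 3b: same next hop -> take new row
        elif new_cost < old_cost:
            my_table[dest] = (new_cost, neighbour)  # step 3a: smaller cost wins; tie keeps old
    return my_table

# Hypothetical example: A updates its table from C's advertisement
# received over an A-C link of cost 2.
a = {"C": (2, "C"), "D": (5, "D")}
c = {"D": (1, "D"), "E": (4, "E")}
print(dv_update(a, 2, "C", c))  # {'C': (2, 'C'), 'D': (3, 'C'), 'E': (6, 'C')}
```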
Count to Infinity
1. One of the important issues in distance vector routing is the count-to-infinity problem.
2. Counting to infinity is just another name for a routing loop.
3. In distance vector routing, routing loops usually occur when an interface goes down.
4. It can also occur when two routers send updates to each other at the same time.
One way to solve this problem is for routers to send information only to the
neighbors that are not exclusive links to the destination.
For example, in this case, C shouldn't send any information to B about A,
because B is the only way to A.
Example 2 (Tanenbaum)
Consider the five-node (linear) subnet of the following figure, where the delay metric is the
number of hops. Suppose A is down initially and all the other routers know this. In other
words, they have all recorded the delay to A as infinity.
Now let us consider the situation of Fig. (b), in which all the lines and routers are initially
up. Routers B, C, D, and E have distances to A of 1, 2, 3, and 4, respectively. Suddenly A
goes down, or alternatively, the line between A and B is cut, which is effectively the same
thing from B's point of view.
At the first packet exchange, B does not hear anything from A. Fortunately, C says "Do
not worry; I have a path to A of length 2." Little does B suspect that C's path runs
through B itself. For all B knows, C might have ten links all with separate paths to A of
length 2. As a result, B thinks it can reach A via C, with a path length of 3. D and E do
not update their entries for A on the first exchange.
On the second exchange, C notices that each of its neighbors claims to have a path to A of
length 3. It picks one of them at random and makes its new distance to A 4, as shown in
the third row of Fig (b). Subsequent exchanges produce the history shown in the rest of
Fig. (b).
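The exchange history of Fig. (b) can be reproduced with a few lines of Python: each synchronous exchange sets every router's distance to A to 1 plus the smaller of its neighbours' previous estimates.

```python
# Sketch of count-to-infinity on the linear subnet A-B-C-D-E after A goes down.
INF = float("inf")

def exchange(prev):
    # prev holds the distances to A stored at B, C, D, E; B's left
    # neighbour is A itself (down -> INF), E has no right neighbour.
    left = [INF] + prev[:-1]
    right = prev[1:] + [INF]
    return [1 + min(l, r) for l, r in zip(left, right)]

dist = [1, 2, 3, 4]            # B, C, D, E before A goes down
for _ in range(4):
    dist = exchange(dist)
    print(dist)
# [3, 2, 3, 4] -> [3, 4, 3, 4] -> [5, 4, 5, 4] -> [5, 6, 5, 6] ...
```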
RIP
The Routing Information Protocol (RIP) is an intradomain routing protocol used
inside an autonomous system. It is a very simple protocol based on distance vector
routing.
Timers in RIP
RIP Version 2
Link State Routing
The idea behind link state routing is simple and can be stated as five parts. Each router
must do the following:
1. Discover its neighbors and learn their network addresses.
2. Measure the delay or cost to each of its neighbors.
3. Construct a packet telling all it has just learned.
4. Send this packet to all other routers.
5. Compute the shortest path to every other router
Learning about the Neighbours
When a router is booted, its first task is to learn who its neighbours are.
It accomplishes this goal by sending a special HELLO packet on each point-to-point line.
The router on the other end is expected to send back a reply telling who it is.
(a) A subnet. (b) The link state packets for this subnet.
Once the information needed for the exchange has been collected, the next step is for each
router to build a packet containing all the data.
The packet starts with the identity of the sender, followed by a sequence number and age,
and a list of neighbours.
For each neighbour, the delay to that neighbour is given.
An example subnet is given in the above Fig. (a) with delays shown as labels on the lines.
The corresponding link state packets for all six routers are shown in the above Fig. (b).
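Step 3 of the algorithm can be sketched as a function that assembles a link state packet from the sender's identity, sequence number, age, and neighbour delays; the delays below are hypothetical.

```python
# Sketch: a link state packet carries the sender's identity, a sequence
# number, an age, and the measured delay to each neighbour.
def build_lsp(router, seq, age, neighbours):
    return {"id": router, "seq": seq, "age": age, "links": dict(neighbours)}

# hypothetical delays for router B's point-to-point lines
lsp = build_lsp("B", seq=21, age=60, neighbours={"A": 4, "C": 2, "F": 6})
print(lsp)
```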
OSPF
OSPF packets
RIP vs OSPF (excerpt):
- RIP is basically used for smaller organizations; OSPF is basically used for larger
organizations in the network.
- In RIP, the networks are classified as areas and tables; in OSPF, the networks are
classified as areas, sub-areas, autonomous systems and backbone areas.
A TCP /UDP socket is an endpoint instance defined by an IP address and a port in the context of
either a particular TCP connection or the listening state.
There can only be one listener socket for a given address/port combination.
The IANA (Internet Assigned Numbers Authority) has divided the port numbers into
three ranges: well known, registered, and dynamic (or private), as shown in Figure 23.4.
Well-known ports. The ports ranging from 0 to 1023 are assigned and controlled
by IANA. These are the well-known ports.
Registered ports. The ports ranging from 1024 to 49,151 are not assigned or controlled
by IANA. They can only be registered with IANA to prevent duplication.
Dynamic ports. The ports ranging from 49,152 to 65,535 are neither controlled
nor registered. They can be used by any process. These are the ephemeral ports.
Socket Addresses
Process-to-process delivery needs two identifiers, IP address and the port number, at
each end to make a connection. The combination of an IP address and a port number is
called a socket address. The client socket address defines the client process uniquely
just as the server socket address defines the server process uniquely (see Figure 23.5).
A transport layer protocol needs a pair of socket addresses: the client socket address
and the server socket address. These four pieces of information are part of the IP header
and the transport layer protocol header. The IP header contains the IP addresses; the
UDP or TCP header contains the port numbers.
Imagine sitting on your PC at home, and you have two browser windows open.
One looking at the Google website, and the other at the Yahoo website.
The connection to Google would be:
Your PC – IP1+port 60200 ——– Google IP2 +port 80 (standard port)
The combination IP1+60200 = the socket on the client computer and IP2 + port 80 = destination
socket on the Google server.
The connection to Yahoo would be:
your PC – IP1+port 60401 ——–Yahoo IP3 +port 80 (standard port)
The combination IP1+60401 = the socket on the client computer and IP3 + port 80 = destination
socket on the Yahoo server.
Notes: IP1 is the IP address of your PC. Client port numbers are dynamically assigned, and can be
reused once the session is closed.
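The socket-address pairing above can be reproduced with Python's socket module; a local listener on the loopback address stands in for the web server, and the OS assigns the client an ephemeral port just like port 60200 in the example.

```python
# Sketch: a (client IP + ephemeral port) / (server IP + port) socket pair.
import socket

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))        # port 0 -> the OS picks a free port
server.listen(1)
host, port = server.getsockname()

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect((host, port))
conn, addr = server.accept()

client_addr = client.getsockname()   # ephemeral port, dynamically assigned
server_addr = conn.getsockname()     # the server's listening address/port
print("client socket:", client_addr)
print("server socket:", server_addr)

client.close(); conn.close(); server.close()
```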
*SEE THE PPT ADDED IN THE GOOGLE CLASSROOM RELATED TO TCP SOCKETS
The network and transport layers share the responsibility for handling congestion. Since
congestion occurs within the network, it is the network layer that directly experiences it and
must ultimately determine what to do with the excess packets.
However, the most effective way to control congestion is to reduce the load that the transport
layer is placing on the network. This requires the network and transport layers to work together
When too much traffic is offered, congestion sets in and performance degrades sharply.
The above figure depicts the onset of congestion. When the number of packets hosts send into
the network is well within its carrying capacity, the number delivered is proportional to the
number sent. If twice as many are sent, twice as many are delivered.
However, as the offered load approaches the carrying capacity, bursts of traffic occasionally
fill up the buffers inside routers and some packets are lost. These lost packets consume some
of the capacity, so the number of delivered packets falls below the ideal curve. The network is
now congested. Unless the network is well designed, it may experience a congestion collapse.
Congestion control has to do with making sure the network is able to carry the offered traffic.
It is a global issue, involving the behavior of all the hosts and routers.
Flow control, in contrast, relates to the traffic between a particular sender and a particular
receiver. Its job is to make sure that a fast sender cannot continually transmit data faster than
the receiver is able to absorb it.
Congestion control refers to the techniques used to control or prevent congestion. Congestion
control techniques can be broadly classified into two categories: open-loop (prevention) and
closed-loop (removal). Open-loop prevention policies include the following:
1. Retransmission Policy:
Retransmission is sometimes unavoidable: if the sender feels that a sent packet is lost
or corrupted, it needs to be retransmitted. Retransmission in general may increase
congestion, so the retransmission policy and timers must be designed to prevent it.
2. Window Policy:
The type of window at the sender side may also affect congestion. Several packets
in the Go-Back-N window are resent, although some packets may have been received
successfully at the receiver side. This duplication may increase the congestion in the
network and make it worse.
Therefore, the Selective Repeat window should be adopted, as it resends only the
specific packet that may have been lost.
3. Discarding Policy :
A good discarding policy adopted by the routers is one that prevents congestion
while partially discarding corrupted or less sensitive packets, so that the quality
of the message is still maintained.
In case of audio file transmission, routers can discard less sensitive packets to prevent
congestion and also maintain the quality of the audio file.
4. Acknowledgment Policy:
Since acknowledgments are also part of the load in the network, the acknowledgment
policy imposed by the receiver may also affect congestion. Several approaches can be
used: for example, the receiver may send an acknowledgment only when it has a packet
to send or a special timer expires, or acknowledge only N packets at a time.
5. Admission Policy:
In admission policy a mechanism should be used to prevent congestion. Switches in a flow
should first check the resource requirement of a network flow before transmitting it further.
If there is a chance of congestion or there is congestion in the network, router should deny
establishing a virtual network connection to prevent further congestion.
1. Backpressure Technique:
In the above diagram the 3rd node is congested and stops receiving packets; as a
result, the 2nd node may get congested due to the slowing down of the output data
flow. Similarly, the 1st node may get congested and inform the source to slow down.
2. Choke Packet Technique:
Choke packet technique is applicable to both virtual circuit networks and datagram
subnets. A choke packet is a packet sent by a node to the source to inform it of
congestion. Each router monitors its resources and the utilization at each of its output
lines. Whenever the resource utilization exceeds the threshold value which is set by the
administrator, the router directly sends a choke packet to the source giving it a feedback
to reduce the traffic. The intermediate nodes through which the packets have traveled
are not warned about congestion.
3. Implicit Signaling:
In implicit signaling there is no communication between the congested nodes and the source; the source guesses that there is congestion somewhere in the network. For example, when a sender sends several packets and no acknowledgment arrives for a while, one assumption is that the network is congested.
4. Explicit Signaling:
In explicit signaling, a node that experiences congestion can explicitly send a packet to the source or destination to inform it of the congestion. Explicit signaling differs from the choke packet technique in that the signal is included in the packets that carry data, rather than in a separate packet as in the choke packet technique.
Explicit signaling can occur in either forward or backward direction.
Forward Signaling: The signal is sent in the direction of the congestion, so the destination is warned. The receiver then adopts policies to prevent further congestion.
Backward Signaling: The signal is sent in the direction opposite to the congestion, so the source is warned that it needs to slow down.
We will start our study of congestion control by looking at the approaches that can be used at different time scales. Then we will look at approaches for preventing congestion from occurring in the first place, followed by approaches for dealing with it once it has set in.
In all feedback schemes, the hope is that knowledge of congestion will cause the hosts to take
appropriate action to reduce the congestion.
The presence of congestion means that the load is (temporarily) greater than the resources (in
part of the system) can handle. Two solutions come to mind: increase the resources or decrease
the load.
We first look at methods of controlling congestion in open loop systems. These systems are designed to minimize congestion in the first place, rather than letting it happen and reacting after the fact. They try to achieve this goal by using appropriate policies at various levels. In Fig. 5-26 we see the different data link, network, and transport policies that can affect congestion.
Discard policy is the rule telling which packet is dropped when there is no space.
A good routing algorithm can help avoid congestion by spreading the traffic over all
the lines, whereas a bad one can send too much traffic over already congested lines.
Packet lifetime management deals with how long a packet may live before being
discarded. If it is too long, lost packets may clog up the works for a long time, but if it
is too short, packets may sometimes time out before reaching their destination, thus
inducing retransmissions.
One technique that is widely used to keep congestion that has already started from getting worse is admission control.
Once congestion has been signaled, no more virtual circuits are set up until the problem has gone away.
An alternative approach is to allow new virtual circuits but carefully route all new virtual circuits around problem areas. For example, consider the subnet of Fig. 5-27(a), in which two routers are congested, as indicated.
The Warning Bit
Each router monitors the utilization u of each of its output lines by periodically sampling the instantaneous line utilization f (0 or 1) and maintaining the running average
u_new = a * u_old + (1 - a) * f,
where the constant a determines how fast the router forgets recent history.
Whenever u moves above the threshold, the output line enters a ''warning'' state. Each newly arriving packet is checked to see if its output line is in the warning state. If it is, some action is taken. The action taken can be one of several alternatives, which we will now discuss.
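The running average can be sketched in a few lines of Python. The weight a = 0.9 and the 0.5 warning threshold below are illustrative assumptions, not values from the text:

```python
def update_utilization(u_old, f, a=0.9):
    """One step of the exponentially weighted average u = a*u_old + (1-a)*f.

    f is the instantaneous line utilization (0 or 1); a larger `a` makes the
    router forget recent history more slowly.  a = 0.9 is an assumed value.
    """
    return a * u_old + (1 - a) * f

# A line that is busy (f = 1) on every sample drifts toward full utilization:
u = 0.0
for _ in range(10):
    u = update_utilization(u, 1)
warning = u > 0.5  # threshold value is an assumption
```

After ten busy samples the average is 1 - 0.9^10, already past the assumed threshold, so the line would enter the warning state.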
The old DECNET architecture signaled the warning state by setting a special bit in the packet's
header.
When the packet arrived at its destination, the transport entity copied the bit into the next
acknowledgement sent back to the source. The source then cut back on traffic.
As long as the router was in the warning state, it continued to set the warning bit, which meant
that the source continued to get acknowledgements with it set.
The source monitored the fraction of acknowledgements with the bit set and adjusted its
transmission rate accordingly. As long as the warning bits continued to flow in, the source
continued to decrease its transmission rate. When they slowed to a trickle, it increased its
transmission rate.
Note that since every router along the path could set the warning bit, traffic increased only
when no router was in trouble.
Choke Packets
In this approach, the router sends a choke packet back to the source host, giving it the
destination found in the packet.
The original packet is tagged (a header bit is turned on) so that it will not generate any more
choke packets farther along the path and is then forwarded in the usual way.
When the source host gets the choke packet, it is required to reduce the traffic sent to the
specified destination by X percent. Since other packets aimed at the same destination are
probably already under way and will generate yet more choke packets, the host should ignore
choke packets referring to that destination for a fixed time interval. After that period has
expired, the host listens for more choke packets for another interval. If one arrives, the line is
still congested, so the host reduces the flow still more and begins ignoring choke packets
again. If no choke packets arrive during the listening period, the host may increase the flow
again.
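The host-side reaction just described (cut the rate, ignore further choke packets for an interval, then listen again) can be sketched as follows. The 50 percent cut, the interval length, and the 5 percent increase factor are assumed parameters for illustration:

```python
class ChokeResponder:
    """Host reaction to choke packets: reduce the rate, then ignore further
    choke packets for a fixed interval before listening again.

    The cut percentage, ignore interval, and increase factor are assumed
    illustrative parameters, not values prescribed by the protocol.
    """
    def __init__(self, rate, cut=50, ignore=3):
        self.rate = rate
        self.cut = cut            # percent reduction per accepted choke packet
        self.ignore = ignore      # ticks to ignore choke packets after a cut
        self.deaf_until = 0       # tick until which choke packets are ignored
        self.tick = 0

    def advance(self, choke_arrived):
        self.tick += 1
        if self.tick < self.deaf_until:
            return                # still inside the ignore interval
        if choke_arrived:
            self.rate *= (100 - self.cut) / 100
            self.deaf_until = self.tick + self.ignore
        else:
            self.rate *= 1.05     # no choke packets: increase the flow slowly
```

Note that increases are multiplicatively small compared with the cuts, matching the idea that increases happen in smaller increments.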
The feedback implicit in this protocol can help prevent congestion yet not throttle any flow
unless trouble occurs.
Hosts can reduce traffic by adjusting their policy parameters.
Increases are done in smaller increments to prevent congestion from reoccurring quickly.
Routers can maintain several thresholds. Depending on which threshold has been crossed, the
choke packet can contain a mild warning, a stern warning, or an ultimatum.
At high speeds or over long distances, sending a choke packet to the source hosts does not
work well because the reaction is so slow.
Consider, for example, a host in San Francisco (router A in Fig. 5-28) that is sending traffic to
a host in New York (router D in Fig. 5-28) at 155 Mbps. If the New York host begins to run
out of buffers, it will take about 30 msec for a choke packet to get back to San Francisco to tell
it to slow down. The choke packet propagation is shown as the second, third, and fourth steps
in Fig. 5-28(a). In those 30 msec, another 4.6 megabits will have been sent. Even if the host in
San Francisco completely shuts down immediately, the 4.6 megabits in the pipe will continue
to pour in and have to be dealt with. Only in the seventh diagram in Fig. 5-28(a) will the New
York router notice a slower flow.
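The 4.6-megabit figure is just the bandwidth-delay product and is easy to check:

```python
# Data already "in the pipe" before the choke packet reaches the source:
rate_bps = 155e6             # 155 Mbps sender in San Francisco
feedback_delay_s = 0.030     # ~30 msec for the choke packet to travel back
in_flight_bits = rate_bps * feedback_delay_s
print(in_flight_bits / 1e6)  # 4.65 megabits, the ~4.6 Mb mentioned above
```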
An alternative approach is to have the choke packet take effect at every hop it passes through,
as shown in the sequence of Fig. 5-28(b). Here, as soon as the choke packet reaches F, F is
required to reduce the flow to D. Doing so will require F to devote more buffers to the flow,
since the source is still sending away at full blast, but it gives D immediate relief, like a
headache remedy in a television commercial. In the next step, the choke packet reaches E,
which tells E to reduce the flow to F. This action puts a greater demand on E's buffers but gives
F immediate relief. Finally, the choke packet reaches A and the flow genuinely slows down.
The net effect of this hop-by-hop scheme is to provide quick relief at the point of congestion
at the price of using up more buffers upstream. In this way, congestion can be nipped in the
bud without losing any packets.
Figure 5-28. (a) A choke packet that affects only the source. (b) A choke packet that affects
each hop it passes through.
e. LOAD SHEDDING
When none of the above methods make the congestion disappear, routers can bring out
the heavy artillery: load shedding.
Load shedding is a fancy way of saying that when routers are being inundated by packets that they cannot handle, they just throw them away.
A router drowning in packets can just pick packets at random to drop, but usually it can
do better than that.
Which packet to discard may depend on the applications running.
To implement an intelligent discard policy, applications must mark their packets in
priority classes to indicate how important they are. If they do this, then when packets
have to be discarded, routers can first drop packets from the lowest class, then the next
lowest class, and so on.
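A discard policy of this kind might be sketched as follows; the (priority, payload) packet representation is an assumption made for illustration:

```python
def shed(queue, capacity):
    """Keep at most `capacity` packets, discarding lowest-priority ones first.

    Each packet is an assumed (priority, payload) pair; higher numbers mean
    more important.  Arrival order is preserved among the survivors.
    """
    if len(queue) <= capacity:
        return list(queue)
    # Rank arrival indices by priority (highest first; ties keep arrival order).
    ranked = sorted(range(len(queue)), key=lambda i: queue[i][0], reverse=True)
    survivors = sorted(ranked[:capacity])
    return [queue[i] for i in survivors]
```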
RANDOM EARLY DETECTION
It is well known that dealing with congestion after it is first detected is more effective
than letting it gum up the works and then trying to deal with it. This observation leads
to the idea of discarding packets before all the buffer space is really exhausted. A
popular algorithm for doing this is called RED (Random Early Detection).
In some transport protocols (including TCP), the response to lost packets is for the
source to slow down. The reasoning behind this logic is that TCP was designed for
wired networks and wired networks are very reliable, so lost packets are mostly due to
buffer overruns rather than transmission errors. This fact can be exploited to help
reduce congestion.
By having routers drop packets before the situation has become hopeless (hence the
''early'' in the name), the idea is that there is time for action to be taken before it is too
late. To determine when to start discarding, routers maintain a running average of their
queue lengths. When the average queue length on some line exceeds a threshold, the
line is said to be congested and action is taken.
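A minimal sketch of the RED idea follows. The weight for the running average and the two thresholds are illustrative assumptions, not recommended operational values:

```python
import random

def average_queue(avg, sample, w=0.2):
    """Running average of the queue length (weight w is an assumed value)."""
    return (1 - w) * avg + w * sample

def red_drop(avg, min_th=5, max_th=15, max_p=0.1):
    """RED sketch: below min_th never drop, above max_th always drop, and in
    between drop with a probability that rises with the average queue length.
    """
    if avg < min_th:
        return False
    if avg >= max_th:
        return True
    return random.random() < max_p * (avg - min_th) / (max_th - min_th)
```

Using a running average rather than the instantaneous queue length keeps RED from reacting to short, harmless bursts.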
f. JITTER CONTROL
The variation (i.e., standard deviation) in the packet arrival times is called jitter.
High jitter, for example some packets taking 20 msec and others taking 30 msec to arrive, will give an uneven quality to the sound or movie. Jitter is illustrated in Fig. 5-29. In contrast, an agreement that 99 percent of the packets be delivered with a delay in the range of 24.5 msec to 25.5 msec might be acceptable.
The jitter can be bounded by computing the expected transit time for each hop along
the path. When a packet arrives at a router, the router checks to see how much the packet
is behind or ahead of its schedule. This information is stored in the packet and updated
at each hop. If the packet is ahead of schedule, it is held just long enough to get it back
on schedule. If it is behind schedule, the router tries to get it out the door quickly.
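The hold-or-hurry rule applied at each router reduces to one comparison:

```python
def departure_time(arrival, scheduled):
    """Per-hop jitter control: a packet ahead of schedule is held until its
    expected time; a packet behind schedule is forwarded immediately.
    """
    return max(arrival, scheduled)  # hold early packets, rush late ones
```

For example, a packet due at t = 12 that arrives at t = 10 is held for 2 time units, while one arriving at t = 15 leaves at once.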
Quality of Service
Quality of service (QoS) refers to any technology that manages data traffic to reduce packet
loss, latency and jitter on the network. QoS controls and manages network resources by setting
priorities for specific types of data on the network.
A. Requirements
B. Techniques for Achieving Good Quality of Service
a. Overprovisioning
b. Buffering
c. Traffic Shaping
d. The Leaky Bucket Algorithm
e. The Token Bucket Algorithm
f. Resource Reservation
g. Admission Control
h. Proportional Routing
i. Packet Scheduling
A. Requirements
A stream of packets from a source to a destination is called a flow.
In a connection-oriented network, all the packets belonging to a flow follow the same route; in a connectionless network, they may follow different routes.
Several common applications and the stringency of their requirements are listed in Fig. 5-27.
The first four applications have stringent requirements on reliability. No bits may be delivered
incorrectly. This goal is usually achieved by check summing each packet and verifying the
checksum at the destination. If a packet is damaged in transit, it is not acknowledged and will
be retransmitted eventually. This strategy gives high reliability.
The four final (audio/video) applications can tolerate errors, so no checksums are computed or
verified.
File transfer applications, including e-mail and video, are not delay sensitive. If all packets are
delayed uniformly by a few seconds, no harm is done.
Interactive applications, such as Web surfing and remote login, are more delay sensitive.
Now that we know something about QoS requirements, how do we achieve them? We will now examine some of the techniques that system designers use to achieve QoS.
Overprovisioning
An easy solution is to provide so much router capacity, buffer space, and bandwidth that the packets just fly through easily. The trouble with this solution is that it is expensive.
The telephone system is overprovisioned. It is rare to pick up a telephone and not get a dial tone instantly.
Buffering
Flows can be buffered on the receiving side before being delivered. Buffering does not affect the reliability or bandwidth; it increases the delay, but it smooths out the jitter. For audio and video on demand, jitter is the main problem, so this technique helps a lot.
Example:
Consider a stream of packets being delivered with substantial jitter. Packet 1 is sent from the server at t = 0 sec and arrives at the client at t = 1 sec. Packet 2 undergoes more delay and takes 2 sec to arrive. As the packets arrive, they are buffered on the client machine.
At t = 10 sec, playback begins. At this time, packets 1 through 6 have been buffered so that they can be removed
from the buffer at uniform intervals for smooth play. Unfortunately, packet 8 has been delayed so much that it is
not available when its play slot comes up, so playback must stop until it arrives, creating an annoying gap in the
music or movie. This problem can be alleviated by delaying the starting time even more, although doing so also
requires a larger buffer. Commercial Web sites that contain streaming audio or video all use players that buffer for
about 10 seconds before starting to play.
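The trade-off between playback start time and gaps can be sketched with assumed arrival times (packets numbered from 0 here):

```python
def missed_slots(arrivals, start, interval=1.0):
    """Return the indices of packets that arrive after their scheduled play
    time `start + i*interval`, i.e. the packets that cause playback gaps.
    """
    return [i for i, t in enumerate(arrivals) if t > start + i * interval]

# Packet 7 is delayed until t = 20.  Starting playback at t = 10 leaves a gap;
# delaying the start (at the cost of a larger buffer) avoids it.
arrivals = [1, 2, 4, 5, 6, 7, 8, 20]
```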
Traffic Shaping
1. Another method of congestion control is to “shape” the traffic before it enters the
network.
2. Traffic shaping controls the rate at which packets are sent (not just how many). Used in ATM and
Integrated Services networks.
3. At connection set - up time, the sender and carrier negotiate a traffic pattern (shape).
The Leaky Bucket Algorithm
Imagine a bucket with a small hole in the bottom: no matter the rate at which water enters the bucket, the outflow is at a constant rate whenever there is any water in the bucket, and zero when the bucket is empty. Once the bucket is full, any additional water entering it spills over the sides and is lost (i.e. it doesn't appear in the output stream through the hole underneath).
The same idea of leaky bucket can be applied to packets, as shown in Fig. 7.5.3(b). Conceptually each network
interface contains a leaky bucket. And the following steps are performed:
• When the host has to send a packet, the packet is thrown into the bucket.
• The bucket leaks at a constant rate, meaning the network interface transmits packets at a constant rate.
• Bursty traffic is converted to a uniform traffic by the leaky bucket.
• In practice the bucket is a finite queue that outputs at a finite rate.
This arrangement can be simulated in the operating system or can be built into the hardware. Implementation of
this algorithm is easy and consists of a finite queue. Whenever a packet arrives, if there is room in the queue it is
queued up and if there is no room then the packet is discarded.
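The finite-queue implementation described above can be sketched tick by tick; the bucket size and drain rate below are assumed example values:

```python
from collections import deque

def leaky_bucket(arrivals, bucket_size, rate):
    """Leaky bucket as a finite FIFO queue drained at a constant rate.

    arrivals[t] is the number of packets arriving in tick t; at most `rate`
    packets leave per tick; packets that find the queue full are discarded.
    Returns (packets sent per tick, total packets dropped).
    """
    queue, sent, dropped = deque(), [], 0
    for n in arrivals:
        for _ in range(n):                 # enqueue only if there is room
            if len(queue) < bucket_size:
                queue.append(1)
            else:
                dropped += 1
        out = min(rate, len(queue))        # constant-rate output
        for _ in range(out):
            queue.popleft()
        sent.append(out)
    return sent, dropped
```

A burst of 5 packets into a 3-packet bucket draining 1 per tick comes out as a smooth 1, 1, 1 with two packets lost, showing how bursty input becomes uniform output.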
The leaky bucket algorithm described above enforces a rigid pattern on the output stream, irrespective of the pattern of the input. For many applications it is better to allow the output to speed up somewhat when a large burst arrives than to lose the data. The token bucket algorithm provides such a solution. In this algorithm the bucket holds tokens, generated at regular intervals. The main steps of this algorithm can be described as follows:
At regular intervals tokens are thrown into the bucket, which has a maximum capacity. When a packet is ready to be sent, a token is removed from the bucket and the packet is transmitted; if no token is available, the packet must wait.
Figure 7.5.4 shows the two scenarios before and after the tokens present in the bucket have been consumed. In Fig.
7.5.4(a) the bucket holds two tokens, and three packets are waiting to be sent out of the interface, in Fig. 7.5.4(b)
two packets have been sent out by consuming two tokens, and 1 packet is still left.
The token bucket algorithm is less restrictive than the leaky bucket algorithm, in a sense that it allows bursty
traffic. However, the limit of burst is restricted by the number of tokens available in the bucket at a particular
instant of time.
The implementation of the basic token bucket algorithm is simple; a variable is used just to count the tokens. This counter is incremented every t seconds and is decremented whenever a packet is sent. Whenever the counter reaches zero, no further packets are sent out, as shown in Fig. 7.5.5.
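The counter-based implementation can be sketched as below. Note how an initial burst up to the bucket capacity is allowed, after which output is limited to the token rate; the capacities used are assumed example values:

```python
def token_bucket(arrivals, capacity, tokens_per_tick):
    """Token bucket sketch: a token counter is topped up every tick (capped at
    `capacity`) and decremented per packet sent; packets without tokens wait.

    arrivals[t] is the number of packets ready in tick t; returns the number
    of packets sent in each tick.
    """
    tokens, backlog, sent = capacity, 0, []
    for n in arrivals:
        backlog += n
        out = min(backlog, tokens)      # send only as many as we have tokens
        backlog -= out
        tokens -= out
        sent.append(out)
        tokens = min(capacity, tokens + tokens_per_tick)
    return sent
```

Compare this with the leaky bucket: a burst of 5 into a bucket of capacity 3 goes out as 3, 1, 1 rather than a strict 1 per tick, which is exactly the limited burstiness the text describes.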
Resource Reservation
Once we have a specific route for a flow, it becomes possible to reserve resources along that route to make sure the
needed capacity is available. Three different kinds of resources can potentially be reserved:
1. Bandwidth.
2. Buffer space.
3. CPU cycles.
The first one, bandwidth, is the most obvious. If a flow requires 1 Mbps and the outgoing line has a capacity of 2
Mbps, trying to direct three flows through that line is not going to work. Thus, reserving bandwidth means not
oversubscribing any output line.
A second resource that is often in short supply is buffer space. When a packet arrives, it is usually deposited on the
network interface card by the hardware itself. The router software then has to copy it to a buffer in RAM and
queue that buffer for transmission on the chosen outgoing line. If no buffer is available, the packet has to be
discarded since there is no place to put it. For a good quality of service, some buffers can be reserved for a specific
flow so that flow does not have to compete for buffers with other flows. There will always be a buffer available
when the flow needs one, up to some maximum.
Finally, CPU cycles are also a scarce resource. It takes router CPU time to process a packet, so a router can process
only a certain number of packets per second. Making sure that the CPU is not overloaded is needed to ensure
timely processing of each packet.
Admission Control
The incoming traffic from some flow is well shaped and can potentially follow a single route in which capacity can
be reserved in advance on the routers along the path. When such a flow is offered to a router, it has to decide,
based on its capacity and how many commitments it has already made for other flows, whether to admit or reject
the flow.
The decision to accept or reject a flow is not a simple matter of comparing the (bandwidth, buffers, cycles)
requested by the flow with the router's excess capacity in those three dimensions.
Proportional Routing
Most routing algorithms try to find the best path for each destination and send all traffic to that
destination over the best path. A different approach that has been proposed to provide a higher quality of
service is to split the traffic for each destination over multiple paths.
Since routers generally do not have a complete overview of network-wide traffic, the only feasible
way to split traffic over multiple routes is to use locally-available information.
A simple method is to divide the traffic equally or in proportion to the capacity of the outgoing links.
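The capacity-proportional split uses only locally available information, as a short sketch shows:

```python
def split_traffic(total, capacities):
    """Divide outgoing traffic in proportion to the capacity of each outgoing
    link, using only locally available information (no network-wide view).
    """
    cap_sum = sum(capacities)
    return [total * c / cap_sum for c in capacities]

# 30 units of traffic over links of capacity 2 and 1 split 20 / 10.
```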
Packet Scheduling
If a router is handling multiple flows, there is a danger that one flow will hog too much of its
capacity and starve all the other flows. Processing packets in the order of their arrival means that an
aggressive sender can capture most of the capacity of the routers its packets traverse, reducing the quality of
service for others.
One widely used packet scheduling algorithm is fair queueing. The essence of the algorithm is that routers have separate queues for each output line, one for each flow. When a line becomes idle, the router scans the queues round robin, taking the first packet from the next queue. In this way, with n hosts competing for a given output line, each host gets to send one out of every n packets. Sending more packets will not improve this fraction.
One problem with this algorithm is that it gives all hosts the same priority. In many situations, it is
desirable to give video servers more bandwidth than regular file servers so that they can be given two or
more bytes per tick. This modified algorithm is called weighted fair queueing and is widely used.
Sometimes the weight is equal to the number of flows coming out of a machine, so each process gets equal
bandwidth.
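The round-robin scan over per-flow queues can be sketched as follows (byte counts and weights are ignored in this minimal, unweighted version):

```python
from collections import deque

def fair_queue(flows):
    """Fair-queueing sketch: scan per-flow queues round robin, taking one
    packet from each non-empty queue per pass, so no flow can starve the rest.
    """
    queues = [deque(f) for f in flows]
    order = []
    while any(queues):
        for q in queues:
            if q:
                order.append(q.popleft())
    return order
```

Even if one flow has queued many packets, the single-packet flow still gets its turn on the first pass; a weighted version would take more packets per pass from higher-weight queues.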
Integrated services:
QoS refers to a number of networking technologies that enable the network to deliver the results it needs. QoS helps improve the performance of the network in terms of availability, error rate, latency, and throughput, and it supports the prioritization of network traffic. QoS can also target a specific router or server. Network monitoring systems are therefore usually deployed as part of QoS to ensure that the network is functioning at the desired level. Overall, QoS provides two types of services: integrated services and differentiated services.
Integrated services refer to an architecture that ensures quality of service (QoS) in a network. These services enable the recipient to watch and listen to video and sound without interruption. Every router in the network implements integrated services, and every application that requires some kind of guarantee has to make an individual reservation.
The integrated services architecture is implemented through a signaling protocol together with an admission control routine, a classifier, and a packet scheduler. These services require an explicit signaling mechanism to convey information to the routers so that they can provide the requested resources.
Packet Classification - Categorizes the packet within a specific group using the traffic descriptor.
Traffic congestion avoidance - Monitors traffic loads to minimize congestion. It includes dropping packets when necessary.
Functionality
Integrated services involve reserving resources in advance before the required quality of service is achieved. Differentiated services, on the other hand, mark the packets with a priority and send them into the network without any prior reservation. Hence, their functionality is the main difference between integrated services and differentiated services.
Scalability
Integrated services are not scalable, while differentiated services are scalable.
Setup
Another difference is that integrated services require a per-flow setup, while differentiated services require only long-term setup.
Scope of services
Integrated services provide an end-to-end service scope, whereas differentiated services provide a per-domain service scope.