Unit 3: Congestion Control

The document discusses congestion control in computer networks. It defines congestion as occurring when the load on the network is greater than its capacity, causing packet delay and loss. Both the network and transport layers are responsible for handling congestion. Congestion control techniques aim to prevent or remove congestion and can operate with or without feedback from the network. Open-loop control uses policies at sources and destinations to avoid congestion, while closed-loop control detects congestion and passes that information back to adjust the system. Common approaches include backpressure, choke packets, and implicit and explicit signaling.

Congestion Control

Unit 3: Network Layer


Congestion
► Congestion occurs when the load on the network, i.e. the number of packets sent into the network, is greater than the capacity of the network, i.e. the number of packets the network can handle. OR
► Too many packets present in (a part of) the network cause packet delay and loss, which degrades performance.
► This situation is called congestion.
► The network and transport layers share the responsibility for handling congestion.
► Since congestion occurs within the network, it is the network layer that directly experiences it and must ultimately determine what to do with the excess packets.
Effects of Congestion

Congestion affects two vital parameters of network performance, namely throughput and delay.
In simple terms, throughput can be defined as the percentage utilization of the network capacity.
Delay also increases with offered load. No matter what technique is used for congestion control, the delay grows without bound as the load approaches the capacity of the system.
It may be noted that there is initially a longer delay when a congestion control policy is applied.
However, a network without any congestion control will saturate at a lower offered load.
Congestion
► The most effective way to control congestion is to reduce the load that the transport layer is placing on the network.
► This requires the network and transport layers to work together.
Causes of Congestion
• Congestion occurs when a router receives data faster
than it can send it
– Insufficient bandwidth
– Slow hosts
– Data simultaneously arriving from multiple lines
destined for the same outgoing line.
• The system is not balanced
– Correcting the problem at one router will probably
just move the bottleneck to another router.
Congestion Causes More Congestion
– Incoming messages must be placed in queues
• The queues have a finite size
– Overflowing queues will cause packets to be dropped
– Long queue delays will cause packets to be resent
– Dropped packets will cause packets to be resent
• Senders that are trying to transmit to a congested
destination also become congested
– They must continually resend packets that have been
dropped or that have timed-out
– They must continue to hold outgoing/unacknowledged
messages in memory.
Congestion Control
• Congestion control refers to techniques and mechanisms that
can either prevent congestion, before it happens, or remove
congestion, after it has happened.
Open-Loop Congestion Control
➢ Protocols to prevent or avoid congestion, ensuring that the system (or the network under consideration) never enters a congested state.
➢ This category of solutions or protocols attempts to solve the problem through good design in the first place, making sure congestion does not occur at all. Once the system is up and running, midcourse corrections are not made.
➢ These solutions are somewhat static in nature, as the policies to control congestion do not change much according to the current state of the system.
➢ These rules or policies include deciding when to accept traffic, when to discard it, making scheduling decisions, and so on.
➢ The main point here is that they make decisions without taking into consideration the current state of the network.
➢ The open-loop algorithms are further divided on the basis of whether they act at the source or at the destination.
Open-Loop Congestion Control
In open-loop congestion control, policies are applied to
prevent congestion before it happens. In these mechanisms,
congestion control is handled by either the source or the
destination.
Retransmission Policy :
➢ Retransmission is sometimes unavoidable.
➢ If the sender feels that a sent packet is lost or corrupted, the packet needs to be retransmitted. Retransmission in general may increase congestion in the network.
➢ A good retransmission policy can therefore prevent congestion. The retransmission policy and the retransmission timers must be designed to optimize efficiency and at the same time prevent congestion.
Window Policy :
➢ The type of window at the sender may also affect congestion.
➢ The Selective Repeat window is better than the Go-Back-N window for congestion control.
➢ In the Go-Back-N window, several packets are resent even though some of them may have been received successfully at the receiver side. This duplication may increase congestion in the network and make it worse. Therefore, the Selective Repeat window should be adopted, as it resends only the specific packet that may have been lost.
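The difference can be seen by counting retransmissions after a single loss. The sketch below is illustrative only (the window contents and loss position are made up, not from the slides):

```python
# Hypothetical sketch: packet 2 of a 5-packet window is lost in transit.
WINDOW = [0, 1, 2, 3, 4]
LOST = 2

# Go-Back-N: the lost packet and everything after it are resent,
# even packets the receiver already got correctly.
gbn_resent = [p for p in WINDOW if p >= LOST]

# Selective Repeat: only the lost packet is resent.
sr_resent = [p for p in WINDOW if p == LOST]

print(gbn_resent)  # [2, 3, 4]
print(sr_resent)   # [2]
```

The extra packets resent by Go-Back-N are exactly the duplicates that may worsen congestion.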
Acknowledgment Policy :
➢ If the receiver does not acknowledge every packet it receives, it may slow down the sender and help prevent congestion.
➢ A receiver may send an acknowledgment only if it has a packet to be sent or when a special timer expires.
➢ A receiver may decide to acknowledge only every N packets.
Discarding Policy:
➢ A good discarding policy adopted by the routers may prevent congestion: the router partially discards corrupted or less sensitive packets while still maintaining the quality of the message.
➢ For example, in the case of audio file transmission, routers can discard less sensitive packets to prevent congestion and still maintain the quality of the audio file.
Admission Policy :
➢ An admission policy can also prevent congestion in virtual-circuit networks. Switches in a flow first check the resource requirement of the flow before admitting it to the network.
➢ A router can deny establishing a virtual-circuit connection if there is congestion in the network or if there is a possibility of future congestion.
➢ Since acknowledgments are also part of the load on the network, the acknowledgment policy imposed by the receiver may also affect congestion. Several approaches can be used to prevent congestion related to acknowledgments:
➢ The receiver should send an acknowledgment for N packets rather than sending an acknowledgment for every single packet.
➢ The receiver should send an acknowledgment only if it has a packet to send or when a timer expires.
Closed-Loop Congestion Control
➢ This category is based on the concept of feedback.
➢ During operation, some system parameters are measured and fed back to portions of the subnet that can take action to reduce the congestion.
➢ This approach can be divided into 3 steps:
• Monitor the system (network) to detect whether the network is congested, and determine the actual location and devices involved.
• Pass this information to the places where action can be taken.
• Adjust the system operation to correct the problem.
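The three steps above can be sketched as a minimal feedback loop. All names here (`queue_len`, `THRESHOLD`, the halving rule) are illustrative assumptions, not part of any real protocol:

```python
# Hypothetical sketch of the three closed-loop steps: monitor, report, adjust.
THRESHOLD = 8  # illustrative queue-length threshold for "congested"

def monitor(queue_len):
    """Step 1: detect congestion from a locally measured parameter."""
    return queue_len > THRESHOLD

def report(congested):
    """Step 2: pass the information to the place where action can be taken."""
    return {"congested": congested}

def adjust(send_rate, feedback):
    """Step 3: correct the problem by reducing the offered load."""
    return send_rate / 2 if feedback["congested"] else send_rate

rate = 100.0
rate = adjust(rate, report(monitor(queue_len=12)))
print(rate)  # 50.0 -- the source halves its rate on congestion feedback
```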
Closed-Loop Congestion Control
➢ The closed-loop algorithms can be further divided into two categories, namely explicit feedback and implicit feedback algorithms.
➢ In the explicit approach, special packets are sent back to the sources to curtail the congestion.
➢ In the implicit approach, the source itself acts proactively and tries to deduce the existence of congestion by making local observations.
Closed-Loop Congestion Control
Closed-loop congestion control mechanisms try to alleviate
congestion after it happens. Several mechanisms have been
used by different protocols.
Backpressure :
➢ Backpressure is a technique in which a congested node stops receiving packets from its upstream node.
➢ Backpressure is a node-to-node congestion control technique in which the congestion indication propagates in the opposite direction of the data flow.
➢ The backpressure technique can be applied only to virtual circuits, where each node has information about its upstream node.
Closed-Loop Congestion Control
Backpressure :
➢ This may cause the upstream node or nodes to become congested, and they, in turn, reject data from their own upstream nodes.
➢ Node III in the figure has more input data than it can handle. It drops some packets in its input buffer and informs node II to slow down.
➢ Node II, in turn, may become congested because it is slowing down the output flow of data. If node II is congested, it informs node I to slow down, which in turn may create congestion there.
➢ If so, node I informs the source of the data to slow down. This, in time, alleviates the congestion.
Choke Packet :
➢ The choke packet technique is applicable to both virtual-circuit networks and datagram subnets.
➢ A choke packet is a packet sent by a node to the source to inform it of congestion.
➢ In the choke packet method, the warning goes from the router that has encountered congestion directly to the source station.
➢ Each router monitors its resources and the utilization of each of its output lines. Whenever the resource utilization exceeds a threshold value set by the administrator, the router directly sends a choke packet to the source, giving it feedback to reduce the traffic.
➢ The intermediate nodes through which the packet has traveled are not warned about the congestion.
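A minimal sketch of the router-side check, under assumed names (`THRESHOLD`, `check_line`) that are not from any real router software:

```python
# Hypothetical sketch: a router compares an output line's utilization against
# an administrator-set threshold and, if exceeded, addresses a choke packet
# directly to the source. Intermediate nodes are not told anything.
THRESHOLD = 0.8  # illustrative utilization threshold

def check_line(utilization, source):
    """Return the choke packets (if any) this line's state triggers."""
    choke_packets = []
    if utilization > THRESHOLD:
        choke_packets.append({"type": "choke", "to": source})
    return choke_packets

print(check_line(0.95, "hostA"))  # one choke packet addressed to hostA
print(check_line(0.40, "hostB"))  # [] -- below threshold, no feedback sent
```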
Implicit Signaling :
➢ In implicit signaling, there is no communication between the
congested node or nodes and the source.
➢ The source guesses that there is congestion somewhere in the
network from other symptoms.
➢ For example, when a source sends several packets and there
is no acknowledgment for a while, one assumption is that the
network is congested so the source should slow down.
Explicit Signaling :
➢ The node that experiences congestion can explicitly send a
signal to the source or destination.
➢ The signal is included in the packets that carry data. Explicit
signaling, in Frame Relay congestion control, can occur in
either the forward or the backward direction.
(i) Backward Signaling
A bit can be set in a packet moving in the direction opposite
to the congestion. This bit can warn the source that there is
congestion and that it needs to slow down to avoid the
discarding of packets.
(ii) Forward Signaling
A bit can be set in a packet moving in the direction of the
congestion. This bit can warn the destination that there is
congestion. The receiver in this case can use policies, such as
slowing down the acknowledgments, to alleviate the
congestion.
Quality of Service(QoS)
• Network performance is guaranteed to all traffic flows that
have been admitted into the network
• Initially for connection-oriented networks
• Key Mechanisms
– Admission Control
– Policing
– Traffic Shaping
– Load Shedding
Admission Control
• Flows negotiate a contract with the network
• Specify requirements:
– Peak, average, and minimum bit rate
– Maximum burst size
– Delay and loss requirements
• Network computes the resources needed
– “Effective” bandwidth
• If the flow is accepted, the network allocates resources to ensure the QoS is delivered, as long as the source conforms to its contract

[Figure: typical bit rate (bits/second) demanded over time by a variable bit rate information source, showing the peak rate and the average rate]
Policing
• Network monitors traffic flows continuously to ensure they
meet their traffic contract
• When a packet violates the contract, network can discard or
tag the packet giving it lower priority
• If congestion occurs, tagged packets are discarded first
• Leaky Bucket Algorithm is the most commonly used policing
mechanism
– Bucket has specified leak rate for average contracted rate
– Bucket has specified depth to accommodate variations in arrival
rate
– Arriving packet is conforming if it does not result in overflow
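The leaky-bucket policing check described above can be sketched as follows. This is a minimal illustration, assuming packet-counted depth and the made-up names `DEPTH`, `LEAK`, and `police`:

```python
# Hypothetical leaky-bucket policer sketch: the bucket drains at the contracted
# average rate, and an arriving packet conforms only if adding it does not
# overflow the bucket's depth.
DEPTH = 4      # bucket depth in packets; absorbs variations in arrival rate
LEAK = 1.0     # contracted average rate: packets drained per time unit

level, last = 0.0, 0.0

def police(arrival_time):
    """Return True if the packet conforms, False if it should be dropped/tagged."""
    global level, last
    # Drain the bucket for the time elapsed since the previous arrival.
    level = max(0.0, level - LEAK * (arrival_time - last))
    last = arrival_time
    if level + 1 <= DEPTH:
        level += 1
        return True
    return False

# A burst of 6 back-to-back packets at t=0: only the first 4 fit the bucket.
results = [police(0.0) for _ in range(6)]
print(results)  # [True, True, True, True, False, False]
```

Non-conforming packets would then be discarded outright or tagged with lower priority, as the text describes.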
Traffic Shaping
• Another method of congestion control is to “shape” the
traffic before it enters the network.
• Traffic shaping controls the rate at which packets are sent
(not just how many). Used in ATM and Integrated
Services networks.
• At connection set-up time, the sender and carrier
negotiate a traffic pattern (shape).
• Two traffic shaping algorithms are:
– Leaky Bucket
– Token Bucket
The Leaky Bucket Algorithm
• The Leaky Bucket Algorithm is used to control the rate in a network. It is implemented as a single-server queue with a constant service time. If the bucket (buffer) overflows, packets are discarded.
• The leaky bucket enforces a constant output rate (the average rate) regardless of the burstiness of the input. It does nothing when the input is idle.
• The host injects one packet per clock tick onto the network.
• This results in a uniform flow of packets, smoothing out bursts and reducing congestion.
The Leaky Bucket Algorithm

(a) A leaky bucket with water. (b) A leaky bucket with packets.
• When packets are all the same size (as with ATM cells), one packet per tick is fine. For variable-length packets, though, it is better to allow a fixed number of bytes per tick. E.g. at 1024 bytes per tick, one tick can carry one 1024-byte packet, two 512-byte packets, or four 256-byte packets.
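The byte-counting variant can be sketched like this. It is an illustrative toy (the queue contents and the `BYTES_PER_TICK` constant are assumptions, not from the slides):

```python
# Hypothetical sketch of the byte-counting leaky bucket: each clock tick, up to
# BYTES_PER_TICK bytes leave the single-server queue, smoothing bursts into a
# uniform flow.
from collections import deque

BYTES_PER_TICK = 1024  # e.g. one 1024-byte packet or two 512-byte packets per tick

queue = deque()  # the bucket: a finite queue served at a constant rate

def tick():
    """Send whole packets, in arrival order, while the per-tick byte budget allows."""
    budget, sent = BYTES_PER_TICK, []
    while queue and queue[0] <= budget:
        size = queue.popleft()
        budget -= size
        sent.append(size)
    return sent

queue.extend([512, 512, 1024, 256])   # a burst of packet sizes arrives at once
print(tick())  # [512, 512]  -- two 512-byte packets fit in one tick
print(tick())  # [1024]
print(tick())  # [256]
```

However bursty the input, the output never exceeds 1024 bytes per tick.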
Leaky Bucket Traffic Shaper

[Figure: incoming traffic enters a buffer of size N; a server plays packets out periodically, producing shaped traffic]

• Buffer incoming packets
• Play out periodically to conform to parameters
• Surges in arrivals are buffered and smoothed out
• Possible packet loss due to buffer overflow
• Too restrictive, since conforming traffic does not need to be completely smooth
Token Bucket Algorithm
• The leaky bucket algorithm described above enforces a rigid pattern on the output stream, irrespective of the pattern of the input.
• For many applications it is better to allow the output to speed up somewhat when a large burst arrives than to lose the data.
• The Token Bucket algorithm provides such a solution.
• In this algorithm, the bucket holds tokens, generated at regular intervals.
• The main steps of the algorithm can be described as follows:
• At regular intervals, tokens are thrown into the bucket.
• The bucket has a maximum capacity.
• If there is a ready packet, a token is removed from the bucket, and the packet is sent.
• If there is no token in the bucket, the packet cannot be sent.
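The steps above can be sketched as follows, with illustrative names (`CAPACITY`, `add_token`, `try_send`) that are assumptions for this toy example:

```python
# Hypothetical token-bucket sketch: tokens arrive at regular intervals up to a
# maximum capacity; a packet is sent only if a token can be removed.
CAPACITY = 3  # illustrative maximum bucket capacity

tokens = 0

def add_token():
    """Called once per interval: throw a token into the bucket, up to capacity."""
    global tokens
    tokens = min(CAPACITY, tokens + 1)

def try_send():
    """Remove a token and send if one is available; otherwise the packet waits."""
    global tokens
    if tokens > 0:
        tokens -= 1
        return True
    return False

for _ in range(5):       # 5 idle intervals: tokens are saved up, capped at 3
    add_token()
results = [try_send() for _ in range(4)]  # then a burst of 4 packets arrives
print(results)  # [True, True, True, False]
```

Because the idle host banked three tokens, a burst of three packets goes out back-to-back; only the fourth has to wait for the next token.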
Token Bucket Algorithm
• In contrast to the leaky bucket, the Token Bucket Algorithm allows the output rate to vary, depending on the size of the burst.
• In the token bucket algorithm, the bucket holds tokens. To transmit a packet, the host must capture and destroy one token.
• Tokens are generated by a clock at the rate of one token every Δt seconds.
• Idle hosts can capture and save up tokens (up to the maximum size of the bucket) in order to send larger bursts later.
The Token Bucket Algorithm

[Figure: the token bucket (a) before and (b) after a packet is sent]
Token Bucket Traffic Shaper

[Figure: tokens arrive periodically into a token bucket of size K; incoming traffic waits in a buffer of size N, and the server admits a packet into the network only when sufficient tokens are available]

• The token rate regulates the transfer of packets
• If sufficient tokens are available, packets enter the network without delay
• K determines how much burstiness is allowed into the network
Leaky Bucket vs Token Bucket

Leaky Bucket:
• When the host has to send a packet, the packet is thrown into the bucket.
• The bucket leaks at a constant rate.
• Bursty traffic is converted into uniform traffic by the leaky bucket.
• In practice the bucket is a finite queue that outputs at a finite rate.

Token Bucket:
• The bucket holds tokens, generated at regular intervals of time.
• The bucket has a maximum capacity.
• If there is a ready packet, a token is removed from the bucket and the packet is sent.
• If there is no token in the bucket, the packet cannot be sent.
Advantage of Token Bucket over Leaky Bucket

• If the bucket is full in the token bucket, tokens are discarded, not packets; in the leaky bucket, packets are discarded.
• The token bucket can send large bursts at a faster rate, while the leaky bucket always sends packets at a constant rate.
Load Shedding
• When buffers become full, routers simply discard packets.
• Which packet is chosen to be the victim depends on the application and on the error strategy used in the data link layer.
• For a file transfer, for example, older packets cannot be discarded, since this would cause a gap in the received data.
• For real-time voice or video, it is probably better to throw away old data and keep new packets.
• The application can be asked to mark packets with a discard priority.
