
COMPUTER NETWORKS

Congestion Control &


Flow Control

V. ALSTON BENITO
What is Congestion?

Congestion in a computer network happens when too much data is being sent at the same time, causing the network to slow down. Just like traffic congestion on a busy road, network congestion leads to delays and sometimes data loss. When the network can't handle all the incoming data, it gets "clogged," making it difficult for information to travel smoothly from one place to another.
Congestion Control
Congestion control is a crucial concept in computer networks. It refers to the methods used to prevent network overload and ensure smooth data flow. When too much data is sent through the network at once, it can cause delays and data loss. Congestion control techniques help manage the traffic so all users can enjoy a stable and efficient network connection. These techniques are essential for maintaining the performance and reliability of modern networks.


Congestion Control
Improved Network Stability: Congestion control helps keep the network stable by preventing it from getting overloaded. It manages the flow of data so the network doesn't crash or fail due to too much traffic.

Reduced Latency and Packet Loss: Without congestion control, data transmission can slow down, causing delays and data loss. Congestion control helps manage traffic better, reducing these delays and ensuring fewer data packets are lost, making data transfer faster and the network more responsive.
Congestion Control
Enhanced Throughput: By avoiding congestion, the network can use its resources more effectively. This means more data can be sent in a shorter time, which is important for handling large amounts of data and supporting high-speed applications.

Fairness in Resource Allocation: Congestion control ensures that network resources are shared fairly among users. No single user or application can take up all the bandwidth, allowing everyone to have a fair share.
Congestion Control Algorithms
• Open Loop Congestion Control – acts without feedback from the network

1. Leaky Bucket Algorithm

2. Token Bucket Algorithm

• Closed Loop Congestion Control – uses feedback from the network to determine the flow of data

1. TCP Congestion Control

a) AIMD (Additive Increase Multiplicative Decrease)

b) Slow Start
Open Loop Congestion Control
LEAKY BUCKET ALGORITHM:

To understand this, imagine a bucket with a small hole in the bottom. No matter at what rate water enters the bucket, the outflow is at a constant rate. When the bucket is full, any additional water entering spills over the sides and is lost.
• When a host wants to send a packet, the packet is thrown into the bucket.
• The bucket leaks at a constant rate, meaning the network interface transmits packets at a constant rate.
• Bursty traffic is converted into uniform traffic by the leaky bucket.
• In practice, the bucket is a finite queue that outputs at a finite rate.
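The bullets above can be sketched as a finite queue that drains at a constant rate. This is a minimal illustration; the class and parameter names are my own, not from any library:

```python
from collections import deque

class LeakyBucket:
    """Leaky bucket sketch: a finite FIFO queue drained at a constant rate.

    `capacity` is the queue size in packets; `rate` is how many packets
    leak out per time unit (tick).
    """

    def __init__(self, capacity, rate):
        self.capacity = capacity
        self.rate = rate
        self.queue = deque()

    def arrive(self, packet):
        """Host throws a packet into the bucket; overflow is dropped."""
        if len(self.queue) < self.capacity:
            self.queue.append(packet)
            return True
        return False  # bucket full: the packet spills over and is lost

    def tick(self):
        """One time unit: leak up to `rate` packets, however they arrived."""
        sent = []
        for _ in range(min(self.rate, len(self.queue))):
            sent.append(self.queue.popleft())
        return sent
```

A burst of arrivals beyond the capacity is dropped, while `tick()` always transmits at the same constant rate, which is exactly how the bucket smooths bursty traffic into a uniform flow.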
Open Loop Congestion Control
TOKEN BUCKET ALGORITHM:

The Token Bucket algorithm is designed to control the amount of data that can be
sent into a network. It allows a certain burst of data to be transmitted while ensuring
that the average data rate does not exceed a specified limit.
• The algorithm utilizes a conceptual "bucket" that holds "tokens," where each
token represents permission to send a fixed amount of data (typically one packet
or a specific number of bytes).
• The algorithm allows for bursts of traffic to be transmitted when enough tokens
are available, up to the bucket's capacity, enabling temporary increases in data
flow.
Open Loop Congestion Control
TOKEN BUCKET ALGORITHM:

• To transmit a packet, a token is required. If sufficient tokens are available in


the bucket, the packet is sent, and the corresponding tokens are removed.
If no tokens are available, the packet must wait.
• After a burst, the transmission rate reverts to the average rate determined
by the token generation rate, ensuring consistent overall bandwidth usage.
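The token-accounting rules above can be sketched as follows (an illustrative toy, with names and units of my choosing; real implementations usually meter bytes against wall-clock time):

```python
class TokenBucket:
    """Token bucket sketch: tokens accumulate at `fill_rate` per tick,
    up to `capacity`; sending consumes one token per packet."""

    def __init__(self, capacity, fill_rate):
        self.capacity = capacity
        self.fill_rate = fill_rate
        self.tokens = capacity  # start full, allowing an initial burst

    def tick(self):
        """One time unit: generate tokens, capped at the bucket capacity."""
        self.tokens = min(self.capacity, self.tokens + self.fill_rate)

    def try_send(self, size=1):
        """Send only if enough tokens are available; otherwise the
        packet must wait for more tokens to be generated."""
        if self.tokens >= size:
            self.tokens -= size
            return True
        return False
```

A full bucket permits a burst of up to `capacity` packets at once, but sustained throughput is bounded by `fill_rate`, which is the average-rate guarantee described above.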
Closed Loop Congestion Control
TCP CONGESTION CONTROL:

AIMD (Additive Increase Multiplicative Decrease)

• The congestion window (the amount of data TCP can send) increases by a fixed amount (usually one segment) each round-trip time (RTT) when no packet loss is detected.
• When packet loss is detected (a sign of congestion), the congestion window is reduced by a factor (typically halved) to alleviate congestion.
Closed Loop Congestion Control
AIMD (Additive Increase Multiplicative Decrease)

• This balance ensures efficient use of available bandwidth (by gradually increasing data flow) while preventing congestion (by sharply reducing the flow when congestion is detected).
• AIMD creates a "sawtooth" pattern, where the window size increases linearly (additive) and drops sharply (multiplicative) when packet loss occurs, leading to stable data transmission over time.
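One AIMD update step, and the sawtooth it produces, can be sketched in a few lines (the parameter names and the simulated loss condition are illustrative):

```python
def aimd_step(cwnd, loss, add=1, factor=0.5, min_cwnd=1):
    """One RTT of AIMD: additive increase when no loss is detected,
    multiplicative decrease (typically halving) when loss occurs."""
    if loss:
        return max(min_cwnd, cwnd * factor)
    return cwnd + add

# Simulating a few RTTs shows the sawtooth: the window climbs linearly
# until a (pretend) loss at cwnd >= 6, halves, then climbs again.
cwnd, trace = 1, []
for rtt in range(8):
    cwnd = aimd_step(cwnd, loss=(cwnd >= 6))
    trace.append(cwnd)
print(trace)  # linear climb, sharp drop, linear climb
```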
Closed Loop Congestion Control
Slow Start

• Slow Start begins with a small congestion window, typically set to one or
two segments (often equal to the Maximum Segment Size, MSS).
• For each acknowledgment received, the congestion window is increased, producing exponential growth: the congestion window (CWND) doubles for every round-trip time (RTT) in which packets are acknowledged successfully.
• As the congestion window grows, it eventually reaches a threshold known
as the slow start threshold (ssthresh). This threshold helps determine when
to switch to the Congestion Avoidance phase.
Closed Loop Congestion Control
Slow Start

• If packet loss occurs (indicating network congestion), the slow start phase is halted: the slow start threshold is set to half the current window and the congestion window is reduced to prevent further congestion.
• Once the congestion window surpasses the slow start threshold, TCP
transitions to the Congestion Avoidance phase, where the growth of the
window changes from exponential to linear to stabilize the transmission
rate.
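The two growth phases and the loss reaction can be sketched as follows. This is a simplified per-RTT model (real TCP grows the window per ACK, and the loss reaction shown is the classic Tahoe-style one); the function names are my own:

```python
def next_cwnd(cwnd, ssthresh, mss=1):
    """One RTT of window growth: exponential (doubling) while below
    ssthresh (Slow Start), then linear (Congestion Avoidance)."""
    if cwnd < ssthresh:
        return min(cwnd * 2, ssthresh)  # Slow Start: double per RTT
    return cwnd + mss                   # Congestion Avoidance: linear

def on_loss(cwnd):
    """On packet loss: ssthresh becomes half the current window and
    the window restarts from one segment.  Returns (cwnd, ssthresh)."""
    return 1, max(2, cwnd // 2)
```

Starting from `cwnd = 1` with `ssthresh = 8`, the window grows 2, 4, 8 (exponential), then 9, 10, 11 (linear), matching the phase transition described above.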
Flow Control
The process of regulating the data transmission rate between two nodes is known as flow control. If the sender is fast, the receiver may be unable to receive and process the data, for example because of a high traffic load or low processing power at the receiver. Flow control helps to avoid this type of situation: it enables the sender to control and manage the transmission speed, preventing it from sending data faster than the receiver can receive and process it, and so preventing data overflow at the receiving node.
Buffering Technique
• Buffering refers to the use of temporary storage areas (buffers) at both the sender and receiver ends of a communication channel. The primary purpose of buffering is to accommodate differences in processing speeds between the sender and receiver, ensuring smooth and efficient data transmission without loss or overflow.
• When the sender transmits data, it first writes the data into a buffer rather than sending it directly to the receiver. The receiver then reads the data from its own buffer at its own pace. This allows the sender to continue sending data without waiting for the receiver to process each piece of data immediately.
Rate Based Flow Control
• Rate-Based Flow Control is a mechanism that regulates the transmission rate of data from the sender to the receiver based on the network conditions, available bandwidth, and the receiver's capacity. The primary purpose is to optimize the flow of data and prevent overwhelming the receiver or causing network congestion.
• The sender monitors the network conditions and adjusts its transmission rate dynamically, ensuring that it matches the receiving capacity of the receiver. By using feedback mechanisms, the sender can determine how much data it can safely send without causing buffer overflow at the receiver or introducing excessive delays in the network.
Rate Based Flow Control
• Explicit Feedback: The receiver sends specific control messages to the sender, indicating its current state, such as how much data it can handle or whether it is ready to receive more data. Examples include TCP acknowledgment (ACK) packets that confirm receipt of data and, in some protocols, negative acknowledgment (NAK) packets that indicate lost packets.
• Implicit Feedback: The sender deduces the receiver's state based on timing and performance metrics, such as round-trip time (RTT) or packet loss rates, without receiving direct messages. For example, if the sender notices an increase in packet loss, it may interpret this as a signal to reduce the transmission rate.
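An implicit-feedback rate adjustment like the one just described can be sketched in one function. The thresholds and multipliers below are illustrative assumptions, not values from any standard:

```python
def adjust_rate(rate, loss_rate, rtt, base_rtt,
                increase=1.05, decrease=0.7, loss_threshold=0.01):
    """Infer congestion from the measured loss rate or an inflated RTT
    (no direct messages from the receiver needed) and back off
    multiplicatively; otherwise probe gently for more bandwidth."""
    if loss_rate > loss_threshold or rtt > 1.5 * base_rtt:
        return rate * decrease  # congestion inferred: slow down
    return rate * increase      # network looks healthy: speed up
```

Called once per measurement interval, this keeps the sending rate oscillating around what the network can actually carry.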
Sliding Window

• The sliding window protocol is a method for controlling the flow of data packets between a sender and receiver in a way that allows for efficient use of network resources. Its main purpose is to improve data transmission efficiency and manage the flow of packets to prevent congestion and data loss.
Sliding Window

• Window Size: The protocol defines a "window" that indicates the number of packets
(or bytes) that can be sent before needing an acknowledgment from the receiver.
• Sending Data: The sender can transmit multiple packets up to the size of the window
without waiting for an acknowledgment. Once the sender receives an
acknowledgment for the first packet, the window "slides" forward, allowing the next
packet to be sent.
• Receiving Data: The receiver keeps track of which packets have been received and
sends back acknowledgments for them. The receiver's buffer also plays a role in
managing the incoming data flow.
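The sender side of the window mechanics above can be sketched as follows (a toy model with cumulative ACKs; class and method names are illustrative):

```python
class SlidingWindowSender:
    """Sender-side sliding window sketch: at most `window` packets may
    be in flight; cumulative ACKs slide the window base forward."""

    def __init__(self, window, total):
        self.window = window
        self.total = total
        self.base = 0      # oldest unacknowledged sequence number
        self.next_seq = 0  # next sequence number to transmit

    def sendable(self):
        """Sequence numbers allowed out right now, without waiting."""
        limit = min(self.base + self.window, self.total)
        seqs = list(range(self.next_seq, limit))
        self.next_seq = limit
        return seqs

    def ack(self, seq):
        """Cumulative ACK: everything up to `seq` was received, so the
        window slides forward by that amount."""
        self.base = max(self.base, seq + 1)
```

With `window=3`, the sender can transmit packets 0, 1, 2 immediately; it is then blocked until an ACK arrives, at which point the window slides and the next packets become sendable.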
Sliding Window

• Types of Sliding Window Protocols


• Go-Back-N ARQ: In this version, the sender can send several packets, but
if an error occurs, it must retransmit all packets from the point of error
onward.
• Selective Repeat ARQ: This allows the sender to retransmit only the
specific packets that were not acknowledged, rather than all subsequent
packets, improving efficiency.
Stop And Wait

• Stop-and-Wait ARQ is a protocol that ensures reliable data transmission by requiring the sender to stop and wait for an acknowledgment (ACK) from the receiver after sending each data packet before sending the next one. The primary purpose is to confirm the successful receipt of data and manage flow control, preventing data loss and ensuring that the receiver can process packets without being overwhelmed.
Stop And Wait

• Transmission Process: The sender transmits a single data packet to the receiver. After sending the packet, the sender stops and waits for an acknowledgment from the receiver.
• Acknowledgment: The receiver, upon successfully receiving the packet, sends back an acknowledgment (ACK) to the sender. If the sender receives the ACK, it proceeds to send the next packet. If the sender does not receive an ACK within a specified timeout period (indicating a potential packet loss), it retransmits the same packet.
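The send/wait/retransmit loop above (including the retransmission-counter variant covered later) can be sketched like this. `transmit(pkt)` stands in for the unreliable send-and-wait-for-ACK step; all names are illustrative:

```python
def stop_and_wait_send(packets, transmit, max_retries=5):
    """Stop-and-Wait sketch: send one packet, wait for its ACK, and
    retransmit on timeout, up to `max_retries` extra attempts.

    `transmit(pkt)` models the channel: it returns True when the ACK
    arrives within the timeout and False on a (simulated) timeout.
    """
    for pkt in packets:
        for attempt in range(max_retries + 1):
            if transmit(pkt):
                break  # ACK received: move on to the next packet
        else:
            # Retransmission limit reached without an ACK.
            raise RuntimeError(f"gave up on packet {pkt!r}")
```

For example, over a channel where every packet's first transmission times out but the second succeeds, each packet is sent exactly twice and the transfer completes.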
Stop And Wait – Types

• Stop-and-Wait ARQ
• The simplest form of the protocol where the sender transmits a single
packet and waits for an acknowledgment before sending the next packet.
• Mechanism: After sending a data packet, the sender stops and waits for an acknowledgment (ACK) from the receiver. If the ACK is not received within a specified timeout period, the sender retransmits the same packet.
Stop And Wait – Types

• Stop-and-Wait ARQ with Timeouts


• In this approach, the sender has a fixed timeout period after which it
assumes that the packet was lost if no acknowledgment is received.
• Mechanism: If the timeout expires without receiving an ACK, the sender retransmits the last packet. This helps to manage packet loss and ensures reliable delivery.
Stop And Wait – Types

• Stop-and-Wait ARQ with Retransmission Counter


• This variant includes a retransmission counter to limit the number of times
a packet can be retransmitted.
• Mechanism: The sender keeps track of the number of retransmissions for
each packet. If the retransmission limit is reached without receiving an ACK,
the sender may stop further retransmissions or take alternative actions.
THANK YOU
