Congestion Flow
V. ALSTON BENITO
What is Congestion?
Congestion in a computer network happens when there is too much data being
sent at the same time, causing the network to slow down. Just like a traffic jam
on a road, this leads to delays and data loss. When the network can’t handle all
the incoming data, it gets “clogged,” making it difficult for information to travel
smoothly from one place to another.
Congestion Control
Congestion control is a crucial concept in computer networks. It refers to the
methods used to prevent network overload and ensure smooth data flow. When
too much data is sent through the network at once, it can cause delays and data
loss. Congestion control techniques help manage the traffic, so all users can
enjoy a stable and efficient network connection. These techniques are essential
for reliable data transmission.
Open Loop Congestion Control
LEAKY BUCKET ALGORITHM:
• Bursty traffic is converted to uniform traffic by the leaky bucket: incoming
packets first enter a buffer (the "bucket") and are then sent out onto the network
at a constant rate, regardless of how irregularly they arrived.
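A minimal Python sketch of this idea (the class name, capacities, and rates below are illustrative assumptions, not part of the original slides): a burst of packets queues up in a bounded bucket and drains at a fixed rate per tick.

```python
from collections import deque

class LeakyBucket:
    """Toy leaky bucket: packets queue up and leave at a constant rate."""

    def __init__(self, capacity_pkts, drain_rate_pps):
        self.capacity = capacity_pkts      # max packets the bucket can hold
        self.drain_rate = drain_rate_pps   # packets released per tick (uniform output)
        self.queue = deque()

    def arrive(self, packet):
        """Bursty arrival: accept the packet if the bucket has room, else drop it."""
        if len(self.queue) < self.capacity:
            self.queue.append(packet)
            return True
        return False  # bucket overflow -> packet dropped

    def tick(self):
        """Called once per time unit: release at most drain_rate packets."""
        sent = []
        for _ in range(min(self.drain_rate, len(self.queue))):
            sent.append(self.queue.popleft())
        return sent

# A burst of 10 packets arrives at once, but only 2 leave the bucket per tick.
bucket = LeakyBucket(capacity_pkts=8, drain_rate_pps=2)
accepted = [bucket.arrive(f"pkt{i}") for i in range(10)]
for t in range(5):
    print(t, bucket.tick())
```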
Open Loop Congestion Control
TOKEN BUCKET ALGORITHM:
The Token Bucket algorithm is designed to control the amount of data that can be
sent into a network. It allows a certain burst of data to be transmitted while ensuring
that the average data rate does not exceed a specified limit.
• The algorithm utilizes a conceptual "bucket" that holds "tokens," where each
token represents permission to send a fixed amount of data (typically one packet
or a specific number of bytes).
• The algorithm allows for bursts of traffic to be transmitted when enough tokens
are available, up to the bucket's capacity, enabling temporary increases in data
flow.
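A hedged Python sketch of the token-bucket behaviour described above (the class name, rate, and capacity values are illustrative assumptions): tokens accumulate at a fixed rate up to the bucket's capacity, and a packet is sent only if enough tokens are available, which permits short bursts while bounding the average rate.

```python
import time

class TokenBucket:
    """Toy token bucket: tokens refill at `rate` per second up to `capacity`."""

    def __init__(self, rate_tokens_per_s, capacity):
        self.rate = rate_tokens_per_s
        self.capacity = capacity
        self.tokens = capacity          # start full so an initial burst is allowed
        self.last_refill = time.monotonic()

    def _refill(self):
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last_refill) * self.rate)
        self.last_refill = now

    def try_send(self, packet_cost=1):
        """Consume tokens for one packet; return True if sending is allowed now."""
        self._refill()
        if self.tokens >= packet_cost:
            self.tokens -= packet_cost
            return True
        return False

# Allows a burst of up to 5 packets, then throttles to ~2 packets/second on average.
tb = TokenBucket(rate_tokens_per_s=2, capacity=5)
print([tb.try_send() for _ in range(8)])  # first 5 succeed, the rest are rejected
```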
Closed Loop Congestion Control
AIMD (Additive Increase Multiplicative Decrease)
• The congestion window (the amount of data TCP can send) increases by a fixed
amount (usually one segment) each round-trip time (RTT) when no packet loss
is detected.
• When packet loss is detected (a sign of congestion), the congestion window is
reduced by a factor (typically halved) to alleviate congestion.
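The additive-increase/multiplicative-decrease rule can be summarised with a small simulation (a toy model, not real TCP; the loss pattern and step sizes below are assumptions chosen for illustration):

```python
def aimd(rtts, loss_rtts, cwnd=1.0, add_step=1.0, decrease_factor=0.5):
    """Return the congestion window after each RTT under the AIMD rule."""
    history = []
    for rtt in range(rtts):
        if rtt in loss_rtts:
            cwnd = max(1.0, cwnd * decrease_factor)  # multiplicative decrease on loss
        else:
            cwnd += add_step                         # additive increase per loss-free RTT
        history.append(cwnd)
    return history

# Losses at RTTs 8 and 14 halve the window; otherwise it grows by one segment per RTT.
print(aimd(rtts=20, loss_rtts={8, 14}))
```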
Closed Loop Congestion Control
Slow Start
• Slow Start begins with a small congestion window, typically set to one or
two segments (often equal to the Maximum Segment Size, MSS).
• For each acknowledgment received, the congestion window is increased. In
effect, the congestion window (CWND) doubles for every round-trip time (RTT)
in which packets are acknowledged successfully, giving exponential growth.
• As the congestion window grows, it eventually reaches a threshold known
as the slow start threshold (ssthresh). This threshold helps determine when
to switch to the Congestion Avoidance phase.
• If packet loss occurs (indicating network congestion), the slow start phase
is halted, and the congestion window is adjusted (typically halved) to
prevent further congestion.
• Once the congestion window surpasses the slow start threshold, TCP
transitions to the Congestion Avoidance phase, where the growth of the
window changes from exponential to linear to stabilize the transmission
rate.
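The behaviour in the bullets above can be traced with a short toy simulation (the initial values, ssthresh, and the loss event are assumptions for illustration): the window doubles each RTT during slow start, grows linearly after crossing ssthresh, and is halved when a loss is detected.

```python
def tcp_window_trace(rtts, loss_rtts, mss=1, ssthresh=16):
    """Toy trace of CWND (in segments) per RTT: slow start, then congestion
    avoidance once CWND reaches ssthresh; a loss halves the window and ssthresh."""
    cwnd = mss                       # slow start begins with ~1 segment
    trace = []
    for rtt in range(rtts):
        if rtt in loss_rtts:         # loss detected -> back off
            ssthresh = max(mss, cwnd // 2)
            cwnd = ssthresh
        elif cwnd < ssthresh:        # slow start: exponential growth
            cwnd *= 2
        else:                        # congestion avoidance: linear growth
            cwnd += mss
        trace.append(cwnd)
    return trace

# Doubles 1->2->4->8->16, then grows by 1 per RTT, then halves at the loss (RTT 10).
print(tcp_window_trace(rtts=15, loss_rtts={10}))
```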
Flow Control
The process of regulating the data transmission rate between two nodes is
known as flow control. If the sender is quick, the receiver may be unable to get
and process the data. It may happen due to a high traffic load and a low process
power in the receiver. Flow control may help to avoid this type of situation. It
enables the sender to control and manage the transmission speed while
preventing data overflow from the transmitting node. Similarly, this method
allows a sender to send data quicker than the receiver while also receiving and
processing data.
Buffering Technique
• Buffering refers to the use of temporary storage areas (buffers) at both the
sender and receiver ends of a communication channel. The primary purpose
of buffering is to accommodate differences in processing speeds between
the sender and receiver, ensuring smooth and efficient data transmission
without loss or overflow.
• When the sender transmits data, it first writes the data into a buffer rather
than sending it directly to the receiver. The receiver then reads the data
from its own buffer at its own pace. This allows the sender to continue
sending data without waiting for the receiver to process each piece of data
immediately.
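A minimal Python illustration of this decoupling (buffer size, chunk count, and timing are assumptions): the sender writes into a bounded queue and the receiver drains it at its own slower pace, so the fast sender blocks only when the buffer is actually full.

```python
import queue
import threading
import time

buffer = queue.Queue(maxsize=4)   # bounded receive buffer

def sender():
    for i in range(10):
        buffer.put(f"chunk-{i}")  # blocks when the buffer is full (no overflow)
        print("sent", i)

def receiver():
    for _ in range(10):
        data = buffer.get()       # reads at its own pace
        time.sleep(0.05)          # simulate slower processing
        print("processed", data)

t1, t2 = threading.Thread(target=sender), threading.Thread(target=receiver)
t1.start(); t2.start()
t1.join(); t2.join()
```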
Rate Based Flow Control
• Rate-Based Flow Control is a mechanism that regulates the transmission
rate of data from the sender to the receiver based on the network
conditions, available bandwidth, and the receiver's capacity. The primary
purpose is to optimize the flow of data and prevent overwhelming the
receiver or causing network congestion.
• The sender monitors the network conditions and adjusts its transmission
rate dynamically, ensuring that it matches the receiving capacity of the
receiver. By using feedback mechanisms, the sender can determine how
much data it can safely send without causing buffer overflow at the
receiver or introducing excessive delays in the network.
• Explicit Feedback: The receiver sends specific control messages to the
sender, indicating its current state, such as how much data it can handle or
whether it is ready to receive more data. Examples include TCP
acknowledgment (ACK) packets that confirm receipt of data and negative
acknowledgment (NAK) packets that indicate lost packets.
• Implicit Feedback: The sender deduces the receiver's state based on
timing and performance metrics, such as round-trip time (RTT) or packet
loss rates, without receiving direct messages. For example, if the sender
notices an increase in packet loss, it may interpret this as a signal to reduce
the transmission rate.
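A toy sketch of rate adaptation driven by implicit feedback (the thresholds, adjustment factors, and loss values below are assumptions, not a standard algorithm): the sender lowers its rate when observed loss rises and probes gently upward when the path looks healthy.

```python
def adjust_rate(current_rate, loss_rate, min_rate=1.0, max_rate=100.0):
    """Implicit feedback: react to the measured loss rate instead of explicit messages."""
    if loss_rate > 0.05:                 # heavy loss -> back off multiplicatively
        new_rate = current_rate * 0.5
    elif loss_rate > 0.01:               # mild loss -> hold the current rate
        new_rate = current_rate
    else:                                # clean path -> probe for more bandwidth
        new_rate = current_rate + 5.0
    return max(min_rate, min(max_rate, new_rate))

rate = 40.0  # packets per second (illustrative starting rate)
for loss in [0.0, 0.0, 0.08, 0.02, 0.0]:
    rate = adjust_rate(rate, loss)
    print(f"loss={loss:.2f} -> rate={rate:.1f} pps")
```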
Sliding Window
• The sliding window protocol is a method for controlling the flow of data
packets between a sender and receiver in a way that allows for efficient use
of network resources. Its main purpose is to improve data transmission
efficiency and manage the flow of packets to prevent congestion and data
loss.
• Window Size: The protocol defines a "window" that indicates the number of packets
(or bytes) that can be sent before needing an acknowledgment from the receiver.
• Sending Data: The sender can transmit multiple packets up to the size of the window
without waiting for an acknowledgment. Once the sender receives an
acknowledgment for the first packet, the window "slides" forward, allowing the next
packet to be sent.
• Receiving Data: The receiver keeps track of which packets have been received and
sends back acknowledgments for them. The receiver's buffer also plays a role in
managing the incoming data flow.
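A simplified sender-side view of the mechanism above in Python (the window size, packet count, and the in-order, loss-free ACK behaviour are simplifying assumptions): up to `window` unacknowledged packets may be outstanding, and each cumulative ACK slides the window forward.

```python
def sliding_window_send(total_packets, window):
    """Simulate a sender that keeps at most `window` unACKed packets in flight.
    ACKs are assumed to arrive in order and without loss (a simplification)."""
    base = 0          # oldest unacknowledged packet
    next_seq = 0      # next packet to send
    while base < total_packets:
        # Send everything the current window allows.
        while next_seq < base + window and next_seq < total_packets:
            print(f"send packet {next_seq}")
            next_seq += 1
        # An ACK for the oldest outstanding packet arrives; the window slides by one.
        print(f"  ACK {base} received -> window slides to [{base + 1}, {base + window}]")
        base += 1

sliding_window_send(total_packets=6, window=3)
```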
Stop-and-Wait ARQ
• The simplest form of the protocol: the sender transmits a single packet and
waits for an acknowledgment before sending the next packet.
• Mechanism: After sending a data packet, the sender stops and waits for
an acknowledgment (ACK) from the receiver. If the ACK is not received
within a specified timeout period, the sender retransmits the same packet.
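A toy simulation of the timeout-and-retransmit loop described above (the random loss model, probability, and seed are illustrative assumptions; real ARQ waits on an actual timer):

```python
import random

def stop_and_wait_send(packets, loss_probability=0.3, seed=42):
    """Send one packet at a time; retransmit whenever the ACK does not arrive."""
    random.seed(seed)
    for seq, data in enumerate(packets):
        while True:
            print(f"send packet {seq} ({data})")
            ack_received = random.random() > loss_probability  # lost packet or lost ACK
            if ack_received:
                print(f"  ACK {seq} received")
                break
            print(f"  timeout for packet {seq}, retransmitting")

stop_and_wait_send(["a", "b", "c"])
```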
Stop And Wait – Types