4-CN TCP CongesCntrlFnlShort

The document discusses TCP windows, flow control, and congestion control in the transport layer of computer networks. It explains the mechanisms of TCP, including send and receive windows, congestion window management, and error control strategies such as acknowledgments and checksums. Additionally, it covers congestion control algorithms like Slow Start, Congestion Avoidance, and the DECbit and RED techniques for proactive congestion management.

COMPUTER NETWORKS

Unit-4
TCP Windows, Flow Control and Congestion Control
UNIT IV - TRANSPORT LAYER
• Transport Layer Services
• Connectionless and Connection Oriented Protocols
• User Datagram Protocol
• Transmission Control Protocol
• TCP Services
• TCP Features
• Segment
• TCP Connection Establishment and Termination
• TCP Congestion Control
Presentation Outline
• TCP Windows
• Flow Control
• Congestion Control
• TCP Congestion Control
• Slow Start, Exponential Increase
• Congestion Avoidance, Additive Increase
• FSM for Tahoe TCP
• FSM for Reno TCP
• Additive Increase, Multiplicative Decrease
• TCP Timers
Windows in TCP
TCP uses three windows (a send window, a receive window, and a congestion window) for each direction of data transfer, which means six windows for a bidirectional communication.

To keep the discussion simple, we make the unrealistic assumption that communication is unidirectional. Bidirectional communication can then be treated as two unidirectional communications with piggybacking.
Send window in TCP

The send window size is dictated by the receiver (100 bytes in this example). The send window in TCP is similar to the one used with the Selective-Repeat protocol, but with some differences:
1. The window size in SR is measured in packets, while the window size in TCP is measured in bytes.
2. TCP can store data received from the process and send them later.
3. The number of timers: SR may use several timers, but TCP uses only one timer.
Receive window in TCP

The receive window size determines the number of bytes that the receiver can accept from the sender before being overwhelmed (flow control).

TCP allows the receiving process to pull data at its own pace. This means that part of the receive buffer may be occupied by bytes that have been received and acknowledged but are waiting to be pulled by the receiving process.

The receive window size is therefore always smaller than or equal to the buffer size:
rwnd = buffer size - number of bytes waiting to be pulled
Example Window

Assume the receiver's buffer is 1000 bytes and it initially advertises rwnd = 1000.

First send:
Sender → sends 1000 bytes (uses the entire window).
Receiver → buffer now holds 1000 bytes; rwnd drops to 0.

After the receiver processes 500 bytes:
Receiver → pulls 500 bytes, freeing space.
Receiver → sends an acknowledgment advertising a window that is 500 bytes larger.
Sender → can now send 500 more bytes.

Sender sends more data:
Sender → sends another 500 bytes, making the total data sent 1500 bytes.
Receiver → keeps processing and freeing space as it continues to receive and acknowledge data.
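The accounting in this example can be sketched in a few lines of Python. This is only an illustration of the rwnd formula above, assuming a 1000-byte receive buffer; it is not real TCP code.

# Minimal sketch of receive-window (rwnd) accounting, assuming a 1000-byte buffer.
BUFFER_SIZE = 1000
received = 0      # bytes sitting in the receive buffer, not yet pulled by the process

def rwnd():
    # rwnd = buffer size - number of bytes waiting to be pulled
    return BUFFER_SIZE - received

# First send: sender uses the entire advertised window.
received += 1000
print("after first send, rwnd =", rwnd())          # 0

# Receiving process pulls 500 bytes; the next ACK re-opens the window by 500.
received -= 500
print("after pulling 500 bytes, rwnd =", rwnd())   # 500

# Sender may now transmit another 500 bytes (1500 bytes sent in total).
received += 500
print("after second send, rwnd =", rwnd())         # 0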
The Congestion Window
• To deal with congestion, the source maintains a new state variable called CongestionWindow (cwnd).
  – It limits the amount of data the source has in transit at a given time.
• TCP sends no faster than the slowest component -- the network or the destination host -- can accommodate.
• Decrease the window when TCP perceives high congestion.
• Increase the window when TCP knows that there is not much congestion.
• How? Since increased congestion is more catastrophic, the window is reduced more aggressively than it is increased.
• The increase is additive and the decrease is multiplicative -- the Additive Increase/Multiplicative Decrease (AIMD) behavior of TCP.
Receive Window Vs Congestion Window

• Congestion control is a global issue – it involves every router and host within the subnet.
• Flow control is point-to-point in scope – it involves just the sender and the receiver.

Actual window size = min(rwnd, cwnd)


What is Flow / Congestion / Error Control?

• Flow control: algorithms that prevent the sender from overrunning the receiver with data.

• Congestion control: algorithms that prevent the sender from overloading the network.

• Error control: algorithms that recover from, or conceal the effects of, packet losses.

 The goal of each control mechanism is different, but their implementations are combined.


Flow Control
Flow control balances the rate a producer creates data
with the rate a consumer can use the data.
TCP separates flow control from error control.
In this section we discuss flow control, ignoring error
control. We assume that the logical channel between
the sending and receiving TCP is error-free.
Data Flow and Flow Control Feedbacks in TCP
Error Control
TCP is a reliable transport-layer protocol. This means that an
application program that delivers a stream of data to TCP relies
on TCP to deliver the entire stream to the application program
on the other end in order, without error, and without any part
lost or duplicated.

Error control in TCP is achieved through the use of three simple tools:
1. Checksum: used to check for a corrupted segment.
2. Acknowledgment: used to confirm the receipt of data segments (cumulative and selective).
3. Time-out: used to detect lost or unacknowledged segments and trigger retransmission.
Normal operation

When does a receiver generate acknowledgments? Several rules have been defined:
1. When end A sends a data segment to end B, it must include (piggyback) an acknowledgment that
gives the next sequence number it expects to receive.
2. When the receiver has no data to send and it receives an in-order segment (with expected sequence
number) and the previous segment has already been acknowledged, the receiver delays sending an
ACK segment until another segment arrives or until a period of time (normally 500 ms) has passed.
3. When a segment arrives with a sequence number that is expected by the receiver, and the previous
in-order segment has not been acknowledged, the receiver immediately sends an ACK segment.
Lost Segment

Rule 4: When a segment arrives with an out-of-order sequence number that is higher than expected, the
receiver immediately sends an ACK segment announcing the sequence number of the next expected
segment. This leads to the fast retransmission of missing segments.
Rule 5: When a missing segment arrives, the receiver sends an ACK segment to announce the next sequence number expected. This informs the sender that the segments reported missing have been received.
Fast Retransmission
Most implementations today follow the three duplicate ACKs rule and retransmit the missing segment
immediately without waiting for the time-out. This feature is called fast retransmission.
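A minimal sketch of the three-duplicate-ACKs rule as described above; the variable names are illustrative and not taken from any particular TCP implementation.

# Sketch: detect three duplicate ACKs and trigger fast retransmission.
last_ack, dup_count = None, 0

def on_ack(ack_no):
    """Process one incoming ACK on the sender side."""
    global last_ack, dup_count
    if ack_no == last_ack:
        dup_count += 1
        if dup_count == 3:                 # third duplicate of the same ACK
            retransmit(ack_no)             # resend the segment the receiver is waiting for
    else:
        last_ack, dup_count = ack_no, 0    # a new cumulative ACK resets the counter

def retransmit(seq_no):
    print(f"fast retransmit: resend segment starting at byte {seq_no}")

# One original ACK followed by three duplicates triggers the retransmission.
for ack in (1000, 1000, 1000, 1000):
    on_ack(ack)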
Lost Acknowledgment

TCP uses cumulative acknowledgment. We can say that the next acknowledgment
automatically corrects the loss of the previous acknowledgment.
TCP Congestion Control
Two events are used to detect congestion: (1) time-out (more severe) and (2) three duplicate ACKs.

Three congestion policies are applied: (1) slow start, (2) congestion avoidance, and (3) fast recovery.
TCP -Window Control

• The TCP sender maintains two new variables:

  cwnd – congestion window
  cwnd is set based on the perceived level of congestion in the network.

  ssthresh – slow-start threshold
  ssthresh can be thought of as an estimate of the level below which congestion is not expected.

• send_win = min(rwnd, cwnd)
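The two variables and the effective send window can be captured in a tiny sketch (Python, with the window counted in segments; the initial values are illustrative, not mandated by TCP).

# Sketch of the sender-side state described above.
MSS = 1                 # count windows in segments for simplicity
cwnd = 1 * MSS          # congestion window, driven by perceived congestion
ssthresh = 64 * MSS     # slow-start threshold: below it, congestion is not expected
rwnd = 100 * MSS        # receive window advertised by the receiver

# The sender never has more than min(rwnd, cwnd) unacknowledged data in flight.
send_window = min(rwnd, cwnd)
print("effective send window =", send_window)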
TCP -Congestion Control-States
• Slow Start (cwnd < ssthresh)
  – Exponential increase: cwnd = cwnd + cwnd (the window doubles every RTT)
• Congestion Avoidance (cwnd > ssthresh)
  – Additive increase, multiplicative decrease
  – ssthresh = old_cwnd / 2
• Fast Recovery
  – Additive increase
  – ssthresh = old_cwnd / 2
  – cwnd = ssthresh + 3
Slow Start, Exponential Increase
(cwnd < ssthresh)
The sender starts with cwnd = 1, which means it can send only one segment. After the first ACK arrives, the congestion window is increased by 1, so the window size is now 2. After sending two segments and receiving two individual acknowledgments for them, the congestion window becomes 4, and so on. The size of the congestion window in this algorithm is a function of the number of ACKs that have arrived.
The size of the congestion window increases exponentially until it reaches a threshold (ssthresh).
Slow Start in Action
(cwnd < ssthresh)

(Figure: the sender's window grows 1 → 2 → 4 → 8 segments per round trip as data segments (D) flow from Src to Dest and acknowledgments (A) return.)
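A minimal sketch of the exponential growth shown above, counting the window in segments; the ssthresh value is chosen only for illustration.

# Slow start sketch: exponential growth of cwnd (in segments) per RTT.
cwnd, ssthresh = 1, 16

rtt = 0
while cwnd < ssthresh:
    print(f"RTT {rtt}: send {cwnd} segment(s)")
    # One ACK arrives per segment; each ACK adds one segment to cwnd,
    # so after a full window of ACKs the window has doubled.
    cwnd += cwnd
    rtt += 1
print(f"cwnd reached ssthresh ({ssthresh}); switch to congestion avoidance")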
Congestion Avoidance, Additive Increase
(cwnd > ssthresh)
Each time the whole "window" of segments is acknowledged, the size of the congestion window is increased by one. A window here is the number of segments transmitted during one RTT.
Additive Increase/Multiplicative Decrease (AIMD)
(cwnd > ssthresh)
• Each time congestion occurs, the congestion window is halved:
  ssthresh = old_cwnd / 2
  – For example, if the current window is 16 segments and a time-out occurs (implying packet loss), the window is reduced to 8.
  – Repeated losses may eventually reduce the window to 1 segment.
• The window is not allowed to fall below 1 segment (one MSS).
• For each congestion window's worth of packets that has been sent out successfully (i.e., ACKed), the congestion window is increased by the size of one segment.
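The additive-increase/multiplicative-decrease rule can be sketched as two small update functions. This is a simplification that counts the window in segments and follows the halving rule stated above (Tahoe's harsher time-out reaction is shown later).

# AIMD sketch, counting the window in segments.
cwnd, ssthresh = 16, 16

def on_window_acked():
    """A whole congestion window's worth of data was ACKed: additive increase by one segment."""
    global cwnd
    cwnd += 1

def on_congestion():
    """Congestion detected: multiplicative decrease, never below 1 segment (MSS)."""
    global cwnd, ssthresh
    ssthresh = max(cwnd // 2, 1)
    cwnd = max(cwnd // 2, 1)

on_congestion()            # 16 segments -> 8, as in the time-out example above
on_window_acked()          # 8 -> 9 after one full window is acknowledged
print(cwnd, ssthresh)      # 9 8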
Fast Recovery (3 dup acks)
• Optional in TCP. The old version of TCP did not use it, but the
new versions try to use it.
• It starts when three duplicate ACKs arrive, which is interpreted as
light congestion in the network.
• Like congestion avoidance, this algorithm is also an additive
increase, but it increases the size of the congestion window when
a duplicate ACK arrives (after the three duplicate ACKs that
trigger the use of this algorithm).
• When the three duplicate ACKs arrive:
  ssthresh = old_cwnd / 2 and cwnd = ssthresh + 3;
  each additional duplicate ACK then increases cwnd by one segment.
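A sketch of the fast-recovery entry and the per-duplicate-ACK inflation described above (window counted in segments; variable names are illustrative).

# Fast recovery sketch: triggered by three duplicate ACKs (light congestion).
cwnd, ssthresh = 20, 20

def on_three_dup_acks():
    """Enter fast recovery: halve the threshold, inflate cwnd by the three duplicate ACKs."""
    global cwnd, ssthresh
    ssthresh = cwnd // 2        # ssthresh = old_cwnd / 2
    cwnd = ssthresh + 3         # cwnd = ssthresh + 3

def on_extra_dup_ack():
    """Each additional duplicate ACK signals one more segment has left the network."""
    global cwnd
    cwnd += 1                   # additive increase while in fast recovery

on_three_dup_acks()             # cwnd: 20 -> 13, ssthresh: 10
on_extra_dup_ack()              # cwnd: 14
print(cwnd, ssthresh)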
TCP Congestion Control
• Tahoe (Jacobson 1988)
– Slow Start
– Congestion Avoidance
– Fast Retransmit
• Reno (Jacobson 1990)
– Tahoe +Fast Recovery
• New Reno (Fall & Floyd 1996)
– Partial Acks & Fast Recovery
TCP Tahoe-Algorithm
TCP Tahoe -Example
Summary: TCP Tahoe
• Thus:
  – When cwnd is below ssthresh, cwnd grows exponentially.
  – When cwnd is above ssthresh, cwnd grows linearly.
  – Upon a time-out, the "new" ssthresh is set to half of the current cwnd, and cwnd is reset to 1.
  – Tahoe TCP treats the two signs used for congestion detection, time-out and three duplicate ACKs, in the same way.
  – This version of TCP is called "TCP Tahoe".
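Because Tahoe treats both congestion signals identically, its reaction can be summarized in one small function; this is a sketch, with the window counted in segments.

# Tahoe sketch: time-out and three duplicate ACKs get the same treatment.
cwnd, ssthresh = 16, 32

def tahoe_on_loss():
    """On either congestion signal: ssthresh = cwnd / 2, then restart slow start."""
    global cwnd, ssthresh
    ssthresh = max(cwnd // 2, 2)   # halve the threshold
    cwnd = 1                       # reset the window and re-enter slow start

tahoe_on_loss()
print(cwnd, ssthresh)              # 1 16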


Reno TCP-Algorithm
Reno TCP-Example
Reno TCP
• Same as Tahoe TCP, but Fast Retransmit is extended to include Fast Recovery.
• Fast Recovery is entered once a certain number of duplicate ACKs have been received (the threshold is generally set to 3).
• As in Fast Retransmit, the sender retransmits the packet that has been lost, but instead of slow-starting, cwnd is cut in half and the sender then counts duplicate ACKs to determine when to send new packets.
New Reno Modifications to Fast
Recovery
– Partial ACK: an ACK that acknowledges some, but not all, of the segments that were outstanding at the start of fast recovery.

– If a partial ACK is received, the next lost segment is retransmitted immediately (as sketched below).

– The sender remains in fast recovery until all data that was outstanding when fast recovery was initiated has been ACKed.

– New Reno exits Fast Recovery only when all the data outstanding when it began has been acknowledged.
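The partial-ACK rule can be sketched as follows. Here recover marks the highest data outstanding when fast recovery began; the names are illustrative, not taken from any real TCP stack.

# New Reno sketch: stay in fast recovery until everything outstanding at entry is ACKed.
recover = 0               # highest byte outstanding when fast recovery started
in_fast_recovery = False

def enter_fast_recovery(highest_outstanding):
    global recover, in_fast_recovery
    recover, in_fast_recovery = highest_outstanding, True

def on_ack(ack_no):
    """Handle a (non-duplicate) ACK while in fast recovery."""
    global in_fast_recovery
    if not in_fast_recovery:
        return
    if ack_no <= recover:
        # Partial ACK: some, but not all, of the outstanding data was acknowledged.
        retransmit(ack_no)            # immediately resend the next missing segment
    else:
        in_fast_recovery = False      # full ACK: exit to congestion avoidance

def retransmit(seq_no):
    print(f"retransmit segment starting at byte {seq_no}")

enter_fast_recovery(highest_outstanding=5000)
on_ack(3000)   # partial ACK: retransmit immediately, remain in fast recovery
on_ack(6000)   # full ACK: exit fast recovery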
Summary of TCP Behavior
TCP Variation | Response to 3 dupACKs | Response to Partial ACK of Fast Retransmission | Response to "full" ACK of Fast Retransmission
Tahoe | Do fast retransmit, enter slow start | ++cwnd | ++cwnd
Reno | Do fast retransmit, enter fast recovery | Exit fast recovery, deflate window, enter congestion avoidance | Exit fast recovery, deflate window, enter congestion avoidance
NewReno | Do fast retransmit, enter modified fast recovery | Fast retransmit and deflate window; remain in modified fast recovery | Exit modified fast recovery, deflate window, enter congestion avoidance
Congestion Avoidance
• Congestion avoidance is a mechanism used in
networking to prevent network congestion, which
occurs when the amount of traffic exceeds the
network's capacity, causing packet loss, delays,
and reduced performance.
• To address this, various congestion control
algorithms are used to avoid or alleviate
congestion. Two notable approaches are DECbit
and RED.
DECbit (Decentralized Congestion Control)
• DECbit is a congestion avoidance technique used in TCP
(Transmission Control Protocol) networks. It is designed
to provide congestion feedback to the sender from the
network by using a simple and decentralized mechanism.
• How DECbit Works:
• Concept: DECbit operates by using explicit congestion
notification (ECN) in TCP/IP networks. It helps avoid
congestion by allowing routers to mark packets when they
sense congestion, and the end systems (e.g., TCP senders)
then react accordingly.
DECbit (Decentralized Congestion Control)
• Mechanism:
• The router monitors the queue length and marks packets
when congestion is detected. If the queue length exceeds a
predefined threshold, the router sets the congestion bit
(the DECbit) in the header of packets.
• The receiver, when it receives a packet with the DECbit
set, will send an acknowledgment (ACK) with the
congestion bit set.
• The sender then reduces its transmission rate (e.g., by
reducing the congestion window in TCP) to alleviate
congestion.
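A rough sketch of the marking and reaction described above. The queue threshold, the packet field names, and the 0.875 reduction factor are illustrative assumptions, not values given in these slides.

# DECbit-style sketch: the router marks packets, the sender reacts to marked ACKs.
QUEUE_THRESHOLD = 1        # illustrative: mark when the queue length exceeds the threshold

def forward(packet, queue_length):
    """Router: set the congestion bit before forwarding if the queue is too long."""
    if queue_length > QUEUE_THRESHOLD:
        packet["congestion_bit"] = 1   # the receiver echoes this bit in its ACK
    return packet

def on_ack(ack, cwnd):
    """Sender: shrink the window if the echoed bit is set, otherwise keep growing."""
    if ack.get("congestion_bit"):
        return max(cwnd * 0.875, 1.0)  # illustrative multiplicative reduction
    return cwnd + 1                    # otherwise additive increase

# In a real exchange the bit set on a data packet is echoed by the receiver in its ACK;
# here the marked packet is fed straight to on_ack for brevity.
marked = forward({"payload": b"data"}, queue_length=3)
print(on_ack(marked, cwnd=8))          # 7.0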
DECbit (Decentralized Congestion Control)
• Congestion Feedback: The DECbit approach
relies on feedback from the receivers to inform
the sender of network congestion. The sender
reduces its transmission rate based on this
feedback, trying to avoid further congestion.
DECbit (Decentralized Congestion Control)
• Pros of DECbit:
• Low Overhead: Since it relies on packet markings rather than
complex signaling, DECbit has low overhead.
• Decentralized: DECbit does not require a central server to
manage congestion control, making it scalable and adaptable.
• Cons of DECbit:
• Fairness Issues: DECbit may not work well in some cases
where multiple users or flows are competing for resources. It
may not provide the level of fairness needed in highly congested
networks.
• Limited Feedback: Feedback is only sent when congestion is
detected, which may not be sufficient for highly dynamic
networks.
RED (Random Early Detection)

• RED (Random Early Detection) is a proactive congestion avoidance mechanism used in routers. Unlike traditional methods that wait for a queue to overflow (leading to packet drops), RED begins to drop packets early when the queue is becoming congested, thus signaling to the sender to slow down before congestion worsens.
RED (Random Early Detection)

• How RED Works:
• Concept: RED aims to avoid congestion by randomly dropping packets early when the router's buffer begins to fill up. The key idea is to warn traffic sources (senders) to reduce their sending rate before the buffer overflows, preventing massive packet loss and congestion collapse.
RED (Random Early Detection)

• Mechanism:
• Average Queue Size: RED monitors the
average queue size of the router. If the
average queue size exceeds a minimum
threshold, it starts dropping packets
randomly, with the probability of dropping
increasing as the queue length grows.
RED (Random Early Detection)

• Marking and Dropping: When the average queue size goes beyond a maximum threshold, the router drops packets with higher probability. The goal is to prevent queue overflow by keeping the queue length below a certain threshold.
• Feedback to Sender: Packet drops or markings are used to indicate congestion to the sender, prompting it to reduce its sending rate.
RED (Random Early Detection)

• Control Parameters:
• Min and Max Thresholds: These thresholds
define the point at which RED starts marking or
dropping packets. These values help control when
the congestion avoidance behavior should begin.
• Drop Probability: RED uses a drop probability
function to determine how likely it is that a packet
will be dropped based on the average queue size.
The more congested the router is, the higher the
drop probability.
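A hedged sketch of the drop decision implied by these parameters: below the minimum threshold nothing is dropped, above the maximum threshold everything is dropped, and in between the drop probability rises linearly (the threshold values here are illustrative).

import random

# RED sketch: drop probability grows with the average queue size.
MIN_TH, MAX_TH, MAX_P = 5, 15, 0.1   # illustrative thresholds (packets) and max probability

def should_drop(avg_queue_len):
    """Decide whether to drop (or mark) an arriving packet."""
    if avg_queue_len < MIN_TH:
        return False                              # queue is short: never drop
    if avg_queue_len >= MAX_TH:
        return True                               # queue too long: always drop
    # Between the thresholds, drop with a probability that rises linearly.
    p = MAX_P * (avg_queue_len - MIN_TH) / (MAX_TH - MIN_TH)
    return random.random() < p

print(should_drop(3), should_drop(10), should_drop(20))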
Further Inquiries
Thank You
