22 - Trans - Congestion

This document discusses congestion control in computer networks. It defines congestion as too many sources sending too many packets too fast for the network to handle, then works through scenarios that lead to congestion: with infinite router buffers, per-connection throughput saturates at R/2 while queueing delays grow without bound as the arrival rate approaches capacity; with finite buffers, packets are dropped and retransmitted. The costs of congestion are extra work to deliver the same data and links carrying duplicate copies of packets. The document then covers the two broad approaches to congestion control, TCP's AIMD and slow-start mechanisms, and TCP fairness.

CS 210
Introduction to Computer Networks

(figure: protocol stack – application, transport, network, data link, physical)

Transport Layer
Congestion Control – An Introduction

Some slides are adapted from “Computer Networking – a Top-Down Approach”
© 1996-2012 by J.F. Kurose and K.W. Ross, All Rights Reserved

Questions
• Why does congestion happen? What are the costs of congestion?
• What are the two broad approaches to congestion control?
• How does TCP provide congestion control?
• What is TCP slow start?
• What is fairness? How does TCP provide fairness?

Principles of Congestion Control

congestion:
• informally: “too many sources sending too many packets too fast for network to handle”
• different from flow control!
• manifestations:
  – lost packets (buffer overflow at routers)
  – long delays (queueing in router buffers)
• a top-ten problem!
Causes/costs of Congestion: scenario 1
• two senders, two receivers
• one router, infinite buffers
• output link capacity: R
• no retransmission
(figure: Hosts A and B each send original data at rate λin into a router with unlimited shared output-link buffers; delivered rate is λout)
(graphs: throughput λout rises with λin but flattens at R/2; queueing delay grows without bound as λin approaches R/2)
❖ maximum per-connection throughput: R/2
❖ large delays as arrival rate, λin, approaches capacity
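The delay blow-up near capacity can be reproduced with a toy single-server queue (an illustrative M/M/1-style simulation; the rates, seed, and sample count are assumptions, not from the slides):

```python
import random

def avg_queue_delay(lam, mu, n=200_000, seed=1):
    """Toy M/M/1 queue: Poisson arrivals at rate lam, exponential service
    at rate mu. Returns the average time a packet spends in the system."""
    rng = random.Random(seed)
    t = 0.0          # arrival time of the current packet
    free_at = 0.0    # time at which the server next becomes free
    total = 0.0
    for _ in range(n):
        t += rng.expovariate(lam)       # next arrival
        start = max(t, free_at)         # wait if the server is busy
        free_at = start + rng.expovariate(mu)
        total += free_at - t            # sojourn time of this packet
    return total / n

# delay grows without bound as the arrival rate approaches capacity
for load in (0.5, 0.9, 0.99):
    print(f"load {load}: avg delay {avg_queue_delay(load, 1.0):.1f}")
```

The qualitative behavior matches the slide's delay graph: modest delay at half load, then a rapid blow-up as load nears 1.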

Causes/costs of Congestion: scenario 2
• one router, finite buffers
• sender retransmission of timed-out packet
  – application-layer input = application-layer output: λin = λout
  – transport-layer input includes retransmissions: λ'in ≥ λin
(figure: Host A sends original data at rate λin; with retransmissions the transport layer offers λ'in to the router's finite shared output-link buffers; delivered rate is λout)
Causes/costs of Congestion: scenario 2
idealization: perfect knowledge
• sender sends only when router buffers available
(graph: with perfect knowledge, λout = λin all the way up to R/2)
(figure: sender keeps a copy of each packet but transmits only on “free buffer space!”, so nothing is dropped)

Causes/costs of Congestion: scenario 2
idealization: known loss – packets can be lost, dropped at router due to full buffers
• sender only resends if packet known to be lost
(figure: on “no buffer space!” the router drops the packet and the sender retransmits from its copy)
Causes/costs of congestion: scenario 2
idealization: known loss – packets can be lost, dropped at router due to full buffers
• sender only resends if packet known to be lost
(graph: λout vs. λin – when sending at R/2, some packets are retransmissions, but asymptotic goodput is still R/2)

Causes/costs of Congestion: scenario 2
realistic: duplicates
❖ packets can be lost, dropped at router due to full buffers
❖ sender times out prematurely, sending two copies, both of which are delivered
(figure: a premature timeout triggers a retransmission even though buffer space was free, so the receiver gets duplicates)
(graph: when sending at R/2, some packets are retransmissions, including duplicated packets that are delivered!)
Causes/costs of Congestion: scenario 2
realistic: duplicates
❖ packets can be lost, dropped at router due to full buffers
❖ sender times out prematurely, sending two copies, both of which are delivered

“costs” of congestion:
❖ more work (retrans) for given “goodput”
❖ unneeded retransmissions: link carries multiple copies of pkt
  ▪ decreasing goodput
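A back-of-the-envelope way to see both costs at once (the loss and duplicate fractions below are assumed illustrative numbers, not from the slides):

```python
# each delivered byte needs 1/(1-p) transmissions because of drops, and a
# further fraction d of all transmissions are unneeded duplicates
p = 0.10   # fraction of transmissions dropped at the full buffer (assumed)
d = 0.05   # fraction of transmissions that are premature duplicates (assumed)

work_per_goodput_byte = (1 + d) / (1 - p)   # link transmissions per useful byte

R_half = 0.5                        # per-connection share of link capacity
goodput = R_half / work_per_goodput_byte
print(round(goodput, 3))            # below the R/2 = 0.5 ceiling
```

Even small loss and duplicate fractions push goodput visibly below the link share, which is exactly the “more work for given goodput” cost.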

Causes/costs of Congestion: scenario 3
• four senders
• multihop paths
• timeout/retransmit
Q: what happens as λin and λ'in increase?
A: as red λ'in increases, all arriving blue pkts at upper queue are dropped, blue throughput → 0
(figure: Hosts A–D send over multihop paths through routers with finite shared output-link buffers)
Causes/costs of Congestion: scenario 3
(graph: λout vs. λ'in – throughput rises, then collapses toward 0 as λ'in approaches C/2)

another “cost” of congestion:
❖ when packet dropped, any upstream transmission capacity used for that packet was wasted!

Approaches towards Congestion Control

two broad approaches towards congestion control:

end-end congestion control:
• no explicit feedback from network
• congestion inferred from end-system observed loss, delay
• approach taken by TCP

network-assisted congestion control:
• routers provide feedback to end systems
  – single bit indicating congestion (SNA, DECbit, TCP/IP ECN, ATM)
  – explicit rate for sender to send at
TCP Congestion Control: additive increase, multiplicative decrease
❖ approach: sender increases transmission rate (window size), probing for usable bandwidth, until loss occurs
  ▪ additive increase: increase cwnd by 1 MSS (Maximum Segment Size) every RTT until loss detected
  ▪ multiplicative decrease: cut cwnd in half after loss
(figure: cwnd over time – additively increase window size until loss occurs, then cut window in half; the AIMD “sawtooth” behavior probes for bandwidth)
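The sawtooth can be reproduced in a few lines (a sketch in MSS units, with an assumed fixed “pipe” capacity that deterministically triggers loss; real loss is far less regular):

```python
def aimd(cwnd, loss, mss=1):
    """One AIMD update per RTT: add 1 MSS, or halve (keeping >= 1 MSS) on loss."""
    return max(mss, cwnd // 2) if loss else cwnd + mss

# grow linearly, halve whenever cwnd reaches the assumed capacity of 16 MSS
cwnd, capacity, trace = 4, 16, []
for _ in range(30):
    trace.append(cwnd)
    cwnd = aimd(cwnd, loss=(cwnd >= capacity))
print(trace)   # sawtooth: climbs to 16, drops to 8, climbs again
```

The printed trace is the slide's sawtooth: linear ramps separated by halvings.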

TCP Congestion Control: Details
• sender limits transmission: LastByteSent – LastByteAcked ≤ cwnd
• cwnd is dynamic, function of perceived network congestion
(figure: sender sequence number space – bytes already ACKed, then cwnd bytes sent but not-yet-ACKed (“in-flight”), then bytes not yet sent)

TCP sending rate:
❖ roughly: send cwnd bytes, wait RTT for ACKs, then send more bytes

  rate ≈ cwnd / RTT bytes/sec
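Plugging assumed numbers into the slide's rule of thumb (the window and RTT values are illustrative, not from the slides):

```python
# rough sending rate ~ cwnd / RTT
cwnd = 14_600        # bytes in flight (assumed: ~10 segments of 1460 B payload)
rtt  = 0.05          # seconds (assumed)
print(cwnd / rtt)    # ~292000 bytes/sec, about 2.3 Mbps
```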
TCP Slow Start
❖ when connection begins, increase rate exponentially until first loss event:
  ▪ initially cwnd = 1 MSS
  ▪ double cwnd every RTT
  ▪ done by incrementing cwnd by 1 MSS for every ACK received
❖ summary: initial rate is slow but ramps up exponentially fast
(figure: Host A / Host B timeline – one segment in the first RTT, then two, then four, …)
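Why the per-ACK increment doubles cwnd each RTT can be seen directly (a sketch in MSS units):

```python
MSS = 1
cwnd = 1 * MSS
trace = [cwnd]
for _ in range(4):               # four round-trip times
    acks = cwnd // MSS           # one ACK comes back per segment sent this RTT
    for _ in range(acks):
        cwnd += MSS              # +1 MSS per ACK ...
    trace.append(cwnd)           # ... which doubles cwnd each RTT
print(trace)                     # [1, 2, 4, 8, 16]
```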

TCP: Detecting, Reacting to Loss
• loss indicated by timeout:
  – cwnd set to 1 MSS
  – window then grows exponentially (as in slow start) to threshold, then grows linearly
• loss indicated by 3 duplicate ACKs:
  – dup ACKs indicate network capable of delivering some segments → move to fast recovery state
  – cwnd is set to threshold + 3 (TCP Reno; TCP Tahoe instead sets cwnd to 1 MSS, as for a timeout)
• threshold is set to ½ of cwnd in both cases of loss
TCP: Switching from Slow Start to CA
Q: when should the exponential increase switch to linear?
A: when cwnd gets to 1/2 of its value before timeout.

Implementation:
• variable ssthresh
• on loss event, ssthresh is set to 1/2 of cwnd just before loss event

Summary: TCP Congestion Control
(FSM figure, redrawn as transitions between the slow start, congestion avoidance, and fast recovery states)
• initial state – slow start: cwnd = 1 MSS, ssthresh = 64 KB, dupACKcount = 0
• slow start, new ACK: cwnd = cwnd + MSS, dupACKcount = 0, transmit new segment(s), as allowed; when cwnd > ssthresh, move to congestion avoidance
• congestion avoidance, new ACK: cwnd = cwnd + MSS·(MSS/cwnd), dupACKcount = 0, transmit new segment(s), as allowed
• duplicate ACK (slow start or congestion avoidance): dupACKcount++; when dupACKcount == 3: ssthresh = cwnd/2, cwnd = ssthresh + 3, retransmit missing segment, move to fast recovery
• fast recovery, duplicate ACK: cwnd = cwnd + MSS, transmit new segment(s), as allowed
• fast recovery, new ACK: cwnd = ssthresh, dupACKcount = 0, move to congestion avoidance
• timeout (any state): ssthresh = cwnd/2, cwnd = 1 MSS, dupACKcount = 0, retransmit missing segment, move to slow start
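The transitions above can be sketched as an event-driven class (MSS in abstract units; per-event only, with no timers, sequence numbers, or actual transmission – a sketch of the slide's FSM, not a real TCP implementation):

```python
MSS = 1.0

class RenoFSM:
    """Event-driven sketch of the slide's TCP Reno congestion-control FSM."""

    def __init__(self):
        self.cwnd, self.ssthresh = 1 * MSS, 64.0
        self.state, self.dup = "slow_start", 0

    def new_ack(self):
        self.dup = 0
        if self.state == "fast_recovery":
            self.cwnd, self.state = self.ssthresh, "congestion_avoidance"
        elif self.state == "slow_start":
            self.cwnd += MSS                      # doubles per RTT overall
            if self.cwnd > self.ssthresh:
                self.state = "congestion_avoidance"
        else:                                     # congestion avoidance
            self.cwnd += MSS * MSS / self.cwnd    # ~ +1 MSS per RTT

    def dup_ack(self):
        if self.state == "fast_recovery":
            self.cwnd += MSS                      # inflate per extra dup ACK
            return
        self.dup += 1
        if self.dup == 3:                         # triple dup ACK
            self.ssthresh = self.cwnd / 2
            self.cwnd = self.ssthresh + 3 * MSS
            self.state = "fast_recovery"

    def timeout(self):                            # from any state
        self.ssthresh = self.cwnd / 2
        self.cwnd, self.dup, self.state = 1 * MSS, 0, "slow_start"
```

For example, ten new ACKs from the start leave the sender in slow start with cwnd = 11 MSS; a subsequent triple duplicate ACK sets ssthresh to 5.5 and enters fast recovery with cwnd = 8.5 MSS.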
TCP Throughput
• average TCP throughput as function of window size, RTT?
  – ignore slow start, assume always data to send
• W: window size (measured in bytes) where loss occurs
  – avg. window size (# in-flight bytes) is ¾ W
  – avg. thruput is ¾ W per RTT

  avg TCP thruput = (3/4) · W / RTT bytes/sec

(graph: sawtooth – window oscillates linearly between W/2 and W)
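Because the sawtooth ramps linearly from W/2 up to W, its mean is 3W/4. With assumed numbers (W and RTT below are illustrative, not from the slides):

```python
W, RTT = 150_000, 0.1          # bytes at which loss occurs, seconds (assumed)

# mean of a linear ramp from W/2 to W is 3W/4
assert (W / 2 + W) / 2 == 0.75 * W

avg_thruput = 0.75 * W / RTT
print(avg_thruput)             # ~1125000 bytes/sec, i.e. 9 Mbps
```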

TCP Futures: TCP over “long, fat pipes”
• example: 1500 byte segments, 100 ms RTT, want 10 Gbps throughput
• requires W = 83,333 in-flight segments
• throughput in terms of segment loss probability, L [Mathis 1997]:

  TCP throughput = 1.22 · MSS / (RTT · √L)

➜ to achieve 10 Gbps throughput, need a loss rate of L = 2·10⁻¹⁰ – a very small loss rate!
• new versions of TCP needed for high-speed connections
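The slide's two numbers can be checked directly from the definitions (working in bits throughout):

```python
MSS_BITS = 1500 * 8    # 1500-byte segments
RTT      = 0.1         # 100 ms
TARGET   = 10e9        # 10 Gbps

# window needed: throughput = W * MSS / RTT  =>  W = throughput * RTT / MSS
W = TARGET * RTT / MSS_BITS
print(round(W))        # 83333 segments in flight

# Mathis: throughput = 1.22 * MSS / (RTT * sqrt(L))
# =>  L = (1.22 * MSS / (RTT * throughput)) ** 2
L = (1.22 * MSS_BITS / (RTT * TARGET)) ** 2
print(f"{L:.1e}")      # ~2e-10
```

Both values match the slide: about 83,333 in-flight segments and a tolerable loss rate around 2·10⁻¹⁰.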
TCP Fairness
fairness goal: if k TCP sessions share same bottleneck link of bandwidth R, each should have average rate of R/k
(figure: TCP connections 1 and 2 sharing a bottleneck router of capacity R)

Why is TCP fair?

two competing sessions:
❖ additive increase gives slope of 1, as throughput increases
❖ multiplicative decrease decreases throughput proportionally
(phase plot: connection 2 throughput vs. connection 1 throughput – alternating “congestion avoidance: additive increase” and “loss: decrease window by factor of 2” steps bounce off the full-utilization line R and converge toward the equal bandwidth share line)
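The convergence argument can be checked numerically (a sketch assuming both flows see every loss simultaneously, which is the idealization behind the phase plot; starting rates are arbitrary):

```python
R = 100.0
x, y = 90.0, 10.0                # very unequal starting rates (assumed)
for _ in range(500):
    if x + y > R:                # overload => synchronized loss
        x, y = x / 2, y / 2      # multiplicative decrease halves both
    else:
        x, y = x + 1.0, y + 1.0  # additive increase raises both equally

# the gap x - y is preserved by additive increase but halved by every
# multiplicative decrease, so the two rates converge to the fair share
print(round(x, 3), round(y, 3))
```

After a few hundred steps the two throughputs are essentially equal, which is the geometric argument on the slide.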
Fairness – cont’d

Fairness and UDP:
• multimedia apps often do not use TCP
  – do not want rate throttled by congestion control
• instead use UDP:
  – send audio/video at constant rate, tolerate packet loss

Fairness, parallel TCP connections:
• application can open multiple parallel connections between two hosts
• web browsers do this
• e.g., link of rate R with 9 existing connections:
  – new app asks for 1 TCP, gets rate R/10
  – new app asks for 11 TCPs, gets R/2
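The slide's arithmetic, assuming the bottleneck is split equally per connection:

```python
# per-connection fair sharing on a bottleneck of rate R (idealized)
R, existing = 1.0, 9
for mine in (1, 11):
    share = R * mine / (existing + mine)
    print(f"{mine:2d} parallel connection(s): {share:.2f} R")
```

The 11-connection case gets 11/20 = 0.55 R, which the slide rounds to R/2.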

Quote of The Day
(slide image not recoverable from the extraction)
