Lecture 17-18 Congestion Control

Chapter 3 of the document discusses the transport layer in computer networking, focusing on services, multiplexing, and congestion control. It outlines the principles of TCP congestion control, including slow start, congestion avoidance, and fast recovery mechanisms. The chapter emphasizes the importance of managing data flow to prevent network congestion and ensure efficient data transmission.


Chapter 3: Transport Layer

A note on the use of these PowerPoint slides:
We're making these slides freely available to all (faculty, students, readers). They're in PowerPoint form so you see the animations; and can add, modify, and delete slides (including this one) and slide content to suit your needs. They obviously represent a lot of work on our part. In return for use, we only ask the following:
 If you use these slides (e.g., in a class), that you mention their source (after all, we'd like people to use our book!)
 If you post any slides on a www site, that you note that they are adapted from (or perhaps identical to) our slides, and note our copyright of this material.
For a revision history, see the slide note for this page.
Thanks and enjoy! JFK/KWR

Computer Networking: A Top-Down Approach, 8th edition
Jim Kurose, Keith Ross
Pearson, 2020
All material copyright 1996-2020 J.F Kurose and K.W. Ross, All Rights Reserved

Transport Layer: 3-1
Chapter 3: roadmap
 Transport-layer services
 Multiplexing and demultiplexing
 Connectionless transport: UDP
 Principles of reliable data transfer
 Connection-oriented transport: TCP
 Principles of congestion control
 TCP congestion control
 Evolution of transport-layer functionality

Transport Layer: 3-2


Principles of congestion control
congestion:
 informally: "too many sources sending too much data too fast for network to handle"
 different from flow control!
 manifestations:
• lost packets (buffer overflow at routers)
• long delays (queueing in router buffers)
 a top-10 problem!

Transport Layer 3
Flow vs. Congestion Control
 Flow control: the feedback comes from the receiver ("Receiver overflowing")
 Congestion control: the feedback comes from the network ("Not much getting through")
Approaches towards congestion control
two broad approaches towards congestion control:

network-assisted congestion control:
 routers provide feedback to end systems
• single bit indicating congestion (TCP/IP ECN, ATM). One form of notification is direct feedback to the sender, typically a choke packet (essentially saying, "I'm congested!").
• The second form of notification occurs when a router marks/updates a field in a packet flowing from sender to receiver to indicate congestion. Upon receipt of a marked packet, the receiver then notifies the sender of the congestion indication.
• explicit rate for sender to send at

Transport Layer 5
Approaches towards congestion control
two broad approaches towards congestion control:

end-end congestion control:
 no explicit feedback from network
 congestion inferred from end-system observed loss, delay (as indicated by a timeout or a triple duplicate acknowledgment)
 approach taken by TCP

Transport Layer 6
TCP congestion control
• Each sender limits the rate at which it sends traffic into its connection as a function of perceived network congestion.
• Three basic questions:
• First, how does a TCP sender limit the rate?
• Second, how does a TCP sender perceive that there is congestion on the path?
• And third, what algorithm should the sender use to change its send rate as a function of perceived end-to-end congestion?

Transport Layer 7
TCP congestion control
• Three basic questions:
 First, how does a TCP sender limit the rate?
 Second, how does a TCP sender perceive that there is congestion on the path?
 And third, what algorithm should the sender use to change its send rate as a function of perceived end-to-end congestion?

Transport Layer 8
Simplified Network Model
The entire network is abstracted as a single router, a "black box":
 Receiver resources are represented by the receive window size (rwnd, RcvWin)
 Network resources are represented by the congestion window size (cwnd, CongWin)
Simplified Network Model
 Host A: window size 2,000 bytes, MSS 1000, MTU 1040
 Host B: window size 10,000 bytes, MSS 1480, MTU 1520
 MSS: the maximum segment size in bytes, i.e., the largest TCP payload that fits in one packet on an interface (derived from the interface MTU)
TCP Handshake (1)
Client sends a SYN segment (consumes 1 byte of sequence space):
 Window size: 2,000
 MSS: 1000
 Initial sequence number: X
 Acknowledgment number: 0
 SYN flag: set; ACK flag: not set; FIN: not set

(Client: window 2,000, MSS 1000, MTU 1020; Server: window 10,000, MSS 1480, MTU 1500)
TCP Handshake (2)
Server replies with a SYN+ACK segment (consumes 1 byte of sequence space):
 Window size: 10,000
 MSS: 1480
 Initial sequence number: Y
 Acknowledgment number: X+1
 SYN flag: set; ACK flag: set; FIN: not set

(Client: window 2,000, MSS 1000, MTU 1020; Server: window 10,000, MSS 1480, MTU 1500)
TCP Handshake (3)
Client completes the handshake with an ACK:
 Window size: 2,000
 Sequence number: X+1
 Acknowledgment number: Y+1
 SYN flag: not set; ACK flag: set; FIN: not set

The two sides agree on MSS = min(1000, 1480) = 1000: we can only send a segment <= MSS.
Classic TCP (send all segments)
Do you see any problem with this approach?
 With the agreed MSS of 1000 bytes and the receiver's window of 10,000 bytes, the sender could transmit 10 segments (where 1 MSS is equal to 1000 bytes) = 10,000 bytes at once.
 Packets may drop at the network layer due to congestion.
 Cwnd = 10 segments = 10 MSS
TCP Congestion Control: details
sender sequence number space: [last byte ACKed | last byte sent, not-yet ACKed ("in-flight") | last byte sent], bounded by cwnd

TCP sending rate:
 roughly: send cwnd bytes, wait RTT for ACKs, then send more bytes
 sender limits transmission: LastByteSent - LastByteAcked <= min{cwnd, rwnd}
 rate ~ cwnd/RTT bytes/sec
 cwnd is dynamic, a function of perceived network congestion

Example: MSS = 500 bytes & RTT = 200 msec; initial rate = MSS/RTT = ? kbps

Transport Layer 17
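The rate ~ cwnd/RTT relation, and the slide's example question, can be checked with a small sketch (the helper name is illustrative; MSS and RTT values are the slide's):

```python
def sending_rate_kbps(cwnd_bytes: float, rtt_sec: float) -> float:
    """Approximate TCP sending rate, rate ~ cwnd/RTT, in kilobits per second."""
    return cwnd_bytes * 8 / rtt_sec / 1000

MSS = 500    # bytes (slide example)
RTT = 0.200  # seconds (200 msec)

# at connection start, cwnd = 1 MSS, so initial rate = MSS/RTT
print(sending_rate_kbps(MSS, RTT))  # 20.0 kbps
```

So the answer to the slide's "? kbps" is 20 kbps (500 bytes per 200 ms = 2,500 bytes/sec = 20,000 bits/sec).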
MSS (recap)
The maximum segment size option is used in
connection setup to define the largest allowable TCP
segment. The value of MSS is determined during
connection establishment and does not change
during the connection.

Transport Layer 18
TCP congestion control
• Three basic questions:
 First, how does a TCP sender limit the rate?
 Second, how does a TCP sender perceive that there is congestion on the path?
 And third, what algorithm should the sender use to change its send rate as a function of perceived end-to-end congestion?

Transport Layer 19
How does sender perceive congestion?
 Let us define a "loss event" at a TCP sender as the occurrence of either a timeout or the receipt of three duplicate ACKs from the receiver.

20
TCP Slow Start
 When connection begins, cwnd = 1 MSS
• Example: MSS = 500 bytes & RTT = 200 msec
• initial rate = MSS/RTT = 20 kbps
 When connection begins, increase rate exponentially fast until first loss event
 available bandwidth may be >> MSS/RTT
• desirable to quickly ramp up to respectable rate

Transport Layer 21
TCP Slow Start (more)
 When connection begins, increase rate exponentially until first loss event:
• double cwnd every RTT
• done by incrementing cwnd for every ACK received
 Summary: initial rate is slow but ramps up exponentially fast

(Figure: Host A sends one segment, then two segments, then four segments, one round per RTT)

Transport Layer 22
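The doubling behavior can be simulated in a few lines (a sketch, with cwnd counted in MSS units for readability):

```python
def slow_start_growth(rtts: int) -> list:
    """cwnd (in units of MSS) at the start of each RTT during slow start."""
    cwnd = 1  # connection begins with cwnd = 1 MSS
    sizes = []
    for _ in range(rtts):
        sizes.append(cwnd)
        cwnd *= 2  # one MSS per ACK, one ACK per segment -> doubling per RTT
    return sizes

print(slow_start_growth(5))  # [1, 2, 4, 8, 16]
```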
TCP Slow Start (more)
If we look at the size of the cwnd in terms of round-trip times (RTTs), we find that the growth rate is exponential: 1, 2, 4, 8, … MSS.

23
• Slow start cannot continue indefinitely. There must be a threshold to stop this phase.
• The sender keeps track of a variable named ssthresh (slow start threshold). When the size of the window in bytes reaches this threshold, slow start stops and the next phase starts.

24
When does the Slow Start end?
 First, if there is a loss event (i.e., congestion) indicated by a timeout:
• the ssthresh variable ("slow start threshold") is set to cwnd/2
• the TCP sender then sets cwnd to 1 MSS and restarts Slow Start
 The second way in which slow start may end is directly tied to the value of ssthresh:
• when cwnd equals ssthresh, slow start ends and TCP transitions into congestion avoidance mode.
• Why not keep doubling?

Transport Layer 25
ssthresh
(Figure: cwnd growth during slow start, stopping when the ssthresh threshold is reached)

Transport Layer 26-27
When does the Slow Start end?
 The final way in which slow start can end is if three duplicate ACKs are detected:
• TCP performs a fast retransmit and enters the fast recovery state.

Transport Layer 28
TCP Congestion Control - Congestion Avoidance
 When do we get to this state?
• the value of cwnd is approximately half its value when congestion was last encountered
• congestion could be just around the corner!
 Rather than doubling the value of cwnd every RTT, TCP adopts a more conservative approach and increases the value of cwnd by just a single MSS every RTT

Transport Layer 29
TCP Congestion Control - Congestion Avoidance
(Figure: linear cwnd growth during congestion avoidance)

Transport Layer 30
TCP Congestion Control - Congestion Avoidance
 This can be accomplished in several ways.
• The TCP sender increases cwnd by MSS × (MSS/cwnd) bytes whenever a new acknowledgment arrives.
• For example, if MSS is 1,460 bytes and cwnd is 14,600 bytes, then 10 segments are being sent within an RTT.
• Each arriving ACK (assuming one ACK per segment) increases the congestion window size by 1/10 MSS (146 bytes).

Transport Layer 31
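The slide's per-ACK increment can be worked through numerically (a sketch using the slide's example values):

```python
MSS = 1460    # bytes (slide example)
cwnd = 14600  # bytes: 10 segments in flight per RTT

# each new ACK grows cwnd by MSS * (MSS / cwnd) bytes
per_ack_increase = MSS * MSS / cwnd
print(per_ack_increase)  # 146.0 bytes, i.e. 1/10 MSS per ACK

# over one RTT, 10 ACKs arrive (one per segment), so cwnd grows ~1 full MSS
acks_per_rtt = cwnd // MSS  # 10
cwnd += acks_per_rtt * per_ack_increase
print(cwnd)  # 16060.0 bytes = old cwnd + one MSS
```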
Slow start with Congestion avoidance
When does it end?
TCP's congestion-avoidance algorithm behaves the same way when a timeout occurs:
 On timeout: set ssthresh = cwnd/2; cwnd = 1 MSS; re-enter slow start

Transport Layer 32
Slow start with Congestion avoidance
(Figure: cwnd over time — exponential growth during slow start, then linear growth during congestion avoidance)

Transport Layer 33
Drawbacks
 Slow recovery from losses
 A timeout drains the pipe, forcing the sender back into slow start, which takes time to refill the pipe

Transport Layer 34
TCP Congestion Control - Fast Recovery
 When do we get to this state?
• a loss event can also be triggered by a triple duplicate ACK event
• in this case, the network is continuing to deliver segments from sender to receiver
• so TCP's behavior for this type of loss event should be less drastic than for a timeout-indicated loss
 Idea: each dup ACK represents a packet successfully received. Therefore, no need for very drastic action

Transport Layer 35
Fast Retransmit
 Coarse timeouts remained a problem, and fast retransmit was added with TCP Tahoe.
 Since the receiver responds every time a packet arrives, out-of-order arrivals mean the sender will see duplicate ACKs.
 Basic idea: use duplicate ACKs to signal a lost packet.

Fast Retransmit: upon receipt of three duplicate ACKs, the TCP sender retransmits the lost packet.

36
TCP Congestion Control - Fast Recovery
 What does TCP do?
• ssthresh is set to half the value of cwnd
• cwnd is halved, then inflated by 3 MSS (for the three segments known to have left the network)
• cwnd is increased by 1 MSS for every additional duplicate ACK received for the missing segment that caused TCP to enter the fast-recovery state.
 When does it end?
• when an ACK arrives for the missing segment, cwnd is deflated to ssthresh and TCP enters the congestion-avoidance state.
• if a timeout event occurs, fast recovery transitions to the slow-start state

set SSThresh = CWND/2
set CWND = SSThresh + 3 MSS

Transport Layer 37
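The entry and exit adjustments described above can be sketched as two small handlers (illustrative names; not the full Reno state machine):

```python
MSS = 1000  # bytes, illustrative

def on_triple_dup_ack(cwnd: int):
    """Enter fast recovery: halve cwnd via ssthresh, inflate by 3 MSS."""
    ssthresh = cwnd // 2
    cwnd = ssthresh + 3 * MSS
    return cwnd, ssthresh

def on_new_ack_in_fast_recovery(ssthresh: int) -> int:
    """Exit fast recovery: deflate cwnd to ssthresh, resume congestion avoidance."""
    return ssthresh

cwnd, ssthresh = on_triple_dup_ack(10 * MSS)
print(cwnd, ssthresh)                         # 8000 5000
print(on_new_ack_in_fast_recovery(ssthresh))  # 5000
```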
Fast Retransmit
 Generally, fast retransmit eliminates about half the coarse-grain timeouts.
 This yields roughly a 20% improvement in throughput.

38
TCP Congestion Control - Fast Recovery
 Fast recovery is a recommended, but not required, component of TCP
• The early version of TCP, TCP Tahoe, unconditionally cut its congestion window to 1 MSS.
• The newer version of TCP, TCP Reno, incorporated fast recovery.

39
TCP congestion control: AIMD
 approach: senders can increase sending rate until packet loss (congestion) occurs, then decrease sending rate on loss event
 Additive Increase: increase sending rate (cwnd) by 1 maximum segment size every RTT until loss detected
 Multiplicative Decrease: cut sending rate (cwnd) in half at each loss event

(Figure: TCP sender sending rate over time — the AIMD "sawtooth" behavior, probing for bandwidth)

Transport Layer: 3-40


TCP Congestion Control - Fast Recovery
(Figure: cwnd evolution with and without fast recovery)

41
Summary: TCP Congestion Control
 When cwnd is below ssthresh, sender is in slow-start phase, window grows exponentially.
 When cwnd is above ssthresh, sender is in congestion-avoidance phase, window grows linearly.
 When a triple duplicate ACK occurs, ssthresh is set to cwnd/2 and cwnd is set to ssthresh + 3 MSS.
 When a timeout occurs, ssthresh is set to cwnd/2 and cwnd is set to 1 MSS.

42
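These four rules can be sketched as an event-driven sender. This is a Reno-style sketch, not a full TCP implementation; cwnd and ssthresh are tracked in bytes, and the initial ssthresh of 64 KB follows the FSM slide:

```python
MSS = 1000  # bytes, illustrative

class RenoSender:
    def __init__(self):
        self.cwnd = 1 * MSS
        self.ssthresh = 64_000

    def on_new_ack(self):
        if self.cwnd < self.ssthresh:             # slow start: exponential growth
            self.cwnd += MSS
        else:                                     # congestion avoidance: linear growth
            self.cwnd += MSS * MSS // self.cwnd

    def on_triple_dup_ack(self):                  # enter fast recovery
        self.ssthresh = self.cwnd // 2
        self.cwnd = self.ssthresh + 3 * MSS

    def on_timeout(self):                         # back to slow start
        self.ssthresh = self.cwnd // 2
        self.cwnd = 1 * MSS

s = RenoSender()
for _ in range(8):            # eight ACKs arrive during slow start
    s.on_new_ack()
print(s.cwnd)                 # 9000
s.on_timeout()
print(s.cwnd, s.ssthresh)     # 1000 4500
```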
Summary: TCP Congestion Control (FSM)
Initialization: cwnd = 1 MSS; ssthresh = 64 KB; dupACKcount = 0

slow start:
• new ACK: cwnd = cwnd + MSS; dupACKcount = 0; transmit new segment(s), as allowed
• duplicate ACK: dupACKcount++
• cwnd > ssthresh: move to congestion avoidance
• timeout: ssthresh = cwnd/2; cwnd = 1 MSS; dupACKcount = 0; retransmit missing segment
• dupACKcount == 3: ssthresh = cwnd/2; cwnd = ssthresh + 3 MSS; retransmit missing segment; move to fast recovery

congestion avoidance:
• new ACK: cwnd = cwnd + MSS*(MSS/cwnd); dupACKcount = 0; transmit new segment(s), as allowed
• duplicate ACK: dupACKcount++
• timeout: ssthresh = cwnd/2; cwnd = 1 MSS; dupACKcount = 0; retransmit missing segment; move to slow start
• dupACKcount == 3: ssthresh = cwnd/2; cwnd = ssthresh + 3 MSS; retransmit missing segment; move to fast recovery

fast recovery:
• duplicate ACK: cwnd = cwnd + MSS; transmit new segment(s), as allowed
• new ACK: cwnd = ssthresh; dupACKcount = 0; move to congestion avoidance
• timeout: ssthresh = cwnd/2; cwnd = 1 MSS; dupACKcount = 0; retransmit missing segment; move to slow start

43
TCP congestion control: additive increase multiplicative decrease
 approach: sender increases transmission rate (window size), probing for usable bandwidth, until loss occurs
 additive increase: increase cwnd by 1 MSS every RTT until loss detected
 multiplicative decrease: cut cwnd in half after loss

44
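The resulting sawtooth can be simulated in a few lines. This is a sketch: loss is modeled as happening whenever cwnd reaches an assumed bottleneck capacity, and all values are illustrative:

```python
CAPACITY = 16  # hypothetical bottleneck capacity, in MSS

def aimd(rtts: int) -> list:
    """cwnd (in MSS) at each RTT under additive-increase/multiplicative-decrease."""
    cwnd = 8
    trace = []
    for _ in range(rtts):
        trace.append(cwnd)
        if cwnd >= CAPACITY:  # loss event: multiplicative decrease (halve)
            cwnd //= 2
        else:                 # additive increase: +1 MSS per RTT
            cwnd += 1
    return trace

# sawtooth: rises to 16, halves to 8, rises again
print(aimd(12))  # [8, 9, 10, 11, 12, 13, 14, 15, 16, 8, 9, 10]
```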
TCP CUBIC
 Is there a better way than AIMD to "probe" for usable bandwidth?
 Insight/intuition:
• Wmax: sending rate at which congestion loss was detected
• congestion state of bottleneck link probably (?) hasn't changed much
• after cutting rate/window in half on loss, initially ramp to Wmax faster, but then approach Wmax more slowly

(Figure: classic TCP vs. TCP CUBIC climbing from Wmax/2 back toward Wmax; TCP CUBIC achieves higher throughput in this example)

Transport Layer: 3-45


TCP CUBIC
 K: point in time when TCP window size will reach Wmax
• K itself is tuneable
 increase W as a function of the cube of the distance between current time and K
• larger increases when further away from K
• smaller increases (cautious) when nearer K
 TCP CUBIC is the default in Linux, and the most popular TCP among popular Web servers

(Figure: TCP Reno vs. TCP CUBIC sending rate approaching Wmax over time t0…t4)

Transport Layer: 3-46
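The cubic growth curve can be written out explicitly. This sketch uses the window function and constants from the CUBIC specification (RFC 8312: C = 0.4, β = 0.7); the scenario values (Wmax = 100 segments) are illustrative:

```python
C = 0.4     # scaling constant (RFC 8312)
BETA = 0.7  # multiplicative decrease factor (RFC 8312)

def cubic_window(t: float, w_max: float) -> float:
    """Congestion window t seconds after a loss that occurred at window w_max."""
    K = ((w_max * (1 - BETA)) / C) ** (1 / 3)  # time needed to climb back to w_max
    return C * (t - K) ** 3 + w_max            # cube of distance from K

w_max = 100.0  # segments at last loss (hypothetical)
K = ((w_max * (1 - BETA)) / C) ** (1 / 3)
print(round(cubic_window(0, w_max), 1))  # 70.0  = BETA * w_max, just after the loss
print(round(cubic_window(K, w_max), 1))  # 100.0 = back at w_max, growing slowly
```

Note how growth is steep far from K and flattens near K, matching the slide's "cautious near Wmax" intuition.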
TCP and the congested "bottleneck link"
 TCP (classic, CUBIC) increases TCP's sending rate until packet loss occurs at some router's output: the bottleneck link

(Figure: source and destination hosts; at the bottleneck link, which is almost always busy, the packet queue is almost never empty and sometimes overflows (packet loss))

Transport Layer: 3-47
TCP and the congested "bottleneck link"
 TCP (classic, CUBIC) increases TCP's sending rate until packet loss occurs at some router's output: the bottleneck link
 understanding congestion: useful to focus on congested bottleneck link
 insight: increasing TCP sending rate will not increase end-end throughput with a congested bottleneck
 insight: increasing TCP sending rate will increase measured RTT

Goal: "keep the end-end pipe just full, but not fuller"

Transport Layer: 3-48
Delay-based TCP congestion control
Keeping sender-to-receiver pipe "just full enough, but no fuller": keep bottleneck link busy transmitting, but avoid high delays/buffering

measured throughput = (# bytes sent in last RTT interval) / RTTmeasured

Delay-based approach:
 RTTmin: minimum observed RTT (uncongested path)
 uncongested throughput with congestion window cwnd is cwnd/RTTmin

if measured throughput "very close" to uncongested throughput
    increase cwnd linearly /* since path not congested */
else if measured throughput "far below" uncongested throughput
    decrease cwnd linearly /* since path is congested */

Transport Layer: 3-49
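The pseudocode above can be made concrete. The 90%/50% thresholds and the 1-MSS step are illustrative assumptions, not the parameters of any specific TCP variant:

```python
def adjust_cwnd(cwnd: int, bytes_last_rtt: int,
                rtt_measured: float, rtt_min: float, mss: int = 1000) -> int:
    """One delay-based update: compare measured vs. uncongested throughput."""
    measured = bytes_last_rtt / rtt_measured  # actual throughput
    uncongested = cwnd / rtt_min              # best-case throughput cwnd/RTTmin
    if measured >= 0.9 * uncongested:         # "very close": path not congested
        return cwnd + mss                     # increase cwnd linearly
    if measured <= 0.5 * uncongested:         # "far below": path is congested
        return cwnd - mss                     # decrease cwnd linearly
    return cwnd

# uncongested: full cwnd delivered in ~RTTmin -> grow
print(adjust_cwnd(10_000, 10_000, 0.10, 0.10))  # 11000
# congested: queueing inflates the RTT to 250 ms -> shrink
print(adjust_cwnd(10_000, 10_000, 0.25, 0.10))  # 9000
```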
Delay-based TCP congestion control
 congestion control without inducing/forcing loss
 maximizing throughput ("keeping the pipe just full…") while keeping delay low ("…but not fuller")
 a number of deployed TCPs take a delay-based approach
 BBR deployed on Google's (internal) backbone network

Transport Layer: 3-50


Explicit congestion notification (ECN)
TCP deployments often implement network-assisted congestion control:
 two bits in IP header (ToS field) marked by network router to indicate congestion
• policy to determine marking chosen by network operator
 congestion indication carried to destination
 destination sets ECE bit on ACK segment to notify sender of congestion
 involves both IP (IP header ECN bit marking) and TCP (TCP header C, E bit marking)

(Figure: source sends IP datagrams with ECN=10; a congested router re-marks them ECN=11; the destination returns a TCP ACK segment with ECE=1 to the source)

Transport Layer: 3-51
TCP fairness
Fairness goal: if K TCP sessions share same bottleneck link of bandwidth R, each should have average rate of R/K

(Figure: TCP connections 1 and 2 sharing a bottleneck router of capacity R)

Transport Layer: 3-52


Q: is TCP Fair?
Example: two competing TCP sessions:
 additive increase gives slope of 1, as throughput increases
 multiplicative decrease decreases throughput proportionally

Is TCP fair?
A: Yes, under idealized assumptions:
 same RTT
 fixed number of sessions, only in congestion avoidance

(Figure: connection 1 throughput vs. connection 2 throughput converging toward the equal-bandwidth-share line R: loss cuts the window by a factor of 2, congestion avoidance then gives additive increase)

Transport Layer: 3-53
Fairness: must all network apps be "fair"?
Fairness and UDP
 multimedia apps often do not use TCP
• do not want rate throttled by congestion control
 instead use UDP:
• send audio/video at constant rate, tolerate packet loss
 there is no "Internet police" policing use of congestion control

Fairness, parallel TCP connections
 application can open multiple parallel connections between two hosts
 web browsers do this, e.g., link of rate R with 9 existing connections:
• new app asks for 1 TCP, gets rate R/10
• new app asks for 11 TCPs, gets roughly R/2

Transport Layer: 3-54
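The parallel-connection arithmetic from the slide can be checked directly (a sketch under the idealized equal-share-per-connection assumption):

```python
def share_of_new_app(existing: int, new_conns: int) -> float:
    """Fraction of link rate R a new app gets when it opens new_conns connections
    alongside `existing` connections, assuming each connection gets an equal share."""
    return new_conns / (existing + new_conns)

print(share_of_new_app(9, 1))             # 0.1  -> rate R/10
print(round(share_of_new_app(9, 11), 2))  # 0.55 -> roughly R/2
```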


Transport layer: roadmap
 Transport-layer services
 Multiplexing and demultiplexing
 Connectionless transport: UDP
 Principles of reliable data transfer
 Connection-oriented transport: TCP
 Principles of congestion control
 TCP congestion control
 Evolution of transport-layer functionality

Transport Layer: 3-55
Evolving transport-layer functionality
 TCP, UDP: principal transport protocols for 40 years
 different "flavors" of TCP developed, for specific scenarios and their challenges:
• Long, fat pipes (large data transfers): many packets "in flight"; loss shuts down pipeline
• Wireless networks: loss due to noisy wireless links, mobility; TCP treats this as congestion loss
• Long-delay links: extremely long RTTs
• Data center networks: latency sensitive
• Background traffic flows: low priority, "background" TCP flows
 moving transport-layer functions to application layer, on top of UDP
• HTTP/3: QUIC

Transport Layer: 3-56
QUIC: Quick UDP Internet Connections
 application-layer protocol, on top of UDP
• increase performance of HTTP
• deployed on many Google servers, apps (Chrome, mobile YouTube app)

(Figure: protocol stacks. HTTP/2 over TCP: HTTP/2, TLS, TCP, IP. HTTP/2 over QUIC over UDP: HTTP/3 = HTTP/2 (slimmed), QUIC, UDP, IP)

Transport Layer: 3-57


QUIC: Quick UDP Internet Connections
adopts approaches we've studied in this chapter for connection establishment, error control, congestion control
• error and congestion control: "Readers familiar with TCP's loss detection and congestion control will find algorithms here that parallel well-known TCP ones." [from QUIC specification]
• connection establishment: reliability, congestion control, authentication, encryption, state established in one RTT
 multiple application-level "streams" multiplexed over single QUIC connection
• separate reliable data transfer, security
• common congestion control

Transport Layer: 3-58
QUIC: Connection establishment
 TCP (reliability, congestion control state) + TLS (authentication, crypto state): 2 serial handshakes before data flows
 QUIC: reliability, congestion control, authentication, crypto state established in 1 handshake

(Figure: TCP handshake (transport layer) followed by TLS handshake (security), then data; vs. a single QUIC handshake, then data)

Transport Layer: 3-59


QUIC: streams: parallelism, no HOL blocking

(Figure: (a) HTTP 1.1: multiple HTTP GETs share one TLS encryption, TCP RDT, and TCP congestion-control pipeline, so an error on one object stalls the others — head-of-line blocking. (b) HTTP/2 with QUIC: each HTTP GET has its own QUIC encryption and QUIC RDT stream over a common QUIC congestion-control instance on UDP, so an error on one stream does not stall the others: no HOL blocking.)

Transport Layer: 3-60
Chapter 3: summary
 principles behind transport layer services:
• multiplexing, demultiplexing
• reliable data transfer
• flow control
• congestion control
 instantiation, implementation in the Internet
• UDP
• TCP

Up next:
 leaving the network "edge" (application, transport layers)
 into the network "core"
 two network-layer chapters:
• data plane
• control plane

Transport Layer: 3-61


Additional Chapter 3 slides

Transport Layer: 3-62


Go-Back-N: sender extended FSM

initialization:
    base = 1
    nextseqnum = 1

rdt_send(data):
    if (nextseqnum < base+N) {
        sndpkt[nextseqnum] = make_pkt(nextseqnum, data, chksum)
        udt_send(sndpkt[nextseqnum])
        if (base == nextseqnum)
            start_timer
        nextseqnum++
    }
    else
        refuse_data(data)

timeout:
    start_timer
    udt_send(sndpkt[base])
    udt_send(sndpkt[base+1])
    …
    udt_send(sndpkt[nextseqnum-1])

rdt_rcv(rcvpkt) && corrupt(rcvpkt):
    (do nothing)

rdt_rcv(rcvpkt) && notcorrupt(rcvpkt):
    base = getacknum(rcvpkt)+1
    if (base == nextseqnum)
        stop_timer
    else
        start_timer

Transport Layer: 3-63
Go-Back-N: receiver extended FSM

initialization:
    expectedseqnum = 1
    sndpkt = make_pkt(expectedseqnum, ACK, chksum)

rdt_rcv(rcvpkt) && notcorrupt(rcvpkt) && hasseqnum(rcvpkt, expectedseqnum):
    extract(rcvpkt, data)
    deliver_data(data)
    sndpkt = make_pkt(expectedseqnum, ACK, chksum)
    udt_send(sndpkt)
    expectedseqnum++

any other event:
    udt_send(sndpkt)

 ACK-only: always send ACK for correctly-received packet with highest in-order seq #
• may generate duplicate ACKs
• need only remember expectedseqnum
 out-of-order packet:
• discard (don't buffer): no receiver buffering!
• re-ACK pkt with highest in-order seq #

Transport Layer: 3-64
TCP sender (simplified)

initialization:
    NextSeqNum = InitialSeqNum
    SendBase = InitialSeqNum

data received from application above:
    create segment, seq. #: NextSeqNum
    pass segment to IP (i.e., "send")
    NextSeqNum = NextSeqNum + length(data)
    if (timer currently not running)
        start timer

timeout:
    retransmit not-yet-acked segment with smallest seq. #
    start timer

ACK received, with ACK field value y:
    if (y > SendBase) {
        SendBase = y
        /* SendBase-1: last cumulatively ACKed byte */
        if (there are currently not-yet-acked segments)
            start timer
        else stop timer
    }

Transport Layer: 3-65
TCP 3-way handshake FSM

Client:
 closed: Socket clientSocket = newSocket("hostname","port number"); send SYN(seq=x) → SYN sent
 SYN sent: receive SYNACK(seq=y, ACKnum=x+1); send ACK(ACKnum=y+1) → ESTAB

Server:
 closed: Socket connectionSocket = welcomeSocket.accept() → listen
 listen: receive SYN(seq=x); send SYNACK(seq=y, ACKnum=x+1), create new socket for communication back to client → SYN rcvd
 SYN rcvd: receive ACK(ACKnum=y+1) → ESTAB

Transport Layer: 3-66


Closing a TCP connection

Both client and server start in ESTAB:
 client: clientSocket.close(); sends FINbit=1, seq=x → FIN_WAIT_1 (can no longer send, but can receive data)
 server: receives FIN; replies ACKbit=1, ACKnum=x+1 → CLOSE_WAIT (can still send data)
 client: receives ACK → FIN_WAIT_2 (wait for server close)
 server: on close, sends FINbit=1, seq=y → LAST_ACK (can no longer send data)
 client: receives FIN; replies ACKbit=1, ACKnum=y+1 → TIMED_WAIT
 client: timed wait for 2*max segment lifetime → CLOSED
 server: receives ACK → CLOSED

Transport Layer: 3-67


TCP throughput
 avg. TCP thruput as function of window size, RTT?
• ignore slow start, assume there is always data to send
 W: window size (measured in bytes) where loss occurs
• avg. window size (# in-flight bytes) is ¾ W
• avg. thruput is ¾ W per RTT

    avg TCP thruput = (3/4) · W / RTT bytes/sec

(Figure: sawtooth oscillating between W/2 and W)

Transport Layer: 3-68
TCP over "long, fat pipes"
 example: 1500 byte segments, 100ms RTT, want 10 Gbps throughput
 requires W = 83,333 in-flight segments
 throughput in terms of segment loss probability, L [Mathis 1997]:

    TCP throughput = 1.22 · MSS / (RTT · √L)

➜ to achieve 10 Gbps throughput, need a loss rate of L = 2·10⁻¹⁰ – a very small loss rate!
 versions of TCP for long, high-speed scenarios

Transport Layer: 3-69
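Solving the Mathis relation for L reproduces the slide's figure (a sketch; function names are illustrative):

```python
import math

def mathis_throughput_bps(mss_bytes: int, rtt_sec: float, loss: float) -> float:
    """Mathis et al. throughput model: 1.22 * MSS / (RTT * sqrt(L)), in bits/sec."""
    return 1.22 * mss_bytes * 8 / (rtt_sec * math.sqrt(loss))

def required_loss(mss_bytes: int, rtt_sec: float, target_bps: float) -> float:
    """Invert the model: the loss rate L needed to sustain target_bps."""
    return (1.22 * mss_bytes * 8 / (rtt_sec * target_bps)) ** 2

# slide's example: 1500-byte segments, 100 ms RTT, 10 Gbps target
L = required_loss(1500, 0.100, 10e9)
print(f"{L:.2e}")  # ~2.14e-10, i.e. about 2 * 10^-10
```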
