CN Unit 3
(CC2001-1)
THE TRANSPORT LAYER
THE TRANSPORT SERVICE
Services Provided to the Upper Layers
• The ultimate goal of the transport layer is to provide efficient, reliable,
and cost-effective data transmission service to its users, normally
processes in the application layer.
• To achieve this, the transport layer makes use of the services provided
by the network layer.
• The software and/or hardware within the transport layer that does
the work is called the transport entity.
• Just as there are two types of network service, connection-oriented and connectionless, there are also two types of transport service.
• If the transport layer service is so similar to the network layer service,
why are there two distinct layers?
• The transport code runs entirely on the users’ machines, but the network
layer mostly runs on the routers, which are operated by the carrier.
Transport Service Primitives
• The transport service primitives let application programs access the transport service. A typical minimal set: LISTEN (block until some process tries to connect), CONNECT (actively establish a connection), SEND (send data), RECEIVE (block until data arrives), and DISCONNECT (release the connection).
ELEMENTS OF TRANSPORT PROTOCOLS
• Transport Protocols deal with error control, sequencing, and flow
control.
• What are the significant differences between the data link layer service and the transport layer service?
• Differences are due to major dissimilarities between the environments in
which the two protocols operate
Data link layer service vs. transport layer service:
• Data link: two routers communicate directly over a physical channel, wired or wireless. Transport: devices communicate across the entire network.
• Data link: establishing a connection over the wire is simple. Transport: initial connection establishment is complicated.
• Data link: protocols can allocate a fixed number of buffers to each line. Transport: dedicating many buffers to each connection is less attractive.
Addressing
• When an application (e.g., a user) process wishes to set up a connection to a remote application process, it must specify which one to connect to.
• In the Internet, these endpoints are called ports; the generic term TSAP (Transport Service Access Point) denotes a specific endpoint in the transport layer.
A possible scenario for a transport connection is as
follows:
1. A mail server process attaches itself to TSAP 1522 on host 2 to wait for an incoming call. A primitive such as LISTEN might be used, for example.
2. An application process on host 1 wants to send an email message, so it attaches itself to TSAP 1208 and issues a CONNECT request. The request specifies TSAP 1208 on host 1 as the source and TSAP 1522 on host 2 as the destination. This action ultimately results in a transport connection being established between the application process and the server.
3. The application process sends over the mail message.
4. The mail server responds to say that it will deliver the
message.
5. The transport connection is released.
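The five steps above map directly onto the Berkeley socket API. A minimal sketch using Python's standard sockets (the port number 1522 echoes the TSAP in the scenario; the message text is illustrative, and both endpoints run on one machine here):

```python
import socket
import threading

HOST = "127.0.0.1"
SERVER_PORT = 1522  # the mail server's TSAP (illustrative)

# Step 1: the server attaches to TSAP 1522 and waits for a call (LISTEN).
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind((HOST, SERVER_PORT))
srv.listen(1)

def serve_one():
    conn, _ = srv.accept()              # transport connection established
    with conn:
        msg = conn.recv(1024)           # step 3: the mail message arrives
        conn.sendall(b"250 OK: will deliver " + msg)  # step 4: reply

server = threading.Thread(target=serve_one)
server.start()

# Step 2: the client attaches to a local TSAP and issues CONNECT.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
    cli.connect((HOST, SERVER_PORT))
    cli.sendall(b"Hello, mail")
    reply = cli.recv(1024)              # step 5: released on `with` exit

server.join()
srv.close()
print(reply.decode())
```

Note that the client never names its own TSAP explicitly; the operating system picks an ephemeral local port, which plays the role of TSAP 1208 in the scenario.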
Connection Establishment
• The transport layer uses a three-way handshake procedure for connection establishment.
• The transport entity sends a CONNECTION REQUEST segment to the destination and waits for a CONNECTION ACCEPTED reply.
• The difficulty is that the network can lose, delay, corrupt, and duplicate packets; in particular, a delayed duplicate CONNECTION REQUEST must not be mistaken for a fresh one.
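A toy simulation of the three-way handshake may make the exchange concrete (the segments are plain dictionaries; real TCP chooses its initial sequence numbers pseudo-randomly, which is approximated here):

```python
import random

def three_way_handshake():
    """Simulate the segment exchange that establishes a connection."""
    x = random.randrange(2**32)        # initiator's initial sequence number
    y = random.randrange(2**32)        # responder's initial sequence number

    syn = {"SYN": 1, "seq": x}                     # 1. CONNECTION REQUEST
    syn_ack = {"SYN": 1, "ACK": 1,                 # 2. CONNECTION ACCEPTED
               "seq": y, "ack": syn["seq"] + 1}
    ack = {"ACK": 1,                               # 3. final confirmation
           "seq": x + 1, "ack": syn_ack["seq"] + 1}

    # Each acknowledgment number ties a segment to the one it answers;
    # this is what lets a host reject delayed duplicate requests.
    return syn, syn_ack, ack

syn, syn_ack, ack = three_way_handshake()
```

The point of the third segment is that the responder only commits once it sees an ACK carrying y + 1, proving the peer saw its reply in this incarnation of the connection.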
Connection Release
• There are two styles of terminating a
connection:
• Asymmetric release
• Asymmetric release is the way the telephone
system works: when one party hangs up, the
connection is broken.
• Symmetric release.
• Symmetric release treats the connection as two
separate unidirectional connections and requires
each one to be released separately.
Four protocol scenarios for releasing a
connection
[Figure: closing a TCP connection. The side that has sent its FIN (FIN bit = 1, seq = y) sits in LAST_ACK; the peer, which can no longer send data, replies with ACK bit = 1, ACKnum = y + 1 and waits in TIMED_WAIT for 2× the maximum segment lifetime, after which both sides reach CLOSED.]
TCP Connection Management Modeling
• TCP implements a three-way handshake mechanism for connection establishment.
TCP states visited by a client TCP
• CLOSED → SYN_SENT: the client application initiates a TCP connection; the client sends a SYN.
• SYN_SENT → ESTABLISHED: the connection is set up.
• ESTABLISHED → FIN_WAIT_1: the client application initiates the close; the client sends a FIN.
• FIN_WAIT_1 → FIN_WAIT_2: the client receives an ACK and sends nothing.
• FIN_WAIT_2 → TIME_WAIT → CLOSED: once teardown completes, the client waits 30 seconds in TIME_WAIT, then closes.
TCP states visited by a server-side TCP
• CLOSED → LISTEN: the server application creates a listen socket.
• LISTEN → SYN_RCVD: a SYN arrives; the server replies and waits for the final ACK.
• SYN_RCVD → ESTABLISHED: the server receives the ACK and sends nothing.
• ESTABLISHED → CLOSE_WAIT → LAST_ACK: the client's FIN arrives; the server eventually sends its own FIN.
• LAST_ACK → CLOSED: the server receives the final ACK.
Estimating the Round-Trip Time
• TCP keeps a smoothed round-trip time estimate, SRTT, updated for each new RTT sample R as
  SRTT = α SRTT + (1 − α) R
  where α is a smoothing factor that determines how quickly the old values are forgotten. Typically, α = 7/8. This kind of formula is an EWMA (Exponentially Weighted Moving Average).
• Making the timeout value sensitive to the variance in round-trip times as well as the smoothed round-trip time requires keeping track of another smoothed variable, RTTVAR (Round-Trip Time VARiation), that is updated using the formula
  RTTVAR = β RTTVAR + (1 − β) |SRTT − R|
  where typically β = 3/4. The retransmission timeout is then set to RTO = SRTT + 4 × RTTVAR.
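The SRTT/RTTVAR estimator described above can be sketched in a few lines (the constants α = 7/8 and β = 3/4 and the RTO = SRTT + 4 × RTTVAR rule follow the standard TCP algorithm; the initial estimates and RTT samples are made up):

```python
ALPHA = 7 / 8   # smoothing factor for SRTT
BETA = 3 / 4    # smoothing factor for RTTVAR

def update_rtt(srtt, rttvar, sample):
    """One round of TCP's smoothed RTT / RTT-variation estimate."""
    rttvar = BETA * rttvar + (1 - BETA) * abs(srtt - sample)
    srtt = ALPHA * srtt + (1 - ALPHA) * sample
    timeout = srtt + 4 * rttvar    # RTO: sensitive to RTT variance
    return srtt, rttvar, timeout

srtt, rttvar = 100.0, 25.0                 # initial estimates, in ms
for sample in (110.0, 90.0, 200.0):        # measured round-trip times
    srtt, rttvar, rto = update_rtt(srtt, rttvar, sample)
```

Because RTTVAR grows when samples jitter, a bursty path automatically gets a more generous timeout than a steady one.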
TCP Congestion Control
• Transport protocol using an AIMD (Additive Increase Multiplicative
Decrease) control law in response to binary congestion signals from
the network would converge to a fair and efficient bandwidth
allocation.
• TCP congestion control is based on implementing this approach using a window, with packet loss as the binary signal. To do so, TCP maintains a congestion window (cwnd) whose size is the number of bytes the sender may have in the network at any time.
TCP congestion-control algorithm
• The algorithm has three major components:
• (1) Slow start,
• (2) Congestion avoidance, and
• (3) Fast recovery.
• Slow start and congestion avoidance are mandatory components of TCP, differing in how they increase the size of cwnd in response to received ACKs: slow start increases cwnd exponentially, while congestion avoidance increases it linearly. Fast recovery is recommended but not required.
Slow Start Phase
• The slow start phase is a crucial part of TCP congestion control, designed to efficiently utilize
available bandwidth while avoiding network congestion. During this phase, a TCP sender gradually
increases its congestion window size, which determines how many packets it can have in flight at
any given time.
• Initialization: The sender starts with a conservative estimate of the available bandwidth, typically
by setting the congestion window (cwnd) to a small value, often just one or two segments.
• Exponential Growth: In each round-trip time (RTT), the sender doubles its congestion window size. With each acknowledgment (ACK) received for sent packets, the sender increases cwnd, so per RTT the number of packets it can send grows by a factor of 2. This exponential growth quickly ramps up the amount of data being sent.
• Congestion Detection: While in the slow start phase, the sender keeps track of any signs of
congestion, such as packet loss or the receipt of duplicate ACKs. If it detects congestion, it
transitions to the congestion avoidance phase.
• Threshold: Once the congestion window size reaches a predefined threshold value (often referred
to as ssthresh or slow start threshold), the sender exits the slow start phase and enters
congestion avoidance.
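The doubling behaviour up to the threshold can be sketched as follows (window sizes are counted in segments; the starting cwnd of 1 and ssthresh of 64 are illustrative):

```python
def slow_start(cwnd, ssthresh):
    """Yield cwnd after each RTT of slow start until ssthresh is reached."""
    while cwnd < ssthresh:
        cwnd = min(cwnd * 2, ssthresh)   # exponential growth, capped
        yield cwnd

# Starting from 1 segment with ssthresh = 64 segments:
growth = list(slow_start(1, 64))
print(growth)   # [2, 4, 8, 16, 32, 64]
```

Only six round trips are needed to reach a 64-segment window, which is why the name "slow start" is a slight misnomer: it starts small, but grows very fast.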
Congestion avoidance
• Congestion avoidance is a critical phase in TCP congestion control algorithms, aimed at regulating data transmission to
maintain network stability and prevent congestion collapse. During this phase, TCP gradually increases its sending rate to
make efficient use of available network bandwidth while being mindful not to overwhelm the network.
Here's how congestion avoidance typically works:
• Slow Start: TCP begins by ramping up its transmission rate exponentially, quickly probing the available bandwidth until it
detects congestion, typically signaled by packet loss or other indicators like ECN (Explicit Congestion Notification)
markings.
• Congestion Detection: When TCP detects congestion, it switches from the exponential growth phase to the congestion
avoidance phase. It interprets packet loss as a sign of congestion and reduces its transmission rate accordingly. TCP
assumes that congestion occurred due to the network being unable to handle the current transmission rate.
• Additive Increase, Multiplicative Decrease (AIMD): In congestion avoidance, TCP follows the AIMD principle. It increases
its congestion window size additively, typically by one MSS (Maximum Segment Size) for each round-trip time (RTT) that
passes without congestion. In case of congestion, it reduces its congestion window multiplicatively, often by halving it. This
behavior helps TCP to regulate its transmission rate more conservatively in response to network conditions.
• Stable Operation: During congestion avoidance, TCP strives to achieve a stable and fair share of available bandwidth while
minimizing the risk of congestion collapse. By adjusting its transmission rate dynamically based on network feedback, TCP
aims to maintain an optimal balance between efficiency and network responsiveness.
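The AIMD rule described above can be sketched directly (the window is counted in MSS units, and the particular sequence of loss events is invented for illustration):

```python
def aimd_step(cwnd, loss):
    """One RTT of congestion avoidance under the AIMD rule."""
    if loss:
        return max(1.0, cwnd / 2)   # multiplicative decrease: halve cwnd
    return cwnd + 1.0               # additive increase: +1 MSS per RTT

cwnd = 10.0
trace = []
for loss in (False, False, True, False, False):
    cwnd = aimd_step(cwnd, loss)
    trace.append(cwnd)
print(trace)   # [11.0, 12.0, 6.0, 7.0, 8.0]
```

The resulting sawtooth, climbing by one MSS per RTT and halving on loss, is the characteristic shape of a long-lived TCP flow's congestion window.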
Fast recovery
• The fast recovery phase is a part of TCP congestion control algorithms used to efficiently handle packet loss events. When TCP detects packet loss through triple duplicate ACKs, it enters the fast recovery phase instead of falling back to slow start, which would collapse the congestion window all the way down to one segment.
• During fast recovery:
• Duplicate ACKs: The TCP sender receives three duplicate ACKs, indicating that a packet was lost but subsequent packets were successfully received.
• Fast Retransmit: Upon receiving the third duplicate ACK, TCP assumes the missing packet
is lost and performs a fast retransmit, resending the packet without waiting for the
retransmission timer to expire.
• Congestion Window Adjustment: Instead of collapsing the window as slow start would, TCP halves ssthresh, sets the congestion window to ssthresh plus 3 MSS (Maximum Segment Size), and then inflates it by 1 MSS for each further duplicate ACK, on the assumption that each duplicate ACK means another segment has left the network.
• SACK Option: If TCP supports the Selective Acknowledgment (SACK) option, it may use
this information to retransmit only the missing segments instead of the entire window.
• Exit Criteria: TCP exits the fast recovery phase when it receives an ACK for the missing
packet. This indicates that the network is flowing again and the congestion control
algorithm can return to regular operation.
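The window adjustments described above can be sketched as follows (this follows the classic TCP Reno behaviour: ssthresh = cwnd/2, cwnd = ssthresh + 3 MSS on entry, +1 MSS per extra duplicate ACK, deflate to ssthresh on the new ACK; the starting window of 20 segments is illustrative):

```python
MSS = 1  # count the window in segments for clarity

def enter_fast_recovery(cwnd):
    """Window adjustment on the third duplicate ACK (Reno-style)."""
    ssthresh = max(cwnd // 2, 2 * MSS)
    cwnd = ssthresh + 3 * MSS   # the 3 dup ACKs mean 3 segments left the net
    return cwnd, ssthresh

def on_extra_dup_ack(cwnd):
    return cwnd + MSS           # inflate: one more segment was delivered

def on_new_ack(ssthresh):
    return ssthresh             # deflate; resume congestion avoidance

cwnd, ssthresh = enter_fast_recovery(20)   # cwnd = 13, ssthresh = 10
cwnd = on_extra_dup_ack(cwnd)              # cwnd = 14
cwnd = on_new_ack(ssthresh)                # exit: cwnd = 10
```

The net effect is the multiplicative decrease of AIMD (the flow resumes at half its old rate) without ever draining the pipe completely, since duplicate ACKs prove data is still getting through.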