
CN Unit 3

The transport layer provides reliable and efficient data transmission services to application layer processes, utilizing network layer services. It distinguishes between connection-oriented and connectionless services, with protocols addressing error control, flow control, and connection management. Key transport protocols include TCP, which ensures reliable byte streams, and UDP, which allows for connectionless communication without flow or error control.


Computer Networks

(CC2001-1)
THE TRANSPORT LAYER
THE TRANSPORT SERVICE
Services Provided to the Upper Layers
• The ultimate goal of the transport layer is to provide efficient, reliable,
and cost-effective data transmission service to its users, normally
processes in the application layer.
• To achieve this, the transport layer makes use of the services provided
by the network layer.
• The software and/or hardware within the transport layer that does
the work is called the transport entity.
• Just as there are two types of network service, connection-oriented
and connectionless, there are also two types of transport service.
• If the transport layer service is so similar to the network layer service,
why are there two distinct layers?
• The transport code runs entirely on the users’ machines, but the network
layer mostly runs on the routers, which are operated by the carrier.
Transport Service Primitives
ELEMENTS OF TRANSPORT PROTOCOLS
• Transport Protocols deal with error control, sequencing, and flow
control.
• What are the significant differences between the data link layer
service and the transport layer service?
• Differences are due to major dissimilarities between the environments in
which the two protocols operate
Data link layer service vs. transport layer service:
• Communication: at the data link layer, two routers communicate directly via a
physical channel, wired or wireless; at the transport layer, entities
communicate across the entire network.
• Connection setup: establishing a connection over the wire is simple; initial
connection establishment at the transport layer is complicated.
• Buffering: data link protocols allocate a fixed number of buffers to each
line; dedicating many buffers to each transport connection is less attractive.
Addressing
• When an application (e.g., a user) process wishes to set up a
connection to a remote application process, it must specify which one
to connect to.
• In the Internet, these endpoints are called ports; the generic term is
TSAP (Transport Service Access Point), meaning a specific endpoint in
the transport layer.
A possible scenario for a transport connection is as
follows:
1. A mail server process attaches itself to TSAP 1522 on host
2 to wait for an incoming call. A call such as our LISTEN
might be used, for example.
2. An application process on host 1 wants to send an email
message, so it attaches itself to TSAP 1208 and issues a
CONNECT request. The request specifies TSAP 1208 on
host 1 as the source and TSAP 1522 on host 2 as the
destination. This action ultimately results in a transport
connection being established between the application
process and the server.
3. The application process sends over the mail message.
4. The mail server responds to say that it will deliver the
message.
5. The transport connection is released.
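The five-step scenario above can be sketched with Python's socket API on the loopback interface. The port is chosen by the OS instead of the fixed TSAPs 1522/1208 from the example, and the "250 will deliver" reply text is purely illustrative.

```python
import socket
import threading

# Loopback sketch of the five-step scenario: a "mail server" LISTENs on a
# TSAP (here an OS-assigned port instead of 1522), a client CONNECTs,
# sends a message, gets a reply, and the connection is released.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))          # port 0: let the OS pick a free TSAP
server.listen(1)                       # step 1: wait for an incoming call
tsap = server.getsockname()

def serve():
    conn, _ = server.accept()
    conn.recv(1024)                    # receive the mail message
    conn.sendall(b"250 will deliver")  # step 4: promise delivery (illustrative text)
    conn.close()

t = threading.Thread(target=serve)
t.start()

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(tsap)                   # step 2: CONNECT to the server's TSAP
client.sendall(b"mail message")        # step 3: send the mail message
reply = client.recv(1024)
client.close()                         # step 5: release the connection
t.join()
server.close()
```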
Connection Establishment
• Transport layer uses three-way handshake procedure for connection
establishment.
• The transport entity sends a CONNECTION REQUEST segment to the
destination and waits for a CONNECTION ACCEPTED reply.
• Problems occur because the network can lose, delay, corrupt, and
duplicate packets.
Connection Release
• There are two styles of terminating a
connection:
• Asymmetric release
• Asymmetric release is the way the telephone
system works: when one party hangs up, the
connection is broken.
• Symmetric release.
• Symmetric release treats the connection as two
separate unidirectional connections and requires
each one to be released separately.
Four protocol scenarios for releasing a
connection

(a) Normal case of three-way handshake. (b) Final ACK lost.
(c) Response lost. (d) Response lost and subsequent DRs lost.
Error Control and Flow Control
• Error control is ensuring that the data is delivered with the desired level of
reliability, usually that all of the data is delivered without any errors.
• Flow control is keeping a fast transmitter from overrunning a slow receiver.
• Data link layer performs:
• Error detection (e.g., CRC)
• ARQ (Automatic Repeat reQuest).
• Stop-and-wait.
• Sliding window
• The link layer checksum protects a frame while it crosses a single link.
• The transport layer checksum protects a segment while it crosses an entire
network path. It is an end-to-end check, which is not the same as having a
check on every link.
Buffering of data
• Transport protocols generally use larger sliding windows.
• Since a host may have many connections, each of which is treated
separately, it may need a substantial amount of buffering for the
sliding windows.
• The buffers are needed at both the sender and the receiver.
• Certainly they are needed at the sender to hold all transmitted but as
yet unacknowledged segments.
• They are needed there because these segments may be lost and need
to be retransmitted.
• However, since the sender is buffering, the receiver may or may not
dedicate specific buffers to specific connections, as it sees fit.
• The receiver may, for example, maintain a single buffer pool shared by all
connections.
• When a segment comes in, an attempt is made to dynamically acquire a
new buffer.
• If one is available, the segment is accepted; otherwise, it is discarded.
• Since the sender is prepared to retransmit segments lost by the network,
no permanent harm is done by having the receiver drop segments,
although some resources are wasted.
• The sender just keeps trying until it gets an acknowledgement.
• The best trade-off between source buffering and destination
buffering depends on the type of traffic carried by the connection.
• For low-bandwidth bursty traffic, such as that produced by an interactive
terminal, it is reasonable not to dedicate any buffers, but rather to acquire
them dynamically at both ends.
• For file transfer and other high-bandwidth traffic, it is better if the receiver
does dedicate a full window of buffers, to allow the data to flow at maximum
speed.
• This is the strategy that TCP uses.
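The shared-pool strategy described above can be sketched as a small class; the class and method names are illustrative, not from any real stack.

```python
class SharedBufferPool:
    """Sketch of a receiver-side buffer pool shared by all connections."""

    def __init__(self, nbuffers):
        self.free = nbuffers

    def try_accept(self):
        """Dynamically acquire a buffer for an arriving segment."""
        if self.free > 0:
            self.free -= 1
            return True                # segment accepted
        return False                   # discarded; the sender will retransmit

    def release(self):
        self.free += 1                 # application consumed the segment

pool = SharedBufferPool(2)
results = [pool.try_accept() for _ in range(3)]  # third arrival is dropped
pool.release()                                   # one buffer freed again
accepted_again = pool.try_accept()
```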
Buffer Size
• There still remains the question of how to organize the buffer pool.
• Identically sized buffers
• If most segments are nearly the same size, it is natural to organize the buffers
as a pool of identically sized buffers, with one segment per buffer.
• If the buffer size is chosen to be equal to the largest possible segment, space
will be wasted whenever a short segment arrives.
• If the buffer size is chosen to be less than the maximum segment size,
multiple buffers will be needed for long segments, with the attendant
complexity.
• Variable-sized buffers
• The advantage here is better memory utilization
• Drawback is more complicated buffer management.
• Single large circular buffer per connection
• This system is simple and elegant and does not depend on segment sizes,
• But makes good use of memory only when the connections are heavily
loaded.
Dynamic buffer management
• Transport layer uses a window mechanism to control the flow of
data.
Multiplexing
Crash Recovery
• Hosts can crash during data transmission
• How to recover from host crashes?
• It may be desirable for clients to be able to continue working when servers crash
and quickly reboot.
• let us assume that one host, the client, is sending a long file to another host, the
file server, using a simple stop-and-wait protocol.
• Partway through the transmission, the server crashes. When it comes back up, its
tables are reinitialized, so it no longer knows precisely where it was.
• In an attempt to recover its previous status, the server might send a broadcast
segment to all other hosts, announcing that it has just crashed and requesting
that its clients inform it of the status of all open connections.
• Each client can be in one of two states: one segment outstanding, S1, or no
segments outstanding, S0.
• Consider, for example, the situation in which the server’s transport entity first
sends an acknowledgement and then writes to the application process.
• Writing a segment onto the output stream and sending an acknowledgement are
two distinct events that cannot be done simultaneously.
• If a crash occurs after the acknowledgement has been sent but before the write
has been fully completed, the client will receive the acknowledgement and thus
be in state S0 when the crash recovery announcement arrives.
• The client will therefore not retransmit, (incorrectly) thinking that the segment
has arrived.
• This decision by the client leads to a missing segment.
• What’s the solution?
• Will reprogramming the transport entity to first do the write and then send
the acknowledgement solve the problem? No: if the crash occurs after the
write but before the acknowledgement, the client will be in state S1 and
retransmit, producing a duplicate segment.
Strategies used for Crash recovery
• The server can be programmed in one of two ways:
• acknowledge first or write first.
• The client can be programmed in one of four ways:
• always retransmit the last segment,
• never retransmit the last segment,
• retransmit only in state S0, or
• retransmit only in state S1.
• Three events are possible at the server:
• sending an acknowledgement (A),
• writing to the output process (W), and
• crashing (C).
• The three events can occur in six different orderings:
• AC(W), AWC, C(AW), C(WA), WAC, and WC(A)
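The classic conclusion, that no combination of server and client strategies handles every crash ordering correctly, can be checked by brute force under a simplified model (one segment in flight; a retransmitted segment is always written exactly once after reboot):

```python
from itertools import product

# "Deliveries" counts how many times the segment reaches the server's
# output stream; exactly one is correct (0 = lost, 2 = duplicate).

# Crash scenarios per server strategy: (ack sent before crash, written before crash)
scenarios = {
    "ack first":   [(False, False),   # C(AW)
                    (True,  False),   # AC(W)
                    (True,  True)],   # AWC
    "write first": [(False, False),   # C(WA)
                    (False, True),    # WC(A)
                    (True,  True)],   # WAC
}
client_strategies = {
    "always retransmit": lambda state: True,
    "never retransmit":  lambda state: False,
    "retransmit in S0":  lambda state: state == "S0",
    "retransmit in S1":  lambda state: state == "S1",
}

def deliveries(acked, written, retransmit):
    state = "S0" if acked else "S1"   # client state when the recovery broadcast arrives
    return (1 if written else 0) + (1 if retransmit(state) else 0)

every_pair_fails = all(
    any(deliveries(a, w, client_strategies[c]) != 1 for a, w in scenarios[s])
    for s, c in product(scenarios, client_strategies)
)
```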
Introduction to UDP
• The Internet protocol suite supports a connectionless transport
protocol called UDP (User Datagram Protocol).
• UDP provides a way for applications to send encapsulated IP
datagrams without having to establish a connection.
• UDP is described in RFC 768.
• UDP does not do
• Flow control,
• Congestion control, or
• Retransmission upon receipt of a bad segment.
• All of that is up to the user processes.
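A minimal UDP exchange over the loopback interface, showing that datagrams are sent with no connection setup (port 0 asks the OS for a free port):

```python
import socket

# One datagram over loopback: no connection setup, each sendto() is an
# independent message.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))       # port 0: OS-assigned port
addr = receiver.getsockname()

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"hello", addr)         # no CONNECT needed
data, peer = receiver.recvfrom(2048)
sender.close()
receiver.close()
```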
THE INTERNET TRANSPORT PROTOCOLS: TCP
• TCP (Transmission Control Protocol) was specifically designed to provide a
reliable end-to-end byte stream over an unreliable internetwork.
• TCP was designed to dynamically adapt to properties of the internetwork
and to be robust in the face of many kinds of failures.
• TCP was formally defined in RFC 793 in September 1981.
• TCP Evolved further in RFC 793 plus, RFC 1122, RFC 1323, RFC 2018, RFC
2581, RFC 2873, RFC 2988, RFC 3168.
• TCP provides:
• Flow Control and Error Control
• Congestion Control
• Retransmission
• Segmentation and Reassembly
The TCP Service Model
• TCP service is obtained by both the sender and the receiver creating end points,
called sockets.
• Each socket has a socket number (address) consisting of the IP address of the
host and a 16-bit number local to that host, called a port.
• A port is the TCP name for a TSAP.
• Port numbers below 1024 are reserved for standard services.
• They are called well-known ports.
• Other ports from 1024 through 49151 can be registered with IANA for use by
unprivileged users.
• Applications can and do choose their own ports. For example, the BitTorrent
peer-to-peer file-sharing application (unofficially) uses ports 6881–6887, but may
run on other ports as well.
• All TCP connections are full duplex and point-to-point. Full duplex
means that traffic can go in both directions at the same time.
• Point-to-point means that each connection has exactly two end
points.
• TCP does not support multicasting or broadcasting.
• A TCP connection is a byte stream, not a message stream. Message
boundaries are not preserved end to end.
• When an application passes data to TCP, TCP may send it immediately or
buffer it (in order to collect a larger amount to send at once), at its
discretion.
• However, sometimes the application really wants the data to be sent
immediately.
• To force data out, TCP has the notion of a PUSH flag that is carried on
packets.
• The original intent was to let applications tell TCP implementations via the
PUSH flag not to delay the transmission.
• TCP_NODELAY in Windows and Linux.
• Urgent data: When an application has high priority data that should be
processed immediately.
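Setting TCP_NODELAY, mentioned above, looks like this with Python's socket module; the option disables Nagle's algorithm so small writes are sent without delay.

```python
import socket

# TCP_NODELAY asks the stack to send small segments immediately, matching
# the PUSH-style "send now" intent described above.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
nodelay = s.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY)
s.close()
```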
The TCP Protocol
• A key feature of TCP, and one that dominates the protocol design, is that
every byte on a TCP connection has its own 32-bit sequence number.
• The sending and receiving TCP entities exchange data in the form of
segments.
• A TCP segment consists of a fixed 20-byte header (plus an optional part)
followed by zero or more data bytes.
• The TCP software decides how big segments should be.
• Two limits restrict the segment size.
• First, each segment, including the TCP header, must fit in the 65,515-byte IP payload.
• Second, each link has an MTU (Maximum Transfer Unit).
• Modern TCP implementations perform path MTU discovery
• The basic protocol used by TCP entities is the sliding window protocol
with a dynamic window size.
• When a sender transmits a segment, it also starts a timer.
• When the segment arrives at the destination, the receiving TCP entity
sends back a segment (with data if any exist, and otherwise without)
• Segment includes an acknowledgement number equal to the next
sequence number it expects to receive and the remaining window
size.
• If the sender’s timer goes off before the acknowledgement is
received, the sender retransmits the segment.
The TCP Segment Header
TCP Segments
• Every segment begins with a fixed-format, 20-byte header.
• After the options, if any, up to 65,535 − 20 − 20 = 65,495 data bytes
may follow, where the first 20 refer to the IP header and the second
to the TCP header.
• Segments without any data are legal and are commonly used for
acknowledgements and control messages.
TCP Header fields
• The Source port and Destination port fields identify the local end
points of the connection.
• The Sequence number and Acknowledgement number fields keep
track of segments and ACKs.
• ACK Number specifies the next in-order byte expected
• It uses a cumulative acknowledgement.
• The TCP header length tells how many 32-bit words are contained in
the TCP header.
• Next comes a 4-bit field that is not used.
• Now come eight 1-bit flags.
• CWR and ECE are used to signal congestion when ECN (Explicit Congestion Notification) is used.
• ECE is set to signal an ECN-Echo to a TCP sender to tell it to slow down when the TCP receiver gets
a congestion indication from the network.
• CWR is set to signal Congestion Window Reduced from the TCP sender to the TCP receiver so that
it knows the sender has slowed down and can stop sending the ECN-Echo.
• URG is set to 1 if the Urgent pointer is in use.
• The ACK bit is set to 1 to indicate that the Acknowledgement number is valid. If ACK is 0, the
segment does not contain an acknowledgement, so the Acknowledgement number field is
ignored.
• The PSH bit indicates PUSHed data.
• The RST bit is used to abruptly reset a connection that has become confused due to a host crash or
some other reason. It is also used to reject an invalid segment or refuse an attempt to open a
connection.
• The SYN bit is used to establish connections.
• The FIN bit is used to release a connection.
• The Urgent pointer is used to indicate a byte offset from the current sequence number at
which urgent data are to be found.
• The Window size field tells how many bytes may be sent starting at the byte
acknowledged.
• A Checksum is also provided for extra reliability. It checksums the
header, the data, and a conceptual pseudoheader in exactly the same
way as UDP, except that the pseudoheader has the protocol number
for TCP (6) and the checksum is mandatory.
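A sketch of the checksum computation described above: the 16-bit one's-complement Internet checksum over a pseudoheader (source IP, destination IP, a zero byte, protocol 6, and the TCP length) plus the segment. The addresses and segment contents are made-up example values.

```python
import socket
import struct

def internet_checksum(data: bytes) -> int:
    """One's complement of the one's-complement sum of 16-bit words."""
    if len(data) % 2:
        data += b"\x00"                      # pad odd-length data
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # fold carry back in
    return ~total & 0xFFFF

def tcp_checksum(src_ip: str, dst_ip: str, segment: bytes) -> int:
    """Checksum over the conceptual pseudoheader plus the TCP segment."""
    pseudo = (socket.inet_aton(src_ip) + socket.inet_aton(dst_ip)
              + struct.pack("!BBH", 0, 6, len(segment)))  # zero, protocol 6, TCP length
    return internet_checksum(pseudo + segment)

# Example: a header-only segment with its checksum field (bytes 16-17) zeroed.
seg = bytearray(struct.pack("!HHIIBBHHH", 51000, 80, 1000, 0, 5 << 4, 0x02, 65535, 0, 0))
cs = tcp_checksum("10.0.0.1", "10.0.0.2", bytes(seg))
struct.pack_into("!H", seg, 16, cs)              # insert the checksum
residual = tcp_checksum("10.0.0.1", "10.0.0.2", bytes(seg))  # verifies to 0
```

Recomputing the checksum over a segment that already carries a correct checksum yields 0, which is how a receiver verifies it.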
• The Options field provides a way to add extra facilities not covered by
the regular header.
• MSS (Maximum Segment Size)
• Timestamp
• SACK (Selective ACKnowledgement)
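The fixed 20-byte header can be packed and unpacked with Python's struct module; the port, sequence, and window values below are arbitrary examples.

```python
import struct

# Pack a 20-byte TCP header (a SYN from port 51000 to port 80) and unpack
# it again; all field values are arbitrary examples.
header = struct.pack(
    "!HHIIBBHHH",
    51000, 80,       # source port, destination port
    1000, 0,         # sequence number, acknowledgement number
    5 << 4,          # header length: 5 x 32-bit words, in the high nibble
    0b00000010,      # flags: SYN set, all others clear
    65535,           # window size
    0, 0,            # checksum, urgent pointer
)

src, dst, seq, ack, offset, flags, window, csum, urgent = struct.unpack("!HHIIBBHHH", header)
header_words = offset >> 4           # 5 words = 20 bytes, so no options
syn_set = bool(flags & 0x02)
```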
TCP Connection Establishment
• Connections are established in TCP by means of the three-way handshake.
• The CONNECT primitive sends a TCP segment with the SYN bit on and ACK
bit off and waits for a response.
• When this segment arrives at the destination, the TCP entity there checks
to see if there is a process that has done a LISTEN on the port given in the
Destination port field.
• If not, it sends a reply with the RST bit on to reject the connection.
• If some process is listening to the port, that process is given the incoming
TCP segment.
• It can either accept or reject the connection. If it accepts, an
acknowledgement segment is sent back.
• However, there is a vulnerability in implementing the three-way
handshake: a malicious sender can tie up resources on a host by
sending a stream of SYN segments and never following through to
complete the connection.
• This attack is called a SYN flood. It is commonly used in DoS/DDoS attacks.
TCP Connection Release
• To release a connection, either party can send a TCP segment with
the FIN bit set, which means that it has no more data to transmit.
• When the FIN is acknowledged, that direction is shut down for new
data.
• Data may continue to flow indefinitely in the other direction,
however.
• When both directions have been shut down, the connection is
released.
• Normally, four TCP segments are needed to release a connection: one
FIN and one ACK for each direction.
TCP Connection Release
Both sides start in ESTABLISHED. The exchange proceeds as follows:
1. The client application calls close(); the client sends a FIN (FINbit = 1,
seq = x) and enters FIN_WAIT_1. It can no longer send data, but can still
receive.
2. The server receives the FIN, replies with an ACK (ACKbit = 1,
ACKnum = x + 1), and enters CLOSE_WAIT; it can still send data. On receiving
the ACK, the client enters FIN_WAIT_2 and waits for the server to close.
3. The server closes and sends its own FIN (FINbit = 1, seq = y), entering
LAST_ACK.
4. The client receives the FIN, replies with an ACK (ACKbit = 1,
ACKnum = y + 1), and enters TIME_WAIT, where it waits for 2 × the maximum
segment lifetime before moving to CLOSED. On receiving the final ACK, the
server moves to CLOSED.
TCP Connection Management Modeling
• TCP implements Three-way Hand shake mechanism for connection
establishment
TCP states visited by a client TCP
The client application initiates and later closes the connection:
• CLOSED → SYN_SENT: application initiates a connection; send SYN.
• SYN_SENT → ESTABLISHED: receive SYN & ACK; send ACK.
• ESTABLISHED → FIN_WAIT_1: application initiates close; send FIN.
• FIN_WAIT_1 → FIN_WAIT_2: receive ACK; send nothing.
• FIN_WAIT_2 → TIME_WAIT: receive FIN; send ACK.
• TIME_WAIT → CLOSED: wait 30 seconds.
TCP states visited by a server-side TCP
The server application creates a listen socket and later closes:
• CLOSED → LISTEN: application creates a listen socket.
• LISTEN → SYN_RCVD: receive SYN; send SYN & ACK.
• SYN_RCVD → ESTABLISHED: receive ACK; send nothing.
• ESTABLISHED → CLOSE_WAIT: receive FIN; send ACK.
• CLOSE_WAIT → LAST_ACK: send FIN.
• LAST_ACK → CLOSED: receive ACK; send nothing.
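The two connection-management state machines can be written down as transition tables and walked through; the event labels are paraphrases of the diagram edges, not API names.

```python
# The client and server FSMs as transition tables: (state, event) -> next state.
CLIENT = {
    ("CLOSED", "connect"):    "SYN_SENT",     # send SYN
    ("SYN_SENT", "synack"):   "ESTABLISHED",  # receive SYN & ACK, send ACK
    ("ESTABLISHED", "close"): "FIN_WAIT_1",   # send FIN
    ("FIN_WAIT_1", "ack"):    "FIN_WAIT_2",
    ("FIN_WAIT_2", "fin"):    "TIME_WAIT",    # send ACK
    ("TIME_WAIT", "timeout"): "CLOSED",       # wait ~30 seconds
}
SERVER = {
    ("CLOSED", "listen"):     "LISTEN",
    ("LISTEN", "syn"):        "SYN_RCVD",     # send SYN & ACK
    ("SYN_RCVD", "ack"):      "ESTABLISHED",
    ("ESTABLISHED", "fin"):   "CLOSE_WAIT",   # send ACK
    ("CLOSE_WAIT", "close"):  "LAST_ACK",     # send FIN
    ("LAST_ACK", "ack"):      "CLOSED",
}

def run(table, events, state="CLOSED"):
    trace = [state]
    for e in events:
        state = table[(state, e)]
        trace.append(state)
    return trace

client_trace = run(CLIENT, ["connect", "synack", "close", "ack", "fin", "timeout"])
server_trace = run(SERVER, ["listen", "syn", "ack", "fin", "close", "ack"])
```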
TCP Sliding Window
• Window management in TCP
decouples the issues of
acknowledgement of the correct
receipt of segments and receiver
buffer allocation.
• When the window is 0, the sender may not normally send segments,
with two exceptions.
• First, urgent data may be sent, for example, to allow the user to kill the
process running on the remote machine.
• Second, the sender may send a 1-byte segment to force the receiver to
reannounce the next byte expected and the window size.
• This packet is called a window probe.
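The zero-window rules above can be sketched as a tiny decision function; the 1460-byte MSS is just a typical value.

```python
# What a sender may transmit given the advertised window.
MSS = 1460

def sender_action(advertised_window, urgent=False):
    if advertised_window > 0:
        return ("data", min(advertised_window, MSS))
    if urgent:
        return ("urgent", 1)        # exception 1: urgent data may still be sent
    return ("window probe", 1)      # exception 2: 1-byte probe forces a window update

open_win = sender_action(8192)
closed = sender_action(0)
killed = sender_action(0, urgent=True)
```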
Silly window syndrome
• This problem occurs when data
are passed to the sending TCP
entity in large blocks, but an
interactive application on the
receiving side reads data only 1
byte at a time.
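One standard receiver-side cure for the silly window syndrome is Clark's solution: do not advertise a window update until a full MSS, or half the receive buffer (whichever is smaller), can be accepted. A sketch, with illustrative MSS and buffer sizes:

```python
# Clark's receiver-side fix: only advertise a window update once a full
# MSS (or half the receive buffer, if smaller) can be accepted.
MSS = 1460
BUFFER = 8192

def advertised_window(free_space, last_advertised=0):
    if free_space >= min(MSS, BUFFER // 2):
        return free_space           # meaningful update: open the window
    return last_advertised          # keep it closed rather than advertise 1 byte

tiny = advertised_window(1)         # interactive app read only 1 byte: stay closed
wide = advertised_window(4096)      # plenty of room now: open the window
```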
TCP Timer Management
• RTO (Retransmission TimeOut).
• When a segment is sent, a retransmission timer is started.
• If the segment is acknowledged before the timer expires, the timer is
stopped.
• If, on the other hand, the timer goes off before the acknowledgement
comes in, the segment is retransmitted (and the timer is started
again).
Calculating RTO
• For each connection, TCP maintains a variable, SRTT (Smoothed
Round-Trip Time). On each new round-trip measurement R, it is updated as

SRTT = α × SRTT + (1 − α) × R

where α is a smoothing factor that determines how quickly the old values are
forgotten. Typically, α = 7/8. This kind of formula is an EWMA (Exponentially
Weighted Moving Average).
• The timeout value is made sensitive to the variance in round-trip times as
well as the smoothed round-trip time.
• This change requires keeping track of another smoothed variable, RTTVAR
(Round-Trip Time VARiation), which is updated using the formula

RTTVAR = β × RTTVAR + (1 − β) × |SRTT − R|

where typically β = 3/4. The retransmission timeout is then set to

RTO = SRTT + 4 × RTTVAR
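A worked example of the update rules, assuming the standard Jacobson / RFC 6298 parameters α = 7/8 and β = 3/4 with RTO = SRTT + 4 × RTTVAR; the RTT samples are invented.

```python
# RFC 6298 order: update RTTVAR with the old SRTT, then SRTT, then RTO.
ALPHA, BETA = 7 / 8, 3 / 4

def update(srtt, rttvar, r):
    rttvar = BETA * rttvar + (1 - BETA) * abs(srtt - r)
    srtt = ALPHA * srtt + (1 - ALPHA) * r
    return srtt, rttvar, srtt + 4 * rttvar

srtt, rttvar = 100.0, 25.0                 # milliseconds; invented starting values
for sample in [110.0, 90.0, 300.0]:        # the 300 ms outlier inflates RTTVAR
    srtt, rttvar, rto = update(srtt, rttvar, sample)
```

Note how the 300 ms outlier moves SRTT only modestly but drives RTO well above it via RTTVAR.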
TCP Congestion Control
• A transport protocol using an AIMD (Additive Increase Multiplicative
Decrease) control law in response to binary congestion signals from
the network will converge to a fair and efficient bandwidth
allocation.
• TCP congestion control implements this approach using a window, with
packet loss as the binary signal. To do so, TCP maintains a
congestion window (cwnd) whose size is the number of bytes the sender
may have in the network at any time.
TCP congestion-control algorithm
• The algorithm has three major components:
• (1) Slow start,
• (2) Congestion avoidance, and
• (3) Fast recovery.
• Slow start and congestion avoidance are mandatory components of
TCP, differing in how they increase the size of cwnd in response to
received ACKs.
• Slow start increases cwnd exponentially, roughly doubling it every
RTT, while congestion avoidance increases it linearly, by about one
MSS per RTT.
Slow Start Phase
• The slow start phase is a crucial part of TCP congestion control, designed to efficiently utilize
available bandwidth while avoiding network congestion. During this phase, a TCP sender gradually
increases its congestion window size, which determines how many packets it can have in flight at
any given time.
• Initialization: The sender starts with a conservative estimate of the available bandwidth, typically
by setting the congestion window (cwnd) to a small value, often just one or two segments.
• Exponential Growth: In each round-trip time (RTT), the sender doubles its congestion window
size. This means that with each successful receipt of ACKnowledgment for sent packets, the
sender increases the number of packets it can send by a factor of 2. This exponential growth
quickly ramps up the amount of data being sent.
• Congestion Detection: While in the slow start phase, the sender keeps track of any signs of
congestion, such as packet loss or the receipt of duplicate ACKs. If it detects congestion, it
transitions to the congestion avoidance phase.
• Threshold: Once the congestion window size reaches a predefined threshold value (often referred
to as ssthresh or slow start threshold), the sender exits the slow start phase and enters
congestion avoidance.
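The exponential growth of slow start can be traced in a few lines (cwnd in MSS units; the ssthresh of 16 is illustrative):

```python
# Slow start: cwnd doubles each RTT until it reaches ssthresh.
ssthresh = 16                # illustrative threshold, in MSS units
cwnd = 1
history = [cwnd]
while cwnd < ssthresh:
    cwnd *= 2                # one MSS per ACK ~ doubling per RTT
    history.append(cwnd)
```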
Congestion avoidance
• Congestion avoidance is a critical phase in TCP congestion control algorithms, aimed at regulating data transmission to
maintain network stability and prevent congestion collapse. During this phase, TCP gradually increases its sending rate to
make efficient use of available network bandwidth while being mindful not to overwhelm the network.
Here's how congestion avoidance typically works:
• Slow Start: TCP begins by ramping up its transmission rate exponentially, quickly probing the available bandwidth until it
detects congestion, typically signaled by packet loss or other indicators like ECN (Explicit Congestion Notification)
markings.
• Congestion Detection: When TCP detects congestion, it switches from the exponential growth phase to the congestion
avoidance phase. It interprets packet loss as a sign of congestion and reduces its transmission rate accordingly. TCP
assumes that congestion occurred due to the network being unable to handle the current transmission rate.
• Additive Increase, Multiplicative Decrease (AIMD): In congestion avoidance, TCP follows the AIMD principle. It increases
its congestion window size additively, typically by one MSS (Maximum Segment Size) for each round-trip time (RTT) that
passes without congestion. In case of congestion, it reduces its congestion window multiplicatively, often by halving it. This
behavior helps TCP to regulate its transmission rate more conservatively in response to network conditions.
• Stable Operation: During congestion avoidance, TCP strives to achieve a stable and fair share of available bandwidth while
minimizing the risk of congestion collapse. By adjusting its transmission rate dynamically based on network feedback, TCP
aims to maintain an optimal balance between efficiency and network responsiveness.
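The AIMD rule from the bullets above, additive increase of one MSS per loss-free RTT and halving on loss, can be traced on an invented loss pattern:

```python
# AIMD with cwnd in MSS units: +1 per loss-free RTT, halve on loss.
def aimd_step(cwnd, loss):
    return max(1, cwnd // 2) if loss else cwnd + 1

cwnd = 10
trace = [cwnd]
for loss in [False, False, True, False]:   # invented loss pattern
    cwnd = aimd_step(cwnd, loss)
    trace.append(cwnd)
```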
Fast recovery
• The fast recovery phase is a part of TCP congestion control algorithms used to efficiently
handle packet loss events. When TCP detects packet loss, typically through triple
duplicate ACKs, it enters the fast recovery phase instead of going into a full congestion
avoidance phase, which would involve reducing the congestion window size significantly.
• During fast recovery:
• Duplicate ACKs: TCP sender receives three duplicate ACKs, indicating that a packet was
lost but subsequent packets were successfully received.
• Fast Retransmit: Upon receiving the third duplicate ACK, TCP assumes the missing packet
is lost and performs a fast retransmit, resending the packet without waiting for the
retransmission timer to expire.
• Congestion Window Adjustment: Instead of collapsing the congestion window
to one segment as slow start would after a timeout, TCP in fast recovery
halves the window (setting ssthresh to half the old cwnd) and then inflates
it by 1 MSS (Maximum Segment Size) for each additional duplicate ACK, since
each duplicate ACK signals that another segment has left the network.
• SACK Option: If TCP supports the Selective Acknowledgment (SACK) option, it may use
this information to retransmit only the missing segments instead of the entire window.
• Exit Criteria: TCP exits the fast recovery phase when it receives an ACK for the missing
packet. This indicates that the network is flowing again and the congestion control
algorithm can return to regular operation.
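A Reno-style sketch of the fast retransmit / fast recovery window arithmetic described above (values in MSS units; the starting cwnd of 20 is arbitrary):

```python
# Reno-style fast retransmit / fast recovery arithmetic (cwnd in MSS units).
MSS = 1

def on_triple_dup_ack(cwnd):
    ssthresh = max(2 * MSS, cwnd // 2)   # halve, with a floor of 2 MSS
    return ssthresh + 3 * MSS, ssthresh  # inflate by the 3 segments that left

def on_new_ack(ssthresh):
    return ssthresh                      # deflate; resume congestion avoidance

cwnd, ssthresh = on_triple_dup_ack(20)   # halve 20 -> ssthresh 10, cwnd 13
cwnd_after = on_new_ack(ssthresh)        # exit fast recovery at 10
```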
