
Notes by Professor Tanya Shrivastava

UNIT – 4, Computer Network, BCS-603


 Transport Layer

Transport Layer (Layer 4):

 End-to-end delivery of packets between hosts.


 The fourth layer of the OSI reference model is the Transport layer.
 The Network layer carries packets; when data moves up from the Network layer to the
Transport layer, those packets are handed over as segments.
 Flow control and error control are performed end to end, at the global level, rather than per link.
 The Transport layer provides services to the Application layer and takes services from the
Network layer.
 The data in the Transport layer is referred to as Segments.
 It is responsible for the end-to-end delivery of the complete message.
 The Transport layer also provides acknowledgement of successful data transmission
and re-transmits the data if an error is found.
 TCP and UDP are the Transport layer protocols. TCP is a connection-oriented protocol and
UDP is a connectionless protocol.
 Congestion control.

Data in the Transport Layer is called Segments.


Transport Layer is called the Heart of the OSI model.

 Transport Layer Protocols (TCP and UDP)

TCP (Connection-Oriented Protocol) | UDP (Connectionless Protocol)
---------------------------------- | -----------------------------
TCP (Transmission Control Protocol) is a connection-oriented protocol. | UDP (User Datagram Protocol) is a connectionless protocol.
TCP rearranges data packets into the correct order. | UDP has no fixed order; all packets are independent of each other.
The speed of TCP is slower. | The speed of UDP is faster.
TCP is heavyweight; it needs three packets to set up a socket connection before any user data can be sent. | UDP is lightweight; there is no connection tracking and no ordering of messages.
Acknowledgements are used. | No acknowledgements.
Uses the three-way handshake protocol: SYN, SYN-ACK, ACK. | No handshake protocol, because it is connectionless.
TCP is reliable, as it guarantees delivery of data to the destination. | Delivery of data to the destination cannot be guaranteed in UDP.
Retransmits data in case of data loss. | No retransmission of data.
Keeps track of lost packets. | Does not keep track of lost packets.
Data is sent in sequence. | Does not care whether packets or data arrive in order.
Point-to-point only. | Supports multicast and broadcast.
Examples: HTTP, HTTPS, FTP, etc. | Examples: DNS, DHCP, etc.
Used for: web browsing (HTTP), email (SMTP), file transfers (FTP). | Used for: video calls (Zoom), online gaming, live streaming.

Responsibilities of a Transport Layer:

 Process to Process Delivery


 Computers run many applications (e.g., Chrome, WhatsApp, Spotify) at the same time.
 The Transport Layer ensures data reaches the right app (not just the right computer).
 It uses port numbers (like apartment numbers in a building) to identify apps.


The Transport Layer uses port numbers to identify applications.

1. Port Numbers (Like Apartment Numbers in a Building)


 Each app has a unique port number (e.g., web servers use port 80, email uses port 25).
 Example:
o Source: Your browser (Chrome) sends a request from port 54321.
o Destination: The YouTube server receives it on port 443 (HTTPS).

2. Socket = IP Address + Port Number


 A socket is like a full address:
o IP Address → Finds the right computer.
o Port Number → Finds the right app inside that computer.
 Example (IP address : port number):
o 192.168.1.10:54321 → Your Chrome browser.
o 142.250.190.46:443 → YouTube’s server.
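As a quick illustration of the socket = IP address + port idea, the short Python sketch below binds a TCP socket and prints the (IP, port) pair the operating system assigned; the addresses are local and illustrative, not real servers.

```python
import socket

# A socket endpoint pairs an IP address with a port number.
# Create a TCP socket and bind it to the local machine; port 0 asks the
# OS to pick a free ephemeral port (like 54321 in the example above).
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.bind(("127.0.0.1", 0))

ip, port = sock.getsockname()   # the (IP, port) pair that identifies this socket
print(f"bound to {ip}:{port}")
sock.close()
```

Two sockets can share an IP address as long as their port numbers differ, which is exactly how one computer runs many networked applications at once.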

 Multiplexing and Demultiplexing

Multiplexing and demultiplexing are the services facilitated by the transport layer of the OSI
model. Multiplexers and Demultiplexers are abbreviated as MUX and DEMUX.

1. Multiplexing (at the Sender Side)

Multiplexing in the transport layer refers to the process of combining data from multiple applications
into a single stream that can be sent over the network.


2. Demultiplexing (at the Receiver Side)

Demultiplexing is the reverse process at the receiver’s end, where the transport layer separates the
incoming data and delivers it to the correct application.
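A minimal sketch of demultiplexing, assuming toy (destination port, payload) segments and hypothetical handler callbacks: the transport layer looks at the destination port of each incoming segment and hands the payload to whichever application registered that port.

```python
# Toy demultiplexer: destination port -> application callback.
handlers = {}

def register(port, app):
    """An application claims a port, like a web server claiming port 80."""
    handlers[port] = app

def demultiplex(segment):
    """segment is a (dest_port, payload) pair; deliver payload to the right app."""
    dest_port, payload = segment
    app = handlers.get(dest_port)
    if app is None:
        return f"no application on port {dest_port} (segment dropped)"
    return app(payload)

register(80, lambda data: "web server got: " + data)
register(53, lambda data: "DNS server got: " + data)

print(demultiplex((80, "GET /")))   # routed to the web handler
print(demultiplex((53, "query")))   # routed to the DNS handler
```

Multiplexing is the mirror image on the sender side: segments from many applications, each stamped with a source port, are merged into one stream for the Network layer.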

 Connection Management
Connection management refers to establishing, maintaining, and terminating network
connections between devices or systems to enable reliable data transfer.

Connection Establishment and Termination

To establish a connection, TCP needs a 3-way handshake; to terminate one, it needs a 4-way
handshake. So, here we will discuss the detailed process of both. We will cover the following:

 TCP Connection (A 3-way handshake)


 TCP Termination (A 4-way handshake)


1. Connection Establishment

 Ensures both sender and receiver are ready for data transfer.
 TCP uses a three-way handshake:
1. SYN: Client sends a connection request.
2. SYN-ACK: Server acknowledges and responds.
3. ACK: Client acknowledges the response, and connection is established.

 The diagram of a successful TCP connection showing the three handshakes is shown below:

2. Connection Termination

 Either side can initiate termination.


 TCP uses a four-step process:
1. FIN: One side signals it wants to terminate.
2. ACK: The other side acknowledges.
3. FIN: The second side signals it is done.
4. ACK: The original side acknowledges.

The diagram of a successful TCP termination showing the four handshakes is shown below:
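The two exchanges above can also be sketched as a small simulation; this is a toy walk-through of the message sequence, not a real TCP implementation.

```python
# Toy simulation of TCP connection setup (3-way) and teardown (4-way),
# following the step lists above.

def three_way_handshake():
    events = []
    events.append("client: SYN ->")      # 1. client requests a connection
    events.append("server: <- SYN-ACK")  # 2. server acknowledges and responds
    events.append("client: ACK ->")      # 3. client acknowledges; established
    return events, "ESTABLISHED"

def four_way_termination():
    events = []
    events.append("A: FIN ->")           # 1. one side signals it wants to terminate
    events.append("B: <- ACK")           # 2. the other side acknowledges
    events.append("B: <- FIN")           # 3. the second side signals it is done
    events.append("A: ACK ->")           # 4. the original side acknowledges; closed
    return events, "CLOSED"

steps, state = three_way_handshake()
print(len(steps), state)   # 3 ESTABLISHED
steps, state = four_way_termination()
print(len(steps), state)   # 4 CLOSED
```

Note that termination takes four steps because TCP is full duplex: each direction of the connection is closed independently with its own FIN/ACK pair.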


 Flow Control and Retransmission

Flow Control

 Ensures the sender doesn’t overwhelm the receiver.


 TCP uses the Sliding Window Protocol:
o Receiver tells the sender how much data it can accept (window size).
o Sender adjusts the data flow accordingly.

Retransmission

 Ensures reliable delivery by resending lost or corrupted packets.


 Triggered by:
o Timeouts: No ACK received in time.
o Duplicate ACKs: Triggers fast retransmit.
 TCP uses sequence numbers and ACKs to manage retransmissions.
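A toy sketch of timeout-driven retransmission, assuming a simulated network that drops the first copy of one segment. Real TCP works on byte streams with timers and cumulative ACKs, but the resend-until-acknowledged logic is the same idea.

```python
# Each segment is resent until its ACK arrives. The simulated network drops
# the first transmission of the segments named in drop_first_attempt_of.

def send_with_retransmission(segments, drop_first_attempt_of={2}):
    delivered, attempts = [], {}
    pending = list(segments)
    while pending:
        seq = pending.pop(0)
        attempts[seq] = attempts.get(seq, 0) + 1
        if seq in drop_first_attempt_of and attempts[seq] == 1:
            pending.append(seq)      # no ACK in time -> retransmit later
            continue
        delivered.append(seq)        # ACK received, segment is done
    return sorted(delivered), attempts

delivered, attempts = send_with_retransmission([1, 2, 3])
print(delivered)      # [1, 2, 3] -- all data eventually delivered
print(attempts[2])    # 2 -- segment 2 had to be sent twice
```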

 Window Management
This refers to how TCP manages the data transmission limits, based on two key windows:

1. Receive Window (rwnd) – Flow control.

 Set by the receiver to indicate how much data it can handle.

2. Congestion Window (cwnd) – Congestion control.

 Set by the sender based on network conditions.

The actual amount of data a sender can transmit is:

Effective Window = min (rwnd, cwnd)
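As a tiny worked example of this formula (byte counts are illustrative):

```python
# rwnd protects the receiver (flow control); cwnd protects the network
# (congestion control). The sender honours whichever is smaller.

def effective_window(rwnd, cwnd):
    return min(rwnd, cwnd)

print(effective_window(rwnd=65535, cwnd=4380))   # 4380: the network is the bottleneck
print(effective_window(rwnd=8192, cwnd=65535))   # 8192: the receiver is the bottleneck
```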

 TCP Congestion control


What is congestion?

Congestion is a state occurring in the network layer when the message traffic is so heavy that it
slows down network response time.

Effects of Congestion:

 As delay increases, performance decreases.


 If delay increases, retransmission occurs, making the situation worse.


Congestion control algorithms:

 Congestion Control is a mechanism that controls the entry of data packets into the
network, enabling a better use of a shared network infrastructure and avoiding
congestive collapse.
 Congestive-Avoidance Algorithms (CAA) are implemented at the TCP layer as the
mechanism to avoid congestive collapse in a network.

 There are two congestion control algorithms, which are as follows:


 Leaky Bucket Algorithm
 Token Bucket Algorithm

Leaky Bucket Algorithm

 The leaky bucket algorithm finds its use in the context of network traffic shaping or
rate-limiting.
 Leaky bucket and token bucket implementations are the two predominant traffic-shaping
algorithms.
 The algorithm controls the rate at which traffic is sent to the network and shapes
bursty traffic into a steady traffic stream.
 Its disadvantage, compared with the token bucket algorithm, is inefficient use of
available network resources.
 A large share of network resources, such as bandwidth, may go unused.

Let us consider an example to understand

Imagine a bucket with a small hole in the bottom. No matter at what rate water enters the
bucket, the outflow is at a constant rate. When the bucket is full, any additional water
entering spills over the sides and is lost.

Similarly, each network interface contains a leaky bucket, and the following steps are
involved in the leaky bucket algorithm:
1. When a host wants to send a packet, the packet is thrown into the bucket.
2. The bucket leaks at a constant rate, meaning the network interface transmits packets at
a constant rate.

3. Bursty traffic is converted into uniform traffic by the leaky bucket.


4. In practice, the bucket is a finite queue that outputs at a finite rate.
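The four steps above can be sketched as a small simulation; the bucket capacity and leak rate below are illustrative.

```python
# Leaky bucket sketch: arrivals fill a finite queue (the bucket); the output
# drains at a constant rate no matter how bursty the input is. Packets that
# arrive at a full bucket overflow and are lost.

def leaky_bucket(arrivals, capacity, leak_rate):
    """arrivals[t] = packets arriving at tick t; returns (sent per tick, dropped)."""
    queue, dropped, sent = 0, 0, []
    for incoming in arrivals:
        space = capacity - queue
        accepted = min(incoming, space)
        dropped += incoming - accepted   # overflow spills over and is lost
        queue += accepted
        out = min(queue, leak_rate)      # constant-rate leak
        queue -= out
        sent.append(out)
    return sent, dropped

sent, dropped = leaky_bucket(arrivals=[5, 0, 0, 3, 0], capacity=4, leak_rate=1)
print(sent)      # [1, 1, 1, 1, 1] -- bursty input becomes uniform output
print(dropped)   # 1 -- one packet overflowed the full bucket
```

The output is one packet per tick regardless of the input bursts, which is exactly the rigidity the token bucket algorithm below is designed to relax.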

Token bucket Algorithm

 The leaky bucket algorithm has a rigid output rate: the average rate, independent of how
bursty the input traffic is.
 In some applications, when large bursts arrive, the output should be allowed to speed up.
This calls for a more flexible algorithm, preferably one that never loses information.
The token bucket algorithm therefore finds its use in network traffic shaping or rate-
limiting.
 It is a control algorithm that indicates when traffic may be sent, based on the presence
of tokens in the bucket.
 The bucket contains tokens. Each token permits the sending of one packet of a
predetermined size, and tokens are removed from the bucket when packets are sent.
 When tokens are present, a flow is allowed to transmit traffic.
 No tokens means no flow may send its packets. Hence, a flow can transmit at up to its
peak burst rate only while there are enough tokens in the bucket.

Need of token bucket Algorithm:-

The leaky bucket algorithm enforces an output pattern at the average rate, no matter how
bursty the traffic is. So, in order to deal with bursty traffic, we need a flexible algorithm
so that data is not lost. One such algorithm is the token bucket algorithm.

Steps of this algorithm can be described as follows:

1. At regular intervals, tokens are thrown into the bucket.


2. The bucket has a maximum capacity.
3. If there is a ready packet, a token is removed from the bucket, and the packet is sent.
4. If there is no token in the bucket, the packet cannot be sent.
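These steps can likewise be sketched in a few lines; the capacity and refill rate are illustrative, and queueing of packets that find no token is omitted for brevity.

```python
# Token bucket sketch: tokens accumulate at a fixed rate up to the bucket's
# capacity; sending a packet spends one token, so saved-up tokens allow a
# burst, unlike the constant-rate leaky bucket.

def token_bucket(arrivals, capacity, refill_per_tick):
    tokens, sent = capacity, []                           # start with a full bucket
    for incoming in arrivals:
        tokens = min(capacity, tokens + refill_per_tick)  # step 1-2: regular refill, capped
        out = min(incoming, tokens)                       # steps 3-4: one token per packet
        tokens -= out
        sent.append(out)
    return sent

print(token_bucket(arrivals=[5, 0, 0, 5, 0], capacity=4, refill_per_tick=1))
# [4, 0, 0, 3, 0] -- bursts of up to 4 packets go out at once when tokens are saved up
```

Compare this with the leaky bucket output above: the same bursty input now produces bursty output, bounded by the tokens accumulated in the bucket.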

 Quality of Service
Quality-of-Service (QoS) refers to traffic control mechanisms that seek to either differentiate
performance based on application or network-operator requirements or provide predictable or
guaranteed performance to applications, sessions, or traffic aggregates. QoS is commonly
measured in terms of packet delay and packet losses of various kinds.
Need for QoS –
 Video and audio conferencing require bounded delay and loss rate.
 Video and audio streaming require a bounded packet loss rate; they may not be so sensitive to
delay.
 Time-critical applications (real-time control) in which bounded delay is considered to be an
important factor.


 Valuable applications should be provided better services than less valuable applications.

QoS Specification –

QoS requirements can be specified as:


1. Bandwidth
2. Delay
3. Loss
4. Jitter

 Bandwidth: The speed of a link. QoS can tell a router how to use bandwidth. For example,
assigning a certain amount of bandwidth to different queues for different traffic types.
 Delay: The time it takes for a packet to go from its source to its final destination. This is
often affected by queuing delay, which occurs during times of congestion, when a packet
waits in a queue before being transmitted. QoS enables organizations to avoid this by
creating a priority queue for certain types of traffic.
 Loss: The amount of data lost as a result of packet loss, which typically occurs due to
network congestion. QoS enables organizations to decide which packets to drop in this event.
 Jitter: The irregular speed of packets on a network as a result of congestion, which can result
in packets arriving late and out of sequence. This can cause distortion or gaps in audio and
video being delivered.

Headers (TCP Header Format)

 Source port: this is a 16 bit field that specifies the port number of the sender.
 Destination port: this is a 16 bit field that specifies the port number of the receiver.


 Sequence number: the sequence number is a 32 bit field that indicates how much data is
sent during the TCP session. When you establish a new TCP connection (3 way handshake),
the initial sequence number is a random 32 bit value. The receiver will use this sequence
number and send back an acknowledgment. Protocol analyzers like Wireshark will often use
a relative sequence number of 0, since it is easier to read than some high random number.
 Acknowledgment number: this 32 bit field is used by the receiver to request the next TCP
segment. This value will be the sequence number incremented by 1.
 DO: this is the 4 bit data offset field, also known as the header length. It indicates the length
of the TCP header so that we know where the actual data begins.
 RSV: these are 3 bits for the reserved field. They are unused and are always set to 0.
 Flags: there are 9 bits for flags; we also call them control bits. We use them to establish
connections, send data and terminate connections:
o URG: urgent pointer. When this bit is set, the data should be treated as priority over
other data.
o ACK: used for the acknowledgment.
o PSH: this is the push function. This tells an application that the data should be
transmitted immediately and that we don’t want to wait to fill the entire TCP
segment.
o RST: this resets the connection, when you receive this you have to terminate the
connection right away. This is only used when there are unrecoverable errors and it’s
not a normal way to finish the TCP connection.
o SYN: we use this for the initial three way handshake and it’s used to set the initial
sequence number.
o FIN: this finish bit is used to end the TCP connection. TCP is full duplex, so both
parties have to use the FIN bit to end the connection. This is the normal way to
end a connection.
 Window: the 16 bit window field specifies how many bytes the receiver is willing to receive.
It is used so the receiver can tell the sender that it would like to receive more data than what
it is currently receiving. It does so by specifying the number of bytes beyond the sequence
number in the acknowledgment field.
 Checksum: 16 bits are used for a checksum that covers the TCP header, the data and an IP pseudo header, to verify that the segment arrived intact.
 Urgent pointer: these 16 bits are used when the URG bit has been set, the urgent pointer is
used to indicate where the urgent data ends.
 Options: this field is optional and can be anywhere between 0 and 320 bits.
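As a sketch of the fixed 20-byte layout described above, the snippet below packs and then unpacks a hand-built header with Python's struct module; the port, sequence, and window values are illustrative, not captured traffic.

```python
import struct

# "!HHIIHHHH" = network byte order: source port (16), destination port (16),
# sequence (32), acknowledgment (32), data offset/reserved/flags (16),
# window (16), checksum (16), urgent pointer (16) -> 20 bytes total.

def parse_tcp_header(data):
    src, dst, seq, ack, off_rsv_flags, window, checksum, urg = struct.unpack(
        "!HHIIHHHH", data[:20])
    return {
        "src_port": src,
        "dst_port": dst,
        "seq": seq,
        "ack": ack,
        "data_offset": off_rsv_flags >> 12,   # header length in 32-bit words
        "flags": off_rsv_flags & 0x01FF,      # the 9 control bits
        "window": window,
        "checksum": checksum,
        "urgent_pointer": urg,
    }

# Hand-built SYN segment: ports 54321 -> 443, offset 5 (no options), SYN flag = 0x002.
hdr = struct.pack("!HHIIHHHH", 54321, 443, 1000, 0, (5 << 12) | 0x002, 65535, 0, 0)
fields = parse_tcp_header(hdr)
print(fields["dst_port"], fields["data_offset"], hex(fields["flags"]))
```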

UDP Header Format:


The following diagram represents the UDP Header Format-

1 Byte = 8 bits

2 Byte = 16 bits


1. Source Port-
 Source Port is a 16 bit (2 byte) field.
 It identifies the port of the sending application.

2. Destination Port-
 Destination Port is a 16 bit field.
 It identifies the port of the receiving application.

3. Length-
 Length is a 16 bit field.
 It identifies the combined length of UDP Header and Encapsulated data.

Length = Length of UDP Header + Length of encapsulated data

4. Checksum (UDP Checksum) -


 Checksum is a 16 bit field used for error control.
 It is calculated on UDP Header, encapsulated data and IP pseudo header.
 Checksum calculation is not mandatory in UDP.
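A short sketch of the 8-byte layout above, using Python's struct module; the ports and payload are illustrative, and a checksum of 0 marks "not computed", which UDP over IPv4 permits.

```python
import struct

# UDP header = source port (16) + destination port (16) + length (16) +
# checksum (16), packed in network byte order ("!HHHH").

def build_udp_header(src_port, dst_port, payload):
    length = 8 + len(payload)   # Length = UDP header (8 bytes) + encapsulated data
    checksum = 0                # 0 = checksum not computed (optional in IPv4)
    return struct.pack("!HHHH", src_port, dst_port, length, checksum)

payload = b"hello"
header = build_udp_header(12345, 53, payload)
src, dst, length, checksum = struct.unpack("!HHHH", header)
print(src, dst, length, checksum)   # 12345 53 13 0
```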

IMPORTANT QUESTIONS (UNIT-4)


 Define the functionalities of transport layer.(imp)
 Differentiate between multiplexing and demultiplexing. (imp)
 Define role of TCP and UDP in transport layer. (imp)
 Differentiate between network layer delivery and the transport layer delivery. (imp)
 Explain the TCP header and the working of TCP. Differentiate TCP and UDP. Discuss
the TCP/IP protocol suite in brief. (imp)
 Explain TCP/IP model with diagram and protocols in detail. (imp)
 Explain Token Bucket.
 Differentiate between connection oriented and connectionless protocol.
 Define TCP features and header in detail.
 Define connection management. Explain three-way handshaking using a diagram.
 Write about QoS parameters.
 UDP header format.

