Unit 10
2 Addressing
In the Transport Layer, addressing is essential for directing data to the correct application process on the
destination machine. While the Network Layer delivers data between machines (host-to-host), the
Transport Layer provides end-to-end communication between the application processes running on those machines.
Key Concepts:
Service Access Points (SAPs): These are logical points where the application layer interacts with
the Transport Layer. SAPs provide an interface between the transport protocol and the application.
Addressing Mechanism: Just like IP addresses direct data packets to the correct machine, the
Transport Layer uses additional addressing to ensure data reaches the correct application on the
destination machine.
Transport Layer Addressing: Involves the Port Number associated with each application,
allowing multiple applications to communicate simultaneously over the same network interface.
Real-World Example:
Imagine you're sending an email from a mail client. The data generated by your email application needs
to reach the email server (i.e., the receiving machine). However, multiple services like HTTP (web
browsers), FTP (file transfers), and SMTP (email services) may be running on the same machine. The
Transport Layer uses port numbers to ensure the email data reaches the SMTP service on the receiving
server.
Port Numbers:
Well-Known Ports (0 to 1023): Used by common services like HTTP (Port 80), FTP (Port 21), and
SMTP (Port 25).
Registered Ports (1024-49151): Used by user applications and registered services.
Dynamic Ports (49152-65535): Used for temporary connections.
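As a quick illustration, the operating system's services database maps well-known service names to these port numbers. A minimal Python sketch (assuming the standard services database is present, as it is on most systems):

```python
import socket

# Look up the well-known port registered for each service name.
# getservbyname consults the system's services database (e.g. /etc/services).
for service in ("http", "ftp", "smtp"):
    port = socket.getservbyname(service, "tcp")
    print(f"{service} -> port {port}")
```

Running this prints the well-known ports listed above: 80 for HTTP, 21 for FTP, and 25 for SMTP.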
The Transport Layer is responsible for ensuring that data is delivered reliably to the correct application
on the destination machine. It employs several methods to ensure the integrity and order of data
transmission.
1. Error Control:
Goal: Detect and correct errors in data transmission.
Method: The Transport Layer uses error detection algorithms like checksum to detect errors
in the data packets. If an error is detected, the data is retransmitted.
Example: If a data packet is corrupted during transmission, the checksum will fail, and the data
packet will be re-sent.
2. Sequence Control:
Goal: Ensure data is received in the correct order.
Method: Each data segment is assigned a sequence number. The receiver uses these numbers to
reassemble the data in the correct order.
Example: If a message is split into multiple packets (segmentation), the sequence numbers will
allow the receiver to reassemble the packets in the correct order.
3. Loss Control:
Goal: Ensure that lost data packets are retransmitted.
Method: If the Transport Layer detects that some data segments were lost (through missing
sequence numbers), it requests retransmission of those missing segments.
Example: If a packet fails to arrive at the destination, the sequence number will indicate the
missing data, and the receiver will request its retransmission.
4. Duplication Control:
Goal: Prevent duplicate data packets from reaching the destination.
Method: The receiver identifies duplicate segments using the sequence numbers and discards
them.
Example: If a packet is received more than once (due to retransmission or network errors), the
sequence number helps the receiver detect and discard the duplicate.
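The reliability mechanisms above can be sketched in a few lines of Python. This is an illustrative toy, not a real TCP implementation: a checksum in the style TCP/UDP use for error control, plus a toy receiver that reorders, de-duplicates, and reports lost segments using sequence numbers:

```python
def internet_checksum(data: bytes) -> int:
    """16-bit ones'-complement checksum in the style of RFC 1071, as used
    by TCP, UDP, and IP. A corrupted segment fails verification."""
    if len(data) % 2:
        data += b"\x00"                       # pad odd-length data
    total = sum((data[i] << 8) | data[i + 1] for i in range(0, len(data), 2))
    while total >> 16:                        # fold carries back into 16 bits
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def process_segments(segments):
    """Toy receiver: (sequence_number, payload) pairs may arrive out of
    order, duplicated, or not at all. Returns the reassembled data and
    the sequence numbers that need retransmission."""
    received = {}
    for seq, payload in segments:
        if seq in received:
            continue                          # duplication control: discard repeats
        received[seq] = payload
    missing = [s for s in range(max(received) + 1) if s not in received]
    data = b"".join(received[s] for s in sorted(received))  # sequence control
    return data, missing                      # loss control: report the gaps

# Error control: flipping a single character changes the checksum.
print(hex(internet_checksum(b"hello")), hex(internet_checksum(b"hellp")))

# Segment 2 arrives twice and segment 1 is lost in transit.
data, missing = process_segments([(0, b"he"), (2, b"lo"), (2, b"lo"), (3, b"!!")])
print(data, missing)        # the receiver would request retransmission of 1
```

The sequence numbers, payloads, and segment format here are invented for the example; real TCP carries byte-offset sequence numbers in its header.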
Flow control at the Transport Layer ensures that data is transmitted at a rate that the receiver can handle.
It prevents the receiver from being overwhelmed with too much data.
The sliding window protocol is used to control the amount of data in transit. It ensures that the sender does not
transmit more data than the receiver can handle at one time.
1. Sender Side: The sender is allowed to send a certain number of packets (based on the window size)
without waiting for an acknowledgment from the receiver.
2. Receiver Side: The receiver can adjust the window size based on its buffer capacity, providing
backpressure to the sender if needed.
The window size can be dynamically adjusted depending on the receiver's capacity.
Acknowledgments from the receiver can trigger adjustments to the window size, allowing for more
efficient transmission.
The sender does not need to send a full window’s worth of data all at once, and the receiver can
acknowledge at any time.
Real-World Example:
Imagine a server that sends files to a client. The server uses the sliding window to control the number of
bytes it sends, ensuring the client’s buffer doesn’t overflow. If the client’s buffer is full, it will notify the
server, which will stop sending data until the client is ready for more.
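The window mechanism can be sketched as a short simulation. This is an illustrative toy (the `sliding_window_send` function and its strictly in-order acknowledgments are invented for the example):

```python
def sliding_window_send(total_packets, window_size):
    """Toy sliding-window sender: at most `window_size` packets may be
    unacknowledged ('in flight') at any moment. Returns the event log."""
    log = []
    base = 0                              # oldest unacknowledged packet
    next_seq = 0                          # next packet to transmit
    while base < total_packets:
        # Send while the window has room.
        while next_seq < total_packets and next_seq < base + window_size:
            log.append(f"send {next_seq}")
            next_seq += 1
        # The receiver acknowledges the oldest outstanding packet,
        # and the window slides forward by one.
        log.append(f"ack {base}")
        base += 1
    return log

events = sliding_window_send(total_packets=5, window_size=3)
print(events)
```

With a window of 3, the sender bursts packets 0-2, then each acknowledgment frees one window slot and lets one more packet go out.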
Connection-Oriented Communication:
This involves establishing a connection before data can be transferred. The most commonly used protocol
for this is TCP (Transmission Control Protocol).
1. Three-Way Handshake:
Step 1: The sender sends a connection request packet (in TCP, a SYN segment).
Step 2: The receiver acknowledges the request and responds with a confirmation (a SYN-ACK segment).
Step 3: The sender acknowledges the confirmation (an ACK segment), and the connection is established.
2. Connection Establishment: Before any data is exchanged, the devices must agree on the parameters
for the connection and ensure that the path is available for communication.
3. Connection Termination:
When data transfer is complete, the connection must be closed properly to free up resources.
In TCP, termination uses FIN and ACK segments exchanged in each direction; this normally takes
four segments, though the middle two can be combined, giving a three-way exchange.
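On a real system the operating system kernel performs this handshake. In the Python sketch below, connecting a TCP socket over the loopback interface triggers the SYN / SYN-ACK / ACK exchange; by the time connect() and accept() return, the connection is established and data can flow:

```python
import socket

# Set up a listening server socket on the loopback interface.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))         # port 0: let the OS pick a free port
server.listen(1)
host, port = server.getsockname()

# connect() triggers the three-way handshake; when it returns,
# the connection is established.
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect((host, port))
conn, addr = server.accept()

client.sendall(b"hello")              # data flows only after establishment
data = conn.recv(5)
print(data)

# Closing each side sends FIN segments, terminating the connection.
client.close(); conn.close(); server.close()
```

Tools such as tcpdump or Wireshark will show the SYN, SYN-ACK, and ACK segments this program generates on the loopback interface.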
2.6 Multiplexing
Multiplexing is the process of sending multiple data streams over a single communication link. It
improves the efficiency of data transmission by using available bandwidth effectively.
Types of Multiplexing:
1. Upward Multiplexing:
Multiple transport layer connections use the same network connection.
This reduces overhead and optimizes bandwidth usage.
Example: Several applications on a computer (e.g., web browsing, email) use the same network
connection to communicate with different servers.
2. Downward Multiplexing:
A single transport layer connection uses multiple network paths to improve throughput (data
delivery speed).
Example: In low-bandwidth environments, multiple virtual circuits are used to send data
simultaneously through different paths to increase the data transfer rate.
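The receiving side of upward multiplexing is demultiplexing: the Transport Layer inspects each incoming segment's destination port and hands the payload to the matching application. A toy sketch (the handler table, port choices, and segment format are invented for illustration):

```python
# Map destination port -> the application process listening on it.
handlers = {
    80: lambda payload: f"web server got {payload!r}",
    25: lambda payload: f"mail server got {payload!r}",
}

def demultiplex(segments):
    """Deliver each (dst_port, payload) segment to the matching handler;
    segments for ports with no listener are dropped."""
    results = []
    for dst_port, payload in segments:
        handler = handlers.get(dst_port)
        if handler is None:
            results.append(f"port {dst_port}: no listener, segment dropped")
        else:
            results.append(handler(payload))
    return results

# One network connection carries segments for several applications.
print(demultiplex([(80, "GET /"), (25, "MAIL FROM"), (9999, "???")]))
```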
Key Definitions
Error Control: Mechanism to detect and correct errors during data transmission.
Flow Control: Mechanism to regulate the flow of data between sender and receiver to prevent
overload.
Sliding Window: A flow control technique that allows data to be sent in chunks while controlling the
amount in transit.
Connection-Oriented Communication: Communication where a connection is established before
data transmission (e.g., TCP).
Multiplexing: Technique of combining multiple data streams over a single communication link.
Key Takeaways
Addressing in the Transport Layer ensures data reaches the correct application on the destination
machine.
Reliable Delivery involves mechanisms such as error control, sequence control, loss control, and
duplication control.
Flow Control prevents the receiver from being overwhelmed by regulating data transmission,
typically through sliding window protocols.
Connection Management uses three-way handshakes to establish and terminate connections.
Multiplexing improves transmission efficiency by allowing multiple data streams to use the same
connection.
Congestion Control aims to prevent network congestion, or alleviate it once it happens, by managing the
flow of data. Network congestion occurs when too much data is transmitted on the network,
overwhelming its capacity and leading to packet loss and delays.
TCP uses a combination of slow start, congestion avoidance, and congestion detection to handle
congestion control.
1. Slow Start:
Goal: Avoid overwhelming the network at the start of a connection.
Mechanism: The sender begins by transmitting data at a slow rate and exponentially increases
the transmission rate as acknowledgments are received. Initially, the congestion window (the
amount of data that can be sent before waiting for an acknowledgment) is small and grows
rapidly as the connection proceeds.
Real-World Example: If you're pouring water into a glass, you start slowly and increase the flow
only when you're sure the glass can hold more without overflowing.
2. Congestion Avoidance:
Goal: Prevent congestion by carefully managing the transmission rate.
Mechanism: When the congestion window reaches a threshold, TCP switches from exponential
increase to linear increase in the congestion window size, avoiding rapid increase that might
cause congestion.
Real-World Example: It's like adjusting the flow rate of a water pipe to avoid a flood once the
pipe reaches a certain capacity.
3. Congestion Detection:
Goal: React to congestion once it has been detected, typically signaled by packet loss (a retransmission timeout or duplicate acknowledgments).
Mechanism: When packet loss is detected, TCP reduces the congestion window by half
(multiplicative decrease) and slows down the transmission rate.
Real-World Example: If a traffic jam is detected, the driver reduces speed and adjusts their
driving pattern to avoid worsening the situation.
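The three phases can be traced with a toy model. This simplified sketch halves the window on loss (Reno-style) and ignores many details of real TCP; the function name and round-based timing are invented for illustration:

```python
def tcp_congestion_sim(rounds, ssthresh, loss_rounds=()):
    """Toy trace of TCP congestion control: exponential growth during slow
    start, linear growth past the threshold (ssthresh), and multiplicative
    decrease when a round experiences packet loss."""
    cwnd = 1                              # congestion window, in segments
    trace = []
    for rnd in range(rounds):
        trace.append(cwnd)
        if rnd in loss_rounds:            # congestion detection
            ssthresh = max(cwnd // 2, 1)
            cwnd = max(cwnd // 2, 1)      # multiplicative decrease
        elif cwnd < ssthresh:
            cwnd *= 2                     # slow start: exponential increase
        else:
            cwnd += 1                     # congestion avoidance: linear increase
    return trace

# Window doubles to 8, grows linearly, then halves when round 5 loses a packet.
print(tcp_congestion_sim(rounds=8, ssthresh=8, loss_rounds={5}))
```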
Quality of Service (QoS) refers to the performance characteristics of a network and how well it supports
the needs of various types of traffic (such as voice, video, and data).
QoS Parameters:
1. Reliability:
Definition: The ability of the system to consistently perform as expected, ensuring that data is
not lost and reaches the destination without errors.
Example: A VoIP (Voice over IP) service needs high reliability to avoid dropped calls.
2. Delay:
Definition: The time it takes for data to travel from the sender to the receiver. Lower delay is
critical for real-time applications.
Example: Video conferencing platforms require low delay to ensure a smooth conversation.
3. Jitter:
Definition: The variation in packet arrival times. High jitter can result in uneven data flow and is
problematic for real-time applications.
Example: In live streaming, high jitter can cause video freezes or audio drops.
4. Bandwidth:
Definition: The data rate supported by the network connection. Higher bandwidth means more
data can be transmitted in less time.
Example: Streaming services like Netflix require high bandwidth for high-definition video
streams.
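Delay and jitter are easy to compute from packet timestamps. In the sketch below the timestamps (in milliseconds) are invented for illustration, and jitter is taken simply as the variation between consecutive packets' delays:

```python
# Send and arrival timestamps (milliseconds) for five packets; the
# values are illustrative, not from a real capture.
send_times    = [0, 20, 40, 60, 80]
arrival_times = [50, 72, 88, 115, 130]

# Delay: time each packet spent in transit.
delays = [a - s for s, a in zip(send_times, arrival_times)]

# Jitter: variation in delay between consecutive packets.
variations = [abs(delays[i] - delays[i - 1]) for i in range(1, len(delays))]
mean_jitter = sum(variations) / len(variations)

print(delays, variations, mean_jitter)
```

Even though the average delay here is acceptable, the per-packet variation is what would cause audio drops in a real-time stream.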
Techniques for Improving QoS:
1. Over-Provisioning: Adding extra bandwidth and router capacity to ensure that the network can
handle high loads without degrading performance.
2. Buffering: Storing data temporarily before it's sent or processed, which helps smooth out delays.
3. Scheduling: Prioritizing certain types of data (e.g., voice traffic over file downloads) to ensure time-
sensitive data gets through first.
Scheduling Techniques:
FIFO Queuing: Packets are processed in the order they arrive. It can lead to delays if the queue gets
too full.
Priority Queuing: Packets are classified into priority levels, and higher-priority packets are
processed first.
Weighted Fair Queuing (WFQ): Packets are assigned weights based on their priority, and each
queue is served proportionally to its weight.
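Priority queuing can be sketched with a heap. This is a toy model (the packet names and priority values are invented for illustration); the counter preserves FIFO order among packets of equal priority:

```python
import heapq
from itertools import count

queue = []
order = count()                 # tie-breaker: FIFO within a priority level

def enqueue(priority, packet):
    """Lower priority number = more urgent traffic."""
    heapq.heappush(queue, (priority, next(order), packet))

enqueue(2, "file chunk A")
enqueue(0, "voice frame 1")     # voice traffic gets the highest priority
enqueue(2, "file chunk B")
enqueue(0, "voice frame 2")

# Drain the queue: higher-priority packets are always processed first.
sent = [heapq.heappop(queue)[2] for _ in range(len(queue))]
print(sent)
```

Both voice frames leave the queue before either file chunk, even though they were enqueued later.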
TCP uses a sliding window mechanism to manage the flow of data between the sender and receiver,
optimizing data transfer rates while preventing congestion.
The sender has a window size that determines how much data can be sent without receiving an
acknowledgment.
The receiver also has a window size that indicates how much data it can handle at one time.
Key Features:
1. Window Size Adjustments: The size of the window can be increased or decreased based on the
receiver’s ability to handle data.
2. Acknowledgments: When the receiver gets the data, it sends an acknowledgment, allowing the
sender to move the window and send more data.
3. Silly Window Syndrome: This happens when the receiver reads data in very small amounts at a time
(e.g., 1 byte), causing the sender to transmit many tiny, inefficient segments. A common remedy
(Clark's solution) is for the receiver to delay advertising new window space until a sufficient
amount of buffer space is available.
Real-World Example:
Imagine a water tank (receiver) filling up from a hose (sender). The hose can only pour water at a certain
rate, and the tank can only handle a certain amount at a time. The sliding window controls how much
water is sent at once, adjusting based on the tank's current capacity.
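A receiver-side remedy for the silly window syndrome can be sketched in the style of Clark's solution (simplified; the exact threshold rule below is an assumption for illustration): advertise a zero window until a worthwhile amount of buffer space is free.

```python
def advertise_window(buffer_size, free_space, mss):
    """Clark-style receiver sketch: advertise a zero window until the free
    buffer space reaches at least one maximum segment size (or half the
    buffer), so the sender never sends tiny, inefficient segments."""
    threshold = min(mss, buffer_size // 2)
    return free_space if free_space >= threshold else 0

# Receiver frees its buffer one byte at a time...
print(advertise_window(buffer_size=4096, free_space=1, mss=1460))
# ...but only advertises once a worthwhile amount of space is available.
print(advertise_window(buffer_size=4096, free_space=2048, mss=1460))
```

The first call advertises 0 (forcing the sender to wait), while the second advertises the full 2048 bytes of free space.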
2.10 Ports
In networking, a port is a communication endpoint used by the Transport Layer to identify the specific
process or application running on a machine.
Types of Ports:
1. Well-Known Ports (0-1023): Reserved for standard services like HTTP (Port 80), FTP (Port 21),
and SMTP (Port 25).
Example: When you visit a website (HTTP), your browser connects to port 80 on the web server.
2. Registered Ports (1024-49151): Used for user applications and custom services.
Example: When you run a custom service, it might use a port like 5000.
3. Dynamic or Private Ports (49152-65535): Used for temporary or ephemeral connections, usually
by client applications.
Example: When you open a new web page, the browser might dynamically assign a port number
from this range for the session.
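Ephemeral port assignment is easy to observe: binding a socket to port 0 asks the operating system to choose a port from its dynamic range (the exact range varies by OS; Linux, for example, defaults to 32768-60999 rather than the IANA range above):

```python
import socket

# Port 0 is a request for an OS-assigned ephemeral port.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.bind(("127.0.0.1", 0))
host, port = sock.getsockname()
print(f"OS assigned ephemeral port {port}")
sock.close()
```

Whatever port is chosen, it is a 16-bit value above the well-known range, so it falls between 1024 and 65535.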
Port Usage in Communication:
TCP/UDP Ports: These are used by both TCP and UDP protocols to direct data to the correct process
on a machine.
Port Numbering: Port numbers are represented as 16-bit integers, meaning there are 65,536
possible ports.
Real-World Example:
Consider sending an email. The SMTP server on the destination machine listens on port 25, while your
email client connects to it through this port to send the message.
Key Definitions
Congestion Control: Mechanisms used to avoid or handle network congestion by controlling the
flow of data.
Quality of Service (QoS): The overall performance of a network, particularly in terms of delay, jitter,
bandwidth, and reliability.
Sliding Window: A flow control mechanism that helps manage the amount of data in transit at any
given time.
Port Number: A unique identifier used by TCP/UDP protocols to direct data to a specific application
on a machine.
Key Takeaways
Congestion Control uses methods like slow start and congestion detection to prevent and handle
congestion on the network.
QoS ensures that different types of traffic (e.g., video, voice, data) receive appropriate priority for
optimal performance.
TCP Window Management uses the sliding window technique to regulate data flow between sender
and receiver, ensuring efficient and reliable transmission.
Ports are used by the Transport Layer to deliver data to the correct application or process on a
machine.