Unit-4, BCS603, Computer Network
TCP: rearranges data packets into a specific order.
UDP: has no fixed order because all packets are independent of each other.

TCP: is heavyweight; it needs three packets to set up a socket connection before any user data can be sent.
UDP: is lightweight; there is no tracking, no connections, and no ordering of messages.

TCP: uses acknowledgements.
UDP: has no acknowledgements.

TCP: is reliable, as it guarantees delivery of data to the destination.
UDP: cannot guarantee delivery of data to the destination.

TCP: used for web browsing (HTTP), emails (SMTP), file transfers (FTP).
UDP: used for video calls (Zoom), online gaming, live streaming.
Multiplexing and demultiplexing are services provided by the transport layer of the OSI
model. Multiplexers and demultiplexers are abbreviated as MUX and DEMUX.
Multiplexing in the transport layer refers to the process of combining data from multiple applications
into a single stream that can be sent over the network.
Demultiplexing is the reverse process at the receiver’s end, where the transport layer separates the
incoming data and delivers it to the correct application.
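A minimal sketch of demultiplexing with UDP sockets, assuming two applications on the same host bound to the arbitrary illustrative ports 9001 and 9002: the operating system delivers each incoming datagram to the socket whose bound port matches the destination port in the segment header.

```python
import socket

# Two "applications" on the same host, distinguished only by their port numbers.
app_a = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
app_a.bind(("127.0.0.1", 9001))   # arbitrary illustrative port

app_b = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
app_b.bind(("127.0.0.1", 9002))   # arbitrary illustrative port

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"for app A", ("127.0.0.1", 9001))
sender.sendto(b"for app B", ("127.0.0.1", 9002))

# Demultiplexing: each datagram reaches only the socket bound to its destination port.
print(app_a.recvfrom(1024)[0])    # b'for app A'
print(app_b.recvfrom(1024)[0])    # b'for app B'
```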
Connection Management
Connection management refers to establishing, maintaining, and terminating network
connections between devices or systems to enable reliable data transfer.
TCP needs a 3-way handshake to establish a connection and a 4-way handshake to terminate it. So,
here we will discuss the detailed process TCP uses to build the 3-way handshake for connection
and the 4-way handshake for termination. Here, we will discuss the following:
1. Connection Establishment
Ensures both sender and receiver are ready for data transfer.
TCP uses a three-way handshake:
1. SYN: Client sends a connection request.
2. SYN-ACK: Server acknowledges and responds.
3. ACK: Client acknowledges the response, and connection is established.
The diagram of a successful TCP connection showing the three handshakes is shown below:
2. Connection Termination
The diagram of a successful TCP termination showing the four handshakes is shown below:
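A minimal sketch of how these handshakes look from an application's point of view, using Python sockets on the local machine (port 9000 is an arbitrary choice): the kernel performs the SYN / SYN-ACK / ACK exchange inside connect() and the listening socket, and the FIN exchanges of termination when each side calls close().

```python
import socket

# Server side: listen() makes the OS ready to answer incoming SYNs with SYN-ACK.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 9000))   # arbitrary illustrative port
server.listen(1)

# Client side: connect() sends the SYN; the kernels complete SYN-ACK and ACK for us.
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", 9000))

conn, addr = server.accept()       # returns a socket for the established connection
print("connection established with", addr)

# close() on each side triggers the FIN/ACK exchanges of the 4-way termination.
client.close()
conn.close()
server.close()
```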
Flow Control
Retransmission
Window Management
This refers to how TCP manages its data transmission limits, based on two key windows: the
receiver's advertised window (used for flow control) and the congestion window (used for
congestion control).
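One simplified way to express the interplay of the two windows (real TCP also accounts for bytes already in flight) is sketched below.

```python
def effective_send_window(cwnd_bytes, rwnd_bytes):
    # TCP may not send more data than either window allows at any moment.
    return min(cwnd_bytes, rwnd_bytes)

print(effective_send_window(cwnd_bytes=8000, rwnd_bytes=16000))  # 8000
```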
Congestion
Congestion is a state occurring in the network layer when the message traffic is so heavy that it
slows down network response time.
Effects of Congestion:
Congestion Control is a mechanism that controls the entry of data packets into the
network, enabling a better use of a shared network infrastructure and avoiding
congestive collapse.
Congestion-avoidance algorithms (CAA) are implemented at the TCP layer as the
mechanism to avoid congestive collapse in a network.
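As one concrete example of the idea, classic TCP congestion avoidance grows the congestion window additively each round trip and cuts it multiplicatively when loss is detected (AIMD). A toy sketch of that behaviour, with made-up window values:

```python
def aimd(cwnd, loss_detected, mss=1):
    """Toy additive-increase / multiplicative-decrease step (window in MSS units)."""
    if loss_detected:
        return max(mss, cwnd / 2)   # multiplicative decrease on congestion
    return cwnd + mss               # additive increase per round-trip time

cwnd = 10
for loss in [False, False, True, False]:
    cwnd = aimd(cwnd, loss)
print(cwnd)   # the window shrank after the loss, then grew again
```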
The leaky bucket algorithm finds its use in the context of network traffic shaping or
rate-limiting.
Leaky bucket and token bucket implementations are predominantly used as
traffic shaping algorithms.
This algorithm is used to control the rate at which traffic is sent to the network and
to shape bursty traffic into a steady traffic stream.
The disadvantage of the leaky-bucket algorithm is inefficient use of available network
resources: because the output rate is fixed, a large share of network resources such as
bandwidth may not be used effectively.
Imagine a bucket with a small hole in the bottom. No matter at what rate water enters the
bucket, the outflow is at a constant rate. When the bucket is full, additional water
entering spills over the sides and is lost.
Similarly, each network interface contains a leaky bucket, and the following steps are
involved in the leaky bucket algorithm (a code sketch follows the steps):
1. When a host wants to send a packet, the packet is thrown into the bucket.
2. The bucket leaks at a constant rate, meaning the network interface transmits packets at
a constant rate.
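A minimal sketch of these two steps, with a bucket capacity and leak rate chosen purely for illustration:

```python
from collections import deque

class LeakyBucket:
    """Toy leaky bucket; capacity and rate values are arbitrary illustrations."""

    def __init__(self, capacity_pkts, leak_rate_pkts_per_tick):
        self.capacity = capacity_pkts
        self.leak_rate = leak_rate_pkts_per_tick
        self.bucket = deque()

    def arrive(self, packet):
        # Step 1: the host throws the packet into the bucket.
        if len(self.bucket) < self.capacity:
            self.bucket.append(packet)
            return True
        return False                # bucket full: the packet "spills over" and is dropped

    def tick(self):
        # Step 2: the bucket leaks at a constant rate, whatever the arrival pattern.
        sent = []
        for _ in range(min(self.leak_rate, len(self.bucket))):
            sent.append(self.bucket.popleft())
        return sent

bucket = LeakyBucket(capacity_pkts=5, leak_rate_pkts_per_tick=2)
for i in range(8):                  # a burst of 8 packets arrives at once
    bucket.arrive(f"pkt{i}")
print(bucket.tick())                # only 2 packets leave per tick: ['pkt0', 'pkt1']
```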
The leaky bucket algorithm has a rigid output design at the average rate, independent of
how bursty the incoming traffic is.
In some applications, when large bursts arrive, the output is allowed to speed up. This
calls for a more flexible algorithm, preferably one that never loses information.
Therefore, the token bucket algorithm finds its use in network traffic shaping or rate-
limiting.
It is a control algorithm that indicates when traffic should be sent, based on the presence
of tokens in the bucket.
The bucket contains tokens. Each token represents a packet of a predetermined size.
Tokens in the bucket are removed in exchange for the ability to send a packet.
When tokens are present, a flow is allowed to transmit traffic; when there are no tokens, a
flow cannot send its packets. Hence, a flow can transmit traffic up to its peak
burst rate only if there are adequate tokens in the bucket.
The leaky bucket algorithm enforces an output pattern at the average rate, no matter how
bursty the traffic is. In order to deal with bursty traffic without losing data, we need a more
flexible algorithm; one such algorithm is the token bucket algorithm.
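A minimal sketch of the token bucket idea, with an arbitrary token fill rate and bucket size: a packet is sent only by spending a token, so bursts up to the bucket size are allowed while the long-term rate stays bounded.

```python
class TokenBucket:
    """Toy token bucket; rate and size values are arbitrary illustrations."""

    def __init__(self, rate_tokens_per_tick, bucket_size):
        self.rate = rate_tokens_per_tick
        self.size = bucket_size
        self.tokens = bucket_size        # start with a full bucket

    def tick(self):
        # Tokens are added at a constant rate, up to the bucket size.
        self.tokens = min(self.size, self.tokens + self.rate)

    def try_send(self, packet_cost=1):
        # A packet may be sent only if enough tokens are present; tokens are spent.
        if self.tokens >= packet_cost:
            self.tokens -= packet_cost
            return True
        return False                     # no tokens, so the flow must wait

tb = TokenBucket(rate_tokens_per_tick=1, bucket_size=4)
print([tb.try_send() for _ in range(6)])  # burst: the first 4 succeed, then tokens run out
tb.tick()                                 # one tick later, one token has been refilled
print(tb.try_send())                      # True
```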
Quality of Service
Quality-of-Service (QoS) refers to traffic control mechanisms that seek to either differentiate
performance based on application or network-operator requirements or provide predictable or
guaranteed performance to applications, sessions, or traffic aggregates. QoS is fundamentally
measured in terms of packet delay and losses of various kinds.
Need for QoS –
Video and audio conferencing require bounded delay and loss rate.
Video and audio streaming require a bounded packet loss rate; they may not be as sensitive to
delay.
Time-critical applications (real-time control) require bounded delay as an important factor.
More valuable applications should be provided better service than less valuable applications.
QoS Specification –
Bandwidth: The speed of a link. QoS can tell a router how to use bandwidth. For example,
assigning a certain amount of bandwidth to different queues for different traffic types.
Delay: The time it takes for a packet to go from its source to its end destination. This is
often affected by queuing delay, which occurs when a packet waits in a queue before being
transmitted during times of congestion. QoS enables organizations to avoid this by
creating a priority queue for certain types of traffic.
Loss: The amount of data lost as a result of packet loss, which typically occurs due to
network congestion. QoS enables organizations to decide which packets to drop in this event.
Jitter: The variation in packet delay caused by congestion, which can result
in packets arriving late and out of sequence. This can cause distortion or gaps in audio and
video being delivered (a simple way to quantify jitter is sketched below).
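A rough way to quantify jitter is to average the variation between consecutive packet delays (a simplification of the RFC 3550 formula; the delay values below are made up):

```python
def average_jitter(delays_ms):
    """Mean absolute difference between consecutive one-way packet delays."""
    diffs = [abs(b - a) for a, b in zip(delays_ms, delays_ms[1:])]
    return sum(diffs) / len(diffs)

# Made-up per-packet delays (ms): steady at first, then congestion causes variation.
print(average_jitter([20, 21, 20, 35, 50, 22]))  # a higher value means more jitter
```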
TCP Header Format
Source port: this is a 16 bit field that specifies the port number of the sender.
Destination port: this is a 16 bit field that specifies the port number of the receiver.
Sequence number: the sequence number is a 32 bit field that indicates how much data is
sent during the TCP session. When you establish a new TCP connection (3 way handshake)
then the initial sequence number is a random 32 bit value. The receiver will use this sequence
number and sends back an acknowledgment. Protocol analyzers like Wireshark will often use
a relative sequence number starting at 0, since it is easier to read than a high random number.
Acknowledgment number: this 32 bit field is used by the receiver to request the next TCP
segment. This value is the received sequence number incremented by 1 (the sequence number of
the next byte the receiver expects).
DO: this is the 4 bit data offset field, also known as the header length. It indicates the length
of the TCP header in 32-bit words, so that we know where the actual data begins.
RSV: these are 3 bits for the reserved field. They are unused and are always set to 0.
Flags: there are 9 bits for flags; we also call them control bits. We use them to establish
connections, send data and terminate connections:
o URG: urgent pointer. When this bit is set, the data should be treated as priority over
other data.
o ACK: used for the acknowledgment.
o PSH: this is the push function. This tells an application that the data should be
transmitted immediately and that we don’t want to wait to fill the entire TCP
segment.
o RST: this resets the connection, when you receive this you have to terminate the
connection right away. This is only used when there are unrecoverable errors and it’s
not a normal way to finish the TCP connection.
o SYN: we use this for the initial three way handshake and it’s used to set the initial
sequence number.
o FIN: this finish bit is used to end the TCP connection. TCP is full duplex, so both
parties will have to use the FIN bit to end the connection. This is the normal way to
end a connection.
Window: the 16 bit window field specifies how many bytes the receiver is willing to receive.
The receiver uses it to tell the sender how much data it is willing to accept, specified as the
number of bytes beyond the sequence number acknowledged in the acknowledgment field.
Checksum: 16 bits are used for a checksum to verify that the TCP header and data have not been corrupted in transit.
Urgent pointer: these 16 bits are used when the URG bit has been set; the urgent pointer
indicates where the urgent data ends.
Options: this field is optional and can be anywhere between 0 and 320 bits (0 to 40 bytes). A
sketch of parsing the fixed part of the header follows below.
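As a rough sketch of how the fields above are laid out, the fixed 20-byte part of the header can be unpacked with Python's struct module; the sample bytes are made up purely for illustration.

```python
import struct

def parse_tcp_header(raw: bytes) -> dict:
    """Unpack the fixed 20-byte TCP header (any options follow these bytes)."""
    (src_port, dst_port, seq, ack,
     offset_reserved, flags, window, checksum, urg_ptr) = struct.unpack("!HHIIBBHHH", raw[:20])
    return {
        "source_port": src_port,
        "destination_port": dst_port,
        "sequence_number": seq,
        "acknowledgment_number": ack,
        "data_offset_words": offset_reserved >> 4,   # header length in 32-bit words
        "flags": {                                    # the six classic control bits
            "URG": bool(flags & 0x20), "ACK": bool(flags & 0x10),
            "PSH": bool(flags & 0x08), "RST": bool(flags & 0x04),
            "SYN": bool(flags & 0x02), "FIN": bool(flags & 0x01),
        },
        "window": window,
        "checksum": checksum,
        "urgent_pointer": urg_ptr,
    }

# Made-up header: source port 80, destination port 12345, SYN flag set, data offset 5 words.
sample = struct.pack("!HHIIBBHHH", 80, 12345, 1000, 0, 5 << 4, 0x02, 65535, 0, 0)
print(parse_tcp_header(sample)["flags"]["SYN"])   # True
```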
(1 byte = 8 bits, so 2 bytes = 16 bits.)
UDP Header Format
1. Source Port-
Source Port is a 16 bit (2 byte) field.
It identifies the port of the sending application.
2. Destination Port-
Destination Port is a 16 bit field.
It identifies the port of the receiving application.
3. Length-
Length is a 16 bit field.
It identifies the combined length of the UDP header and the encapsulated data.