Datalink Layer
o In the OSI model, the data link layer is the 6th layer from the top and the 2nd layer from the bottom.
o The communication channels that connect adjacent nodes are known as links, and in order to move a
datagram from source to destination, the datagram must be moved across each individual link in the path.
o The main responsibility of the Data Link Layer is to transfer the datagram across an individual link.
o The Data Link Layer protocol defines the format of the packets exchanged between the nodes, as well as
actions such as error detection, retransmission, flow control, and random access.
o The Data Link Layer protocols are Ethernet, Token Ring, FDDI and PPP.
o An important characteristic of the Data Link Layer is that a datagram can be handled by different link-layer
protocols on different links in a path. For example, a datagram may be handled by Ethernet on the first link
and by PPP on the second link.
o Framing & Link access: Data Link Layer protocols encapsulate each network frame within a Link
layer frame before the transmission across the link. A frame consists of a data field in which a
network layer datagram is inserted and a number of data fields. It specifies the structure of the
frame as well as a channel access protocol by which frame is to be transmitted over the link.
o Reliable delivery: The Data Link Layer can provide a reliable delivery service, i.e., transmit the network-layer
datagram without error. A reliable delivery service is accomplished with retransmissions and
acknowledgements. The data link layer mainly provides reliable delivery over links with high error rates,
where an error can be corrected locally, on the link at which it occurs, rather than forcing an end-to-end
retransmission of the data.
o Flow control: A receiving node can receive frames at a faster rate than it can process them.
Without flow control, the receiver's buffer can overflow and frames can get lost. To overcome
this problem, the data link layer uses flow control to prevent the sending node on one side of
the link from overwhelming the receiving node on the other side of the link.
o Error detection: Errors can be introduced by signal attenuation and noise. The Data Link Layer
protocol provides a mechanism to detect one or more errors. This is achieved by adding error-detection
bits to the frame so that the receiving node can perform an error check.
o Error correction: Error correction is similar to error detection, except that the receiving node not
only detects the errors but also determines where in the frame the errors have occurred.
o Half-Duplex & Full-Duplex: In a Full-Duplex mode, both the nodes can transmit the data at the
same time. In a Half-Duplex mode, only one node can transmit the data at the same time.
The data link layer in the OSI (Open Systems Interconnection) model lies between the physical layer
and the network layer. This layer converts the raw transmission facility provided by the physical layer into a
reliable and error-free link.
The main functions and design issues of this layer are discussed below.
In the OSI Model, each layer uses the services of the layer below it and provides services to the layer
above it. The data link layer uses the services offered by the physical layer. The primary function of this
layer is to provide a well-defined service interface to the network layer above it.
The services provided can be of three types −
• Unacknowledged connectionless service
• Acknowledged connectionless service
• Acknowledged connection-oriented service
Framing
The data link layer encapsulates each data packet from the network layer into a frame that is then
transmitted. Each frame has three parts −
• Frame Header
• Payload field that contains the data packet from the network layer
• Trailer
Error Control
The data link layer ensures an error-free link for data transmission. The issues it caters to with respect to
error control are discussed in the Error Control section below.
Flow Control
The data link layer regulates flow control so that a fast sender does not drown a slow receiver. When the
sender sends frames at very high speed, a slow receiver may not be able to handle them, and there will be
frame losses even if the transmission is error-free. The two common approaches to flow control are
Stop-and-Wait and Sliding Window, described below.
When a data frame (Layer-2 data) is sent from one host to another over a single medium, it is required that
the sender and receiver work at the same speed. That is, the sender sends at a speed at which the
receiver can process and accept the data. What if the speed (hardware/software) of the sender or receiver
differs? If the sender sends too fast, the receiver may be overloaded (swamped) and data may be lost.
• Stop and Wait
In this flow control mechanism, the sender sends one frame and waits for an acknowledgement
before sending the next frame. The sender sits idle while waiting, which leaves the link underutilized.
• Sliding Window
In this flow control mechanism, both sender and receiver agree on the number of data frames
after which the acknowledgement should be sent. As the Stop-and-Wait mechanism wastes
resources, this protocol tries to make use of the underlying resources as much as possible. A
minimal sketch of this mechanism follows this list.
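As referenced above, the sliding-window idea can be illustrated with a minimal Python sketch. The
WINDOW_SIZE value and the send_frame / wait_for_ack helpers are illustrative assumptions, not part of any
standard API; a real implementation would also handle timeouts and retransmission (covered under Error
Control below).

    WINDOW_SIZE = 4   # number of frames that may be outstanding at once (assumed value)

    def sliding_window_send(frames, send_frame, wait_for_ack):
        """send_frame(seq, data) transmits one frame; wait_for_ack() returns the
        sequence number of the next frame the receiver expects (cumulative ACK)."""
        base = 0                  # oldest unacknowledged frame
        next_seq = 0              # next frame to transmit
        while base < len(frames):
            # Keep the pipe full: send while the window has room and data remains.
            while next_seq < base + WINDOW_SIZE and next_seq < len(frames):
                send_frame(next_seq, frames[next_seq])
                next_seq += 1
            # Each cumulative ACK slides the window forward.
            base = max(base, wait_for_ack())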
Error Control
When a data frame is transmitted, there is a probability that it may be lost in transit or received
corrupted. In both cases, the receiver does not receive the correct data frame and the sender does not
know anything about the loss. In such cases, both sender and receiver are equipped with protocols that
help them detect transit errors such as the loss of a data frame. Then, either the sender retransmits
the data frame or the receiver requests that the previous data frame be resent.
• Error detection - The sender and receiver, either or both, must be able to ascertain that an error
occurred in transit.
• Positive ACK - When the receiver receives a correct frame, it should acknowledge it.
• Negative ACK - When the receiver receives a damaged frame or a duplicate frame, it sends a
NACK back to the sender, and the sender must retransmit the correct frame.
• Retransmission - The sender maintains a clock and sets a timeout period. If an acknowledgement of
a previously transmitted data frame does not arrive before the timeout, the sender retransmits the
frame, assuming that the frame or its acknowledgement was lost in transit.
There are three techniques that the data link layer may deploy to control errors through Automatic
Repeat reQuest (ARQ): Stop-and-Wait ARQ, Go-Back-N ARQ, and Selective-Repeat ARQ.
Stop-and-Wait ARQ
In Stop-and-Wait ARQ, the sender transmits one frame, starts a timer, and waits for its acknowledgement
before sending the next frame; if the acknowledgement does not arrive before the timeout, the frame is
retransmitted. Until the acknowledgement is received, the sender sits idle and does nothing, so this
mechanism does not utilize the resources at their best. A minimal sketch of the sender side follows.
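A minimal Python sketch of the Stop-and-Wait ARQ sender. The send_frame and receive_ack helpers and the
timeout value are assumptions for illustration only.

    def stop_and_wait_send(frames, send_frame, receive_ack, timeout=1.0):
        """receive_ack(timeout) is assumed to return the acknowledged sequence
        number, or None if the timer expires before an ACK arrives."""
        for seq, data in enumerate(frames):
            while True:
                send_frame(seq, data)           # transmit one frame
                ack = receive_ack(timeout)      # sit idle until ACK or timeout
                if ack == seq:
                    break                       # acknowledged: move on to the next frame
                # frame or ACK lost/damaged: retransmit the same frame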
Go-Back-N ARQ
Because Stop-and-Wait leaves the link idle between frames, Go-Back-N ARQ allows the sender to transmit
several frames before stopping for acknowledgements. In this method, both sender and receiver maintain
a window.
Window Size: The sender can send multiple frames (up to a predefined window size, N) without waiting
for an acknowledgment.
Sequence Numbers: Each frame is assigned a unique sequence number to keep track of its order.
Go-Back-N ARQ Mechanism:
Receiving Frames: The receiver only accepts frames in the correct sequence. If a frame is received
correctly, it sends an ACK with the sequence number of the next expected frame.
Error Handling: If a frame is lost or damaged, the receiver discards that frame and any subsequent
frames, even if they were received correctly. The receiver sends an ACK for the last correctly received
frame.
Go-Back Mechanism: When the sender detects a missing ACK (due to a lost frame or a timeout), it
retransmits the missing frame along with all subsequent frames, even if some of them were already
received correctly by the receiver. A short sketch of this behaviour follows.
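The go-back behaviour described above can be sketched in Python as follows. Cumulative ACKs are assumed
(the ACK carries the highest in-order sequence number received so far), and the helper names are illustrative.

    def go_back_n_send(frames, N, send_frame, recv_ack, timeout=1.0):
        """N is the sender window size; recv_ack(timeout) returns the highest
        in-order sequence number acknowledged so far, or None on timeout."""
        base, next_seq = 0, 0
        while base < len(frames):
            # Send every frame that fits in the current window.
            while next_seq < base + N and next_seq < len(frames):
                send_frame(next_seq, frames[next_seq])
                next_seq += 1
            ack = recv_ack(timeout)
            if ack is not None and ack >= base:
                base = ack + 1        # window slides past the acknowledged frames
            else:
                next_seq = base       # timeout: go back and resend the whole window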
Example:
The receiver accepts multiple frames and tracks their sequence numbers, sending
acknowledgements (ACKs) for the frames received in order.
After sending a window of frames, the sender checks which ones have received positive ACKs.
If all frames are acknowledged, the sender sends the next batch; otherwise, it goes back to the first
unacknowledged frame and retransmits it along with all the frames that followed it.
In Go-Back-N ARQ, it is assumed that the receiver does not have any buffer space for its window
size and has to process each frame as it comes. This forces the sender to retransmit all the
frames which have not been acknowledged.
Selective-Repeat ARQ
In Selective-Repeat ARQ, the receiver, while keeping track of sequence numbers, buffers the
frames in memory and sends a NACK only for the frames which are missing or damaged.
In this case, the sender retransmits only the packet for which a NACK is received. A sketch of the
receiver side of this scheme follows.
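A minimal Python sketch of the Selective-Repeat receiver, which buffers out-of-order frames and sends a
NACK only for a damaged frame. The recv_frame, send_ack, send_nack, and deliver interfaces are
assumptions for illustration.

    def selective_repeat_receive(recv_frame, send_ack, send_nack, deliver, total):
        """recv_frame() is assumed to return (seq, data, ok); ok is False for a
        frame that arrived damaged."""
        expected = 0
        buffered = {}                     # out-of-order frames keyed by sequence number
        while expected < total:
            seq, data, ok = recv_frame()
            if not ok:
                send_nack(seq)            # request retransmission of this frame only
                continue
            send_ack(seq)
            buffered[seq] = data
            # Deliver frames to the network layer as soon as they are in order.
            while expected in buffered:
                deliver(buffered.pop(expected))
                expected += 1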
ERROR CONTROL
Data can be corrupted during transmission. For reliable communication, errors must be detected and
corrected. Error Control is a technique of error detection and retransmission.
TYPES OF ERRORS
SINGLE-BIT ERROR
The term Single-bit error means that only one bit of a given data unit (such as byte, character, data unit or
packet) is changed from 1 to 0 or from 0 to 1.
BURST ERROR
The term Burst Error means that two or more bits in the data unit have changed from 1 to 0 or from 0 to 1.
Cyclic Redundancy Check (CRC)
Steps Involved:
● Consider the original message (dataword) as M(x), consisting of 'k' bits, and the divisor as C(x),
consisting of 'n+1' bits.
● The original message M(x) is appended with 'n' zero bits. Let us call this zero-extended
message T(x).
● Divide T(x) by C(x) and find the remainder.
● The division is performed using the XOR operation.
● The resultant remainder is appended to the original message M(x) as the CRC and sent by the
sender (codeword).
Example 1:
● Consider the dataword / message M(x) = 1001.
● Divisor C(x) = 1011 (n+1 = 4, so n = 3).
● Append 'n' zeros to the original message M(x).
● The resultant message is called T(x) = 1001 000 (here n = 3).
● Divide T(x) by the divisor C(x) using the XOR operation; the division is carried out in the sketch below.
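The division in Example 1 can be carried out with the short Python sketch below, which performs modulo-2
(XOR) division on bit strings. For M(x) = 1001 and C(x) = 1011 it gives the remainder 110, so the codeword
sent by the sender is 1001110.

    def crc_remainder(message, divisor):
        """Modulo-2 (XOR) division of the zero-extended message by the divisor."""
        n = len(divisor) - 1                    # number of CRC bits
        bits = list(message + "0" * n)          # T(x): message with n zeros appended
        for i in range(len(message)):
            if bits[i] == "1":                  # align the divisor under a leading 1
                for j, d in enumerate(divisor):
                    bits[i + j] = str(int(bits[i + j]) ^ int(d))   # XOR step
        return "".join(bits[-n:])               # the last n bits are the remainder

    remainder = crc_remainder("1001", "1011")   # -> "110"
    codeword = "1001" + remainder               # -> "1001110", transmitted by the sender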
HDLC
HDLC (High-Level Data Link Control) is a group of communication protocols used for transmitting data
between network points or nodes.
It operates at the data link layer, where data is organized into frames.
Frames are sent through the network, and the destination checks if they arrive successfully.
HDLC is a bit-oriented protocol, meaning data is handled at the bit level.
It works for both point-to-point (direct connection between two nodes) and multipoint (one-to-many
connections) communications.
Transfer Modes
HDLC supports two types of transfer modes, normal response mode and asynchronous balanced
mode.
Normal Response Mode (NRM) − Here, there are two types of stations: a primary station that sends
commands and a secondary station that can respond to the received commands. It is used for both
point-to-point and multipoint communications.
Asynchronous Balanced Mode (ABM) − Here, the configuration is balanced, i.e. each station can both
send commands and respond to commands. It is used only for point-to-point communications.
HDLC Frame
HDLC is a bit-oriented protocol where each frame contains up to six fields. The structure varies
according to the type of frame. The fields of an HDLC frame are −
Flag − It is an 8-bit sequence that marks the beginning and the end of the frame. The bit pattern of the flag
is 01111110.
Address − It contains the address of the receiver. If the frame is sent by the primary station, it contains the
address(es) of the secondary station(s). If it is sent by a secondary station, it contains the address of the
primary station. The address field may be from 1 byte to several bytes.
Control − It is 1 or 2 bytes that identify the type of the frame and carry control information such as the
sequence numbers used for flow and error control.
Payload − This carries the data from the network layer. Its length may vary from one network to another.
FCS − It is a 2-byte or 4-byte frame check sequence for error detection. The standard code used is CRC
(cyclic redundancy code).
POINT-TO-POINT PROTOCOL (PPP)
● Point-to-Point Protocol (PPP) was devised by the IETF (Internet Engineering Task Force) in 1990 as a
successor to the Serial Line Internet Protocol (SLIP).
● PPP is a data link layer communications protocol used to establish a direct connection between
two nodes.
● It can connect two routers directly without any host or any other networking device in between.
● It is used to connect a home PC to the server of an ISP via a modem.
● It is a byte-oriented protocol that is widely used in broadband communications having heavy
loads and high speeds.
● Since it is a data link layer protocol, data is transmitted in frames. PPP is defined in RFC 1661.
Services Provided by PPP
The main services provided by the Point-to-Point Protocol are −
1. Defining the frame format of the data to be transmitted.
2. Defining the procedure of establishing link between two points and exchange of data.
3. Stating the method of encapsulation of network layer data in the frame.
4. Stating authentication rules of the communicating devices.
5. Providing address for network communication.
6. Providing connections over multiple links.
7. Supporting a variety of network layer protocols by providing a range of services.
PPP Frame
PPP is a byte-oriented protocol where each field of the frame is composed of one or more bytes.
1. Flag − 1 byte that marks the beginning and the end of the frame. The bit pattern of the flag is 01111110.
2. Address − 1 byte which is set to 11111111 in case of broadcast.
3. Control − 1 byte set to a constant value of 11000000.
4. Protocol − 1 or 2 bytes that define the type of data contained in the payload field.
5. Payload − This carries the data from the network layer. The maximum length of the payload field is
1500 bytes.
6. FCS − It is a 2-byte (16-bit) or 4-byte (32-bit) frame check sequence for error detection. The standard
code used is CRC. (A rough sketch of this frame layout follows the list.)
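As referenced above, based only on the field sizes and bit patterns listed in this section, a PPP frame can be
laid out roughly as in the Python sketch below. The FCS is left as a dummy placeholder rather than a real CRC
computation, and the byte values simply reproduce the bit patterns given above.

    def build_ppp_frame(protocol: bytes, payload: bytes) -> bytes:
        """Rough layout sketch only; not a standards-conformant encoder."""
        assert len(payload) <= 1500         # maximum payload length from the text
        flag = bytes([0b01111110])          # 1-byte flag marking start and end of the frame
        address = bytes([0b11111111])       # 1-byte address (broadcast)
        control = bytes([0b11000000])       # 1-byte constant control value (as given above)
        fcs = b"\x00\x00"                   # 2-byte FCS placeholder (real PPP uses a CRC)
        return flag + address + control + protocol + payload + fcs + flag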
Transition Phases of PPP
Dead: In the dead phase the link is not used. There is no active carrier and the line is quiet.
Establish: The connection goes into this phase when one of the nodes starts communication. In this phase,
the two parties negotiate the options. If negotiation is successful, the system goes into the authentication
phase or directly to the networking phase.
Authenticate: This phase is optional. The two nodes may decide whether they need this phase during the
establishment phase. If they decide to proceed with authentication, they send several authentication
packets. If the result is successful, the connection goes to the networking phase; otherwise, it goes to the
termination phase.
Network: In the network phase, negotiation for the network layer protocols takes place. PPP specifies that
the two nodes establish a network layer agreement before data at the network layer can be exchanged. This
is because PPP supports several protocols at the network layer. If a node is running multiple protocols
simultaneously at the network layer, the receiving node needs to know which protocol will receive the
data.
Open: In this phase, data transfer takes place. The connection remains in this phase until one of the
endpoints wants to end the connection.
Terminate: In this phase, the connection is terminated.
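The phase transitions described above can be summarized as a small lookup table in Python. The event
labels are informal descriptions taken from the text, not terminology from the PPP standard, and the final
Terminate-to-Dead transition is an assumption about how the link returns to the quiet state.

    # (current phase, event) -> next phase
    TRANSITIONS = {
        ("Dead", "a node starts communication"):            "Establish",
        ("Establish", "options agreed, authentication"):    "Authenticate",
        ("Establish", "options agreed, no authentication"): "Network",
        ("Authenticate", "authentication successful"):      "Network",
        ("Authenticate", "authentication failed"):          "Terminate",
        ("Network", "network layer agreement reached"):     "Open",
        ("Open", "an endpoint ends the connection"):        "Terminate",
        ("Terminate", "link released"):                     "Dead",   # assumed return path
    }

    def next_phase(phase, event):
        return TRANSITIONS.get((phase, event), phase)   # stay in place on unknown events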
The following are the different random-access protocols for broadcasting frames on the
channel.
○ Aloha
○ CSMA
○ CSMA/CD
○ CSMA/CA
Aloha
Aloha is the earliest random-access method: a station may transmit whenever it has data to send, without
coordinating with the other stations. It has two variants, Pure Aloha and Slotted Aloha.
Pure Aloha
● When a station has data to send, it transmits immediately without checking if the channel is
free.
● This can lead to collisions, causing the data frame to be lost.
● After sending a frame, the station waits for an acknowledgment from the receiver.
● If no acknowledgment is received within a set time, the station waits for a random backoff
time.
● The station then retransmits the frame.
● This process continues until the data is successfully sent and acknowledged (a short sketch of
this procedure follows).
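As noted above, the Pure Aloha procedure maps to a simple loop. In the Python sketch below,
transmit_frame and ack_received are assumed helpers, and the backoff range is an arbitrary illustrative value.

    import random, time

    def pure_aloha_send(frame, transmit_frame, ack_received, max_backoff=1.0):
        while True:
            transmit_frame(frame)              # send immediately, without sensing the channel
            if ack_received():                 # ACK arrived within the set waiting time
                return
            time.sleep(random.uniform(0, max_backoff))   # random backoff, then retransmit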
Slotted Aloha
In Slotted Aloha, time is divided into slots equal to the frame transmission time, and a station may transmit
only at the beginning of a time slot. This reduces the chance of collision compared with Pure Aloha.
CSMA (Carrier Sense Multiple Access)
In CSMA, a station senses the shared channel before transmitting. CSMA defines several persistence
modes, which differ in what a station does when it finds the channel idle or busy:
1-Persistent: In the 1-persistent mode of CSMA, each node first senses the shared channel; if the channel
is idle, it immediately sends the data. Otherwise, it keeps sensing the channel continuously and broadcasts
the frame as soon as the channel becomes idle.
Non-Persistent: In this access mode of CSMA, each node must sense the channel before transmitting the
data; if the channel is idle, it immediately sends the data. Otherwise, the station waits for a random amount
of time (it does not sense the channel continuously), and when the channel is then found to be idle, it
transmits the frame.
P-Persistent: This mode is a combination of the 1-persistent and non-persistent modes. Each node senses
the channel; if the channel is idle, it sends a frame with probability p. Otherwise, with probability q = 1 - p,
it defers to the next time slot and repeats the process (a short sketch of this rule follows these
descriptions).
O-Persistent: In this method, a supervisory node assigns a transmission order (priority) to each station
before transmission on the shared channel. When the channel is found to be idle, each station waits for its
assigned turn to transmit the data.
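As referenced under P-Persistent above, the rule can be written compactly in Python. The channel_idle,
wait_slot, and transmit helpers are assumptions for illustration.

    import random

    def p_persistent_send(frame, p, channel_idle, wait_slot, transmit):
        while True:
            if not channel_idle():
                wait_slot()                    # channel busy: sense again in the next slot
                continue
            if random.random() < p:            # channel idle: transmit with probability p
                transmit(frame)
                return
            wait_slot()                        # with probability q = 1 - p, defer one slot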
Controlled Access Protocol refers to a method where stations or devices in a network take turns or
are granted permission to transmit data, ensuring orderly communication and preventing collisions.
Unlike random access methods (like CSMA), controlled access protocols avoid collisions by
coordinating which device can send data at any given time. There are three main types of controlled
access protocols:
1. Reservation − Stations first make reservations for time slots during a short reservation period and
then transmit their data in the reserved slots.
2. Polling
● A central controller or a primary station asks each station (in a round-robin or pre-determined
order) if it has data to send.
● If a station has data, it sends it; if not, the controller polls the next station.
● This ensures that only one station transmits at a time, avoiding collisions (a short polling
sketch follows this list).
3. Token Passing − A special control frame called a token circulates among the stations, and only the
station currently holding the token may transmit (see Token Ring and Token Bus below).
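As referenced under Polling above, a round-robin poll by the primary station can be sketched as follows in
Python; the has_data and send station methods are assumed interfaces.

    def poll_cycle(stations):
        """One polling round: ask each station in turn whether it has data."""
        for station in stations:
            if station.has_data():        # poll: "do you have data to send?"
                frame = station.send()    # only the polled station transmits now
                yield frame               # hand the frame to the controller
            # otherwise move on and poll the next station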
Channelization Protocols
● These protocols allow multiple stations to share a channel by dividing the total available
bandwidth based on time, frequency, or codes.
● All stations can access the channel at the same time, but they use different methods to avoid
interference.
Methods of Channelization: FDMA (Frequency Division Multiple Access), TDMA (Time Division Multiple
Access), and CDMA (Code Division Multiple Access).
High-Speed LANs
● Designed for faster data transmission than traditional LANs, typically offering speeds from
100 Mbps to several Gbps.
● These LANs support bandwidth-intensive applications like video conferencing, large file
transfers, and real-time data analytics.
● Examples include Fast Ethernet (100 Mbps), Gigabit Ethernet (1 Gbps), and 10-Gigabit
Ethernet.
● High-speed LANs often use fiber-optic cables or advanced copper wiring for faster and more
reliable connections.
● They rely on more efficient network protocols and switching technologies to handle high
data volumes.
Token Ring
● Mechanism: A special control packet, called a token, circulates continuously on the network.
○ Only the station holding the token is allowed to transmit data.
○ After transmitting, the token is passed to the next station in the ring.
● Collision Prevention: Because only one station can send data at a time (while holding the
token), collisions are avoided.
● Speed: Typically operates at 4 Mbps or 16 Mbps.
● Fault Tolerance: If one station fails, the whole network can be affected unless recovery
mechanisms are in place.
Token Bus
● Topology: Logical token-passing but in a bus topology (stations are connected to a shared
communication medium like a coaxial cable).
● Mechanism: A token is passed in a logical sequence among stations, even though physically,
they are connected in a bus.
○ Only the station that holds the token can send data.
● Collision Prevention: Prevents data collisions by controlling access through the
token-passing process (a minimal token-passing sketch follows the Token Bus description).
● Applications: Often used in industrial networks like in factory automation due to its ability
to handle real-time data.
● Standard: Defined under the IEEE 802.4 standard.
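As referenced above, one round of token passing can be sketched as follows in Python; the station interface
(has_data, transmit) is an assumption for illustration.

    def token_passing_round(stations):
        """Pass the token once around the logical ring; only the holder transmits."""
        holder = 0
        for _ in range(len(stations)):
            station = stations[holder]
            if station.has_data():
                station.transmit()                      # the token holder sends its data
            holder = (holder + 1) % len(stations)       # pass the token to the next station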
IEEE 802.3 (Ethernet)
● Overview: IEEE 802.3 is the standard for Ethernet-based local area networks (LANs),
defining how data is transmitted across wired networks.
● Transmission Medium: Primarily uses twisted-pair cables (copper wires) or fiber optics for
data transmission.
● Access Method: It employs CSMA/CD (Carrier Sense Multiple Access with Collision
Detection) to manage data collisions on the network.
○ Stations check if the network is idle before transmitting data.
○ If a collision occurs, the network detects it, and stations resend the data after a random
delay (a short CSMA/CD sketch follows this list).
● Data Rates:
○ 10 Mbps (Ethernet): Original standard for basic Ethernet.
○ 100 Mbps (Fast Ethernet): For faster data transfer.
○ 1 Gbps (Gigabit Ethernet): High-speed Ethernet commonly used today.
○ 10 Gbps (10 Gigabit Ethernet): Advanced standard for high-performance
networking.
● Topology: Primarily uses a star topology where devices are connected to a central switch or
hub.
● Frame Format:
○ The Ethernet frame includes headers with source and destination MAC addresses,
data, and error-checking information.
● Applications: Widely used in office networks, data centers, and enterprise environments
for high-speed data exchange.
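As referenced in the Access Method bullet above, the CSMA/CD behaviour can be sketched as follows in
Python. The channel_idle and transmit_and_detect_collision helpers and the delay range are assumptions;
real Ethernet chooses the random delay using binary exponential backoff.

    import random, time

    def csma_cd_send(frame, channel_idle, transmit_and_detect_collision, max_delay=0.01):
        while True:
            while not channel_idle():
                pass                                    # carrier sense: wait for an idle medium
            if not transmit_and_detect_collision(frame):
                return                                  # no collision detected: frame delivered
            time.sleep(random.uniform(0, max_delay))    # collision: back off, then try again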
IEEE 802.11 (Wireless LAN / Wi-Fi)
● Overview: IEEE 802.11 is the standard for wireless local area networks (WLANs), which
governs how devices communicate over Wi-Fi.
● Transmission Medium: Uses radio waves to transmit data wirelessly over air.
○ Operates on 2.4 GHz and 5 GHz frequency bands.
● Access Method: Uses CSMA/CA (Carrier Sense Multiple Access with Collision
Avoidance).
○ Before transmitting, stations check if the channel is clear and then wait for a random
backoff period to avoid collisions.
● Data Rates and Versions:
○ 802.11b: Operates at 11 Mbps in the 2.4 GHz band.
○ 802.11a: Operates at 54 Mbps in the 5 GHz band.
○ 802.11g: Operates at 54 Mbps in the 2.4 GHz band.
○ 802.11n: Introduced MIMO (Multiple Input Multiple Output) technology, offering
speeds up to 600 Mbps over both 2.4 GHz and 5 GHz bands.
○ 802.11ac: Operates in the 5 GHz band, providing speeds up to 1 Gbps.
○ 802.11ax (Wi-Fi 6): The most recent standard in this list, with theoretical speeds
approaching 10 Gbps and improved performance in crowded environments.
● Topology: Typically uses an infrastructure mode, where devices connect through a central
wireless access point (WAP).
○ Also supports ad-hoc mode, where devices communicate directly without an access
point.
● Frame Format: Includes headers with source and destination MAC addresses, control
information, and data payload.
● Security: Includes encryption protocols like WEP, WPA, and WPA2/WPA3 to ensure secure
wireless communication.
● Applications: Commonly used in homes, offices, public spaces, and mobile devices for
internet access and wireless communication.