Cn Unit 2 Notes Complete
Unit 2
Data Link Layer
Packet: - The basic unit of communication between a source and a destination in a network is a
packet.
Frame: - A frame is a small part of a message on the network. A packet is the unit of data used in the
network layer, while a frame is the unit of data used in the data link layer of the OSI model.
o In the OSI model, the data link layer is the 6th layer from the top and the 2nd layer from the bottom.
o The main responsibility of the Data Link Layer is to transfer the datagram across an individual
link.
o The data link layer is responsible for converting the data stream into signals bit by bit and sending
them over the underlying hardware. At the receiving end, the data link layer picks up data from the
hardware in the form of electrical signals, assembles it into a recognizable frame format, and hands
it over to the upper layer.
1. Framing
2. Reliable Delivery
3. Flow control
4. Error control
5. Synchronization
6. Access Control
7. Physical Addressing
1. Framing
The data link layer takes packets from the network layer and encapsulates them into frames. It then
sends each frame bit by bit over the hardware. At the receiver's end, the data link layer picks up
signals from the hardware and assembles them into frames.
2. Reliable delivery: The data link layer provides a reliable delivery service, i.e., it transmits the
network layer datagram without any error. Reliable delivery is accomplished with
retransmissions and acknowledgements.
3. Flow Control
Stations on the same link may have different speeds or capacities. The data link layer provides flow
control, which enables both machines to exchange data at a rate the receiver can handle.
4. Error Control
Sometimes signals encounter problems in transit and bits get flipped. These errors are detected, and
an attempt is made to recover the actual data bits. The layer also provides an error reporting
mechanism to the sender.
5. Synchronization
When data frames are sent on the link, both machines must be synchronized in order for the
transfer to take place.
6. Access Control: Protocols of this layer determine which of the devices has control over the
link at any given time, when two or more devices are connected to the same link.
7. Physical Addressing: The data link layer adds a header to the frame in order to define the
physical address of the sender or receiver of the frame, if the frames are to be distributed to
different systems on the network.
Framing
Framing in the data link layer operates over a point-to-point connection between the sender and
receiver. Framing is a primary function of the data link layer and provides a way to transmit data
between the connected devices.
Framing uses frames to send and receive data. The data link layer receives packets from the
network layer and converts them into frames. Framing provides a way for a sender to transmit a set
of bits that are meaningful to the receiver.
Parts of a Frame
Types of Framing
Framing can be of two types, fixed sized framing and variable sized framing.
Fixed-sized Framing
The frame has a fixed size. In fixed-size framing, there is no need for defining the
boundaries of the frames to mark the beginning and end of a frame.
For example, this type of framing is used in ATM wide-area networks, which use fixed-size frames
called cells.
Variable-sized Framing
The size of the frame is variable in this type of framing. In variable-size framing, we need a way to
define the end of one frame and the beginning of the next. This approach is used in local area
networks. It can be implemented in two different forms:
Length field: A length field can be defined in the frame to indicate the length of the frame. It is used in
Ethernet (802.3). The issue is that the length field may sometimes get corrupted.
ED (End Delimiter): An end delimiter can be placed in the frame to mark its completion. It is used in
Token Ring. The issue is that the end delimiter pattern can also appear in the data.
There are three methods used to mark the start and end of each frame:
Character count
Bit stuffing
Byte stuffing
Character count: - This framing method uses a field in the header to specify the number of bytes in
the frame.
Bit stuffing: - A flag pattern marks the frame boundaries; the sender inserts a 0 bit after every five
consecutive 1s in the data so that the flag pattern never appears inside the frame, and the receiver
removes the stuffed bits.
Byte stuffing: - A flag byte marks the frame boundaries; the sender inserts an escape (ESC) byte
before any flag or ESC byte that occurs in the data. The data link layer on the receiving end removes
the ESC bytes before giving the data to the network layer.
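To make the byte-stuffing idea concrete, here is a minimal Python sketch; the FLAG and ESC byte values and the helper names are illustrative assumptions, not part of any particular standard.

FLAG = 0x7E  # assumed frame delimiter byte
ESC = 0x7D   # assumed escape byte

def byte_stuff(payload: bytes) -> bytes:
    # Sender side: escape any FLAG/ESC bytes and add frame delimiters.
    stuffed = bytearray([FLAG])
    for b in payload:
        if b in (FLAG, ESC):
            stuffed.append(ESC)        # insert ESC before the special byte
        stuffed.append(b)
    stuffed.append(FLAG)
    return bytes(stuffed)

def byte_unstuff(frame: bytes) -> bytes:
    # Receiver side: strip delimiters and remove the inserted ESC bytes.
    payload = bytearray()
    i = 1                              # skip the opening FLAG
    while i < len(frame) - 1:          # stop before the closing FLAG
        if frame[i] == ESC:
            i += 1                     # the next byte is data, not a delimiter
        payload.append(frame[i])
        i += 1
    return bytes(payload)

data = bytes([0x01, FLAG, 0x02, ESC, 0x03])
assert byte_unstuff(byte_stuff(data)) == data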
Error Detection
When data is transmitted from one device to another, there is no guarantee that the data received
is identical to the data that was transmitted. An error is a situation in which the message received at
the receiver end is not identical to the message transmitted.
Types of Errors
o Single-Bit Error
o Burst Error
Single-Bit Error:
Only one bit of a given data unit is changed from 1 to 0 or from 0 to 1.
For example, a single 0 bit in the transmitted message is changed to 1, so the received message
differs from the sent message in exactly one bit.
Burst Error:
A burst error occurs when two or more bits are changed from 0 to 1 or from 1 to 0.
The length of a burst error is measured from the first corrupted bit to the last corrupted bit.
Two-Dimensional Parity Check
o In a two-dimensional parity check, a block of bits is divided into rows, and a redundant row of
bits is added to the whole block.
o Parity check bits are computed for each row, which is equivalent to a single-parity check.
o At the receiving end, the parity bits are compared with the parity bits computed from the
received data.
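As a small illustration of the two-dimensional parity idea, here is a Python sketch; even parity and the 4-bit row width are arbitrary choices for the example.

def parity_bit(bits):
    # Even parity: 1 if the number of 1s is odd, else 0.
    return sum(bits) % 2

def two_d_parity(rows):
    # Append a parity bit to each row, then append a column-parity row.
    with_row_parity = [row + [parity_bit(row)] for row in rows]
    parity_row = [parity_bit(col) for col in zip(*with_row_parity)]
    return with_row_parity + [parity_row]

# The sender transmits the whole block; the receiver recomputes the
# parity bits from the received data and compares them.
data = [[1, 1, 0, 0],
        [1, 0, 1, 1],
        [0, 1, 1, 1]]
for row in two_d_parity(data):
    print(row)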
Checksum
Checksum Generator
A checksum is generated at the sending side. The checksum generator subdivides the data into equal
segments of n bits each, and all these segments are added together using one's complement
arithmetic. The sum is then complemented and appended to the original data as the checksum field.
The extended data is transmitted across the network.
Checksum Checker
A Checksum is verified at the receiving side. The receiver subdivides the incoming data into equal
segments of n bits each, and all these segments are added together, and then this sum is
complemented. If the complement of the sum is zero, then the data is accepted; otherwise, the data is
rejected.
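A minimal Python sketch of the procedure, assuming 16-bit segments (the segment values below are made up for illustration):

def ones_complement_sum(segments, bits=16):
    # Add n-bit segments with end-around carry (one's complement arithmetic).
    mask = (1 << bits) - 1
    total = 0
    for seg in segments:
        total += seg
        total = (total & mask) + (total >> bits)   # wrap the carry back in
    return total

def make_checksum(segments, bits=16):
    # Sender: complement of the one's complement sum.
    return (~ones_complement_sum(segments, bits)) & ((1 << bits) - 1)

def verify(segments, checksum, bits=16):
    # Receiver: the sum of all segments plus the checksum must complement to zero.
    total = ones_complement_sum(segments + [checksum], bits)
    return (~total) & ((1 << bits) - 1) == 0

data = [0x4500, 0x003C, 0x1C46]          # example 16-bit segments
csum = make_checksum(data)
print(hex(csum), verify(data, csum))     # the receiver accepts the data (True)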
o In the CRC technique, a string of n 0s is appended to the data unit, where n is one less than the
number of bits in a predetermined binary pattern, known as the divisor, which is n+1 bits long.
o Secondly, the newly extended data is divided by the divisor using a process known as binary
(modulo-2) division. The remainder generated from this division is known as the CRC remainder.
o Thirdly, the CRC remainder replaces the appended 0s at the end of the original data. This
newly generated unit is sent to the receiver.
o The receiver receives the data followed by the CRC remainder. The receiver will treat this
whole unit as a single unit, and it is divided by the same divisor that was used to find the CRC
remainder.
If the result of this division is zero, the data contains no error and is accepted.
If the result of this division is not zero, the data contains an error and is therefore discarded.
CRC Generator
o Suppose the data unit is 11100 and the divisor is 1001. The CRC generator uses modulo-2
division. Firstly, three zeroes are appended at the end of the data, since the divisor is 4 bits long
and the string of 0s to be appended is always one bit shorter than the divisor.
o Now the string becomes 11100000, and this string is divided by the divisor 1001.
o The remainder generated from the binary division is known as CRC remainder. The generated
value of the CRC remainder is 111.
o CRC remainder replaces the appended string of 0s at the end of the data unit, and the final
string would be 11100111 which is sent across the network.
CRC Checker
o When the string 11100111 is received at the receiving end, then CRC checker performs the
modulo-2 division.
o In this case, CRC checker generates the remainder of zero. Therefore, the data is accepted.
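The worked example above can be reproduced with a short Python sketch of modulo-2 division (bit strings are used only for readability):

def mod2_div(dividend: str, divisor: str) -> str:
    # Modulo-2 (XOR) division; returns the remainder as a bit string.
    rem = list(dividend[:len(divisor)])
    for i in range(len(divisor), len(dividend) + 1):
        if rem[0] == '1':                            # XOR only when the leading bit is 1
            rem = [str(int(a) ^ int(b)) for a, b in zip(rem, divisor)]
        rem = rem[1:] + ([dividend[i]] if i < len(dividend) else [])
    return ''.join(rem)

data, divisor = "11100", "1001"
appended = data + "0" * (len(divisor) - 1)    # 11100000
crc = mod2_div(appended, divisor)             # '111'
codeword = data + crc                         # '11100111', the unit that is transmitted
check = mod2_div(codeword, divisor)           # '000' at the receiver, so the data is accepted
print(crc, codeword, check)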
Flow Control:-
It is a collection of protocols that advises the sender how much data it can communicate before the recipient
becomes overwhelmed.
The receiving device has limited speed and limited memory to store the data. Therefore, the receiving device
must be able to inform the sending device to stop the transmission temporarily before the limits are reached.
It requires a buffer, a block of memory for storing the information until it is processed.
Approaches of Flow Control
Flow control can be broadly classified into two categories −
• Feedback based Flow Control In these protocols, the sender sends frames after it has received
acknowledgments from the receiver. This is used in the data link layer.
• Rate based Flow Control These protocols have built in mechanisms to restrict the rate of
transmission of data without requiring acknowledgment from the receiver. This is used in the network
layer and the transport layer.
Protocol
Step 3 − Let us assume that the receiver can handle any frame it receives with a processing time small enough to be
negligible. The data link layer of the receiver immediately removes the header from the frame and hands the data packet to
its network layer, which can also accept the packet immediately.
Suppose a frame sent by the sender is lost and never reaches the receiver. The receiver will then not
send any acknowledgment, since it has not received a frame. The sender, in turn, will not send the
next frame, because it is waiting for the acknowledgment of the previous frame it sent. A deadlock
situation can thus be created here.
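The usual remedy for this deadlock is a retransmission timer at the sender. Below is a minimal stop-and-wait sketch in Python; the send_frame and wait_for_ack helpers and the timeout value are hypothetical, only the control flow matters.

import itertools

TIMEOUT = 2.0   # seconds; illustrative value

def stop_and_wait_send(frames, send_frame, wait_for_ack):
    # Send one frame at a time; retransmit if no ACK arrives before the timer expires.
    for seq, frame in zip(itertools.cycle([0, 1]), frames):   # 1-bit sequence number
        while True:
            send_frame(seq, frame)
            if wait_for_ack(seq, timeout=TIMEOUT):   # ACK received in time
                break                                # move on to the next frame
            # timer expired: the frame (or its ACK) was lost, so retransmit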
Go-Back-N ARQ:
In the Go-Back-N ARQ protocol, if a frame is lost or damaged, the sender retransmits that frame and
all frames sent after it for which it has not received a valid ACK.
Lost Frame:
Data frames are transmitted consecutively in Sliding Window systems. If any of the frames is lost, the
following frame received by the receiver is out of order. The receiver examines each frame’s
sequence number, identifies the frame that was skipped, and returns the NAK for the missing frame.
The frame indicated by the NAK, as well as the frames transmitted after the lost frame, are
retransmitted by the transmitting device.
Lost Acknowledgment:
The sender can transmit as many frames as the window allows before waiting for an
acknowledgement. When the window's limit is reached, the sender has no more frames to send and
must wait for an acknowledgement. If the acknowledgement is lost, the sender may be forced to wait
indefinitely. To circumvent this, the transmitter is outfitted with a timer that begins counting when the
window capacity is reached. If the acknowledgement is not received within the specified time period,
the sender retransmits all frames sent since the last acknowledgement.
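A compact Python sketch of the sender behaviour just described; the window size, the timeout, and the send_frame / wait_for_acks helpers are illustrative assumptions.

WINDOW_SIZE = 4   # illustrative window size

def go_back_n_send(frames, send_frame, wait_for_acks):
    # Keep up to WINDOW_SIZE unacknowledged frames in flight; on timeout,
    # go back and resend every frame sent since the last acknowledgement.
    base = 0        # oldest unacknowledged frame
    next_seq = 0    # next frame to send
    while base < len(frames):
        while next_seq < base + WINDOW_SIZE and next_seq < len(frames):
            send_frame(next_seq, frames[next_seq])       # fill the window
            next_seq += 1
        acked = wait_for_acks(timeout=2.0)               # highest cumulative ACK, or None on timeout
        if acked is not None:
            base = acked + 1                             # slide the window forward
        else:
            next_seq = base                              # timeout: retransmit from the last ACK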
Selective-Reject ARQ:
The Selective-Reject ARQ approach is more efficient than the Go-Back-N ARQ technique.
Only frames that have received a negative acknowledgement (NAK) are retransmitted using this
approach.
Frames received after the damaged frame are held in the receiver's buffer until the frame in error is
successfully received.
The receiver must contain the necessary logic to reinsert the frames in the right order.
The sender must provide a searching mechanism that retransmits only the requested frame.
Channel Allocation
Channel Allocation in computer networks refers to the method of assigning communication
resources (channels) to users or devices for transmitting data. A channel can be a frequency band, a
time slot, a code, or even a physical wire, depending on the type of network (wired or wireless).
Channel allocation schemes are broadly of three types: Fixed Channel Allocation (FCA), Dynamic
Channel Allocation (DCA), and Hybrid Channel Allocation (HCA).
Dynamic Channel Allocation (DCA)
Advantages:
1. Efficient use of available bandwidth.
2. Reduces call blocking and improves call quality.
3. Allows for dynamic allocation of resources.
Disadvantages:
1. Requires more complex equipment and algorithms.
2. May result in call drops or poor quality if resources are not available
Hybrid Channel Allocation (HCA)
Advantages:
1. Provides the benefits of both FCA and DCA.
2. Allows for dynamic allocation of resources while maintaining predictable call quality and
reliability.
Disadvantages:
1. Requires more complex equipment and algorithms than FCA.
2. May not provide the same level of efficiency as pure DCA.
Multiple Access Protocols
The data link control is sufficient to handle the channel when a sender and receiver have a dedicated
link for transmitting data packets. Now assume there is no dedicated communication or data transfer
path between two devices. In this case, multiple stations access the channel and transmit data over it
at the same time, which can result in collisions and cross-talk. As a result, a multiple access protocol is
required to reduce channel collisions and avoid crosstalk.
Following are the different methods of random-access protocols for broadcasting frames on the
channel.
o Aloha
o CSMA
o CSMA/CD
o CSMA/CA
Aloha
Aloha is designed for wireless LANs (Local Area Networks) but can also be used on any shared
medium to transmit data. Using this method, any station can transmit data over the network
whenever a data frame is available for transmission.
Aloha Rules
1. Any station can transmit data over the channel at any time.
2. It does not require any carrier sensing.
3. Collisions can occur, and data frames may be lost, when multiple stations transmit at the same
time.
4. Acknowledgement of the frames exists in Aloha; there is no collision detection.
5. It requires retransmission of data after some random amount of time.
Pure Aloha: - When a station sends data, it waits for an acknowledgement. If the acknowledgement
does not arrive within the allotted time, the station waits for a random amount of time, called the
back-off time (Tb), and re-sends the data. Since different stations wait for different amounts of time,
the probability of further collisions decreases.
For example, suppose four stations access a shared channel and transmit data frames. Because most
stations send their frames at the same time, some frames collide; only two frames, frame 1.1 and
frame 2.2, are successfully delivered to the receiver, while the other frames are lost or destroyed.
Whenever two frames occupy the shared channel at the same time, a collision occurs and both
frames are damaged. Even if only the first bit of a new frame overlaps with the last bit of a frame that
is almost finished, both frames are destroyed and both stations must retransmit their data frames.
Slotted Aloha
Slotted Aloha is designed to improve the efficiency of pure Aloha, because pure Aloha has a very
high probability of frame collision. In slotted Aloha, the shared channel is divided into fixed time
intervals called slots. If a station wants to send a frame on the shared channel, the frame can only be
sent at the beginning of a slot, and only one frame is allowed to be sent in each slot. If a station
misses the beginning of a slot, it must wait until the beginning of the next slot. However, the
possibility of a collision remains when two or more stations try to send a frame at the beginning of
the same time slot.
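As a small illustration of the slot-alignment rule, the sketch below computes how long a station must wait before it may transmit; the slot length is an arbitrary value chosen for the example, and send_frame is a hypothetical helper.

import time

SLOT = 0.05   # slot duration in seconds; illustrative value

def wait_until_next_slot():
    # A station that misses a slot boundary must wait for the next one before sending.
    time_into_slot = time.monotonic() % SLOT
    return 0.0 if time_into_slot == 0 else SLOT - time_into_slot

delay = wait_until_next_slot()
time.sleep(delay)        # transmission may start exactly at the next slot boundary
# send_frame() would be called here (hypothetical helper)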
CSMA (Carrier Sense Multiple Access)
1-Persistent: In the 1-persistent mode of CSMA, a node first senses the shared channel; if the channel is idle, it
immediately sends the data. Otherwise, it keeps sensing the channel continuously and broadcasts the frame
unconditionally as soon as the channel becomes idle.
Non-Persistent: In the non-persistent method, a station senses the channel before transmitting a frame. If the
channel is busy, the station does not sense it continuously; instead, it waits for a random amount of time and
then senses the channel again, transmitting only when it finds the channel idle.
P-Persistent: The node senses the medium; if it is idle, it sends the data with probability p. If the data is
not transmitted (probability 1-p), it waits for some time and checks the medium again; if the medium is
then found idle, it again sends with probability p. This repeats until the frame is sent. It is used in Wi-Fi and
packet radio systems.
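A minimal sketch of the p-persistent decision loop; the probability p, the slot time, and the channel_idle / send_frame helpers are illustrative assumptions.

import random
import time

P = 0.3        # transmission probability; illustrative value
SLOT = 0.01    # time to wait before re-sensing, in seconds

def p_persistent_send(frame, channel_idle, send_frame):
    # Sense the medium; when idle, transmit with probability P, otherwise defer one slot.
    while True:
        if not channel_idle():
            time.sleep(SLOT)           # busy: wait and sense again
            continue
        if random.random() < P:        # idle: transmit with probability P
            send_frame(frame)
            return
        time.sleep(SLOT)               # with probability 1-P, wait a slot and check again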
CSMA/ CD
Carrier Sense Multiple Access with Collision Detection (CSMA/CD) is a network protocol for carrier
transmission that operates in the Medium Access Control (MAC) layer. It senses or listens whether the shared
channel for transmission is busy or not, and defers transmissions until the channel is free. The collision
detection technology detects collisions by sensing transmissions from other stations. On detection of a
collision, the station stops transmitting, sends a jam signal, and then waits for a random time interval before
retransmission.
Algorithms
The algorithm of CSMA/CD is:
• When a frame is ready, the transmitting station checks whether the channel is idle or busy.
• If the channel is busy, the station waits until the channel becomes idle.
• If the channel is idle, the station starts transmitting and continually monitors the channel to detect
collision.
• If a collision is detected, the station starts the collision resolution algorithm.
• If no collision is detected, the station resets the retransmission counters and completes the frame
transmission.
The algorithm of Collision Resolution is:
• The station continues transmission of the current frame for a specified time along with a jam signal, to
ensure that all the other stations detect collision.
• The station increments the retransmission counters.
• If the maximum number of retransmission attempts is reached, then the station aborts transmission.
• Otherwise, the station waits for a back-off period, which is generally a function of the number of
collisions, and restarts the main algorithm.
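The two procedures above can be sketched together in Python; the helper functions, the slot time, and the attempt limit are illustrative assumptions rather than exact IEEE 802.3 values.

import random
import time

MAX_ATTEMPTS = 16        # illustrative retransmission limit
SLOT_TIME = 0.0000512    # illustrative slot time in seconds

def csma_cd_send(frame, channel_idle, transmit_and_watch, send_jam):
    # Transmit when the channel is idle; on collision, jam, back off, and retry.
    attempts = 0
    while attempts < MAX_ATTEMPTS:
        while not channel_idle():
            pass                                  # wait until the channel becomes idle
        collided = transmit_and_watch(frame)      # transmit while monitoring for a collision
        if not collided:
            return True                           # no collision: frame transmission completed
        send_jam()                                # ensure all other stations detect the collision
        attempts += 1
        k = min(attempts, 10)
        backoff = random.randint(0, 2 ** k - 1) * SLOT_TIME   # back-off grows with the number of collisions
        time.sleep(backoff)
    return False                                  # abort after the maximum number of attempts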
CSMA/ CA
Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA) is a network protocol for carrier
transmission that operates in the Medium Access Control (MAC) layer. In contrast to CSMA/CD (Carrier
Sense Multiple Access/Collision Detection) that deals with collisions after their occurrence, CSMA/CA
prevents collisions prior to their occurrence.
Algorithm
The algorithm of CSMA/CA is:
• When a frame is ready, the transmitting station checks whether the channel is idle or busy.
• If the channel is busy, the station waits until the channel becomes idle.
• If the channel is idle, the station waits for an Inter-frame gap (IFG) amount of time and then sends the
frame.
• After sending the frame, it sets a timer.
• The station then waits for an acknowledgement from the receiver. If it receives the acknowledgement
before the timer expires, it marks the transmission as successful.
• Otherwise, it waits for a back-off time period and restarts the algorithm.
Advantages of CSMA/CA
• CSMA/CA prevents collisions.
Disadvantages of CSMA/CA
• The algorithm calls for long waiting times.
Controlled Access
Controlled access is a method of reducing data frame collisions on a shared channel. In the controlled
access method, the stations coordinate with one another, and a data frame is sent by a particular
station only when it is approved by all the other stations; a single station cannot send data frames
unless it is authorized by the rest. It has three types of controlled access: Reservation, Polling, and
Token Passing.
The protocols that lie under the category of controlled access are as follows:
1. Reservation
2. Polling
3. Token Passing
Reservation
In this method, a station needs to make a reservation before sending the data.
• Time is mainly divided into intervals.
• Also, in each interval, a reservation frame precedes the data frame that is sent in that interval.
• If there are 'N' stations in the system, then there are exactly 'N' reservation mini-slots in the
reservation frame, where each mini-slot belongs to one station.
• Whenever a station needs to send the data frame, then the station makes a reservation in its
own mini slot.
• Then the stations that have made reservations can send their data after the reservation frame.
Example
Let us take an example of 5 stations and a 5-mini-slot reservation frame. In the first interval, stations
2, 3 and 5 have made reservations, while in the second interval only station 2 has made a
reservation.
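The same example can be written out as reservation bitmaps, one bit per mini-slot (station numbering starts at 1; the representation is only illustrative):

def stations_that_transmit(reservation_frame):
    # Return the stations that reserved a mini-slot, in the order they will send.
    return [i + 1 for i, bit in enumerate(reservation_frame) if bit == 1]

interval_1 = [0, 1, 1, 0, 1]    # stations 2, 3 and 5 set their mini-slots
interval_2 = [0, 1, 0, 0, 0]    # only station 2 reserves

print(stations_that_transmit(interval_1))   # [2, 3, 5]
print(stations_that_transmit(interval_2))   # [2]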
Polling
The polling method mainly works with those topologies where one device is designated as the
primary station and the other device is designated as the secondary station.
• All the exchange of data must be made through the primary device even though the final
destination is the secondary device.
• Thus, imposing order on a network of independent users by establishing one station in the
network that acts as a controller and periodically polls all the other stations is simply referred
to as polling.
• The Primary device mainly controls the link while the secondary device follows the
instructions of the primary device.
• The responsibility is on the primary device in order to determine which device is allowed to
use the channel at a given time.
• Therefore the primary device is always an initiator of the session.
Token Passing
In the token passing method, all the stations are organized in the form of a logical ring. We can also
say that for each station there is a predecessor and a successor.
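A minimal sketch of the logical ring: a token circulates from station to station, and only the station holding the token may transmit (the station names and queued frames are made up for illustration).

from collections import deque

def token_passing(stations, pending_frames, rounds=2):
    # Circulate the token around the logical ring; only the token holder transmits.
    ring = deque(stations)
    for _ in range(rounds * len(stations)):
        holder = ring[0]                           # station currently holding the token
        if pending_frames.get(holder):
            frame = pending_frames[holder].pop(0)
            print(f"station {holder} sends {frame!r}")
        ring.rotate(-1)                            # pass the token to the successor

stations = ["A", "B", "C", "D"]
pending_frames = {"B": ["frame-1"], "D": ["frame-2", "frame-3"]}
token_passing(stations, pending_frames)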
Learning Bridge:
A learning bridge is a network device that connects multiple LAN segments and learns the MAC
addresses of devices on each port. It builds a MAC address table (also called a forwarding table)
to decide where to forward frames.
How It Works:
1. Learning:
o When a frame arrives at a bridge port, it examines the source MAC address and
stores it in a table with the port number.
2. Forwarding:
o If the destination MAC address is in the table, the frame is forwarded to the correct
port.
o If it’s not in the table, the bridge floods the frame to all ports (except the source port).
3. Filtering:
o If the destination and source are on the same port, the frame is not forwarded.
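A compact Python sketch of the learn/forward/filter logic described above; the port numbers and frame fields are illustrative.

class LearningBridge:
    def __init__(self, ports):
        self.ports = ports          # e.g. [1, 2, 3]
        self.mac_table = {}         # MAC address -> port number

    def receive(self, frame, in_port):
        src, dst = frame["src"], frame["dst"]
        self.mac_table[src] = in_port                 # 1. Learning: remember which port src lives on
        out_port = self.mac_table.get(dst)
        if out_port == in_port:
            return []                                 # 3. Filtering: same segment, do not forward
        if out_port is not None:
            return [out_port]                         # 2. Forwarding: destination port is known
        return [p for p in self.ports if p != in_port]   # unknown destination: flood all other ports

bridge = LearningBridge(ports=[1, 2, 3])
print(bridge.receive({"src": "AA", "dst": "BB"}, in_port=1))   # flood: [2, 3]
print(bridge.receive({"src": "BB", "dst": "AA"}, in_port=2))   # forward: [1]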
Benefits:
• Reduces unnecessary traffic, since once a MAC address has been learned, frames are forwarded only to
the port where the destination resides.
• Isolates traffic between LAN segments, so frames local to one segment do not occupy the others.