Unit 3
The Data Link layer is located between the physical and network layers. It provides services to the
network layer and receives services from the physical layer. The scope of the data link layer is
node-to-node.
This layer converts the raw transmission facility provided by the physical layer into a reliable,
error-free link.
The main functions and the design issues of this layer are
Providing services to the network layer
Framing
Error Control
Flow Control
Services to the Network Layer
In OSI, each layer uses the services of the layer below it and provides services to the layer above it.
The main function of this layer is to provide a well-defined service interface to the network
layer.
Types of Services
Unacknowledged connectionless service: The sender sends frames and the receiver receives
them without any acknowledgement; both nodes use connectionless service.
Acknowledged connectionless service: The sender sends a frame to the receiver; when the
receiver gets the frame, it sends an acknowledgement back to the sender, but the service
remains connectionless.
Acknowledged connection-oriented service: Both sender and receiver use
connection-oriented service, and every frame exchanged between the two nodes is
acknowledged.
Framing
The data link layer encapsulates each data packet from the network layer into frames that are
then transmitted.
A frame has three parts, namely −
Frame Header
Payload field that contains the data packet from network layer
Trailer
Error Control
The data link layer ensures error free link for data transmission. The issues it caters to with
respect to error control are −
Dealing with transmission errors
Sending acknowledgement frames in reliable connections
Retransmitting lost frames
Identifying duplicate frames and deleting them
Controlling access to shared channels in case of broadcasting
Flow Control
The data link layer regulates flow control so that a fast sender does not drown a slow receiver.
When the sender sends frames at very high speeds, a slow receiver may not be able to handle
it. There will be frame losses even if the transmission is error-free. The two common
approaches for flow control are −
Feedback based flow control
Rate based flow control
Error detection and correction
Data-link layer uses error control techniques to ensure that frames, i.e. bit streams of data, are
transmitted from the source to the destination with a certain extent of accuracy.
Errors
When bits are transmitted over a computer network, they may get corrupted due to
interference and network problems. The corrupted bits lead to spurious data being received
by the destination and are called errors.
Types of Errors
Errors can be of three types, namely single bit errors, multiple bit errors, and burst errors.
Single-bit error: In the received frame, only one bit has been corrupted, i.e. changed
either from 0 to 1 or from 1 to 0.
Multiple-bit error: In the received frame, more than one isolated bit is corrupted.
Burst error: In the received frame, a sequence of consecutive bits is corrupted.
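The three error types above can be illustrated with a short Python sketch; the data byte and the error masks are arbitrary illustrative values, not part of any standard:

```python
# Demonstrate single-bit, multiple-bit, and burst errors by XOR-ing
# error masks onto an 8-bit data unit (all values illustrative).
data = 0b10110010

single_bit = data ^ 0b00000100   # exactly one bit flipped
multiple   = data ^ 0b01000100   # two non-adjacent bits flipped
burst      = data ^ 0b00111100   # a run of consecutive bits flipped

print(f"{data:08b} original")
print(f"{single_bit:08b} single-bit error")
print(f"{multiple:08b} multiple-bit error")
print(f"{burst:08b} burst error")
```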
Error Control
Error control can be done in two ways
Error detection: Error detection involves checking whether any error has occurred or
not. The number of error bits and the type of error does not matter.
Error correction: Error correction involves ascertaining the exact number of bits that
have been corrupted and the location of the corrupted bits.
For both error detection and error correction, the sender needs to send some additional bits
along with the data bits. The receiver performs necessary checks based upon the additional
redundant bits. If it finds that the data is free from errors, it removes the redundant bits before
passing the message to the upper layers.
1. Single Parity Check
o Single parity checking is a simple and inexpensive mechanism to detect errors.
o In this technique, a redundant bit, known as a parity bit, is appended at the
end of the data unit so that the number of 1s becomes even. For an 8-bit data unit, the
total number of transmitted bits would therefore be 9.
o If the number of 1s is odd, a parity bit of 1 is appended; if the number of 1s
is even, a parity bit of 0 is appended at the end of the data unit.
o At the receiving end, the parity bit is calculated from the received data bits and
compared with the received parity bit.
o This technique generates the total number of 1s even, so it is known as even-parity
checking.
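The even-parity scheme described above can be sketched in a few lines of Python; the 8-bit data string is an illustrative example:

```python
def add_even_parity(data_bits):
    """Sender: append a parity bit so the total number of 1s is even."""
    parity = '1' if data_bits.count('1') % 2 == 1 else '0'
    return data_bits + parity

def check_even_parity(unit):
    """Receiver: the unit is valid if its total count of 1s is even."""
    return unit.count('1') % 2 == 0

sent = add_even_parity('11001010')      # four 1s, so parity bit is 0
print(sent)                             # 110010100 (9 bits transmitted)

corrupted = '0' + sent[1:]              # flip the first bit in transit
print(check_even_parity(sent))          # True
print(check_even_parity(corrupted))     # False: single-bit error caught
```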
Drawbacks of Single Parity Checking
o If an even number of bits in the data unit are corrupted, the total number of 1s
remains even, so the single parity checker will not be able to detect the error.
o Hence this technique detects only an odd number of bit errors; it cannot detect
2-bit, 4-bit, or other even-count errors.
2. Checksum
Checksum Generator
A Checksum is generated at the sending side. Checksum generator subdivides the data into
equal segments of n bits each, and all these segments are added together by using one's
complement arithmetic. The sum is complemented and appended to the original data, known
as checksum field. The extended data is transmitted across the network.
Suppose L is the total one's-complement sum of the data segments; then the checksum is the complement of L, i.e. Checksum = ~L.
The Sender follows the given steps:
1. The block unit is divided into k sections, each of n bits.
2. All the k sections are added together by using one's complement to get the sum.
3. The sum is complemented and it becomes the checksum field.
4. The original data and checksum field are sent across the network.
Checksum Checker
A Checksum is verified at the receiving side. The receiver subdivides the incoming data into
equal segments of n bits each, and all these segments are added together, and then this sum is
complemented. If the complement of the sum is zero, then the data is accepted otherwise data
is rejected.
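The sender and receiver steps above can be sketched in Python; the 4-bit segment values are illustrative:

```python
def ones_complement_sum(segments, n):
    """Add n-bit segments with end-around carry (one's complement)."""
    mask = (1 << n) - 1
    total = 0
    for seg in segments:
        total += seg
        total = (total & mask) + (total >> n)   # fold the carry back in
    return total

def make_checksum(segments, n):
    """Sender: complement of the one's-complement sum of the segments."""
    return ones_complement_sum(segments, n) ^ ((1 << n) - 1)

def verify(segments_with_checksum, n):
    """Receiver: the complemented sum of data + checksum must be zero."""
    s = ones_complement_sum(segments_with_checksum, n)
    return (s ^ ((1 << n) - 1)) == 0

data = [0b1001, 0b1110, 0b0110]      # three illustrative 4-bit segments
cs = make_checksum(data, 4)
print(verify(data + [cs], 4))        # True: data accepted
```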
3. Cyclic Redundancy Check (CRC)
o In the CRC technique, a string of n 0s is appended to the data unit, where n is one less
than the number of bits in a predetermined binary number, known as the divisor, which is
n+1 bits long.
o Secondly, the newly extended data is divided by the divisor using a process known as
binary (modulo-2) division. The remainder generated from this division is known as the CRC remainder.
o Thirdly, the CRC remainder replaces the appended 0s at the end of the original data.
This newly generated unit is sent to the receiver.
o The receiver receives the data followed by the CRC remainder. The receiver will treat
this whole unit as a single unit, and it is divided by the same divisor that was used to find
the CRC remainder.
Note: If the remainder of this division is zero, the data has no error and is accepted.
If the remainder is not zero, the data contains an error and is therefore discarded.
Let's understand this concept through an example. Suppose the data unit is 11100 and the divisor is 1001:
CRC Generator
o A CRC generator uses modulo-2 division. Firstly, three zeroes are appended at the end
of the data, as the length of the divisor is 4 and the length of the string of 0s
to be appended is always one less than the length of the divisor.
o Now, the string becomes 11100000, and the resultant string is divided by the divisor
1001.
o The remainder generated from the binary division is known as CRC remainder. The
generated value of the CRC remainder is 111.
o CRC remainder replaces the appended string of 0s at the end of the data unit, and the
final string would be 11100111 which is sent across the network.
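The modulo-2 division used above can be sketched in Python; it reproduces the remainder 111 for data 11100 and divisor 1001 from the example:

```python
def mod2_div(bits, divisor):
    """Modulo-2 (XOR) division; returns the remainder as a bit string."""
    n = len(divisor) - 1
    rem = list(bits)
    for i in range(len(bits) - n):
        if rem[i] == '1':                       # leading bit set: XOR in divisor
            for j, d in enumerate(divisor):
                rem[i + j] = str(int(rem[i + j]) ^ int(d))
    return ''.join(rem[-n:])                    # last n bits are the remainder

data, divisor = '11100', '1001'
rem = mod2_div(data + '000', divisor)   # sender appends n = 3 zeros
codeword = data + rem                   # CRC replaces the appended zeros

print(rem)        # 111
print(codeword)   # 11100111, the unit sent across the network

# Receiver: dividing the whole codeword by the same divisor gives zero.
print(mod2_div(codeword, divisor))      # 000
```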
CRC Checker
The receiver divides the received unit 11100111 by the same divisor 1001. Since the remainder
of this modulo-2 division is zero, the data is accepted.
Utopian Simplex Protocol
A utopian simplex protocol is the simplest protocol because it does not worry about whether
anything is going right or wrong on the channel.
In this protocol, the two entities are the sender and the receiver, which communicate with each
other over a channel. The sender process and receiver process run at the data link layer of the
sender's machine and the receiver's machine, respectively. Sequence numbers and
acknowledgment numbers are not used; only undamaged frame arrivals are handled.
Simplex Stop-and-Wait Protocol
In a stop-and-wait protocol, the sender stops after sending a frame to the receiver and waits for
an acknowledgment before sending another frame.
We here assume a noiseless channel that is error-free on which the frame is never
damaged or corrupted. Here the channel is error-free but does not control the flow of
data.
Using the simplex stop-and-wait protocol, we can prevent the sender from flooding the
receiver with frames faster than the receiver can process them.
To prevent flooding on the receiver side, one solution is to enable the receiver to
process frames back-to-back by adding a buffer of sufficient size. We can enhance the
processing capabilities of the receiver so that it can quickly pass the received frame to
the network layer. But it’s still not a general solution.
The common solution to the flooding problem on the receiver side is to provide
feedback to the sender to reduce the flow rate.
So, in the simplex stop-and-wait protocol, the receiver sends a dummy frame back
to the sender after the packet has been passed to the network layer, asking the sender to
send the next frame.
Because frames now travel in both directions (data one way, acknowledgments the
other), the simplex stop-and-wait protocol is effectively bidirectional.
Let's see how the simplex stop-and-wait protocol handles flow control over a noiseless channel.
The diagram below explains the working of the simplex stop-and-wait protocol.
As you can see in the diagram, the sender sends a frame to the receiver. After
sending the frame, the sender stops transmitting and waits for the acknowledgment
from the receiver.
As soon as the receiver receives the frame, it opens it and sends it to the network layer
for further processing. Now, the receiver will create an acknowledgment, which allows
the sender to send the next frame.
You can see that the communication is bidirectional, but they are using half-duplex
mode.
Simplex Stop-and-Wait Protocol for a Noisy Channel
On a noisy channel, the receiver has only a limited buffer capacity and a limited
processing speed, so the protocol must prevent the sender from flooding the receiver with
data faster than it can be handled.
In rare cases, a frame sent by the sender may be damaged in such a way that the
checksum still appears correct, causing this and all other protocols to fail. To handle lost
frames and acknowledgments, a timer is added.
Suppose the receiver's acknowledgment is lost during transmission. The sender waits for
the acknowledgment for some time, and after the timeout it sends the frame
again. This process repeats until the frame arrives and the acknowledgment is
received from the receiver.
The data link layer is responsible for flow and error control. When the
sender's network layer passes a series of packets to the data link layer, the data link
layer delivers them, via the receiver's data link layer, to the receiver's network layer.
The network layer has no functionality to check whether there is an error in a
packet, so the data link layer must guarantee to the network layer that no transmission
error occurs. Duplicate frames may arrive at the receiver, but this protocol prevents
duplicates from being passed up to the network layer.
As seen in the above section, the network layer does not have the functionality to
detect errors or duplication in packets, so the data link layer must guarantee that there
are no errors and that no duplicate packets reach the network layer. Let us
understand this scenario with an example.
The diagram below explains the working of the simplex stop-and-wait protocol for a noisy
channel.
As you can see in the diagram, the sender sends the packet in the form of a frame
to the receiver. When the receiver receives the frame, it passes the packet inside it
to the network layer.
After frame-1 successfully reaches the receiver, the receiver will send an
acknowledgment to the sender. The sender will send the frame-2 after receiving the
acknowledgment from the receiver. But as shown in the figure, frame-2 is lost during
transmission. Therefore, the sender will retransmit frame-2 after the timeout.
Further, the receiver is sending an acknowledgment to the sender after receiving frame-
2. But the acknowledgment is completely lost during transmission.
The sender is waiting for the acknowledgment, but the timeout has elapsed, and the
acknowledgment has not been received. So the sender will assume that the frame is lost
or damaged, and it will send the same frame again to the receiver.
The receiver receives the same frame again. But how does the receiver recognize that
the packet of the frame is a duplicate or the original? So, it will use the sequence
number to identify whether the packet is duplicate or new.
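The duplicate-detection idea above can be sketched as a toy Python simulation; the loss rate, packet names, and frame representation are illustrative assumptions, not part of any standard:

```python
import random

# Toy stop-and-wait simulation over a channel that loses ACKs at random.
# A 1-bit sequence number lets the receiver discard duplicate frames.
def send_all(packets, loss_rate=0.3):
    delivered = []
    expected = 0              # Rn: the 1-bit sequence the receiver expects
    seq = 0                   # Sn: the 1-bit sequence the sender stamps
    for data in packets:
        while True:           # resend the same frame until it is ACKed
            # Frame (seq, data) reaches the receiver intact.
            if seq == expected:
                delivered.append(data)   # new frame: pass up, flip Rn
                expected ^= 1
            # else: duplicate frame, discard it but re-acknowledge.
            if random.random() > loss_rate:
                seq ^= 1      # ACK arrived: move on to the next frame
                break
            # ACK lost: the sender's timer expires and the loop resends.
    return delivered

print(send_all(['p1', 'p2', 'p3']))   # ['p1', 'p2', 'p3'], no duplicates
```

However many ACKs are lost, every packet is delivered exactly once and in order, because duplicates are filtered by the sequence-number check.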
Sliding Window Protocols
Typically, sliding window protocols are used for flow control. Channel utilization increases
when the sender is allowed to send multiple frames at once before receiving an
acknowledgement from the receiver.
The sliding window protocols also send multiple frames from sender to receiver to
improve channel efficiency. For that, they use flow control mechanisms, which provide
reliable communication.
Here, the term sliding window refers to the buffers or memory that consists of frames.
There are 3 types of sliding window protocols used for flow control.
1. Stop-and-Wait ARQ
In the Stop-and-Wait ARQ protocol, the size of both the sending window and the receiving
window is 1. Since the window size is 1, the sender transmits one frame and waits for
an acknowledgement from the receiver before sending the next one. It is also known as the
one-bit sliding window protocol because the sequence number is only one bit.
As shown in the above figure, Sn is at the 0th position. So the sender will send frame-0
to the receiver. As soon as the receiver receives frame-0, it will check the Rn and send
ACK-1 to the sender, as it wants frame-1 from the sender.
Now, the sender receives ACK-1 and sends frame-1 to the receiver. But frame-1 is lost
during transmission. Therefore, the sender will wait for the acknowledgement from the
receiver until the timeout.
After the timeout, the sender will again send frame-1 to the receiver. The receiver
accepts it and moves Rn to 0, because the next frame it expects is frame-0.
So, the receiver will send ACK-0 to the sender, and the sender will send the second
frame-0 to the receiver. The receiver sends ACK-1, but it is lost during transmission.
So the sender will resend the frame-0 to the receiver, but the receiver will discard it as it
is duplicated, and ACK-1 is sent to the sender. Similarly, this process continues until all
the frames have been sent to the receiver.
2. Go-Back-N ARQ
In the Go-Back-N ARQ protocol, the sending and receiving window sizes are N and 1,
respectively. It uses cumulative acknowledgements for communication.
The Go-Back-N ARQ protocol does not accept corrupted frames during transmission.
Furthermore, it does not accept out-of-order frames and silently discards them.
If a frame is not accepted by the receiver, the Go-Back-N protocol leads to the
retransmission of the entire outstanding window.
When a frame is lost during transmission, the Go-Back-N protocol resends the frame
after the timeout.
The diagram below explains the Go-Back-N ARQ protocol.
As shown in the diagram above, the sending and receiving windows have window sizes
of 6 and 1, respectively. Both Sf and Sn are located in the first position.
Here, Sf specifies the frame of the window that has been sent but has not received
acknowledgement from the receiver, and Sn specifies which frame to send. Since Sn is
at the beginning, the sender will send frame-0 to the receiver.
On the receiver side, Rn describes which frame the receiver expects to receive. As the
receiver accepts frame-0, it slides the Rn onto frame-1, indicating that it now wants
frame-1. So, the receiver will send ACK-1 to the sender.
When the sender receives ACK-1, it sends frame-1 to the receiver. But frame-1 is lost
during transmission, and the sender waits for an acknowledgement.
As soon as the timeout is over, the sender will resend frame-1 to the receiver. This time,
the receiver receives frame-1 and sends ACK-2 to the sender. Now, Sn on the sender
side will increase by one.
When the sender receives ACK-2, it sends frame-2 to the receiver. The receiver receives
frame-2 and sends ACK-3, but ACK-3 is lost during transmission.
Before receiving ACK-3, the sender sends frame-3 and frame-4 to the receiver. When
the receiver sends ACK-5, the sender will understand that the previous frames were
received by the receiver successfully.
The sender sends frame-5 and frame-6 to the receiver, but frame-5 is lost during
transmission while frame-6 reaches the receiver. Since frame-6 is now an out-of-order
frame, the receiver will not accept it. The sender will retransmit frame-5 and frame-6
after the timeout.
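The window bookkeeping described above (Sf, Sn, cumulative ACKs, go-back-on-timeout) can be sketched as a minimal Python class; the channel and timers are abstracted away, and the class name and window size are illustrative:

```python
# Minimal Go-Back-N sender sketch: window size N, cumulative ACKs,
# and resending the whole outstanding window on timeout.
class GoBackNSender:
    def __init__(self, n):
        self.n = n
        self.base = 0          # Sf: oldest frame sent but not yet ACKed
        self.next_seq = 0      # Sn: the next frame to send

    def can_send(self):
        return self.next_seq < self.base + self.n   # window not full

    def send(self):
        frame = self.next_seq
        self.next_seq += 1
        return frame

    def ack(self, rn):
        # Cumulative ACK: Rn acknowledges every frame below rn.
        self.base = max(self.base, rn)

    def timeout(self):
        # Go back N: resend everything from base up to next_seq - 1.
        return list(range(self.base, self.next_seq))

s = GoBackNSender(4)
sent = [s.send() for _ in range(4)]   # fills the window: frames 0-3
print(s.can_send())                   # False: window is full
s.ack(2)                              # frames 0 and 1 acknowledged
print(s.can_send())                   # True: window slides forward
print(s.timeout())                    # [2, 3]: outstanding frames resent
```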
3. Selective Repeat ARQ
As we have seen, in the Go-Back-N ARQ protocol the sender and receiver window sizes are N
and 1, respectively, and it does not accept out-of-order delivery. In the Selective Repeat ARQ
protocol, the sending and receiving windows are of equal size N, and it also accepts
out-of-order frames.
Data Link Layer Protocols
Synchronous Data Link Control (SDLC) − SDLC was developed by IBM in the 1970s as
part of Systems Network Architecture. It was used to connect remote devices to
mainframe computers, ensuring that data units arrived correctly and flowed properly
from one network point to the next.
High-Level Data Link Control (HDLC) − HDLC is based upon SDLC and provides both
unreliable and reliable service. It is a bit-oriented protocol applicable to
both point-to-point and multipoint communications.
Serial Line Internet Protocol (SLIP) − This is a simple protocol for transmitting data
units between an Internet service provider (ISP) and a home user over a dial-up link. It
does not provide error detection or correction facilities.
Point-to-Point Protocol (PPP) − This is used to transmit multiprotocol data between
two directly connected (point-to-point) computers. It is a byte-oriented protocol that
is widely used in broadband communications with heavy loads and high speeds.
Link Control Protocol (LCP) − It is one of the PPP protocols, responsible for establishing,
configuring, testing, maintaining, and terminating links for transmission. It also handles
negotiation of options and the use of features by the two endpoints of the link.
Network Control Protocol (NCP) − These protocols are used for negotiating the
parameters and facilities of the network layer. There is one NCP for every higher-layer
protocol supported by PPP.
Medium Access Control Sublayer (MAC Sublayer)
The medium access control (MAC) sublayer is a sublayer of the data link layer in the open
systems interconnection (OSI) reference model for data transmission. It is responsible for flow
control and multiplexing of the transmission medium. It controls the transmission of data
packets over remotely shared channels and sends data via the network interface card.
Its main responsibility is preventing collisions while ensuring multiple devices can transmit data
fairly and efficiently over a shared communication channel.
Functions of the MAC Sublayer
o Access Control: The MAC layer controls which device can transmit data at any given
time by controlling access to the shared communication medium. It employs various
access control techniques to govern how devices compete for access to the medium.
Contention-based (like Carrier Sense Multiple Access with Collision Detection or
CSMA/CD) and contention-free (like token passing) techniques can be used.
o Frame Addressing: Networked devices on the same network segment are uniquely
identified by their MAC addresses, also called hardware or physical addresses. The MAC
layer includes the source and destination MAC addresses in data frames to identify the
sender and recipient of the data.
o Frame Formatting: The MAC layer packages data from the higher layers (typically the
Network Layer) into frames that can be transmitted over the network medium. These
frames contain data, control information, information for checking for errors, and
information for addressing.
o Error Detection: Many MAC protocols include error-checking components to identify
transmission errors, which guarantees the accuracy of the data transmitted through the
medium. The MAC layer may request retransmission of the frame if an error is found.
o Frame detection and collision handling: Collisions can happen when multiple devices
try to transmit data simultaneously over a shared communication medium like Ethernet.
Detecting collisions and putting collision resolution mechanisms into place to lessen
their effects falls to the MAC layer. For instance, when CSMA/CD detects collisions, it
starts a back-off mechanism that sends data again after an arbitrary amount of time has
passed.
o Flow Control: Some MAC protocols employ flow control to ensure that data is
transmitted at a rate the recipient device can handle without experiencing data loss or
overflow. Flow control mechanisms may use feedback from the receiver to the sender
to change transmission rates.
o Address Resolution: In Ethernet networks, the MAC layer uses the Address Resolution
Protocol (ARP) to translate addresses from higher layers (such as IP addresses) to MAC
addresses: ARP maps the destination IP address to the corresponding MAC address on
the local network segment.
o Broadcast and Multicast: The MAC layer supports both broadcasting and multicasting,
allowing for the sending of frames to various groups of devices (multicast) or all devices
on a network segment (broadcast) as needed.
o Security: Some MAC layer protocols include security features like encryption and
authentication to protect data and ensure that only authorized devices can access the
network.
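The CSMA/CD collision-handling function listed above relies on binary exponential back-off, which can be sketched as follows; the cap of 10 doublings mirrors common Ethernet practice, but the function name and structure are illustrative:

```python
import random

# Sketch of binary exponential back-off as used by CSMA/CD: after the
# i-th consecutive collision, a station waits a random number of slot
# times drawn from 0 .. 2^i - 1 (the doubling is capped, here at 10).
def backoff_slots(collisions, cap=10):
    k = min(collisions, cap)
    return random.randint(0, 2 ** k - 1)

# After the 3rd collision the wait is between 0 and 7 slot times.
print(backoff_slots(3))
```

Randomizing the wait makes it unlikely that the same two stations collide again on their retry, and doubling the range spreads stations out further as congestion persists.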
Limitations of the MAC Sublayer
o Restricted to Shared Medium: Many MAC protocols are designed only for shared
network mediums. MAC protocols might not be the most effective option when
dedicated point-to-point connections (such as point-to-point links or dedicated
circuits) are required.
o Overhead: Control data, addressing, and collision detection mechanisms are all added
by MAC protocols. The network's actual data throughput may decrease due to this
overhead.
o Latency: If applicable, accessing the medium and collision resolution can cause
network communication to have variable and occasionally unpredictable latency.
o Limitations on Scalability: Due to the inefficiency of the contention process, some
MAC protocols may not scale well to extremely large networks with many devices
competing for access.
o Low Utilization Causes Inefficiency: MAC protocols may add unnecessary delays and
contention overhead in networks with low utilization, which can lower overall efficiency.
o Congestion Vulnerability: Network congestion can increase contention and
collisions and reduce network performance for MAC protocols based on contention.
o Security issues: It's possible that MAC protocols don't come with strong security
features by default. It might be necessary to use additional security measures, like
authentication and encryption, to protect data transmission at higher layers.
o Limited Quality of Service (QoS) Support: Although some MAC protocols support
fundamental QoS mechanisms, they might not offer fine-grained control over network
prioritization and traffic management.
Channel Allocation
On a network, multiple devices communicate with each other. It is the responsibility of
the data link layer to provide reliable communication by allocating a channel to a device for
communication. Allocating channels to specific devices for communication is known as channel
allocation.
The data link layer allocates a single broadcast channel between competing devices.
Depending on the network and geographic region, the channel can be guided media or
unguided media. On the channel, several nodes are connected.
The purpose of a channel is to connect one device to another device on a network for
communication.
Channel allocation problem plays a major role in the network. There are two types of channel
allocation schemes used on the network. They are as follows:
1. Static Channel Allocation
2. Dynamic Channel Allocation
Static Channel Allocation
In static channel allocation, the network bandwidth is divided equally between the devices,
and the allocation is permanent for each device.
For example, there are 50 users on the network, and the network bandwidth is 100 Hz.
The 100 Hz bandwidth would be divided into 50 equally sized parts that are 2 Hz. Each
user will get a 2 Hz portion.
Each device has a private frequency band, so there is no possibility of interference with
other devices.
A well-known example of a static channel allocation scheme is FM radio, in which
different, fixed frequencies are assigned to different stations.
The figure below shows the static channel allocation scheme.
As shown in the figure, 1000 MHz bandwidth is present on a single channel. The 1000
MHz bandwidth is divided into four equally sized frequency bands of 250 MHz,
permanent for the specific station.
In addition, there is a gap between the frequency bands of the stations to avoid signal
interference with other signals.
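The figure's split of 1000 MHz into four fixed bands, with guard bands between them, can be sketched numerically; the 10 MHz guard-band width here is a hypothetical value for illustration, not taken from the figure:

```python
# Static FDM split of a 1000 MHz channel into four permanent bands,
# assuming a hypothetical 10 MHz guard band between adjacent bands.
total_mhz, stations, guard = 1000, 4, 10
band = (total_mhz - guard * (stations - 1)) / stations   # usable width each

edges = []
start = 0.0
for _ in range(stations):
    edges.append((start, start + band))
    start += band + guard

print(band)    # 242.5 MHz per station once guard bands are subtracted
print(edges)   # fixed (start, end) frequencies for each station
```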
Let’s say the bandwidth is divided for 50 devices, and only 40 devices are active on the
network, then the bulk of the valuable bandwidth will be wasted.
If there are equal-sized portions of bandwidth for 50 devices and 60 devices want to
communicate, 10 devices will be denied permission due to lack of bandwidth.
Now, let’s say 50 equally-sized bandwidth portions are assigned to 50 devices. But the
problem is that when some devices are idle, their bandwidth will be lost simply because
they are not using it, and no one is allowed to use it.
In a static allocation channel, most of the channels will be idle for most of the time.
Dynamic Channel Allocation
To overcome this problem, we can use dynamic channel allocation, in which the bandwidth is
not allocated to a device permanently.
The first three assumptions for dynamic channel allocation are independent traffic, single-
channel, and observable collisions. Let us understand all these one by one.
Independent Traffic: In this assumption, there are n independent devices controlled by the
program or user that generate frames for transmission. The expected number of frames
generated in the channel depends on the length of the interval and the rate of arrival of new
frames.
After creating a frame, the device is blocked and does nothing until the frame is
successfully transmitted over a channel.
Single Channel: There is a single channel between devices for communication, in which all
devices can send and receive from it.
In the single-channel assumption, protocols may be used to prioritize more important
frames over less important ones.
The channel transmits frames from one device to another according to the priority of
the frame.
Observable Collisions: A collision occurs when two frames travelling on a shared
channel overlap with each other. When a collision occurs between frames, all devices can
detect that a collision occurred on the channel. A lost frame can be retransmitted after some
time. No errors occur other than those generated by collisions.
The last two assumptions for dynamic channel allocation are continuous or slotted time and
carrier sense or no carrier sense. Both of them play major roles in the channel. Let us
understand one by one.
Continuous or Slotted Time: Time may be considered continuous, in which case frame
transmission can begin at any instant. Alternatively, time may be slotted, i.e. divided into
discrete intervals known as slots; in that case a frame transmission must start at the
beginning of a slot.
An idle slot contains no frames for transmission. Similarly, a successful transmission
occurs if the slot contains exactly one frame, and a collision occurs if the slot
contains more than one frame.
Carrier Sense or No Carrier Sense: In the carrier sense, the device checks whether the
channel is in use before using the channel. If the station finds the channel busy, it will not
broadcast any frame on the channel. If there is no carrier sense, the station may not
understand the channel before use, leading to frame collisions.
Now, we have understood the five assumptions of the Dynamic Channel Allocation Scheme.
Some assumptions can either be good or bad, and they also affect the performance of the
network. Let’s discuss the assumptions.
We have seen that in the independent traffic assumption, frames are generated according
to the interval length and the arrival rate, with arrivals occurring independently and
unpredictably. This is not a good model of real network traffic, where packets from the
network layer tend to arrive in bursts.
The single-channel assumption is the heart of the dynamic channel model as no external
way to communicate exists. Since it uses protocols to prioritize the packets,
performance of the network increases.
The collision assumption provides a way to detect collisions if frames collide and frames
need to be retransmitted.
Slotted time can be used for better performance. Splitting the time into discrete
intervals requires the devices to be synchronized with each other.
A network may or may not have carrier sensing. In wireless networks, not every device
is within radio range of every other device, so they may not be able to use carrier
sensing effectively.
Even if there is no network conflict after sensing the channel, the receiver may receive
some frames incorrectly for different reasons. The data link layer protocol and other
higher layers provide reliability.
The diagram below explains the concept of dynamic channel allocation assumptions.
As shown in the figure, all the stations are connected on a single channel and sense the
channel continuously.
Stations sense the channel to see if the channel is busy or idle. If a station finds the
channel idle, it sends data to the channel. If a collision has occurred with data from
another station, the collision detection method will notify the station that the collision
occurred on the channel.
When a station sends data to another station, a clock is synchronized, which helps the
sender and receiver identify the data.
Multiple Access Protocols
For example, suppose there is a classroom full of students. When a teacher asks a
question, all the students (small channels) in the class start answering at the same
time (transferring data simultaneously). Because all the students respond at once, the
answers overlap and data is lost. It is therefore the responsibility of the teacher (the
multiple access protocol) to manage the students and have them answer one at a time.
Following are the types of multiple access protocols, subdivided into different processes:
A. Random Access Protocol
In this protocol, all stations have equal priority to send data over the channel. In random
access protocols, no station depends on or is controlled by any other station. Depending on
the channel's state (idle or busy), each station transmits its data frame. However, if more
than one station sends data over the channel at the same time, there may be a collision or
data conflict. Due to the collision, data frame packets may be lost or changed and hence
not received at the receiver end.
Following are the different methods of random-access protocols for broadcasting frames on
the channel.
o Aloha
o CSMA
o CSMA/CD
o CSMA/CA
Aloha Rules
Pure Aloha
Pure Aloha is used whenever data is available for sending over the channel at a station. In
pure Aloha, each station transmits data on the channel without checking whether the channel
is idle or not, so collisions may occur and data frames can be lost. After a station transmits
a data frame on the channel, it waits for the receiver's acknowledgment. If the
acknowledgment does not arrive within the specified time, the station assumes the frame has
been lost or destroyed, waits for a random amount of time, called the backoff time (Tb), and
retransmits the frame. This is repeated until the data is successfully delivered to the
receiver.
As we can see in the figure above, four stations access a shared channel and transmit data
frames. Some frames collide because several stations send their frames at the same time.
Only two frames, frame 1.1 and frame 2.2, reach the receiver successfully; the other frames
are lost or destroyed. Whenever two frames overlap on the shared channel, a collision occurs
and both frames are damaged: even if only the first bit of a new frame overlaps with the last
bit of a frame that has almost finished, both frames are destroyed and both stations must
retransmit their data frames.
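Pure Aloha's inefficiency can be quantified. With Poisson frame arrivals, a frame is vulnerable to collision for two full frame times, giving the classic throughput formula S = G·e^(−2G) at offered load G. A minimal sketch of this standard result (the function name is my own):

```python
import math

def pure_aloha_throughput(G):
    """Expected throughput S of pure Aloha at offered load G
    (frames per frame time): S = G * e^(-2G), because a frame is
    vulnerable to collision for two full frame times."""
    return G * math.exp(-2 * G)

# Throughput peaks at G = 0.5 with S = 1/(2e) ~= 0.184: at best,
# only about 18.4% of the channel carries successful frames.
peak = pure_aloha_throughput(0.5)
```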
Slotted Aloha
Slotted Aloha is designed to improve on pure Aloha's efficiency, because pure Aloha has a
very high probability of frame collision. In slotted Aloha, the shared channel is divided into
fixed time intervals called slots. A station that wants to send a frame may begin transmitting
only at the beginning of a slot, and only one frame may be sent in each slot. If a station
misses the beginning of a slot, it must wait until the beginning of the next slot. However, the
possibility of a collision remains when two or more stations try to send a frame at the
beginning of the same time slot.
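Slotting halves the vulnerable period, which shows up directly in the standard throughput formulas: S = G·e^(−G) for slotted Aloha versus S = G·e^(−2G) for pure Aloha. A small comparison sketch (function names are mine):

```python
import math

def pure_aloha_throughput(G):
    # Vulnerable for two frame times.
    return G * math.exp(-2 * G)

def slotted_aloha_throughput(G):
    # Collisions happen only when stations pick the same slot,
    # so a frame is vulnerable for just one slot time.
    return G * math.exp(-G)

# Slotted Aloha peaks at G = 1 with S = 1/e ~= 0.368, roughly
# double pure Aloha's best of ~0.184 at G = 0.5.
best_slotted = slotted_aloha_throughput(1.0)
best_pure = pure_aloha_throughput(0.5)
```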
1-Persistent: In the 1-persistent mode of CSMA, each node first senses the shared channel;
if the channel is idle, it sends the data immediately. Otherwise, it must wait and keep
sensing the channel continuously, broadcasting the frame unconditionally (with probability 1)
as soon as the channel becomes idle.
Non-Persistent: It is the access mode of CSMA in which each node must sense the channel
before transmitting the data; if the channel is idle, it sends the data immediately. Otherwise,
the station waits for a random time (rather than sensing continuously), and when the channel
is then found to be idle, it transmits the frame.
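The difference between the two persistence modes can be sketched with a toy round-based channel model; the Channel class and the sensing loops below are illustrative assumptions, not part of any real MAC implementation:

```python
import random

class Channel:
    """Toy channel that reports busy for a fixed number of
    sensing rounds, then idle (an illustrative assumption)."""
    def __init__(self, busy_rounds):
        self.busy_rounds = busy_rounds

    def is_busy(self):
        if self.busy_rounds > 0:
            self.busy_rounds -= 1
            return True
        return False

def one_persistent(channel):
    """1-persistent: sense continuously and transmit the instant
    the channel goes idle. Returns how many busy senses occurred."""
    busy_senses = 0
    while channel.is_busy():
        busy_senses += 1          # keep sensing back-to-back
    return busy_senses            # channel idle: frame is sent now

def non_persistent(channel, rng):
    """Non-persistent: on finding the channel busy, pause for a
    random time before sensing again. Returns the pauses taken."""
    pauses = []
    while channel.is_busy():
        pauses.append(rng.uniform(0.0, 1.0))  # random wait, then re-sense
    return pauses                 # channel idle: frame is sent now

busy_count = one_persistent(Channel(3))        # 3 busy senses
pauses = non_persistent(Channel(2), random.Random(0))
```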
CSMA/CD
It is a carrier sense multiple access/collision detection network protocol for transmitting data
frames. The CSMA/CD protocol works at the medium access control layer. A station first
senses the shared channel before broadcasting and, if the channel is idle, transmits a frame
while monitoring the channel to check whether the transmission was successful. If the frame
is received successfully, the station can send its next frame. If a collision is detected, the
station sends a jam signal on the shared channel to terminate the data transmission. After
that, it waits for a random backoff time before sending the frame again.
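The "random time" mentioned above is chosen, in classic Ethernet's CSMA/CD, by truncated binary exponential backoff: the waiting range doubles after each successive collision. A hedged sketch (the cap of 2^10 slots mirrors Ethernet, but the function itself is only illustrative):

```python
import random

def backoff_slots(collision_count, rng, max_exp=10):
    """Truncated binary exponential backoff: after the n-th
    collision, wait a random number of slot times drawn from
    0 .. 2^min(n, max_exp) - 1."""
    k = min(collision_count, max_exp)
    return rng.randrange(2 ** k)

rng = random.Random(42)
wait = backoff_slots(3, rng)    # some value in 0..7

# The range doubles with each collision, spreading the stations'
# retransmissions apart and making repeat collisions less likely.
ranges = [2 ** min(n, 10) for n in range(1, 5)]   # [2, 4, 8, 16]
```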
CSMA/CA
It is a carrier sense multiple access/collision avoidance network protocol for transmitting
data frames, and it also works at the medium access control layer. When a station transmits
a data frame onto the channel, it listens to the channel to judge whether the transmission
was successful. If the station hears only its own signal, the data frame has been transmitted
successfully. But if it detects two signals (its own and that of another station), the frames
have collided on the shared channel. Because such collisions waste the channel, CSMA/CA
tries to avoid them before they happen.
Following are the methods used in CSMA/CA to avoid collisions:
Interframe space: In this method, the station waits for the channel to become idle; once the
channel is idle, it does not send data immediately but waits for a further period called the
interframe space (IFS). The IFS duration is also used to define station priority: a
higher-priority station is assigned a shorter IFS.
Contention window: In the contention window method, the total wait time is divided into
slots. When the station is ready to transmit a data frame, it chooses a random number of
slots as its wait time and counts them down. If the channel becomes busy during the
countdown, the station does not restart the entire process; it merely freezes the timer and
resumes the countdown when the channel is idle again, sending the frame when the timer
reaches zero.
Acknowledgment: In the acknowledgment method, the sender retransmits the data frame on
the shared channel if an acknowledgment is not received before its timer expires.
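The freeze-and-resume behaviour of the contention window can be shown with a toy per-slot model; the function and its idle/busy pattern are my own simplification, not real 802.11 code:

```python
def contention_countdown(backoff, idle_pattern):
    """Count a backoff timer down one step per idle slot. On busy
    slots the timer is frozen, not restarted; it resumes at the
    same value. Returns (remaining_slots, slots_elapsed)."""
    remaining = backoff
    elapsed = 0
    for idle in idle_pattern:
        if remaining == 0:
            break                 # timer expired: frame is sent
        elapsed += 1
        if idle:
            remaining -= 1        # decrement only while idle
    return remaining, elapsed

# A backoff of 3 slots with slots 2 and 3 busy: the timer freezes
# during the busy slots and expires after the 5th slot overall.
left, elapsed = contention_countdown(3, [True, False, False, True, True])
```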
Binary Countdown
This protocol overcomes the overhead of 1 bit per station of the bit – map protocol. Here,
binary addresses of equal lengths are assigned to each station. For example, if there are 6
stations, they may be assigned the binary addresses 001, 010, 011, 100, 101 and 110. All
stations wanting to communicate broadcast their addresses bit by bit, starting with the most
significant bit; the bits are OR-ed together on the channel, and a station that sent a 0 but
sees a 1 drops out of contention. The station with the highest address therefore wins the
channel and gets to transmit.
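The arbitration can be sketched by modelling the channel's wired-OR of the simultaneously broadcast address bits with Python's bitwise OR (the function name and representation are mine):

```python
def binary_countdown(addresses, width):
    """All contending stations broadcast their addresses one bit
    at a time, most significant bit first; the channel ORs the
    bits together. A station that sent 0 but observes 1 drops
    out, so the highest address wins the channel."""
    contenders = list(addresses)
    for bit in range(width - 1, -1, -1):
        channel = 0
        for addr in contenders:
            channel |= (addr >> bit) & 1      # wired-OR on the bus
        if channel == 1:
            # stations whose current bit is 0 give up this round
            contenders = [a for a in contenders if (a >> bit) & 1]
    return contenders[0]

# Stations 0010, 0100, 1001 and 1010 compete; 1010 (decimal 10),
# the highest address, wins.
winner = binary_countdown([0b0010, 0b0100, 0b1001, 0b1010], 4)
```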
Adaptive Tree Walk Protocol
Initially all nodes (A, B, …, G, H) are permitted to compete for the channel. If a node is
successful in acquiring the channel, it transmits its frame. In case of a collision, the nodes
are divided into two groups (A, B, C, D in one group and E, F, G, H in the other), and only
the nodes of one group are permitted to compete at a time. This splitting continues
recursively until every transmission succeeds.
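This halving search can be sketched recursively; representing the stations as a sequence and the ready stations as a set is an illustrative assumption:

```python
def tree_walk(stations, ready):
    """Adaptive tree walk: probe a group; an empty probe is an
    idle slot, one ready station transmits successfully, and a
    collision (two or more ready stations) splits the group in
    half, each half being probed in turn. Returns the order in
    which the ready stations transmit."""
    order = []

    def probe(group):
        active = [s for s in group if s in ready]
        if len(active) == 0:
            return                    # idle slot, nothing to do
        if len(active) == 1:
            order.append(active[0])   # exactly one sender: success
            return
        mid = len(group) // 2         # collision: split and recurse
        probe(group[:mid])
        probe(group[mid:])

    probe(list(stations))
    return order

# Of stations A..H, only C and G are ready: the first probe of all
# eight collides, then each half is searched separately.
sequence = tree_walk("ABCDEFGH", {"C", "G"})
```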
Wireless LANs
Wireless LANs (WLANs) are wireless computer networks that use high-frequency radio waves
instead of cables for connecting the devices within a limited area forming LAN (Local Area
Network). Users connected by wireless LANs can move around within this limited area such as
home, school, campus, office building, railway platform, etc.
Most WLANs are based on the IEEE 802.11 standard, commonly known as Wi-Fi.
Components of WLANs
WLANs, as standardized by IEEE 802.11, operate in two basic modes: infrastructure mode
and ad hoc mode.
Infrastructure Mode : Mobile devices or clients connect to an access point (AP) that in
turn connects via a bridge to the LAN or Internet. The client transmits frames to other
clients via the AP.
Ad Hoc Mode : Clients transmit frames directly to each other in a peer-to-peer fashion.
Disadvantages of WLANs
Since radio waves are used for communications, the signals are noisier with more
interference from nearby systems.
Greater care is needed for encrypting information. Also, they are more prone to errors.
So, they require greater bandwidth than the wired LANs.
WLANs are slower than wired LANs.
Stations (STA): Stations comprise all devices and equipment that are connected to the
wireless LAN. A station can be of two types:
o Wireless Access Point (WAP): WAPs, or simply access points (APs), are generally wireless
routers that form the base stations or access points.
o Client: Clients are workstations, computers, laptops, printers, smartphones, etc.
The physical layer is the first and lowest layer of the 7-layered OSI model and interacts
directly with the hardware. This layer is in charge of data transmission over the physical
medium. It is considered the most complex layer in the OSI model.
The physical layer converts the data frame received from the data link layer into bits, i.e., in
terms of ones and zeros. It maintains the data quality by implementing the required protocols
on different network modes and maintaining the bit rate through data transfer using a wired or
wireless medium.
The physical layer has several attributes that are implemented in the OSI model:
1. Signals: The data is first converted to a signal for efficient data transmission. There are two
kinds of signals:
o Analog Signals: These signals are continuous waveforms in nature and are
represented by continuous electromagnetic waves for the transmission of data.
o Digital Signals: These signals are discrete in nature and represent network
pulses and digital data from the upper layers.
2. Transmission media: Data is carried from source to destination with the help of
transmission media. There are two sorts of transmission media:
o Wired Media: The connection is established with the help of cables. For example, fiber
optic cables, coaxial cables, and twisted pair cables.
o Wireless Media: The connection is established using a wireless communication
network. For example, Wi-Fi, Bluetooth, etc.
3. Data Flow: It describes the rate at which data flows and the time a transmission takes.
The data flow is affected by factors such as the bandwidth of the medium and the error rate.
4. Transmission mode: It describes the direction of the data flow. Data can be transmitted
in three modes: simplex (one direction only), half-duplex (both directions, but one at a
time), and full-duplex (both directions simultaneously).
Physical Topology:
Physical topology refers to the specification or structure of the connections of the network
between the devices where the transmission will happen. There are four types of topologies,
which are as follows:
Star Topology:
Star topology is a sort of network topology in which each node or device in the network is
individually joined to a central node, which can be a switch or a hub. This topology looks like a
star, due to which this topology is called star topology.
A hub does not route data; it simply forwards the data it receives to all the devices
connected to it. The advantage of this topology is that if one cable fails, only the device
connected to that cable is affected, not the others.
Bus Topology:
Bus topology comprises a single communication line or cable that is connected to each node.
The backbone of this network is the central cable, and each node can communicate with other
devices through the central cable.
The signal travels from one terminator to the other end of the wire. The terminator absorbs
the signal once it reaches the end of the wire to avoid signal bounce. Each computer
communicates independently with the other computers, as in a peer-to-peer network. Each
computer has a unique address, so a message intended for a specific computer can be sent
directly to that computer.
The advantage of bus topology is that the failure of one device does not affect the other
devices. Bus topology is inexpensive to build because it uses a single cable, and it works well
for small networks.
Ring Topology:
In a ring topology, the devices are connected in the form of a ring so that each device has two
neighbors for communication. Data moves around the ring in one direction.
As you can see below, all four devices are connected to each other in the form of a ring. Each
device has two neighbors. Node 2 and Node 4 are neighbors of Node 1; similarly, Node 1 and
Node 3 are neighbors of Node 2, and so on.
Ring topology is also easy to reconfigure: adding another device to the ring requires only one
additional cable, and a device can likewise be removed by joining the two neighboring wires.
Mesh Topology:
In a mesh topology, each system is directly joined to every other system. The advantage of
mesh topology is that there will be no traffic issues as each device has a dedicated
communication line. If one system is not functioning, it will not affect other devices. It provides
more security or privacy.
The drawback of mesh topology is that it is expensive and more complex than other topologies.
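The cost difference can be made concrete by counting the point-to-point links each topology needs for n devices (a back-of-the-envelope sketch; the function is my own):

```python
def links_needed(topology, n):
    """Cables required to connect n devices in each topology:
    star: one cable per device to the central hub;
    bus: one shared backbone cable;
    ring: one cable per device, closing the loop;
    mesh: a dedicated link for every pair of devices."""
    if topology == "star":
        return n
    if topology == "bus":
        return 1
    if topology == "ring":
        return n
    if topology == "mesh":
        return n * (n - 1) // 2
    raise ValueError("unknown topology: " + topology)

# For 10 devices, a full mesh needs 45 links versus 10 for a star,
# which is why mesh is the most expensive topology to build.
mesh_links = links_needed("mesh", 10)
star_links = links_needed("star", 10)
```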
Responsibilities: Bit timing, signal encoding, interaction with the physical medium, and
defining the properties of the cable, optical fiber, or wire itself.
Examples: Specifications for Fast Ethernet, Gigabit Ethernet, and 10 Gigabit Ethernet.
o Bandwidth: It is defined as the maximum transfer rate of a cable. It is a very critical and
expensive resource. Therefore, switching techniques are used for the effective
utilization of the bandwidth of a network.
o Collision: A collision occurs when more than one device transmits a message over the
same physical medium at the same time, so the signals interfere with each other. To
overcome this problem, switching technology is implemented so that packets do not collide
with each other.
Security Concerns: As networks become more interconnected, security threats also evolve.
Switches play an important role in network security, and improvements in encryption, access
control, and threat detection are crucial for safeguarding sensitive information.
5G Integration: The arrival of 5G technology introduces new opportunities and challenges
for network switching. Its higher speed and capacity demand switching infrastructures that
can support a rapidly growing number of connected devices and applications.
Edge Computing: The rise of edge computing, where processing happens closer to the data
source, requires network switching that can handle distributed and decentralized
architectures efficiently.
Quantum Networking: With the exploration of quantum computing and communication, the
landscape of network switching may also see revolutionary changes. Quantum switches,
harnessing the principles of quantum entanglement, could redefine the way information is
transmitted.