
R. B.

INSTITUTE OF MANAGEMENT AND STUDIES


DEPARTMENT: MCA

Semester: 2 Academic Year: 2025-26

Subject Name: Computer Network


Subject Code: MC02094031

UNIT-3

Data Link Layer Design issues

The data link layer is located between the physical and network layers. It provides services to the network layer and receives services from the physical layer. The scope of the data link layer is node-to-node.
This layer converts the raw transmission facility provided by the physical layer into a reliable, error-free link.
The main functions and the design issues of this layer are
 Providing services to the network layer
 Framing
 Error Control
 Flow Control
Services to the Network Layer
In the OSI model, each layer uses the services of the layer below it and provides services to the layer above it. The main function of the data link layer is to provide a well-defined service interface to the network layer.
Types of Services
 Unacknowledged connectionless service: The sender sends frames and the receiver accepts them without sending any acknowledgement; no connection is established between the two nodes.
 Acknowledged connectionless service: The sender sends frames and the receiver returns an acknowledgement for each frame it receives, still without establishing a connection.
 Acknowledged connection-oriented service: The sender and receiver first establish a connection, and every frame exchanged between the two nodes is acknowledged.
Framing
The data link layer encapsulates each data packet from the network layer into frames that are
then transmitted.
A frame has three parts, namely −
 Frame Header
 Payload field that contains the data packet from network layer
 Trailer

Error Control
The data link layer ensures error free link for data transmission. The issues it caters to with
respect to error control are −
 Dealing with transmission errors
 Sending acknowledgement frames in reliable connections
 Retransmitting lost frames
 Identifying duplicate frames and deleting them
 Controlling access to shared channels in case of broadcasting
Flow Control
The data link layer regulates flow control so that a fast sender does not drown a slow receiver.
When the sender sends frames at very high speeds, a slow receiver may not be able to handle
it. There will be frame losses even if the transmission is error-free. The two common
approaches for flow control are −
 Feedback based flow control
 Rate based flow control
Error detection and correction

The data link layer uses error control techniques to ensure that frames, i.e. bit streams of data, are transmitted from the source to the destination with the required degree of accuracy.
Errors
When bits are transmitted over a computer network, they may get corrupted due to interference and network problems. Corrupted bits lead to spurious data being received by the destination and are called errors.
Types of Errors
Errors can be of three types, namely single bit errors, multiple bit errors, and burst errors.
Single bit error: In the received frame, only one bit has been corrupted, i.e. either changed
from 0 to 1 or from 1 to 0

Multiple-bit error: In the received frame, more than one bit is corrupted.
Burst error: In the received frame, a sequence of consecutive bits is corrupted.

Error Control
Error control can be done in two ways
 Error detection: Error detection involves checking whether any error has occurred or not. The number of error bits and the type of error do not matter.
 Error correction: Error correction involves ascertaining the exact number of bits that have been corrupted and the location of the corrupted bits.
For both error detection and error correction, the sender needs to send some additional bits
along with the data bits. The receiver performs necessary checks based upon the additional
redundant bits. If it finds that the data is free from errors, it removes the redundant bits before
passing the message to the upper layers.

Error Detection Techniques


There are three main techniques for detecting errors in frames: Parity Check, Checksum,
and Cyclic Redundancy Check (CRC).

1.Single Parity Check

o Single parity checking is the simplest and least expensive mechanism for detecting errors.
o In this technique, a redundant bit, known as a parity bit, is appended at the end of the data unit so that the total number of 1s becomes even. For an 8-bit data unit, the total number of transmitted bits is therefore 9.
o If the number of 1 bits is odd, a parity bit of 1 is appended; if the number of 1 bits is even, a parity bit of 0 is appended at the end of the data unit.
o At the receiving end, the parity bit is calculated from the received data bits and compared with the received parity bit.
o This technique makes the total number of 1s even, so it is known as even-parity checking.
Drawbacks of Single Parity Checking

o It can only detect single-bit errors.
o If an even number of bits is corrupted (for example, two bits are interchanged), it cannot detect the error.
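The even-parity scheme can be sketched in a few lines of Python (the helper names add_parity and check_parity are illustrative, not from any library):

```python
def add_parity(data_bits):
    """Append a parity bit so the total number of 1s is even."""
    parity = sum(data_bits) % 2            # 1 if the count of 1s is odd
    return data_bits + [parity]

def check_parity(frame_bits):
    """Return True if the received frame has an even number of 1s."""
    return sum(frame_bits) % 2 == 0

frame = add_parity([1, 0, 1, 1, 0, 0, 0, 1])   # four 1s -> parity bit 0
assert check_parity(frame)                      # no error detected

frame[2] ^= 1                                   # flip one bit in transit
assert not check_parity(frame)                  # single-bit error detected

frame[3] ^= 1                                   # flip a second bit
assert check_parity(frame)                      # two-bit error goes undetected
```

The last assertion illustrates the drawback above: any even number of flipped bits leaves the parity unchanged.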

Two-Dimensional Parity Check

o Performance can be improved by using a Two-Dimensional Parity Check, which organizes the data in the form of a table.
o Parity check bits are computed for each row, which is equivalent to a single-parity check.
o In a Two-Dimensional Parity Check, a block of bits is divided into rows, and a redundant row of parity bits is added to the whole block.
o At the receiving end, the parity bits are compared with the parity bits computed from
the received data.
Drawbacks Of 2D Parity Check

o If two bits in one data unit are corrupted and two bits in exactly the same positions in another data unit are also corrupted, the 2D parity checker will not be able to detect the error.
o In some cases, this technique cannot detect errors of 4 bits or more.
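A sketch of the two-dimensional scheme, including the undetectable four-corner error pattern described above (the function names are illustrative):

```python
def add_2d_parity(rows):
    """Append an even-parity bit to each row, then a parity row to the block."""
    with_row_parity = [r + [sum(r) % 2] for r in rows]
    parity_row = [sum(col) % 2 for col in zip(*with_row_parity)]
    return with_row_parity + [parity_row]

def check_2d_parity(block):
    """Every row and every column of a valid block has an even number of 1s."""
    rows_ok = all(sum(r) % 2 == 0 for r in block)
    cols_ok = all(sum(c) % 2 == 0 for c in zip(*block))
    return rows_ok and cols_ok

block = add_2d_parity([[1, 0, 1, 1], [0, 1, 1, 0], [1, 1, 0, 0]])
assert check_2d_parity(block)          # clean block passes

block[0][0] ^= 1
assert not check_2d_parity(block)      # a single corrupted bit is caught
block[0][0] ^= 1                       # undo the error

# Four errors at the same positions in two rows cancel out and go undetected:
for r, c in [(0, 1), (0, 2), (1, 1), (1, 2)]:
    block[r][c] ^= 1
assert check_2d_parity(block)          # the 2D checker is fooled
```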

2.Checksum

A Checksum is an error detection technique based on the concept of redundancy.


It is divided into two parts:

Checksum Generator
A checksum is generated at the sending side. The checksum generator subdivides the data into equal segments of n bits each, and all these segments are added together using one's complement arithmetic. The sum is complemented and appended to the original data as the checksum field. The extended data is transmitted across the network.

Suppose L is the one's complement sum of the data segments; then the checksum is the complement of L (all bits of L inverted).
The Sender follows the given steps:
1. The block unit is divided into k sections, each of n bits.
2. All the k sections are added together by using one's complement to get the sum.
3. The sum is complemented and it becomes the checksum field.
4. The original data and checksum field are sent across the network.

Checksum Checker
A checksum is verified at the receiving side. The receiver subdivides the incoming data into equal segments of n bits each, adds all these segments together, and then complements the sum. If the complement of the sum is zero, the data is accepted; otherwise, the data is rejected.

The Receiver follows the given steps:


1. The block unit is divided into k sections, each of n bits.
2. All the k sections are added together using one's complement arithmetic to get the sum.
3. The sum is complemented.
4. If the complemented sum is zero, the data is accepted; otherwise, the data is discarded.
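The sender and receiver steps can be sketched in Python with 8-bit segments (the helper names are illustrative):

```python
def ones_complement_sum(segments, n=8):
    """Add n-bit segments with end-around carry (one's complement arithmetic)."""
    total, mask = 0, (1 << n) - 1
    for seg in segments:
        total += seg
        total = (total & mask) + (total >> n)   # fold the carry back in
    return total

def make_checksum(segments, n=8):
    """Complement of the one's complement sum: the checksum field."""
    return ones_complement_sum(segments, n) ^ ((1 << n) - 1)

# Sender: data split into 8-bit segments, checksum appended.
data = [0b10011001, 0b11100010, 0b00100100]
chk = make_checksum(data)

# Receiver: sum of all segments plus the checksum, complemented, must be zero.
assert ones_complement_sum(data + [chk]) ^ 0xFF == 0
```

The end-around carry in `ones_complement_sum` is what distinguishes one's complement addition from ordinary binary addition.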

3.Cyclic Redundancy Check (CRC)

CRC is a redundancy-based technique used to detect errors.

Following are the steps used in CRC for error detection:

o In the CRC technique, a string of n 0s is appended to the data unit, where n is one less than the number of bits in a predetermined binary number, known as the divisor, which is n+1 bits long.
o Secondly, the newly extended data is divided by the divisor using a process known as binary (modulo-2) division. The remainder generated from this division is known as the CRC remainder.
o Thirdly, the CRC remainder replaces the appended 0s at the end of the original data. This newly generated unit is sent to the receiver.
o The receiver receives the data followed by the CRC remainder. The receiver treats this whole unit as a single unit and divides it by the same divisor that was used to find the CRC remainder.
Note: If the remainder of this division is zero, the data has no error and is accepted.

If the remainder of this division is not zero, the data contains an error and is therefore discarded.
Let's understand this concept through an example:

Suppose the original data is 11100 and divisor is 1001.

CRC Generator

o A CRC generator uses modulo-2 division. Firstly, three zeroes are appended at the end of the data, since the divisor is 4 bits long and the string of 0s to be appended is always one bit shorter than the divisor.
o Now, the string becomes 11100000, and the resultant string is divided by the divisor
1001.
o The remainder generated from the binary division is known as CRC remainder. The
generated value of the CRC remainder is 111.
o CRC remainder replaces the appended string of 0s at the end of the data unit, and the
final string would be 11100111 which is sent across the network.

CRC Checker

o The functionality of the CRC checker is similar to the CRC generator.


o When the string 11100111 is received at the receiving end, the CRC checker performs modulo-2 division.
o The string is divided by the same divisor, i.e., 1001.
o In this case, the CRC checker generates a remainder of zero. Therefore, the data is accepted.
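The generator and checker share one modulo-2 division routine, sketched below on the example above (the function name mod2_div is illustrative):

```python
def mod2_div(dividend, divisor):
    """Modulo-2 (XOR) division on bit strings; returns the remainder."""
    n = len(divisor)
    work = list(dividend)
    for i in range(len(dividend) - n + 1):
        if work[i] == "1":                       # only divide where a 1 leads
            for j in range(n):
                work[i + j] = str(int(work[i + j]) ^ int(divisor[j]))
    return "".join(work[-(n - 1):])

data, divisor = "11100", "1001"
rem = mod2_div(data + "000", divisor)    # sender appends len(divisor)-1 zeros
codeword = data + rem                    # CRC remainder replaces the zeros
assert rem == "111"
assert codeword == "11100111"            # the unit sent across the network

# Receiver: divide the whole received unit by the same divisor.
assert mod2_div(codeword, divisor) == "000"   # zero remainder: accept the data
```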

Error Correction Techniques


Error correction techniques find out the exact number of bits that have been corrupted as well as their locations. There are two principal approaches:
 Backward Error Correction (Retransmission): If the receiver detects an error in the incoming frame, it requests the sender to retransmit the frame. It is a relatively simple technique, but it can be used efficiently only where retransmission is inexpensive, as in optical fiber, and the retransmission time is low relative to the requirements of the application.
 Forward Error Correction: If the receiver detects an error in the incoming frame, it executes an error-correcting code that reconstructs the actual frame. This saves the bandwidth required for retransmission and is essential in real-time systems. However, if there are too many errors, the frames still need to be retransmitted.
Elementary data link protocols
Protocols in the data link layer are designed so that this layer can perform its basic functions: framing, error control and flow control. Framing is the process of dividing bit streams from the physical layer into data frames whose size ranges from a few hundred to a few thousand bytes. Error control mechanisms deal with transmission errors and the retransmission of corrupted and lost frames. Flow control regulates the speed of delivery so that a fast sender does not drown a slow receiver.
Types of Data Link Protocols
Data link protocols can be broadly divided into two categories, depending on whether the
transmission channel is noiseless or noisy.

The utopian simplex protocol is a simple protocol because it does not worry about whether anything is going wrong on the channel.

 In this protocol, data is transmitted in only one direction. Therefore it is unidirectional.


 No matter what is happening in the network, the sender and receiver are always ready to communicate; delays in processing are also ignored.
 The protocol assumes that infinite buffer space is available on both the sender and the receiver.
 It is an unrealistic protocol; you can also call it an unrestricted protocol.
 In this protocol, the channel between layer 2 of the sender and the receiver never discards or damages a frame during communication.

Working of Utopian Simplex Protocol

In this protocol, two entities, a sender and a receiver, communicate with each other over a channel. The sender process and receiver process run at the data link layer of the sender's machine and the receiver's machine, respectively. Sequence numbers and acknowledgement numbers are not used; the only event is the arrival of undamaged frames.

The diagram below explains the utopian simplex protocol.


 As you can see in the diagram, the direction of communication is in only one direction.
The sender is sending the data over the line as fast as possible. The sender’s machine
fetches packets from the network layer, creates frames, and sends the frames over the
line.
 On the other hand, the receiver waits to receive a frame. When a frame arrives from the sender, the receiver takes it from the hardware buffer and passes it to the network layer.
 After the frame is sent to the network layer, the receiver’s data link layer will sit back to
wait for the next frame.
Because this protocol is unrealistic and unrestricted, it has no flow control and error control
restrictions. Here, no frame is lost during transmission, and hence no field of the frame is
required to control the flow of data and detect the error.
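The one-way, no-acknowledgement behaviour above can be rendered as a toy simulation (all names are illustrative; a deque stands in for the perfect channel):

```python
from collections import deque

channel = deque()                      # the ideal, lossless channel

def sender(packets):
    for packet in packets:             # fetch packets from the network layer
        frame = {"info": packet}       # build a frame around each packet
        channel.append(frame)          # send it; never wait for anything

def receiver():
    delivered = []
    while channel:                     # wait for the next (undamaged) frame
        frame = channel.popleft()
        delivered.append(frame["info"])  # pass the packet to the network layer
    return delivered

sender(["p1", "p2", "p3"])
assert receiver() == ["p1", "p2", "p3"]
```

Note that the sender never blocks and no frame carries a sequence number or checksum, exactly because the channel is assumed perfect.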

Simplex Stop-and-Wait Protocol for Noiseless Channel

In a stop-and-wait protocol, the sender stops after sending a frame to the receiver and waits for
an acknowledgment before sending another frame.

 Here we assume a noiseless channel, i.e. one that is error-free and on which a frame is never damaged or corrupted. An error-free channel, however, does not by itself control the flow of data.
 Using the simplex stop-and-wait protocol, we can prevent the sender from flooding the
receiver with frames faster than the receiver can process them.
 To prevent flooding on the receiver side, one solution is to enable the receiver to process frames back-to-back by adding a buffer of sufficient size, or to enhance the processing capability of the receiver so that it can quickly pass each received frame to the network layer. But this is still not a general solution.
 The common solution for addressing flooding on the receiver side is to provide feedback to the sender so that it matches the flow rate of the receiver.
 Thus, in the simplex stop-and-wait protocol, the receiver sends a dummy frame back to the sender after it has passed the packet to the network layer, asking the sender to send the next frame.
 Since frames travel from sender to receiver and acknowledgements travel back, the simplex stop-and-wait protocol uses the channel in both directions.

Working of Simplex Stop-and-Wait Protocol (Noiseless Channel)

Let's see how the simplex stop-and-wait protocol handles flow control over a noiseless channel.

The diagram below explains the working of the simplex stop-and-wait protocol.

 As you can see in the diagram, the sender sends a frame to the receiver. After sending the frame, the sender stops transmitting and waits for an acknowledgement from the receiver.
 As soon as the receiver receives the frame, it passes it to the network layer for further processing. The receiver then creates an acknowledgement, which allows the sender to send the next frame.
 You can see that the communication is bidirectional, but it uses half-duplex mode.
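The send-then-block exchange can be sketched as follows (a minimal simulation with illustrative names; two deques stand in for the two directions of the channel):

```python
from collections import deque

data_channel, ack_channel = deque(), deque()
delivered = []                          # what the receiver's network layer saw

def receive_one():
    frame = data_channel.popleft()      # take the frame from the channel
    delivered.append(frame["info"])     # hand the packet to the network layer
    ack_channel.append({"dummy": True}) # ask the sender for the next frame

def send_one(packet):
    data_channel.append({"info": packet})  # send one frame...
    while not ack_channel:                 # ...then stop and wait
        receive_one()                      # (the receiver runs until it acks)
    ack_channel.popleft()                  # dummy frame arrived: proceed

for p in ["p1", "p2", "p3"]:
    send_one(p)
assert delivered == ["p1", "p2", "p3"]
```

Because the channel is noiseless, no sequence numbers or timers are needed; the dummy acknowledgement frame provides pure flow control.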

Simplex Stop-and-Wait Protocol for a Noisy Channel


Here, assume the general situation in which errors occur on the communication channel during transmission: frames can be damaged or lost completely.

 On a noisy channel, the receiver has only a limited buffer capacity and a limited processing speed, so the protocol must prevent the sender from flooding the receiver with data faster than it can be handled.
 In rare cases, a frame may be damaged in such a way that the checksum nevertheless appears correct; such undetected errors cause this and every other protocol to fail. To deal with lost frames and lost acknowledgements, a timer is added.
 Suppose the receiver's acknowledgement is lost during transmission. The sender waits for the acknowledgement for some time and, after the timeout, sends the frame again. This process is repeated until the frame arrives and the acknowledgement is received from the receiver.
 The data link layer is responsible for flow and error control. When the sender's network layer hands a series of packets to the data link layer, the data link layer delivers those packets through the receiver's data link layer to the receiver's network layer.
 The network layer has no way of checking whether a packet contains an error, so the data link layer must guarantee to the network layer that no transmission error remains in the packet. Retransmissions mean duplicate packets could reach the network layer, but this protocol prevents that by numbering the frames.

Working of Simplex Stop-and-Wait Protocol for a Noisy Channel

As we saw in the section above, the network layer cannot detect errors or duplicates in a packet, so the data link layer guarantees that packets are error-free. Duplicate frames, however, can still arrive at the receiver, so let us understand this scenario with an example.

The diagram below explains the working of the simplex stop-and-wait protocol for a noisy
channel.
 As you can see in the above diagram, the sender sends the packet in the form of a frame
to the receiver. When the receiver receives the frame, it sends the frame in a packet
format to the network layer.
 After frame-1 successfully reaches the receiver, the receiver will send an
acknowledgment to the sender. The sender will send the frame-2 after receiving the
acknowledgment from the receiver. But as shown in the figure, frame-2 is lost during
transmission. Therefore, the sender will retransmit frame-2 after the timeout.
 Further, the receiver is sending an acknowledgment to the sender after receiving frame-
2. But the acknowledgment is completely lost during transmission.
 The sender is waiting for the acknowledgment, but the timeout has elapsed, and the
acknowledgment has not been received. So the sender will assume that the frame is lost
or damaged, and it will send the same frame again to the receiver.
 The receiver receives the same frame again. But how does the receiver recognize whether the packet in this frame is a duplicate or the original? It uses the sequence number to identify whether the packet is a duplicate or new.

Sliding Window Protocols

Typically, the sliding window protocol is used for flow control. Throughput increases when the sender sends multiple frames at once, before receiving an acknowledgement for the first frame from the receiver.

 Sliding window protocols also send multiple frames from sender to receiver to improve channel efficiency, using flow control mechanisms that provide reliable communication.
 Here, the term sliding window refers to the buffer, or window of memory, that holds the frames in transit.
There are 3 types of sliding window protocols used for flow control.

1. Stop-and-Wait ARQ Protocol


2. Go-Back-N ARQ Protocol
3. Selective Repeat ARQ Protocol

Stop-and-Wait ARQ Protocol

In the Stop-and-Wait ARQ protocol, the size of both the sending window and the receiving window is 1. Since the window size is 1, the sender transmits one frame and waits for an acknowledgement from the receiver before sending the next one. It is also known as the one-bit sliding window protocol because a one-bit sequence number (0 or 1) is enough to number the frames.

The diagram below explains the Stop-and-Wait ARQ protocol.

 As shown in the above figure, Sn is at the 0th position. So the sender will send frame-0
to the receiver. As soon as the receiver receives frame-0, it will check the Rn and send
ACK-1 to the sender, as it wants frame-1 from the sender.
 Now, the sender receives ACK-1 and sends frame-1 to the receiver. But frame-1 is lost
during transmission. Therefore, the sender will wait for the acknowledgement from the
receiver until the timeout.
 After the timeout, the sender will again send frame-1 to the receiver. Now, the receiver
Rn is on the 0th frame because it wants the next frame, which is frame-0.
 So, the receiver will send ACK-0 to the sender, and the sender will send the second
frame-0 to the receiver. The receiver sends ACK-1, but it is lost during transmission.
 So the sender will resend the frame-0 to the receiver, but the receiver will discard it as it
is duplicated, and ACK-1 is sent to the sender. Similarly, this process continues until all
the frames have been sent to the receiver.
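The walkthrough above can be condensed into a small simulation (a sketch with illustrative names; the loss schedules stand in for the noisy channel, and `attempt` counts transmissions so specific ones can be dropped):

```python
def stop_and_wait(packets, lose_frame=(), lose_ack=()):
    """Deliver packets over a lossy channel; transmission attempts listed in
    lose_frame / lose_ack are dropped in transit. Returns what the
    receiver's network layer saw."""
    delivered, attempt = [], 0
    sn, rn = 0, 0                        # 1-bit sequence numbers Sn and Rn
    for packet in packets:
        while True:                      # retransmit until the ack arrives
            attempt += 1
            if attempt in lose_frame:    # data frame lost: sender times out
                continue
            if sn == rn:                 # expected frame: deliver, advance Rn
                delivered.append(packet)
                rn ^= 1
            # duplicate frames (sn != rn) are discarded but still acked
            if attempt in lose_ack:      # ack lost: sender times out, resends
                continue
            sn ^= 1                      # ack received: advance Sn
            break
    return delivered

# Frame lost on attempt 2, ack lost on attempt 3: the duplicate resend is
# discarded by the receiver, and the network layer still sees each packet once.
assert stop_and_wait(["p0", "p1"], lose_frame={2}, lose_ack={3}) == ["p0", "p1"]
```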

Go-Back-N ARQ Protocol

In the Go-Back-N ARQ protocol, the sending and receiving window sizes are N and 1, respectively. It uses cumulative acknowledgements for communication.

 The Go-Back-N ARQ protocol does not accept corrupted frames during transmission.
Furthermore, it does not accept out-of-order frames and silently discards them.
 If the receiver does not accept a frame, the go-back-n protocol leads to the re-
transmission of the entire window.
 When a frame is lost during transmission, the go-back-n protocol resends the frame
after the timeout.
The diagram below explains the Go-Back-N ARQ protocol.

 As shown in the diagram above, the sending and receiving windows have window sizes
of 6 and 1, respectively. Both Sf and Sn are located in the first position.
 Here, Sf specifies the frame of the window that has been sent but has not received
acknowledgement from the receiver, and Sn specifies which frame to send. Since Sn is
at the beginning, the sender will send frame-0 to the receiver.
 On the receiver side, Rn describes which frame the receiver expects to receive. As the
receiver accepts frame-0, it slides the Rn onto frame-1, indicating that it now wants
frame-1. So, the receiver will send ACK-1 to the sender.
 When the sender receives ACK-1, it sends frame-1 to the receiver. But frame-1 is lost during transmission, and the sender is left waiting for an acknowledgement.
 As soon as the timeout is over, the sender will resend frame-1 to the receiver. This time,
the receiver receives frame-1 and sends ACK-2 to the sender. Now, Sn on the sender
side will increase by one.
 When the sender receives ACK-2, it sends frame-2 to the receiver. The receiver receives
frame-2 and sends ACK-3, but ACK-3 is lost during transmission.
 Before receiving ACK-3, the sender sends frame-3 and frame-4 to the receiver. When
the receiver sends ACK-5, the sender will understand that the previous frames were
received by the receiver successfully.
 The sender sends frame-5 and frame-6 to the receiver, but frame-5 is lost during transmission and only frame-6 reaches the receiver. Since frame-6 is an out-of-order frame, the receiver will not accept it, and the sender will retransmit frame-5 and frame-6 after the timeout.
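A highly simplified sketch of this behaviour (illustrative names; sending a window and sliding it on the cumulative acknowledgement are collapsed into one loop iteration, and `lost` schedules which transmissions are dropped):

```python
def go_back_n(frames, window, lost):
    """lost: set of (frame_index, attempt_number) transmissions that are
    dropped in transit. Returns the frames the receiver accepted, in order."""
    delivered, rn = [], 0                # Rn: the next frame the receiver expects
    attempts = [0] * len(frames)
    base = 0                             # left edge of the sending window
    while base < len(frames):
        # send every frame currently in the window [base, base + window)
        for i in range(base, min(base + window, len(frames))):
            attempts[i] += 1
            if (i, attempts[i]) in lost:
                continue                 # frame lost in transit
            if i == rn:                  # in-order frame: accept it
                delivered.append(frames[i])
                rn += 1
            # out-of-order frames are silently discarded (window size 1)
        base = rn                        # cumulative ack slides the window
    return delivered

frames = ["f0", "f1", "f2", "f3"]
# f1 is lost on its first attempt: f2 arrives out of order and is discarded,
# so the sender goes back and resends everything from f1 onward.
assert go_back_n(frames, window=3, lost={(1, 1)}) == frames
```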

Selective Repeat ARQ Protocol

As we have seen, in the Go-Back-N ARQ protocol the sender and receiver window sizes are N and 1, respectively, and out-of-order frames are not accepted. In the Selective Repeat ARQ protocol, the sending and receiving windows are of equal size N, and out-of-order frames are accepted.

 Selective Repeat ARQ protocol uses independent acknowledgement for communication.


 Like the Go-Back-N ARQ protocol, the Selective Repeat ARQ protocol does not accept corrupted frames. But unlike Go-Back-N, it does not discard them silently; it sends a negative acknowledgement (NAK) for the damaged frame.
 To find the frame to be sent to the receiver, the sender performs the search operation.
On the receiver side, since it accepts out-of-order frames, it needs a sorting operation to
sort the frames according to the sequence number.
 If a frame is lost during transmission, the Selective Repeat ARQ protocol resends only the lost frame to the receiver.
The below diagram explains the Selective Repeat ARQ protocol.
 In the above figure, Sf specifies the frame of the window that has been sent but has not
received acknowledgement from the receiver, Sn specifies which frame to send and Rn
describes which frame the receiver expects to receive.
 As shown in the figure, the sender and receiver windows are each of size 4. The sender sends frame-0 to the receiver, and the receiver sends ACK-1.
 After receiving the ACK-1, the sender sends frame-1 to the receiver, and the receiver
sends ACK-2 to the sender. Upon receiving ACK-2, the sender sends frame-2, and frame-
2 is lost during transmission.
 The sender also sends frame-3 and frame-4 before receiving the acknowledgement from
the receiver.
 When frame-3 arrives, the receiver will send a negative acknowledgement to the
sender. So the sender will see the NAK-2 and resend the frame-2 to the receiver.
 This process continues until all frames have been sent and received by the sender and
receiver, respectively. Sf, Sn, and Rn will change as per the situation.
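The key difference from Go-Back-N, buffering out-of-order frames and resending only the missing one, can be sketched as follows (a deliberately simplified single-window sketch with illustrative names; the NAK-driven retransmission is modelled as a second pass over the lost frames):

```python
def selective_repeat(frames, lost_first_attempt):
    """lost_first_attempt: indices of frames whose first transmission is
    dropped. The receiver buffers out-of-order frames and NAKs the missing
    ones; only those are retransmitted."""
    buffered = {}                            # receiver's out-of-order buffer
    naks = []
    for i, frame in enumerate(frames):       # first pass through the window
        if i in lost_first_attempt:
            naks.append(i)                   # receiver NAKs the missing frame
        else:
            buffered[i] = frame              # out-of-order frames are kept
    for i in naks:                           # resend only the lost frames
        buffered[i] = frames[i]
    # deliver to the network layer sorted by sequence number
    return [buffered[i] for i in sorted(buffered)]

frames = ["f0", "f1", "f2", "f3"]
assert selective_repeat(frames, {2}) == frames   # only f2 is resent
```

The final sort is the receiver-side sorting operation mentioned above: frames accepted out of order must be reordered by sequence number before delivery.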
Data Link Protocols
The data link protocols operate in the data link layer of the Open System Interconnections (OSI)
model, just above the physical layer.
The services provided by the data link protocols may be any of the following −
 Framing − The stream of bits from the physical layer is divided into data frames whose size ranges from a few hundred to a few thousand bytes. These frames are delivered to different systems by adding a header containing the addresses of the sender and the receiver.
 Flow Control − Through flow control techniques, data is transmitted in such a way so
that a fast sender does not drown a slow receiver.
 Error Detection and/or Correction − These are techniques of detecting and correcting
data frames that have been corrupted or lost during transmission.
 Multipoint transmission − Access to shared channels by multiple points is regulated in the case of broadcasting and LANs.
Common Data Link Protocols

 Synchronous Data Link Control (SDLC) − SDLC was developed by IBM in the 1970s as part of Systems Network Architecture. It was used to connect remote devices to mainframe computers, ensuring that data units arrived correctly and with the right flow from one network point to the next.
 High-Level Data Link Control (HDLC) − HDLC is based upon SDLC and provides both unreliable and reliable service. It is a bit-oriented protocol that is applicable to both point-to-point and multipoint communications.
 Serial Line Internet Protocol (SLIP) − This is a simple protocol for transmitting data units between an Internet service provider (ISP) and a home user over a dial-up link. It does not provide error detection/correction facilities.
 Point-to-Point Protocol (PPP) − This is used to transmit multiprotocol data between two directly connected (point-to-point) computers. It is a byte-oriented protocol that is widely used in broadband communications with heavy loads and high speeds.
 Link Control Protocol (LCP) − This is the PPP protocol responsible for establishing, configuring, testing, maintaining and terminating links for transmission. It also handles negotiation of options and the use of features by the two endpoints of the link.
 Network Control Protocol (NCP) − These protocols negotiate the parameters and facilities for the network layer. There is one NCP for every higher-layer protocol supported by PPP.
Medium Access Control Sublayer (MAC Sublayer)
The medium access control (MAC) layer is a sublayer of the data link layer in the open systems interconnection (OSI) reference model for data transmission. It is responsible for flow control and multiplexing of the transmission medium. It controls the transmission of data packets over remotely shared channels and sends data via the network interface card.

MAC Layer in the OSI Model


The Open System Interconnections (OSI) model is a layered networking framework that
conceptualizes how communications should be done between heterogeneous systems. The
data link layer is the second lowest layer. It is divided into two sublayers −
 The logical link control (LLC) sublayer
 The medium access control (MAC) sublayer

Function of Medium Access Control


A computer network's Medium Access Control (MAC) layer performs several crucial tasks to
control access to the shared communication medium.

Its main responsibility is preventing collisions while ensuring multiple devices can transmit data
fairly and efficiently over a shared communication channel.

The MAC layer performs the following primary tasks:

o Access Control: The MAC layer determines which device may transmit data at any given time by arbitrating access to the shared communication medium. It employs various access control techniques, which may be contention-based (like Carrier Sense Multiple Access with Collision Detection, or CSMA/CD) or contention-free (like token passing).
o Frame Addressing: Networked devices on the same network segment are uniquely
identified by their MAC addresses, also called hardware or physical addresses. The MAC
layer includes the source and destination MAC addresses in data frames to identify the
sender and recipient of the data.
o Frame Formatting: The MAC layer packages data from the higher layers (typically the
Network Layer) into frames that can be transmitted over the network medium. These
frames contain data, control information, information for checking for errors, and
information for addressing.
o Error detection: Many MAC protocols include error-checking components to identify
transmission errors. This guarantees the accuracy of the data transmitted through the
medium. The MAC layer may ask for retransmission of the frame if a mistake is found.
o Frame detection and collision handling: Collisions can happen when multiple devices try to transmit data simultaneously over a shared communication medium like Ethernet. Detecting collisions and putting collision-resolution mechanisms into place to lessen their effects falls to the MAC layer. For instance, when CSMA/CD detects a collision, it starts a back-off mechanism that retransmits the data after a random amount of time has passed.
o Flow Control: Some MAC protocols employ flow control to ensure that data is
transmitted at a rate the recipient device can handle without experiencing data loss or
overflow. Flow control mechanisms may use feedback from the receiver to the sender
to change transmission rates.
o Address Resolution: In Ethernet networks, the Address Resolution Protocol (ARP) translates higher-layer addresses (such as IP addresses) into MAC addresses. ARP maps the destination IP address to the corresponding MAC address on the local network segment.
o Broadcast and Multicast: The MAC layer supports both broadcasting and multicasting,
allowing for the sending of frames to various groups of devices (multicast) or all devices
on a network segment (broadcast) as needed.
o Security: Some MAC layer protocols include security features like encryption and
authentication to protect data and ensure that only authorized devices can access the
network.

Limitation of Medium Access Control

There are some limitations of medium access control in computer networks:

o Restricted to Shared Medium: Shared network mediums are the only medium for
many MAC protocols. MAC protocols might not be the most effective option when
dedicated point-to-point connections are required (such as point-to-point links or
dedicated circuits).
o Overhead: Control data, addressing, and collision detection mechanisms are all added
by MAC protocols. The network's actual data throughput may decrease due to this
overhead.
o Latency: If applicable, accessing the medium and collision resolution can cause
network communication to have variable and occasionally unpredictable latency.
o Limitations on Scalability: Due to the inefficiency of the contention process, some
MAC protocols may not scale well to extremely large networks with many devices
competing for access.
o Low Utilization Causes Inefficiency: MAC protocols may add unnecessary delays and
contention overhead in networks with low utilization, which can lower overall efficiency.
o Congestion Vulnerability: Network congestion can increase contention and
collisions and reduce network performance for MAC protocols based on contention.
o Security issues: MAC protocols do not necessarily include strong security features by
default, so unauthorized devices may be able to join the network. Additional security
measures, such as authentication and encryption, are often needed at higher layers to
protect data transmission.
o Limited Quality of Service (QoS) Support: Although some MAC protocols support
fundamental QoS mechanisms, they might not offer fine-grained control over network
prioritization and traffic management.

The channel allocation problem

What is Channel Allocation?

On a network, multiple devices are communicating with each other. It is the responsibility of
the data link layer to provide reliable communication by allocating a channel to the device for
communication. Allocating channels to specific devices for communication is known as channel
allocation.

 The data link layer allocates a single broadcast channel between competing devices.
 Depending on the network and geographic region, the channel can be guided media or
unguided media. On the channel, several nodes are connected.
 The purpose of a channel is to connect one device to another device on a network for
communication.

Channel Allocation Schemes

Channel allocation problem plays a major role in the network. There are two types of channel
allocation schemes used on the network. They are as follows:
1. Static Channel Allocation
2. Dynamic Channel Allocation

 Static Channel Allocation: This is a traditional way of allocating a single channel


among multiple users. In static allocation, the channel’s bandwidth is split into equal-
sized portions among users, and each user gets a portion of the bandwidth.
 Dynamic Channel Allocation: In dynamic channel allocation, bandwidth is not
allocated to a user permanently. A frequency band is allocated to a device only when it is
needed on the network. Because idle devices hold no bandwidth, this improves the
resource utilization of the network.

Static Channel Allocation

In static channel allocation, the network bandwidth is divided equally between devices and is
permanent for devices.

 For example, there are 50 users on the network, and the network bandwidth is 100 Hz.
The 100 Hz bandwidth would be divided into 50 equally sized parts that are 2 Hz. Each
user will get a 2 Hz portion.
 Each device has a private frequency band, so there is no possibility of interference with
other devices.
 A well-known example of static channel allocation is FM radio, in which different
stations are assigned fixed, non-overlapping frequencies.
The figure below shows the static channel allocation scheme.

 As shown in the figure, 1000 MHz bandwidth is present on a single channel. The 1000
MHz bandwidth is divided into four equally sized frequency bands of 250 MHz,
permanent for the specific station.
 In addition, there is a gap between the frequency bands of the stations to avoid signal
interference with other signals.

Disadvantages of Static Channel Allocation

 Let’s say the bandwidth is divided for 50 devices, and only 40 devices are active on the
network; then the bandwidth portions of the 10 idle devices are simply wasted.
 If there are equal-sized portions of bandwidth for 50 devices and 60 devices want to
communicate, 10 devices will be denied permission due to lack of bandwidth.
 Now, let’s say 50 equally-sized bandwidth portions are assigned to 50 devices. But the
problem is that when some devices are idle, their bandwidth will be lost simply because
they are not using it, and no one is allowed to use it.
 In a static allocation channel, most of the channels will be idle for most of the time.
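The arithmetic behind static allocation and its waste can be sketched in a few lines of Python (the helper function and its names are our own, purely for illustration):

```python
def static_fdm(total_bandwidth_hz, num_users, active_users):
    """Static FDM: split the band into equal, permanently assigned
    portions; the portions of idle users are simply wasted."""
    per_user = total_bandwidth_hz / num_users
    used = per_user * min(active_users, num_users)
    return per_user, used, total_bandwidth_hz - used

# The example from the text: 100 Hz shared statically by 50 users,
# of which only 40 are active.
per_user, used, wasted = static_fdm(100, 50, 40)
print(per_user)  # 2.0 Hz per user
print(wasted)    # 20.0 Hz sits idle
```

Running this for the 50-user example shows each user getting 2 Hz, with 20 Hz of the band idle whenever 10 users are inactive.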

Dynamic Channel Allocation

To overcome this problem, we can use dynamic channel allocation, in which the bandwidth is
not allocated to the device permanently.

 In dynamic channel allocation, the bandwidth is not allocated to the device


permanently; the frequency band is allocated to the device whenever required.
 When the device completes its communication on a channel, it is de-allocated, and the
same channel can be assigned to another device.
 In dynamic channel allocation, a channel is dynamically allocated between devices.
The figure below shows five assumptions for dynamic channel allocation.
As shown in the above figure, Independent Traffic, Single Channel, Observable Collisions,
Continuous or Slotted Time, and Carrier Sense or No Carrier Sense are the five assumptions of
the Dynamic channel allocation.

Independent Traffic, Single Channel, and Observable Collisions

The first three assumptions for dynamic channel allocation are independent traffic, single-
channel, and observable collisions. Let us understand all these one by one.

Independent Traffic: In this assumption, there are n independent devices controlled by the
program or user that generate frames for transmission. The expected number of frames
generated in the channel depends on the length of the interval and the rate of arrival of new
frames.
 After creating a frame, the device is blocked and does nothing until the frame is
successfully transmitted over a channel.
Single Channel: There is a single channel between devices for communication, in which all
devices can send and receive from it.
 In single-channel assumption, protocols are used to prioritize frames from less
important frames to more important frames.
 The channel transmits frames from one device to another according to the priority of
the frame.
Observable Collisions: A collision occurs when two frames traveling on a shared
channel overlap with each other. When a collision occurs, all devices can detect that it
happened on the channel, and the lost frames can be retransmitted after some time. No
errors other than those generated by collisions are assumed.

Continuous/Slotted Time and Carrier Sense/No Carrier Sense

The last two assumptions for dynamic channel allocation are continuous or slotted time and
carrier sense or no carrier sense. Both of them play major roles in the channel. Let us
understand one by one.

Continuous or Slotted Time: Time may be treated as continuous, in which case frame
transmission can begin at any instant. Alternatively, time may be slotted, i.e., divided into
discrete intervals known as slots; in that case, frame transmission must start at the
beginning of a slot.
 A slot containing 0 frames is idle, a slot containing exactly one frame is a successful
transmission, and a slot containing more than one frame is a collision.
Carrier Sense or No Carrier Sense: With carrier sense, a station checks whether the
channel is in use before transmitting. If the station finds the channel busy, it will not
broadcast any frame until the channel becomes idle. Without carrier sense, stations
cannot sense the channel before use, which can lead to frame collisions.

Concept of Dynamic Channel Assumptions

Now, we have understood the five assumptions of the Dynamic Channel Allocation Scheme.
Some assumptions can either be good or bad, and they also affect the performance of the
network. Let’s discuss the assumptions.

 We have seen that in the independent traffic assumption, frames are generated according
to the length of the interval and the arrival rate. In reality, this is not a good model of
network traffic, because the packets handed down by the network layer tend to arrive in
bursts rather than independently.
 The single-channel assumption is the heart of the dynamic channel model as no external
way to communicate exists. Since it uses protocols to prioritize the packets,
performance of the network increases.
 The collision assumption provides a way to detect collisions if frames collide and frames
need to be retransmitted.
 Slotted time can be used for better performance. Splitting the time into discrete
intervals requires the devices to be synchronized with each other.
 A network may or may not have carrier sensing. In wireless networks, not every device
is within radio range of every other device, so carrier sense may not work effectively.
 Even if there is no network conflict after sensing the channel, the receiver may receive
some frames incorrectly for different reasons. The data link layer protocol and other
higher layers provide reliability.
The diagram below explains the concept of dynamic channel allocation assumptions.
 As shown in the figure, all the stations are connected on a single channel and sense the
channel continuously.
 Stations sense the channel to see if the channel is busy or idle. If a station finds the
channel idle, it sends data to the channel. If a collision has occurred with data from
another station, the collision detection method will notify the station that the collision
occurred on the channel.
 When a station sends data to another station, a clock is synchronized, which helps the
sender and receiver identify the data.

Multiple access protocol- ALOHA,


CSMA, CSMA/CA and CSMA/CD
What is a multiple access protocol?
When a sender and receiver have a dedicated link to transmit data packets, data link
control is enough to handle the channel. Suppose, however, there is no dedicated path between
two devices. In that case, multiple stations access the channel and may transmit data over it
simultaneously, which can create collisions and crosstalk. Hence, a multiple access protocol
is required to reduce collisions and avoid crosstalk on the channel.

For example, suppose that there is a classroom full of students. When a teacher asks a
question, all the students (the stations) start answering at the same time (transferring data
simultaneously). Because everyone responds at once, the answers overlap and information is
lost. It is therefore the responsibility of the teacher (the multiple access protocol) to manage
the students and make them answer one at a time.

The types of multiple access protocol, subdivided into their different categories, are as
follows:
A. Random Access Protocol
In this protocol, all stations have equal priority to send data over the channel. In random
access protocol, no station depends on another station, nor does any station control
another. Depending on the channel's state (idle or busy), each station transmits its data
frame. However, if more than one station sends data over the channel at the same time,
there may be a collision or data conflict. Due to the collision, data frame packets may be
lost or corrupted, and hence may not be received correctly by the receiver.

Following are the different methods of random-access protocols for broadcasting frames on
the channel.

o Aloha
o CSMA
o CSMA/CD
o CSMA/CA

ALOHA Random Access Protocol


It is designed for wireless LANs (Local Area Networks) but can also be used on any shared
medium. Using this method, any station can transmit data across the network at any moment
a data frame is available for transmission.

Aloha Rules

1. Any station can transmit data to a channel at any time.


2. It does not require any carrier sensing.
3. Collisions may occur, and data frames may be lost, when multiple stations transmit at
the same time.
4. ALOHA relies on acknowledgments of the frames; there is no collision detection.
5. It requires retransmission of data after some random amount of time.

Pure Aloha

Whenever data is available for sending at a station, we can use pure ALOHA. In pure
ALOHA, each station transmits data to the channel without checking whether the channel is
idle, so collisions may occur and data frames can be lost. After transmitting a frame, the
station waits for the receiver's acknowledgment. If the acknowledgment does not arrive
within the specified time, the station assumes the frame has been lost or destroyed, waits
for a random amount of time called the backoff time (Tb), and then retransmits the frame.
It repeats this until the data is successfully delivered to the receiver.

1. The total vulnerable time of pure Aloha is 2 * Tfr.


2. Maximum throughput occurs when G = 1/ 2 that is 18.4%.
3. Successful transmission of data frame is S = G * e ^ - 2 G.

As we can see in the figure above, there are four stations accessing a shared channel and
transmitting data frames. Some frames collide because most stations send their frames at the
same time. Only two frames, frame 1.1 and frame 2.2, are successfully transmitted to the
receiver end; the other frames are lost or destroyed. Whenever two frames occupy the shared
channel simultaneously, a collision occurs and both suffer damage. Even if the first bit of a
new frame overlaps with only the last bit of a frame that is almost finished, both frames are
destroyed, and both stations must retransmit their frames.

Slotted Aloha

Slotted ALOHA is designed to improve on pure ALOHA's efficiency, because pure ALOHA
has a very high probability of frame collision. In slotted ALOHA, the shared channel is
divided into fixed time intervals called slots. If a station wants to send a frame on the shared
channel, the frame can only be sent at the beginning of a slot, and only one frame is allowed
per slot. A station that misses the beginning of a slot must wait until the beginning of the
next slot. However, a collision is still possible when two or more stations try to send frames
at the beginning of the same time slot.

1. Maximum throughput occurs in slotted ALOHA when G = 1, that is, 36.8%.
2. The probability of successfully transmitting a data frame in slotted ALOHA is S = G *
e ^ - G.
3. The total vulnerable time required in slotted ALOHA is Tfr.
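The two throughput formulas can be checked numerically; the sketch below (plain Python, function names are our own) evaluates S at the load G that maximizes each scheme:

```python
import math

def pure_aloha_throughput(G):
    """Pure ALOHA: a frame is vulnerable for two frame times (2 * Tfr),
    giving S = G * e^(-2G)."""
    return G * math.exp(-2 * G)

def slotted_aloha_throughput(G):
    """Slotted ALOHA: transmissions start only on slot boundaries,
    halving the vulnerable period and giving S = G * e^(-G)."""
    return G * math.exp(-G)

# Pure ALOHA peaks at G = 1/2, slotted ALOHA at G = 1.
print(round(pure_aloha_throughput(0.5), 3))    # 0.184 -> 18.4%
print(round(slotted_aloha_throughput(1.0), 3)) # 0.368 -> 36.8%
```

The halved vulnerable period is exactly why slotted ALOHA's peak throughput is double that of pure ALOHA.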

CSMA (Carrier Sense Multiple Access)


Carrier Sense Multiple Access is a media access protocol in which a station senses the traffic
on the channel (idle or busy) before transmitting data. If the channel is idle, the station can
send data on it; otherwise, it must wait until the channel becomes idle. This reduces the
chance of a collision on the transmission medium.

CSMA Access Modes

1-Persistent: In the 1-persistent mode of CSMA, each node first senses the shared
channel; if the channel is idle, it immediately sends the data. Otherwise, it keeps sensing the
channel continuously and broadcasts the frame unconditionally as soon as the channel
becomes idle.

Non-Persistent: In this access mode of CSMA, each node must sense the channel before
transmitting, and if the channel is idle, it immediately sends the data. Otherwise, the station
waits for a random time (rather than sensing continuously), and when the channel is then
found to be idle, it transmits the frame.

P-Persistent: It is a combination of the 1-persistent and non-persistent modes. In
P-persistent mode, each node senses the channel, and if the channel is idle, it sends a frame
with probability p. With probability q = 1 - p, it defers to the next time slot and repeats the
process.
O-Persistent: In the O-persistent method, a transmission order (priority) is assigned to each
station before transmission on the shared channel. When the channel is found to be idle,
each station waits for its turn in that order to transmit its data.
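The decision a station makes in one time slot of p-persistent CSMA can be sketched as follows (a simplified model; the function name and its string return values are illustrative only):

```python
import random

def p_persistent_slot(channel_idle, p=0.3):
    """One slot of p-persistent CSMA: on an idle channel, transmit
    with probability p; with probability q = 1 - p, defer to the
    next slot. On a busy channel, keep sensing."""
    if not channel_idle:
        return "keep sensing"
    if random.random() < p:
        return "transmit"
    return "defer to next slot"

random.seed(1)
print(p_persistent_slot(channel_idle=True))
print(p_persistent_slot(channel_idle=False))  # keep sensing
```

Setting p = 1 recovers 1-persistent behavior; smaller values of p spread contending stations across more slots and so reduce collisions under heavy load.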

CSMA/ CD
It is a carrier sense multiple access / collision detection network protocol for transmitting
data frames. The CSMA/CD protocol works within the medium access control layer. A station
first senses the shared channel and, if the channel is idle, transmits a frame while monitoring
the channel to check whether the transmission is successful. If the frame is delivered
successfully, the station can send the next frame. If a collision is detected, the station sends
a jam signal on the shared channel to abort the transmission. After that, it waits for a
random time before attempting to send the frame again.
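The random waiting time after a collision is usually computed with binary exponential backoff, as in classic Ethernet. A minimal sketch (it returns a slot count only; multiplying by the slot time gives the actual delay):

```python
import random

def backoff_slots(collision_count, max_exponent=10):
    """Binary exponential backoff: after the n-th consecutive
    collision, pick a random wait of 0 .. 2^min(n, 10) - 1 slot
    times. Doubling the range halves the chance of re-colliding."""
    k = min(collision_count, max_exponent)
    return random.randint(0, 2 ** k - 1)

# After the 3rd collision the station waits between 0 and 7 slots.
print(backoff_slots(3))
```

Capping the exponent (here at 10, as in Ethernet) keeps the maximum wait bounded even after many consecutive collisions.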

CSMA/ CA
It is a carrier sense multiple access / collision avoidance network protocol for transmitting
data frames. It, too, works with the medium access control layer. Because a station cannot
detect a collision directly while transmitting, the sender relies on acknowledgments: when a
data frame is sent on the channel, the receiver returns an acknowledgment if the frame
arrives intact. If the sender receives the acknowledgment, the data frame has been
successfully transmitted. If no acknowledgment arrives, the sender assumes the frames
collided on the shared channel and the transmission must be repeated.

Following are the methods used in the CSMA/ CA to avoid the collision:

Interframe space: In this method, the station waits for the channel to become idle, and
even when it finds the channel idle, it does not send the data immediately. Instead, it waits
for some time; this time period is called the interframe space, or IFS. The IFS duration is
also often used to define the priority of a station.

Contention window: In the contention window method, the total time is divided into
slots. When the sending station is ready to transmit a data frame, it chooses a random
number of slots as its wait time. If the channel turns busy during the wait, the station does
not restart the entire process; it merely pauses the timer and resumes it when the channel
becomes idle again, sending the data once the timer expires.

Acknowledgment: In the acknowledgment method, the sender station retransmits the
data frame on the shared channel if the acknowledgment is not received before its timer
expires.

Collision free protocols


In computer networks, when more than one station tries to transmit simultaneously via a
shared channel, the transmitted data is garbled. This event is called a collision. The Medium
Access Control (MAC) layer of the OSI model is responsible for handling collisions of frames.
Collision-free protocols are devised so that collisions do not occur. Protocols like CSMA/CD
and CSMA/CA nullify the possibility of collisions once the transmission channel is acquired
by a station. However, collisions can still occur during the contention period if more than
one station starts to transmit at the same time. Collision-free protocols resolve contention
without collisions, so the possibility of collision is eliminated entirely.
Types of Collision-free Protocols
Bit-map Protocol
In the bit-map protocol, the contention period is divided into N slots, where N is the total number
of stations sharing the channel. If a station has a frame to send, it sets the corresponding bit in
the slot. So, before transmission, each station knows whether the other stations want to
transmit. Collisions are avoided by mutual agreement among the contending stations on who
gets the channel.
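One contention period of the bit-map protocol can be modeled directly: each station sets its bit, and all stations then agree on the transmission order (a sketch with made-up function names):

```python
def bitmap_schedule(wants_to_send):
    """Bit-map protocol: station i sets bit i of the reservation map
    if it has a frame queued. All stations see the same map, so they
    agree that owners of set bits transmit in numerical order."""
    reservation = [1 if wants else 0 for wants in wants_to_send]
    return [i for i, bit in enumerate(reservation) if bit]

# Stations 1 and 3 (of stations 0..4) have frames to send.
print(bitmap_schedule([False, True, False, True, False]))  # [1, 3]
```

Because every station sees the same reservation map before any data is sent, the resulting schedule is collision-free by construction.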

Binary Countdown
This protocol overcomes the overhead of 1 bit per station of the bit-map protocol. Here,
binary addresses of equal length are assigned to the stations. For example, if there are 6
stations, they may be assigned the binary addresses 001, 010, 011, 100, 101 and 110. All
stations wanting to communicate broadcast their addresses bit by bit. The station with the
highest address gets the higher priority for transmitting.
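Binary countdown arbitration can be simulated by OR-ing the address bits the contenders broadcast, most significant bit first (a sketch; a real channel performs the OR physically):

```python
def binary_countdown(addresses, bits=3):
    """Contending stations broadcast their addresses MSB-first; the
    channel ORs the bits together. A station that sent a 0 but sees
    a 1 drops out, so the highest address always wins."""
    contenders = set(addresses)
    for bit in range(bits - 1, -1, -1):
        channel = 0
        for addr in contenders:
            channel |= (addr >> bit) & 1
        if channel:  # someone sent a 1: stations that sent 0 give up
            contenders = {a for a in contenders if (a >> bit) & 1}
    return contenders.pop()

# Stations 010, 100 and 101 contend; 101 (decimal 5) wins.
print(binary_countdown([0b010, 0b100, 0b101]))  # 5
```

The arbitration always selects the maximum address, which is what gives higher-numbered stations their higher priority.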

Limited Contention Protocols


These protocols combine the advantages of collision-based protocols and collision-free
protocols. Under light load, they behave like the ALOHA scheme. Under heavy load, they
behave like bit-map protocols.

Adaptive Tree Walk Protocol


In the adaptive tree walk protocol, the stations or nodes are arranged in the form of
a binary tree as follows:

Initially all nodes (A, B, …, G, H) are permitted to compete for the channel. If a node is
successful in acquiring the channel, it transmits its frame. In case of a collision, the nodes
are divided into two groups (A, B, C, D in one group and E, F, G, H in the other). Nodes
belonging to only one group at a time are then permitted to compete. This process continues
until a successful transmission occurs.
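The recursive probing described above can be sketched as a depth-first search over the station tree (representing the ready stations as a set is our own simplification):

```python
def tree_walk(ready, group):
    """Adaptive tree walk: enable the whole group for one slot. An
    empty or single-station slot resolves immediately; a collision
    (two or more ready stations) splits the group in half and probes
    each half in turn. Returns stations in transmission order."""
    active = [s for s in group if s in ready]
    if len(active) <= 1:
        return active                      # idle slot or clean transmission
    mid = len(group) // 2                  # collision: split and recurse
    return tree_walk(ready, group[:mid]) + tree_walk(ready, group[mid:])

stations = list("ABCDEFGH")
print(tree_walk({"B", "G"}, stations))  # ['B', 'G']
```

With B and G ready, the first probe of all eight stations collides, but the two half-groups each contain a single ready station and transmit cleanly.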
Wireless LANs
Wireless LANs (WLANs) are wireless computer networks that use high-frequency radio waves
instead of cables for connecting the devices within a limited area forming LAN (Local Area
Network). Users connected by wireless LANs can move around within this limited area such as
home, school, campus, office building, railway platform, etc.
Most WLANs are based upon the IEEE 802.11 standard, popularly known as WiFi.

Components of WLANs

The components of WLAN architecture as laid down in IEEE 802.11 are


 Stations (STA): Stations comprise all devices and equipment that are connected to
the wireless LAN. Each station has a wireless network interface controller. A station can
be of two types:
o Wireless Access Point (WAP or AP)
o Client
 Basic Service Set (BSS) : A basic service set is a group of stations communicating at
the physical layer level. BSS can be of two categories
o Infrastructure BSS
o Independent BSS
 Extended Service Set (ESS): It is a set of all connected BSS.
 Distribution System (DS) : It connects access points in ESS.
Types of WLANS

WLANs, as standardized by IEEE 802.11, operates in two basic modes, infrastructure, and ad
hoc mode.
 Infrastructure Mode : Mobile devices or clients connect to an access point (AP) that in
turn connects via a bridge to the LAN or Internet. The client transmits frames to other
clients via the AP.
 Ad Hoc Mode : Clients transmit frames directly to each other in a peer-to-peer fashion.
Advantages of WLANs

 They provide clutter-free homes, offices and other networked places.


 The LANs are scalable in nature, i.e. devices may be added or removed from the
network at greater ease than wired LANs.
 The system is portable within the network coverage. Access to the network is not
bounded by the length of the cables.
 Installation and setup are much easier than wired counterparts.
 The equipment and setup costs are reduced.
Disadvantages of WLANs

 Since radio waves are used for communications, the signals are noisier with more
interference from nearby systems.
 Greater care is needed for encrypting information. Also, they are more prone to errors.
So, they require greater bandwidth than the wired LANs.
 WLANs are slower than wired LANs.

IEEE 802.11 Architecture and Protocol Stack


IEEE 802.11 standard, popularly known as WiFi, lays down the architecture and specifications
of wireless LANs (WLANs). WiFi or WLAN uses high-frequency radio waves instead of cables for
connecting the devices in LAN. Users connected by WLANs can move around within the area of
network coverage.

IEEE 802.11 Architecture

The components of an IEEE 802.11 architecture are as follows:

 Stations (STA) Stations comprise all devices and equipment that are connected to the
wireless LAN. A station can be of two types
o Wireless Access Point (WAP) WAPs or simply access points (AP) are generally wireless
routers that form the base stations or access.
o Client. Clients are workstations, computers, laptops, printers, smartphones, etc.

 Each station has a wireless network interface controller.


 Basic Service Set (BSS) A basic service set is a group of stations communicating at
the physical layer level. BSS can be of two categories depending upon the mode of operation
o Infrastructure BSS Here, the devices communicate with other devices through access
points.
o Independent BSS Here, the devices communicate in a peer-to-peer basis in an ad hoc
manner.
 Extended Service Set (ESS) It is a set of all connected BSS.
 Distribution System (DS) It connects access points in ESS.

Frame Format of IEEE 802.11


The main fields of a frame of wireless LANs as laid down by IEEE 802.11 are

 Frame Control It is a 2-byte field at the start of the frame, composed of 11 subfields. It
contains control information of the frame.
 Duration It is a 2-byte field that specifies the time period for which the frame and its
acknowledgment occupy the channel.
 Address fields There are three 6-byte address fields containing addresses of source,
immediate destination, and final endpoint respectively.
 Sequence It is a 2-byte field that stores the frame numbers.
 Data This is a variable-sized field that carries the data from the upper layers. The maximum size
of the data field is 2312 bytes.
 Check Sequence It is a 4-byte field containing error detection information.
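Summing the fixed-size fields listed above gives the per-frame overhead of this simplified 802.11 data frame (sizes in bytes, taken straight from the description):

```python
# Fixed-size fields of the simplified IEEE 802.11 data frame
# described above (all sizes in bytes).
FIELDS = {
    "frame_control": 2,
    "duration": 2,
    "addresses": 3 * 6,   # source, immediate destination, final endpoint
    "sequence": 2,
    "check_sequence": 4,
}

overhead = sum(FIELDS.values())
print(overhead)          # 28 bytes of fixed overhead per frame
print(overhead + 2312)   # 2340 bytes for a frame with a full data field
```

So every frame carries 28 bytes of header and trailer regardless of how little data it holds, which is one reason very small frames use the channel inefficiently.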
Physical layer

The physical layer is the first and lowest layer from the bottom of the 7-layered OSI model and
delivers security to hardware. This layer is in charge of data transmission over the physical
medium. It is the most complex layer in the OSI model.

The physical layer converts the data frame received from the data link layer into bits, i.e., in
terms of ones and zeros. It maintains the data quality by implementing the required protocols
on different network modes and maintaining the bit rate through data transfer using a wired or
wireless medium.

Attributes of the physical layer:

The physical layer has several attributes that are implemented in the OSI model:
1. Signals: The data is first converted to a signal for efficient data transmission. There are two
kinds of signals:

o Analog Signals: These signals are continuous waveforms in nature and are
represented by continuous electromagnetic waves for the transmission of data.
o Digital Signals: These signals are discrete in nature and represent network
pulses and digital data from the upper layers.
2. Transmission media: Data is carried from source to destination with the help of
transmission media. There are two sorts of transmission media:

o Wired Media: The connection is established with the help of cables. For example, fiber
optic cables, coaxial cables, and twisted pair cables.
o Wireless Media: The connection is established using a wireless communication
network. For example, Wi-Fi, Bluetooth, etc.
3. Data Flow: It describes the rate of data flow and the transmission time frame. The factors
affecting the data flow are as follows:

o Encoding: Encoding data for transmission on the channel.


o Error-Rate: Receiving erroneous data due to noise in transmission.
o Bandwidth: The rate of transmission of data in the channel.

4. Transmission mode: It describes the direction of the data flow. Data can be transmitted in
three sorts of transmission modes as follows:

o Simplex mode: This mode of communication is a one-way communication where a


device can only send data. Examples are a mouse, keyboard, etc.
o Half-duplex mode: This mode of communication supports two-way communication,
but in only one direction at a time, i.e., a device can either transmit or receive at any
given moment. An example is a walkie-talkie.
o Full-duplex mode: This mode of communication supports two-way communication,
i.e., the device can send and receive data at the same time. An example is cellular
communication.
5. Noise in transmission: Transmitted data can get corrupted or damaged during data
transmission due to many reasons. Some of the reasons are mentioned below:

o Attenuation: It is a gradual deterioration of the network signal on the communication


channel.
o Dispersion: In the case of Dispersion, the data is dispersed and overlapped during
transmission, which leads to the loss of the original data.
o Data Delay: The transmitted data reaches the destination system outside the specified
frame time.
The physical layer performs various functions and
services:

o It transfers data bit by bit or symbol by symbol.


o It performs bit synchronization: the sender and receiver are kept in step at the bit level,
typically by providing a clock, so that bits are neither missed nor duplicated and do not
overlap during transmission.
o Bit rate control defines how many bits per second can be transmitted, i.e., the number
of bits sent per second.
o The physical layer is responsible for knowing the arrangements made between devices
in networks called physical topologies, such as mesh, ring, bus, and star.
o The transmission mode in which data is transmitted, and there are three modes of
transmitting data: full-duplex, half-duplex, and simplex.
o It is responsible for point-to-multipoint, point-to-point, or multipoint line configurations.
o It is responsible for flow control and start-stop signaling in asynchronous serial
communication.
o It provides bit-interleaving and another channel coding.
o It is responsible for serial or parallel communication.
o It provides a standardized interface for physical transmission media, including electrical
specifications for transmission line signal levels, mechanical specifications for electrical
cables and connectors, radio interfaces, and wireless IR communication links, IR
specifications.
o The physical layer is responsible for modulation, i.e., the conversion of information
into transmittable form by impressing the data onto an optical or electrical carrier
signal (or onto radio waves).
o This layer is responsible for circuit switching.
o This layer is concerned with auto-negotiation. Signals are mainly of two sorts, digital
signals & analog signals. The physical layer decides which signal will be used to transfer
the data from one point to another.
o It also helps avoid collisions between data flows in the network, since data corrupted
on the physical medium cannot be recovered at this layer.
o It is responsible for the translation of data received from the data link layer for further
transmission.
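Bit-rate control, one of the functions above, determines how long a frame occupies the medium; the relation is simply bits divided by bits per second (a small illustrative helper):

```python
def transmission_time(frame_bits, bit_rate_bps):
    """Time the physical layer needs to put a frame onto the medium:
    number of bits divided by the bit rate."""
    return frame_bits / bit_rate_bps

# A 1500-byte frame on a 10 Mbit/s link occupies it for 1.2 ms.
print(transmission_time(1500 * 8, 10_000_000))  # 0.0012
```
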

Physical Topology:

Physical topology refers to the specification or structure of the connections of the network
between the devices where the transmission will happen. There are four types of topologies,
which are as follows:

Star Topology:
Star topology is a sort of network topology in which each node or device in the network is
individually joined to a central node, which can be a switch or a hub. This topology looks like a
star, due to which this topology is called star topology.
A hub does not route data; it simply retransmits the data to every device connected to it. The
advantage of this topology is that if one cable fails, only the device connected to that cable is
affected, not the others.

Bus Topology:
Bus topology comprises a single communication line or cable that is connected to each node.
The backbone of this network is the central cable, and each node can communicate with other
devices through the central cable.

The signal travels from one terminator to the other end of the wire. The terminator
absorbs the signal once it reaches the end of the wire to avoid signal bounce. Each computer
communicates independently with other computers in what is called a peer-to-peer network.
Each computer has a unique address, so if a message is to be sent to a specific computer, the
device can communicate directly with that computer.
The advantage of bus topology is that a failure in one device will not affect other devices. The
bus topology is not expensive to build because it uses a single cable, and it works well for small
networks.

Ring Topology:
In a ring topology, the devices are connected in the form of a ring so that each device has two
neighbors for communication. Data moves around the ring in one direction.

As you can see below, all four devices are connected to each other in the form of a ring. Each
device has two neighbors. Node 2 and Node 4 are neighbors of Node 1; similarly, Node 1 and
Node 3 are neighbors of Node 2, and so on.

An advantage of ring topology is that devices are easy to add or remove: adding another device
to the ring requires only one additional cable, and a device can be removed by rejoining the two wires.

Mesh Topology:
In a mesh topology, each system is directly joined to every other system. The advantage of
mesh topology is that there will be no traffic issues as each device has a dedicated
communication line. If one system is not functioning, it will not affect other devices. It provides
more security or privacy.
The drawback of mesh topology is that it is expensive and more complex than other topologies.
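The cost difference between the topologies above can be made concrete by counting the point-to-point cables each one needs for n devices. The sketch below is illustrative (the function name `links_required` is not from the source); it assumes the star's central hub is an extra node, so each of the n devices needs one cable to it.

```python
# Hypothetical helper: number of point-to-point links each topology needs
# for n devices (star assumes a separate central hub/switch node).
def links_required(topology: str, n: int) -> int:
    if topology == "mesh":
        return n * (n - 1) // 2   # every pair of devices directly connected
    if topology == "star":
        return n                  # one cable per device to the central hub
    if topology == "ring":
        return n                  # each device linked to its next neighbor
    if topology == "bus":
        return 1                  # a single shared backbone cable
    raise ValueError(f"unknown topology: {topology}")

print(links_required("mesh", 10))  # 45 cables -- why mesh is expensive
print(links_required("star", 10))  # 10 cables
```

The quadratic growth of the mesh count (n(n-1)/2) is exactly why full mesh is rarely built beyond small core networks, while star and ring grow only linearly.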

Importance of the physical layer:
o Without proper data conversion at the physical level, the network cannot function.
o The physical layer is responsible for maintaining communication between the hardware
and the network mode.
o It handles the data flow rate of the data to be transmitted along with the timeframe of
the transmitted data.

Sublayers of the Physical Layer

In computer networks, the physical layer, which deals with the physical transmission of data, is
sometimes divided into two sublayers: the Physical Medium Dependent (PMD) sublayer and the
Physical Layer Convergence Procedure (PLCP) sublayer.

1. Physical Medium Dependent (PMD) Sublayer:

This sublayer focuses on the physical characteristics of the transmission medium, such as the
cable, optical fiber, or wireless signal, and defines the details of transmitting and receiving
individual bits on that medium.

Responsibilities: Bit timing, signal encoding, interaction with the physical medium, and
defining the properties of the cable, optical fiber, or wire itself.
Examples: Specifications for Fast Ethernet, Gigabit Ethernet, and 10 Gigabit Ethernet.

2. Physical Layer Convergence Procedure (PLCP) Sublayer:

This sublayer handles the transmission and reception of data frames, ensuring that the data is
properly formatted and transmitted across the physical medium.
Responsibilities: Encoding and decoding data into suitable electrical/optical waveforms
on the communication medium, as well as performance monitoring and payload rate matching
of the different transport formats used at this layer.

Frame Structure of physical layer
In computer networks, the physical layer's frame structure involves converting data received
from the data link layer into raw bits (0s and 1s) for transmission, and then managing the
physical medium and bit synchronization.

 Role of the Physical Layer:
The physical layer, the lowest layer in the OSI model, is responsible for transmitting data over
the physical medium (e.g., cables, wireless signals).
 Framing:
The data link layer passes data frames to the physical layer, which then converts these frames
into a stream of bits.
 Bit Synchronization:
The physical layer ensures that the bits are synchronized at both the sender and receiver
ends, meaning that the bits are transmitted and received at the same rate.
 Modulation and Demodulation:
The physical layer also handles the modulation of data into signals suitable for transmission and
the demodulation of signals back into data at the receiver.
 Physical Medium:
The physical layer deals with the physical characteristics of the transmission medium, such as
the type of cable, frequency, and signal strength.
Examples of Physical Layer Protocols:
Some examples of physical layer protocols include Ethernet, Wi-Fi, and various modem
protocols.
 Physical layer is responsible for:
 Data transmission: Transmitting raw bits over the communication channel.
 Physical medium: Dealing with the communication medium used for transmission.
 Modulation: Converting data into signals suitable for transmission.
 Demodulation: Converting signals back into data at the receiver.
 Bit synchronization: Ensuring that the bits are synchronized at both the sender and
receiver ends.
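Modulation/encoding and bit synchronization go hand in hand: some line codes guarantee a signal transition in the middle of every bit, which lets the receiver recover the sender's clock from the signal itself. The sketch below illustrates Manchester encoding, the line code used by classic 10 Mbps Ethernet; it is a simplified illustration, not any standard's reference encoder (the function name is ours), using the IEEE 802.3 convention of 0 → high-then-low and 1 → low-then-high.

```python
def manchester_encode(bits):
    """Map each bit to two half-bit signal levels (IEEE 802.3 convention:
    0 -> high-then-low, 1 -> low-then-high). The guaranteed mid-bit
    transition is what lets the receiver stay synchronized."""
    signal = []
    for b in bits:
        signal += [0, 1] if b else [1, 0]  # 1 = high level, 0 = low level
    return signal

print(manchester_encode([1, 0, 1]))  # [0, 1, 1, 0, 0, 1]
```

The price of this self-clocking property is that the signal changes level up to twice per bit, so the channel must support double the raw bit rate.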
Data link layer switching
o When a user accesses the internet or another computer network outside their
immediate location, messages are sent through the network of transmission media. This
technique of transferring the information from one computer network to another
network is known as switching.
o Switching in a computer network is achieved by using switches. A switch is a small
hardware device used to join multiple computers together within one local area
network (LAN).
o Network switches operate at layer 2 (Data link layer) in the OSI model.
o Switching is transparent to the user and does not require any configuration in the home
network.
o Switches are used to forward the packets based on MAC addresses.
o A Switch is used to transfer the data only to the device that has been addressed. It
verifies the destination address to route the packet appropriately.
o It is operated in full duplex mode.
o Packet collision is minimum as it directly communicates between source and
destination.
o It does not broadcast the message to every device, which conserves bandwidth.
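The forwarding behavior described above can be sketched as a "learning switch": the switch records which port each source MAC address was seen on, forwards to the known port when the destination is in its table, and floods to all other ports only when the destination is unknown. This is a minimal illustration of the idea; the class and method names (`LearningSwitch`, `handle_frame`) are ours, not a real switch API.

```python
class LearningSwitch:
    def __init__(self, num_ports):
        self.num_ports = num_ports
        self.mac_table = {}  # MAC address -> port it was last seen on

    def handle_frame(self, src_mac, dst_mac, in_port):
        self.mac_table[src_mac] = in_port          # learn the sender's port
        if dst_mac in self.mac_table:
            return [self.mac_table[dst_mac]]       # forward to one known port
        # unknown destination: flood out every port except the ingress port
        return [p for p in range(self.num_ports) if p != in_port]

sw = LearningSwitch(4)
print(sw.handle_frame("aa", "bb", 0))  # "bb" unknown -> flood to [1, 2, 3]
print(sw.handle_frame("bb", "aa", 2))  # "aa" was learned on port 0 -> [0]
```

Real switches add aging (stale table entries expire) and spanning-tree logic to avoid flooding loops, but the learn-then-forward core is the same.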

Types of Network Switching
o Network switching has developed into numerous types, each catering to specific
requirements and conditions.

The primary kinds are discussed below:

o Circuit Switching: Circuit switching, used in traditional telephone networks, establishes
a dedicated communication route between devices for the duration of their conversation.
While effective, it has limitations in terms of scalability and overall performance.
o Packet Switching: Packet switching, in contrast to circuit switching, breaks data down
into packets, which are transmitted independently across the network. This method,
employed by the internet, allows for more efficient use of bandwidth and superior
scalability.
o Message Switching: Message switching involves the whole message being sent from
source to destination as a single unit. It was an early form of data transmission in
computer networks.
o Virtual Circuit Switching: Combining features of both circuit and packet switching,
virtual circuit switching establishes a dedicated path for the duration of a communication
session, just like circuit switching; however, it uses packet-like transmission to maintain
overall performance.
o Ethernet Switching: Ethernet switching has become the fundamental form of network
switching in local area networks (LANs). It operates at Layer 2 of the OSI model, using
MAC addresses to forward frames only to the intended recipient.
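The core idea behind packet switching can be sketched in a few lines: a message is broken into fixed-size packets, each tagged with a sequence number so the packets can travel independently (possibly arriving out of order) and still be reassembled correctly. This is an illustrative toy, not a real protocol; the field and function names (`seq`, `payload`, `packetize`, `reassemble`) are ours.

```python
def packetize(message: bytes, payload_size: int = 4):
    """Split a message into numbered packets of at most payload_size bytes."""
    return [
        {"seq": i, "payload": message[off:off + payload_size]}
        for i, off in enumerate(range(0, len(message), payload_size))
    ]

def reassemble(packets):
    """Rebuild the message even if packets arrived out of order."""
    return b"".join(p["payload"] for p in sorted(packets, key=lambda p: p["seq"]))

pkts = packetize(b"HELLO WORLD")
print(len(pkts))                          # 3 packets
print(reassemble(list(reversed(pkts))))   # b'HELLO WORLD' despite reversed arrival
```

Real packet headers carry far more (addresses, checksums, TTL), but the sequence number is what lets independently routed packets be put back in order at the receiver.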

Why is Switching Concept required?
Switching concept is developed because of the following reasons:

o Bandwidth: It is defined as the maximum transfer rate of a cable. It is a very critical and
expensive resource. Therefore, switching techniques are used for the effective
utilization of the bandwidth of a network.
o Collision: Collision is the effect that occurs when more than one device transmits the
message over the same physical media, and they collide with each other. To overcome
this problem, switching technology is implemented so that packets do not collide with
each other.

Advantages of Switching:

o Switch increases the bandwidth of the network.
o It reduces the workload on individual PCs as it sends the information to only that device
which has been addressed.
o It increases the overall performance of the network by reducing the traffic on the
network.
o There will be fewer frame collisions, as a switch creates a separate collision domain for
each connection.

Disadvantages of Switching:

o A Switch is more expensive than network bridges.
o Network connectivity issues are difficult to trace through a switch.
o Proper designing and configuration of the switch are required to handle multicast
packets.

Challenges and Future Trends
While network switching has come a long way, it continues to present challenges and
opportunities for development. Some key concerns and future trends include the following:

Security Concerns: As networks become more interconnected, security threats also evolve.
Switches play an important role in network protection, and improvements in encryption, access
control, and threat detection are crucial for safeguarding sensitive information.
5G Integration: The advent of 5G technology introduces new opportunities and challenges
for network switching. The increased speed and capacity demand switching infrastructures
that can support the growing number of connected devices and applications.

Edge Computing: The rise of edge computing, where processing happens closer to the
data source, requires network switching capable of managing distributed and decentralized
architectures efficiently.

Artificial Intelligence (AI) Integration: Integrating AI into network switches can enhance
automation, predictive maintenance, and adaptive network optimization. Machine-learning
algorithms can analyze network traffic patterns to anticipate and prevent potential
issues.

Quantum Networking: With the exploration of quantum computing and communication,
the landscape of network switching may witness revolutionary changes. Quantum
switches, harnessing the principles of quantum entanglement, could redefine the way
information is transmitted.
