CN Unit 2

Uploaded by ankitagarima13

UNIT 2

Framing in Data Link Layer


Frames are the units of digital transmission, particularly in computer networks and
telecommunications. Frames are comparable to the packets of energy called photons in the case
of light energy. Frames are also the units used in the Time Division Multiplexing process.

In a point-to-point connection between two computers or devices, data is transmitted over a
wire as a stream of bits. Framing is the data link layer function that gives this stream
structure: it provides a way for a sender to transmit a set of meaningful bits to the receiver.
Frames carry headers that contain control information such as addresses and error-checking codes.

The data link layer takes the message from the sender and delivers it to the receiver by
adding the sender’s and receiver’s addresses. The advantage of using frames is that data is
broken up into recoverable chunks that can easily be checked for corruption.

The process of dividing the data into frames and reassembling it is transparent to the user and
is handled by the data link layer.

Framing is an important aspect of data link layer protocol design because it allows the
transmission of data to be organized and controlled, ensuring that the data is delivered
accurately and efficiently.

Problems in Framing
• Detecting start of the frame: When a frame is transmitted, every station must be
able to detect it. The station detects frames by looking out for a special sequence of
bits that marks the beginning of the frame i.e. SFD (Starting Frame Delimiter).
• Detecting end of frame: When to stop reading the frame.
• Handling errors: Framing errors may occur due to noise or other transmission
errors, which can cause a station to misinterpret the frame. Therefore, error
detection and correction mechanisms, such as cyclic redundancy check (CRC), are
used to ensure the integrity of the frame.
• Framing overhead: Every frame has a header and a trailer that contains control
information such as source and destination address, error detection code, and other
protocol-related information. This overhead reduces the available bandwidth for
data transmission, especially for small-sized frames.
• Framing incompatibility: Different networking devices and protocols may use
different framing methods, which can lead to framing incompatibility issues.
• Framing synchronization: Stations must be synchronized with each other to
avoid collisions and ensure reliable communication. Synchronization requires that
all stations agree on the frame boundaries and timing, which can be challenging in
complex networks with many devices and varying traffic loads.
• Framing efficiency: Framing should be designed to minimize the amount of data
overhead while maximizing the available bandwidth for data transmission.
Inefficient framing methods can lead to lower network performance and higher
latency.
Types of framing
There are two types of framing:

1. Fixed-size: The frame is of fixed size and there is no need to provide boundaries to the
frame, the length of the frame itself acts as a delimiter.
• Drawback: It suffers from internal fragmentation if the data size is less than the
frame size
• Solution: Padding
2. Variable size: In this, there is a need to define the end of the frame as well as the beginning
of the next frame to distinguish. This can be done in two ways:
1. Length field – We can introduce a length field in the frame to indicate the length
of the frame. Used in Ethernet(802.3). The problem with this is that sometimes the
length field might get corrupted.
2. End Delimiter (ED) – We can introduce an ED(pattern) to indicate the end of the
frame. Used in Token Ring. The problem with this is that ED can occur in the data.
This can be solved by:
1. Character/Byte Stuffing: Used when frames consist of characters. If the data
contains the ED, then an extra byte is stuffed into the data to differentiate it from the ED.
Let ED = “$” –> if the data contains ‘$’ anywhere, it can be escaped using the ‘\O’
character.
–> if the data contains ‘\O$’, then use ‘\O\O\O$’ (‘$’ is escaped using ‘\O’ and ‘\O’ is
escaped using ‘\O’).

Disadvantage – It is a costly and obsolete method.


2. Bit Stuffing: Let ED = 01111 and data = 01111.
–> The sender stuffs a bit to break the pattern, i.e. it inserts a 0 here, giving data = 011101.
–> The receiver receives the frame.
–> If the received data contains 011101, the receiver removes the stuffed 0 and reads the data as 01111.
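The stuffing and destuffing steps above can be sketched in Python. This is a minimal illustration assuming the delimiter ED = 01111 from the example, so the sender stuffs a 0 after every run of three consecutive 1s; the function names are ours, not from any standard library.

```python
def bit_stuff(data: str) -> str:
    """Insert a '0' after every run of three consecutive '1's so the
    delimiter pattern 01111 can never appear inside the payload."""
    out, ones = [], 0
    for b in data:
        out.append(b)
        if b == '1':
            ones += 1
            if ones == 3:
                out.append('0')  # stuffed bit
                ones = 0
        else:
            ones = 0
    return ''.join(out)

def bit_destuff(data: str) -> str:
    """Reverse operation: drop the '0' that follows three consecutive '1's."""
    out, ones, i = [], 0, 0
    while i < len(data):
        b = data[i]
        out.append(b)
        if b == '1':
            ones += 1
            if ones == 3:
                i += 1  # skip the stuffed '0'
                ones = 0
        else:
            ones = 0
        i += 1
    return ''.join(out)

print(bit_stuff("01111"))               # 011101, as in the example above
print(bit_destuff(bit_stuff("01111")))  # 01111, original data recovered
```

The receiver applies the same run-counting rule, so no length field or reserved character is needed; only the stuffed bit is extra overhead.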
Framing in the Data Link Layer also presents some challenges, which include:
• Variable frame length: The length of frames can vary depending on the data being
transmitted, which can lead to inefficiencies in transmission. To address this issue,
protocols such as HDLC and PPP use a flag sequence to mark the start and end of each frame.
• Bit stuffing: Bit stuffing is a technique used to prevent data from being interpreted
as control characters by inserting extra bits into the data stream. However, bit stuffing
can lead to issues with synchronization and increase the overhead of the transmission.
• Synchronization: Synchronization is critical for ensuring that data frames are
transmitted and received correctly. However, synchronization can be challenging,
particularly in high-speed networks where frames are transmitted rapidly.
• Error detection: Data Link Layer protocols use various techniques to detect errors in
the transmitted data, such as checksums and CRCs. However, these techniques are not
foolproof and can miss some types of errors.
• Efficiency: Efficient use of available bandwidth is critical for ensuring that data is
transmitted quickly and reliably. However, the overhead associated with framing and error
detection can reduce the overall efficiency of the transmission.
Error Detection in Computer Networks
Error is a condition when the receiver’s information does not match the sender’s. Digital signals
suffer from noise during transmission that can introduce errors in the binary bits traveling from
sender to receiver. That means a 0 bit may change to 1 or a 1 bit may change to 0.
Data may get scrambled by noise or get corrupted whenever a message is transmitted. To
prevent such errors, error-detection codes are added as extra data to digital messages. This
helps in detecting any errors that may have occurred during message transmission.
Types of Errors
Single-Bit Error
A single-bit error refers to a type of data transmission error that occurs when one bit of a
transmitted data unit is altered during transmission, resulting in an incorrect or corrupted data
unit.


Multiple-Bit Error
A multiple-bit error is an error type that arises when more than one bit in a data transmission
is affected. Although multiple-bit errors are relatively rare when compared to single-bit errors,
they can still occur, particularly in high-noise or high-interference digital environments.

Burst Error
When several consecutive bits are flipped mistakenly in digital transmission, it creates a burst
error. This error causes a sequence of consecutive incorrect values.


Error Detection Methods


To detect errors, a common technique is to introduce redundancy bits that provide additional
information. Various techniques for error detection include:
• Simple Parity Check
• Two-Dimensional Parity Check
• Checksum
• Cyclic Redundancy Check (CRC)
Simple Parity Check
Simple-bit parity is a simple error detection method that involves adding an extra bit to a data
transmission. It works as:
• 1 is added to the block if it contains an odd number of 1’s, and
• 0 is added if it contains an even number of 1’s
This scheme makes the total number of 1’s even, that is why it is called even parity checking.
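The rule above can be written as a short sketch (the function names are ours): the sender appends one bit so the total number of 1's is even, and the receiver simply recounts.

```python
def even_parity_bit(block: str) -> str:
    """Return the parity bit that makes the total number of 1's even."""
    return '1' if block.count('1') % 2 == 1 else '0'

def parity_ok(codeword: str) -> bool:
    """Receiver side: accept only if the total number of 1's is even."""
    return codeword.count('1') % 2 == 0

# '1011001' has four 1's (even), so the parity bit is 0
codeword = "1011001" + even_parity_bit("1011001")
print(codeword)                    # 10110010

print(parity_ok(codeword))         # True: accepted
print(parity_ok("10111010"))       # False: a single flipped bit is detected
```

Note that flipping any two bits of the codeword leaves the count even, which is exactly the weakness listed under the disadvantages below.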

Advantages of Simple Parity Check


• Simple parity check can detect all single-bit errors.
• Simple parity check can detect any odd number of bit errors.
• Implementation: Simple Parity Check is easy to implement in both hardware and
software.
• Minimal Extra Data: Only one additional bit (the parity bit) is added per data
unit (e.g., per byte).
• Fast Error Detection: The process of calculating and checking the parity bit is
quick, which allows for rapid error detection without significant delay in data
processing or communication.
• Single-Bit Error Detection: It can effectively detect single-bit errors within a
data unit, providing a basic level of error detection for relatively low-error
environments.
Disadvantages of Simple Parity Check
• A simple parity check cannot detect an even number of bit errors.
Two-Dimensional Parity Check
In a two-dimensional parity check, parity bits are calculated for each row, which is
equivalent to a simple parity check. Parity check bits are also calculated for all columns,
and both are sent along with the data. At the receiving end, these are compared with the
parity bits calculated on the received data.
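As a sketch, assuming the data is arranged into equal-length binary rows (the helper name is ours): each row gets an even-parity bit, and a final parity row covers every column.

```python
def two_d_parity(rows):
    """Append an even-parity bit to each row, then append a column-parity row."""
    with_row_parity = [r + str(r.count('1') % 2) for r in rows]
    cols = zip(*with_row_parity)
    parity_row = ''.join(str(col.count('1') % 2) for col in cols)
    return with_row_parity + [parity_row]

data = ["1100", "1010", "0111"]
for row in two_d_parity(data):
    print(row)
# 11000
# 10100
# 01111
# 00011
```

A single flipped data bit now fails both its row check and its column check, and the intersection of the failing row and column pinpoints (and so corrects) it.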

Advantages of Two-Dimensional Parity Check


• Two-Dimensional Parity Check can detect and correct all single-bit errors.
• Two-Dimensional Parity Check can detect two- or three-bit errors that occur
anywhere in the matrix.
Disadvantages of Two-Dimensional Parity Check
• Two-Dimensional Parity Check cannot correct two- or three-bit errors; it can only
detect them.
• If there is an error in a parity bit itself, this scheme will not work.
Checksum
Checksum error detection is a method used to identify errors in transmitted data. The process
involves dividing the data into equally sized segments and using a 1’s complement to calculate
the sum of these segments. The calculated sum is then sent along with the data to the receiver.
At the receiver’s end, the same process is repeated and if all zeroes are obtained in the sum, it
means that the data is correct.
Checksum – Operation at Sender’s Side
• Firstly, the data is divided into k segments each of m bits.
• On the sender’s end, the segments are added using 1’s complement arithmetic to
get the sum. The sum is complemented to get the checksum.
• The checksum segment is sent along with the data segments.
Checksum – Operation at Receiver’s Side
• At the receiver’s end, all received segments are added using 1’s complement
arithmetic to get the sum. The sum is complemented.
• If the result is zero, the received data is accepted; otherwise discarded.

Cyclic Redundancy Check (CRC)


• Unlike the checksum scheme, which is based on addition, CRC is based on binary
division.
• In CRC, a sequence of redundant bits, called cyclic redundancy check bits, are
appended to the end of the data unit so that the resulting data unit becomes exactly
divisible by a second, predetermined binary number.
• At the destination, the incoming data unit is divided by the same number. If at this
step there is no remainder, the data unit is assumed to be correct and is therefore
accepted.
• A remainder indicates that the data unit has been damaged in transit and therefore
must be rejected.

Working
Given a dataword of length n and a divisor of length k:
Step 1: Append (k-1) zeros to the original message.
Step 2: Perform modulo-2 division by the divisor.
Step 3: The remainder of the division is the CRC.
Step 4: Codeword = original dataword followed by the CRC.
Note:
• The CRC must be k-1 bits long.
• Length of the codeword = n+k-1 bits.
Example: Let the data to be sent be 1010000 and the divisor, in polynomial form, be
x^3 + 1 (binary 1001).
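The four steps can be traced in code for this example (data 1010000, divisor 1001, so k = 4). This is a minimal sketch; the function name is ours.

```python
def mod2_remainder(message: str, divisor: str) -> str:
    """XOR-based (modulo-2) long division; returns the last k-1 bits."""
    bits = list(message)
    for i in range(len(bits) - len(divisor) + 1):
        if bits[i] == '1':
            for j, d in enumerate(divisor):
                bits[i + j] = '0' if bits[i + j] == d else '1'  # XOR
    return ''.join(bits[-(len(divisor) - 1):])

data, divisor = "1010000", "1001"            # divisor 1001 is x^3 + 1
crc = mod2_remainder(data + "000", divisor)  # Steps 1-3: append k-1 zeros, divide
codeword = data + crc                        # Step 4
print(crc)        # 011
print(codeword)   # 1010000011

# Receiver: dividing the codeword leaves no remainder if it is undamaged
print(mod2_remainder(codeword, divisor))     # 000
```

Appending the remainder makes the codeword exactly divisible by the divisor, which is what the receiver checks.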
Advantages of Error Detection
• Increased Data Reliability: Error detection ensures that the data transmitted over
the network is reliable, accurate, and free from errors. This ensures that the recipient
receives the same data that was transmitted by the sender.
• Improved Network Performance: Error detection mechanisms can help to
identify and isolate network issues that are causing errors. This can help to improve
the overall performance of the network and reduce downtime.
• Enhanced Data Security: Error detection can also help to ensure that the data
transmitted over the network is secure and has not been tampered with.
Disadvantages of Error Detection
• Overhead: Error detection requires additional resources and processing power,
which can lead to increased overhead on the network. This can result in slower
network performance and increased latency.
• False Positives: Error detection mechanisms can sometimes generate false
positives, which can result in unnecessary retransmission of data. This can further
increase the overhead on the network.
• Limited Error Correction: Error detection can only identify errors but cannot
correct them. This means that the recipient must rely on the sender to retransmit the
data, which can lead to further delays and increased network overhead.

Error Correction in Computer Networks


Once the errors are detected in the network, the deviated bits sequence needs to be replaced
with the right bit sequence so that the receiver can accept the data and process it. This method
is called Error Correction. We can correct the errors in the Network in two different ways which
are listed below:
• Forward Error Correction: In this Error Correction Scenario, the receiving end
is responsible for correcting the network error. There is no need for retransmission
of the data from the sender’s side.
• Backward Error Correction: The sender is responsible for retransmitting the data
if errors are detected by the receiver. The receiver signals the sender to resend the
corrupted data or the entire message to ensure accurate delivery.
One of the most widely used error correction methods is the ‘Hamming Code’, designed by
R.W. Hamming.
Hamming Code Error Correction
In this method, extra parity bits are appended to the message, which the receiver uses to
detect and correct single-bit errors. Consider the example below to understand this method
better.
Suppose the sender wants to transmit the message whose bit representation is ‘1011001.’ In
this message:
• Total number of data bits (d) = 7
• Total number of redundant bits (r) = 4 (r is the smallest number satisfying
2^r ≥ d + r + 1; here 2^4 = 16 ≥ 12)
• Thus, total bits (d+r) = 7 + 4 = 11
Also, by convention, the redundant bits are always placed at positions that are powers
of 2 (positions 1, 2, 4, and 8).
Therefore we have R1, R2, R3, and R4 as redundant bits which will be calculated according to
the following rules:
• R1 includes all the positions whose binary representation has 1 in their least
significant bit. Thus, R1 covers positions 1, 3, 5, 7, 9, 11.
• R2 includes all the positions whose binary representation has 1 in the second
position from the least significant bit. Thus, R2 covers positions 2,3,6,7,10,11.
• R3 includes all the positions whose binary representation has 1 in the third
position from the least significant bit. Hence, R3 covers positions 4, 5, 6, 7.
• R4 includes all the positions whose binary representation has 1 in the fourth
position from the least significant bit due to which R4 covers positions 8,9,10,11.
Now, we calculate the value of R1, R2, R3 and R4 as follows:
• Since the total number of 1s in all the bit positions corresponding to R1 is an even
number. R1 = 0.
• Since the total number of 1s in all the bit positions corresponding to R2 is an odd
number, R2= 1.
• Since the total number of 1s in all the bit positions corresponding to R3 is an odd
number, R3= 1.
• Since the total number of 1s in all the bit positions corresponding to R4 is even,
R4 = 0.
Therefore, the message to be transmitted becomes ‘10101001110.’
This message is transmitted to the receiver. Suppose bit 6 becomes corrupted and changes
to 1. Then the message becomes ‘10101101110.’ At the receiver’s end, the number of 1’s in
the bit positions covered by R1, R2, R3, and R4 is rechecked to locate the corrupted bit.
This is done in the following steps:
• For R1: bits 1, 3, 5, 7, 9, and 11 are checked. The number of 1’s in these bit
positions is 4 (even), so R1 = 0.
• For R2: bits 2, 3, 6, 7, 10, and 11 are checked. The number of 1’s in these bit
positions is 5 (odd), so R2 = 1.
• For R3: bits 4, 5, 6, and 7 are checked. The number of 1’s in these bit positions
is 3 (odd), so R3 = 1.
• For R4: bits 8, 9, 10, and 11 are checked. The number of 1’s in these bit positions
is 2 (even), so R4 = 0.
Reading the check results as R4 R3 R2 R1 gives the binary number 0110, whose decimal value
is 6. The failed checks thus point directly to bit 6 as the corrupted bit: it should have
been 0, and after flipping it back, the message is error-free.
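The whole walk-through (parity bits at positions 1, 2, 4, 8; flipping bit 6; reading the syndrome) can be reproduced as a sketch. As in the example, the data word 1011001 is placed with its leftmost bit at the highest position; all names here are ours.

```python
def hamming_encode(data: str):
    """Build a 1-indexed codeword with even-parity bits at power-of-two
    positions; data is placed leftmost bit at the highest position."""
    d = len(data)
    r = 0
    while 2 ** r < d + r + 1:          # smallest r with 2^r >= d + r + 1
        r += 1
    n = d + r
    code = [0] * (n + 1)               # index 0 unused
    bits = iter(int(b) for b in data)
    for pos in range(n, 0, -1):
        if pos & (pos - 1) != 0:       # not a power of two -> data bit
            code[pos] = next(bits)
    for i in range(r):                 # fill parity bits R1, R2, R4, R8
        p = 1 << i
        code[p] = sum(code[q] for q in range(1, n + 1)
                      if q & p and q != p) % 2
    return code

def syndrome(code):
    """Recompute each parity check; the failing checks spell out the
    position of a single corrupted bit (0 means no error found)."""
    n = len(code) - 1
    err, p = 0, 1
    while p <= n:
        if sum(code[q] for q in range(1, n + 1) if q & p) % 2:
            err |= p
        p <<= 1
    return err

code = hamming_encode("1011001")
code[6] ^= 1                # corrupt bit 6, as in the example
pos = syndrome(code)
print(pos)                  # 6 -> the corrupted position
code[pos] ^= 1              # flip it back: the message is error-free again
```

The syndrome works because each position participates in exactly the checks named by its binary representation, so the failing checks reassemble the position number.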

Flow Control in Data Link Layer


Flow control is a design issue at the Data Link Layer. It is a technique that ensures the
proper flow of data from sender to receiver. It is essential because a sender may transmit
data at a rate so fast that the receiver cannot receive and process it in time. This can
happen if the receiver has a heavy traffic load, or less processing power, compared to the
sender. Flow control is basically a technique that allows two stations working and
processing at different speeds to communicate with one another. Flow control in the Data
Link Layer restricts and coordinates the number of frames or amount of data the sender can
send before it must wait for an acknowledgement from the receiver. In other words, flow
control is a set of procedures that tells the sender how much data, or how many frames, it
can transmit before the data overwhelms the receiver. The receiving device has only a
limited speed and a limited amount of memory to store data. The receiver must therefore be
able to tell the sender to stop the transmission temporarily before its limit is reached.
It also needs a buffer, a large block of memory, for storing data or frames until they are
processed.
Flow control can also be understood as a speed-matching mechanism between two stations.

Approaches to Flow Control: Flow Control is classified into two categories:


• Feedback–based Flow Control: In this technique, the receiver sends feedback to the
sender: it acknowledges the data it has received, allows the sender to transmit more
data, or reports how the receiver is doing. This means the sender transmits further
data or frames only after it has received an acknowledgement from the receiver.
• Rate–based Flow Control: In this technique, when the sender transmits data faster
than the receiver can receive it, a built-in mechanism in the protocol limits the
overall rate at which the sender transmits data, without any feedback or
acknowledgement from the receiver.
Techniques of Flow Control in Data Link Layer : There are basically two types of
techniques being developed to control the flow of data

1. Stop-and-Wait Flow Control: This is the easiest and simplest form of flow control. In
this method, the message or data is broken down into multiple frames, and the receiver
indicates its readiness to receive each frame of data. Only when an acknowledgement is
received does the sender transmit the next frame. This process continues until the sender
transmits an EOT (End of Transmission) frame. Only one frame can be in transmission at a
time, which leads to inefficiency, i.e. low productivity, if the propagation delay is much
longer than the transmission delay: the sender sends a single frame, and the receiver
accepts one frame at a time, sending an acknowledgement (carrying the next expected frame
number) before the sender may send a new frame.
Advantages –
• This method is the easiest and simplest, and each frame is checked and
acknowledged properly.
• This method is also very accurate.
Disadvantages –
• This method is fairly slow.
• In this, only one packet or frame can be sent at a time.
• It is very inefficient and makes the transmission process very slow.
2. Sliding Window Flow Control: This method is needed where reliable, in-order delivery of
packets or frames is required, as in the data link layer. It is a point-to-point protocol
that assumes no other entity tries to communicate until the current data or frame transfer
is complete. In this method, the sender transmits several frames or packets before
receiving any acknowledgement. Both the sender and the receiver agree on the total number
of data frames after which an acknowledgement must be sent. The Data Link Layer uses this
method because it allows the sender to have more than one unacknowledged packet
“in flight” at a time, which increases and improves network throughput. The sender sends
multiple frames, and the receiver accepts them one by one, acknowledging frames as it
processes them so that new frames can be sent.
Advantages –
• It performs much better than stop-and-wait flow control.
• This method increases efficiency.
• Multiple frames can be sent one after another.
Disadvantages –
• The main issue is complexity at the sender and receiver due to the transferring of
multiple frames.
• The receiver might receive data frames or packets out of the sequence.
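A toy sketch of the sender-side window logic described above (the class and method names are ours, not any real protocol API): the sender may have at most `window` unacknowledged frames in flight, and a cumulative acknowledgement slides the window forward.

```python
class SlidingWindowSender:
    def __init__(self, window: int):
        self.window = window
        self.base = 0        # oldest unacknowledged frame
        self.next_seq = 0    # next frame to send

    def can_send(self) -> bool:
        # at most `window` frames may be unacknowledged at once
        return self.next_seq < self.base + self.window

    def send(self) -> int:
        assert self.can_send(), "window full: must wait for an ACK"
        seq = self.next_seq
        self.next_seq += 1
        return seq

    def ack(self, seq: int) -> None:
        # cumulative ACK: everything up to `seq` is acknowledged
        self.base = max(self.base, seq + 1)

s = SlidingWindowSender(window=3)
print([s.send() for _ in range(3)])  # [0, 1, 2] sent back-to-back
print(s.can_send())                  # False: window is full
s.ack(1)                             # frames 0 and 1 acknowledged
print(s.send())                      # 3: the window has slid forward
```

Contrast with stop-and-wait, which is simply the special case window = 1.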

Channel Allocation Problem in Computer Networks


Channel allocation is a process in which a single channel is divided and allotted to
multiple users to carry user-specific tasks. The number of users may vary each time the
process takes place.
If there are N users and the channel is divided into N equal-sized sub-channels, each user
is assigned one portion. If the number of users is small and doesn’t vary over time, then
Frequency Division Multiplexing can be used, as it is a simple and efficient channel
bandwidth-allocation technique.
Channel Allocation Schemes
Channel allocation problems can be solved by two schemes: Static Channel Allocation in
LANs and MANs, and Dynamic Channel Allocation.

These are explained as follows.


Static Channel Allocation in LANs and MANs
It is the classical or traditional approach of allocating a single channel among multiple
competing users using Frequency Division Multiplexing (FDM). If there are N users, the
frequency channel is divided into N equal-sized portions (bandwidth), and each user is
assigned one portion. Since each user has a private frequency band, there is no
interference between users.
However, it is not suitable when there is a large number of users with variable bandwidth
requirements, because dividing the channel into a fixed number of blocks is inefficient.
T = 1/(μC − λ)

T(FDM) = 1/(μ(C/N) − λ/N) = N·T
where,
T = mean time delay,
C = capacity of the channel (bps),
λ = arrival rate of frames (frames/sec),
1/μ = bits/frame,
N = number of sub-channels,
T(FDM) = mean time delay with Frequency Division Multiplexing
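Plugging illustrative numbers into the delay formulas above shows why static division hurts: the values here (a 100 Mbps channel, 10,000-bit frames, 5,000 frames/sec arriving, N = 10 sub-channels) are hypothetical, chosen only to expose the N-fold increase.

```python
C = 100e6            # channel capacity, bits/sec
bits_per_frame = 1e4
lam = 5000.0         # arrival rate lambda, frames/sec
N = 10               # number of FDM sub-channels

mu = 1 / bits_per_frame           # so mu*C = frames/sec the channel can serve
T = 1 / (mu * C - lam)            # mean delay on the undivided channel
T_fdm = 1 / (mu * (C / N) - lam / N)

print(T)       # 0.0002 s (200 microseconds)
print(T_fdm)   # 0.002 s  (2 milliseconds: exactly N times worse)
```

Each sub-channel serves one tenth of the traffic with one tenth of the capacity, so the mean delay grows by a factor of N, which is why static FDM performs poorly for bursty traffic.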

Dynamic Channel Allocation


In a dynamic channel allocation scheme, frequency bands are not permanently assigned to the
users. Instead, channels are allotted to users dynamically as needed, from a central pool. The
allocation is done considering several parameters so that transmission interference is
minimized.
This allocation scheme optimises bandwidth usage and results in faster transmissions.
Dynamic channel allocation is further divided into:
1. Centralised Allocation
2. Distributed Allocation
Some of the possible assumptions include:
• Station Model: Assumes that each of the N stations independently produces frames.
The probability of a frame being generated in an interval of length Δt is λΔt, where
λ is the constant arrival rate of new frames.
• Single Channel Assumption: All stations are equivalent and can send and receive on
the single shared channel.
• Collision Assumption: If two frames overlap in time, that is a collision. Any
collision is an error, and both frames must be retransmitted. Collisions are the only
possible errors.
Protocol Assumptions:
• N independent stations.
• A station is blocked until its generated frame is transmitted.
• The probability of a frame being generated in a period of length Δt is λΔt, where
λ is the arrival rate of frames.
• Only a single channel is available.
• Time can be either continuous or slotted.
• Carrier Sense: A station can sense whether the channel is busy before transmission.
• No Carrier Sense: A timeout is used to detect lost data.

Multiple Access Protocols in Computer Network


Multiple Access Protocols are methods used in computer networks to control how data is
transmitted when multiple devices are trying to communicate over the same network. These
protocols ensure that data packets are sent and received efficiently, without collisions or
interference. They help manage the network traffic so that all devices can share the
communication channel smoothly and effectively.
Who is Responsible for the Transmission of Data?
The Data Link Layer is responsible for the transmission of data between two nodes. Its main
functions are:
• Data Link Control
• Multiple Access Control


Data Link Control


The data link control is responsible for the reliable transmission of messages over
transmission channels by using techniques like framing, error control and flow control. For
Data link control refer to – Stop and Wait ARQ.
Multiple Access Control
If there is a dedicated link between the sender and the receiver, then the data link
control layer is sufficient; however, if there is no dedicated link, multiple stations can
access the channel simultaneously. Hence, multiple access protocols are required to
decrease collisions and avoid crosstalk. For example, in a classroom full of students,
when a teacher asks a question and all the students (or stations) start answering
simultaneously (send data at the same time), a lot of chaos is created (data overlaps or
is lost); it is then the job of the teacher (the multiple access protocol) to manage the
students and make them answer one at a time. Thus, protocols are required for sharing data
on non-dedicated channels. Multiple access protocols can be further subdivided as:
1. Random Access Protocol
In this, all stations have the same priority; that is, no station has more priority than
another. Any station can send data depending on the medium’s state (idle or busy). It has
two features:
• There is no fixed time for sending data
• There is no fixed sequence of stations sending data
The Random access protocols are further subdivided as:
ALOHA
It was designed for wireless LANs but is also applicable to any shared medium. In this,
multiple stations can transmit data at the same time, which can lead to collisions and
garbled data.


Pure ALOHA
When a station sends data, it waits for an acknowledgement. If the acknowledgement doesn’t
arrive within the allotted time, the station waits for a random amount of time called
back-off time (Tb) and re-sends the data. Since different stations wait for different
amounts of time, the probability of further collisions decreases.
Vulnerable Time = 2* Frame transmission time
Throughput = G exp{-2*G}
Maximum throughput = 0.184 for G=0.5


Slotted ALOHA
It is similar to pure ALOHA, except that time is divided into slots and the sending of
data is allowed only at the beginning of these slots. If a station misses the allowed
time, it must wait for the next slot. This reduces the probability of collision.
Vulnerable Time = Frame transmission time
Throughput = G exp{-G}
Maximum throughput = 0.368 for G=1
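The two throughput formulas can be checked numerically; a small sketch, where G is the average number of transmission attempts per frame time.

```python
import math

def pure_aloha(G: float) -> float:
    return G * math.exp(-2 * G)     # S = G * e^(-2G)

def slotted_aloha(G: float) -> float:
    return G * math.exp(-G)         # S = G * e^(-G)

print(round(pure_aloha(0.5), 3))    # 0.184, the maximum for pure ALOHA
print(round(slotted_aloha(1.0), 3)) # 0.368, the maximum for slotted ALOHA
```

Halving the vulnerable period (one frame time instead of two) is what doubles the peak throughput from 18.4% to 36.8%.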

CSMA
Carrier Sense Multiple Access ensures fewer collisions as the station is required to first sense
the medium (for idle or busy) before transmitting data. If it is idle then it sends data, otherwise
it waits till the channel becomes idle. However, there is still a chance of collision in CSMA due
to propagation delay. For example, if station A wants to send data, it will first sense the
medium. If it finds the channel idle, it will start sending data. However, by the time the first
bit of data is transmitted (delayed due to propagation delay) from station A, if station B
requests to send data and senses the medium it will also find it idle and will also send data.
This will result in collision of data from station A and B.

CSMA Access Modes


• 1-Persistent: The node senses the channel, if idle it sends the data, otherwise it
continuously keeps on checking the medium for being idle and transmits
unconditionally(with 1 probability) as soon as the channel gets idle.
• Non-Persistent: The node senses the channel, if idle it sends the data, otherwise
it checks the medium after a random amount of time (not continuously) and
transmits when found idle.
• P-Persistent: The node senses the medium; if idle, it sends the data with
probability p. If the data is not transmitted (probability (1-p)), it waits for some
time and checks the medium again; if the medium is then found idle, it again sends
with probability p. This repeats until the frame is sent. It is used in WiFi and
packet radio systems.
• O-Persistent: The superiority of nodes is decided beforehand, and transmission
occurs in that order. If the medium is idle, a node waits for its time slot to send data.
CSMA/CD
Carrier sense multiple access with collision detection. Stations can terminate transmission of
data if collision is detected. For more details refer – Efficiency of CSMA/CD.
CSMA/CA
Carrier sense multiple access with collision avoidance. The process of collision detection
involves the sender receiving acknowledgement signals. If there is just one signal (its
own), the data was sent successfully, but if there are two signals (its own and the one
with which it has collided), it means a collision has occurred. To distinguish between
these two cases, the collision must have a significant impact on the received signal. This
is the case in wired networks but not in wireless networks, so CSMA/CA is used for
wireless networks.
CSMA/CA Avoids Collision
• Interframe Space: Station waits for medium to become idle and if found idle it
does not immediately send data (to avoid collision due to propagation delay)
rather it waits for a period of time called Interframe space or IFS. After this time
it again checks the medium for being idle. The IFS duration depends on the
priority of station.
• Contention Window: It is the amount of time divided into slots. If the sender is
ready to send data, it chooses a random number of slots as wait time which doubles
every time medium is not found idle. If the medium is found busy it does not
restart the entire process, rather it restarts the timer when the channel is found idle
again.
• Acknowledgement: The sender re-transmits the data if acknowledgement is not
received before time-out.
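The Contention Window behaviour described above (a random wait, in slots, drawn from a window that doubles each time the medium is found busy) can be sketched as binary exponential backoff. The window limits here are illustrative, not taken from any particular standard.

```python
import random

def contention_slots(attempt: int, cw_min: int = 2, cw_max: int = 1024) -> int:
    """Pick a random wait, in slots, from a window that doubles per attempt."""
    cw = min(cw_max, cw_min * (2 ** attempt))
    return random.randrange(cw)    # a value in 0 .. cw-1

for attempt in range(4):
    print(attempt, contention_slots(attempt))
# the possible range of the wait doubles: 0-1, 0-3, 0-7, 0-15 slots
```

Doubling the window spreads retries of repeatedly colliding stations over ever larger intervals, so persistent contention resolves itself without any central coordination.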
2. Controlled Access
Controlled access protocols ensure that only one device uses the network at a time. Think of
it like taking turns in a conversation so everyone can speak without talking over each other.
In this, the data is sent by that station which is approved by all other stations. For further
details refer – Controlled Access Protocols.
3. Channelization
In this, the available bandwidth of the link is shared in time, frequency and code to multiple
stations to access channel simultaneously.
• Frequency Division Multiple Access (FDMA) – The available bandwidth is
divided into equal bands so that each station can be allocated its own band. Guard
bands are added between adjacent bands so that no two bands overlap, avoiding
crosstalk and noise.
• Time Division Multiple Access (TDMA) – In this, the bandwidth is shared
between multiple stations over time. To avoid collisions, time is divided into slots
and stations are allotted these slots to transmit data. However, this introduces a
synchronization overhead, as each station needs to know where its time slot begins;
this is resolved by adding synchronization bits to each slot. Another issue with
TDMA is propagation delay, which is resolved by adding guard times between slots.
For more details refer – Circuit Switching
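The TDMA slot-plus-guard-time structure can be sketched as follows; the slot and guard durations are illustrative values, not taken from any real standard:

```python
# Toy TDMA schedule: time is divided into fixed slots assigned round-robin,
# with a guard time after each slot absorbing propagation delay.
SLOT_MS = 5   # slot length in ms (illustrative, not from any standard)
GUARD_MS = 1  # guard time in ms

def slot_owner(t_ms, stations):
    """Return the station allowed to transmit at time t_ms, or None
    when t_ms falls inside a guard time."""
    period = SLOT_MS + GUARD_MS
    slot, offset = divmod(t_ms, period)
    if offset >= SLOT_MS:               # inside the guard time: nobody sends
        return None
    return stations[slot % len(stations)]

stations = ["A", "B", "C"]
# t = 0..4 ms -> "A", t = 5 ms -> guard, t = 6..10 ms -> "B", and so on.
```

Since each instant belongs to exactly one station (or to a guard gap), two stations can never transmit at the same time, which is precisely how TDMA avoids collisions.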
• Code Division Multiple Access (CDMA) – One channel carries all
transmissions simultaneously. There is neither division of bandwidth nor division
of time. For example, if many people in a room all speak at the same time, perfect
reception is still possible as long as each conversing pair uses a language no other
pair understands. Similarly, data from different stations can be transmitted
simultaneously, each encoded with a different code.
• Orthogonal Frequency Division Multiple Access (OFDMA) – In OFDMA the
available bandwidth is divided into many narrow subcarriers in order to increase
overall performance, and data is transmitted in parallel over these subcarriers.
It is widely used in 5G technology.
Advantages of OFDMA
• High data rates
• Good for multimedia traffic
• Increase in efficiency
Disadvantages of OFDMA
• Complex to implement
• High peak-to-average power ratio
• Spatial Division Multiple Access (SDMA) – SDMA uses multiple antennas at
the transmitter and receiver to separate the signals of multiple users that are
located in different spatial directions. This technique is commonly used in MIMO
(Multiple-Input, Multiple-Output) wireless communication systems.
Advantages of SDMA
• Uses the frequency band effectively
• Improves overall signal quality
• Increases the overall data rate
Disadvantages of SDMA
• Complex to implement
• Requires accurate information about the channel
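The CDMA idea of "different code languages" described above can be made concrete with a minimal two-station sketch using orthogonal chip sequences (the two-chip codes here are the simplest Walsh codes; a real system uses much longer sequences):

```python
# Two stations share the channel at the same time; each receiver recovers
# its bit by correlating the summed signal with the sender's chip code.
CODE_A = [1, 1]    # orthogonal chip sequences: their dot product is 0
CODE_B = [1, -1]

def spread(bit, code):
    """Map a data bit (0/1) to a +/-1 level and multiply by the chip code."""
    level = 1 if bit else -1
    return [level * c for c in code]

def despread(channel, code):
    """Correlate the received sum with one station's code to get its bit."""
    corr = sum(s * c for s, c in zip(channel, code))
    return 1 if corr > 0 else 0

# Both stations transmit simultaneously; the medium simply adds the signals.
a = spread(1, CODE_A)                       # station A sends bit 1
b = spread(0, CODE_B)                       # station B sends bit 0
channel = [x + y for x, y in zip(a, b)]     # superposed signal on the medium
assert despread(channel, CODE_A) == 1       # A's receiver recovers 1
assert despread(channel, CODE_B) == 0       # B's receiver recovers 0
```

Because the codes are orthogonal, each correlation cancels the other station's contribution, so both bits survive the simultaneous transmission without any division of time or bandwidth.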
Features of Multiple Access Protocols
• Contention-Based Access: Multiple access protocols are typically contention-
based, meaning that multiple devices compete for access to the communication
channel. This can lead to collisions if two or more devices transmit at the same
time, which can result in data loss and decreased network performance.
• Carrier Sense Multiple Access (CSMA): CSMA is a widely used multiple
access protocol in which devices listen for carrier signals on the communication
channel before transmitting. If a carrier signal is detected, the device waits for a
random amount of time before attempting to transmit to reduce the likelihood of
collisions.
• Collision Detection (CD): CD is a feature of some multiple access protocols
that allows devices to detect when a collision has occurred and take appropriate
action, such as backing off and retrying the transmission.
• Collision Avoidance (CA): CA is a feature of some multiple access protocols
that attempts to avoid collisions by assigning time slots to devices for
transmission.
• Token Passing: Token passing is a multiple access protocol in which devices
pass a special token between each other to gain access to the communication
channel. Devices can only transmit data when they hold the token, which ensures
that only one device can transmit at a time.
• Bandwidth Utilization: Multiple access protocols can affect the overall
bandwidth utilization of a network. For example, contention-based protocols may
result in lower bandwidth utilization due to collisions, while token passing
protocols may result in higher bandwidth utilization due to the controlled access
to the communication channel.