
Unit-2CN

UNIT II The Data Link Layer, Access Networks, and LANs


Data Link Layer Design Issues, Error Detection and Correction, Elementary Data Link Protocols,
Sliding Window Protocols, Introduction to the Link Layer, Error-Detection and - Correction
Techniques, Multiple Access Links and Protocols, Switched Local Area Networks Link
Virtualization: A Network as a Link Layer, Data Center Networking, Retrospective: A Day in the
Life of a Web Page Request
INTRODUCTION:
The data link layer is the second layer, sitting directly above the physical layer. It is responsible
for maintaining the data link between two hosts or nodes.
Before going through the design issues of the data link layer, we first look at its sub-layers and
their functions.
The data link layer is divided into two sub-layers:
1. Logical Link Control Sub-layer (LLC) –
Provides the logic for the data link: it controls the synchronization, flow
control, and error-checking functions of the data link layer. Its functions are –
 (i) Error recovery.
 (ii) Flow control.
 (iii) User addressing.

2. Media Access Control Sub-layer (MAC) –


It is the second sub-layer of the data link layer. It controls flow and multiplexing on the
transmission medium and governs the transmission of data packets. This layer is responsible
for sending data over the network interface card.
Functions are –
 (i) Control of access to the media.
 (ii) Unique addressing of stations directly connected to the LAN.
 (iii) Error detection.

Design issues with data link layer are:

1. Services provided to the network layer –


The data link layer acts as a service interface to the network layer. Its principal service is
transferring data from the network layer on the sending machine to the network layer on the
destination machine. This transfer takes place via the DLL (data link layer).
It provides three types of services:
1. Unacknowledged and connectionless services.
2. Acknowledged and connectionless services.
3. Acknowledged and connection-oriented services
Unacknowledged and connectionless services.
 Here the sender machine sends independent frames without any
acknowledgement from the receiver.
 There is no logical connection established.

D. S. R 1

Acknowledged and connectionless services.


 No logical connection is established between sender and receiver.
 Each frame is acknowledged by the receiver.
 If a frame does not reach the receiver within a specific time interval, it has to be sent again.
 This service is very useful in wireless systems.
Acknowledged and connection-oriented services
 A logical connection is established between sender and receiver before data is
transmitted.
 Each frame is numbered so the receiver can ensure that all frames have arrived, each
exactly once.
2. Frame synchronization –
The source machine sends data in the form of blocks called frames to the destination machine.
The starting and ending of each frame should be identified so that the frame can be recognized
by the destination machine.
3. Flow control –
Flow control prevents the sender from overwhelming the receiver: the source machine
must not send data frames at a rate faster than the destination machine can accept
them.
4. Error control –
Error control covers detecting and correcting errors and preventing duplicate frames. The
errors introduced during transmission from the source to the destination machine must be
detected and corrected at the destination machine.

ERROR DETECTION:
An error is a condition in which the receiver’s information does not match the sender’s. Digital signals
suffer from noise during transmission that can introduce errors in the binary bits traveling from
sender to receiver. That means a 0 bit may change to 1 or a 1 bit may change to 0.
Whenever a message is transmitted, the data may get scrambled by noise or otherwise corrupted
(error detection is implemented at either the data link layer or the transport layer of the OSI
model). To prevent such errors,
error-detection codes are added as extra data to digital messages. This helps in detecting any
errors that may have occurred during message transmission.
Types of Errors
Single-Bit Error
A single-bit error refers to a type of data transmission error that occurs when one bit (i.e., a single
binary digit) of a transmitted data unit is altered during transmission, resulting in an incorrect or
corrupted data unit.


Multiple-Bit Error
A multiple-bit error is an error type that arises when more than one bit in a data transmission is
affected. Although multiple-bit errors are relatively rare when compared to single-bit errors, they
can still occur, particularly in high-noise or high-interference digital environments.

Burst Error
When several consecutive bits are flipped mistakenly in digital transmission, it creates a burst
error. This error causes a sequence of consecutive incorrect values.

Error Detection Methods


To detect errors, a common technique is to introduce redundancy bits that provide additional
information. Various techniques for error detection include:
 Simple Parity Check
 Two-Dimensional Parity Check
 Checksum
 Cyclic Redundancy Check (CRC)
Simple Parity Check
Simple-bit parity is a simple error detection method that involves adding an extra bit to a data
transmission. It works as:
 a parity bit of 1 is added to the block if it contains an odd number of 1’s, and
 a parity bit of 0 is added if it contains an even number of 1’s.
This scheme makes the total number of 1’s even, that is why it is called even parity checking.
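As a minimal sketch (the function names are ours, not from any standard library), even-parity generation and checking can be written as:

```python
def parity_bit(bits):
    # even parity: append 1 if the count of 1's is odd, else 0
    return sum(bits) % 2

def add_parity(bits):
    # the resulting codeword has an even number of 1's overall
    return bits + [parity_bit(bits)]

def is_valid(codeword):
    # a valid even-parity codeword contains an even number of 1's
    return sum(codeword) % 2 == 0

data = [1, 0, 1, 0, 1, 0]        # three 1's, so the parity bit is 1
codeword = add_parity(data)      # -> [1, 0, 1, 0, 1, 0, 1]
```

Flipping any single bit of `codeword` makes `is_valid` return False, while flipping two bits leaves it True, which is exactly the weakness examined under the disadvantages.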


Advantages of Simple Parity Check


 Simple parity check can detect all single-bit errors.
 Simple parity check can detect errors only when an odd number of bits has been flipped.
 Implementation: Simple Parity Check is easy to implement in both hardware and
software.
 Minimal Extra Data: Only one additional bit (the parity bit) is added per data unit
(e.g., per byte).
 Fast Error Detection: The process of calculating and checking the parity bit is quick,
which allows for rapid error detection without significant delay in data processing or
communication.
 Single-Bit Error Detection: It can effectively detect single-bit errors within a data
unit, providing a basic level of error detection for relatively low-error environments.
Disadvantages of Simple Parity Check
 Single parity check is not able to detect errors in which an even number of bits is flipped.
 For example, suppose the data to be transmitted is 101010. The codeword
transmitted to the receiver is 1010101 (using even parity).
Assume that during transmission two bits of the codeword are flipped, giving
1111101.
On receiving the codeword, the receiver finds the number of ones to be even and hence
assumes no error, which is a wrong conclusion.
Two-Dimensional Parity Check
In a two-dimensional parity check, the data is arranged into a matrix and a parity bit is
calculated for each row, equivalent to a simple parity check. Parity bits are also calculated for
all columns, and both sets are sent along with the data. At the receiving end, these are compared
with the parity bits calculated on the received data.
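A small sketch of the row/column parity idea (an illustrative implementation with names of our choosing), including how a single-bit error is pinpointed at the receiver:

```python
def two_d_parity(matrix):
    # even-parity bit for every row and every column
    row_par = [sum(row) % 2 for row in matrix]
    col_par = [sum(col) % 2 for col in zip(*matrix)]
    return row_par, col_par

def locate_single_error(received, row_par, col_par):
    # a single flipped bit produces exactly one row mismatch and one
    # column mismatch; their intersection is the bad bit
    r_new, c_new = two_d_parity(received)
    bad_rows = [i for i, (a, b) in enumerate(zip(r_new, row_par)) if a != b]
    bad_cols = [j for j, (a, b) in enumerate(zip(c_new, col_par)) if a != b]
    if len(bad_rows) == 1 and len(bad_cols) == 1:
        return bad_rows[0], bad_cols[0]
    return None                  # no error, or more than one bit flipped

sent = [[1, 0, 1, 1],
        [0, 1, 0, 0],
        [1, 1, 1, 0]]
rp, cp = two_d_parity(sent)
received = [row[:] for row in sent]
received[1][2] ^= 1              # a single bit corrupted in transit
error_at = locate_single_error(received, rp, cp)   # -> (1, 2)
```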


Advantages of Two-Dimensional Parity Check


 Two-dimensional parity check can detect and correct all single-bit errors.
 Two-dimensional parity check can detect two- or three-bit errors that occur anywhere
in the matrix.
Disadvantages of Two-Dimensional Parity Check
 Two-dimensional parity check cannot correct two- or three-bit errors; it can only
detect them.
 If there is an error in a parity bit itself, this scheme may not work.
Checksum
Checksum error detection is a method used to identify errors in transmitted data. The process
involves dividing the data into equally sized segments and using a 1’s complement to calculate
the sum of these segments. The calculated sum is then sent along with the data to the receiver.
At the receiver’s end, the same process is repeated: if the complement of the resulting sum is all
zeros, the data is taken to be correct.
Checksum – Operation at Sender’s Side
 Firstly, the data is divided into k segments each of m bits.
 On the sender’s end, the segments are added using 1’s complement arithmetic to get
the sum. The sum is complemented to get the checksum.
 The checksum segment is sent along with the data segments.
Checksum – Operation at Receiver’s Side
 At the receiver’s end, all received segments are added using 1’s complement
arithmetic to get the sum. The sum is complemented.
 If the result is zero, the received data is accepted; otherwise discarded.
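The sender and receiver steps above can be sketched with 8-bit segments (the segment width and the names are our illustrative choices):

```python
def ones_complement_sum(segments, bits=8):
    # add the segments, wrapping any carry back into the low bits
    # (this is 1's complement addition)
    total = 0
    mask = (1 << bits) - 1
    for seg in segments:
        total += seg
        while total >> bits:
            total = (total & mask) + (total >> bits)
    return total

def checksum(segments, bits=8):
    # sender: complement of the 1's-complement sum of all segments
    mask = (1 << bits) - 1
    return (~ones_complement_sum(segments, bits)) & mask

data = [0b10011001, 0b11100010, 0b00100100, 0b10000100]
csum = checksum(data)                      # sent along with the data
# receiver: sum of data + checksum, complemented, must be zero
ok = (~ones_complement_sum(data + [csum]) & 0xFF) == 0
```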


Cyclic Redundancy Check (CRC)


 Unlike the checksum scheme, which is based on addition, CRC is based on binary
division.
 In CRC, a sequence of redundant bits, called cyclic redundancy check bits, are
appended to the end of the data unit so that the resulting data unit becomes exactly
divisible by a second, predetermined binary number.
 At the destination, the incoming data unit is divided by the same number. If at this
step there is no remainder, the data unit is assumed to be correct and is therefore
accepted.
 A remainder indicates that the data unit has been damaged in transit and therefore
must be rejected.

CRC Working
We are given a dataword of length n and a divisor of length k.
Step 1: Append (k-1) zeros to the original message.
Step 2: Perform modulo-2 division by the divisor.
Step 3: The remainder of the division is the CRC.
Step 4: Codeword = original data followed by the (k-1)-bit CRC.
Note:


 The CRC is k-1 bits long.
 The length of the codeword is n + k - 1 bits.
Example: Let the data to be sent be 1010000 and let the divisor, written as a polynomial, be
x^3 + 1 (binary 1001).
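Working through this example in code (a sketch of the modulo-2 division, not any standard library routine): the divisor x^3 + 1 is the bit string 1001, so k = 4 and three zeros are appended.

```python
def crc_remainder(data: str, divisor: str) -> str:
    # append k-1 zeros, then repeatedly XOR the divisor under each leading 1
    k = len(divisor)
    padded = list(data + "0" * (k - 1))
    for i in range(len(data)):
        if padded[i] == "1":
            for j in range(k):
                padded[i + j] = str(int(padded[i + j]) ^ int(divisor[j]))
    return "".join(padded[-(k - 1):])      # the remainder is the CRC

def crc_check(codeword: str, divisor: str) -> bool:
    # receiver divides the whole codeword; a zero remainder means no error
    k = len(divisor)
    padded = list(codeword)
    for i in range(len(codeword) - (k - 1)):
        if padded[i] == "1":
            for j in range(k):
                padded[i + j] = str(int(padded[i + j]) ^ int(divisor[j]))
    return "".join(padded[-(k - 1):]) == "0" * (k - 1)

crc = crc_remainder("1010000", "1001")     # -> "011"
codeword = "1010000" + crc                 # -> "1010000011"
```

Any single-bit flip in the codeword leaves a non-zero remainder at the receiver, so `crc_check` rejects it.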

Advantages of Error Detection


 Increased Data Reliability: Error detection ensures that the data transmitted over
the network is reliable, accurate, and free from errors. This ensures that the recipient
receives the same data that was transmitted by the sender.
 Improved Network Performance: Error detection mechanisms can help to identify
and isolate network issues that are causing errors. This can help to improve the overall
performance of the network and reduce downtime.
 Enhanced Data Security: Error detection can also help to ensure that the data
transmitted over the network is secure and has not been tampered with.
Disadvantages of Error Detection
 Overhead: Error detection requires additional resources and processing power,
which can lead to increased overhead on the network. This can result in slower
network performance and increased latency.
 False Positives: Error detection mechanisms can sometimes generate false positives,
which can result in unnecessary retransmission of data. This can further increase the
overhead on the network.
 Limited Error Correction: Error detection can only identify errors but cannot
correct them. This means that the recipient must rely on the sender to retransmit the
data, which can lead to further delays and increased network overhead.


ERROR CORRECTION:
Hamming Code
Hamming code is an error-correcting code used to ensure data accuracy during transmission or
storage. Hamming code detects and corrects the errors that can occur when the data is moved or
stored from the sender to the receiver. This simple and effective method helps improve the
reliability of communication systems and digital storage. It adds extra bits to the original data,
allowing the system to detect and correct single-bit errors. It is a technique developed by Richard
Hamming in the 1950s.

Redundant Bits

Redundant bits are extra binary bits that are generated and added to the information-carrying bits
of data transfer to ensure that no bits were lost during the data transfer. The number of redundant
bits can be calculated using the following formula:
2^r ≥ m + r + 1
where m is the number of bits in input data, and r is the number of redundant bits.
Suppose the number of data bits is 7. Trying r = 4 gives 2^4 = 16 ≥ 7 + 4 + 1 = 12 (while r = 3
fails, since 2^3 = 8 < 11). Thus, the number of redundant bits is 4.
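The same calculation as a small loop (a trivial helper of our own):

```python
def redundant_bits(m):
    # smallest r satisfying 2^r >= m + r + 1
    r = 0
    while 2 ** r < m + r + 1:
        r += 1
    return r

redundant_bits(7)    # -> 4, matching the example above
```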
Types of Parity Bits
A parity bit is a bit appended to a data of binary bits to ensure that the total number of 1’s in the
data is even or odd. Parity bits are used for error detection. There are two types of parity bits:
 Even Parity Bit: In the case of even parity, for a given set of bits, the number of 1’s
are counted. If that count is odd, the parity bit value is set to 1, making the total count
of occurrences of 1’s an even number. If the total number of 1’s in a given set of bits
is already even, the parity bit’s value is 0.
 Odd Parity Bit: In the case of odd parity, for a given set of bits, the number of 1’s
are counted. If that count is even, the parity bit value is set to 1, making the total
count of occurrences of 1’s an odd number. If the total number of 1’s in a given set
of bits is already odd, the parity bit’s value is 0.

Algorithm of Hamming Code

Hamming Code is simply the use of extra parity bits to allow the identification of an error.
Step 1: Write the bit positions starting from 1 in binary form (1, 10, 11, 100, etc).
Step 2: All the bit positions that are a power of 2 are marked as parity bits (1, 2, 4, 8, etc).
Step 3: All the other bit positions are marked as data bits.
Step 4: Each data bit is included in a unique set of parity bits, as determined by its bit position
in binary form:
 a. Parity bit 1 covers all the bit positions whose binary representation includes a 1
in the least significant position (1, 3, 5, 7, 9, 11, etc).
 b. Parity bit 2 covers all the bit positions whose binary representation includes a 1
in the second position from the least significant bit (2, 3, 6, 7, 10, 11, etc).
 c. Parity bit 4 covers all the bit positions whose binary representation includes a 1 in
the third position from the least significant bit (4–7, 12–15, 20–23, etc).


 d. Parity bit 8 covers all the bit positions whose binary representation includes a 1
in the fourth position from the least significant bit (8–15, 24–31, 40–47, etc).
 e. In general, each parity bit covers all bits where the bitwise AND of the parity
position and the bit position is non-zero.
Step 5: Since we use even parity, set a parity bit to 1 if the total number of ones in the
positions it checks is odd, and set it to 0 if that total is even.

Determining The Position of Redundant Bits


Redundancy bits are placed at positions that correspond to the power of 2. As in the above
example:
 The number of data bits = 7
 The number of redundant bits = 4
 The total number of bits = 7+4=>11
 The redundant bits are placed at positions corresponding to power of 2 that is 1, 2, 4,
and 8

 Suppose the data to be transmitted is 1011001 from sender to receiver, the bits will
be placed as follows:


Determining The Parity Bits According to Even Parity


 The R1 bit is calculated using a parity check over all the bit positions whose binary
representation includes a 1 in the least significant position. R1 covers bits 1, 3, 5, 7, 9, 11.

 To find the redundant bit R1, we check for even parity. Since the total number of 1’s
in all the bit positions covered by R1 is even, the value of R1
(the parity bit’s value) = 0.
 The R2 bit is calculated using a parity check over all the bit positions whose binary
representation includes a 1 in the second position from the least significant bit. R2 covers
bits 2, 3, 6, 7, 10, 11.

 To find the redundant bit R2, we check for even parity. Since the total number of 1’s
in all the bit positions covered by R2 is odd, the value of R2 (parity bit’s value) = 1.


 The R4 bit is calculated using a parity check over all the bit positions whose binary
representation includes a 1 in the third position from the least significant bit. R4 covers bits
4, 5, 6, 7.

 To find the redundant bit R4, we check for even parity. Since the total number of 1’s
in all the bit positions covered by R4 is odd, the value of R4 (parity bit’s value) = 1.
 The R8 bit is calculated using a parity check over all the bit positions whose binary
representation includes a 1 in the fourth position from the least significant bit. R8 covers
bits 8, 9, 10, 11.

 To find the redundant bit R8, we check for even parity. Since the total number of 1’s
in all the bit positions covered by R8 is even, the value of R8 (parity
bit’s value) = 0. Thus, the data transferred is the 11-bit codeword with R1 = 0, R2 = 1,
R4 = 1 and R8 = 0 placed at positions 1, 2, 4 and 8 (reading positions 11 down to 1:
10101001110):


Error Detection and Correction


Suppose in the above example the 6th bit is changed from 0 to 1 during data transmission.
Recomputing the parity checks then gives new parity values:

For all the parity bits we will check the number of 1’s in their respective bit positions.
 For R1 (bits 1, 3, 5, 7, 9, 11): the number of 1’s in these bit positions
is 4, which is even, so we get a 0.
 For R2 (bits 2, 3, 6, 7, 10, 11): the number of 1’s in these bit positions
is 5, which is odd, so we get a 1.
 For R4 (bits 4, 5, 6, 7): the number of 1’s in these bit positions is 3,
which is odd, so we get a 1.
 For R8 (bits 8, 9, 10, 11): the number of 1’s in these bit positions is 2,
which is even, so we get a 0.
 Reading R8 R4 R2 R1 gives the binary number 0110, whose decimal value is 6. Thus, bit 6
contains an error. To correct the error, the 6th bit is changed back from 1 to 0.
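The whole procedure (placing parity bits at the power-of-two positions, computing them with even parity, and reading the error position off the failed checks) can be sketched as follows. These are our own helper functions, using the same data 1011001 and the same position convention as the example above:

```python
def hamming_encode(data):
    # data: bits MSB first; parity bits go at positions 1, 2, 4, 8, ...
    m = len(data)
    r = 0
    while 2 ** r < m + r + 1:
        r += 1
    n = m + r
    code = [0] * (n + 1)              # 1-indexed, index 0 unused
    bits = iter(data)
    for pos in range(n, 0, -1):       # data MSB at the highest position
        if pos & (pos - 1):           # not a power of two: a data position
            code[pos] = next(bits)
    for i in range(r):
        p = 2 ** i
        # even parity over every position whose binary form has bit p set
        code[p] = sum(code[pos] for pos in range(1, n + 1)
                      if pos & p and pos != p) % 2
    return code[1:]                   # positions 1..n, low to high

def hamming_syndrome(code):
    # recompute each check; the failed checks add up to the error position
    n = len(code)
    syndrome, p = 0, 1
    while p <= n:
        if sum(code[pos - 1] for pos in range(1, n + 1) if pos & p) % 2:
            syndrome += p
        p <<= 1
    return syndrome                   # 0 means no single-bit error

code = hamming_encode([1, 0, 1, 1, 0, 0, 1])   # data 1011001
corrupted = code[:]
corrupted[5] ^= 1                              # flip the bit at position 6
error_pos = hamming_syndrome(corrupted)        # -> 6, as in the text
corrupted[error_pos - 1] ^= 1                  # flip it back: corrected
```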
Features of Hamming Code
 Error Detection and Correction: Hamming code is designed to detect and correct
single-bit errors that may occur during the transmission of data. This ensures that the
recipient receives the same data that was transmitted by the sender.
 Redundancy: Hamming code uses redundant bits to add additional information to
the data being transmitted. This redundancy allows the recipient to detect and correct
errors that may have occurred during transmission.
 Efficiency: Hamming code is a relatively simple and efficient error-correction
technique that does not require a lot of computational resources. This makes it ideal
for use in low-power and low-bandwidth communication networks.
 Widely Used: Hamming code is a widely used error-correction technique and is used
in a variety of applications, including telecommunications, computer networks, and
data storage systems.
 Single Error Correction: Hamming code is capable of correcting a single-bit error,
which makes it ideal for use in applications where errors are likely to occur due to
external factors such as electromagnetic interference.


 Limited Multiple Error Correction: Hamming code can only correct a limited
number of multiple errors. In applications where multiple errors are likely to occur,
more advanced error-correction techniques may be required.

Elementary data link layer protocols

Elementary Data Link protocols are classified into three categories, as given below −

 Protocol 1 − Unrestricted simplex protocol


 Protocol 2 − Simplex stop and wait protocol
 Protocol 3 − Simplex protocol for noisy channels.

Let us discuss each protocol one by one.

Unrestricted Simplex Protocol

Data transmission is carried out in one direction only. The transmitter (Tx) and receiver (Rx)
are always ready and the processing time can be ignored. In this protocol, infinite buffer space is
available, and no errors occur; that is, there are no damaged frames and no lost frames.

The Unrestricted Simplex Protocol is diagrammatically represented as follows −


Simplex Stop and Wait protocol

In this protocol we assume that data is transmitted in one direction only and that no errors occur,
but the receiver can only process the received information at a finite rate. These assumptions imply
that the transmitter must not send frames at a rate faster than the receiver can process them.

The main problem here is how to prevent the sender from flooding the receiver. The general
solution for this problem is to have the receiver send some sort of feedback to sender, the process
is as follows −

Step 1 − The receiver sends an acknowledgement frame back to the sender, telling the sender that
the last received frame has been processed and passed to the host.

Step 2 − Permission to send the next frame is granted.

Step 3 − The sender, after sending a frame, has to wait for an acknowledgement frame from the
receiver before sending another frame.

This protocol is called the Simplex Stop-and-Wait protocol: the sender sends one frame and waits
for feedback from the receiver. When the ACK arrives, the sender sends the next frame.
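A toy simulation of this behaviour (not a real network API; the loss probability and names are our own). With loss_prob=0 it matches this protocol's error-free assumption; a non-zero value previews the noisy-channel case below:

```python
import random

def stop_and_wait(frames, loss_prob=0.0, seed=42):
    # send one frame, wait for its ACK, then send the next
    rng = random.Random(seed)
    delivered, transmissions = [], 0
    for frame in frames:
        while True:
            transmissions += 1
            if rng.random() < loss_prob:   # frame or ACK lost: timeout,
                continue                   # resend the same frame
            delivered.append(frame)        # ACK arrived: move on
            break
    return delivered, transmissions

delivered, tx = stop_and_wait(["F0", "F1", "F2"])            # tx == 3
lossy, tx2 = stop_and_wait(["F0", "F1", "F2"], loss_prob=0.3)
```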

The Simplex Stop and Wait Protocol is diagrammatically represented as follows −


Simplex Protocol for Noisy Channel

Data transfer is only in one direction. We consider a separate sender and receiver, with finite
processing capacity and speed at the receiver. Since the channel is noisy, errors in data frames or
acknowledgement frames are expected. Every frame has a unique sequence number.

After a frame has been transmitted, a timer is started for a finite time. If the acknowledgement is
not received before the timer expires, the frame is retransmitted. The same rule handles a corrupted
acknowledgement or a damaged data frame: the sender simply retransmits after the timeout rather
than waiting indefinitely before transmitting the next frame.

The Simplex Protocol for Noisy Channel is diagrammatically represented as follows −

Sliding Window Protocol


The sliding window is a technique for sending multiple frames at a time. It controls the flow of data
packets between two devices where reliable, in-order delivery of data frames is needed. It is also
used in TCP (Transmission Control Protocol).
In this technique, each frame is assigned a sequence number. The sequence numbers are used
to find missing data at the receiver end and to reject duplicate data.

Types of Sliding Window Protocol

Sliding window protocol has two types:

1. Go-Back-N ARQ


2. Selective Repeat ARQ

Go-Back-N ARQ
Go-Back-N ARQ protocol is also known as Go-Back-N Automatic Repeat Request. It is a data
link layer protocol that uses a sliding window method. In this, if any frame is corrupted or lost, all
subsequent frames have to be sent again.

The size of the sender window is N in this protocol. For example, in Go-Back-8 the size of the
sender window will be 8. The receiver window size is always 1.

If the receiver receives a corrupted frame, it discards it; the receiver does not accept corrupted
frames. When the sender’s timer expires, the sender retransmits the frame. The design of the Go-
Back-N ARQ protocol is shown below.

What exactly happens in GBN is best explained with an example. Consider the diagram given
below, with a sender window size of 4 (assume a large sequence-number space for the sake of
explanation). The sender has sent packets 0, 1, 2 and 3. After acknowledging packets 0 and 1,
the receiver is expecting packet 2, and the sender window has slid forward to transmit packets
4 and 5. Now suppose packet 2 is lost in the network. The receiver will discard all packets the
sender transmitted after packet 2, because it is expecting sequence number 2.
On the sender side there is a timeout timer for every packet sent, and the timer for packet 2 will
expire. The sender then goes back from the last transmitted packet (5) to packet 2 in the current
window and retransmits all packets up to packet 5. That is why it is called Go-Back-N: the sender
goes back N places from the last transmitted packet in the unacknowledged window, not from the
point where the packet was lost.
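The trace in this example can be reproduced with a small sketch (an illustrative model, not a full protocol implementation): frames listed in `lost_first_tx` are lost the first time they are sent, and the receiver discards everything after a gap.

```python
def go_back_n_trace(num_frames, window, lost_first_tx):
    # returns the order in which frame numbers go onto the wire
    wire = []
    pending_loss = set(lost_first_tx)
    next_expected = 0     # receiver: next in-order frame wanted
    base = 0              # sender: start of the window
    next_seq = 0
    while next_expected < num_frames:
        # sender transmits until the window is full
        while next_seq < base + window and next_seq < num_frames:
            wire.append(next_seq)
            if next_seq in pending_loss:
                pending_loss.discard(next_seq)      # lost this time only
            elif next_seq == next_expected:
                next_expected += 1                  # cumulative ACK
                base = next_expected                # window slides
            # out-of-order frames are simply discarded by the receiver
            next_seq += 1
        next_seq = base    # timeout: go back and resend the whole window
    return wire

# sender window 4, packet 2 lost: 0..5 go out, then 2..5 are resent
trace = go_back_n_trace(6, 4, {2})   # -> [0, 1, 2, 3, 4, 5, 2, 3, 4, 5]
```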

Advantages of GBN Protocol


 Simple to implement and effective for reliable communication.


 Better performance than stop-and-wait protocols for error-free or low-error networks.


Disadvantages of GBN Protocol
 Inefficient if errors are frequent, as multiple frames might need to be retransmitted
unnecessarily.
 Bandwidth can be wasted due to redundant retransmissions

Selective Repeat ARQ


Selective Repeat ARQ is also known as the Selective Repeat Automatic Repeat Request. It is a
data link layer protocol that uses a sliding window method. The Go-Back-N ARQ protocol works
well if errors are few, but if frames are frequently in error, a lot of bandwidth is lost sending
the frames again; in that case, we use the Selective Repeat ARQ protocol. In this protocol, the size
of the sender window is always equal to the size of the receiver window, and the window size
is always greater than 1.

If the receiver receives a corrupt frame, it does not simply discard it. It sends a negative
acknowledgment to the sender, and the sender resends that frame as soon as it receives the
negative acknowledgment. There is no waiting for a timeout to send that frame. The design of
the Selective Repeat ARQ protocol is shown below.


Difference between the Go-Back-N ARQ and Selective Repeat ARQ

 Retransmission: In Go-Back-N, if a frame is corrupted or lost, all subsequent frames
have to be sent again. In Selective Repeat, only the frame that is corrupted or lost is
sent again.
 Bandwidth: Go-Back-N wastes a lot of bandwidth if the error rate is high. Selective
Repeat loses little bandwidth.
 Complexity: Go-Back-N is less complex. Selective Repeat is more complex because
it has to do sorting and searching as well, and it also requires more storage.
 Sorting: Go-Back-N does not require sorting. In Selective Repeat, sorting is done to
get the frames in the correct order.
 Searching: Go-Back-N does not require searching. In Selective Repeat, a search
operation is performed.
 Usage: Go-Back-N is used more. Selective Repeat is used less because it is more
complex.

Multiple Access Control


If there is a dedicated link between the sender and the receiver, then the data link control layer is
sufficient; however, if there is no dedicated link, multiple stations can access the channel
simultaneously. Hence multiple access protocols are required to decrease collisions and avoid
crosstalk. For example, in a classroom full of students, when a teacher asks a question and all
the students (stations) start answering simultaneously (send data at the same time), a lot of chaos
is created (data overlaps or is lost), and it is the job of the teacher (the multiple access protocol)
to manage the students and make them answer one at a time.
Thus, protocols are required for sharing data on non-dedicated channels. Multiple access
protocols can be subdivided further as follows.


1. Random Access Protocol


In this class, all stations have equal priority; no station has more priority than another. Any
station can send data depending on the medium’s state (idle or busy). It has two features:
 There is no fixed time for sending data
 There is no fixed sequence of stations sending data
The Random-access protocols are further subdivided as:
ALOHA
ALOHA was designed for wireless LANs but is also applicable to any shared medium. In it,
multiple stations can transmit data at the same time, which can lead to collisions and to data
being garbled.

Types of Aloha

Pure ALOHA
When a station sends data, it waits for an acknowledgement. If the acknowledgement does not
arrive within the allotted time, the station waits for a random amount of time, called the back-off
time (Tb), and resends the data. Since different stations wait for different amounts of time, the
probability of further collisions decreases.


Slotted ALOHA
It is similar to pure aloha, except that we divide time into slots and sending of data is allowed
only at the beginning of these slots. If a station misses out the allowed time, it must wait for the
next slot. This reduces the probability of collision.
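The standard throughput figures make the benefit of slotting concrete (these are the classical results, not derived in the text above): with offered load G, pure ALOHA achieves S = G * e^(-2G) and slotted ALOHA achieves S = G * e^(-G).

```python
import math

def pure_aloha_throughput(G):
    # vulnerable period of two frame times: S = G * e^(-2G), peak at G = 0.5
    return G * math.exp(-2 * G)

def slotted_aloha_throughput(G):
    # vulnerable period of one slot: S = G * e^(-G), peak at G = 1
    return G * math.exp(-G)

peak_pure = pure_aloha_throughput(0.5)       # about 0.184 (18.4 %)
peak_slotted = slotted_aloha_throughput(1)   # about 0.368 (36.8 %)
```

Slotting halves the vulnerable period and thereby doubles the maximum achievable throughput.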


CSMA
Carrier Sense Multiple Access ensures fewer collisions, as a station is required to first sense
the medium (for idle or busy) before transmitting data. If the medium is idle the station sends its
data; otherwise it waits till the channel becomes idle. However, there is still a chance of collision
in CSMA due to propagation delay. For example, if station A wants to send data, it will first sense
the medium. If it finds the channel idle, it will start sending data. However, before the first bit of
A’s data has propagated to station B (it is delayed by the propagation time), if station B wants to
send data and senses the medium, it will also find the medium idle and will also send data. This
results in a collision between the data from stations A and B.


CSMA Access Modes

 1-Persistent: The node senses the channel; if idle, it sends the data. Otherwise it
continuously keeps checking the medium and transmits unconditionally (with
probability 1) as soon as the channel becomes idle.
 Non-Persistent: The node senses the channel, if idle it sends the data, otherwise it
checks the medium after a random amount of time (not continuously) and transmits
when found idle.


 P-Persistent: The node senses the medium; if idle, it sends the data with probability p.
If the data is not transmitted (probability 1-p), it waits for some time and checks the
medium again; if the medium is then found idle, it again sends with probability p. This
repeats until the frame is sent. It is used in Wi-Fi and packet radio systems.
 O-Persistent: Superiority of nodes is decided beforehand and transmission occurs in
that order. If the medium is idle, node waits for its time slot to send data.
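The modes differ only in what the station does at each sensing step, which a small sketch makes explicit (illustrative names of our own; O-persistent is omitted since it depends on an externally assigned order):

```python
import random

def csma_decision(mode, channel_idle, p=0.5, rng=None):
    # one sensing step for the persistence modes described above
    rng = rng or random.Random()
    if not channel_idle:
        # 1-persistent keeps sensing continuously; the others back off
        return "keep_sensing" if mode == "1-persistent" else "wait_random_time"
    if mode == "p-persistent":
        # idle channel: transmit with probability p, else defer one slot
        return "transmit" if rng.random() < p else "defer_one_slot"
    return "transmit"      # 1-persistent and non-persistent send when idle
```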
CSMA/CD
Carrier sense multiple access with collision detection. Stations terminate the transmission of
data as soon as a collision is detected.
CSMA/CA
Carrier sense multiple access with collision avoidance. Collision detection involves the sender
listening while it transmits. If it hears just one signal (its own), the data was sent successfully,
but if it hears two signals (its own and the one it collided with), a collision has occurred. To
distinguish between these two cases, the collision must have a significant impact on the received
signal. That is the case in wired networks, but not in wireless networks, where a station’s own
transmission overwhelms any colliding signal; therefore CSMA/CA is used in wireless networks.
CSMA/CA Avoids Collision By
 Interframe Space: The station waits for the medium to become idle and, if it is found
idle, does not immediately send data (to avoid collision due to propagation delay);
rather, it waits for a period of time called the interframe space, or IFS. After this time,
it again checks the medium for being idle. The IFS duration depends on the priority of
the station.
 Contention Window: This is an amount of time divided into slots. A sender that is ready
to send data chooses a random number of slots as its wait time; the window from which
this number is drawn doubles every time the medium is not found idle. If the medium is
found busy, the sender does not restart the entire process; rather, it pauses the timer and
resumes it when the channel is found idle again.
 Acknowledgement: The sender re-transmits the data if acknowledgement is not
received before time-out.
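The contention-window rule can be sketched as binary exponential backoff (an illustrative model with parameters of our choosing):

```python
import random

def backoff_slots(busy_rounds, seed=None):
    # each time the medium is found busy, the window of candidate
    # wait times doubles; a random slot count is drawn from it
    rng = random.Random(seed)
    draws, cw = [], 2          # start with a 2-slot contention window
    for _ in range(busy_rounds):
        draws.append(rng.randrange(cw))   # wait time in [0, cw) slots
        cw *= 2                           # double for the next busy round
    return draws

waits = backoff_slots(4, seed=7)   # four draws, from windows 2, 4, 8, 16
```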
2. Controlled Access
Controlled access protocols ensure that only one device uses the network at a time. Think of it
like taking turns in a conversation so everyone can speak without talking over each other.
In this approach, a station sends data only when it has been granted the right to transmit, for
example by reservation, polling, or token passing.
3. Channelization
In this, the available bandwidth of the link is shared in time, frequency, or code among multiple
stations so that they can access the channel simultaneously.
 Frequency Division Multiple Access (FDMA) – The available bandwidth is divided
into equal bands so that each station can be allocated its own band. Guard bands are
also added so that no two bands overlap to avoid crosstalk and noise.
 Time Division Multiple Access (TDMA) – In this, the bandwidth is shared between
multiple stations. To avoid collision time is divided into slots and stations are allotted
these slots to transmit data. However, there is an overhead of synchronization as each
station needs to know its time slot. This is resolved by adding synchronization bits to
each slot. Another issue with TDMA is propagation delay which is resolved by
addition of guard bands.
 Code Division Multiple Access (CDMA) – One channel carries all transmissions
simultaneously. There is neither division of bandwidth nor division of time. For
example, if there are many people in a room all speaking at the same time, perfect
reception of data is still possible provided only the two persons conversing speak
the same language. Similarly, data from different stations can be transmitted
simultaneously using different code languages.

D. S. R 22
 Orthogonal Frequency Division Multiple Access (OFDMA) – In OFDMA the
available bandwidth is divided into small subcarriers in order to increase the overall
performance, and the data is transmitted through these small subcarriers. It is widely
used in 5G technology.
Advantages of OFDMA
 High data rates
 Good for multimedia traffic
 Increase in efficiency
Disadvantages of OFDMA
 Complex to implement
 Spatial Division Multiple Access (SDMA) – SDMA uses multiple antennas at the transmitter
and receiver to separate the signals of multiple users located in different spatial
directions. This technique is commonly used in MIMO (Multiple-Input, Multiple-Output)
wireless communication systems.
Advantages of SDMA
 Uses the frequency band effectively
 The overall signal quality is improved
 The overall data rate is increased
Disadvantages of SDMA
 It is complex to implement
 It requires accurate information about the channel
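The "different code languages" analogy for CDMA can be made concrete with orthogonal chip sequences (Walsh codes). The sketch below assumes four stations with hand-picked codes; summed transmissions on the shared channel can still be separated by a dot product with each station's code:

```python
# Four mutually orthogonal chip sequences (Walsh codes), one per station.
CODES = [
    (+1, +1, +1, +1),
    (+1, -1, +1, -1),
    (+1, +1, -1, -1),
    (+1, -1, -1, +1),
]


def transmit(bits):
    """bits[i] is station i's data bit: +1, -1, or 0 if the station is silent.

    Each station multiplies its bit by its chip sequence; the shared channel
    simply adds the chips of all stations together.
    """
    return [sum(b * code[j] for b, code in zip(bits, CODES))
            for j in range(len(CODES[0]))]


def decode(channel, station):
    """Recover one station's bit: dot product with its code, then normalize."""
    code = CODES[station]
    return sum(s * c for s, c in zip(channel, code)) // len(code)


channel = transmit([+1, -1, 0, +1])  # stations 0, 1 and 3 transmit; 2 is silent
```

Even though all the chips overlap on the wire, `decode(channel, 1)` recovers station 1's bit (-1) and `decode(channel, 2)` yields 0 for the silent station, because the codes are mutually orthogonal.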
Features of Multiple Access Protocols
 Contention-Based Access: Multiple access protocols are typically contention-based,
meaning that multiple devices compete for access to the communication channel. This
can lead to collisions if two or more devices transmit at the same time, which can
result in data loss and decreased network performance.
 Carrier Sense Multiple Access (CSMA): CSMA is a widely used multiple access
protocol in which devices listen for carrier signals on the communication channel
before transmitting. If a carrier signal is detected, the device waits for a random
amount of time before attempting to transmit to reduce the likelihood of collisions.
 Collision Detection (CD): CD is a feature of some multiple access protocols that
allows devices to detect when a collision has occurred and take appropriate action,
such as backing off and retrying the transmission.
 Collision Avoidance (CA): CA is a feature of some multiple access protocols that
attempts to avoid collisions using techniques such as interframe spacing and random
backoff within a contention window before transmitting.
 Token Passing: Token passing is a multiple access protocol in which devices pass a
special token between each other to gain access to the communication channel.
Devices can only transmit data when they hold the token, which ensures that only one
device can transmit at a time.
 Bandwidth Utilization: Multiple access protocols can affect the overall bandwidth
utilization of a network. For example, contention-based protocols may result in lower
bandwidth utilization due to collisions, while token passing protocols may result in
higher bandwidth utilization due to the controlled access to the communication
channel.
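The token-passing discipline described above can be sketched as a simple round-robin simulation. The station names and frame queues below are purely illustrative; the point is that only the current token holder ever transmits:

```python
from collections import deque


def token_ring(stations, queues, rounds):
    """Circulate the token `rounds` times around the ring of stations.

    Only the token holder may transmit, and it sends at most one frame per
    token hold, so there is never more than one sender at a time.
    """
    ring = deque(stations)
    sent = []
    for _ in range(rounds * len(stations)):
        holder = ring[0]
        if queues.get(holder):  # holder has a frame queued
            sent.append((holder, queues[holder].pop(0)))
        ring.rotate(-1)  # pass the token to the next station
    return sent


log = token_ring(["A", "B", "C"], {"A": ["a1", "a2"], "C": ["c1"]}, 2)
```

Here `log` records the transmissions in token order: A sends first, B holds the token but has nothing to send, C sends, and A sends its second frame on the next circulation.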

LAN Switching:
LAN stands for Local Area Network. It is a computer network that covers a relatively small area,
such as a building or a campus of up to a few kilometers in size. LANs are generally used
to connect personal computers and workstations in company offices to share common resources,
like printers, and exchange information.
LAN switching is a technology that promises to increase the efficiency of local area networks
and solve the current bandwidth problems. Examples of LAN switching are as follows:
 Wired LAN: Ethernet, Hub, Switch
 Wireless LAN: Wi-Fi

Advantages of LAN Switching:
 It increases network scalability, which means the network can
expand as demand grows.
 Each network user can experience improved bandwidth performance.
 The setup of a LAN is easy compared to other switching techniques.
Disadvantages of LAN Switching:
 The cost of setting up a LAN is quite high.
 Privacy violations are another disadvantage, as a LAN administrator can inspect
the personal files of every user on the network.
 Since users may be able to access other users' data, security is a major issue.
Applications of LAN Switching:
 A LAN can be used to connect printers, desktops, file servers, and storage arrays.
 LANs direct traffic between endpoints in a local area network.
LAN switching technology addresses the existing bandwidth issues and helps to improve
the overall efficiency of the LAN. It also provides interconnection between all the other nodes
in that area network.
Benefits of LAN switching over other switching techniques

LAN switching provides increased scalability, which means the network can be expanded as per
our requirements. In LAN switching, many simultaneous connections can be made. Since all nodes
in a LAN are connected to each other, the chances of failure are very low.

Framing in Data Link Layer


Frames are the units of digital transmission, particularly in computer networks and
telecommunications. Frames are also used continuously in the Time Division Multiplexing process.
In a point-to-point connection between two computers or devices, data is transmitted over a
wire as a stream of bits. However, these bits must be framed into
discernible blocks of information. Framing is a function of the data link layer.
It provides a way for a sender to transmit a set of bits that are meaningful to the receiver.
Ethernet, token ring, frame relay, and other data link layer technologies have their own frame
structures. Frames have headers that contain information such as error-checking codes.

At the data link layer, the frame carries the message from the sender to the receiver by
including the sender's and receiver's addresses. The advantage of using frames is that data is
broken up into recoverable chunks that can easily be checked for corruption.
The process of dividing the data into frames and reassembling it is transparent to the user and is
handled by the data link layer.
Framing is an important aspect of data link layer protocol design because it allows the
transmission of data to be organized and controlled, ensuring that the data is delivered accurately
and efficiently.
Problems in Framing
 Detecting start of the frame: When a frame is transmitted, every station must be
able to detect it. Station detects frames by looking out for a special sequence of bits
that marks the beginning of the frame i.e., SFD (Starting Frame Delimiter).
 How does the station detect a frame: Every station listens to link for SFD pattern
through a sequential circuit. If SFD is detected, sequential circuit alerts station.
Station checks destination address to accept or reject frame.
 Detecting end of frame: When to stop reading the frame.
 Handling errors: Framing errors may occur due to noise or other transmission
errors, which can cause a station to misinterpret the frame. Therefore, error detection
and correction mechanisms, such as cyclic redundancy check (CRC), are used to
ensure the integrity of the frame.
 Framing overhead: Every frame has a header and a trailer that contains control
information such as source and destination address, error detection code, and other
protocol-related information. This overhead reduces the available bandwidth for data
transmission, especially for small-sized frames.
 Framing incompatibility: Different networking devices and protocols may use
different framing methods, which can lead to framing incompatibility issues. For
example, if a device using one framing method sends data to a device using a different
framing method, the receiving device may not be able to correctly interpret the frame.
 Framing synchronization: Stations must be synchronized with each other to avoid
collisions and ensure reliable communication. Synchronization requires that all
stations agree on the frame boundaries and timing, which can be challenging in
complex networks with many devices and varying traffic loads.
 Framing efficiency: Framing should be designed to minimize the amount of data
overhead while maximizing the available bandwidth for data transmission. Inefficient
framing methods can lead to lower network performance and higher latency.
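The CRC mentioned above amounts to modulo-2 polynomial division: the sender appends the remainder to the frame, and the receiver recomputes it and expects zero. A minimal sketch over bit strings (the dataword 100100 and generator 1101 are a common textbook example, not a standard polynomial):

```python
def crc_remainder(data: str, generator: str) -> str:
    """Return the CRC remainder of bit string `data` for `generator`."""
    # Append len(generator)-1 zero bits, then do XOR-based long division.
    bits = list(data + "0" * (len(generator) - 1))
    for i in range(len(data)):
        if bits[i] == "1":  # leading bit set: subtract (XOR) the generator
            for j, g in enumerate(generator):
                bits[i + j] = str(int(bits[i + j]) ^ int(g))
    return "".join(bits[-(len(generator) - 1):])


# The sender appends the remainder to form the transmitted codeword.
codeword = "100100" + crc_remainder("100100", "1101")
```

A valid codeword divides evenly (remainder all zeros at the receiver), while a corrupted one leaves a non-zero remainder, which is how the error is detected.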
Types of framing
There are two types of framing:
1. Fixed-size: The frame is of fixed size and there is no need to provide boundaries to the frame,
the length of the frame itself acts as a delimiter.
 Drawback: It suffers from internal fragmentation if the data size is less than the
frame size
 Solution: Padding
2. Variable size: In this, there is a need to define the end of the frame as well as the beginning
of the next frame to distinguish. This can be done in two ways:
1. Length field – We can introduce a length field in the frame to indicate the length of
the frame. Used in Ethernet (802.3). The problem with this is that sometimes the
length field might get corrupted.
2. End Delimiter (ED) – We can introduce an ED (pattern) to indicate the end of the
frame. Used in Token Ring. The problem with this is that ED can occur in the data.
This can be solved by:
1. Character/Byte Stuffing: Used when frames consist of characters. If data contains
ED, then, a byte is stuffed into data to differentiate it from ED.
Let ED = “$”. If the data contains ‘$’ anywhere, it is escaped using the ‘\O’ character.
If the data contains ‘\O$’, it is sent as ‘\O\O\O$’ (‘$’ is escaped using ‘\O’ and ‘\O’
is escaped using ‘\O’).

Disadvantage – It is a costly and largely obsolete method.
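The escaping rule above can be written out directly. The delimiter (`$`) and escape sequence (`\O`) follow the example in the text; note the order of the two replacements matters, since the escape sequence must be doubled before delimiters are escaped:

```python
ED = "$"      # end delimiter, as in the example above
ESC = "\\O"   # escape sequence ('\O' in the text)


def byte_stuff(data: str) -> str:
    # First double every escape sequence, then escape every delimiter.
    return data.replace(ESC, ESC + ESC).replace(ED, ESC + ED)


def byte_unstuff(stuffed: str) -> str:
    # Scan left to right, undoing one escape at a time.
    out, i = "", 0
    while i < len(stuffed):
        if stuffed.startswith(ESC + ESC, i):   # escaped escape sequence
            out += ESC
            i += len(ESC) * 2
        elif stuffed.startswith(ESC + ED, i):  # escaped delimiter
            out += ED
            i += len(ESC) + len(ED)
        else:
            out += stuffed[i]
            i += 1
    return out
```

With these definitions, `byte_stuff("\\O$")` reproduces the `\O\O\O$` result from the example, and unstuffing recovers the original payload.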


2. Bit Stuffing: Let ED = 01111 and data = 01111.
–> The sender stuffs a bit to break the pattern, i.e. here it inserts a 0: data = 011101.
–> The receiver receives the frame.
–> If the received data contains 011101, the receiver removes the stuffed 0 and reads the data.

Examples:
 If Data –> 011100011110 and ED –> 0111, then find the data after bit stuffing.
--> 011010001101100
 If Data –> 110001001 and ED –> 1000, then find the data after bit stuffing.
--> 11001010011
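Both worked answers above can be reproduced with a small function. The stuffing rule assumed here, consistent with the examples, is: whenever the most recently sent bits equal ED without its final bit, stuff the complement of ED's final bit so ED can never appear in the data:

```python
def bit_stuff(data: str, ed: str) -> str:
    """Stuff bits so the end delimiter `ed` cannot appear inside `data`."""
    prefix = ed[:-1]                       # ED minus its final bit
    stuff = "0" if ed[-1] == "1" else "1"  # complement of ED's final bit
    out, window = [], ""
    for bit in data:
        out.append(bit)
        window = (window + bit)[-len(prefix):]
        if window == prefix:               # one more ed[-1] would complete ED
            out.append(stuff)
            window = (window + stuff)[-len(prefix):]
    return "".join(out)
```

This reproduces both answers: `bit_stuff("011100011110", "0111")` gives `011010001101100`, and `bit_stuff("110001001", "1000")` gives `11001010011`.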
Framing in the Data Link Layer also presents some challenges, which include:
Variable frame length: The length of frames can vary depending on the data being transmitted,
which can lead to inefficiencies in transmission. To address this issue, protocols such as HDLC
and PPP use a flag sequence to mark the start and end of each frame.
Bit stuffing: Bit stuffing is a technique used to prevent data from being interpreted as control
characters by inserting extra bits into the data stream. However, bit stuffing can lead to issues
with synchronization and increase the overhead of the transmission.
Synchronization: Synchronization is critical for ensuring that data frames are transmitted and
received correctly. However, synchronization can be challenging, particularly in high-speed
networks where frames are transmitted rapidly.
Error detection: Data Link Layer protocols use various techniques to detect errors in the
transmitted data, such as checksums and CRCs. However, these techniques are not foolproof and
can miss some types of errors.
Efficiency: Efficient use of available bandwidth is critical for ensuring that data is transmitted
quickly and reliably. However, the overhead associated with framing and error detection can
reduce the overall efficiency of the transmission.

Link virtualization
Link virtualization is a technique that is used in computer networking to create virtual links or
connections between different devices. This technique is often used in the context of the link layer,
which is the second layer of the OSI (Open Systems Interconnection) model.
Link virtualization is a type of network virtualization that involves virtualizing Internet Protocol
(IP) routing, forwarding, and addressing schemes. Network virtualization is a process that
combines hardware and software network resources into a single virtual network. This allows you
to create multiple virtual networks that share the same physical infrastructure.

Here are some benefits of network virtualization:

 Flexibility
You can group or separate virtual networks as needed, or connect virtual machines (VMs)
however you want.
 Speed
You can spin up logical networks more quickly in response to business requirements.
 Control
You can improve control over your network.
 Software testing
You can test software in a simulated network environment without having to physically test
it on all possible hardware or system software.
Link virtualization in the context of the network link layer refers to the abstraction and
management of physical network links to create virtual links. This concept is particularly important
in modern networking environments where flexibility, scalability, and efficient resource utilization
are crucial. Here’s a breakdown of the key aspects:

Key Concepts of Link Virtualization

1. Abstraction of Physical Links:
- Link virtualization abstracts the underlying physical network connections,
allowing multiple logical links to be created over a single physical link. This
helps in simplifying network management and configuration.
2. Creation of Virtual Links:
- Virtual links can be created to represent different paths or connections without
needing additional physical infrastructure. This can be particularly useful in
scenarios like data centers, where multiple tenants may share the same physical
resources.
3. Improved Resource Utilization:
- By virtualizing links, network resources can be allocated and utilized more
efficiently. This allows for better handling of varying traffic loads and can lead
to improved performance.
4. Isolation and Security:
- Virtual links provide isolation between different traffic types or user groups,
enhancing security. For instance, different virtual links can be established for
different departments within an organization, preventing unauthorized access to
sensitive data.
5. Dynamic Configuration:
- Link virtualization enables dynamic reconfiguration of network paths, allowing
for adjustments based on current network conditions, performance requirements,
or administrative policies.
6. Support for Different Protocols:
- Virtual links can support various protocols and services, making it easier to
implement diverse networking solutions like VLANs (Virtual Local Area
Networks) and VPNs (Virtual Private Networks).
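VLANs, mentioned above, are a concrete form of link virtualization: an IEEE 802.1Q tag inserted into the Ethernet header carries a 12-bit VLAN ID that splits one physical link into many logical ones. A minimal sketch of packing and parsing the 4-byte tag, following the standard 802.1Q field layout:

```python
import struct

TPID = 0x8100  # 802.1Q Tag Protocol Identifier


def make_tci(pcp: int, dei: int, vid: int) -> int:
    """Pack priority (3 bits), drop-eligible flag (1 bit), VLAN ID (12 bits)."""
    return (pcp << 13) | (dei << 12) | (vid & 0xFFF)


def parse_tci(tci: int):
    """Unpack a Tag Control Information field into (pcp, dei, vid)."""
    return (tci >> 13) & 0x7, (tci >> 12) & 0x1, tci & 0xFFF


# The 4-byte tag that a switch inserts after the source MAC address.
tag = struct.pack("!HH", TPID, make_tci(pcp=5, dei=0, vid=100))
```

Frames tagged with different VLAN IDs travel the same physical wire but belong to separate logical (virtual) links, which gives the isolation described above.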

A Network as a Link Layer:


In computer networks, the link layer is the lowest layer of the Internet Protocol Suite
(TCP/IP). It's a group of methods and protocols that are used to connect hosts to a physical
link. The link layer is responsible for hardware issues, such as obtaining MAC addresses to locate
hosts and transmitting data frames.

Here are some key features of the link layer:


 Link layer protocols: These protocols define the format of packets exchanged between
nodes and the actions taken when sending and receiving packets.
 Local area network protocols: These include Ethernet and Wi-Fi.


 Framing protocols: These include Point-to-Point Protocol (PPP).
The link layer is sometimes described as a combination of the OSI model's data link layer and
physical layer. The data link layer is the second layer in the OSI model and is responsible for how
data is transmitted, framed, and controlled over the physical medium.