CN Unit 2
ERROR DETECTION:
An error is a condition in which the receiver's information does not match the sender's. Digital signals suffer from noise during transmission, which can introduce errors in the binary bits traveling from sender to receiver: a 0 bit may change to 1, or a 1 bit may change to 0.
Data may get scrambled by noise or corrupted whenever a message is transmitted. To prevent such errors, error-detection codes (implemented either at the Data Link layer or the Transport layer of the OSI model) are added as extra data to digital messages. This helps in detecting any errors that may have occurred during message transmission.
Types of Errors
Single-Bit Error
A single-bit error refers to a type of data transmission error that occurs when one bit (i.e., a single
binary digit) of a transmitted data unit is altered during transmission, resulting in an incorrect or
corrupted data unit.
Multiple-Bit Error
A multiple-bit error is an error type that arises when more than one bit in a data transmission is
affected. Although multiple-bit errors are relatively rare when compared to single-bit errors, they
can still occur, particularly in high-noise or high-interference digital environments.
Burst Error
When several consecutive bits are flipped mistakenly in digital transmission, it creates a burst
error. This error causes a sequence of consecutive incorrect values.
CRC Working
We are given a dataword of length n and a divisor of length k.
Step 1: Append (k-1) zeros to the original message.
Step 2: Perform modulo-2 division by the divisor.
Step 3: The remainder of the division is the CRC.
Step 4: Codeword = original dataword followed by the CRC (the CRC takes the place of the appended k-1 zeros).
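As a rough sketch of these four steps, the following Python snippet performs the modulo-2 (XOR) division. The function names and the sample dataword 100100 / divisor 1101 are illustrative choices, not values from these notes.

def crc_remainder(dataword, divisor):
    """Modulo-2 (XOR) division; returns the CRC, i.e. the (k-1)-bit remainder."""
    k = len(divisor)
    padded = list(dataword + "0" * (k - 1))      # Step 1: append (k-1) zeros
    for i in range(len(dataword)):               # Step 2: modulo-2 division
        if padded[i] == "1":                     # divide only where the leading bit is 1
            for j in range(k):
                padded[i + j] = str(int(padded[i + j]) ^ int(divisor[j]))
    return "".join(padded[-(k - 1):])            # Step 3: remainder = CRC

def crc_codeword(dataword, divisor):
    return dataword + crc_remainder(dataword, divisor)   # Step 4: dataword + CRC

# Illustrative values (not from the notes): dataword 100100, divisor 1101
print(crc_codeword("100100", "1101"))            # 100100001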
ERROR CORRECTION:
Hamming Code
Hamming code is an error-correcting code used to ensure data accuracy during transmission or storage. It detects and corrects the errors that can occur when data is transmitted from the sender to the receiver or stored. This simple and effective method helps improve the reliability of communication systems and digital storage. It adds extra bits to the original data, allowing the system to detect and correct single-bit errors. The technique was developed by Richard Hamming in the 1950s.
Redundant Bits
Redundant bits are extra binary bits that are generated and added to the information-carrying bits of a data transfer so that errors introduced during the transfer can be detected and corrected. The number of redundant bits can be calculated using the following formula:
2^r ≥ m + r + 1
where m is the number of bits in input data, and r is the number of redundant bits.
Suppose the number of data bits is 7. Then the number of redundant bits can be calculated using 2^4 = 16 ≥ 7 + 4 + 1 = 12. Thus, the number of redundant bits is 4.
Types of Parity Bits
A parity bit is a bit appended to a block of binary bits to make the total number of 1's in the block even or odd. Parity bits are used for error detection. There are two types of parity bits:
Even Parity Bit: In the case of even parity, for a given set of bits, the number of 1's is counted. If that count is odd, the parity bit is set to 1, making the total count of 1's an even number. If the total number of 1's in the given set of bits is already even, the parity bit's value is 0.
Odd Parity Bit: In the case of odd parity, for a given set of bits, the number of 1's is counted. If that count is even, the parity bit is set to 1, making the total count of 1's an odd number. If the total number of 1's in the given set of bits is already odd, the parity bit's value is 0.
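A minimal sketch of both parity rules, using the data bits 1011001 from the Hamming example later in this section; the function name is just illustrative.

def parity_bit(bits, even=True):
    """Return the parity bit for a block of bits.
    Even parity: make the total number of 1's (data + parity bit) even.
    Odd parity:  make the total number of 1's odd."""
    ones = bits.count("1")
    if even:
        return "0" if ones % 2 == 0 else "1"
    return "1" if ones % 2 == 0 else "0"

# '1011001' contains four 1's, so its even-parity bit is 0 and its odd-parity bit is 1
print(parity_bit("1011001", even=True))   # 0
print(parity_bit("1011001", even=False))  # 1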
Hamming Code is simply the use of extra parity bits to allow the identification of an error.
Step 1: Write the bit positions starting from 1 in binary form (1, 10, 11, 100, etc).
Step 2: All the bit positions that are a power of 2 are marked as parity bits (1, 2, 4, 8, etc).
Step 3: All the other bit positions are marked as data bits.
Step 4: Each data bit is included in a unique set of parity bits, as determined by its bit position in binary form:
a. Parity bit 1 covers all the bit positions whose binary representation includes a 1 in the least significant position (1, 3, 5, 7, 9, 11, etc.).
b. Parity bit 2 covers all the bit positions whose binary representation includes a 1 in the second position from the least significant bit (2, 3, 6, 7, 10, 11, etc.).
c. Parity bit 4 covers all the bit positions whose binary representation includes a 1 in the third position from the least significant bit (4–7, 12–15, 20–23, etc.).
d. Parity bit 8 covers all the bit positions whose binary representation includes a 1 in the fourth position from the least significant bit (8–15, 24–31, 40–47, etc.).
e. In general, each parity bit covers all bits where the bitwise AND of the parity position and the bit position is non-zero.
Step 5: Since we check for even parity, set a parity bit to 1 if the total number of ones in the positions it checks is odd. Set a parity bit to 0 if the total number of ones in the positions it checks is even.
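The five steps can be sketched in Python as follows. This assumes even parity (as in step 5) and assumes the leftmost data bit goes into the highest numbered position, which is the placement that matches the worked example that follows; the function name and return format are illustrative.

def hamming_encode(data):
    """Even-parity Hamming encoder following steps 1-5. Returns a dict
    {position: bit} with positions numbered from 1 and the parity bits at
    the powers of two. The leftmost data bit is placed at the highest
    position, which is the placement that reproduces the worked example
    below (data 1011001 gives R1 = 0, R2 = 1, R4 = 1, R8 = 0)."""
    m = len(data)
    r = 0
    while 2 ** r < m + r + 1:        # number of redundant bits: 2^r >= m + r + 1
        r += 1
    n = m + r
    # positions that are NOT powers of two hold the data bits
    data_positions = [p for p in range(1, n + 1) if p & (p - 1) != 0]
    code = {pos: int(bit) for bit, pos in zip(data, reversed(data_positions))}
    # parity bit at position 2^i covers every position whose i-th binary digit is 1
    for i in range(r):
        parity_pos = 2 ** i
        code[parity_pos] = sum(code[pos] for pos in data_positions if pos & parity_pos) % 2
    return code

codeword = hamming_encode("1011001")
print([codeword[p] for p in sorted(codeword)])   # bits at positions 1..11; R1, R2, R4, R8 at 1, 2, 4, 8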
Suppose the data to be transmitted is 1011001 from sender to receiver. With the parity bits at positions 1, 2, 4 and 8 and the data bits filling the remaining positions (leftmost data bit at the highest position), the bits are placed as follows:
Position: 1  2  3  4  5  6  7  8  9  10  11
Bit:      R1 R2 1  R4 0  0  1  R8 1  0   1
R1 is calculated using a parity check at all the bit positions whose binary representation includes a 1 in the least significant position. R1: bits 1, 3, 5, 7, 9, 11.
To find the redundant bit R1, we check for even parity. Since the total number of 1's in all the bit positions corresponding to R1 is an even number, the value of R1 (the parity bit's value) is 0.
The R2 bit is calculated using a parity check at all the bit positions whose binary representation includes a 1 in the second position from the least significant bit. R2: bits 2, 3, 6, 7, 10, 11.
To find the redundant bit R2, we check for even parity. Since the total number of 1's in all the bit positions corresponding to R2 is odd, the value of R2 (the parity bit's value) is 1.
The R4 bit is calculated using a parity check at all the bit positions whose binary representation includes a 1 in the third position from the least significant bit. R4: bits 4, 5, 6, 7.
To find the redundant bit R4, we check for even parity. Since the total number of 1's in all the bit positions corresponding to R4 is odd, the value of R4 (the parity bit's value) is 1.
The R8 bit is calculated using a parity check at all the bit positions whose binary representation includes a 1 in the fourth position from the least significant bit. R8: bits 8, 9, 10, 11.
To find the redundant bit R8, we check for even parity. Since the total number of 1's in all the bit positions corresponding to R8 is an even number, the value of R8 (the parity bit's value) is 0. Thus, the data transferred is 10101001110 (reading from position 11 down to position 1).
Suppose that during transmission bit 6 gets flipped from 0 to 1. At the receiver, for all the parity bits we check the number of 1's in their respective bit positions.
For R1: bits 1, 3, 5, 7, 9, 11. The number of 1's in these bit positions is 4, which is even, so we get a 0 for this check.
For R2: bits 2, 3, 6, 7, 10, 11. The number of 1's in these bit positions is 5, which is odd, so we get a 1 for this check.
For R4: bits 4, 5, 6, 7. The number of 1's in these bit positions is 3, which is odd, so we get a 1 for this check.
For R8: bits 8, 9, 10, 11. The number of 1's in these bit positions is 2, which is even, so we get a 0 for this check.
The check bits, written as R8 R4 R2 R1, give the binary number 0110, whose decimal value is 6. Thus, bit 6 contains an error, and to correct it the 6th bit is changed from 1 back to 0.
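The receiver-side check and correction can be sketched in the same style; hamming_encode is the hypothetical encoder from the earlier sketch, and a single-bit error is assumed, exactly as in the example.

def hamming_correct(code):
    """Receiver-side check for the even-parity code built by hamming_encode
    above. `code` maps position -> bit. Each parity check is recomputed over
    every position it covers, parity bit included; the failing checks, read
    together as a binary number (R8 R4 R2 R1), give the error position."""
    n = max(code)
    error_pos, p = 0, 1
    while p <= n:
        ones = sum(code[pos] for pos in code if pos & p)
        if ones % 2 != 0:           # even parity violated: this check fails
            error_pos += p
        p *= 2
    if error_pos:                   # assumes at most one bit was flipped
        code[error_pos] ^= 1        # flip the erroneous bit back
    return code

received = hamming_encode("1011001")
received[6] ^= 1                    # bit 6 gets flipped in transit, as in the example
print(hamming_correct(received) == hamming_encode("1011001"))   # True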
Features of Hamming Code
Error Detection and Correction: Hamming code is designed to detect and correct
single-bit errors that may occur during the transmission of data. This ensures that the
recipient receives the same data that was transmitted by the sender.
Redundancy: Hamming code uses redundant bits to add additional information to
the data being transmitted. This redundancy allows the recipient to detect and correct
errors that may have occurred during transmission.
Efficiency: Hamming code is a relatively simple and efficient error-correction
technique that does not require a lot of computational resources. This makes it ideal
for use in low-power and low-bandwidth communication networks.
Widely Used: Hamming code is a widely used error-correction technique and is used
in a variety of applications, including telecommunications, computer networks, and
data storage systems.
Single Error Correction: Hamming code is capable of correcting a single-bit error,
which makes it ideal for use in applications where errors are likely to occur due to
external factors such as electromagnetic interference.
Limited Multiple Error Correction: Hamming code can only correct a limited
number of multiple errors. In applications where multiple errors are likely to occur,
more advanced error-correction techniques may be required.
Elementary data link protocols are classified into three categories, as given below −
1. Unrestricted Simplex Protocol: Data transmission is carried out in one direction only. The transmitter (Tx) and receiver (Rx) are always ready and the processing time can be ignored. In this protocol, infinite buffer space is available, and no errors occur, that is, there are no damaged frames and no lost frames.
2. Simplex Stop-and-Wait Protocol: In this protocol we assume that data is transmitted in one direction only and that no errors occur, but the receiver can only process the received information at a finite rate. These assumptions imply that the transmitter cannot send frames at a rate faster than the receiver can process them.
The main problem here is how to prevent the sender from flooding the receiver. The general solution to this problem is to have the receiver send some sort of feedback to the sender. The process is as follows −
Step 1 − The receiver sends an acknowledgement frame back to the sender, telling the sender that the last received frame has been processed and passed to the host.
Step 2 − After sending a frame, the sender has to wait for an acknowledgement frame from the receiver before sending another frame.
This protocol is called the Simplex Stop-and-Wait protocol: the sender sends one frame and waits for feedback from the receiver. When the ACK arrives, the sender sends the next frame.
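A toy, single-process sketch of this stop-and-wait exchange is given below; the receiver function stands in for the real channel and the receiving data link entity, and the frame names are made up for illustration.

def receiver(frame):
    """Process the frame, deliver it to the host, and return an acknowledgement."""
    print(f"receiver: got {frame!r}, delivering to host")
    return "ACK"

def stop_and_wait_sender(frames):
    """Send one frame at a time and block until the ACK arrives before sending
    the next. The 'channel' is just a direct function call, matching the
    error-free assumption of this protocol (no lost or damaged frames)."""
    for frame in frames:
        ack = None
        while ack != "ACK":                # wait for feedback from the receiver
            ack = receiver(frame)          # transmit the frame and collect the ACK
        print(f"sender: {frame!r} acknowledged, sending the next frame")

stop_and_wait_sender(["frame-0", "frame-1", "frame-2"])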
3. Stop-and-Wait Protocol for a Noisy Channel (Stop-and-Wait ARQ): Data transfer is only in one direction; there are separate sender and receiver, and the receiver has finite processing capacity and speed. Since the channel is noisy, errors in data frames or acknowledgement frames are expected. Every frame has a unique sequence number.
After a frame has been transmitted, a timer is started for a finite time. If the acknowledgement is not received before the timer expires, the frame is retransmitted. Without such a timer, when the acknowledgement gets corrupted or the sent data frame gets damaged, the sender would have to wait indefinitely before transmitting the next frame.
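A rough sketch of this behaviour, with the timeout modelled as one loop iteration and random frame/ACK losses; the loss probability, retry limit and frame names are illustrative assumptions.

import random

def stop_and_wait_arq(frames, max_tries=10, loss_prob=0.3):
    """Toy Stop-and-Wait ARQ over a noisy channel: each frame carries a
    1-bit sequence number, the sender 'starts a timer' after transmitting
    (modelled here as one loop iteration), and retransmits the same frame
    whenever no acknowledgement comes back before the timer expires."""
    random.seed(0)                              # fixed seed, repeatable run
    seq = 0
    for frame in frames:
        for attempt in range(max_tries):
            print(f"send ({seq}, {frame!r})")
            frame_lost = random.random() < loss_prob
            ack_lost = random.random() < loss_prob
            if not frame_lost and not ack_lost:
                print(f"  ACK {seq} received")
                break                           # move on to the next frame
            print("  timeout -> retransmit")
        seq ^= 1                                # alternate the sequence number

stop_and_wait_arq(["frame-A", "frame-B"])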
Sliding window protocols:
1. Go-Back-N ARQ
2. Selective Repeat ARQ
Go-Back-N ARQ
Go-Back-N ARQ protocol is also known as Go-Back-N Automatic Repeat Request. It is a data
link layer protocol that uses a sliding window method. In this, if any frame is corrupted or lost, all
subsequent frames have to be sent again.
The size of the sender window is N in this protocol. For example, Go-Back-8, the size of the sender
window, will be 8. The receiver window size is always 1.
If the receiver receives a corrupted frame, it discards it; the receiver does not accept a corrupted frame. When the timer expires, the sender sends the correct frame again. The design of the Go-Back-N ARQ protocol is shown below.
What exactly happens in GBN we will explain with the help of an example. Consider the diagram given below. We have a sender window size of 4, and assume that we have plenty of sequence numbers just for the sake of explanation. The sender has sent packets 0, 1, 2 and 3. After acknowledging packets 0 and 1, the receiver is now expecting packet 2, and the sender window has also slid forward so that packets 4 and 5 can be transmitted. Now suppose packet 2 is lost in the network. The receiver will discard all the packets the sender transmitted after packet 2, because it is expecting sequence number 2.
On the sender side, for every packet sent there is a timeout timer, and the timer for packet number 2 will expire. Starting from the last transmitted packet 5, the sender goes back to packet number 2 in the current window and retransmits all the packets up to packet number 5. That is why it is called Go-Back-N: the sender has to go back N places from the last transmitted packet in the unacknowledged window, not from the point where the packet was lost.
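The go-back behaviour can be sketched with a toy simulation like the one below; the window size, loss probability and fixed random seed are illustrative assumptions, not part of the protocol.

import random

def go_back_n(num_packets=8, window=4, loss_prob=0.2):
    """Toy Go-Back-N sender/receiver over an unreliable one-way channel.
    The receiver only accepts the packet it expects (receiver window = 1)
    and discards everything else; after a timeout the sender goes back to
    the first unacknowledged packet and resends the whole window."""
    random.seed(1)                         # fixed seed so the run is repeatable
    base = 0                               # first unacknowledged packet
    while base < num_packets:
        expected = base
        last = min(base + window, num_packets)
        for seq in range(base, last):      # send everything the window allows
            lost = random.random() < loss_prob
            print(f"send {seq}" + (" (lost)" if lost else ""))
            if not lost and seq == expected:
                print(f"  ack {seq}")      # in-order packet: receiver slides
                expected += 1
        if expected < last:
            print(f"timeout -> go back to {expected} and resend the window")
        base = expected                    # cumulative ACKs advance the sender window

go_back_n()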
Selective Repeat ARQ
If the receiver receives a corrupt frame, it does not simply discard it; it sends a negative acknowledgment to the sender, and the sender resends that frame as soon as it receives the negative acknowledgment, without waiting for any time-out. The design of the Selective Repeat ARQ protocol is shown below.
Go-Back-N ARQ: If a frame is corrupted or lost, all subsequent frames have to be sent again.
Selective Repeat ARQ: Only the frame that is corrupted or lost is sent again.
Types of Aloha
Pure ALOHA
When a station sends data, it waits for an acknowledgement. If the acknowledgement does not arrive within the allotted time, the station waits for a random amount of time called the back-off time (Tb) and re-sends the data. Since different stations wait for different amounts of time, the probability of further collisions decreases.
Slotted ALOHA
It is similar to pure aloha, except that we divide time into slots and sending of data is allowed
only at the beginning of these slots. If a station misses out the allowed time, it must wait for the
next slot. This reduces the probability of collision.
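A small sketch of the retransmission loop shared by both ALOHA variants is given below; the send_attempt callback and the binary exponential choice of back-off are assumptions made for illustration (the text above only requires the back-off to be random).

import random

def aloha_transmit(send_attempt, max_retries=8, slot_time=1.0, slotted=True):
    """Toy ALOHA retransmission loop. send_attempt(t) is a hypothetical
    callback that tries to transmit at time t and returns True if an ACK
    came back (no collision). After each failure the station backs off a
    random number of slot times (binary exponential back-off is assumed);
    with slotted=True each retry is also aligned to the next slot boundary."""
    t = 0.0
    for k in range(max_retries):
        if slotted:
            t = (int(t / slot_time) + 1) * slot_time   # wait for the next slot
        if send_attempt(t):
            return True                                # frame got through
        t += random.randint(1, 2 ** (k + 1)) * slot_time  # back-off time Tb
    return False                                       # give up after max_retries

# a channel that collides on the first two attempts, then succeeds
attempts = iter([False, False, True])
print(aloha_transmit(lambda t: next(attempts)))        # True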
CSMA
Carrier Sense Multiple Access ensures fewer collisions as the station is required to first sense
the medium (for idle or busy) before transmitting data. If it is idle then it sends data, otherwise
it waits till the channel becomes idle. However, there is still a chance of collision in CSMA due to propagation delay. For example, if station A wants to send data, it will first sense the medium and, finding the channel idle, start sending data. However, before the first bit of A's data has propagated to station B, station B may also sense the medium, find it idle, and start sending its own data. This results in a collision between the data from stations A and B.
CSMA Access Modes
1-Persistent: The node senses the channel; if it is idle, it sends the data, otherwise it keeps checking the medium continuously and transmits unconditionally (with probability 1) as soon as the channel becomes idle.
Non-Persistent: The node senses the channel; if it is idle, it sends the data, otherwise it checks the medium again after a random amount of time (not continuously) and transmits when it is found idle.
P-Persistent: The node senses the medium; if it is idle, it sends the data with probability p. If the data is not transmitted (probability 1-p), it waits for some time and checks the medium again; if the medium is then found idle, it again sends with probability p. This repeats until the frame is sent. It is used in Wi-Fi and packet radio systems.
O-Persistent: The transmission order of the nodes is decided beforehand and transmission occurs in that order. If the medium is idle, each node waits for its assigned turn to send data.
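A compact sketch of the 1-persistent, non-persistent and p-persistent modes follows (O-persistent is omitted); channel_idle and transmit are hypothetical callbacks standing in for carrier sensing and the physical layer.

import itertools, random, time

def csma_send(channel_idle, transmit, mode="1", p=0.5, slot=0.01):
    """Sketch of the CSMA persistence modes described above. channel_idle()
    and transmit() are hypothetical callbacks supplied by the caller: the
    first senses the medium, the second puts the frame on the wire."""
    while True:
        if mode == "non":                        # Non-persistent
            if channel_idle():
                return transmit()
            time.sleep(random.uniform(0, 10 * slot))  # re-sense after a random wait
        else:
            while not channel_idle():            # sense continuously until idle
                time.sleep(slot)
            if mode == "1" or random.random() < p:
                return transmit()                # 1-persistent sends unconditionally
            time.sleep(slot)                     # p-persistent: defer one slot, sense again

# toy medium that is busy twice, then idle forever
states = itertools.chain([False, False], itertools.repeat(True))
csma_send(lambda: next(states), lambda: print("frame sent"), mode="p", p=0.9)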
CSMA/CD
Carrier Sense Multiple Access with Collision Detection. Stations can terminate the transmission of data as soon as a collision is detected. For more details, refer to Efficiency of CSMA/CD.
CSMA/CA
Carrier Sense Multiple Access with Collision Avoidance. The process of collision detection involves the sender receiving acknowledgement signals. If there is just one signal (its own), the data was sent successfully; but if there are two signals (its own and the one with which it has collided), a collision has occurred. To distinguish between these two cases, the collision must have a significant impact on the received signal. This is not the case in wireless networks, where the colliding signal can be too weak for the sender to detect, so CSMA/CA is used there.
CSMA/CA Avoids Collision By
Interframe Space: The station waits for the medium to become idle and, if it is found idle, does not send data immediately (to avoid collisions due to propagation delay); rather, it waits for a period of time called the Interframe Space (IFS). After this time, it again checks whether the medium is idle. The IFS duration depends on the priority of the station.
Contention Window: The contention window is an amount of time divided into slots. If the sender is ready to send data, it chooses a random number of slots as its wait time, and this number doubles every time the medium is not found idle. If the medium is found busy, the station does not restart the entire process; rather, it resumes the back-off countdown when the channel is found idle again.
Acknowledgement: The sender re-transmits the data if an acknowledgement is not received before the time-out.
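The three mechanisms can be combined into a sketch like the following; channel_idle, transmit and ack_received are hypothetical callbacks, and the IFS, slot and contention-window values are illustrative.

import random, time

def csma_ca_send(channel_idle, transmit, ack_received, ifs=0.01, slot=0.01,
                 cw_min=2, cw_max=32, max_attempts=5):
    """Sketch of the three CSMA/CA mechanisms above: wait IFS once the medium
    is idle, count down a random back-off drawn from the contention window
    (doubling it after every failed attempt), transmit, then wait for the ACK."""
    cw = cw_min
    for attempt in range(max_attempts):
        while not channel_idle():
            time.sleep(slot)                 # keep sensing until the medium is idle
        time.sleep(ifs)                      # Interframe Space (IFS)
        backoff = random.randint(0, cw - 1)  # slots drawn from the contention window
        while backoff > 0:
            if channel_idle():
                backoff -= 1                 # count down only while the medium is idle
            time.sleep(slot)                 # otherwise the back-off countdown is frozen
        transmit()
        if ack_received():                   # acknowledgement before time-out?
            return True
        cw = min(2 * cw, cw_max)             # no ACK: double the contention window
    return False

# toy run: medium always idle, ACK arrives on the second attempt
acks = iter([False, True])
print(csma_ca_send(lambda: True, lambda: print("frame sent"), lambda: next(acks)))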
2. Controlled Access
Controlled access protocols ensure that only one device uses the network at a time. Think of it
like taking turns in a conversation so everyone can speak without talking over each other.
In this method, the data is sent by the station that is approved by all other stations. For further details, refer to Controlled Access Protocols.
3. Channelization
In channelization, the available bandwidth of the link is shared in time, frequency, or code among multiple stations so that they can access the channel simultaneously.
Frequency Division Multiple Access (FDMA) – The available bandwidth is divided
into equal bands so that each station can be allocated its own band. Guard bands are
also added so that no two bands overlap to avoid crosstalk and noise.
Time Division Multiple Access (TDMA) – In this, the bandwidth is shared between
multiple stations. To avoid collision time is divided into slots and stations are allotted
these slots to transmit data. However, there is an overhead of synchronization as each
station needs to know its time slot. This is resolved by adding synchronization bits to
each slot. Another issue with TDMA is propagation delay which is resolved by
addition of guard bands.
Code Division Multiple Access (CDMA) – One channel carries all transmissions simultaneously; there is no division of bandwidth or of time. Instead, each station is assigned a unique code (chip sequence), and these codes keep the simultaneous transmissions separable at the receivers.
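A small numeric sketch of the idea, using four orthogonal Walsh codes; the station names, codes and bits are illustrative, and the decision rule is simplified to the sign of the inner product.

# Four mutually orthogonal chip sequences (Walsh codes).
CODES = {
    "A": [+1, +1, +1, +1],
    "B": [+1, -1, +1, -1],
    "C": [+1, +1, -1, -1],
    "D": [+1, -1, -1, +1],
}

def cdma_channel(bits):
    """bits maps station -> data bit (0/1). Each station spreads its bit
    (+1 for 1, -1 for 0) over its own code; the shared channel simply adds
    all the spread signals chip by chip."""
    signals = [[(1 if b else -1) * c for c in CODES[s]] for s, b in bits.items()]
    return [sum(chips) for chips in zip(*signals)]

def cdma_decode(channel, station):
    """Recover one station's bit by taking the inner product with its code."""
    inner = sum(x * c for x, c in zip(channel, CODES[station]))
    return 1 if inner > 0 else 0

combined = cdma_channel({"A": 1, "B": 0, "C": 1, "D": 0})
print([cdma_decode(combined, s) for s in "ABCD"])   # [1, 0, 1, 0]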
Spatial Division Multiple Access (SDMA) – SDMA uses multiple antennas at the transmitter
and receiver to separate the signals of multiple users that are located in different spatial
directions. This technique is commonly used in MIMO (Multiple-Input, Multiple-Output)
wireless communication systems.
Advantages of SDMA
Uses the frequency band effectively
The overall signal quality is improved
The overall data rate is increased
Disadvantages of SDMA
It is complex to implement
It requires accurate information about the channel
Features of Multiple Access Protocols
Contention-Based Access: Multiple access protocols are typically contention-based,
meaning that multiple devices compete for access to the communication channel. This
can lead to collisions if two or more devices transmit at the same time, which can
result in data loss and decreased network performance.
Carrier Sense Multiple Access (CSMA): CSMA is a widely used multiple access
protocol in which devices listen for carrier signals on the communication channel
before transmitting. If a carrier signal is detected, the device waits for a random
amount of time before attempting to transmit to reduce the likelihood of collisions.
Collision Detection (CD): CD is a feature of some multiple access protocols that
allows devices to detect when a collision has occurred and take appropriate action,
such as backing off and retrying the transmission.
Collision Avoidance (CA): CA is a feature of some multiple access protocols that
attempts to avoid collisions by assigning time slots to devices for transmission.
Token Passing: Token passing is a multiple access protocol in which devices pass a
special token between each other to gain access to the communication channel.
Devices can only transmit data when they hold the token, which ensures that only one
device can transmit at a time.
Bandwidth Utilization: Multiple access protocols can affect the overall bandwidth
utilization of a network. For example, contention-based protocols may result in lower
bandwidth utilization due to collisions, while token passing protocols may result in
higher bandwidth utilization due to the controlled access to the communication
channel.
LAN Switching:
LAN stands for Local-area Network. It is a computer network that covers a relatively small area
such as within a building or campus of up to a few kilometers in size. LANs are generally used
to connect personal computers and workstations in company offices to share common resources,
like printers, and exchange information.
LAN switching is a technology that promises to increase the efficiency of local area networks and solve the current bandwidth problems. Examples of LAN technologies are as follows:
Wired LAN: Ethernet, Hub, Switch
Wireless LAN: Wi-Fi
Framing:
A frame is the unit of transmission at the data link layer. At the data link layer, framing extracts the message from the sender and delivers it to the receiver by adding the sender's and receiver's addresses. The advantage of using frames is that the data is broken up into recoverable chunks that can easily be checked for corruption.
The process of dividing the data into frames and reassembling it is transparent to the user and is handled by the data link layer.
Framing is an important aspect of data link layer protocol design because it allows the
transmission of data to be organized and controlled, ensuring that the data is delivered accurately
and efficiently.
Problems in Framing
Detecting the start of the frame: When a frame is transmitted, every station must be able to detect it. A station detects frames by looking for a special sequence of bits that marks the beginning of the frame, i.e., the SFD (Starting Frame Delimiter).
How does the station detect a frame: Every station listens to the link for the SFD pattern through a sequential circuit. If the SFD is detected, the sequential circuit alerts the station, which then checks the destination address to accept or reject the frame.
Detecting end of frame: When to stop reading the frame.
Handling errors: Framing errors may occur due to noise or other transmission
errors, which can cause a station to misinterpret the frame. Therefore, error detection
and correction mechanisms, such as cyclic redundancy check (CRC), are used to
ensure the integrity of the frame.
Framing overhead: Every frame has a header and a trailer that contains control
information such as source and destination address, error detection code, and other
protocol-related information. This overhead reduces the available bandwidth for data
transmission, especially for small-sized frames.
Framing incompatibility: Different networking devices and protocols may use
different framing methods, which can lead to framing incompatibility issues. For
example, if a device using one framing method sends data to a device using a different
framing method, the receiving device may not be able to correctly interpret the frame.
Framing synchronization: Stations must be synchronized with each other to avoid
collisions and ensure reliable communication. Synchronization requires that all
stations agree on the frame boundaries and timing, which can be challenging in
complex networks with many devices and varying traffic loads.
Framing efficiency: Framing should be designed to minimize the amount of data
overhead while maximizing the available bandwidth for data transmission. Inefficient
framing methods can lead to lower network performance and higher latency.
Types of framing
There are two types of framing:
1. Fixed-size: The frame is of fixed size and there is no need to provide boundaries to the frame,
the length of the frame itself acts as a delimiter.
Drawback: It suffers from internal fragmentation if the data size is less than the
frame size
Solution: Padding
2. Variable size: In this case, there is a need to define the end of the frame as well as the beginning of the next frame, to distinguish one frame from the next. This can be done in two ways:
1. Length field – We can introduce a length field in the frame to indicate the length of
the frame. Used in Ethernet (802.3). The problem with this is that sometimes the
length field might get corrupted.
2. End Delimiter (ED) – We can introduce an ED (pattern) to indicate the end of the
frame. Used in Token Ring. The problem with this is that ED can occur in the data.
This can be solved by:
1. Character/Byte Stuffing: Used when frames consist of characters. If data contains
ED, then, a byte is stuffed into data to differentiate it from ED.
Let ED = "$". If the data contains '$' anywhere, it can be escaped using the '\O' character. If the data contains '\O$', then '\O\O\O$' is used ('$' is escaped using '\O', and '\O' is escaped using '\O').
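A short sketch of this character/byte stuffing convention, with '$' as the ED and '\O' as the escape sequence exactly as in the note above; the function names are illustrative.

def byte_stuff(payload, flag="$", esc="\\O"):
    """Escape the escape sequence first, then the delimiter, exactly as in
    the note above: '$' becomes '\\O$' and '\\O$' becomes '\\O\\O\\O$'."""
    return payload.replace(esc, esc + esc).replace(flag, esc + flag)

def byte_unstuff(stuffed, flag="$", esc="\\O"):
    """Undo the stuffing with a left-to-right scan."""
    out, i = [], 0
    while i < len(stuffed):
        if stuffed.startswith(esc, i):       # an escaped unit follows
            i += len(esc)
            if stuffed.startswith(esc, i):   # escaped escape sequence
                out.append(esc); i += len(esc)
            else:                            # escaped delimiter
                out.append(stuffed[i]); i += 1
        else:
            out.append(stuffed[i]); i += 1
    return "".join(out)

assert byte_stuff("pay$load") == "pay\\O$load"
assert byte_stuff("pay\\O$load") == "pay\\O\\O\\O$load"
assert byte_unstuff(byte_stuff("pay\\O$load")) == "pay\\O$load"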
2. Bit Stuffing: Used when frames consist of a bit stream. If the data contains a bit pattern that could be mistaken for the ED, extra bits are stuffed into the data to break the pattern, and the receiver removes the stuffed bits after the frame has been delimited.
Examples:
If Data –> 011100011110 and ED –> 0111 then, find data after bit stuffing.
--> 011010001101100
If Data –> 110001001 and ED –> 1000 then, find data after bit stuffing?
--> 11001010011
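A sketch that reproduces both answers above, assuming the stuffing convention implied by the examples: when the ED ends with a run of k identical bits, the opposite bit is stuffed after every run of k-1 such bits in the data.

def bit_stuff(data, ed):
    """Bit stuffing under the convention of the two examples above: if the
    end delimiter ED ends with a run of k identical bits b, insert the
    opposite bit after every run of (k - 1) consecutive b's in the data,
    so the delimiter pattern can never appear inside the frame body."""
    b = ed[-1]                               # bit the delimiter ends with
    k = len(ed) - len(ed.rstrip(b))          # length of that trailing run
    stuff = "0" if b == "1" else "1"
    out, run = [], 0
    for bit in data:
        out.append(bit)
        run = run + 1 if bit == b else 0
        if run == k - 1:                     # one more b would complete the run
            out.append(stuff)                # so break the pattern here
            run = 0
    return "".join(out)

assert bit_stuff("011100011110", "0111") == "011010001101100"
assert bit_stuff("110001001", "1000") == "11001010011"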
Framing in the Data Link Layer also presents some challenges, which include:
Variable frame length: The length of frames can vary depending on the data being transmitted,
which can lead to inefficiencies in transmission. To address this issue, protocols such as HDLC
and PPP use a flag sequence to mark the start and end of each frame.
Bit stuffing: Bit stuffing is a technique used to prevent data from being interpreted as control
characters by inserting extra bits into the data stream. However, bit stuffing can lead to issues
with synchronization and increase the overhead of the transmission.
Synchronization: Synchronization is critical for ensuring that data frames are transmitted and
received correctly. However, synchronization can be challenging, particularly in high-speed
networks where frames are transmitted rapidly.
Error detection: Data Link Layer protocols use various techniques to detect errors in the
transmitted data, such as checksums and CRCs. However, these techniques are not foolproof and
can miss some types of errors.
Efficiency: Efficient use of available bandwidth is critical for ensuring that data is transmitted
quickly and reliably. However, the overhead associated with framing and error detection can
reduce the overall efficiency of the transmission.
Link virtualization
Link virtualization is a technique that is used in computer networking to create virtual links or
connections between different devices. This technique is often used in the context of the link layer,
which is the second layer of the OSI (Open Systems Interconnection) model.
Link virtualization is a type of network virtualization that involves virtualizing Internet Protocol
(IP) routing, forwarding, and addressing schemes. Network virtualization is a process that
combines hardware and software network resources into a single virtual network. This allows you
to create multiple virtual networks that share the same physical infrastructure.
Flexibility
You can group or separate virtual networks as needed, or connect virtual machines (VMs)
however you want.
Speed
You can spin up logical networks more quickly in response to business requirements.
Control
You can improve control over your network.
Software testing
You can test software in a simulated network environment without having to physically test
it on all possible hardware or system software.
Link virtualization in the context of the network link layer refers to the abstraction and management of physical network links to create virtual links. This concept is particularly important in modern networking environments where flexibility, scalability, and efficient resource utilization are crucial.