Error Correction
The data link layer performs many tasks on behalf of the upper layers. These are:
Framing: The data-link layer takes packets from the network layer and encapsulates them
into frames. Then, it sends each frame bit-by-bit over the hardware. At the receiver's end,
the data link layer picks up signals from the hardware and assembles them into frames.
Addressing: The data-link layer provides a layer-2 hardware addressing mechanism. The
hardware address is assumed to be unique on the link. It is encoded into the hardware at
the time of manufacturing.
Synchronization: When data frames are sent on the link, both machines must be
synchronized for the transfer to take place.
Error Control: Signals may encounter problems in transit and bits may be flipped.
These errors are detected, and an attempt is made to recover the actual data bits. The
layer also provides an error reporting mechanism to the sender.
Flow Control: Stations on the same link may have different speeds or capacities. The data-link
layer provides flow control so that both machines can exchange data at the same
speed.
Multi-Access: When hosts on a shared link try to transfer data, there is a high
probability of collision. The data-link layer provides mechanisms such as CSMA/CD to
give multiple systems the capability of accessing a shared medium.
Framing
The first service provided by the data link layer is framing. The DLL at each node needs to
encapsulate the datagram (the packet received from the network layer) in a frame before sending it to
the next node. The node also needs to decapsulate the datagram from the frame received on the logical
channel.
“A packet at the data link layer is normally called a frame”.
The process of dividing the data into frames and reassembling it is transparent to the user and is
handled by the data link layer.
Framing is an important aspect of data link layer protocol design because it allows the transmission
of data to be organized and controlled, ensuring that the data is delivered accurately and
efficiently.
Error Detection and Correction
There are many reasons, such as noise and cross-talk, which may cause data to get corrupted
during transmission. The upper layers work on some generalized view of the network
architecture and are not aware of actual hardware data processing. Hence, the upper layers
expect error-free transmission between the systems. Most applications would not
function as expected if they received erroneous data. Applications such as voice and video
may not be that affected and may still function well with some errors.
The data-link layer uses error control mechanisms to ensure that frames (data bit streams)
are transmitted with a certain level of accuracy. But to understand how errors are controlled, it
is essential to know what types of errors may occur.
Types of Errors
Errors can be of three types, namely single bit errors, multiple bit errors, and burst errors.
Single-bit error − In the received frame, only one bit has been corrupted, i.e. either
changed from 0 to 1 or from 1 to 0.
Multiple-bit error − In the received frame, more than one bit is corrupted.
Burst error − In the received frame, more than one consecutive bit is corrupted.
Error Control
Error control can be done in two ways
Error detection − Error detection involves checking whether any error has occurred or
not. The number of error bits and the type of error do not matter.
Error correction − Error correction involves ascertaining the exact number of bits that
have been corrupted and the location of the corrupted bits.
For both error detection and error correction, the sender needs to send some additional bits
along with the data bits. The receiver performs necessary checks based upon the additional
redundant bits. If it finds that the data is free from errors, it removes the redundant bits before
passing the message to the upper layers.
Error Detection Techniques
There are four main techniques for detecting errors in frames: Parity Check, Two-
dimensional Parity Check, Checksum and Cyclic Redundancy Check (CRC).
Parity Check
The parity check is done by adding an extra bit, called the parity bit, to the data to make the
number of 1s either even (in case of even parity) or odd (in case of odd parity).
While creating a frame, the sender counts the number of 1s in it and adds the parity bit in the
following way:
In case of even parity: if the number of 1s is even, the parity bit value is 0. If the number of 1s
is odd, the parity bit value is 1.
In case of odd parity: if the number of 1s is odd, the parity bit value is 0. If the number of 1s is
even, the parity bit value is 1.
On receiving a frame, the receiver counts the number of 1s in it. In case of even parity check,
if the count of 1s is even, the frame is accepted, otherwise, it is rejected. A similar rule is
adopted for odd parity check.
The parity check is suitable for single bit error detection only.
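As an illustration (not part of the original notes), a minimal Python sketch of even parity: the sender appends a parity bit, and the receiver accepts the frame only if the total number of 1s is even.

def parity_bit(bits, even=True):
    # bits is a bit string such as "1011010"
    ones = bits.count("1")
    if even:
        return "0" if ones % 2 == 0 else "1"
    return "1" if ones % 2 == 0 else "0"

data = "1011010"                   # four 1s
frame = data + parity_bit(data)    # even parity -> append "0"
# Receiver: accept only if the total number of 1s is even
received_ok = frame.count("1") % 2 == 0
print(frame, received_ok)          # 10110100 True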
Two-dimensional Parity check
Parity check bits are calculated for each row, which is equivalent to a simple parity check bit.
Parity check bits are also calculated for all columns, then both are sent along with the data. At
the receiving end these are compared with the parity bits calculated on the received data.
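A small Python sketch of the same idea, using a hypothetical 4x4 block and even parity (illustrative only): row and column parity bits are computed at the sender and recomputed at the receiver for comparison.

def two_d_parity(rows):
    # rows: list of equal-length bit strings (one string per row)
    row_par = [str(r.count("1") % 2) for r in rows]
    col_par = "".join(str(sum(int(r[c]) for r in rows) % 2)
                      for c in range(len(rows[0])))
    return row_par, col_par

rows = ["1100", "1010", "0111", "0001"]
print(two_d_parity(rows))   # (['0', '0', '1', '1'], '0000')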
Checksum
In the checksum error detection scheme, the data is divided into k segments, each of m bits.
At the sender's end, the segments are added using 1's complement arithmetic to get the
sum. The sum is complemented to get the checksum.
The checksum segment is sent along with the data segments.
At the receiver's end, all received segments are added using 1's complement arithmetic to
get the sum. The sum is complemented.
If the result is zero, the received data is accepted; otherwise it is discarded.
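A minimal Python sketch of this scheme, using hypothetical 8-bit segments (not from the original text): the sender adds the segments with end-around carry, complements the sum, and the receiver checks that the segments plus the checksum sum to all 1s.

def ones_complement_sum(segments, bits=8):
    mask = (1 << bits) - 1
    total = 0
    for s in segments:
        total += s
        # end-around carry for 1's complement arithmetic
        total = (total & mask) + (total >> bits)
    return total

def make_checksum(segments, bits=8):
    return (~ones_complement_sum(segments, bits)) & ((1 << bits) - 1)

segs = [0b10011001, 0b11100010, 0b00100100, 0b10000100]
chk = make_checksum(segs)
# Receiver: complement of the sum of all segments plus checksum must be 0
ok = make_checksum(segs + [chk]) == 0
print(bin(chk), ok)          # 0b11011010 True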
Cyclic Redundancy Check (CRC)
Cyclic Redundancy Check (CRC) involves binary division of the data bits being sent by a
predetermined divisor agreed upon by the communicating systems. The divisor is generated
using polynomials.
Here, the sender performs binary division of the data segment by the divisor. It then
appends the remainder called CRC bits to the end of the data segment. This makes the
resulting data unit exactly divisible by the divisor.
The receiver divides the incoming data unit by the divisor. If there is no remainder, the
data unit is assumed to be correct and is accepted. Otherwise, it is understood that the
data is corrupted and is therefore rejected.
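The division is done modulo 2 (XOR, without borrows). A minimal Python sketch with a hypothetical data word and divisor (illustrative only, not from the original text):

def mod2_div(dividend, divisor):
    # modulo-2 (XOR) long division; returns the remainder as a bit string
    buf = list(dividend)
    n = len(divisor)
    for i in range(len(dividend) - n + 1):
        if buf[i] == "1":
            for j in range(n):
                buf[i + j] = str(int(buf[i + j]) ^ int(divisor[j]))
    return "".join(buf[-(n - 1):])

data, divisor = "100100", "1101"           # divisor from x^3 + x^2 + 1
crc = mod2_div(data + "0" * (len(divisor) - 1), divisor)
codeword = data + crc                      # sender appends the CRC bits
print(crc, mod2_div(codeword, divisor))    # 001, receiver remainder is 000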
Error Correction Techniques
Error correction techniques find out the exact number of bits that have been corrupted as
well as their locations. Some principal techniques are:
Hamming Codes
Binary Convolution Codes
Reed-Solomon Codes
Low-Density Parity-Check Codes
Hamming Codes:
Hamming code is a set of error-correction codes that can be used to detect and correct the
errors that can occur when data is moved or stored from the sender to the receiver. It
is a technique developed by R.W. Hamming for error correction.
Redundant bits –
Redundant bits are extra binary bits that are generated and added to the information-carrying
bits of a data transfer to ensure that no bits were lost during the data transfer.
The number of redundant bits can be calculated using the following formula:
2^r ≥ m + r + 1
where r = number of redundant bits, m = number of data bits.
Suppose the number of data bits is 7; then the number of redundant bits can be calculated
using:
2^4 ≥ 7 + 4 + 1, i.e. 16 ≥ 12
Thus, the number of redundant bits = 4.
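The same formula can be evaluated in a few lines of Python (illustrative only): find the smallest r that satisfies 2^r ≥ m + r + 1 for m data bits.

def redundant_bits(m):
    # smallest r with 2**r >= m + r + 1
    r = 0
    while 2 ** r < m + r + 1:
        r += 1
    return r

print(redundant_bits(7))   # 4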
Parity bits –
A parity bit is a bit appended to a group of binary bits to ensure that the total number of 1's
in the data is even or odd. Parity bits are used for error detection. There are two types of
parity bits: even parity bits and odd parity bits. For an 11-bit code word (7 data bits and 4
redundant bits), the redundant bits are calculated as follows:
1. The R1 bit is calculated using parity check at all the bit positions whose binary
representation includes a 1 in the least significant position.
R1: bits 1, 3, 5, 7, 9, 11
To find the redundant bit R1, we check for even parity. Since the total number of 1's in
all the bit positions corresponding to R1 is even, the value of R1 (the parity bit's
value) = 0.
2. The R2 bit is calculated using parity check at all the bit positions whose binary
representation includes a 1 in the second position from the least significant bit.
R2: bits 2, 3, 6, 7, 10, 11
To find the redundant bit R2, we check for even parity. Since the total number of 1's in
all the bit positions corresponding to R2 is odd, the value of R2 (the parity bit's value) = 1.
3. The R4 bit is calculated using parity check at all the bit positions whose binary
representation includes a 1 in the third position from the least significant bit.
R4: bits 4, 5, 6, 7
To find the redundant bit R4, we check for even parity. Since the total number of 1's in
all the bit positions corresponding to R4 is odd, the value of R4 (the parity bit's value) = 1.
4. The R8 bit is calculated using parity check at all the bit positions whose binary
representation includes a 1 in the fourth position from the least significant bit.
R8: bits 8, 9, 10, 11
To find the redundant bit R8, we check for even parity. Since the total number of 1's in
all the bit positions corresponding to R8 is even, the value of R8 (the parity bit's
value) = 0.
Thus, the data transferred (a 7-bit example) is:

Position:  7    6    5    4    3    2    1
Bit:       D7   D6   D5   P4   D3   P2   P1
Value:     1    1    0    0    1    1    0

Suppose bit 6 (D6) is flipped during transmission, so the received word is:

Position:  7    6    5    4    3    2    1
Bit:       D7   D6   D5   P4   D3   P2   P1
Value:     1    0    0    0    1    1    0

The receiver recomputes the parity checks on the received word (each group consists of the
parity bit together with the data bits it covers):
P1 = 0, data bits (D3, D5, D7) = (1, 0, 1): even number of 1's in the group, so check bit P1 = 0
P2 = 1, data bits (D3, D6, D7) = (1, 0, 1): odd number of 1's in the group, so check bit P2 = 1
P4 = 0, data bits (D5, D6, D7) = (0, 0, 1): odd number of 1's in the group, so check bit P4 = 1
Error Correction using Hamming codes:
Reading the check bits as P4 P2 P1 with weights 4, 2 and 1 gives the position of the error:
P4 P2 P1 = 1 1 0 = (1*4) + (1*2) + (0*1) = 4 + 2 + 0 = 6, so there is an error in the 6th bit.
Example: The 7-bit Hamming code word received by a receiver is 1011011. Assuming
even parity, state whether the received code is correct or wrong. If wrong, locate the erroneous
bit and correct it.
Position:  7    6    5    4    3    2    1
Bit:       D7   D6   D5   P4   D3   P2   P1
Value:     1    0    1    1    0    1    1

• P1 checks D3, D5, D7
• P2 checks D3, D6, D7
• P4 checks D5, D6, D7
P1 = 1, data bits (D3, D5, D7) = (0, 1, 1): odd number of 1's in the group, so check bit = 1
P2 = 1, data bits (D3, D6, D7) = (0, 0, 1): even number of 1's in the group, so check bit = 0
P4 = 1, data bits (D5, D6, D7) = (1, 0, 1): odd number of 1's in the group, so check bit = 1
P4 P2 P1 = 1 0 1 = (1*4) + (0*2) + (1*1) = 5, so the 5th bit is in error.
Corrected data = 1001011
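The same procedure can be sketched in Python for the 7-bit code used above (positions numbered 1 to 7, even parity; this is an illustrative sketch, not part of the original notes):

def hamming7_syndrome(word):
    # word: bit string written as positions 7..1 (D7 D6 D5 P4 D3 P2 P1)
    bit = {pos: int(word[7 - pos]) for pos in range(1, 8)}
    p1 = (bit[1] + bit[3] + bit[5] + bit[7]) % 2
    p2 = (bit[2] + bit[3] + bit[6] + bit[7]) % 2
    p4 = (bit[4] + bit[5] + bit[6] + bit[7]) % 2
    return p4 * 4 + p2 * 2 + p1            # 0 means no error

def hamming7_correct(word):
    pos = hamming7_syndrome(word)
    if pos == 0:
        return word
    i = 7 - pos                             # string index of the erroneous bit
    return word[:i] + ("1" if word[i] == "0" else "0") + word[i + 1:]

received = "1011011"                        # the example from the notes
print(hamming7_syndrome(received))          # 5 -> error at bit position 5
print(hamming7_correct(received))           # 1001011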
Reed Solomon Code:
Reed–Solomon codes are a group of error-correcting codes that were introduced by Irving S.
Reed and Gustave Solomon in 1960. Reed-Solomon codes are block-based error correcting
codes with a wide range of applications in digital communications and storage. Reed-
Solomon codes are used to correct errors in many systems, including storage devices (such as
CDs and DVDs), wireless and mobile communications, and digital broadcasting.
A Reed - Solomon encoder accepts a block of data and adds redundant bits (parity bits)
before transmitting it over noisy channels. On receiving the data, a decoder corrects the error
depending upon the code characteristics.
• In the physical layer, data transmission involves synchronized transmission of bits from
the source to the destination. The data link layer packs these bits into frames.
• Data-link layer takes the packets from the Network Layer and encapsulates them
into frames.
• Frames are the units of digital transmission particularly in computer networks and
telecommunications
A frame has the following parts −
• Frame Header − It contains the source and the destination addresses of the frame.
• Payload field − It contains the message to be delivered.
• Trailer − It contains the error detection and error correction bits.
• Flag − It marks the beginning and end of the frame.
Problems in Framing:
• Detecting the start of a frame: When a frame is transmitted, every station must be able to
detect it. Stations detect frames by looking out for a special sequence of bits that marks the
beginning of the frame, i.e. the SFD (Starting Frame Delimiter).
• How do stations detect a frame: Every station listens to the link for the SFD pattern through
a sequential circuit. If the SFD is detected, the sequential circuit alerts the station. The station
then checks the destination address to accept or reject the frame.
• Detecting the end of a frame: when to stop reading the frame. This is marked by the EFD
(End Frame Delimiter).
Types of Framing:
Fixed-sized Framing
• Here the size of the frame is fixed and so the frame length acts as delimiter of the
frame. Consequently, it does not require additional boundary bits to identify the start
and end of the frame.
• Example − ATM (Asynchronous Transfer Mode) cells, which are fixed at 53 bytes.
Variable – Sized Framing
• Here, the size of each frame to be transmitted may be different. So additional
mechanisms are kept to mark the end of one frame and the beginning of the next
frame.
• It is used in local area networks.
Two ways to define frame delimiters in variable sized framing are −
• Length Field − Here, a length field is used that determines the size of the frame. It
is used in Ethernet (IEEE 802.3).
• End Delimiter − Here, a pattern is used as a delimiter to determine the size of
frame. It is used in Token Rings. If the pattern occurs in the message, then two
approaches are used to avoid the situation −
Byte Stuffing − A byte is stuffed in the message to differentiate it from the delimiter.
This is also called character-oriented framing.
It is used when frames consist of characters. If the data contains the ED, a byte is stuffed into
the data to differentiate it from the ED. Let ED = "$"; if the data contains '$' anywhere, it can
be escaped using the '\O' character.
If the data contains '\O$', then '\O\O\O$' is used ('$' is escaped using \O and \O is escaped
using \O).
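A minimal Python sketch of byte stuffing, using '$' as the delimiter and a backslash as the escape character (a single-character stand-in for the '\O' escape in the example above; illustrative only):

def byte_stuff(data, flag="$", esc="\\"):
    out = []
    for ch in data:
        if ch in (flag, esc):
            out.append(esc)        # escape any flag or escape byte in the data
        out.append(ch)
    return flag + "".join(out) + flag

def byte_unstuff(frame, flag="$", esc="\\"):
    body, out, i = frame[1:-1], [], 0
    while i < len(body):
        if body[i] == esc:
            i += 1                 # drop the escape, keep the next byte as data
        out.append(body[i])
        i += 1
    return "".join(out)

msg = "pay$me\\now"
frame = byte_stuff(msg)
print(frame)                       # $pay\$me\\now$
print(byte_unstuff(frame) == msg)  # True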
Bit Stuffing − A pattern of bits of arbitrary length is stuffed in the message to differentiate
it from the delimiter. This is also called bit-oriented framing.
Let ED = 01111 and data = 01111.
The sender stuffs a bit to break the pattern, i.e. it inserts a 0 into the data, giving 011101.
The receiver receives the frame.
If the data contains 011101, the receiver removes the 0 and reads the data.
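For the more common HDLC-style convention (flag 01111110, stuff a 0 after five consecutive 1s), a minimal Python sketch might look like this (illustrative; the notes above use a shorter example pattern):

def bit_stuff(bits):
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:               # after five consecutive 1s, insert a 0
            out.append("0")
            run = 0
    return "".join(out)

def bit_unstuff(bits):
    out, run, i = [], 0, 0
    while i < len(bits):
        out.append(bits[i])
        run = run + 1 if bits[i] == "1" else 0
        if run == 5:
            i += 1                 # skip the stuffed 0 that follows five 1s
            run = 0
        i += 1
    return "".join(out)

data = "011111101111110"
stuffed = bit_stuff(data)
print(stuffed)                     # 01111101011111010
print(bit_unstuff(stuffed) == data)  # True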
Data Flow Control:
• It is a technique that ensures the proper flow of data from sender to receiver. It
is essential because the sender may transmit data at a very fast rate while the receiver
cannot receive and process it at that rate. This happens if the receiver has a very high
load of traffic as compared to the sender, or if the receiver has less processing power
than the sender.
• Flow control is basically a technique that allows two stations that are working
and processing at different speeds to communicate with one another.
Flow control in the data link layer simply restricts and coordinates the number of frames
or the amount of data the sender can send before it waits for an acknowledgment from
the receiver.
1. Stop-and-Wait Flow Control:
This method is the easiest and simplest form of flow control. In this method, the
message or data is broken down into multiple frames, and the receiver indicates its
readiness to receive each frame of data. Only when an acknowledgment is received will the sender
send or transfer the next frame.
This process is continued until the sender transmits an EOT (End of Transmission) frame. In this
method, only one frame can be in transmission at a time. It leads to inefficiency, i.e. less
productivity, if the propagation delay is much longer than the transmission delay.
Advantages –
This method is very easy and simple, and each of the frames is checked and
acknowledged well.
It can also be used for noisy channels.
This method is also very accurate.
Disadvantages –
This method is fairly slow.
In this, only one packet or frame can be sent at a time.
It is very inefficient and makes the transmission process very slow.
1.1. Error control - Stop-and-Wait ARQ (Automatic Repeat Request)
Stop-and-Wait ARQ is also known as the alternating bit protocol. It is one of the simplest flow and
error control techniques or mechanisms. This mechanism is generally required in
telecommunications to transmit data or information between two connected devices. The receiver
simply indicates its readiness to receive data for each frame. The sender sends an
information or data packet to the receiver. The sender then stops and waits for an ACK
(Acknowledgment) from the receiver. Further, if the ACK does not arrive within a given time period,
i.e. the time-out, the sender resends the frame and waits for the ACK again. But if the sender receives
the ACK, it will transmit the next data packet to the receiver and then again wait for an ACK from the
receiver. This stop-and-wait process continues until the sender has no data frame or packet left to
send.
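A compact Python sketch of this behaviour, as a simulation over an unreliable channel (the function name, loss probability and retry limit are made up for illustration; this is not a real network API):

import random

def stop_and_wait(frames, loss_prob=0.3, max_tries=10):
    # Sender loop: send a frame, wait for the ACK, resend on timeout
    for seq, frame in enumerate(frames):
        for attempt in range(max_tries):
            ack_received = random.random() > loss_prob   # channel may lose frame or ACK
            if ack_received:
                print(f"frame {seq} acknowledged after {attempt + 1} attempt(s)")
                break
        else:
            raise RuntimeError(f"frame {seq} gave up after {max_tries} attempts")

stop_and_wait(["F0", "F1", "F2"])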
2. Sliding Window Flow Control:
This protocol improves the efficiency of stop and wait protocol by allowing multiple frames to
be transmitted before receiving an acknowledgment.
1. Random Access Protocol: In this, all stations have the same superiority, that is, no station
has more priority than another station. Any station can send data depending on the medium's
state (idle or busy). It has two features:
1. There is no fixed time for sending data
2. There is no fixed sequence of stations sending data
ALOHA
It is designed for wireless LAN (Local Area Network) but can also be used on a shared
medium to transmit data. Using this method, any station can transmit data across the network
whenever a data frame is available for transmission.
Aloha Rules
Whenever data is available for sending over a channel at a station, we use pure Aloha. In pure
Aloha, each station transmits data on the channel without checking whether the channel is
idle or not, so collisions may occur and the data frame can be lost. When a
station transmits a data frame on the channel, it waits for the receiver's
acknowledgment. If the acknowledgment does not arrive within the specified time, the
station assumes the frame has been lost or destroyed, waits for a random amount of time,
called the backoff time (Tb), and then retransmits the frame until all the
data is successfully transmitted to the receiver.
Slotted Aloha is designed to improve on pure Aloha's efficiency, because pure Aloha
has a very high probability of frame collisions. In slotted Aloha, the shared channel is divided
into fixed time intervals called slots. If a station wants to send a frame on the shared
channel, the frame can only be sent at the beginning of a slot, and only one frame is
allowed to be sent in each slot. If a station is unable to send its data at the beginning of
a slot, it has to wait until the beginning of the next slot.
However, the possibility of a collision remains when two or more stations try to send a frame
at the beginning of the same time slot.
CSMA (Carrier Sense Multiple Access)
It is a carrier sense multiple access protocol, based on media access, that senses the traffic on a
channel (idle or busy) before transmitting the data. If the channel is idle, the
station can send data on the channel; otherwise, it must wait until the channel becomes idle.
Hence, it reduces the chances of a collision on the transmission medium.
CSMA/ CD
It is a carrier sense multiple access/collision detection network protocol for transmitting data
frames. The CSMA/CD protocol works with a medium access control layer. It first
senses the shared channel before broadcasting frames, and if the channel is idle, it
transmits a frame and checks whether the transmission was successful. If the frame is
successfully received, the station sends another frame. If any collision is detected in
CSMA/CD, the station sends a jam/stop signal to the shared channel to terminate the data
transmission. After that, it waits for a random time before sending a frame on the channel.
CSMA/ CA
It is a carrier sense multiple access/collision avoidance protocol: a method of reducing data
frame collisions on a shared channel.
Controlled Access
In controlled access, the stations seek information from one another to find which station has
the right to send. It allows only one node to send at a time, to avoid collisions of messages on
the shared medium.
The three controlled-access methods are:
1. Reservation
2. Polling
3. Token Passing
Reservation
In the reservation method, a station needs to make a reservation before sending data.
The time line has two kinds of periods:
1. A reservation interval of fixed length
2. A data transmission period of variable-length frames.
If there are N stations, the reservation interval is divided into N slots, and each station
has one slot.
Suppose station 1 has a frame to send; it transmits a 1 bit during slot 1. No other
station is allowed to transmit during this slot.
In general, the i-th station may announce that it has a frame to send by inserting a 1 bit into
the i-th slot. After all N slots have been checked, each station knows which stations wish to
transmit.
The stations which have reserved their slots transfer their frames in that order.
After the data transmission period, the next reservation interval begins.
Since everyone agrees on who goes next, there will never be any collisions.
As an example, consider a situation with five stations and a five-slot reservation frame.
In the first interval, only stations 1, 3, and 4 have made reservations. In the second
interval, only station 1 has made a reservation.
Polling
The polling process is similar to the roll call performed in class. Just like the teacher, a
controller sends a message to each node in turn.
In this, one device acts as a primary station (controller) and the others are secondary stations.
All data exchanges must be made through the controller.
The message sent by the controller contains the address of the node being selected for
granting access.
Although all nodes receive the message, only the addressed one responds to it and sends
data, if any. If there is no data, usually a "poll reject" (NAK) message is sent back.
Problems include the high overhead of the polling messages and the high dependence on the
reliability of the controller.
Efficiency
Let Tpoll be the time for polling and Tt be the time required for transmission of data. Then,
Efficiency = Tt / (Tt + Tpoll)
Token Passing
In the token passing scheme, the stations are logically connected to each other in the form of a
ring, and access by the stations is governed by tokens.
A token is a special bit pattern or a small message, which circulates from one station to
the next in some predefined order.
In a token ring, the token is passed from one station to the adjacent station in the ring,
whereas in a token bus, each station
uses the bus to send the token to the next station in some predefined order.
In both cases, the token represents permission to send. If a station has a frame queued for
transmission when it receives the token, it can send that frame before it passes the
token to the next station. If it has no queued frame, it simply passes the token.
After sending a frame, each station must wait for all N stations (including itself) to
send the token to their neighbors and the other N − 1 stations to send a frame, if they
have one.
Problems such as duplication of the token, loss of the token, insertion of a new
station, or removal of a station need to be tackled for correct and reliable operation
of this scheme.
Performance
The performance of a token ring can be described by 2 parameters:
1. Delay, which is a measure of the time between when a packet is ready and when it is
delivered. The average time (delay) required to send a token to the next station =
a/N.
2. Throughput, which is a measure of the successful traffic.
Throughput, S = 1/(1 + a/N) for a < 1
and
S = 1/{a(1 + 1/N)} for a > 1,
where N = number of stations,
a = Tp/Tt
(Tp = propagation delay and Tt = transmission delay)
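A small Python helper that evaluates these throughput formulas, with the symbols as defined above (the example values of a and N are made up):

def token_ring_throughput(a, N):
    # a = Tp / Tt (propagation delay / transmission delay), N = number of stations
    if a < 1:
        return 1 / (1 + a / N)
    return 1 / (a * (1 + 1 / N))

print(token_ring_throughput(0.2, 10))   # about 0.980
print(token_ring_throughput(4.0, 10))   # about 0.227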
Channelization Protocols:
Channelization is basically a method that provides multiple access; in it, the
available bandwidth of the link is shared in time, in frequency, or through code
between the different stations.
1. Frequency-Division Multiple Access (FDMA)
With the help of this technique, the available bandwidth is divided into frequency bands.
Each station is allocated a band in order to send its data. In other words, we can say that
each band is reserved for a specific station and it belongs to that station all the time.
Each station makes use of a bandpass filter in order to confine the frequencies
of the transmitter.
In order to prevent station interference, the allocated bands are separated from one
another with the help of small guard bands.
Frequency-division multiple access mainly specifies a predetermined
frequency band for the entire period of communication.
Streams of data can easily be sent using FDMA.
Advantages of FDMA
Given below are some of the benefits of using the FDMA technique:
It is a simple technique and is easy to implement.
All stations can transmit continuously; no synchronization between stations is required.
Disadvantages of FDMA
By using FDMA, the maximum flow rate per channel is fixed and small.
2. Time-Division Multiple Access (TDMA)
Time-division multiple access is another method to access the channel for shared-medium
networks.
With the help of this technique, the stations share the bandwidth of the channel in
time.
A time slot is allocated to each station, during which it can send its data.
Data is transmitted by each station in its assigned time slot.
The main problem with TDMA is achieving synchronization between the
different stations.
When using the TDMA technique, each station needs to know the beginning of its
slot and the location of its slot.
If the stations are spread over a large area, propagation delays occur;
guard times are used to compensate for this.
The data link layer in each station tells its physical layer to use the allocated
time slot.
3. Code-Division Multiple Access (CDMA)
The CDMA technique differs from FDMA because only one channel occupies the
entire bandwidth of the link.
The CDMA technique differs from TDMA because all the stations can send data
simultaneously; there is no timesharing.
The CDMA technique simply means communication with different codes.
In the CDMA technique, there is only one channel, and it carries all the transmissions
simultaneously.
CDMA is mainly based upon coding theory; each station is assigned a
code, which is a sequence of numbers called chips.
The data from the different stations can be transmitted simultaneously, but using
different code languages.
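A minimal Python sketch of the idea with two stations and orthogonal chip sequences (hypothetical Walsh-style codes chosen for illustration only):

# Orthogonal chip sequences assigned to two stations
c1 = [1, 1, 1, 1]
c2 = [1, -1, 1, -1]

def encode(bit, chips):
    d = 1 if bit == 1 else -1          # map bit to +1/-1
    return [d * c for c in chips]

# Both stations transmit at the same time; the channel adds the signals
s1, s2 = encode(1, c1), encode(0, c2)
channel = [a + b for a, b in zip(s1, s2)]

def decode(signal, chips):
    # correlate the channel signal with the station's own chips; the sign gives the bit
    corr = sum(s * c for s, c in zip(signal, chips))
    return 1 if corr > 0 else 0

print(decode(channel, c1), decode(channel, c2))   # 1 0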
Advantages of CDMA
Given below are some of the advantages of using the CDMA technique:
The entire bandwidth of the channel is used efficiently.
No time synchronization between stations is required, and all stations can transmit at the
same time.
Noiseless Channels
There are two protocols defined for noiseless channels, which are as follows −
Simplest Protocol
Stop-and-Wait Protocol
Simplest Protocol
Step 1 − The Simplest Protocol is one that has neither flow nor error control.
Step 2 − It is a unidirectional protocol where data frames travel in one direction, from
the sender to the receiver.
Step 3 − We assume that the receiver can handle any frame it receives with a processing time that
is small enough to be negligible. The data link layer of the receiver immediately removes the header
from the frame and hands the data packet to its network layer, which can also accept the packet
immediately.
Stop-and-Wait Protocol
Step 1 − If the data frames that arrive at the receiver side are faster than they can be processed, the
frames must be stored until their use.
Step 2 − Generally, the receiver does not have enough storage space, especially if it is receiving data
from many sources. This may result in either discarding of frames or denial of service.
Step 3 − To prevent the receiver from becoming overwhelmed with frames, the sender must slow
down. There must be ACK from the receiver to the sender.
Step 4 − In this protocol the sender sends one frame, stops until it receives confirmation from the
receiver, and then sends the next frame.
Step 5 − We still have unidirectional communication for data frames, but auxiliary ACK frames
travel from the other direction. We add flow control to the previous protocol.
Noisy Channels
There are three ARQ protocols for noisy channels, which are as follows −
Stop-and-Wait ARQ
Go-Back-N ARQ
Selective Repeat ARQ
Go-Back-N ARQ
To improve the transmission efficiency, we need more than one frame to be outstanding, to keep the
channel busy while the sender is waiting for acknowledgement.
There are two protocols developed for achieving this goal: Go-Back-N ARQ and Selective
Repeat ARQ.
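As a rough Python sketch of the Go-Back-N idea (a simulation with made-up names; not a real network API), the sender keeps up to a window of unacknowledged frames outstanding and, on a timeout, resends everything from the oldest unacknowledged frame:

def go_back_n(frames, window=4, lost=frozenset({2})):
    # 'lost' marks frames the channel drops on their first transmission (hypothetical)
    base, next_seq, first_try = 0, 0, set(lost)
    while base < len(frames):
        # keep the window full
        while next_seq < base + window and next_seq < len(frames):
            print("send", next_seq)
            next_seq += 1
        if base in first_try:
            first_try.discard(base)
            print("timeout, go back to", base)
            next_seq = base                # resend from the oldest unacknowledged frame
        else:
            print("ack", base)             # cumulative ACK slides the window forward
            base += 1

go_back_n(["F0", "F1", "F2", "F3", "F4", "F5"])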
The code words, which are essentially bit strings, are represented by polynomials whose coefficients
are either 0 or 1. A k-bit word is represented by a polynomial with terms ranging from x^0 to
x^(k-1); the order of such a polynomial is k − 1.
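For illustration (not part of the original text), a small Python sketch that writes a bit string as its polynomial; for example, the code word 100101 maps to x^5 + x^2 + 1:

def to_polynomial(bits):
    # the leftmost bit is the coefficient of x^(k-1)
    k = len(bits)
    terms = []
    for i, b in enumerate(bits):
        if b == "1":
            power = k - 1 - i
            terms.append("1" if power == 0 else "x" if power == 1 else f"x^{power}")
    return " + ".join(terms)

print(to_polynomial("100101"))   # x^5 + x^2 + 1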