Cn Unit 2 Notes Complete

The document provides an overview of the Data Link Layer in the OSI model, detailing its functions such as framing, reliable delivery, flow control, error control, synchronization, access control, and physical addressing. It explains the structure of frames, types of framing, error detection techniques, and flow control protocols. Additionally, it describes elementary data link protocols for both noiseless and noisy channels.



Unit 2
Data Link Layer

Packet: The basic unit of communication between a source and a destination in a network is a
packet.

Fragment: A packet may be divided into smaller pieces of data called fragments.

Frame: A frame is a small part of a message on the network. A packet is the unit of data used at the
network layer; a frame is the unit of data used at the data link layer of the OSI model.

Datagram: A datagram is a self-contained unit of data transfer in networking.

Data Link Layer

o In the OSI model, the data link layer is the 6th layer from the top and the 2nd layer from the bottom.
o The main responsibility of the data link layer is to transfer a datagram across an individual
link.
o The data link layer converts the data stream into signals bit by bit and sends them over the
underlying hardware. At the receiving end, it picks up the data from the hardware in the form of
electrical signals, assembles it into a recognizable frame format, and hands it over to the
upper layer.

The data link layer has two sub-layers:


o Logical Link Control (LLC): deals with protocols, flow control, and error control.
o Media Access Control (MAC): deals with actual control of the medium.
Functionality of Data-link Layer
The data link layer performs many tasks on behalf of the upper layer. These are:

1. Framing
2. Reliable Delivery
3. Flow control
4. Error control


5. Synchronization
6. Access Control
7. Physical Addressing

1. Framing
The data link layer takes packets from the network layer and encapsulates them into frames. Then
it sends each frame bit by bit over the hardware. At the receiver's end, the data link layer picks up
the signals from the hardware and assembles them into frames.
2. Reliable delivery: The data link layer provides a reliable delivery service, i.e., it transmits the
network layer datagram without error. Reliable delivery is accomplished with retransmissions
and acknowledgements.
3. Flow Control
Stations on the same link may have different speeds or capacities. The data link layer provides
flow control so that both machines can exchange data at a matched speed.
4. Error Control
Signals may encounter problems in transit and bits may get flipped. Such errors are detected, and
an attempt is made to recover the actual data bits. The layer also provides an error reporting
mechanism to the sender.
5. Synchronization
When data frames are sent on the link, both machines must be synchronized for the transfer
to take place.

6. Access Control: When two or more devices are connected to the same link, protocols of this
layer determine which device has control over the link at any given time.

7. Physical Addressing: The data link layer adds a header to the frame to define the physical
address of the sender and/or receiver of the frame, if the frames are to be distributed to
different systems on the network.

Framing

Framing in the data link layer takes place over a point-to-point connection between the sender and
receiver. Framing is a primary function of the data link layer, and it provides a
way to transmit data between the connected devices.

Framing uses frames to send and receive data. The data link layer receives packets from
the network layer and converts them into frames. Framing provides a way for a sender to
transmit a set of bits that are meaningful to the receiver.


Parts of a Frame

A frame has the following parts –

Flag − It marks the beginning and end of the frame.


Frame Header − It contains the source and the destination addresses of the frame.
Payload field − It contains the message to be delivered.
Trailer − It contains the error detection and error correction bits.

Types of Framing

Framing can be of two types, fixed sized framing and variable sized framing.

Fixed-sized Framing

Variable – Sized Framing

Fixed Size Framing

The frame has a fixed size. In fixed-size framing, there is no need for defining the
boundaries of the frames to mark the beginning and end of a frame.

For example, this type of framing is used in ATM, a wide area network technology, which uses
frames of a fixed size called cells.

Variable – Sized Framing


The size of the frame is variable in this type of framing. In variable-size framing, we
need a way to define the end of one frame and the beginning of the next frame. This is
used in local area networks. It could be implemented in two different forms:

Length field: In a frame, we can define the length field to show the length of a frame. It is applied in
the Ethernet (802.3). The issue with it is that the length field may get corrupted sometimes.

ED (End Delimiter): In a frame, we can address an end delimiter to show the completion of a frame.
It is applied in the Token Ring. The issue with it is that the end delimiter can appear in the data.

There are three methods used to mark the start and end of each frame:

Character count

Bit stuffing

Byte stuffing

Character count: The character count framing method uses a field in the header to specify the
number of bytes in the frame.

1. The data link layer at the sender sends the byte count in the frame header.


2. The data link layer at the receiver counts the bytes received and compares the result with the byte count sent by the sender.
3. If there is a difference between the sender's and receiver's byte counts, there is an error in the received data.
4. Otherwise, the received data is correct.
5. These steps are shown in the diagram below and in the sketch that follows.
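A minimal Python sketch of character-count framing, where the count byte includes itself (helper names are illustrative, not from the notes):

```python
def frame_with_count(payload: bytes) -> bytes:
    # Header: one byte giving the total frame length, including the count byte itself
    return bytes([len(payload) + 1]) + payload

def parse_counted_frames(stream: bytes):
    frames, i = [], 0
    while i < len(stream):
        count = stream[i]                     # first byte = total frame length
        frames.append(stream[i + 1:i + count])
        i += count                            # jump to the start of the next frame
    return frames

print(parse_counted_frames(frame_with_count(b"abc") + frame_with_count(b"de")))
```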


Bit stuffing framing method:


1. In this method, bit stuffing is used.
2. When the sender's data link layer encounters five consecutive 1s in the data, it automatically stuffs a 0
bit after them.
3. At the receiver's end, this stuffed 0 bit is automatically deleted, as shown in the figure below and in the sketch that follows.
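A minimal Python sketch of bit stuffing and unstuffing (function names are illustrative):

```python
def bit_stuff(bits: str) -> str:
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == '1' else 0
        if run == 5:                # five consecutive 1s seen in the data
            out.append('0')         # stuff a 0 bit after them
            run = 0
    return ''.join(out)

def bit_unstuff(bits: str) -> str:
    out, run, skip = [], 0, False
    for b in bits:
        if skip:                    # this is the stuffed 0; drop it
            skip, run = False, 0
            continue
        out.append(b)
        run = run + 1 if b == '1' else 0
        if run == 5:
            skip = True             # the next bit was stuffed by the sender
    return ''.join(out)

data = '0111110111111'
assert bit_unstuff(bit_stuff(data)) == data
```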

Byte stuffing framing method:


1. In this method a flag byte is used to mark both the start and the end of a frame, as shown in the figure
below.
2. Two consecutive flag bytes indicate the end of one frame and the start of the next frame.
3. If the receiver ever loses synchronization, it can simply search for two flag bytes to find the end of the
current frame and the start of the next frame.
What if an accidental, unwanted flag byte occurs in the data?
To prevent it from being taken as a real flag, an escape byte (ESC) is inserted just before each ''accidental'' flag
byte in the data, as shown in the figure below.

The data link layer on the receiving end removes the ESC before giving the data to the network layer.

This technique is called byte stuffing.

What if an accidental, unwanted ESC occurs in the data?


This unwanted ESC is itself preceded by another escape byte (ESC), as shown in the figure below and in the sketch that follows.
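A minimal Python sketch of byte stuffing, assuming PPP-style flag (0x7E) and escape (0x7D) values purely for illustration:

```python
FLAG = b'\x7e'   # frame delimiter (value is illustrative)
ESC  = b'\x7d'   # escape byte (value is illustrative)

def byte_stuff(payload: bytes) -> bytes:
    body = bytearray()
    for b in payload:
        if bytes([b]) in (FLAG, ESC):   # accidental FLAG or ESC inside the data
            body += ESC                 # precede it with an escape byte
        body.append(b)
    return FLAG + bytes(body) + FLAG    # flags mark the start and end of the frame

def byte_unstuff(frame: bytes) -> bytes:
    body = frame[1:-1]                  # strip the flags
    out, i = bytearray(), 0
    while i < len(body):
        if bytes([body[i]]) == ESC:     # drop the escape, keep the following byte
            i += 1
        out.append(body[i])
        i += 1
    return bytes(out)

data = b'AB\x7eCD\x7dEF'
assert byte_unstuff(byte_stuff(data)) == data
```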


Error Detection

When data is transmitted from one device to another, the system does not guarantee that the data
received by one device is identical to the data transmitted by the other. An
error is a situation in which the message received at the receiver end is not identical to the message
transmitted.

Types of Errors

Errors can be classified into two categories:

o Single-Bit Error
o Burst Error


Single-Bit Error:

Only one bit of a given data unit is changed, from 1 to 0 or from 0 to 1.

In the figure above, the message sent is corrupted by a single-bit error, i.e., a 0 bit is changed to 1.

Burst Error:

When two or more bits are changed from 0 to 1 or from 1 to 0, it is known as a burst error.

The length of a burst error is measured from the first corrupted bit to the last corrupted bit.

Error Detecting Techniques:

The most popular Error Detecting Techniques are:

o Single parity check


o Two-dimensional parity check
o Checksum


o Cyclic redundancy check

Single Parity Check


o Single parity checking is the simplest and least expensive mechanism for detecting errors.
o In this technique, if the number of 1 bits in the data is odd, a parity bit of 1 is appended; if the
number of 1 bits is even, a parity bit of 0 is appended at the end of the data unit.
o At the receiving end, the parity bit is calculated from the received data bits and compared with
the received parity bit.
o This technique makes the total number of 1s even, so it is known as even-parity checking.

Drawbacks of Single Parity Checking


o It can only detect single-bit errors (more generally, an odd number of bit errors).
o If two bits are interchanged or flipped together, it cannot detect the error.
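A minimal Python sketch of even-parity generation and checking (function names are illustrative):

```python
def add_even_parity(bits: str) -> str:
    parity = '1' if bits.count('1') % 2 else '0'   # make the total number of 1s even
    return bits + parity

def check_even_parity(codeword: str) -> bool:
    return codeword.count('1') % 2 == 0            # True if no (odd number of) errors detected

print(add_even_parity('1011001'))      # '10110010'
print(check_even_parity('10110010'))   # True
print(check_even_parity('10110011'))   # False (single-bit error detected)
```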

Two-Dimensional Parity Check


o Performance can be improved by using Two-Dimensional Parity Check which organizes the
data in the form of a table.


o Parity check bits are computed for each row, which is equivalent to the single-parity check.
o In Two-Dimensional Parity check, a block of bits is divided into rows, and the redundant row
of bits is added to the whole block.
o At the receiving end, the parity bits are compared with the parity bits computed from the
received data

Drawbacks of 2D Parity Check


o If two bits in one data unit are corrupted and two bits in exactly the same positions in another data
unit are also corrupted, the 2D parity checker will not be able to detect the error.
o This technique cannot detect errors of 4 bits or more in some cases.
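A minimal Python sketch of two-dimensional parity, appending a parity bit to each row and a final parity row (names and block layout are illustrative):

```python
def two_d_parity(blocks):
    """blocks: list of equal-length bit strings (rows). Returns the rows with an
    even-parity bit appended, plus a final column-parity row."""
    rows = [r + ('1' if r.count('1') % 2 else '0') for r in blocks]
    cols = len(rows[0])
    parity_row = ''.join(
        '1' if sum(r[c] == '1' for r in rows) % 2 else '0' for c in range(cols)
    )
    return rows + [parity_row]

for row in two_d_parity(['1100111', '1011101', '0111001']):
    print(row)
```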


Checksum

A Checksum is an error detection technique based on the concept of redundancy.

It is divided into two parts:

Checksum Generator
A checksum is generated at the sending side. The checksum generator subdivides the data into equal
segments of n bits each, and all these segments are added together using one's complement
arithmetic. The sum is complemented and appended to the original data as the checksum field.
The extended data is transmitted across the network.

The Sender follows the given steps:


1. The block unit is divided into k sections, and each of n bits.
2. All the k sections are added together by using one's complement to get the sum.
3. The sum is complemented and it becomes the checksum field.
4. The original data and checksum field are sent across the network.


Checksum Checker

A Checksum is verified at the receiving side. The receiver subdivides the incoming data into equal
segments of n bits each, and all these segments are added together, and then this sum is
complemented. If the complement of the sum is zero, then the data is accepted otherwise data is
rejected.

The Receiver follows the given steps:


1. The block unit is divided into k sections and each of n bits.
2. All the k sections are added together by using one's complement algorithm to get the sum.
3. The sum is complemented.
4. If the result of the sum is zero, then the data is accepted otherwise the data is discarded.
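A minimal Python sketch of the sender and receiver checksum steps above, assuming 16-bit segments and illustrative data words:

```python
def ones_complement_sum(segments, n=16):
    mask = (1 << n) - 1
    total = 0
    for s in segments:
        total += s
        total = (total & mask) + (total >> n)     # wrap the carry back in (one's complement add)
    return total

def make_checksum(segments, n=16):
    # Sender: complement of the one's complement sum becomes the checksum field
    return ~ones_complement_sum(segments, n) & ((1 << n) - 1)

def verify(segments, checksum, n=16):
    # Receiver: add all segments plus the checksum; the complement must be zero
    return (~ones_complement_sum(segments + [checksum], n) & ((1 << n) - 1)) == 0

data = [0x4500, 0x0073, 0x0000]                   # illustrative 16-bit segments
csum = make_checksum(data)
print(hex(csum), verify(data, csum))              # verification prints True
```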


Cyclic Redundancy Check (CRC)


• CRC is a redundancy error technique used to determine the error. It is based on binary division.

Following are the steps used in CRC for error detection:

o In the CRC technique, a string of n 0s is appended to the data unit, where n is one less than the
number of bits in a predetermined divisor (the divisor is n+1 bits long).

o Secondly, the newly extended data is divided by the divisor using a process known as binary
division. The remainder generated from this division is known as the CRC remainder.

o Thirdly, the CRC remainder replaces the appended 0s at the end of the original data. This
newly generated unit is sent to the receiver.

o The receiver receives the data followed by the CRC remainder. The receiver will treat this
whole unit as a single unit, and it is divided by the same divisor that was used to find the CRC
remainder.

If the result of this division is zero, the data has no error and is accepted.

If the result of this division is not zero, the data contains an error and is therefore
discarded.


Let's understand this concept through an example:

Suppose the original data is 11100 and divisor is 1001.

CRC Generator

o A CRC generator uses a modulo-2 division. Firstly, three zeroes are appended at the end of
the data as the length of the divisor is 4 and we know that the length of the string 0s to be
appended is always one less than the length of the divisor.

o Now, the string becomes 11100000, and the resultant string is divided by the divisor 1001.

o The remainder generated from the binary division is known as CRC remainder. The generated
value of the CRC remainder is 111.


o CRC remainder replaces the appended string of 0s at the end of the data unit, and the final
string would be 11100111 which is sent across the network.

CRC Checker

o The functionality of the CRC checker is similar to the CRC generator.

o When the string 11100111 is received at the receiving end, then CRC checker performs the
modulo-2 division.

o A string is divided by the same divisor, i.e., 1001.

o In this case, CRC checker generates the remainder of zero. Therefore, the data is accepted.
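A minimal Python sketch of modulo-2 division that reproduces the worked example above (data 11100, divisor 1001):

```python
def mod2_div(dividend: str, divisor: str) -> str:
    rem = list(dividend[:len(divisor)])
    for i in range(len(divisor), len(dividend) + 1):
        if rem[0] == '1':                            # XOR the current bits with the divisor
            rem = [str(int(a) ^ int(b)) for a, b in zip(rem, divisor)]
        if i < len(dividend):
            rem = rem[1:] + [dividend[i]]            # drop the MSB, bring down the next bit
        else:
            rem = rem[1:]                            # last step: remainder is divisor length - 1 bits
    return ''.join(rem)

def crc_generate(data: str, divisor: str) -> str:
    padded = data + '0' * (len(divisor) - 1)         # append n zeros (divisor length - 1)
    return data + mod2_div(padded, divisor)          # replace the zeros with the CRC remainder

codeword = crc_generate('11100', '1001')
print(codeword)                                      # 11100111
print(mod2_div(codeword, '1001'))                    # 000 -> accepted at the receiver
```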


Flow Control:-
It is a set of procedures that tells the sender how much data it can transmit before the receiver
becomes overwhelmed.
The receiving device has limited speed and limited memory to store the data. Therefore, the receiving device
must be able to inform the sending device to stop the transmission temporarily before those limits are reached.
It requires a buffer, a block of memory for storing the information until it is processed.
Approaches of Flow Control
Flow control can be broadly classified into two categories −


• Feedback-based Flow Control: In these protocols, the sender sends frames only after it has received
acknowledgements from the receiver. This is used in the data link layer.
• Rate-based Flow Control: These protocols have built-in mechanisms to restrict the rate of
data transmission without requiring acknowledgements from the receiver. This is used in the network
layer and the transport layer.

Protocols:

For noiseless channel: Simplest; Stop-and-Wait
For noisy channel: Stop-and-Wait ARQ; Go-Back-N ARQ; Selective Repeat ARQ

ELEMENTARY DATA LINK PROTOCOLS


For Noiseless Channel
Simplest Protocol
Step 1 − The Simplest protocol has no flow or error control.
Step 2 − It is a unidirectional protocol in which data frames travel in one direction only, from the sender to the receiver.

Step 3 − We assume that the receiver can handle any frame it receives with negligible processing time: the
data link layer of the receiver immediately removes the header from the frame and hands the data packet to
its network layer, which can also accept the packet immediately.


Stop and Wait


This protocol involves the following transitions −
• The sender sends a frame and waits for acknowledgment.
• Once the receiver receives the frame, it sends an acknowledgment frame back to the sender.
• On receiving the acknowledgment frame, the sender understands that the receiver is ready to
accept the next frame. So it sends the next frame in the queue.

Suppose a frame sent is not received by the receiver and is lost. The receiver will not send any
acknowledgment, since it has not received any frame. The sender, in turn, will not send the next frame,
because it keeps waiting for the acknowledgment of the previous frame it sent. So a deadlock situation
can be created here.

Advantages of Stop and Wait Protocol

1. It is very simple to implement.


Disadvantages of Stop and Wait Protocol

1. We can send only one packet at a time.


2. If the distance between the sender and the receiver is large then the propagation delay would
be more than the transmission delay. Hence, efficiency would become very low.
3. After every transmission, the sender has to wait for the acknowledgment and this time will
increase the total transmission time.
For Noisy channel:-
Sliding Window
This protocol improves on the efficiency of the stop-and-wait protocol by allowing multiple frames to be
transmitted before an acknowledgment is received.
The working principle of this protocol can be described as follows −
• Both the sender and the receiver have finite-sized buffers called windows. The sender and the
receiver agree upon the number of frames to be sent based upon the buffer size.
• The sender sends multiple frames in a sequence, without waiting for acknowledgment. When
its sending window is filled, it waits for acknowledgment. On receiving acknowledgment, it
advances the window and transmits the next frames, according to the number of
acknowledgments received.
Advantages –
• It performs much better than stop-and-wait flow control.
• This method increases efficiency.
• Multiples frames can be sent one after another.
Disadvantages –
• The main issue is complexity at the sender and receiver due to the transfer of multiple
frames.
• The receiver might receive data frames or packets out of sequence.


Different Protocols in Sliding Window ARQ:

Stop-and-Wait Automatic Repeat Request:-


The following transition may occur in Stop-and-Wait ARQ:

• The sender maintains a timeout counter.


• When a frame is sent, the sender starts the timeout counter.
• If acknowledgement of frame comes in time, the sender transmits the next frame in queue.
• If the acknowledgement does not come in time, the sender assumes that either the frame or its
acknowledgement was lost in transit. The sender retransmits the frame and restarts the timeout
counter.
• If a negative acknowledgement is received, the sender retransmits the frame.
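A minimal Python sketch of the Stop-and-Wait ARQ sender loop described above; send_frame and recv_ack are hypothetical placeholders, not a real API:

```python
def stop_and_wait_send(frames, send_frame, recv_ack, timeout=2.0):
    """Stop-and-Wait ARQ sender.
    send_frame(seq, data) transmits a frame; recv_ack(timeout) returns the
    acknowledged sequence number, or None on timeout. Both are placeholders."""
    seq = 0
    for data in frames:
        while True:
            send_frame(seq, data)        # transmit and (re)start the timeout counter
            ack = recv_ack(timeout)      # wait for the ACK until the timer expires
            if ack == seq:               # ACK arrived in time: move to the next frame
                break
            # timeout, lost ACK, or negative ACK: retransmit the same frame
        seq ^= 1                         # alternate 1-bit sequence number (0/1)
```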

Go-Back-n ARQ:
In the Go-Back-N ARQ protocol, if a frame is lost or damaged, or the sender does not get a valid ACK for
it, the sender retransmits that frame and all frames sent after it.

Cases when frames are retransmitted:


Lost Frame:
Data frames are transmitted consecutively in Sliding Window systems. If any of the frames is lost, the
following frame received by the receiver is out of order. The receiver examines each frame’s
sequence number, identifies the frame that was skipped, and returns the NAK for the missing frame.
The frame indicated by NAK, as well as the frames transferred following the lost frame, is
retransmitted by the transmitting device.

Lost Acknowledgment:
Before waiting for an acknowledgement, the sender can transmit as many frames as the windows
allow. When the window’s limit is reached, the sender has no more frames to send and must wait for
acknowledgement. If the acknowledgement is lost, the sender may be forced to wait indefinitely. To
circumvent this, the transmitter is outfitted with a timer that begins counting when the window
capacity is reached. If the acknowledgement is not received within the specified time period, the
sender retransmits all frames sent since the last acknowledged frame.
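A minimal Python sketch of a Go-Back-N sender window covering the lost-frame and lost-acknowledgment cases above; the window size and the send/receive primitives are illustrative assumptions:

```python
WINDOW = 4                                   # illustrative sender window size

def go_back_n_send(frames, send_frame, recv_ack, timeout=2.0):
    """Go-Back-N ARQ sender. send_frame(seq, data) transmits; recv_ack(timeout)
    returns the highest cumulative ACK received, or None on timeout (placeholders)."""
    base, next_seq = 0, 0
    while base < len(frames):
        # fill the window with as many frames as it allows
        while next_seq < base + WINDOW and next_seq < len(frames):
            send_frame(next_seq, frames[next_seq])
            next_seq += 1
        ack = recv_ack(timeout)
        if ack is None:
            next_seq = base              # timeout: go back and resend from the window base
        else:
            base = ack + 1               # cumulative ACK slides the window forward
```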

ii. Selective-Reject ARQ:

The Selective-Reject ARQ approach is more efficient than the Go-Back-N ARQ technique.
Only frames that have received a negative acknowledgement (NAK) are retransmitted using this
approach.
Frames received after the frame in error are held in the receiver's storage buffer until the frame in
error is successfully received.
The receiver must contain the necessary logic to reinsert the frames in the right order.
The sender must provide a searching mechanism that retransmits only the requested frame.


Channel Allocation
Channel Allocation in computer networks refers to the method of assigning communication
resources (channels) to users or devices for transmitting data. A channel can be a frequency band, a
time slot, a code, or even a physical wire, depending on the type of network (wired or wireless).

Goals of Channel Allocation

• Efficient use of available bandwidth


• Minimize interference and collisions
• Ensure fair access to all users
• Maximize throughput and minimize delay

Types of Channel Allocation Strategies:

The different types of channel allocation schemes are as follows −


• Static channel allocation
• Dynamic channel allocation
• Hybrid channel allocation

Static Channel Allocation

• Each user is assigned a fixed channel.


• Channels are pre-allocated whether the user is active or not.

Advantages :

• Simple and predictable


• No contention

Disadvantages :

• Inefficient for bursty or low-usage communication


• Wastes bandwidth during idle time

Example:

• Traditional circuit-switched telephone system


Dynamic Channel Allocation

• Channels are assigned to users as needed.


• More flexible and efficient than static allocation

Advantages:
1. Efficient use of available bandwidth.
2. Reduces call blocking and improves call quality.
3. Allows for dynamic allocation of resources.

Disadvantages:
1. Requires more complex equipment and algorithms.
2. May result in call drops or poor quality if resources are not available.
Examples:

• Wireless LANs (using CSMA/CA)


• Cellular networks during call setup

Hybrid Channel Allocation

• Combines static and dynamic methods.


• Some channels are permanently assigned, others are dynamically allocated based on demand.

Advantages:
1. Provides the benefits of both fixed channel allocation (FCA) and dynamic channel allocation (DCA).
2. Allows for dynamic allocation of resources while maintaining predictable call quality and
reliability.
Disadvantages:
1. Requires more complex equipment and algorithms than FCA.
2. May not provide the same level of efficiency as pure DCA.

Multiple access protocol

The data link control is sufficient to handle the channel when a sender and receiver have a dedicated
link for transmitting data packets. Assume there is no dedicated communication or data transfer path
between two devices. In this case, multiple stations access the channel and transmit data over it at the
same time. It could result in collisions and cross-talk. As a result, the multiple access protocol is
required to reduce channel collisions and avoid crosstalk.


Random Access Protocol


In this protocol, all the stations have equal priority to send data over the channel. No one
system can depend on or control another system. However, if more than one station attempts to
transmit data at the same time, there is an access conflict (a collision), due to which the frames are either lost or
changed.

Following are the different methods of random-access protocols for broadcasting frames on the
channel.

o Aloha
o CSMA
o CSMA/CD
o CSMA/CA

ALOHA Random Access Protocol

It was designed for wireless LANs (Local Area Networks) but can also be used on any shared medium to
transmit data. Using this method, any station can transmit data across the network whenever
a data frame is available for transmission.

Aloha Rules

1. Any station can transmit data to a channel at any time.


2. It does not require any carrier sensing.


3. Collisions may occur, and data frames may be lost, when multiple stations transmit data at the same
time.
4. Aloha relies on acknowledgment of the frames; there is no collision detection.
5. It requires retransmission of data after some random amount of time.

Pure Aloha: When a station sends data, it waits for an acknowledgement. If the acknowledgement
doesn't come within the allotted time, the station waits for a random amount of time called the
back-off time (Tb) and re-sends the data. Since different stations wait for different amounts of time,
the probability of further collision decreases.

1. The total vulnerable time of pure Aloha is 2 * Tfr.

Vulnerable Time = 2 * (Frame Transmission Time) = 2 * Tfr


Maximum Throughput (at G = 1/2) = 1/(2e) ≈ 18.4%
Probability of successful transmission of a data frame: S = G * e^(-2G), where G is the average number of
frames generated by the stations during one frame transmission time.


As we can see in the figure above, there are four stations accessing a shared channel and
transmitting data frames. Some frames collide because most stations send their frames at the same
time. Only two frames, frame 1.1 and frame 2.2, are successfully transmitted to the receiver; at
the same time, the other frames are lost or destroyed. Whenever two frames overlap on the shared channel
at the same time, a collision occurs and both frames are damaged. Even if only the first bit of a new frame
overlaps with the last bit of a frame that has almost finished, both frames are damaged and
both stations must retransmit their data frames.


Procedure for pure Aloha

Slotted Aloha

Slotted Aloha was designed to improve on pure Aloha's efficiency, because pure Aloha has a very
high probability of frame collision. In slotted Aloha, the shared channel is divided into fixed time
intervals called slots. If a station wants to send a frame on the shared channel, the frame can only
be sent at the beginning of a slot, and only one frame is allowed to be sent in each slot. If a
station is unable to send its data at the beginning of a slot, it has to wait until the
beginning of the next slot. However, the possibility of a collision remains if two or more stations try to
send a frame at the beginning of the same time slot.

1. Maximum throughput occurs in slotted Aloha when G = 1, and it is 1/e ≈ 37%.


2. The probability of successfully transmitting a data frame in slotted Aloha is S = G * e^(-G).
3. The total vulnerable time required in slotted Aloha is Tfr.
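A small Python sketch comparing the pure and slotted Aloha throughput formulas given above:

```python
import math

def pure_aloha_throughput(G: float) -> float:
    return G * math.exp(-2 * G)        # S = G * e^(-2G)

def slotted_aloha_throughput(G: float) -> float:
    return G * math.exp(-G)            # S = G * e^(-G)

print(round(pure_aloha_throughput(0.5), 3))     # ~0.184 -> 18.4% at G = 1/2
print(round(slotted_aloha_throughput(1.0), 3))  # ~0.368 -> 36.8% at G = 1
```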


CSMA (Carrier Sense Multiple Access):-


Carrier sense multiple access is a media access protocol that senses the traffic on a channel (idle or
busy) before transmitting data. If the channel is idle, the station can send data on the channel;
otherwise, it must wait until the channel becomes idle. Hence, it reduces the chance of a collision on the
transmission medium.

CSMA Access Modes

1-Persistent: In the 1-persistent mode of CSMA, each node first senses the shared channel; if
the channel is idle, it immediately sends the data. Otherwise, it keeps monitoring the channel
and transmits the frame unconditionally as soon as the channel becomes idle.


Non-Persistent: In the non-persistent mode, a station senses the channel before transmitting a frame on the
shared channel. If the channel is idle, it transmits immediately; if the channel is busy, the station waits for a
random amount of time and then senses the channel again.

P-Persistent: The node senses the medium; if it is idle, it sends the data with probability p. If the data is
not transmitted (with probability 1-p), the node waits for a time slot and checks the medium again; if it is
again found idle, it transmits with probability p. This repeats until the frame is sent. It is used in Wi-Fi and
packet radio systems.
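A minimal Python sketch of the p-persistent decision loop; channel_is_idle, wait_slot, and transmit are hypothetical placeholders:

```python
import random

def p_persistent_send(frame, p, channel_is_idle, wait_slot, transmit):
    """Keep trying until the frame is transmitted, following p-persistent CSMA."""
    while True:
        while not channel_is_idle():     # carrier sense: wait for an idle medium
            wait_slot()
        if random.random() < p:          # transmit with probability p
            transmit(frame)
            return
        wait_slot()                      # with probability 1-p, defer one slot and re-sense
```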

CSMA/ CD
Carrier Sense Multiple Access with Collision Detection (CSMA/CD) is a network protocol for carrier
transmission that operates in the Medium Access Control (MAC) layer. It senses or listens whether the shared
channel for transmission is busy or not, and defers transmissions until the channel is free. The collision
detection technology detects collisions by sensing transmissions from other stations. On detection of a
collision, the station stops transmitting, sends a jam signal, and then waits for a random time interval before
retransmission.
Algorithms
The algorithm of CSMA/CD is:
• When a frame is ready, the transmitting station checks whether the channel is idle or busy.
• If the channel is busy, the station waits until the channel becomes idle.


• If the channel is idle, the station starts transmitting and continually monitors the channel to detect
collision.
• If a collision is detected, the station starts the collision resolution algorithm.
• If no collision is detected, the station resets the retransmission counters and completes the frame
transmission.
The algorithm of Collision Resolution is:
• The station continues transmission of the current frame for a specified time along with a jam signal, to
ensure that all the other stations detect collision.
• The station increments the retransmission counters.
• If the maximum number of retransmission attempts is reached, then the station aborts transmission.
• Otherwise, the station waits for a back-off period, which is generally a function of the number of
collisions, and restarts the main algorithm.
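A small Python sketch of the binary exponential back-off commonly used in this collision-resolution step; the slot time and attempt limit are illustrative values:

```python
import random

SLOT_TIME = 51.2e-6      # seconds; classic 10 Mbps Ethernet slot time, used here for illustration
MAX_ATTEMPTS = 16        # illustrative retransmission limit

def backoff_delay(attempt: int) -> float:
    """Return a random wait before retransmission after `attempt` collisions."""
    if attempt >= MAX_ATTEMPTS:
        raise RuntimeError("transmission aborted after too many collisions")
    k = min(attempt, 10)                     # cap the exponent
    slots = random.randint(0, 2 ** k - 1)    # pick 0 .. 2^k - 1 slot times
    return slots * SLOT_TIME
```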

CSMA/ CA
Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA) is a network protocol for carrier
transmission that operates in the Medium Access Control (MAC) layer. In contrast to CSMA/CD (Carrier


Sense Multiple Access/Collision Detection) that deals with collisions after their occurrence, CSMA/CA
prevents collisions prior to their occurrence.
Algorithm
The algorithm of CSMA/CA is:
• When a frame is ready, the transmitting station checks whether the channel is idle or busy.
• If the channel is busy, the station waits until the channel becomes idle.
• If the channel is idle, the station waits for an Inter-frame gap (IFG) amount of time and then sends the
frame.
• After sending the frame, it sets a timer.
• The station then waits for acknowledgement from the receiver. If it receives the acknowledgement
before expiry of timer, it marks a successful transmission.
• Otherwise, it waits for a back-off time period and restarts the algorithm.
The following flowchart summarizes the algorithms:


Advantages of CSMA/CA
• CSMA/CA prevents collisions.

• Due to acknowledgements, data is not lost unnecessarily.

• It avoids wasteful transmission.

• It is very well suited for wireless transmission.

Disadvantages of CSMA/CA
• The algorithm calls for long waiting times.

• It has high power consumption.

Controlled Access Protocol

It is a method of reducing data frame collisions on a shared channel. In the controlled access method,
the stations consult one another to decide which station is allowed to send a data frame, with the
transmitting station being approved by all the other stations. This means that a single station cannot send
a data frame unless it has been authorized by the other stations. There are three types of controlled access:
Reservation, Polling, and Token Passing.

The protocols lies under the category of Controlled access are as follows:
1. Reservation
2. Polling
3. Token Passing

Reservation
In this method, a station needs to make a reservation before sending the data.
• Time is mainly divided into intervals.
• Also, in each interval, a reservation frame precedes the data frame that is sent in that interval.
• Suppose there are 'N' stations in the system; in that case there are exactly 'N' reservation
mini-slots in the reservation frame, where each mini-slot belongs to one station.
• Whenever a station needs to send the data frame, then the station makes a reservation in its
own mini slot.
• Then the stations that have made reservations can send their data after the reservation frame.

Example

Let us take an example of 5 stations and a 5-mini-slot reservation frame. In the first interval,
stations 2, 3 and 5 have made reservations, while in the second interval only station 2 has made a
reservation.


Polling

The polling method mainly works with those topologies where one device is designated as the
primary station and the other device is designated as the secondary station.

• All the exchange of data must be made through the primary device even though the final
destination is the secondary device.
• Thus, to impose order on a network of independent users, one station in the network is
established as a controller that periodically polls all the other stations; this procedure is
simply referred to as polling.
• The Primary device mainly controls the link while the secondary device follows the
instructions of the primary device.
• The responsibility is on the primary device in order to determine which device is allowed to
use the channel at a given time.
• Therefore the primary device is always an initiator of the session.

Token Passing

In the token passing methods, all the stations are organized in the form of a logical ring. We can also
say that for each station there is a predecessor and a successor.

Learning Bridge:

A learning bridge is a network device that connects multiple LAN segments and learns the MAC
addresses of devices on each port. It builds a MAC address table (also called a forwarding table)
to decide where to forward frames.

How It Works:

1. Learning:
o When a frame arrives at a bridge port, it examines the source MAC address and
stores it in a table with the port number.
2. Forwarding:


o If the destination MAC address is in the table, the frame is forwarded to the correct
port.
o If it’s not in the table, the bridge floods the frame to all ports (except the source port).
3. Filtering:
o If the destination and source are on the same port, the frame is not forwarded.
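A minimal Python sketch of the learn/forward/filter logic described above (class and method names are illustrative):

```python
class LearningBridge:
    def __init__(self, ports):
        self.ports = ports                    # e.g. [1, 2, 3]
        self.table = {}                       # MAC address -> port number (forwarding table)

    def receive(self, frame_src, frame_dst, in_port):
        self.table[frame_src] = in_port       # learning: remember which port the source lives on
        out_port = self.table.get(frame_dst)
        if out_port is None:                  # unknown destination: flood to all other ports
            return [p for p in self.ports if p != in_port]
        if out_port == in_port:               # filtering: same segment, do not forward
            return []
        return [out_port]                     # forwarding: send out the single known port

bridge = LearningBridge(ports=[1, 2, 3])
print(bridge.receive("AA", "BB", in_port=1))  # BB unknown -> flood to [2, 3]
print(bridge.receive("BB", "AA", in_port=2))  # AA known -> forward to [1]
```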

Benefits:

• Reduces unnecessary traffic.


• Learns automatically; no manual configuration needed.

Spanning Tree Algorithm (STA):


Spanning Tree Protocol (STP) is a Layer 2 network protocol used to prevent looping within a
network topology. STP was created to avoid the problems that arise when computers exchange data
on a local area network (LAN) that contains redundant paths. STP is defined in IEEE 802.1D. It
creates a loop-free logical topology by disabling some redundant paths, forming a spanning tree over
the physical topology.

