CN Unit 2

Various methods of framing data at the data link layer are discussed, including character counting, character stuffing, bit stuffing, and encoding violations. Error detection techniques like parity checks, checksums, and cyclic redundancy checks are also summarized. Flow control is important at the data link layer to regulate the flow of data between sender and receiver, and approaches like feedback-based and rate-based flow control are described. Specific techniques like stop-and-wait flow control are also mentioned.

Various Kinds of Framing in the Data Link Layer
Framing is a function of the data link layer that separates one message from all other messages by marking its boundaries and adding the sender and destination addresses. The destination (receiver) address indicates where the frame should go, and the source (sender) address helps the recipient acknowledge receipt.
A frame is the data unit of the data link layer, transmitted between network points. It carries complete addressing, the essential protocol information, and control information. The physical layer merely accepts and transfers a stream of bits without regard to its meaning or structure, so it is up to the data link layer to create and recognize frame boundaries.
This can be achieved by attaching special bit patterns to the start and end of each frame. Because these bit patterns may accidentally occur in the data itself, special care must be taken to ensure they are not misinterpreted as frame delimiters.
On a point-to-point connection between two devices, data travels over the wire as a stream of bits; framing divides this stream into discernible blocks of information.

Methods of Framing :
There are basically four methods of framing as given below –
1. Character Count
2. Flag Byte with Character Stuffing
3. Starting and Ending Flags, with Bit Stuffing
4. Encoding Violations
Each is explained below.
1. Character Count :
This method, now rarely used, records the total number of characters in the frame in a field in the header. The character count tells the data link layer at the receiver how many characters follow, and hence where the frame ends.
The disadvantage of this method is that if the character count is corrupted by an error during transmission, the receiver loses synchronization and may be unable to locate the beginning of the next frame.
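The idea can be sketched in a few lines of Python. This is a toy illustration; the one-byte count field and helper names are assumptions, not part of any particular protocol:

```python
# Hypothetical sketch of character-count framing: each frame starts with a
# one-byte count field giving the total frame length (count byte included).

def frame_char_count(messages):
    """Concatenate messages into a byte stream, prefixing each with its length + 1."""
    stream = bytearray()
    for msg in messages:
        stream.append(len(msg) + 1)  # count field includes itself
        stream.extend(msg)
    return bytes(stream)

def deframe_char_count(stream):
    """Recover the original messages by reading the count fields."""
    messages, i = [], 0
    while i < len(stream):
        count = stream[i]
        messages.append(stream[i + 1:i + count])
        i += count
    return messages

frames = frame_char_count([b"hello", b"world!"])
assert deframe_char_count(frames) == [b"hello", b"world!"]
```

Note how a single corrupted count byte would shift every subsequent frame boundary, which is exactly the synchronization problem described above.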

2. Character Stuffing :
Character stuffing, also known as byte stuffing or character-oriented framing, is analogous to bit stuffing, except that byte stuffing operates on bytes whereas bit stuffing operates on bits. In byte stuffing, a special byte known as ESC (escape character), with a predefined pattern, is inserted into the data section of the frame whenever a byte in the data matches the flag byte (or the ESC byte itself). The receiver removes the ESC byte and keeps the data that follows it. In simple words, character stuffing adds one extra byte wherever an ESC or flag byte appears in the text.
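A minimal sketch of byte stuffing in Python, using an illustrative flag byte `~` and escape byte `}`. These particular byte values and function names are assumptions for the example, not taken from any standard:

```python
FLAG, ESC = b"~", b"}"  # illustrative delimiter and escape bytes

def byte_stuff(payload: bytes) -> bytes:
    """Escape any flag or ESC byte in the payload, then delimit with flags."""
    stuffed = bytearray(FLAG)
    for b in payload:
        if bytes([b]) in (FLAG, ESC):
            stuffed.extend(ESC)          # insert escape before the special byte
        stuffed.append(b)
    stuffed.extend(FLAG)
    return bytes(stuffed)

def byte_unstuff(frame: bytes) -> bytes:
    """Strip the delimiting flags and remove the escape bytes."""
    body = frame[1:-1]
    payload, escaped = bytearray(), False
    for b in body:
        if not escaped and bytes([b]) == ESC:
            escaped = True               # next byte is literal data
            continue
        payload.append(b)
        escaped = False
    return bytes(payload)

frame = byte_stuff(b"a~b}c")
assert byte_unstuff(frame) == b"a~b}c"
```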
3. Bit Stuffing :
Bit stuffing is also known as bit-oriented framing or the bit-oriented approach. In bit stuffing, extra bits are inserted into the transmitted bit stream so that the frame delimiter pattern never appears inside the data. This is a simple way of providing signaling information to the receiver while preventing data from being misinterpreted as unintended control sequences.
It is a form of protocol management that breaks up any bit pattern that would otherwise cause the transmission to go out of synchronization. Bit stuffing is an essential part of the transmission process in many network and communication protocols, and is also required in USB.
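The classic rule (used by HDLC-style protocols) inserts a 0 after every run of five consecutive 1s, so the flag pattern 01111110 can never occur inside the data. A small Python sketch:

```python
def bit_stuff(bits: str) -> str:
    """Insert a '0' after every run of five consecutive '1's."""
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            out.append("0")  # stuffed bit
            run = 0
    return "".join(out)

def bit_unstuff(bits: str) -> str:
    """Remove the '0' that follows every run of five '1's."""
    out, run, skip = [], 0, False
    for b in bits:
        if skip:
            skip = False     # drop the stuffed 0
            continue
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            skip = True
            run = 0
    return "".join(out)

stuffed = bit_stuff("0111111")
assert stuffed == "01111101"
assert bit_unstuff(stuffed) == "0111111"
```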
4. Physical Layer Coding Violations :
Encoding violation is a method usable only on networks in which the encoding on the physical medium contains some redundancy, i.e., more than one signal representation per unit of data. For example, in Manchester encoding every valid data bit contains a mid-bit transition, so a signal without such a transition is an invalid code that can be used to mark frame boundaries.
Error Detection in Computer Networks
Error
An error is a condition in which the receiver's information does not match the sender's information. During transmission, digital signals suffer from noise that can introduce errors in the binary bits travelling from sender to receiver: a 0 bit may change to 1, or a 1 bit may change to 0.

Error Detecting Codes (implemented either at the Data Link Layer or the Transport Layer of the OSI model)
Whenever a message is transmitted, it may get scrambled by noise or the data may get corrupted. To detect this, we use error-detecting codes: additional data added to a given digital message that helps us detect whether any error occurred during transmission.

The basic approach used for error detection is redundancy: additional bits are added to facilitate the detection of errors.
Some popular techniques for error detection are:
1. Simple Parity check
2. Two-dimensional Parity check
3. Checksum
4. Cyclic redundancy check
1. Simple Parity check
Blocks of data from the source are passed through a parity-bit generator, which appends a parity bit of:
 1 if the block contains an odd number of 1s, and
 0 if the block contains an even number of 1s.
This scheme makes the total number of 1s even, which is why it is called even parity checking.
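Even parity can be sketched in a few lines of Python (the function names are illustrative):

```python
def add_even_parity(block: str) -> str:
    """Append a parity bit so the total number of 1s is even."""
    parity = "1" if block.count("1") % 2 == 1 else "0"
    return block + parity

def check_even_parity(codeword: str) -> bool:
    """A valid even-parity codeword contains an even number of 1s."""
    return codeword.count("1") % 2 == 0

codeword = add_even_parity("1011001")     # four 1s -> parity bit 0
assert codeword == "10110010"
assert check_even_parity(codeword)
assert not check_even_parity("10110011")  # single-bit error detected
```

Note that flipping any two bits leaves the parity unchanged, which is why simple parity detects only an odd number of bit errors.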

2. Two-dimensional Parity check
Parity check bits are calculated for each row, which is equivalent to a simple parity check. Parity check bits are also calculated for all columns, and both sets are sent along with the data. At the receiving end, they are compared with the parity bits calculated from the received data.
3. Checksum
 In the checksum error-detection scheme, the data is divided into k segments of m bits each.
 At the sender's end, the segments are added using 1's complement arithmetic to get the sum; the sum is complemented to get the checksum.
 The checksum segment is sent along with the data segments.
 At the receiver's end, all received segments (including the checksum) are added using 1's complement arithmetic to get the sum, and the sum is complemented.
 If the result is zero, the received data is accepted; otherwise it is discarded.
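The steps above can be sketched in Python. The segment values and helper names below are illustrative assumptions:

```python
def ones_complement_sum(segments, m):
    """Add m-bit segments with end-around carry (1's complement arithmetic)."""
    mask = (1 << m) - 1
    total = 0
    for seg in segments:
        total += seg
        total = (total & mask) + (total >> m)  # wrap any carry back in
    return total

def make_checksum(segments, m):
    """Complement of the 1's complement sum of the data segments."""
    return ones_complement_sum(segments, m) ^ ((1 << m) - 1)

def verify(segments, checksum, m):
    """Receiver side: the complemented sum of everything must be zero."""
    s = ones_complement_sum(segments + [checksum], m)
    return (s ^ ((1 << m) - 1)) == 0

data = [0b1001, 0b1010, 0b0110]   # k = 3 segments of m = 4 bits
ck = make_checksum(data, 4)
assert verify(data, ck, 4)
assert not verify([0b1001, 0b1011, 0b0110], ck, 4)  # corrupted segment detected
```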

4. Cyclic redundancy check (CRC)
 Unlike the checksum scheme, which is based on addition, CRC is based on binary division.
 In CRC, a sequence of redundant bits, called the cyclic redundancy check bits, is appended to the end of the data unit so that the resulting data unit becomes exactly divisible by a second, predetermined binary number.
 At the destination, the incoming data unit is divided by the same number. If there is no remainder, the data unit is assumed to be correct and is accepted.
 A remainder indicates that the data unit was damaged in transit and must therefore be rejected.
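The binary (modulo-2) division can be sketched in Python. The data word and generator below are a common textbook example, not taken from this document:

```python
def crc_remainder(data: str, generator: str) -> str:
    """Modulo-2 (XOR) division: returns the CRC bits for the given data."""
    r = len(generator) - 1
    dividend = list(data + "0" * r)        # append r zero bits
    for i in range(len(data)):
        if dividend[i] == "1":             # XOR the generator in at this position
            for j, g in enumerate(generator):
                dividend[i + j] = str(int(dividend[i + j]) ^ int(g))
    return "".join(dividend[-r:])

def crc_check(codeword: str, generator: str) -> bool:
    """A correct codeword (data + CRC) leaves a zero remainder."""
    r = len(generator) - 1
    dividend = list(codeword)
    for i in range(len(codeword) - r):
        if dividend[i] == "1":
            for j, g in enumerate(generator):
                dividend[i + j] = str(int(dividend[i + j]) ^ int(g))
    return set(dividend[-r:]) <= {"0"}

crc = crc_remainder("100100", "1101")      # remainder "001" for this pair
codeword = "100100" + crc
assert crc_check(codeword, "1101")
assert not crc_check("100110" + crc, "1101")  # flipped data bit detected
```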
Flow Control in Data Link Layer
Flow control is a design issue at the data link layer: a technique that regulates the proper flow of data from sender to receiver. It is essential because the sender may transmit data at a much faster rate than the receiver can receive and process it. This happens when the receiver is under a heavy traffic load compared to the sender, or has less processing power than the sender.
Flow control allows two stations working at different speeds to communicate with each other. At the data link layer, it restricts and coordinates the number of frames (or amount of data) the sender can transmit before it must wait for an acknowledgment from the receiver. In other words, flow control is a set of procedures that tells the sender how much data or how many frames it can transmit before the data overwhelms the receiver.
The receiving device has only a limited amount of speed and memory for storing data. This is why it must be able to tell the sender to stop transmission temporarily before its limit is reached. It also needs a buffer, a large block of memory for storing data or frames until they are processed.
Approaches to Flow Control :
Flow control is classified into two categories –

 Feedback-based Flow Control :
In this technique, the sender transmits data or frames to the receiver, and the receiver sends information back, permitting the sender to transmit more data or reporting how the receiver is doing. The sender transmits further frames only after receiving this acknowledgment from the receiver.
 Rate-based Flow Control :
In this technique, when the sender transmits data faster than the receiver can accept it, a built-in mechanism in the protocol limits the overall rate at which the sender may transmit, without any feedback or acknowledgment from the receiver.

Techniques of Flow Control in Data Link Layer :
Two main techniques have been developed to control the flow of data –

1. Stop-and-Wait Flow Control :
This method is the simplest form of flow control. The message is broken down into multiple frames, and the receiver indicates its readiness to receive each frame of data. Only when an acknowledgment is received does the sender transmit the next frame.
This process continues until the sender transmits an EOT (End of Transmission) frame. In this method, only one frame can be in transit at a time, which leads to inefficiency (low productivity) when the propagation delay is much longer than the transmission delay.
Advantages –
 This method is simple, and each frame is checked and acknowledged.
 It can also be used on noisy channels.
 This method is accurate.
Disadvantages –
 This method is fairly slow.
 Only one packet or frame can be sent at a time.
 It is inefficient and makes the transmission process slow.
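The inefficiency under long propagation delay can be quantified with the standard utilization formula U = 1/(1 + 2a), where a is the ratio of propagation delay to transmission delay. A sketch with illustrative numbers:

```python
def stop_and_wait_utilization(t_transmission: float, t_propagation: float) -> float:
    """Link utilization for stop-and-wait: useful time over total cycle time.
    Each frame occupies the link for Tt, then the sender idles for a round
    trip (2 * Tp) waiting for the acknowledgment."""
    a = t_propagation / t_transmission
    return 1 / (1 + 2 * a)

# When propagation delay dwarfs transmission delay, utilization collapses,
# matching the inefficiency described above.
assert abs(stop_and_wait_utilization(1.0, 0.5) - 0.5) < 1e-9   # a = 0.5
assert stop_and_wait_utilization(1.0, 10.0) < 0.05             # a = 10
```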

2. Sliding Window Flow Control :
This method is used where reliable, in-order delivery of packets or frames is needed, as in the data link layer. It is a point-to-point protocol that assumes no other entity tries to communicate until the current data transfer completes. In this method, the sender transmits several frames before receiving any acknowledgment.
Both the sender and receiver agree on the total number of data frames after which an acknowledgment must be transmitted. Allowing the sender to have more than one unacknowledged packet "in flight" at a time increases and improves network throughput.
Advantages –
 It performs much better than stop-and-wait flow control.
 This method increases efficiency.
 Multiple frames can be sent one after another.
Disadvantages –
 The main issue is complexity at the sender and receiver due to the transfer of multiple frames.
 The receiver might receive data frames or packets out of sequence.
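Sender-side window bookkeeping can be sketched as follows. The window size and cumulative-acknowledgment behaviour are illustrative assumptions, not a specific protocol:

```python
class SlidingWindowSender:
    """Minimal sketch of sender-side sliding-window state."""
    def __init__(self, window_size: int):
        self.window_size = window_size
        self.base = 0          # oldest unacknowledged frame number
        self.next_seq = 0      # next frame number to send

    def can_send(self) -> bool:
        """True while fewer than window_size frames are unacknowledged."""
        return self.next_seq - self.base < self.window_size

    def send(self) -> int:
        assert self.can_send(), "window full: must wait for an acknowledgment"
        seq = self.next_seq
        self.next_seq += 1
        return seq

    def ack(self, seq: int) -> None:
        """Cumulative acknowledgment: frames up to seq are confirmed."""
        self.base = max(self.base, seq + 1)

s = SlidingWindowSender(window_size=3)
sent = [s.send() for _ in range(3)]   # three frames in flight
assert sent == [0, 1, 2] and not s.can_send()
s.ack(1)                              # acknowledges frames 0 and 1
assert s.can_send()
```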

Channel Allocation Problem in Computer Networks

In a broadcast network, a single broadcast channel must be allocated to one transmitting user at a time. When multiple users share the network and want to access the same channel, the channel allocation problem arises.

The techniques used to allocate the same channel among multiple users are called channel allocation techniques.

Channel Allocation Techniques

Channel allocation techniques are used for the efficient use of frequencies, time slots, and bandwidth. There are three types of channel allocation techniques that can be used to resolve the channel allocation problem:
 Static channel allocation
 Dynamic channel allocation
 Hybrid channel allocation
Static Channel Allocation
The traditional way of allocating a single channel among multiple users is called static channel allocation, also known as fixed channel allocation. A telephone channel shared among many users is a real-life example.

Frequency-division multiplexing (FDM) and time-division multiplexing (TDM) are two examples of static channel allocation. In these methods, either a fixed frequency band or a fixed time slot is allotted to each user.

Dynamic Channel Allocation

The technique in which channels are not permanently allocated to users is called dynamic channel allocation. In this technique, no fixed frequency or fixed time slot is allotted to a user.

The allocation depends on the traffic. If the traffic increases, more channels are allocated; otherwise, fewer channels are allocated to the users.

This technique optimizes bandwidth usage and provides fast data transmission. Dynamic channel allocation is further categorized into two parts:

 Centralized dynamic channel allocation
 Distributed dynamic channel allocation
The following are the assumptions in dynamic channel allocation:

Station Model: The model comprises N independent stations, each with a program that generates frames for transmission.

Single Channel: A single channel is available for all communication.

Collision: If frames are transmitted at the same time by two or more stations, a collision occurs.

Continuous or slotted time: Time may be continuous (no master clock divides it into intervals) or slotted (divided into discrete intervals in which transmissions must begin).

Carrier or no carrier sense: Stations may or may not be able to sense the channel before transmission.

Hybrid Channel Allocation

The mixture of fixed channel allocation and dynamic channel allocation is called hybrid channel allocation. The total set of channels is divided into two sets, a fixed set and a dynamic set.

When a user makes a call, a channel from the fixed set is used first. If all channels in the fixed set are busy, the dynamic set is used. Hybrid channel allocation is useful when there is heavy traffic in a network.

Difference Between Static and Dynamic Channel Allocation
There are some differences between static and dynamic channel allocation. The following table compares fixed channel allocation and dynamic channel allocation.

Fixed Channel Allocation                                 | Dynamic Channel Allocation
A fixed number of channels is allocated to the cells.    | Channels are not permanently allocated to the cells.
The mobile switching centre has fewer responsibilities.  | The mobile switching centre has more responsibilities.
The allocation does not depend on traffic.               | The allocation depends on the traffic.
Cheaper than dynamic channel allocation.                 | Costlier than fixed channel allocation.
No complex algorithms are needed.                        | Complex algorithms are used.

Multiple Access Protocols in Computer Network
The Data Link Layer is responsible for the transmission of data between two nodes. Its main functions are –
 Data Link Control
 Multiple Access Control

Data Link Control –
The data link control is responsible for reliable transmission of messages over the transmission channel, using techniques like framing, error control, and flow control. For data link control, refer to – Stop and Wait ARQ
Multiple Access Control –
If there is a dedicated link between the sender and the receiver, the data link control layer is sufficient. However, if there is no dedicated link, multiple stations can access the channel simultaneously, so multiple access protocols are required to reduce collisions and avoid crosstalk. For example, in a classroom full of students, when a teacher asks a question and all the students (stations) start answering simultaneously (send data at the same time), chaos is created (data overlaps or is lost); it is then the job of the teacher (the multiple access protocol) to manage the students and make them answer one at a time.
Thus, protocols are required for sharing data on non-dedicated channels. Multiple access protocols can be subdivided further as –
1. Random Access Protocol: In this, all stations have equal priority; no station has more priority than another. Any station can send data depending on the medium's state (idle or busy). It has two features:
1. There is no fixed time for sending data
2. There is no fixed sequence of stations sending data
The random access protocols are further subdivided as:
(a) ALOHA – It was designed for wireless LANs but is also applicable to any shared medium. In this, multiple stations can transmit data at the same time, which can lead to collisions and garbled data.
 Pure Aloha:
When a station sends data, it waits for an acknowledgement. If the acknowledgement does not arrive within the allotted time, the station waits for a random amount of time called the back-off time (Tb) and re-sends the data. Since different stations wait for different amounts of time, the probability of further collisions decreases.
Vulnerable time = 2 × frame transmission time
Throughput S = G e^(−2G)
Maximum throughput = 0.184, at G = 0.5
 Slotted Aloha:
It is similar to pure ALOHA, except that time is divided into slots and transmission may begin only at the start of a slot. If a station misses the allowed time, it must wait for the next slot. This reduces the probability of collision.
Vulnerable time = frame transmission time
Throughput S = G e^(−G)
Maximum throughput = 0.368, at G = 1
For more information on ALOHA refer – LAN Technologies
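The two throughput formulas are easy to evaluate directly, which also confirms the quoted maxima:

```python
import math

def pure_aloha_throughput(G: float) -> float:
    """S = G * e^(-2G): expected successful transmissions per frame time."""
    return G * math.exp(-2 * G)

def slotted_aloha_throughput(G: float) -> float:
    """S = G * e^(-G): slotting halves the vulnerable period."""
    return G * math.exp(-G)

# Maxima quoted in the text: 0.184 at G = 0.5 (pure), 0.368 at G = 1 (slotted).
assert round(pure_aloha_throughput(0.5), 3) == 0.184
assert round(slotted_aloha_throughput(1.0), 3) == 0.368
```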
(b) CSMA – Carrier Sense Multiple Access reduces collisions by requiring a station to first sense the medium (idle or busy) before transmitting data. If the medium is idle it sends; otherwise it waits until the channel becomes idle. However, collisions are still possible in CSMA because of propagation delay. For example, suppose station A senses the medium, finds it idle, and starts sending. Before A's first bit reaches station B (delayed by propagation), B may also sense the medium, find it idle, and start sending, resulting in a collision between the data from stations A and B.
CSMA access modes –
 1-persistent: The node senses the channel; if idle it sends the data, otherwise it continuously keeps checking the medium and transmits unconditionally (with probability 1) as soon as the channel becomes idle.
 Non-persistent: The node senses the channel; if idle it sends the data, otherwise it checks the medium again after a random amount of time (not continuously) and transmits when the channel is found idle.
 P-persistent: The node senses the medium; if idle it sends the data with probability p. If the data is not transmitted (probability 1 − p), it waits for some time and checks the medium again; if idle, it again sends with probability p. This repeats until the frame is sent. It is used in Wi-Fi and packet radio systems.
 O-persistent: The priority order of nodes is decided beforehand, and transmission occurs in that order. If the medium is idle, each node waits for its turn to send data.
(c) CSMA/CD – Carrier sense multiple access with collision detection. Stations terminate transmission of data as soon as a collision is detected. For more details refer – Efficiency of CSMA/CD
(d) CSMA/CA – Carrier sense multiple access with collision avoidance. Detecting a collision involves the sender listening to the medium while transmitting: if it receives just one signal (its own), the data was sent successfully, but if it receives two signals (its own and the one it collided with), a collision has occurred. Distinguishing these two cases requires the collision to have a significant impact on the received signal, which is not the case in wireless networks, so CSMA/CA is used there instead.
CSMA/CA avoids collisions by:
1. Interframe space – The station waits for the medium to become idle and, if idle, does not send data immediately (to avoid collisions due to propagation delay); instead it waits for a period of time called the interframe space (IFS). After this time it checks the medium again. The IFS duration depends on the priority of the station.
2. Contention window – A window of time divided into slots. A sender ready to transmit chooses a random number of slots as its wait time, and the window doubles each time the medium is not found idle. If the medium becomes busy, the station does not restart the entire process; it pauses the timer and resumes it when the channel is found idle again.
3. Acknowledgement – The sender re-transmits the data if an acknowledgement is not received before the time-out.
2. Controlled Access:
In this, a station sends data only when it has been authorized by the other stations. For further details refer – Controlled Access Protocols
3. Channelization:
In this, the available bandwidth of the link is shared in time, frequency, or code among multiple stations, which can then access the channel simultaneously.
 Frequency Division Multiple Access (FDMA) – The available bandwidth is divided into equal bands so that each station can be allocated its own band. Guard bands are added so that no two bands overlap, avoiding crosstalk and noise.
 Time Division Multiple Access (TDMA) – The bandwidth is shared among multiple stations. To avoid collisions, time is divided into slots and stations are allotted these slots in which to transmit data. There is an overhead of synchronization, as each station needs to know its time slot; this is resolved by adding synchronization bits to each slot. Another issue with TDMA is propagation delay, which is resolved by the addition of guard times. For more details refer – Circuit Switching
 Code Division Multiple Access (CDMA) – One channel carries all transmissions simultaneously; there is neither division of bandwidth nor division of time. For example, if many people in a room are all speaking at the same time, perfect reception is still possible as long as each conversing pair speaks a language no one else understands. Similarly, data from different stations can be transmitted simultaneously in different code languages.
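The "different code languages" idea can be illustrated with a toy CDMA example using orthogonal chip codes. The 4-chip Walsh codes below are an assumption chosen for illustration:

```python
# Toy CDMA illustration with orthogonal chip codes.
CODE_A = [+1, +1, +1, +1]
CODE_B = [+1, -1, +1, -1]   # orthogonal to CODE_A: their dot product is 0

def transmit(bit_a: int, bit_b: int):
    """Each station multiplies its data bit (+1 or -1) by its chip code;
    the channel carries the element-wise sum of both signals."""
    return [bit_a * a + bit_b * b for a, b in zip(CODE_A, CODE_B)]

def decode(channel, code):
    """Correlate the channel signal with a station's code and take the sign."""
    dot = sum(c * s for c, s in zip(code, channel))
    return 1 if dot > 0 else -1

channel = transmit(bit_a=+1, bit_b=-1)  # both stations send simultaneously
assert decode(channel, CODE_A) == +1    # A's bit recovered despite the overlap
assert decode(channel, CODE_B) == -1    # B's bit recovered too
```

Because the codes are orthogonal, correlating with one code cancels the other station's contribution exactly, which is the "language nobody else understands" in the analogy.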
Types of area networks – LAN, MAN and WAN
The Network allows computers to connect and communicate with
different computers via any medium. LAN, MAN, and WAN are the three
major types of networks designed to operate over the area they cover.
There are some similarities and dissimilarities between them. One of the
major differences is the geographical area they cover, i.e. LAN covers the
smallest area; MAN covers an area larger than LAN and WAN comprises
the largest of all.
There are other types of Computer Networks also, like:
 PAN (Personal Area Network)
 SAN (Storage Area Network)
 EPN (Enterprise Private Network)
 VPN (Virtual Private Network)
Local Area Network (LAN) –
LAN or Local Area Network connects network devices in such a way that
personal computers and workstations can share data, tools, and programs. The
group of computers and devices are connected together by a switch, or stack of
switches, using a private addressing scheme as defined by the TCP/IP protocol.
Private addresses are unique in relation to other computers on the local
network. Routers are found at the boundary of a LAN, connecting them to the
larger WAN.
Data transmits at a very fast rate as the number of computers linked is limited.
By definition, the connections must be high speed and relatively inexpensive
hardware (Such as hubs, network adapters, and Ethernet cables). LANs cover a
smaller geographical area (Size is limited to a few kilometers) and are privately
owned. One can use it for an office building, home, hospital, schools, etc. LAN
is easy to design and maintain. A Communication medium used for LAN has
twisted-pair cables and coaxial cables. It covers a short distance, and so the
error and noise are minimized.
Early LANs had data rates in the 4 to 16 Mbps range. Today, speeds are
normally 100 or 1000 Mbps. Propagation delay is very short in a LAN. The
smallest LAN may only use two computers, while larger LANs can
accommodate thousands of computers. A LAN typically relies mostly on wired
connections for increased speed and security, but wireless connections can
also be part of a LAN. The fault tolerance of a LAN is high and there is less congestion in this network. For example: a bunch of students playing Counter-Strike in the same room (without internet).
Metropolitan Area Network (MAN) –
MAN or Metropolitan area Network covers a larger area than that of a LAN and
smaller area as compared to WAN. It connects two or more computers that are
apart but reside in the same or different cities. It covers a large geographical
area and may serve as an ISP (Internet Service Provider). MAN is designed for
customers who need high-speed connectivity. Speeds of MAN range in terms of
Mbps. It’s hard to design and maintain a Metropolitan Area Network.
The fault tolerance of a MAN is less and also there is more congestion in the
network. It is costly and may or may not be owned by a single organization. The
data transfer rate and the propagation delay of MAN are moderate. Devices
used for transmission of data through MAN are Modem and Wire/Cable.
Examples of a MAN are the part of the telephone company network that can
provide a high-speed DSL line to the customer or the cable TV network in a city.
Wide Area Network (WAN) –
WAN or Wide Area Network is a computer network that extends over a large
geographical area, although it might be confined within the bounds of a state or
country. A WAN could be a connection of LAN connecting to other LANs via
telephone lines and radio waves and may be limited to an enterprise (a
corporation or an organization) or accessible to the public. The technology is
high speed and relatively expensive.
There are two types of WAN: Switched WAN and Point-to-Point WAN. WAN is
difficult to design and maintain. Similar to a MAN, the fault tolerance of a WAN
is less and there is more congestion in the network. A Communication medium
used for WAN is PSTN or Satellite Link. Due to long-distance transmission, the
noise and error tend to be more in WAN.
A WAN's data rate is slow, about a tenth of a LAN's speed, since it involves greater distances and an increased number of servers and terminals. Speeds of a WAN range from a few kilobits per second (Kbps) to megabits per second (Mbps).
Propagation delay is one of the biggest problems faced here. Devices used for
the transmission of data through WAN are Optic wires, Microwaves, and
Satellites. An example of a Switched WAN is the asynchronous transfer mode
(ATM) network and Point-to-Point WAN is a dial-up line that connects a home
computer to the Internet.
Conclusion –
LANs have many advantages over MANs and WANs: they provide excellent reliability and high data transmission rates, can easily be managed, and allow peripheral devices to be shared. A Local Area Network cannot cover cities or towns; for that a Metropolitan Area Network is needed, which can connect a city or a group of cities together. Further, connecting a country or a group of countries requires a Wide Area Network.

LAN Technologies | ETHERNET
A local Area Network (LAN) is a data communication network connecting
various terminals or computers within a building or limited geographical area.
The connection among the devices could be wired or wireless. Ethernet, Token
Ring and Wireless LAN using IEEE 802.11 are examples of standard LAN
technologies.
LAN has the following topologies:

 Star Topology
 Bus Topology
 Ring Topology
 Mesh Topology
 Hybrid Topology
 Tree Topology
Ethernet:-
Ethernet is the most widely used LAN technology, which is defined under IEEE
standards 802.3. The reason behind its wide usability is Ethernet is easy to
understand, implement, maintain, and allows low-cost network implementation.
Also, Ethernet offers flexibility in terms of topologies that are allowed. Ethernet
generally uses Bus Topology. Ethernet operates in two layers of the OSI model,
Physical Layer, and Data Link Layer. For Ethernet, the protocol data unit is
Frame since we mainly deal with DLL. In order to handle collision, the Access
control mechanism used in Ethernet is CSMA/CD.
Manchester encoding is used in Ethernet. In IEEE 802.3 standard Ethernet, a 0 is expressed by a high-to-low transition and a 1 by a low-to-high transition. In both Manchester encoding and differential Manchester encoding, the baud rate is double the bit rate.
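The encoding rule can be sketched in Python (the "hi"/"lo" level labels are illustrative):

```python
def manchester_encode(bits: str):
    """IEEE 802.3 convention: 0 -> high-to-low, 1 -> low-to-high.
    Each bit becomes two signal levels, so the baud rate is twice the bit rate."""
    levels = []
    for b in bits:
        levels += ["hi", "lo"] if b == "0" else ["lo", "hi"]
    return levels

def manchester_decode(levels):
    """Read the first half-level of each pair to recover the bit."""
    bits = []
    for i in range(0, len(levels), 2):
        bits.append("0" if levels[i] == "hi" else "1")
    return "".join(bits)

signal = manchester_encode("101")
assert signal == ["lo", "hi", "hi", "lo", "lo", "hi"]
assert manchester_decode(signal) == "101"
assert len(signal) == 2 * len("101")  # baud rate = 2 x bit rate
```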
Advantages of Ethernet:
Speed: Compared to a wireless connection, Ethernet provides significantly higher speed, because Ethernet is a one-to-one wired connection. As a result, speeds of up to 10 Gigabits per second (Gbps), or even 100 Gbps, are possible.
Efficiency: An Ethernet cable, such as Cat6, consumes less electricity, even less than a Wi-Fi connection. As a result, Ethernet cables are considered among the most energy-efficient options.
Good data transfer quality: Because it is resistant to noise, the information transferred is of high quality.

Ethernet LANs consist of network nodes and interconnecting media or links.
The network nodes can be of two types:
Data Terminal Equipment (DTE):- Generally, DTEs are the end devices that convert user information into signals or reconvert received signals. DTE devices include personal computers, workstations, file servers, and print servers, also referred to as end stations. These devices are either the source or the destination of data frames. The data terminal equipment may be a single piece of equipment or multiple interconnected pieces that together perform all the functions required to allow the user to communicate. A user can interact with a DTE, or the DTE may itself be the user.
Data Communication Equipment (DCE):- DCEs are the intermediate network devices that receive and forward frames across the network. They may be standalone devices such as repeaters, network switches, and routers, or communications interface units such as interface cards and modems. The DCE performs functions such as signal conversion and coding, and may be part of the DTE or intermediate equipment.
Currently, these data rates are defined for operation over optical fibres and
twisted-pair cables:
i) Fast Ethernet
Fast Ethernet refers to an Ethernet network that can transfer data at a rate of
100 Mbit/s.
ii) Gigabit Ethernet
Gigabit Ethernet delivers a data rate of 1,000 Mbit/s (1 Gbit/s).
iii) 10 Gigabit Ethernet
10 Gigabit Ethernet delivers a data rate of 10 Gbit/s (10,000 Mbit/s). It is
generally used for backbones in high-end applications requiring high data rates.
ALOHA
The Aloha protocol was designed as part of a project at the University of
Hawaii. It provided data transmission between computers on several of the
Hawaiian Islands involving packet radio networks. Aloha is a multiple access
protocol at the data link layer and proposes how multiple terminals access the
medium without interference or collision.
There are two different versions of ALOHA:
1. Pure Aloha
Pure ALOHA is an un-slotted, decentralized, and simple-to-implement protocol. In
pure ALOHA, stations simply transmit frames whenever they have data to send,
without checking whether the channel is busy. If two or more stations transmit
simultaneously, a collision occurs and the frames are destroyed. Whenever a
station transmits a frame, it expects an acknowledgement from the receiver. If
the acknowledgement is not received within a specified time, the station assumes
that the frame or the acknowledgement has been destroyed, waits a random amount
of time, and sends the frame again. This randomness helps avoid further
collisions. The scheme works well in small, lightly loaded networks, but
performs poorly under heavy load. This led to the development of
Slotted Aloha.
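The transmit, wait-for-ACK, random-backoff cycle described above can be modelled with a short Python sketch. The channel behaviour and backoff range here are hypothetical, chosen only for illustration:

```python
import random

def send_frame(collision_occurred, max_backoff=16, seed=42):
    """Pure ALOHA retransmission loop (toy model): transmit immediately,
    and on collision wait a random backoff before retrying.
    Returns the number of attempts needed for success."""
    rng = random.Random(seed)
    attempts = 0
    while True:
        attempts += 1
        if not collision_occurred():           # ACK received: done
            return attempts
        backoff = rng.randint(1, max_backoff)  # random wait, in slot times
        # a real station would now stay silent for `backoff` slot times

# Example: a channel that collides on the first two attempts
outcomes = iter([True, True, False])
print(send_frame(lambda: next(outcomes)))  # 3
```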
To analyze pure ALOHA, we predict its throughput under the following
assumptions:
i) All frames have the same length.
ii) Stations cannot generate frames while transmitting or trying to transmit
frames.
iii) The population of stations attempts to transmit (both new frames and old
frames that collided) according to a Poisson distribution.

Vulnerable Time = 2 * Tt
The efficiency of Pure ALOHA:

Spure = G * e^(-2G)
where G is the average number of frames the stations attempt to transmit during
one frame transmission time Tt.

Maximum Efficiency:
Maximum Efficiency will be obtained when G=1/2

(Spure)max = (1/2) * e^(-1) ≈ 0.184

This means that in pure ALOHA, only about 18.4% of the time is used for
successful transmissions.
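The throughput formula S = G * e^(-2G) can be checked numerically; a minimal computation confirming the 18.4% peak at G = 1/2:

```python
import math

def throughput_pure(G):
    """Pure ALOHA throughput: S = G * e^(-2G)."""
    return G * math.exp(-2 * G)

# The maximum occurs at G = 1/2, giving 1/(2e) ≈ 0.184
print(round(throughput_pure(0.5), 3))  # 0.184
# Any other offered load gives lower throughput:
print(throughput_pure(1.0) < throughput_pure(0.5))  # True
```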
2. Slotted Aloha
This is quite similar to pure ALOHA, differing only in the way transmissions
take place. Instead of transmitting at the instant data becomes available, the
sender waits for the start of a time slot. In slotted ALOHA, the time of the
shared channel is divided into discrete intervals called slots. Stations may
send a frame only at the beginning of a slot, and only one frame is sent per
slot. If a station cannot place its frame onto the channel at the beginning of
a slot, it must wait until the beginning of the next slot. A collision is still
possible if two stations try to send at the beginning of the same slot, but the
number of possible collisions is reduced by a large margin, and performance is
much better than in pure ALOHA.

The efficiency of Slotted ALOHA:

Sslotted = G * e^(-G)

Maximum Efficiency:
(Sslotted)max = 1 * e^(-1) = 1/e ≈ 0.368
The maximum efficiency of slotted ALOHA is therefore 36.8%.
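Likewise, the slotted ALOHA throughput S = G * e^(-G) peaks at G = 1; a minimal numeric check:

```python
import math

def throughput_slotted(G):
    """Slotted ALOHA throughput: S = G * e^(-G)."""
    return G * math.exp(-G)

# The maximum occurs at G = 1, giving 1/e ≈ 0.368
print(round(throughput_slotted(1.0), 3))  # 0.368
# Lower offered loads yield lower throughput:
print(throughput_slotted(0.5) < throughput_slotted(1.0))  # True
```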

What is Data Link Layer Switching?


Network switching is the process of forwarding data frames or packets from one port to
another, leading to data transmission from source to destination. The data link layer
is the second layer of the Open Systems Interconnection (OSI) model; its function is
to divide the stream of bits from the physical layer into data frames and transmit the
frames according to switching requirements. Switching in the data link layer is done
by network devices called bridges.

Bridges
A data link layer bridge connects multiple LANs (local area networks) to form a larger
LAN. This process of aggregating networks is called network bridging. A bridge
connects the different components so that they appear as parts of a single network.
The following diagram shows the connection made by a bridge −
Switching by Bridges
When a data frame arrives at a particular port of a bridge, the bridge examines the
frame's data link address, or more specifically its MAC address. If the destination
address and the required switching are valid, the bridge sends the frame to the
destined port; otherwise, the frame is discarded.
A bridge is not responsible for end-to-end data transfer; it is concerned only with
transmitting the data frame from one hop to the next. Hence, bridges do not examine
the payload field of the frame, which means they can switch any kind of packet from
the network layer above.
Bridges can also connect virtual LANs (VLANs) to form a larger VLAN.
If any segment of the bridged network is wireless, a wireless bridge is used to
perform the switching.
There are three main ways of bridging −

 simple bridging
 multi-port bridging
 learning or transparent bridging
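Learning (transparent) bridging from the list above can be sketched as a small Python class. The port numbers and MAC strings below are hypothetical, and a real bridge would also age out table entries:

```python
class LearningBridge:
    """Toy transparent bridge: learns source MACs per port, forwards to
    the learned port, floods when the destination is unknown, and
    filters frames whose destination is on the arrival port."""
    def __init__(self, ports):
        self.ports = set(ports)
        self.mac_table = {}                 # MAC address -> port

    def receive(self, src, dst, in_port):
        self.mac_table[src] = in_port       # learn where src lives
        out = self.mac_table.get(dst)
        if out is None:                     # unknown destination: flood
            return sorted(self.ports - {in_port})
        if out == in_port:                  # same segment: filter (drop)
            return []
        return [out]                        # forward to the learned port

bridge = LearningBridge(ports=[1, 2, 3])
print(bridge.receive("AA", "BB", in_port=1))  # unknown dst -> flood: [2, 3]
print(bridge.receive("BB", "AA", in_port=2))  # "AA" was learned -> [1]
```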

Difference between Switch and Bridge


1. Switch :
A switch is a hardware device responsible for channeling data arriving at its
various input ports to the particular output port that will take the data toward
its desired destination. It is mainly used to transfer data packets among
network devices such as routers and servers. A switch is a data link layer
(layer 2) device that ensures the forwarded data packets are error-free and
accurate. It uses MAC addresses to forward data at the data link layer. Since a
switch accepts input from multiple ports, it is also called a multiport bridge.
2. Bridge :
A bridge is a device responsible for dividing a single network into multiple
network segments; this process is called network bridging. Each network segment
represents a separate collision domain, and each domain has its own bandwidth.
Using a bridge can improve network performance because the number of collisions
occurring on the network is reduced. The bridge decides whether incoming network
traffic should be forwarded or filtered, and it is also responsible for
maintaining the MAC (media access control) address table.
Difference between switch and bridge:

1. A switch channels data arriving at its various input ports to a particular
output port that takes the data toward its destination; a bridge divides a
single network into multiple network segments.
2. A switch can have a lot of ports; a bridge has only 2 or 4 ports.
3. A switch performs packet forwarding in hardware (e.g., ASICs), so it is
hardware based; a bridge performs packet forwarding in software, so it is
software based.
4. The switching method of a switch can be store-and-forward, fragment-free, or
cut-through; a bridge uses only store-and-forward.
5. A switch performs error checking; a bridge cannot perform error checking.
6. A switch has buffers; a bridge may not have a buffer.
