
UNIT 2

DATA LINK LAYER


Errors
When bits are transmitted over a computer network, they may get corrupted due to
interference and network problems. The corrupted bits lead to spurious data being
received by the destination and are called errors.
Error control in the data link layer is the process of detecting and correcting data frames
that have been corrupted or lost during transmission.
In case of lost or corrupted frames, the receiver does not receive the correct data frame
and the sender is unaware of the loss.
The data link layer detects transit errors and takes the necessary action, namely
retransmission of frames whenever an error is detected or a frame is lost. This process is
called Automatic Repeat Request (ARQ).

Types of Errors
Errors can be of three types, namely single bit errors, multiple bit errors, and burst errors.
 Single bit error − In the received frame, only one bit has been corrupted, i.e. either
changed from 0 to 1 or from 1 to 0.

Multiple bits error − In the received frame, more than one bit is corrupted.

Burst error − In the received frame, two or more consecutive bits are corrupted.

Error control can be done in two ways


Error detection − Error detection involves checking whether any error has occurred or
not. The number of error bits and the type of error do not matter.
Error correction − Error correction involves ascertaining the exact number of bits that
have been corrupted and the location of the corrupted bits.
For both error detection and error correction, the sender needs to send some additional bits
along with the data bits. The receiver performs necessary checks based upon the additional
redundant bits. If it finds that the data is free from errors, it removes the redundant bits before
passing the message to the upper layers.

Bit Stuffing and Byte Stuffing


Byte stuffing is a mechanism to convert a message formed of a sequence of bytes that
may contain reserved values, such as the frame delimiter, into another byte sequence that
does not contain the reserved values.
Bit stuffing is the mechanism of inserting one or more non-information bits into a
message to be transmitted, to break up the message sequence, for synchronization
purposes.

Purposes of byte stuffing and bit stuffing


In the data link layer, the stream of bits from the physical layer is divided into data frames.
 The data frames can be of fixed length or variable length.
 In variable - length framing, the size of each frame to be transmitted may be different.
So, a pattern of bits is used as a delimiter to mark the end of one frame and the beginning
of the next frame.
However, if the pattern occurs in the message, then mechanisms need to be
incorporated so that this situation is avoided.

The two common approaches are −


Byte - Stuffing − A byte is stuffed in the message to differentiate from the delimiter. This
is also called character-oriented framing.
Bit - Stuffing − A pattern of bits of arbitrary length is stuffed in the message to differentiate
from the delimiter. This is also called bit - oriented framing.

Data link layer frames in byte stuffing and bit stuffing


A data link frame has the following parts −
 Frame Header − It contains the source and the destination addresses of the frame.
 Payload field − It contains the message to be delivered. In bit stuffing it is a variable
sequence of bits, while in byte stuffing it is a variable sequence of data bytes.
 Trailer − It contains the error detection and error correction bits.
Flags − Flags are the frame delimiters signalling the start and end of the frame. In bit
stuffing, the flag is a bit pattern that marks the beginning and end of the frame; it is
generally 8 bits long and contains six consecutive 1s (e.g. 01111110). In byte stuffing, the
flag is a 1-byte protocol-dependent special character.

Mechanisms of byte stuffing versus bit stuffing
Byte Stuffing Mechanism
If the pattern of the flag byte is present in the message byte sequence, there must be a strategy
so that the receiver does not consider the pattern as the end of the frame. Here, a special byte
called the escape character (ESC) is stuffed before every byte in the message that has the same
pattern as the flag byte. If the ESC byte itself appears in the message, another ESC byte is
stuffed before it.
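As a rough illustration of the rule just described, here is a minimal Python sketch of byte stuffing and un-stuffing. The FLAG and ESC values are placeholders chosen for the example, not values mandated by any particular protocol.

FLAG = 0x7E  # example delimiter value (placeholder)
ESC = 0x7D   # example escape value (placeholder)

def byte_stuff(payload: bytes) -> bytes:
    """Escape every FLAG or ESC byte inside the payload."""
    out = bytearray()
    for b in payload:
        if b in (FLAG, ESC):
            out.append(ESC)       # stuff an ESC before the reserved byte
        out.append(b)
    return bytes(out)

def byte_unstuff(stuffed: bytes) -> bytes:
    """Reverse the stuffing at the receiver."""
    out = bytearray()
    escaped = False
    for b in stuffed:
        if escaped:
            out.append(b)         # the byte after ESC is plain data
            escaped = False
        elif b == ESC:
            escaped = True
        else:
            out.append(b)
    return bytes(out)

data = bytes([0x01, FLAG, 0x02, ESC, 0x03])
assert byte_unstuff(byte_stuff(data)) == data

Because stuffing only ever adds bytes before reserved values, the receiver can always recover the original payload unambiguously.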

Bit Stuffing Mechanism


Here, the delimiting flag sequence contains six consecutive 1s. Most protocols use the 8-bit
pattern 01111110 as the flag. In order to differentiate the message from the flag when the same
sequence appears in the data, a single bit is stuffed into the message: whenever a 0 bit is
followed by five consecutive 1 bits in the message, an extra 0 bit is stuffed after the five 1s.
When the receiver receives the message, it removes the stuffed 0 after each sequence of five
1s. The un-stuffed message is then passed to the upper layers.
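The stuffing rule above can be sketched in a few lines of Python, operating on a string of '0'/'1' characters for readability; this is an illustration, not an implementation of any specific protocol.

def bit_stuff(bits: str) -> str:
    """Insert a '0' after every run of five consecutive '1's."""
    out, ones = [], 0
    for b in bits:
        out.append(b)
        ones = ones + 1 if b == '1' else 0
        if ones == 5:
            out.append('0')   # stuffed bit
            ones = 0
    return ''.join(out)

def bit_unstuff(bits: str) -> str:
    """Remove the '0' that follows every run of five consecutive '1's."""
    out, ones, skip = [], 0, False
    for b in bits:
        if skip:
            skip = False      # drop the stuffed '0'
            ones = 0
            continue
        out.append(b)
        ones = ones + 1 if b == '1' else 0
        if ones == 5:
            skip = True
            ones = 0
    return ''.join(out)

msg = "0111111111100"
assert bit_unstuff(bit_stuff(msg)) == msg
print(bit_stuff(msg))   # -> 011111011111000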

Error Detection Techniques


There are three main techniques for detecting errors in frames: Parity Check, Checksum and
Cyclic Redundancy Check (CRC).

Parity Check
The parity check is done by adding an extra bit, called the parity bit, to the data to make
the number of 1s either even (in case of even parity) or odd (in case of odd parity).
While creating a frame, the sender counts the number of 1s in it and adds the parity bit
in the following way:
In case of even parity: if the number of 1s is even, the parity bit value is 0; if the
number of 1s is odd, the parity bit value is 1.
In case of odd parity: if the number of 1s is odd, the parity bit value is 0; if the number
of 1s is even, the parity bit value is 1.
On receiving a frame, the receiver counts the number of 1s in it. In case of an even
parity check, if the count of 1s is even, the frame is accepted; otherwise, it is rejected.
A similar rule is adopted for an odd parity check.
The parity check is suitable for single-bit error detection only; a minimal sketch follows.
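The sketch below shows parity generation and checking in Python, assuming the data is given as a bit string; the sample data is arbitrary.

def parity_bit(data_bits: str, even: bool = True) -> str:
    """Return the parity bit that makes the total number of 1s even (or odd)."""
    ones = data_bits.count('1')
    if even:
        return '0' if ones % 2 == 0 else '1'
    return '1' if ones % 2 == 0 else '0'

def check_even_parity(frame_bits: str) -> bool:
    """Accept the frame if the total count of 1s (data + parity) is even."""
    return frame_bits.count('1') % 2 == 0

data = "1011001"                          # four 1s -> even
frame = data + parity_bit(data)           # parity bit '0' is appended
assert check_even_parity(frame)           # accepted
corrupted = '0' + frame[1:]               # flip a single bit
assert not check_even_parity(corrupted)   # single-bit error detected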

Checksum
In this error detection scheme, the following procedure is applied
 Data is divided into fixed sized frames or segments.
 The sender adds the segments using 1’s complement arithmetic to get the sum. It then
complements the sum to get the checksum and sends it along with the data frames.
 The receiver adds the incoming segments along with the checksum using 1’s
complement arithmetic to get the sum and then complements it.

 If the result is zero, the received frames are accepted; otherwise, they are discarded.
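The procedure above can be illustrated with a short Python sketch; the 16-bit segment size and the sample segment values are assumptions made for the example.

def ones_complement_sum(words, bits=16):
    """Add fixed-width words with end-around carry (1's complement arithmetic)."""
    mask = (1 << bits) - 1
    total = 0
    for w in words:
        total += w
        total = (total & mask) + (total >> bits)   # wrap the carry around
    return total

def make_checksum(words, bits=16):
    """Checksum = complement of the 1's complement sum of the segments."""
    return ones_complement_sum(words, bits) ^ ((1 << bits) - 1)

def verify(words, checksum, bits=16):
    """Receiver: sum of segments plus checksum, complemented, must be zero."""
    total = ones_complement_sum(list(words) + [checksum], bits)
    return (total ^ ((1 << bits) - 1)) == 0

segments = [0x4500, 0x003C, 0x1C46, 0x4000]     # example 16-bit segments
cksum = make_checksum(segments)
assert verify(segments, cksum)                              # accepted
assert not verify([0x4501, 0x003C, 0x1C46, 0x4000], cksum)  # error detected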

Cyclic Redundancy Check (CRC)


Cyclic Redundancy Check (CRC) involves binary division of the data bits being sent by a
predetermined divisor agreed upon by the communicating system. The divisor is generated
using polynomials.
 Here, the sender performs binary division of the data segment by the divisor. It then
appends the remainder called CRC bits to the end of the data segment. This makes the
resulting data unit exactly divisible by the divisor.
 The receiver divides the incoming data unit by the divisor. If there is no remainder, the
data unit is assumed to be correct and is accepted. Otherwise, it is understood that the
data is corrupted and is therefore rejected.
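A minimal sketch of CRC generation and checking using modulo-2 (XOR) division on bit strings follows. The generator 1011 (x^3 + x + 1) is an arbitrary example divisor, not one prescribed by a particular standard.

def xor_divide(dividend: str, divisor: str) -> str:
    """Modulo-2 long division: return the remainder of dividend / divisor."""
    rem = list(dividend[:len(divisor)])
    for i in range(len(divisor), len(dividend) + 1):
        if rem[0] == '1':
            rem = [str(int(a) ^ int(b)) for a, b in zip(rem, divisor)]
        rem.pop(0)                      # shift out the leading bit
        if i < len(dividend):
            rem.append(dividend[i])     # bring down the next bit
    return ''.join(rem)

def crc_encode(data: str, divisor: str) -> str:
    """Append the CRC remainder so the result is exactly divisible by the divisor."""
    padded = data + '0' * (len(divisor) - 1)
    return data + xor_divide(padded, divisor)

def crc_check(frame: str, divisor: str) -> bool:
    """Accept only if the remainder is all zeros."""
    return set(xor_divide(frame, divisor)) <= {'0'}

data, divisor = "1101011011", "1011"
frame = crc_encode(data, divisor)
assert crc_check(frame, divisor)                             # no remainder -> accepted
assert not crc_check(frame[:3] + '0' + frame[4:], divisor)   # corrupted bit detected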

Error Correction Techniques


Error correction techniques find out the exact number of bits that have been corrupted as
well as their locations. There are two principal approaches:
Backward Error Correction (Retransmission) − If the receiver detects an error in
the incoming frame, it requests the sender to retransmit the frame. It is a relatively
simple technique. However, it can be used efficiently only where retransmission is not
expensive, as in fiber optics, and where the retransmission time is small relative to the
requirements of the application.
Forward Error Correction − If the receiver detects some error in the incoming frame,
it executes an error-correcting code that regenerates the original frame. This saves the
bandwidth required for retransmission and is indispensable in real-time systems. However,
if there are too many errors, the frames need to be retransmitted.

The four main error correction codes are


 Hamming Codes
 Binary Convolution Code
 Reed – Solomon Code
 Low-Density Parity-Check Code
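As a brief illustration of the first of these, the sketch below shows Hamming(7,4) encoding and single-bit error correction; the parity-bit layout follows the standard (7,4) construction, and the example data word is arbitrary.

def hamming74_encode(d):
    """d = [d1, d2, d3, d4] -> 7-bit codeword [p1, p2, d1, p3, d2, d3, d4]."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4          # covers codeword positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4          # covers codeword positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4          # covers codeword positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_correct(c):
    """Recompute the parities; their pattern gives the position of a single-bit error."""
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s3 * 4 + s2 * 2 + s1     # 0 means no error
    if syndrome:
        c[syndrome - 1] ^= 1            # flip the erroneous bit
    return c

word = hamming74_encode([1, 0, 1, 1])
received = list(word)
received[5] ^= 1                        # corrupt one bit in transit
assert hamming74_correct(received) == word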

Flow Control
Flow control is a technique that allows two stations working at different speeds to
communicate with each other. It is a set of measures taken to regulate the amount of data that
a sender sends so that a fast sender does not overwhelm a slow receiver. In data link layer,
flow control restricts the number of frames the sender can send before it waits for an
acknowledgment from the receiver.

Approaches of Flow Control


Flow control can be broadly classified into two categories −

Feedback based Flow Control - In these protocols, the sender sends frames after it has
received acknowledgments from the receiver. This is used in the data link layer.
 Rate based Flow Control - These protocols have built in mechanisms to restrict the
rate of transmission of data without requiring acknowledgment from the receiver. This
is used in the network layer and the transport layer.
Flow Control Techniques in Data Link Layer
The data link layer uses feedback based flow control mechanisms. The main techniques are described below.

Stop and Wait ARQ


This protocol involves the following steps:
 A timeout counter is maintained by the sender, which is started when a frame is sent.

 If the sender receives acknowledgment of the sent frame within time, the sender is
confirmed about successful delivery of the frame. It then transmits the next frame in
queue.
 If the sender does not receive the acknowledgment within time, the sender assumes that
either the frame or its acknowledgment is lost in transit. It then retransmits the frame.
 If the sender receives a negative acknowledgment, the sender retransmits the frame.
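A minimal sketch of the sender-side behaviour described above, with the channel and its losses simulated in-process; the loss probability, retry limit and payloads are illustrative assumptions, and a "timeout" is represented simply by a failed delivery attempt.

import random

def unreliable_send(frame, loss_prob=0.3):
    """Simulated channel: returns an ACK, or None if the frame or ACK was 'lost'."""
    if random.random() < loss_prob:
        return None                          # frame or acknowledgment lost in transit
    return ("ACK", frame["seq"])

def stop_and_wait_send(frames, max_retries=10):
    for seq, payload in enumerate(frames):
        frame = {"seq": seq % 2, "data": payload}    # 1-bit sequence number
        for attempt in range(max_retries):
            ack = unreliable_send(frame)             # start of the timeout interval
            if ack == ("ACK", frame["seq"]):
                print(f"frame {seq} delivered (attempt {attempt + 1})")
                break                                # send the next frame in queue
            print(f"frame {seq}: timeout, retransmitting")
        else:
            raise RuntimeError("link appears to be down")

random.seed(1)
stop_and_wait_send(["hello", "world", "!"])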

Go-Back-N ARQ
The working principle of this protocol is:
The sender has a buffer called the sending window.
The sender sends multiple frames based upon the sending-window size, without
receiving the acknowledgment of the previous ones.
The receiver receives frames one by one. It keeps track of the incoming frames' sequence
numbers and sends the corresponding acknowledgment frames.
After the sender has sent all the frames in the window, it checks up to which sequence number
it has received positive acknowledgments.
If the sender has received positive acknowledgments for all the frames, it sends the next set
of frames.
If the sender receives a NACK, or does not receive any ACK for a particular frame, it
retransmits that frame and all the frames sent after it (a sketch of this behaviour follows).
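The Python fragment below models a Go-Back-N sender with cumulative acknowledgements and a receiver simulated in the same loop; the window size and the simulated loss of frame 2 are assumptions made for the example.

def gbn_send(frames, window=4, lose_once=None):
    """Go-Back-N sender; frames listed in 'lose_once' are lost on their first transmission."""
    lose_once = lose_once or set()
    dropped = set()
    base = 0                                  # oldest unacknowledged frame
    while base < len(frames):
        expected = base                       # receiver's next expected sequence number
        for seq in range(base, min(base + window, len(frames))):
            if seq in lose_once and seq not in dropped:
                dropped.add(seq)              # simulate loss; later frames in this window
                break                         # would be discarded by the receiver anyway
            if seq == expected:
                expected += 1                 # in-order frame accepted, cumulative ACK grows
        if expected == base:
            print(f"timeout for frame {base}: go back and resend the window")
        else:
            print(f"cumulative ACK received up to frame {expected - 1}")
        base = expected                       # slide the window forward

gbn_send(["f0", "f1", "f2", "f3", "f4", "f5"], window=4, lose_once={2})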

Selective Repeat ARQ


 Both the sender and the receiver have buffers called sending window and receiving
window respectively.
 The sender sends multiple frames based upon the sending-window size, without
receiving the acknowledgment of the previous ones.
 The receiver also receives multiple frames within the receiving window size.
The receiver keeps track of the incoming frames' sequence numbers and buffers the frames in
memory.
It sends an ACK for every successfully received frame and sends a NACK only for frames
that are missing or damaged.
The sender, in this case, retransmits only the frames for which a NACK is received.

Sliding Window
This protocol improves the efficiency of stop and wait protocol by allowing multiple frames
to be transmitted before receiving an acknowledgment.
The working principle of this protocol can be described as follows −
Both the sender and the receiver have finite-sized buffers called windows. The sender and
the receiver agree upon the number of frames to be sent based upon the buffer size.
 The sender sends multiple frames in a sequence, without waiting for acknowledgment.
When its sending window is filled, it waits for acknowledgment. On receiving
acknowledgment, it advances the window and transmits the next frames, according to
the number of acknowledgments received.

HDLC
 High-level Data Link Control (HDLC) is a group of communication protocols of the
data link layer for transmitting data between network points or nodes.
 Since it is a data link protocol, data is organized into frames.
 A frame is transmitted via the network to the destination that verifies its successful
arrival.
 It is a bit - oriented protocol that is applicable for both point - to - point and multipoint
communications.

Transfer Modes
HDLC supports two types of transfer modes, normal response mode and asynchronous
balanced mode.
Normal Response Mode (NRM) − Here, there are two types of stations: a primary
station that sends commands and one or more secondary stations that respond to the
received commands. It is used for both point - to - point and multipoint communications.

 Asynchronous Balanced Mode (ABM) − Here, the configuration is balanced, i.e. each
station can both send commands and respond to commands. It is used for only point -
to - point communications.

HDLC Frame
HDLC is a bit - oriented protocol where each frame contains up to six fields. The structure
varies according to the type of frame. The fields of a HDLC frame are −

 Flag − It is an 8-bit sequence that marks the beginning and the end of the frame. The bit
pattern of the flag is 01111110.
 Address − It contains the address of the receiver. If the frame is sent by the primary
station, it contains the address(es) of the secondary station(s). If it is sent by the
secondary station, it contains the address of the primary station. The address field may
be from 1 byte to several bytes.
 Control − It is 1 or 2 bytes containing flow and error control information.
 Payload − This carries the data from the network layer. Its length may vary from one
network to another.
FCS − It is a 2-byte or 4-byte frame check sequence for error detection. The standard
code used is the cyclic redundancy check (CRC).

Types of HDLC Frames


There are three types of HDLC frames. The type of frame is determined by the control field
of the frame −
 I-frame − I-frames or Information frames carry user data from the network layer. They
also include flow and error control information that is piggybacked on user data. The
first bit of control field of I-frame is 0.
S-frame − S-frames or Supervisory frames do not contain an information field. They are
used for flow and error control when piggybacking is not required. The first two bits of the
control field of an S-frame are 10.
U-frame − U-frames or Unnumbered frames are used for miscellaneous functions such as
link management. A U-frame may contain an information field, if required. The first two
bits of the control field of a U-frame are 11. A small sketch of this classification follows.
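The sketch below classifies a frame from its control byte. It assumes, for illustration, that the first transmitted bit corresponds to the least significant bit of the byte; the sample byte values are arbitrary.

def hdlc_frame_type(control_byte: int) -> str:
    """Classify an HDLC frame from its control byte (first transmitted bit = LSB, an assumption)."""
    if (control_byte & 0x01) == 0:       # first bit 0        -> Information frame
        return "I-frame"
    if (control_byte & 0x03) == 0x01:    # first two bits 1,0 -> Supervisory frame
        return "S-frame"
    return "U-frame"                     # first two bits 1,1 -> Unnumbered frame

print(hdlc_frame_type(0b00000000))   # I-frame
print(hdlc_frame_type(0b00000001))   # S-frame
print(hdlc_frame_type(0b00000011))   # U-frame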

MEDIUM ACCESS SUB LAYER
Point to Point Protocol
 Point - to - Point Protocol (PPP) is a communication protocol of the data link layer that
is used to transmit multiprotocol data between two directly connected (point-to-point)
computers.
 It is a byte - oriented protocol that is widely used in broadband communications having
heavy loads and high speeds.
Since it is a data link layer protocol, data is transmitted in frames. PPP is defined in
RFC 1661.
Services Provided by PPP
The main services provided by Point - to - Point Protocol are −
 Defining the frame format of the data to be transmitted.
 Defining the procedure of establishing link between two points and exchange of data.
 Stating the method of encapsulation of network layer data in the frame.
 Stating authentication rules of the communicating devices.
 Providing address for network communication.
 Providing connections over multiple links.
Supporting a variety of network layer protocols by providing a range of services.

Components of PPP
Point - to - Point Protocol is a layered protocol having three components −
 Encapsulation Component − It encapsulates the datagram so that it can be transmitted
over the specified physical layer.
 Link Control Protocol (LCP) − It is responsible for establishing, configuring, testing,
maintaining and terminating links for transmission. It also handles negotiation of options
and the use of features by the two endpoints of the link.
 Authentication Protocols (AP) − These protocols authenticate endpoints for use of
services. The two authentication protocols of PPP are:
o Password Authentication Protocol (PAP)
o Challenge Handshake Authentication Protocol (CHAP)
 Network Control Protocols (NCPs) − These protocols are used for negotiating the
parameters and facilities for the network layer. For every higher-layer protocol
supported by PPP, one NCP is there. Some of the NCPs of PPP are:
 Internet Protocol Control Protocol (IPCP)
 OSI Network Layer Control Protocol (OSINLCP)
 Internetwork Packet Exchange Control Protocol (IPXCP)
 DECnet Phase IV Control Protocol (DNCP)
 NetBIOS Frames Control Protocol (NBFCP)
 IPv6 Control Protocol (IPV6CP)

PPP Frame
PPP is a byte - oriented protocol where each field of the frame is composed of one or more
bytes. The fields of a PPP frame are −
 Flag − 1 byte that marks the beginning and the end of the frame. The bit pattern of the
flag is 01111110.
 Address − 1 byte which is set to 11111111 in case of broadcast.
 Control − 1 byte set to a constant value of 11000000.
 Protocol − 1 or 2 bytes that define the type of data contained in the payload field.
 Payload − This carries the data from the network layer. The maximum length of the
payload field is 1500 bytes. However, this may be negotiated between the endpoints of
communication.
FCS − It is a 2-byte or 4-byte frame check sequence for error detection. The standard
code used is the cyclic redundancy check (CRC).

FDDI
Fiber Distributed Data Interface (FDDI) is a set of ANSI and ISO standards for transmission
of data in local area network (LAN) over fiber optic cables. It is applicable in large LANs that
can extend up to 200 kilometers in diameter.

Features
 FDDI uses optical fiber as its physical medium.
It operates in the physical and medium access control (MAC) layers of the Open Systems
Interconnection (OSI) network model.
 It provides high data rate of 100 Mbps and can support thousands of users.
 It is used in LANs up to 200 kilometers for long distance voice and multimedia
communication.
 It uses ring based token passing mechanism and is derived from IEEE 802.4 token bus
standard.
 It contains two token rings, a primary ring for data and token transmission and a
secondary ring that provides backup if the primary ring fails.
 FDDI technology can also be used as a backbone for a wide area network (WAN).


Frame Format
The frame format of FDDI is similar to that of Token Bus.

The fields of an FDDI frame are −


 Preamble: 1 byte for synchronization.
 Start Delimiter: 1 byte that marks the beginning of the frame.
 Frame Control: 1 byte that specifies whether this is a data frame or control frame.
 Destination Address: 2-6 bytes that specifies address of destination station.
 Source Address: 2-6 bytes that specifies address of source station.
 Payload: A variable length field that carries the data from the network layer.
 Checksum: 4 bytes frame check sequence for error detection.
 End Delimiter: 1 byte that marks the end of the frame.

Token Bus and Token Ring


Token Ring
 Token ring (IEEE 802.5) is a communication protocol in a local area network (LAN)
where all stations are connected in a ring topology and pass one or more tokens for
channel acquisition.
 A token is a special frame of 3 bytes that circulates along the ring of stations. A station
can send data frames only if it holds a token. The tokens are released on successful
receipt of the data frame.

Token Passing Mechanism in Token Ring


If a station has a frame to transmit when it receives a token, it sends the frame and then passes
the token to the next station; otherwise it simply passes the token to the next station. Passing
the token means receiving the token from the preceding station and transmitting to the
successor station. The data flow is unidirectional in the direction of the token passing. So
that tokens are not circulated indefinitely, they are removed from the network once their
purpose is completed.

Token Bus
Token Bus (IEEE 802.4) is a standard for implementing token ring over a virtual ring in
LANs.
The physical medium has a bus or a tree topology and uses coaxial cables.
 A virtual ring is created with the nodes/stations and the token is passed from one node
to the next in a sequence along this virtual ring.
 Each node knows the address of its preceding station and its succeeding station.
 A station can only transmit data when it has the token.
 The working principle of token bus is similar to Token Ring.

Token Passing Mechanism in Token Bus


A token is a small message that circulates among the stations of a computer network providing
permission to the stations for transmission. If a station has data to transmit when it receives a
token, it sends the data and then passes the token to the next station; otherwise, it simply passes
the token to the next station.

Reservation
 In the reservation method, a station needs to make a reservation before sending data.
 The time line has two kinds of periods:
1. Reservation interval of fixed time length
2. Data transmission period of variable frames.
 If there are M stations, the reservation interval is divided into M slots, and each station
has one slot.
Suppose station 1 has a frame to send; it transmits a 1 bit during slot 1. No other
station is allowed to transmit during this slot.
In general, the i-th station may announce that it has a frame to send by inserting a 1 bit into
the i-th slot. After all M slots have been checked, each station knows which stations wish to
transmit.
 The stations which have reserved their slots transfer their frames in that order.
 After data transmission period, next reservation interval begins.
 Since everyone agrees on who goes next, there will never be any collisions.

For example, consider five stations and a five-slot reservation frame. In the first interval,
only stations 1, 3, and 4 make reservations. In the second interval, only station 1 makes a
reservation.

Polling
 Polling process is similar to the roll-call performed in class. Just like the teacher, a
controller sends a message to each node in turn.
In this, one node acts as a primary station (controller) and the others are secondary stations.
All data exchanges must be made through the controller.
 The message sent by the controller contains the address of the node being selected for
granting access.
Although all nodes receive the message, only the addressed one responds to it and sends
data, if any. If there is no data, usually a "poll reject" (NAK) message is sent back.
 Problems include high overhead of the polling messages and high dependence on the
reliability of the controller.

Pure Aloha
It allows users to transmit whenever they have data to send.
 Senders wait to see if a collision occurred (after the whole message has been sent).
 If collision occurs, each station involved waits a random amount of time then tries again.
Systems in which multiple users share a common channel in a way that can lead to conflicts
are widely known as contention systems.
Whenever two frames try to occupy the channel at the same time, there will be a collision
and both will be garbled.
If the first bit of a new frame overlaps with just the last bit of a frame that is almost finished,
both frames will be totally destroyed and both will have to be retransmitted later.
 Frames are transmitted at completely arbitrary times.
 The throughput of the Pure Aloha is maximized when the frames are of uniform length.
The formula to calculate the throughput of Pure Aloha is
S = G * e^(-2G)
The throughput is maximum when G = 1/2, which gives about 18.4% of the total transmitted
data frames.

Slotted Aloha
 It was invented to improve the efficiency of Pure Aloha as chances of collision in Pure
Aloha are very high.
 The time of the shared channel is divided into discrete intervals called slots.
 The stations can send a frame only at the beginning of the slot and only one frame is
sent in each slot.
 If any station is not able to place the frame onto the channel at the beginning of the slot
then the station has to wait until the beginning of the next time slot.
The formula to calculate the throughput of Slotted Aloha is
S = G * e^(-G)
The throughput is maximum when G = 1, which gives about 36.8% of the total transmitted
data frames.
At G = 1, about 37% of the slots are empty, 37% carry successful transmissions and 26%
suffer collisions. A short numeric check follows.
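The snippet below simply evaluates the two throughput formulas at their maximizing offered loads.

import math

def pure_aloha_throughput(G):
    return G * math.exp(-2 * G)      # S = G * e^(-2G)

def slotted_aloha_throughput(G):
    return G * math.exp(-G)          # S = G * e^(-G)

print(f"Pure ALOHA at G = 0.5:   {pure_aloha_throughput(0.5):.3f}")   # ~0.184
print(f"Slotted ALOHA at G = 1:  {slotted_aloha_throughput(1.0):.3f}") # ~0.368
# At G = 1 for Slotted ALOHA: P(empty slot) = e^-1 ~ 0.37,
# P(success) = e^-1 ~ 0.37, P(collision) = 1 - 2*e^-1 ~ 0.26.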

Difference between Pure Aloha and Slotted Aloha


1. Pure Aloha: any station can transmit the data at any time. Slotted Aloha: a station can transmit the data only at the beginning of a time slot.
2. Pure Aloha: time is continuous and not globally synchronized. Slotted Aloha: time is discrete and globally synchronized.
3. Pure Aloha: vulnerable time = 2 x Tt. Slotted Aloha: vulnerable time = Tt.
4. Pure Aloha: probability of successful transmission of a data packet S = G x e^(-2G). Slotted Aloha: S = G x e^(-G).
5. Pure Aloha: maximum efficiency = 18.4%. Slotted Aloha: maximum efficiency = 36.8%.
6. Pure Aloha does not reduce the number of collisions to half. Slotted Aloha reduces the number of collisions to half and doubles the efficiency of Pure Aloha.

Carrier Sense Multiple Access (CSMA)
This method was developed to decrease the chance of collisions when two or more
stations start sending their signals over the shared channel.
 Carrier Sense multiple access requires that each station first check the state of the
medium before sending.
Vulnerable Time – It is the Propagation Time (Tp). This is the time needed for a signal to
propagate from one end of the medium to the other end.
Vulnerable time = Propagation time (Tp)

Persistence Methods in CSMA


 1 – persistent method: If the station finds the line idle, it sends frame immediately
(with probability 1).

 Non – persistent method: If the line is idle, station sends the frame immediately. If the
line is not idle, it waits for a random amount of time and then senses the line again.

p – persistent method: If the line is idle, the station sends the frame with probability p
and defers to the next slot with probability 1 - p. It combines the advantages of the other
two strategies.

Carrier Sense Multiple Access with Collision Detection (CSMA/CD)
 In this method, a station monitors the medium after it sends a frame to see if the
transmission was successful.
 If successful, the station is finished, if not, the frame is sent again.

For example, suppose A starts sending the first bit of its frame at t1 and C, seeing the channel
idle at t2, starts sending its frame at t2. C detects A's frame at t3 and aborts its transmission.
A detects C's frame at t4 and aborts its transmission. The transmission time for C's frame is
therefore t3 - t2 and for A's frame it is t4 - t1.

So, the frame transmission time (Tfr) should be at least twice the maximum propagation time
(Tp). This can be deduced by considering the case in which the two stations involved in a
collision are the maximum distance apart. A worked example follows.
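The link length, propagation speed and bit rate below are illustrative values for the Tfr >= 2 * Tp condition, not the actual IEEE 802.3 parameters.

# Illustrative values (assumptions, not IEEE 802.3 figures)
distance_m = 2500            # maximum distance between two stations, metres
prop_speed = 2e8             # signal propagation speed in the medium, m/s
bit_rate = 10e6              # 10 Mbps

Tp = distance_m / prop_speed          # one-way propagation time
min_frame_time = 2 * Tp               # Tfr must be at least 2 * Tp
min_frame_bits = bit_rate * min_frame_time

print(f"Tp = {Tp * 1e6:.1f} us, minimum frame time = {min_frame_time * 1e6:.1f} us")
print(f"minimum frame size = {min_frame_bits:.0f} bits")   # 250 bits with these numbers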


Throughput and Efficiency – The throughput of CSMA/CD is much greater than pure or
slotted ALOHA.
For the 1-persistent method, the throughput is about 50% when G = 1.
For the non-persistent method, the throughput can go up to 90%.

FDMA
 Frequency Division Multiple Access (FDMA) is one of the most common analogue
multiple access methods.
The frequency band is divided into channels of equal bandwidth so that each
conversation is carried on a different frequency.

Overview
 In FDMA method, guard bands are used between the adjacent signal spectra to minimize
crosstalk between the channels.
 A specific frequency band is given to one person, and it will be received by identifying
each of the frequency on the receiving end.
It was often used in the first generation of analog mobile phones.

Advantages of FDMA
As FDMA systems use low bit rates (large symbol time) compared to the average delay spread,
they offer the following advantages −
The low bit rate per channel and the use of efficient numerical codes increase the
capacity.
It reduces the cost and lowers the inter-symbol interference (ISI).
 Equalization is not necessary.
 An FDMA system can be easily implemented. A system can be configured so that the
improvements in terms of speech encoder and bit rate reduction may be easily
incorporated.
Since the transmission is continuous, fewer bits are required for synchronization and
framing.

Disadvantages of FDMA
Although FDMA offers several advantages, it has a few drawbacks as well, which are listed
below −

 It does not differ significantly from analog systems; improving the capacity depends on
the signal-to-interference reduction, or a signal-to-noise ratio (SNR).
 The maximum flow rate per channel is fixed and small.
 Guard bands lead to a waste of capacity.
The hardware requires narrowband filters, which cannot be realized in VLSI and therefore
increase the cost.
TDMA
 Time Division Multiple Access (TDMA) is a digital cellular telephone communication
technology.
 It facilitates many users to share the same frequency without interference.
 Its technology divides a signal into different timeslots, and increases the data carrying
capacity.

Overview
 Time Division Multiple Access (TDMA) is a complex technology, because it requires
an accurate synchronization between the transmitter and the receiver.
TDMA is used in digital mobile radio systems. The individual mobile stations are cyclically
assigned a frequency for the exclusive use of a time interval.
 In most of the cases, the entire system bandwidth for an interval of time is not assigned
to a station.
 However, the frequency of the system is divided into sub-bands, and TDMA is used for
the multiple access in each sub-band. Sub-bands are known as carrier frequencies.
Mobile systems that use this technique are referred to as multi-carrier systems.

As an example, suppose the frequency band is shared by three users. Each user is assigned
definite timeslots to send and receive data: user 'B' sends after user 'A', and user 'C' sends
thereafter. Because each user transmits in bursts, the peak transmit power is higher than it
would be with continuous transmission, which can be a problem.

Advantages of TDMA
Here is a list of few notable advantages of TDMA −
Permits flexible rates (several slots can be assigned to a user; for example, if each slot
carries 32 kbps, a user needing 64 kbps is assigned two slots per frame).
Can handle bursty or variable bit rate traffic. The number of slots allocated to a user can
be changed frame by frame (for example, two slots in frame 1, three slots in frame 2, one
slot in frame 3, zero slots in frame 4, etc.).
 No guard band required for the wideband system.

 No narrowband filter required for the wideband system.

Disadvantages of TDMA
The disadvantages of TDMA are as follow −
 High data rates of broadband systems require complex equalization.
 Due to the burst mode, a large number of additional bits are required for synchronization
and supervision.
Guard time is needed in each slot to accommodate timing inaccuracies (due to clock
instability).
 Electronics operating at high bit rates increase energy consumption.
 Complex signal processing is required to synchronize within short slots.

CDMA
 Code Division Multiple Access (CDMA) is a sort of multiplexing that facilitates various
signals to occupy a single transmission channel.
 It optimizes the use of available bandwidth. The technology is commonly used in ultra-
high-frequency (UHF) cellular telephone systems.

Overview
 Code Division Multiple Access system is very different from time and frequency
multiplexing.
 In this system, a user has access to the whole bandwidth for the entire duration.
 The basic principle is that different CDMA codes are used to distinguish among the
different users.

How does communication with codes take place?


i. If two different codes are multiplied together (chip by chip) and summed, the result is 0.
ii. If a code is multiplied with itself and summed, we get 4 (the number of stations).
For ex.
Let there be 4 stations.
Let code for Station 1, 2, 3 and 4 be c1, c2, c3 and c4 respectively.
Let data for Station 1, 2, 3 and 4 be d1, d2, d3 and d4 respectively.
Therefore, c1 * c2 = 0 and c2 * c2 = 4
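A minimal sketch of this idea with four stations using 4-chip Walsh codes written as +1/-1 chips (the standard rows of a 4x4 Walsh/Hadamard matrix); the data bits are arbitrary example values.

# 4-chip orthogonal (Walsh) codes for the four stations, chips written as +1/-1
c1 = [ 1,  1,  1,  1]
c2 = [ 1, -1,  1, -1]
c3 = [ 1,  1, -1, -1]
c4 = [ 1, -1, -1,  1]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

assert dot(c1, c2) == 0        # two different codes "multiply" to 0
assert dot(c2, c2) == 4        # a code with itself gives 4, the number of stations

# Each station multiplies its data bit (+1 or -1) by its code; the channel adds them up.
d1, d2, d3, d4 = +1, -1, +1, +1
channel = [d1 * x + d2 * y + d3 * z + d4 * w for x, y, z, w in zip(c1, c2, c3, c4)]

# A receiver recovers station 2's bit by taking the inner product with c2 and dividing by 4.
recovered_d2 = dot(channel, c2) // 4
assert recovered_d2 == d2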

The factors deciding the CDMA capacity are −
 Processing Gain
 Signal to Noise Ratio
 Voice Activity Factor
 Frequency Reuse Efficiency

Advantages of CDMA
CDMA has a soft capacity. The greater the number of codes, the more the number of users. It
has the following advantages −
CDMA requires tight power control, as it suffers from the near-far effect. In other words,
a user near the base station transmitting at the same power would drown out the signals of
users farther away, so all signals must arrive at the receiver with more or less equal power.
Rake receivers can be used to improve signal reception. Versions of the signal delayed in
time (by a chip or more), i.e. multipath signals, can be collected and used to make
decisions at the bit level.
Flexible (soft) handover may be used. A mobile can switch base stations without
interrupting the call: two base stations receive the mobile's signal, and the mobile receives
signals from the two base stations.
 Transmission Burst − reduces interference.

Disadvantages of CDMA
The disadvantages of using CDMA are as follows −
 The code length must be carefully selected. A large code length can induce delay or may
cause interference.
 Time synchronization is required.
 Gradual transfer increases the use of radio resources and may reduce capacity.
The power received at and transmitted from a base station requires constant, tight power
control. This can result in several handovers.

LLC
 The logical link control (LLC) is the upper sublayer of the data link layer of the open
system interconnections (OSI) reference model for data transmission.
It acts as an interface between the network layer and the medium access control (MAC)
sublayer of the data link layer.
 The LLC sublayer is mainly used for its multiplexing property.
 It allows several network protocols to operate simultaneously within a multipoint
network over the same network medium.
 The Open System Interconnections (OSI) model is a 7 – layered networking framework
that conceptualizes how communications should be done between heterogeneous
systems.
 The data link layer is the second lowest layer. It is divided into two sublayers −
 The logical link control (LLC) sublayer
 The medium access control (MAC) sublayer

Functions
 The primary function of LLC is to multiplex protocols over the MAC layer while
transmitting and likewise to de-multiplex the protocols while receiving.
 LLC provides hop-to-hop flow and error control.
 It allows multipoint communication over computer network.
 Frame Sequence Numbers are assigned by LLC.
In case of acknowledged services, it tracks acknowledgements.

Ethernet
 Ethernet is most widely used LAN Technology, which is defined under IEEE standards
802.3.
The reason behind its wide usability is that Ethernet is easy to understand, implement and
maintain, and it allows low-cost network implementation.
 Also, Ethernet offers flexibility in terms of topologies which are allowed.
Ethernet operates in two layers of the OSI model, the Physical Layer and the Data Link Layer.
For Ethernet, the protocol data unit is the frame, since we mainly deal with the data link
layer. To handle collisions, the access control mechanism used in Ethernet is CSMA/CD.

Traditional Ethernet and Fast Ethernet


 Traditional Ethernet supports data transfers at a rate of 10 megabits per second (Mbps).
As the performance needs of networks increased over time, the industry created
additional Ethernet specifications for Fast Ethernet and Gigabit Ethernet. The most
common form of traditional Ethernet, however, is 10Base-T. It offers better electrical
properties than Thicknet or Thinnet because 10Base-T cables use unshielded twisted
pair (UTP) wiring rather than coaxial. 10Base-T is also more cost-effective than
alternatives such as fiber optic cabling.
 Fast Ethernet extends traditional Ethernet performance up to 100 Mbps, and Gigabit
Ethernet, up to 1,000 Mbps. Although they aren't available to the average consumer, 10
Gigabit Ethernet (10,000 Mbps) now powers the networks of some businesses, data
centers, and Internet2 entities. Generally, however, the expense limits its widespread
adoption. Fast Ethernet comes in two major varieties:
i. 100Base-T (using unshielded twisted pair cable)
ii. 100Base-FX (using fiber optic cable)

Network Devices
1. Repeater – A repeater operates at the physical layer. Its job is to regenerate the signal over
the same network before the signal becomes too weak or corrupted so as to extend the length
to which the signal can be transmitted over the same network. An important point to be noted
about repeaters is that they do not amplify the signal. When the signal becomes weak, they
copy the signal bit by bit and regenerate it at the original strength. It is a 2 port device.

2. Hub – A hub is basically a multiport repeater. A hub connects multiple wires coming from
different branches, for example, the connector in star topology which connects different
stations. Hubs cannot filter data, so data packets are sent to all connected devices. In other
words, collision domain of all hosts connected through Hub remains one. Also, they do not
have intelligence to find out best path for data packets which leads to inefficiencies and
wastage.

Types of Hub
Active Hub - These are the hubs which have their own power supply and can clean,
boost and relay the signal along the network. They serve both as repeaters and as wiring
centres. These are used to extend the maximum distance between nodes.
 Passive Hub - These are the hubs which collect wiring from nodes and power supply
from active hub. These hubs relay signals onto the network without cleaning and
boosting them and can’t be used to extend the distance between nodes.

3. Bridge – A bridge operates at the data link layer. A bridge is a repeater with the added
functionality of filtering content by reading the MAC addresses of the source and destination.
It is also used for interconnecting two LANs working on the same protocol. It has a single
input and a single output port, thus making it a 2 port device.

Types of Bridges
 Transparent Bridges - These are the bridge in which the stations are completely
unaware of the bridge’s existence i.e. whether or not a bridge is added or deleted from
the network, reconfiguration of the stations is unnecessary. These bridges make use of
two processes i.e. bridge forwarding and bridge learning.
Source Routing Bridges - In these bridges, the routing operation is performed by the source
station and the frame specifies which route to follow. The host can discover the route by
sending a special frame called a discovery frame, which spreads through the entire
network using all possible paths to the destination.

4. Switch – A switch is a multiport bridge with a buffer and a design that can boost its
efficiency (a large number of ports imply less traffic) and performance. A switch is a data link
layer device. The switch can perform error checking before forwarding data, which makes it
very efficient, as it does not forward packets that have errors and forwards good packets
selectively to the correct port only. In other words, a switch divides the collision domain of
hosts, but the broadcast domain remains the same.

5. Routers – A router is a device like a switch that routes data packets based on their IP
addresses. Router is mainly a Network Layer device. Routers normally connect LANs and
WANs together and have a dynamically updating routing table based on which they make
decisions on routing the data packets. Routers divide the broadcast domains of hosts
connected through them.

6. Gateway – A gateway, as the name suggests, is a passage to connect two networks together
that may work upon different networking models. They basically work as the messenger agents
that take data from one system, interpret it, and transfer it to another system. Gateways are
also called protocol converters and can operate at any network layer. Gateways are generally
more complex than switch or router.

7. Brouter – Also known as a bridging router, it is a device that combines the features of both
a bridge and a router. It can work either at the data link layer or at the network layer. Working
as a router, it is capable of routing packets across networks; working as a bridge, it is capable
of filtering local area network traffic.

