IV-Sem - Computer Network Notes

The document provides an overview of Computer Networks, covering definitions, goals, components, and classifications such as LAN, MAN, and WAN. It discusses the layered architecture of networks, including the OSI and TCP/IP models, detailing their respective layers and functionalities. Additionally, it highlights connection-oriented and connectionless services, design issues, and the significance of protocols in data communication.

Uploaded by Aditi Agrawal

Program : B.Tech
Subject Name: Computer Networks
Subject Code: CS-602
Semester: 6th
Downloaded from www.rgpvnotes.in

Department of Computer Science and Engineering


CS602 Computer Networks
Subject Notes: UNIT-I

Syllabus: Computer Network: Definitions, goals, components, Architecture, Classifications & Types.
Layered Architecture: Protocol hierarchy, Design Issues, Interfaces and Services, Connection Oriented &
Connectionless Services, Service primitives, Design issues & its functionality. ISO OSI Reference Model:
Principle, Model, Descriptions of various layers and its comparison with TCP/IP. Principals of physical
layer: Media, Bandwidth, Data rate and Modulations.

Computer Network: Definition


A computer network is a set of computers connected together for the purpose of sharing resources. The most common shared resource today is a connection to the Internet; other shared resources can include a printer or a file server.

#Goals
 Resource and load sharing: several machines can share printers, tape drives, etc., and programs need not run on a single machine
 Reduced cost
 High reliability: if a machine goes down, another can take over
 Mail and communication

#Components
A data communications system has five components.

Fig. 1.1 Computer Network: Components

1. Message. The message is the information (data) to be communicated. Popular forms of information
include text, numbers, pictures, audio, and video.
2. Sender. The sender is the device that sends the data message. It can be a computer, workstation,
telephone handset, video camera, and so on.
3. Receiver. The receiver is the device that receives the message. It can be a computer, workstation,
telephone handset, television, and so on.
4. Transmission medium. The transmission medium is the physical path by which a message travels from
sender to receiver. Some examples of transmission media include twisted-pair wire, coaxial cable, fiber-optic cable, and radio waves.
5. Protocol. A protocol is a set of rules that govern data communications. It represents an agreement
between the communicating devices. Without a protocol, two devices may be connected but not
communicating.

#Architecture

Page no: 1 Get real-time updates from RGPV



Network architecture is the design of a communications network. It is a framework for the specification of
a network's physical components and their functional organization and configuration.
In telecommunication, the specification of a network architecture may also include a detailed description
of products and services delivered via a communications network, as well as detailed rate and billing
structures under which services are compensated. The network architecture of the Internet is predominantly expressed by its use of the Internet Protocol Suite, rather than by a specific model for interconnecting networks or nodes, or by the usage of specific types of hardware links.
#Computer Networks: Classifications & Types
There are three types of network classification:
1) LAN (Local area network)
2) MAN (Metropolitan Area network)
3) WAN (Wide area network)

Fig. 1.2 Computer Network: Classifications

1) Local area network (LAN)


A LAN is a group of computers located in the same room, on the same floor, or in the same building,
connected to each other to form a single network so they can share resources such as disk drives, data,
CPUs, and modems. A LAN is limited to a geographical area of less than about 2 km. The most widely
used LAN technology is Ethernet, a bus-topology system.
Characteristics of LAN
1) A LAN connects computers within a single building or block, working in a limited area of less than about 2 km.
2) Common media access control methods in a LAN are bus-based Ethernet and token ring.

Fig. 1.3 Local area network

2) Metropolitan Area network (MAN)


The metropolitan area network is a large computer network that spans a metropolitan area or a
campus. Its geographic reach falls between that of a LAN and a WAN, extending to around 50 km.
Devices used include modems and wire/cable.

Fig. 1.4 Metropolitan Area network


Characteristics of MAN
1) It covers towns and cities (up to about 50 km).
2) Optical fibre cable is the most common communication medium in a MAN, though other media are also used.

3) Wide area Network (WAN)


The wide area network is a network that connects cities, countries, or continents over public
communication links. The most popular example of a WAN is the Internet. A WAN is used to interconnect
LANs so that users and computers in one location can communicate with users and computers in other locations.

Fig. 1.5 Wide area Network


Characteristics of WAN
1) It covers large distances (more than 100 km).
2) Communication media used include satellite links and telephone lines, which are connected by routers.

#Layered Architecture:
Protocol hierarchy: - To tackle design complexity, most networks are organized as a set of layers or
levels. The fundamental idea of layered architecture is to divide the design into small pieces. Layering
provides modularity to the network design. The main duty of each layer is to offer services to higher
layers while providing abstraction. The main benefits of layered architecture are modularity and
clear interfaces.

Fig. 1.6 Five Layered Network


#Design Issues:
Layered architecture in computer network design
Layered architectures have several advantages. Some of them are,
 Modularity and clear interface
 Provide flexibility to modify network services


 Ensure independence of layers


 Management of network architecture is easy
 Each layer can be analysed and tested independent of other layers

#Interfaces and Services:


The benefits to layering networking protocol specifications are many including:
Interoperability - Layering promotes greater interoperability between devices from different
manufacturers and even between different generations of the same type of device from the same
manufacturer.
Greater Compatibility - One of the greatest benefits of using a hierarchical or layered approach to
networking and communications protocols is the greater compatibility between devices, systems and
networks that this delivers.
Better Flexibility - Layering, and the greater compatibility it delivers, goes a long way toward improving
flexibility, particularly in terms of options and choices.
Increased Life Expectancy - Product working life expectancies increase because backwards compatibility is
made considerably easier. Devices from different technology generations can co-exist, so older units
are not discarded immediately when newer technologies are adopted.
Scalability - Experience has shown that a layered or hierarchical approach to networking protocol design and
implementation scales better than a horizontal approach.
Mobility - Greater mobility is more readily delivered whenever we adopt the layered and segmented
strategies into our architectural design
Cost Effective Quality - The layered approach has proven time and again to be the most economical
way of developing and implementing any system, whether small, simple, large or complex. This ease of
development and implementation translates to greater efficiency and effectiveness, which in turn
translates into greater economic rationalization and cheaper products while not
compromising quality.
Standardization and Certification - The layered approach to networking protocol specifications facilitates a
more streamlined and simplified standardization and certification process; particularly from an "industry"
point of view. This is due to the clearer and more distinct definition and demarcation of what functions
occur at each layer when the layered approach is taken.
Rapid Application Development (RAD) - Workloads can be evenly distributed which means that multiple
activities can be conducted in parallel thereby reducing the time taken to develop, debug, optimize and
package new technologies ready for production implementation.

#Connection Oriented & Connectionless Services, Service primitives, Design issues & its functionality
 Connection-oriented
There is a sequence of operations to be followed by the users of a connection-oriented service:
1. Connection is established
2. Information is sent
3. Connection is released
In connection-oriented service we must establish a connection before starting the communication. Once
the connection is established, we send the message or information, and then we release the connection.
Connection-oriented service is more reliable than connectionless service. An example of a connection-oriented protocol is TCP (Transmission Control Protocol).
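The three-phase sequence above (establish, send, release) can be sketched with Python's standard socket API over the loopback interface. The addresses, port handling, and message contents here are illustrative, not part of the notes:

```python
import socket
import threading

def run_server(state):
    # 1. Connection is established: the server listens and accepts.
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))          # port 0 -> OS picks a free port
    srv.listen(1)
    state["port"] = srv.getsockname()[1]
    state["ready"].set()
    conn, _ = srv.accept()
    # 2. Information is sent: TCP delivers the bytes reliably and in order.
    data = conn.recv(1024)
    conn.sendall(b"ACK:" + data)
    # 3. Connection is released.
    conn.close()
    srv.close()

state = {"ready": threading.Event()}
threading.Thread(target=run_server, args=(state,)).start()
state["ready"].wait()

cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect(("127.0.0.1", state["port"]))   # phase 1: establish
cli.sendall(b"hello")                       # phase 2: send
reply = cli.recv(1024)
cli.close()                                 # phase 3: release
print(reply)  # b'ACK:hello'
```

Note that the client cannot send before `connect()` succeeds, which is exactly the "connection before communication" property described above.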

 Connectionless
It is similar to postal services, as each message carries the full address to which it is to be delivered. Each
message is routed independently from source to destination. The order in which messages are received
can differ from the order in which they were sent.


In connectionless service, data is transferred from source to destination without first checking whether the
destination is still there or is prepared to accept the message. Authentication is not needed in this service.
An example of a connectionless protocol is UDP (User Datagram Protocol).
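A minimal sketch of the connectionless "postal" model, using Python's UDP sockets on the loopback interface. The addresses and payloads are illustrative; on a real network (unlike loopback) datagrams may be lost or reordered:

```python
import socket

# The receiver just binds an address; no connection is set up first.
recv_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv_sock.bind(("127.0.0.1", 0))
addr = recv_sock.getsockname()

# Each datagram carries the full destination address, like a letter.
send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_sock.sendto(b"letter 1", addr)   # no connect() call needed
send_sock.sendto(b"letter 2", addr)

msg1, _ = recv_sock.recvfrom(1024)
msg2, _ = recv_sock.recvfrom(1024)
print(msg1, msg2)
send_sock.close()
recv_sock.close()
```

Compare with the TCP sequence: there is no establish or release phase, and the sender gets no acknowledgement that the receiver exists.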

Fig. 1.7 Connection Oriented & Connectionless Services

#Service Primitives
Connection-oriented service primitives:
LISTEN - Block waiting for an incoming connection
CONNECT - Establish a connection with a waiting peer
RECEIVE - Block waiting for an incoming message
SEND - Send a message to the peer
DISCONNECT - Terminate a connection
Connectionless service primitives:
UNITDATA - Send a packet of data
FACILITY, REPORT - Enquire about the performance of the network, e.g. delivery statistics

Design issues & its functionality


 Justifying a Network: - Some applications may be best satisfied by individual point to point
connections to handle very specific communication requirements.
 Scope: - The scope of the network is viewed as bounded on one side by the offerings of the
common carriers who provide communication facilities from which the network is built and on the
other side by the application on which it is interconnected.
 Manageability: - The network should be designed so that it can be monitored, maintained and
reconfigured.
 Network Architecture: - While designing the network architecture, network may be a single
homogeneous mesh comprised of a single type of node and a single type of link. Network
architecture might be hierarchical network with one type link riding on another.
 Switch Mode: - For data transmission, different types of switching methods are possible. These are
packet switching, circuit switching and hybrid switching.
 Node Placement and sizing: - A fundamental problem in the topological optimization of a network
is the selection of the network node sites and where to place multiplexers, hubs and switches.
 Link Topology and sizing: - It involves selecting the specific links interconnecting nodes. At the
highest level, that is where the architecture of the network is derived. Thus a hierarchy that includes
a backbone as well as LANs may be defined. It is possible to permit the backbone to be a mesh
while the LANs are constrained to be trees.
 Routing: - It involves selecting paths for each requirement. At a higher level, this involves selecting
the routing procedure itself.


#ISO-OSI Reference Model


#Principles of OSI Reference Model
The OSI reference model has 7 layers. The principles that were applied to arrive at the seven layers can be
briefly summarized as follows:
1. A layer should be created where a different abstraction is needed.
2. Each layer should perform a well-defined function.
3. The function of each layer should be chosen with an eye toward defining internationally
standardized protocols.
4. The layer boundaries should be chosen to minimize the information flow across the interfaces.
5. The number of layers should be large enough that distinct functions need not be thrown together in
the same layer out of necessity, and small enough that the architecture does not become unwieldy.

Feature of OSI Model:


1. The big picture of communication over a network is understandable through this OSI model.
2. We see how hardware and software work together.
3. We can understand new technologies as they are developed.
4. Troubleshooting is easier because functionality is separated into distinct layers.
5. It can be used to compare basic functional relationships on different networks.

Fig. 1.8 OSI Reference Model


#Description of Different Layers:

Layer 1: The Physical Layer:


1. It is the lowest layer of the OSI Model.


2. It activates, maintains and deactivates the physical connection.


3. It is responsible for transmission and reception of unstructured raw data over the network.
4. Voltages and data rates needed for transmission are defined in the physical layer.
5. It converts digital bits into electrical or optical signals.
6. Data encoding is also done in this layer.
Layer 2: Data Link Layer:
1. Data link layer synchronizes the information which is to be transmitted over the physical layer.
2. The main function of this layer is to make sure data transfer is error free from one node to another,
over the physical layer.
3. Transmitting and receiving data frames sequentially is managed by this layer.
4. This layer sends and expects acknowledgements for frames received and sent respectively.
Resending of non-acknowledgement received frames is also handled by this layer.
5. This layer establishes a logical link between two nodes and also manages frame traffic control
over the network. It signals the transmitting node to stop when the frame buffers are full.
Layer 3: The Network Layer:
1. It routes the signal through different channels from one node to another.
2. It acts as a network controller. It manages the subnet traffic.
3. It decides which route data should take.
4. It divides the outgoing messages into packets and assembles the incoming packets into messages
for higher levels.
Layer 4: Transport Layer:
1. It decides whether data transmission should be on parallel paths or a single path.
2. Functions such as multiplexing, segmenting or splitting of the data are done by this layer.
3. It receives messages from the Session layer above it, converts the message into smaller units and
passes it on to the Network layer.
4. Transport layer can be very complex, depending upon the network requirements.
Layer 5: The Session Layer:
1. The session layer manages and synchronizes the conversation between two different applications.
2. During transfer of data from source to destination, the session layer marks and resynchronizes the
data streams properly, so that the ends of messages are not cut prematurely and data loss is
avoided.
Layer 6: The Presentation Layer:
1. Presentation layer takes care that the data is sent in such a way that the receiver will understand
the information (data) and will be able to use the data.
2. While receiving the data, presentation layer transforms the data to be ready for the application
layer.
3. Languages (syntax) can be different of the two communicating systems. Under this condition
presentation layer plays a role of translator.
4. It performs Data compression, Data encryption, Data conversion etc.
Layer 7: Application Layer:
1. It is the topmost layer.
2. Transferring files and distributing the results to the user is also done in this layer. Mail services,
directory services, network resources, etc. are services provided by the application layer.
3. This layer mainly holds application programs that act upon received data and data to be sent.

Merits of OSI reference model:


1. OSI model distinguishes well between the services, interfaces and protocols.
2. Protocols of OSI model are very well hidden.
3. Protocols can be replaced by new protocols as technology changes.
4. Supports connection-oriented services as well as connectionless service.


Demerits of OSI reference model:


1. Model was devised before the invention of protocols.
2. Fitting of protocols is tedious task.
3. It is just used as a reference model.

#TCP/IP Reference Model:


The TCP/IP reference model was developed prior to OSI model. The major design goals of this
model were,
1. To connect multiple networks together so that they appear as a single network.
2. To survive after partial subnet hardware failures.
3. To provide a flexible architecture.
Unlike OSI reference model, TCP/IP reference model has only 4 layers. They are,
1. Host-to-Network Layer
2. Internet Layer
3. Transport Layer
4. Application Layer
Layer 1: Host-to-network Layer
1. Lowest layer of the all.
2. Protocol is used to connect to the host, so that the packets can be sent over it.
3. Varies from host to host and network to network.
Layer 2: Internet layer
1. The selection of a packet-switching network based on a connectionless internetwork layer is
called the internet layer.
2. It is the layer which holds the whole architecture together.
3. It helps the packet to travel independently to the destination.
4. The order in which packets are received can differ from the order in which they were sent.
5. IP (Internet Protocol) is used in this layer.
6. The various functions performed by the Internet Layer are:
o Delivering IP packets
o Performing routing
o Avoiding congestion
Layer 3: Transport Layer
1. It decides whether data transmission should be on parallel paths or a single path.
2. Functions such as multiplexing, segmenting or splitting of the data are done by the transport layer.
3. The applications can read and write to the transport layer.
4. Transport layer adds header information to the data.
5. Transport layer breaks the message (data) into small units so that they are handled more efficiently
by the network layer.
6. The transport layer also arranges the packets to be sent in sequence.
Layer 4: Application Layer
The TCP/IP specifications described a lot of applications that were at the top of the protocol stack. Some of
them were TELNET, FTP, SMTP, DNS etc.
1. TELNET is a two-way communication protocol which allows connecting to a remote machine and
run applications on it.
2. FTP (File Transfer Protocol) is a protocol, that allows File transfer amongst computer users
connected over a network. It is reliable, simple and efficient.
3. SMTP (Simple Mail Transport Protocol) is a protocol, which is used to transport electronic mail
between a source and destination, directed via a route.
4. DNS (Domain Name System) resolves a textual host name into an IP address for hosts connected over a
network.


5. The transport layer allows peer entities to carry on a conversation.
6. The transport layer defines two end-to-end protocols: TCP and UDP.
o TCP (Transmission Control Protocol): a reliable connection-oriented protocol which
delivers a byte-stream from source to destination without error and provides flow control.
o UDP (User Datagram Protocol): an unreliable connectionless protocol for applications that do not
want TCP's sequencing and flow control. Example: one-shot request-reply kinds of service.

Merits of TCP/IP model


1. It operates independently.
2. It is scalable.
3. Client/server architecture.
4. Supports number of routing protocols.
5. Can be used to establish a connection between two computers.
Demerits of TCP/IP
1. In this model, the transport layer does not guarantee delivery of packets.
2. The model does not describe protocol stacks other than TCP/IP well.
3. Replacing a protocol is not easy.
4. It has not clearly separated its services, interfaces and protocols.

Fig 1.9 The TCP/IP reference model. Fig 1.10 Protocols in the TCP/IP model initially.

#Comparison of the OSI and TCP/IP Reference Models:

OSI (Open System Interconnection) vs TCP/IP (Transmission Control Protocol / Internet Protocol):

1. OSI is a generic, protocol-independent standard, acting as a communication gateway between the network and end user. The TCP/IP model is based on the standard protocols around which the Internet has developed; it is a communication protocol which allows connection of hosts over a network.
2. In the OSI model the transport layer guarantees the delivery of packets. In the TCP/IP model the transport layer does not guarantee delivery of packets; still, the TCP/IP model is more reliable in practice.
3. OSI follows a vertical approach; TCP/IP follows a horizontal approach.
4. The OSI model has a separate Presentation layer and Session layer. TCP/IP does not have a separate Presentation layer or Session layer.
5. In OSI, the Transport Layer is connection-oriented. In TCP/IP, the Transport Layer is both connection-oriented and connectionless.
6. In OSI, the Network Layer is both connection-oriented and connectionless. In TCP/IP, the Network Layer is connectionless.
7. OSI is a reference model around which networks are built; generally it is used as a guidance tool. The TCP/IP model is, in a way, an implementation of the OSI model.
8. The Network layer of the OSI model provides both connection-oriented and connectionless service. The Network layer in the TCP/IP model provides connectionless service.
9. The OSI model has a problem of fitting the protocols into the model. The TCP/IP model does not have this fitting problem.
10. Protocols are hidden in the OSI model and are easily replaced as the technology changes. In TCP/IP, replacing a protocol is not easy.
11. The OSI model defines services, interfaces and protocols very clearly and makes a clear distinction between them; it is protocol independent. In TCP/IP, services, interfaces and protocols are not clearly separated; it is also protocol dependent.
12. OSI has 7 layers; TCP/IP has 4 layers.
Principles of physical layer: The physical components are the electronic hardware devices, media, and
other connectors that transmit and carry the signals that represent the bits. Hardware components such as
network adapters (NICs), interfaces and connectors, cable materials, and cable designs are all
specified in standards associated with the physical layer. The various ports and interfaces on a
Cisco 1941 router are also examples of physical components with specific connectors and pinouts
resulting from standards.
Encoding
Encoding, or line encoding, is a method of converting a stream of data bits into a predefined "code".
Codes are groupings of bits used to provide a predictable pattern that can be recognized by both
the sender and the receiver. In the case of networking, encoding is a pattern of voltage or current
used to represent the bits: the 0s and 1s.
Signaling
The physical layer must generate the electrical, optical, or wireless signals that represent the "1"
and "0" on the media. The method of representing the bits is called the signaling method. The
physical layer standards must define what type of signal represents a "1" and what type of signal
represents a "0". This can be as simple as a change in the level of an electrical signal or optical
pulse. For example, a long pulse might represent a 1, whereas a short pulse represents a 0.
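The encoding and signaling ideas above can be illustrated in Python. The voltage levels and the Manchester transition convention chosen here are one common convention (as in IEEE 802.3), not the only possible one:

```python
def nrz_l(bits):
    # NRZ-L signaling: a "1" is one voltage level, a "0" is the other,
    # held for the whole bit interval.
    return [+1 if b == "1" else -1 for b in bits]

def manchester(bits):
    # Manchester encoding: every bit has a mid-interval transition.
    # Here "1" = low-to-high and "0" = high-to-low (802.3 convention).
    return [(-1, +1) if b == "1" else (+1, -1) for b in bits]

print(nrz_l("1011"))      # [1, -1, 1, 1]
print(manchester("10"))   # [(-1, 1), (1, -1)]
```

The guaranteed mid-bit transition is what makes Manchester self-clocking, at the cost of doubling the signaling rate compared with NRZ.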

Media, Bandwidth, Data rate and Modulations

Media:-
Network media refers to the communication channels used to interconnect nodes on a computer
network. Typical examples of network media include copper coaxial cable, copper twisted pair


cables and optical fiber cables used in wired networks, and radio waves used in wireless data
communications networks.
In data communication terminology, a transmission medium is a physical path between the
transmitter and the receiver i.e it is the channel through which data is sent from one place to
another.

Bandwidth
Bandwidth describes the maximum data transfer rate of a network or Internet connection. It
measures how much data can be sent over a specific connection in a given amount of time. For
example, a gigabit Ethernet connection has a bandwidth of 1,000 Mbps (125 megabytes per
second). An Internet connection via cable modem may provide 25 Mbps of bandwidth. Bandwidth
also refers to a range of frequencies used to transmit a signal. This type of bandwidth is measured
in hertz and is often referenced in signal processing applications.
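The bit-to-byte arithmetic behind the figures above is just a division by 8 (8 bits per byte):

```python
def mbps_to_megabytes_per_sec(mbps):
    # A bit rate in Mbit/s divided by 8 gives megabytes per second.
    return mbps / 8

print(mbps_to_megabytes_per_sec(1000))  # 125.0 -> gigabit Ethernet
print(mbps_to_megabytes_per_sec(25))    # 3.125 -> the cable-modem example
```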

Data rate

The data rate denotes the transmission speed, or the number of bits per second transferred. The useful
data rate for the user is usually less than the actual data rate transported on the network. One reason for
this is that additional bits are transferred for, e.g., signalling, addressing, the recovery of timing
information at the receiver, or error correction to compensate for possible transmission errors. In
telecommunications, it is common to express the data rate in bits per second (bit/s); see bit rate. In
data communication, the data rate is often expressed in bytes per second (B/s).
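As a rough illustration of "useful data rate is less than the actual data rate", the user-visible rate can be estimated by subtracting the overhead share from the raw rate. The 10% overhead figure below is purely hypothetical:

```python
def useful_data_rate(raw_bps, overhead_fraction):
    # Goodput estimate: the raw rate minus the share of bits spent on
    # signalling, addressing, timing recovery and error correction.
    return raw_bps * (1 - overhead_fraction)

# e.g. a 100 Mbit/s link where 10% of the bits are protocol overhead:
print(useful_data_rate(100e6, 0.10))  # about 90 Mbit/s of user data
```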

Modulation

Modulation plays a key role in a communication system: it encodes digital information for transmission
through the analog world. It is very important to modulate signals before sending them to the receiver,
for longer-distance transfer, accurate data transfer and low-noise data reception.


Modulation is the process of changing the characteristics of the wave to be transmitted by superimposing
the message signal on a high-frequency signal. In this process, video, voice and other data signals modify
a high-frequency signal, also known as the carrier wave. This carrier wave can be DC, AC or a pulse chain
depending on the application; usually a high-frequency sine wave is used as the carrier signal. These
modulation techniques are classified into two major types: analog and digital (pulse) modulation.

Note: Bandwidth and data rate are related by the modulation format.
Different modulation formats will require different bandwidths for the same data rate. For
FM modulation, the bandwidth is approximately 2*(df + fm) where df is the
maximum frequency deviation and fm is the frequency of the message
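The approximation in the note is Carson's rule. Plugging in the standard broadcast-FM figures of 75 kHz peak deviation and 15 kHz maximum audio frequency (illustrative values, not from the notes):

```python
def fm_bandwidth(df, fm):
    # Carson's rule: B ~= 2 * (df + fm), where df is the peak frequency
    # deviation and fm the highest frequency of the message signal.
    return 2 * (df + fm)

# Broadcast FM: 75 kHz deviation, 15 kHz audio -> ~180 kHz channel
print(fm_bandwidth(75e3, 15e3))  # 180000.0
```

This shows concretely how, for a fixed data (message) rate, the modulation format sets the bandwidth required.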





CS602 Computer Networks
Subject Notes: UNIT-II

Syllabus: Data Link Layer: Need, Services Provided, Framing, Flow Control, Error control. Data Link Layer
Protocol: Elementary &Sliding Window protocol: 1-bit, Go-Back-N, Selective Repeat, Hybrid ARQ.
Protocol verification: Finite State Machine Models & Petri net models. ARP/RARP/GARP

DATA LINK LAYER: NEED


The Data Link Layer is the second layer of the OSI layered model. It is one of the most complicated layers
and has complex functionalities and responsibilities. The data link layer hides the details of the underlying
hardware and represents itself to the upper layer as the medium of communication.
The data link layer works between two hosts which are directly connected in some sense. This direct
connection could be point-to-point or broadcast. The data link layer is responsible for converting the data
stream to signals bit by bit and sending it over the underlying hardware.

Fig. 2.1 Seven Layer Architecture


Data link layer has two sub-layers:
• Logical Link Control: It deals with protocols, flow-control, and error control
• Media Access Control: It deals with actual control of media

DATA LINK LAYER: SERVICE PROVIDED


• Encapsulation of network layer data packets into frames.
• Frame synchronization.
• Error Control
• Flow control, in addition to the one provided on the transport layer.
• LAN switching (packet switching) including MAC filtering and spanning tree protocol
• Data packet queuing or scheduling
• Store-and-forward switching or cut-through switching

DATA LINK LAYER: FRAMING


Since the physical layer merely accepts and transmits a stream of bits without any regard to meaning or
structure, it is up to the data link layer to create and recognize frame boundaries. This can be accomplished
by attaching special bit patterns to the beginning and end of the frame. If these bit patterns can
accidentally occur in data, special care must be taken to make sure these patterns are not incorrectly
interpreted as frame delimiters. The four framing methods that are widely used are
• Character count
• Starting and ending characters, with character stuffing
• Starting and ending flags, with bit stuffing
• Physical layer coding violations


Fig. 2.2 Data Link Layer: Framing

Character Count
This method uses a field in the header to specify the number of characters in the frame. When the data
link layer at the destination sees the character count, it knows how many characters follow, and hence
where the end of the frame is. The disadvantage is that if the count is garbled by a transmission error, the
destination will lose synchronization and will be unable to locate the start of the next frame. So, this
method is rarely used.
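A sketch of character-count framing in Python. The one-byte count here covers the count byte itself plus the payload, which is one common textbook convention; as the text notes, a single garbled count desynchronizes everything that follows:

```python
def frame_by_count(messages):
    # Each frame = [count byte][payload]; the count includes itself.
    out = bytearray()
    for m in messages:
        out.append(len(m) + 1)
        out += m
    return bytes(out)

def deframe_by_count(stream):
    frames, i = [], 0
    while i < len(stream):
        n = stream[i]                        # if this byte is corrupted,
        frames.append(stream[i + 1:i + n])   # all later frames are lost
        i += n
    return frames

wire = frame_by_count([b"abc", b"hello"])
print(deframe_by_count(wire))  # [b'abc', b'hello']
```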
Character stuffing
In the second method, each frame starts with the ASCII character sequence DLE STX and ends with the
sequence DLE ETX. This method overcomes the drawbacks of the character count method. However,
character stuffing is closely associated with 8-bit characters and this is a major hurdle in transmitting
arbitrary sized characters.
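A sketch of DLE-based character (byte) stuffing, following the DLE STX / DLE ETX scheme described above: any DLE byte inside the payload is doubled by the sender, so the receiver never mistakes data for a delimiter. The specific helper names are just for illustration:

```python
DLE, STX, ETX = 0x10, 0x02, 0x03

def stuff(payload):
    # Frame = DLE STX <payload, with every DLE doubled> DLE ETX.
    body = bytearray()
    for b in payload:
        body.append(b)
        if b == DLE:
            body.append(DLE)   # escape the DLE that occurs in the data
    return bytes([DLE, STX]) + bytes(body) + bytes([DLE, ETX])

def unstuff(frame):
    assert frame[:2] == bytes([DLE, STX]) and frame[-2:] == bytes([DLE, ETX])
    body, out, i = frame[2:-2], bytearray(), 0
    while i < len(body):
        out.append(body[i])
        i += 2 if body[i] == DLE else 1   # drop the duplicated DLE
    return bytes(out)

data = bytes([0x41, DLE, 0x42])       # payload containing a DLE byte
print(unstuff(stuff(data)) == data)   # True
```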
Bit stuffing
The third method allows data frames to contain an arbitrary number of bits and allows character codes
with an arbitrary number of bits per character. At the start and end of each frame is a flag byte consisting
of the special bit pattern 01111110. Whenever the sender's data link layer encounters five consecutive 1s
in the data, it automatically stuffs a zero bit into the outgoing bit stream. This technique is called bit
stuffing
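The stuffing rule above, and its inverse at the receiver, can be sketched in Python (bits are represented as a character string purely for readability):

```python
def bit_stuff(bits: str) -> str:
    """Sender side: insert a 0 after every run of five consecutive 1s."""
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == '1' else 0
        if run == 5:
            out.append('0')  # stuffed bit
            run = 0
    return ''.join(out)

def bit_destuff(bits: str) -> str:
    """Receiver side: drop the 0 that follows every run of five 1s."""
    out, run, i = [], 0, 0
    while i < len(bits):
        out.append(bits[i])
        run = run + 1 if bits[i] == '1' else 0
        i += 1
        if run == 5:
            i += 1   # the next bit is the stuffed 0; skip it
            run = 0
    return ''.join(out)

print(bit_stuff('111111'))                  # 1111101
print('111111' in bit_stuff('1' * 20))      # False: flag 01111110 can never appear
print(bit_destuff(bit_stuff('0111111111')) == '0111111111')  # True
```

Because the stuffed stream never contains six consecutive 1s, the flag pattern 01111110 is guaranteed to mark only frame boundaries.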
Physical layer coding violations
The final framing method is physical layer coding violations and is applicable to networks in which the
encoding on the physical medium contains some redundancy. In such cases normally, a 1 bit is a high-low
pair and a 0 bit is a low-high pair. The combinations of low-low and high-high which are not used for data
may be used for marking frame boundaries.

DATALINK LAYER: FLOW CONTROL


Flow control coordinates the amount of data that can be sent before receiving an acknowledgement.
• It is one of the most important duties of the data link layer.
• Flow control tells the sender how much data to send.
• It makes the sender wait for some sort of an acknowledgment (ACK) before continuing to send more
data.
• Flow Control Techniques: Stop-and-wait, and Sliding Window

DATA LINK LAYER: ERROR CONTROL


Error control in the data link layer is based on ARQ (automatic repeat request), which is the retransmission
of data.
• The term error control refers to methods of error detection and retransmission.
• Anytime an error is detected in an exchange, specified frames are retransmitted. This process is
called ARQ.

To ensure reliable communication, there needs to exist flow control (managing the amount of data the
sender sends), and error control (that data arrives at the destination error free).
• Flow and error control needs to be done at several layers.
• For node-to-node links, flow and error control is carried out in the data-link layer.

• For end-point to end-point, flow and error control is carried out in the transport layer.
There may be three types of errors:

Fig. 2.3 Single bit error


Only a single bit in the frame is corrupted; it may occur anywhere in the frame.

Fig. 2.4 Multiple bits error


The frame is received with more than one corrupted bit.

Fig. 2.5 Burst error


The frame contains more than one consecutive corrupted bit.

DATA LINK LAYER PROTOCOL


The basic function of the layer is to transmit frames over a physical communication link. Transmission may
be half duplex or full duplex. To ensure that frames are delivered free of errors to the destination station
(IMP) a number of requirements are placed on a data link protocol. The protocol (control mechanism)
should be capable of performing:
1. The identification of a frame (i.e. recognises the first and last bits of a frame).
2. The transmission of frames of any length up to a given maximum. Any bit pattern is permitted in a
frame.
3. The detection of transmission errors.
4. The retransmission of frames which were damaged by errors.
5. The assurance that no frames were lost.
6. In a multidrop configuration some mechanism must be used for preventing conflicts caused by
simultaneous transmission by many stations.
7. The detection of failure or abnormal situations for control and monitoring purposes.
It should be noted that as far as layer 2 is concerned a host message is pure data, every single bit of which
is to be delivered to the other host. The frame header pertains to layer 2 and is never given to the host.

Elementary Data Link Protocols


These elementary protocols make the following simplifying assumptions:
• Data are transmitted in one direction only
• The transmitting (Tx) and receiving (Rx) hosts are always ready
• Processing time can be ignored
• Infinite buffer space is available
• No errors occur; i.e. no damaged frames and no lost frames (perfect channel)

Sliding Window protocol:


A sliding window protocol is a feature of packet-based data transmission protocols. Sliding window
protocols are used where reliable in-order delivery of packets is required, such as in the Data Link Layer
(OSI model) as well as in the Transmission Control Protocol (TCP).
The Sliding Window ARQ has three techniques
1. 1-bit
2. Go- Back N
3. Selective Repeat
1-bit
One-bit sliding window protocol is also called Stop-And-Wait protocol. In this protocol, the sender sends
out one frame, waits for acknowledgment before sending next frame, thus the name Stop-And-Wait.


The problem with the Stop-And-Wait protocol is that it is very inefficient. At any one moment, only one
frame is in transit. The sender has to wait at least one round-trip time before sending the next frame. The
wait can be long on a slow network such as a satellite link.
Stop and Wait Protocol
Characteristics
• Used in Connection-oriented communication.
• It offers error and flow control
• It is used in Data Link and Transport Layers
• Stop and Wait ARQ mainly implements Sliding Window Protocol concept with Window Size 1
Useful Terms:
• Propagation Delay: Amount of time taken by a packet to make a physical journey from one router to
another router.
Propagation Delay = (Distance between routers) / (Velocity of propagation)
• RoundTripTime (RTT) = 2* Propagation Delay
• TimeOut (TO) = 2* RTT
• Time To Live (TTL) = 2* TimeOut. (Maximum TTL is 180 seconds)
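Plugging illustrative numbers into these formulas (the 2000 km distance and the 2e8 m/s propagation velocity are assumed values, not from the notes):

```python
distance_m = 2_000_000      # 2000 km link (assumed figure)
velocity = 2e8              # roughly 2/3 the speed of light in copper/fibre, m/s

prop_delay = distance_m / velocity   # propagation delay in seconds
rtt = 2 * prop_delay                 # round-trip time
timeout = 2 * rtt                    # rule of thumb from the notes above

print(prop_delay, rtt, timeout)      # 0.01 0.02 0.04
```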

Simple Stop and Wait


Sender:
Rule 1) Send one data packet at a time.
Rule 2) Send next packet only after receiving acknowledgement for previous.
Receiver:
Rule 1) Send an acknowledgement after receiving and consuming a data packet.
Rule 2) After consuming a packet, an acknowledgement must be sent (Flow Control)

Fig. 2.6 Stop and Wait


Problems:
1. Lost Data


Fig. 2.7 Stop and Wait- Lost Data


2. Lost Acknowledgement:

Fig. 2.8 Stop and Wait- Lost Acknowledgement

3. Delayed Acknowledgement/Data: After timeout on sender side, a long-delayed acknowledgement


might be wrongly considered as acknowledgement of some other recent packet.

Stop and Wait ARQ (Automatic Repeat Request)


Above 3 problems are resolved by Stop and Wait ARQ (Automatic Repeat Request) that does both error
control and flow control.

Fig. 2.9 Stop and Wait ARQ (Automatic Repeat Request)


1. Time Out:

Fig. 2.10 Stop and Wait ARQ-Time Out


2. Sequence Number (Data)

Fig. 2.11 Stop and Wait ARQ-ACK Lost

3. Delayed Acknowledgement:
This is resolved by introducing sequence number for acknowledgement also.

Working of Stop and Wait ARQ:


1) Sender A sends a data frame or packet with sequence number 0.
2) Receiver B, after receiving the data frame, sends an acknowledgement with sequence number 1 (the
sequence number of the next expected data frame or packet)
There is only one-bit sequence number that implies that both sender and receiver have buffer for one
frame or packet only.
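This 1-bit sequence-number behaviour, including the duplicate-drop case after a lost ACK, can be sketched as a small Python receiver (an illustrative model, not a real implementation):

```python
def make_receiver():
    """Stop-and-Wait ARQ receiver with a 1-bit sequence number."""
    expected = 0
    delivered = []

    def on_frame(seq, data):
        nonlocal expected
        if seq == expected:
            delivered.append(data)   # in-order frame: pass to network layer
            expected ^= 1            # flip the 1-bit sequence number
        # else: duplicate caused by a lost ACK; discard but re-acknowledge
        return expected              # the ACK carries the next expected seq

    return on_frame, delivered

on_frame, delivered = make_receiver()
print(on_frame(0, 'a'))   # 1 : frame 0 accepted, ACK says "expecting 1"
print(on_frame(0, 'a'))   # 1 : retransmitted duplicate dropped, same ACK resent
print(on_frame(1, 'b'))   # 0
print(delivered)          # ['a', 'b'] : no duplicate delivery to network layer
```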

Fig. 2.12 Working of Stop and Wait ARQ


Characteristics of Stop and Wait ARQ:
• It uses the link between sender and receiver as a half-duplex link
• Throughput = 1 data packet/frame per RTT
• If the Bandwidth*Delay product is very high, then the stop-and-wait protocol is not very useful. The
sender has to keep waiting for acknowledgements before sending the next packet.
• It is an example of "closed loop" or connection-oriented protocols
• It is a special category of sliding window protocol where the window size is 1
• Irrespective of the number of packets the sender has, the stop and wait protocol requires only 2
sequence numbers, 0 and 1
The Stop and Wait ARQ solves the main three problems, but may cause big performance issues, as the
sender always waits for an acknowledgement even if it has the next packet ready to send. Consider a
situation where you have a high-bandwidth connection and the propagation delay is also high (you are
connected to a server in another country through a high-speed connection). To solve this problem, we can
send more than one packet at a time, with larger sequence numbers, as the sliding window protocols
discussed next do.
So, Stop and Wait ARQ may work fine where the propagation delay is very small, for example on LAN
connections, but performs badly for long-distance connections such as satellite links.
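The inefficiency can be quantified: link utilization of stop-and-wait is Tt / (Tt + 2*Tp), where Tt is the frame transmission time and Tp the one-way propagation delay. With assumed satellite-like numbers:

```python
bandwidth = 1e6       # 1 Mbps link (assumed)
frame_bits = 1000     # frame size in bits (assumed)
prop_delay = 0.25     # one-way delay in seconds, satellite-like (assumed)

tt = frame_bits / bandwidth           # transmission time = 0.001 s
utilization = tt / (tt + 2 * prop_delay)
print(f"{utilization:.2%}")           # ~0.20%: the link is almost always idle
```

On a LAN, where prop_delay is microseconds, the same formula gives utilization close to 100%, which matches the observation above.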

Go- Back N protocol


Go-Back-N protocol is a sliding window protocol. It is a mechanism to detect and control errors in the
data link layer. During transmission of frames between sender and receiver, if a frame is damaged or lost,
or an acknowledgement is lost, the sender retransmits that frame and all frames sent after it.
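The cost of this behaviour (retransmitting good frames along with the bad one) can be traced with a simplified Python model; a single loss, a window covering all frames, and a loss-free second pass are all assumptions made for illustration:

```python
def go_back_n(n_frames, lost_frame):
    """Trace Go-Back-N with one lost frame.

    The receiver only accepts in-order frames, so everything after the
    loss is discarded on arrival and must be sent again after timeout.
    Returns (transmissions, delivered).
    """
    transmissions, delivered = [], []
    expected = 0
    for i in range(n_frames):            # first pass
        transmissions.append(i)
        if i == lost_frame:
            continue                     # lost in the channel
        if i == expected:
            delivered.append(i)
            expected += 1
        # out-of-order frames are discarded by a Go-Back-N receiver
    for i in range(expected, n_frames):  # timeout: go back and resend all
        transmissions.append(i)
        delivered.append(i)              # second pass assumed loss-free
    return transmissions, delivered

tx, rx = go_back_n(5, 2)
print(tx)   # [0, 1, 2, 3, 4, 2, 3, 4] : frames 3 and 4 are sent twice
print(rx)   # [0, 1, 2, 3, 4]
```

Frames 3 and 4 arrived intact the first time, yet are retransmitted anyway; Selective Repeat below avoids exactly this waste.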


Fig. 2.13 Go- Back N protocol

Selective Repeat protocol


Selective Repeat is also a sliding window protocol, which detects and corrects errors that occur in the
data link layer. The selective repeat protocol retransmits only the frame which is damaged or lost, so a
retransmitted frame may be received out of sequence. The selective repeat protocol can perform the
following actions:
• The receiver is capable of sorting the frames into a proper sequence, since it may receive a
retransmitted frame out of order.
• The sender must be capable of searching for the frame for which the NAK has been received.
• The receiver must contain a buffer to store all the previously received frames on hold until the
retransmitted frame arrives and is placed in the proper sequence.
• The NAK number refers to the frame that is lost or damaged; the ACK number refers to the specific
frame being acknowledged.
• It requires a smaller window size than the go-back-n protocol.
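The buffer-and-reorder behaviour of the receiver can be sketched as follows (an illustrative model; `arrivals` is the order in which frames reach the receiver, with the retransmitted frame last):

```python
def sr_receive(arrivals):
    """Selective Repeat receiver: out-of-order frames are buffered, not
    discarded, and delivered to the network layer once the gap is filled."""
    buffer, delivered, expected = set(), [], 0
    for seq in arrivals:
        buffer.add(seq)
        while expected in buffer:        # deliver any in-order run
            buffer.remove(expected)
            delivered.append(expected)
            expected += 1
    return delivered

# Frame 2 is lost and retransmitted after frames 3 and 4 have already arrived.
print(sr_receive([0, 1, 3, 4, 2]))   # [0, 1, 2, 3, 4] : proper sequence restored
```

Contrast with Go-Back-N: frames 3 and 4 are kept in the buffer instead of being discarded, so only frame 2 had to cross the link twice.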

Fig. 2.14 Selective Repeat protocol

HYBRID ARQ
HARQ is the use of conventional ARQ along with an error-correction technique called 'Soft Combining',
which no longer discards received bad data (data with errors).

With 'Soft Combining', data packets that are not properly decoded are no longer discarded. The
received signal is stored in a 'buffer', and will be combined with the next retransmission.

That is, two or more received packets, each with insufficient SNR to allow individual decoding, can be
combined in such a way that the total signal can be decoded!
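Soft combining can be illustrated with a toy model (chase combining of repeated copies of the same packet; the soft values and the sign-threshold decision rule are simplifications for illustration, not a real decoder):

```python
def soft_combine(received_copies):
    """Average the soft values of every stored copy of the same packet,
    then hard-decide each bit: values above 0 decode as 1, below as 0."""
    n_bits = len(received_copies[0])
    combined = [sum(copy[i] for copy in received_copies) / len(received_copies)
                for i in range(n_bits)]
    return [1 if v > 0 else 0 for v in combined]

# Two noisy copies of the true packet 1,0,1 (made-up soft values).
# Alone, copy1 decodes bit 2 wrongly and copy2 decodes bit 0 wrongly,
# but their errors fall on different bits, so the combination succeeds.
copy1 = [+0.9, -0.8, -0.2]   # stored in the buffer after the first NACK
copy2 = [-0.1, -0.9, +0.8]   # the retransmission
print(soft_combine([copy1]))          # [1, 0, 0] : wrong on its own
print(soft_combine([copy2]))          # [0, 0, 1] : wrong on its own
print(soft_combine([copy1, copy2]))   # [1, 0, 1] : combined copies decode
```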


The following image explains this procedure. The transmitter sends packet [1]. Packet [1] arrives and is
'OK', so the receiver sends an 'ACK'.

Fig. 2.15 Transmitter sends a packet-1


The transmission continues, and packet [2] is sent. Packet [2] arrives, but let's consider now that it
arrives with errors. Since packet [2] arrives with errors, the receiver sends a 'NACK'.

Fig. 2.16 Transmitter sends a packet-2


But now this (bad) packet [2] is not thrown away, as it would be in conventional ARQ. Instead, it is stored
in a 'buffer'.

Fig. 2.17 Receiver buffers a packet-2


Continuing, the transmitter sends another copy, packet [2.1], which (let's consider) also arrives with errors.

Fig. 2.18 Transmitter sends another packet-2


We then have in the buffer: the bad packet [2], and another packet [2.1] which is also bad.
By adding (combining) these two packets ([2] + [2.1]), do we have the complete information?
Yes. So, the receiver sends an 'ACK'.


Fig. 2.19 Receiver combining buffers a packet-2 and another packet-2


But if the combination of these two packets still does not give us the complete information, the process
must continue, and another 'NACK' is sent.

Fig. 2.20 Receiver sends NACK


And there we have another retransmission. Now the transmitter sends a third copy, packet [2.2].
Let's consider that this one is 'OK', and the receiver sends an 'ACK'.

Fig. 2.21 Receiver sends ACK


Here we can see the following: along with the received packet [2.2], the receiver also has packets [2]
and [2.1], which have not been dropped and are stored in the buffer.


In our example, we saw the packet arrive 'wrong' 2 times. And what is the limit on these
retransmissions? Up to 4; i.e., we can have up to 4 retransmissions in each process. This is the maximum
number supported by the 'buffer'.

BIT ORIENTED PROTOCOLS


A bit-oriented protocol is a communications protocol that sees the transmitted data as an opaque stream
of bits with no semantics, or meaning. Control codes are defined in terms of bit sequences instead of
characters. A bit-oriented protocol can transfer data frames regardless of frame contents, typically by
using "bit stuffing": this technique allows data frames to contain an arbitrary number of bits and allows
character codes with an arbitrary number of bits per character.
SDLC
Synchronous Data Link Control (SDLC) supports a variety of link types and topologies. It can be used with
point-to-point and multipoint links, bounded and unbounded media, half-duplex and full-duplex
transmission facilities, and circuit-switched and packet-switched networks.

SDLC identifies two types of network nodes: primary and secondary. Primary nodes control the operation
of other stations, called secondaries. The primary polls the secondaries in a predetermined order, and a
secondary can then transmit if it has outgoing data. The primary also sets up and tears down links and
manages the link while it is operational. Secondary nodes are controlled by a primary, which means that a
secondary can send information to the primary only if the primary grants permission.

SDLC primaries and secondaries can be connected in four basic configurations:


• Point-to-point---Involves only two nodes, one primary and one secondary.
• Multipoint---Involves one primary and multiple secondaries.
• Loop---Involves a loop topology, with the primary connected to the first and last secondaries.
Intermediate secondaries pass messages through one another as they respond to the requests of the
primary.
• Hub go-ahead---Involves an inbound and an outbound channel. The primary uses the outbound
channel to communicate with the secondaries. The secondaries use the inbound channel to
communicate with the primary. The inbound channel is daisy-chained back to the primary through
each secondary.
SDLC Frame Format

Fig. 2.22 SDLC Frame Format


• Flag---Initiates and terminates error checking.
• Address---Contains the SDLC address of the secondary station, which indicates whether the frame
comes from the primary or secondary. This address can contain a specific address, a group address,
or a broadcast address. A primary is either a communication source or a destination, which
eliminates the need to include the address of the primary.
• Control---Employs three different formats, depending on the type of SDLC frame used:

1. Information (I) frame: Carries upper-layer information and some control information. This
frame sends and receives sequence numbers, and the poll final (P/F) bit performs flow and
error control. The send-sequence number refers to the number of the frame to be sent next.
The receive-sequence number provides the number of the frame to be received next. Both
sender and receiver maintain send- and receive-sequence numbers.
A primary station uses the P/F bit to tell the secondary whether it requires an immediate
response. A secondary station uses the P/F bit to tell the primary whether the current frame is
the last in its current response.
2. Supervisory (S) frame: Provides control information. An S frame can request and suspend
transmission, reports on status, and acknowledge receipt of I frames. S frames do not have an
information field.
3. Unnumbered (U) frame: Supports control purposes and is not sequenced. A U frame can be
used to initialize secondary. Depending on the function of the U frame, its control field is 1 or 2
bytes. Some U frames have an information field.
• Data---Contains path information unit (PIU) or exchange identification (XID) information.
• Frame Check Sequence (FCS) ---Precedes the ending flag delimiter and is usually a cyclic
redundancy check (CRC) calculation remainder. The CRC calculation is redone in the receiver. If the
result differs from the value in the original frame, an error is assumed.
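The FCS check can be illustrated with a bitwise CRC-16-CCITT (polynomial 0x1021), the family of CRC used by SDLC/HDLC; real implementations add bit-ordering and complementing details omitted in this sketch:

```python
def crc16_ccitt(data: bytes) -> int:
    """Bitwise CRC-16-CCITT: polynomial x^16 + x^12 + x^5 + 1 (0x1021),
    initial value 0xFFFF. Shows the division principle behind the FCS."""
    crc = 0xFFFF
    for byte in data:
        crc ^= byte << 8              # bring the next byte into the register
        for _ in range(8):
            if crc & 0x8000:          # top bit set: shift and subtract poly
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

frame = b"hello"
fcs = crc16_ccitt(frame)
# The receiver redoes the calculation; any corruption changes the result.
print(crc16_ccitt(b"hellp") == fcs)   # False : error detected
```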
HDLC
High-Level Data Link Control (HDLC) is a bit-oriented code-transparent synchronous data link layer
protocol. HDLC provides both connection-oriented and connectionless service. HDLC can be used for point
to multipoint connections, but is now used almost exclusively to connect one device to another, using what
is known as Asynchronous Balanced Mode (ABM). The original master-slave modes Normal Response
Mode (NRM) and Asynchronous Response Mode (ARM) are rarely used.

FRAMING
HDLC frames can be transmitted over synchronous or asynchronous serial communication links. Those links
have no mechanism to mark the beginning or end of a frame, so the beginning and end of each frame has
to be identified. This is done by using a frame delimiter, or flag, which is a unique sequence of bits that is
guaranteed not to be seen inside a frame. This sequence is '01111110', or, in hexadecimal notation, 0x7E.
Each frame begins and ends with a frame delimiter. A frame delimiter at the end of a frame may also mark
the start of the next frame. A sequence of 7 or more consecutive 1-bits within a frame will cause the frame
to be aborted.
When no frames are being transmitted on a simplex or full-duplex synchronous link, a frame delimiter is
continuously transmitted on the link. Using the standard NRZI encoding from bits to line levels (0 bit =
transition, 1 bit = no transition), this generates one of two continuous waveforms, depending on the initial
state:

Fig. 2.23 HDLC Framing


This is used by modems to train and synchronize their clocks via phase-locked loops. Some protocols allow
the 0-bit at the end of a frame delimiter to be shared with the start of the next frame delimiter, i.e.
'011111101111110'.
Frame structure
The contents of an HDLC frame are shown in the following table:

Flag Address Control Information FCS Flag

8 bits 8 or more bits 8 or 16 bits Variable length, 0 or more bits 16 or 32 bits 8 bits


Fig. 2.24 HDLC Frame structure


Note that the end flag of one frame may be (but does not have to be) the beginning (start) flag of the next
frame.
Data is usually sent in multiples of 8 bits, but only some variants require this; others theoretically permit
data alignments on other than 8-bit boundaries.
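A parser for the simplest layout in the table (1-byte address, 1-byte control, 16-bit FCS; the extended address/control and 32-bit FCS variants are ignored here, and the example field values are made up) might look like:

```python
def parse_hdlc(frame: bytes) -> dict:
    """Split a (de-stuffed) HDLC frame into its fields, assuming the
    simplest layout: flag, 1-byte address, 1-byte control, information,
    2-byte FCS, flag."""
    assert frame[0] == 0x7E and frame[-1] == 0x7E, "missing flag delimiters"
    body = frame[1:-1]
    return {
        'address': body[0],
        'control': body[1],
        'information': body[2:-2],
        'fcs': int.from_bytes(body[-2:], 'big'),
    }

f = parse_hdlc(bytes([0x7E, 0x03, 0x11]) + b"DATA" + bytes([0xAB, 0xCD, 0x7E]))
print(f['information'])   # b'DATA'
print(hex(f['fcs']))      # 0xabcd
```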
There are three fundamental types of HDLC frames.
• Information frames, or I-frames, transport user data from the network layer. In addition, they can
also include flow and error control information piggybacked on data.
• Supervisory Frames, or S-frames, are used for flow and error control whenever piggybacking is
impossible or inappropriate, such as when a station does not have data to send. S-frames do not have
information fields.
• Unnumbered frames, or U-frames, are used for various miscellaneous purposes, including link
management. Some U-frames contain an information field, depending on the type.

BISYNC
Binary Synchronous Communication (BSC or Bisync) is an IBM character-oriented, half-duplex link protocol,
announced in 1967 after the introduction of System/360. It replaced the synchronous transmit-receive
(STR) protocol used with second-generation computers. The intent was that common link management
rules could be used with three different character encodings for messages, including the legacy six-bit
Transcode used by older systems.
BISYNC establishes rules for transmitting binary-coded data between a terminal and a host computer's
BISYNC port. While BISYNC is a half-duplex protocol, it will synchronize in both directions on a full-duplex
channel. BISYNC supports both point-to-point (over leased or dial-up lines) and multipoint transmissions.
Each message must be acknowledged, adding to its overhead.
BISYNC is character oriented, meaning that groups of bits (bytes) are the main elements of transmission,
rather than a stream of bits. The BISYNC frame is pictured next. It starts with two sync characters that the
receiver and transmitter use for synchronizing. This is followed by a start of header (SOH) command, and
then the header. Following this are the start of text (STX) command and the text. Finally, an end of text
(ETX) command and a cyclic redundancy check (CRC) end the frame. The CRC provides error detection.

Fig. 2.25 BISYNC


Most of the bisynchronous protocols, of which there are many, provide only half-duplex transmission and
require an acknowledgment for every block of transmitted data. Some do provide full-duplex transmission
and bit-oriented operation.
BISYNC has largely been replaced by the more powerful SDLC (Synchronous Data Link Control).
LAP AND LAPB
Link Access Procedure (LAP) protocols are Data Link layer protocols for framing and transmitting data
across point-to-point links. LAP was originally derived from HDLC (High-Level Data Link Control), but was
later updated and renamed LAPB (LAP Balanced).
LAPB is the data link protocol for X.25. LAPB is a bit-oriented protocol derived from HDLC that ensures that
frames are error free and in the right sequence. It can be used as a Data Link Layer protocol implementing
the connection-mode data link service in the OSI Reference Model as defined by ITU-T Recommendation
X.222.
LAPB is used to manage communication and packet framing between data terminal equipment (DTE) and
the data circuit-terminating equipment (DCE) devices in the X.25 protocol stack. LAPB is essentially HDLC in
Asynchronous Balanced Mode (ABM). LAPB sessions can be established by either the DTE or DCE. The
station initiating the call is determined to be the primary, and the responding station is the secondary.

Frame types
• I-Frames (Information frames): Carries upper-layer information and some control information. I-
frame functions include sequencing, flow control, and error detection and recovery. I-frames carry
send and receive sequence numbers.
• S-Frames (Supervisory Frames): Carries control information. S-frame functions include requesting
and suspending transmissions, reporting on status, and acknowledging the receipt of I-frames. S-
frames carry only receive sequence numbers.
• U-Frames (Unnumbered Frames): carries control information. U-frame functions include link setup
and disconnection, as well as error reporting. U-frames carry no sequence numbers
Frame format

Fig. 2.26 Frame format


Flag – The value of the flag is always 0x7E. In order to ensure that the bit pattern of the frame delimiter
flag does not appear in the data field of the frame (and therefore cause frame misalignment), a
technique known as Bit stuffing is used by both the transmitter and the receiver.
Address field – In LAPB, this field has no meaning since the protocol works in a point to point mode and
the DTE network address is represented in the layer 3 packets. This byte is therefore put to a
different use; it separates the link commands from the responses and can have only two values:
0x01 and 0x03. 0x01 identifies frames containing commands from DTE to DCE and responses to these
commands from DCE to DTE. 0x03 identifies frames containing commands from DCE to DTE and
responses to these commands from DTE to DCE.
Control field – it serves to identify the type of the frame. In addition, it includes sequence numbers,
control features and error tracking according to the frame type.
Modes of operation
LAPB works in the Asynchronous Balanced Mode (ABM). This mode is balanced (i.e., no master/slave
relationship) and is signified by the SABM (E)/SM frame. Each station may initialize, supervise, recover
from errors, and send frames at any time. The DTE and DCE are treated as equals.
FCS – The Frame Check Sequence enables a high level of physical error control by allowing the integrity of
the transmitted frame data to be checked.
Window size – LAPB supports an extended window size (modulo 128 and modulo 32768) where the
maximum number of outstanding frames for acknowledgment is raised from 7 (modulo 8) to 127 (modulo
128) and 32767 (modulo 32768).
Protocol operation
LAPB has no master/slave node relationships. The sender uses the Poll bit in command frames to insist on
an immediate response. In the response frame this same bit becomes the receiver's Final bit. The receiver
always turns on the Final bit in its response to a command from the sender with the Poll bit set. The P/F bit
is generally used when either end becomes unsure about proper frame sequencing because of a possible
missing acknowledgment, and it is necessary to re-establish a point of reference. It is also used to trigger
an acknowledgment of outstanding I-frames.
Protocol verification:
Finite State Machine Models
A finite-state machine (FSM) or finite-state automaton (plural: automata), or simply a state machine, is a
mathematical model of computation used to design both computer programs and sequential logic circuits.
It is conceived as an abstract machine that can be in one of a finite number of states. The machine is in
only one state at a time; the state it is in at any given time is called the current state. It can change from
one state to another when initiated by a triggering event or condition; this is called a transition. A
particular FSM is defined by a list of its states, and the triggering condition for each transition.
Finite-state machines can model a large number of problems, among which are electronic design
automation, communication protocol design, language parsing and other engineering applications. In


biology and artificial intelligence research, state machines or hierarchies of state machines have been used
to describe neurological systems. In linguistics, they are used to describe simple parts of the grammars of
natural languages.
The FSM Consist of
• States are those instants at which the protocol machine is waiting for the next event to happen,
e.g. waiting for an ACK.
• Transitions occur when some event happens, e.g. when a frame is sent, when a frame arrives,
when a timer goes off, or when an interrupt occurs.
• The Initial State gives a description of the system when it starts running.
• A deadlock is a situation in which the protocol can make no more forward progress: there exists a
set of states from which there is no exit and no progress can be made.
To know whether a protocol really works, specify and verify the protocol using, e.g., a finite state machine:
– Each protocol machine (sender or receiver) is in a specific state at every time instant.
– Each state has zero or more possible transitions to other states.
– One particular state is the initial state: from the initial state, some or possibly all other states may be
reachable by a sequence of transitions.
• Simplex stop-and-wait ARQ protocol:
– State SRC: S = 0, 1 → which frame the sender is sending;
R = 0, 1 → which frame the receiver is expecting;
C = 0, 1, A (ACK), − (empty) → channel state, i.e. what is in the channel
There are 9 transitions:

Transition  Who runs?  Frame Accepted  Frame Emitted  To Network Layer
    0           –        Frame lost      Frame lost          –
    1           R            0               A               –
    2           S            A               1              Yes
    3           R            1               A               –
    4           S            A               0              Yes
    5           R            0               A               –
    6           R            1               A              No
    7           S        Time out            0              No
    8           S        Time out            1               –
Table 2.1 List of Transitions
–Initial state (000): sender has just sent frame 0, receiver is expecting frame 0, and frame 0 is currently in
channel
–Transition 0: States channel losing its content.
–Transition 1: consists of channel correctly delivering packet 0 to receiver, and receiver expecting frame 1
and emitting ACK 0. Also receiver delivering packet 0 to the network layer.
–During normal operation, transitions 1,2,3,4 are repeated in order over and over: in each cycle, two
frames are delivered, bringing sender back to initial state.
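This loss-free cycle can be checked mechanically. The sketch below encodes transitions 1-4 as a graph over (S, R, C) states and verifies that every reachable state has an exit, i.e. no deadlock in the cycle; it is a toy reachability check, not a full model of the protocol (losses and timeouts are left out):

```python
def reachable(initial, transitions):
    """All states reachable from the initial state by following transitions."""
    seen, stack = set(), [initial]
    while stack:
        s = stack.pop()
        if s not in seen:
            seen.add(s)
            stack.extend(transitions.get(s, []))
    return seen

# State = (sender's frame, receiver's expected frame, channel content).
cycle = {
    ('0', '0', '0'): [('0', '1', 'A')],  # transition 1: receiver accepts frame 0
    ('0', '1', 'A'): [('1', '1', '1')],  # transition 2: sender accepts ACK, sends 1
    ('1', '1', '1'): [('1', '0', 'A')],  # transition 3: receiver accepts frame 1
    ('1', '0', 'A'): [('0', '0', '0')],  # transition 4: back to the initial state
}
states = reachable(('0', '0', '0'), cycle)
print(len(states), all(cycle[s] for s in states))  # 4 True : no deadlock
```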


Fig 2.27 FSM for Stop and Wait Protocol (Half Duplex)
Petri Net
A Petri Net (PN) is an abstract model to show the interaction between asynchronous processes. It is only
one of many ways to represent these interactions. Asynchronous means that the designer doesn't know
when the processes start or in which sequence they will take place. A common way to visualize the
concepts is with places, tokens, transitions and arcs. Note that a transition can only fire when there are
tokens in every input place. When it fires, one token is taken from every input place, and every output
place of the transition gets an (extra) token.
The Basics:
A Petri Net is a collection of directed arcs connecting places and transitions. Places may hold tokens. The
state or marking of a net is its assignment of tokens to places. Here is a simple net containing all
components of a Petri Net:

Fig 2.28 Petri Net Model


Arcs have capacity 1 by default; if other than 1, the capacity is marked on the arc. Places have infinite
capacity by default, and transitions have no capacity, and cannot store tokens at all. With the rule that arcs
can only connect places to transitions and vice versa, we have all we need to begin using Petri Nets. A few
other features and considerations will be added as we need them.

A transition is enabled when the number of tokens in each of its input places is at least equal to the arc
weight going from the place to the transition. An enabled transition may fire at any time. When fired, the
tokens in the input places are moved to output places, according to arc weights and place capacities. This
results in a new marking of the net, a state description of all places.
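The enabling and firing rules can be stated directly in Python (markings and arc weights as dictionaries; a sketch of the semantics, not a Petri net tool):

```python
def enabled(marking, pre):
    """A transition is enabled when every input place holds at least the
    arc weight's worth of tokens."""
    return all(marking.get(p, 0) >= w for p, w in pre.items())

def fire(marking, pre, post):
    """Fire: consume tokens from input places and produce tokens on output
    places according to arc weights (so tokens may appear or disappear)."""
    assert enabled(marking, pre)
    m = dict(marking)
    for p, w in pre.items():
        m[p] -= w
    for p, w in post.items():
        m[p] = m.get(p, 0) + w
    return m

# P1 --2--> T --3--> P2 : firing removes 2 tokens from P1 and adds 3 to P2.
m = fire({'P1': 2, 'P2': 0}, pre={'P1': 2}, post={'P2': 3})
print(m)   # {'P1': 0, 'P2': 3} : the new marking of the net
```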


Fig 2.29
When arcs have different weights, we have what might at first seem confusing behaviour. Here is a similar net, ready
to fire:

Fig 2.30
and here it is after firing:

Fig 2.31
When a transition fires, it takes the tokens that enabled it from the input places; it then distributes tokens
to output places according to arc weights. If the arc weights are all the same, it appears that tokens are
moved across the transition. If they differ, however, it appears that tokens may disappear or be created.
That, in fact, is what happens; think of the transition as removing its enabling tokens and producing output
tokens according to arc weight.

A special kind of arc, the inhibitor arc, is used to reverse the logic of an input place. With an inhibitor arc,
the absence of a token in the input place enables, not the presence:


Fig 2.32
This transition cannot fire, because the token in P2 inhibits it.
Tokens can play the following roles:
• A physical object: a robot;
• An information object: a message between two robots;
• A collection of objects: the people mover;
• An indicator of a state: the state in which a robot is: defender/attacker;
• An indicator of a condition: a token indicates whether a certain condition is fulfilled (ex. Soccer game
starts when the referee gives the signal).
Transitions can play the following roles:
• An event: start a thread, the switching of a machine from normal to safe mode;
• A transformation of an object: a robot that changes his role, see further;
• A transport of an object: the ball is passed between the robots.
An arc connects only places and transitions and indicates the direction in which the token travels.

Petri net Link https://www.youtube.com/watch?v=EmYVZuczJ6k


Finite State Machine Models https://www.youtube.com/watch?v=hJIST1cEf6A
SDLC AND HDLC https://www.youtube.com/watch?v=_fwVTFO-u4g

ARP:

ARP (Address Resolution Protocol) is a simple communications protocol used primarily today in IP and Ethernet networks. Its main purpose is to discover and associate IP addresses with physical MAC hardware addresses; that is, ARP finds the MAC address of a device on a network using only its IP address. The ARP protocol broadcasts a request to the network asking for the MAC address of the destination IP address. The machine with that IP address responds with its MAC address. Communication then drops to the link layer for physical-to-physical data transfer between the computers.
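As a rough illustration, the request/reply exchange can be modeled as a lookup over one broadcast domain (a toy simulation, not real packet I/O; all IPs and MACs below are invented):

```python
# Simulated ARP on a single broadcast domain.
# hosts: what each NIC would answer if it saw the broadcast request.

hosts = {
    "192.168.1.10": "aa:bb:cc:00:00:10",
    "192.168.1.11": "aa:bb:cc:00:00:11",
}

arp_cache = {}   # learned IP -> MAC mappings

def arp_resolve(target_ip):
    """Broadcast 'who has target_ip?'; only the owner replies with its MAC."""
    if target_ip in arp_cache:          # cache hit: no broadcast needed
        return arp_cache[target_ip]
    mac = hosts.get(target_ip)          # every host sees the broadcast;
    if mac is not None:                 # only the owner replies (unicast)
        arp_cache[target_ip] = mac
    return mac

print(arp_resolve("192.168.1.11"))  # aa:bb:cc:00:00:11
```

A real implementation keeps cache entries only for a limited time, so stale mappings age out.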

RARP:

RARP (Reverse ARP) is a legacy protocol that has been replaced by BOOTP and later by DHCP. Its purpose was to let diskless workstations (i.e., with no ability to store an IP address) discover their own IP address based on their MAC address. At boot, the workstation would broadcast a request for its IP address, and a RARP server would respond with the appropriate IP. For example:
RARP Request: What is my IP address (MAC address is within the Ethernet header)?
RARP Response: Your IP address is 192.168.1.11.
The main problems with RARP were:
• The RARP server needed to be populated with MAC-to-IP mappings.
• No additional data (DNS, NTP) could be sent other than the IP address.

• It only operates within a broadcast domain.


RARP was, therefore, superseded by BOOTP. However, BOOTP still required a static mapping to be defined
(MAC to IP). DHCP was then built upon BOOTP with the ability to use a pool of addresses.
GARP:

In more advanced networking situations you may run across something known as Gratuitous ARP (GARP). A gratuitous ARP is often performed by a computer when it is first booted up. When a NIC is first powered on, it automatically ARPs out its MAC address to the entire network. This allows switches to learn the location of the physical device and lets DHCP servers know where to send an IP address if one is needed and requested. Gratuitous ARP is also used by many high-availability routing and load-balancing devices. Routers or load balancers are often configured in an HA (high availability) pair to provide optimum reliability and maximum uptime. Usually these devices are configured in an Active/Standby pair: one device is active while the second waits for the active device to fail, like an understudy for the lead role. If the leading lady gets sick, the understudy gladly and quickly takes her place in the limelight.
When a failure occurs, the standby device asserts itself as the new active device and issues a gratuitous ARP out to the network, instructing all other devices to send traffic to its MAC address instead of the failed device's.


Unit 3: Network Layer

IPV4 ADDRESSES:

An IPv4 address is a 32-bit address that uniquely and universally defines the connection of a host
or a router to the Internet. IPv4 addresses are unique in the sense that each address defines one,
and only one, connection to the Internet.
IPv4 addresses are universal in the sense that the addressing system must be accepted by any host
that wants to be connected to the Internet.

Address Space
An address space is the total number of addresses used by the protocol. If a protocol uses b bits to define an address, the address space is 2^b, because each bit can have two different values (0 or 1). IPv4 uses 32-bit addresses, which means that the address space is 2^32, or 4,294,967,296 (more than four billion). If there were no restrictions, more than 4 billion devices could be connected to the Internet.

Notation
There are three common notations to show an IPv4 address: binary notation (base 2), dotted-
decimal notation (base 256), and hexadecimal notation (base 16).
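All three notations describe the same 32-bit value, as a small conversion sketch shows (the helper name is our own):

```python
def notations(dotted):
    """Show an IPv4 address in its binary and hexadecimal notations."""
    octets = [int(o) for o in dotted.split(".")]
    value = int.from_bytes(bytes(octets), "big")   # the 32-bit integer
    binary = ".".join(f"{o:08b}" for o in octets)  # base 2, octet by octet
    hexa = f"0x{value:08X}"                        # base 16
    return binary, hexa

b, h = notations("192.168.1.1")
print(b)  # 11000000.10101000.00000001.00000001
print(h)  # 0xC0A80101
```

Dotted-decimal is simply base 256: each octet is one "digit" in the range 0 to 255.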

Hierarchy in Addressing
A 32-bit IPv4 address is also hierarchical, but divided only into two parts. The first part of the
address, called the prefix, defines the network; the second part of the address, called the suffix,
defines the node (connection of a device to the Internet). The prefix length is n bits and the suffix length is (32 − n) bits.
Classful Addressing:

Address Depletion:

The reason that classful addressing has become obsolete is address depletion. To understand the
problem, let us think about class A. This class can be assigned to only 128 organizations in the
world, but each organization needs to have a single network with 16,777,216 nodes. Class B
addresses were designed for midsize organizations, but many of the addresses in this class also
remained unused. Class C addresses have a completely different flaw in design. The number of
addresses that can be used in each network (256) was so small that most companies were not
comfortable using a block in this address class.

Subnetting and Supernetting

In subnetting, a class A or class B block is divided into several subnets. Each subnet has a larger prefix length than the original network. For example, if a network in class A (prefix length 8) is divided into four subnets, each subnet has a prefix length of nsub = 10. At the same time, if all of the addresses in a network are not used, subnetting allows the addresses to be divided among several organizations.

While subnetting was devised to divide a large block into smaller ones, supernetting was devised to combine several class C blocks into a larger block to be attractive to organizations that need more than the 256 addresses available in a class C block. This idea did not work either because it makes the routing of packets more difficult.

Advantage of Classful Addressing

Given an address, we can easily find the class of the address and, since the prefix length for each
class is fixed, we can find the prefix length immediately. In other words, the prefix length in
classful addressing is inherent in the address; no extra information is needed to extract the prefix
and the suffix.

Classless Addressing:
In 1996, the Internet authorities announced a new architecture called classless addressing. In
classless addressing, variable-length blocks are used that belong to no classes. We can have a
block of 1 address, 2 addresses, 4 addresses, 128 addresses, and so on. In classless addressing, the
whole address space is divided into variable length blocks. The prefix in an address defines the
block (network); the suffix defines the node (device). Theoretically, we can have a block of 2^0, 2^1, 2^2, . . . , 2^32 addresses.

Unlike classful addressing, the prefix length in classless addressing is variable. We can have a
prefix length that ranges from 0 to 32. The size of the network is inversely proportional to the
length of the prefix. A small prefix means a larger network; a large prefix means a smaller
network. The idea of classless addressing can be easily applied to classful addressing. An address
in class A can be thought of as a classless address in which the prefix length is 8. An address in
class B can be thought of as a classless address in which the prefix is 16, and so on. In other words,
classful addressing is a special case of classless addressing.

Prefix Length: Slash Notation:

In this case, the prefix length, n, is added to the address, separated by a slash. The notation is
informally referred to as slash notation and formally as classless interdomain routing or CIDR
(pronounced cider) strategy.
Extracting Information from an Address
Three pieces of information about the block to which the address belongs: the number of
addresses, the first address in the block, and the last address.

1. The number of addresses in the block is found as N = 2^(32−n).


2. To find the first address, we keep the n leftmost bits and set the (32 − n) rightmost bits all to 0s.
3. To find the last address, we keep the n leftmost bits and set the (32 − n) rightmost bits all to 1s.
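These three rules translate directly into bit operations (a sketch; the sample block 167.199.170.82/27 is just an illustrative value):

```python
def to_dotted(v):
    """Render a 32-bit integer in dotted-decimal notation."""
    return ".".join(str((v >> s) & 0xFF) for s in (24, 16, 8, 0))

def block_info(cidr):
    """Number of addresses, first and last address of a CIDR block."""
    addr, n = cidr.split("/")
    n = int(n)
    value = int.from_bytes(bytes(int(o) for o in addr.split(".")), "big")
    count = 2 ** (32 - n)                      # rule 1: N = 2^(32 - n)
    first = value & ~(count - 1) & 0xFFFFFFFF  # rule 2: rightmost 32 - n bits -> 0
    last = first | (count - 1)                 # rule 3: rightmost 32 - n bits -> 1
    return count, to_dotted(first), to_dotted(last)

print(block_info("167.199.170.82/27"))  # (32, '167.199.170.64', '167.199.170.95')
```

Any address in the block produces the same answers, because the suffix bits are masked away before the first and last addresses are computed.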

Subnetting

An organization (or an ISP) that is granted a range of addresses may divide the range into several
subranges and assign each subrange to a subnetwork (or subnet). Note that nothing stops the
organization from creating more levels. A subnetwork can be divided into several sub-
subnetworks. A sub-subnetwork can be divided into several sub-sub-subnetworks, and so on.

Designing Subnets
We assume the total number of addresses granted to the organization is N, the prefix length is n,
the assigned number of addresses to each subnetwork is Nsub, and the prefix length for each
subnetwork is nsub. Then the following steps need to be carefully followed to guarantee the proper
operation of the subnetworks.

❑ The number of addresses in each subnetwork should be a power of 2.

❑ The prefix length for each subnetwork should be found using the following formula:

nsub = 32 − log2(Nsub)

❑ The starting address in each subnetwork should be divisible by the number of addresses in that subnetwork. This can be achieved if we first assign addresses to larger subnetworks.

(The first address of a block can also be computed as: first address = (prefix in decimal) × 2^(32−n) = (prefix in decimal) × N.)

Example
An organization is granted a block of addresses with the beginning address 14.24.74.0/24. The
organization needs to have 3 subblocks of addresses to use in its three subnets: one subblock of 10
addresses, one subblock of 60 addresses, and one subblock of 120 addresses. Design the
subblocks.

Question: An ISP is granted a block of addresses starting with 190.100.0.0/16 (65,536 addresses). The ISP needs to distribute these addresses to three groups of customers as follows:
a. The first group has 64 customers; each needs 256 addresses.
b. The second group has 128 customers; each needs 128 addresses.
c. The third group has 128 customers; each needs 64 addresses.
Design the subblocks and find out how many addresses are still available after these allocations.
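One worked answer, assigning the largest blocks first as the design rules require (the layout below is one valid assignment, not the only one):

```python
import math

total = 2 ** (32 - 16)                 # 190.100.0.0/16 -> 65,536 addresses
base = (190 << 24) | (100 << 16)       # numeric start of the granted block

groups = [(64, 256), (128, 128), (128, 64)]   # (customers, addresses each)

def dotted(v):
    return ".".join(str((v >> s) & 0xFF) for s in (24, 16, 8, 0))

used = 0
for customers, size in groups:
    n_sub = 32 - int(math.log2(size))  # n_sub = 32 - log2(N_sub)
    first = base + used
    used += customers * size
    last = base + used - 1
    print(f"{customers} x /{n_sub}: {dotted(first)}/{n_sub} ... {dotted(last)}")

print("used:", used, "remaining:", total - used)  # used: 40960 remaining: 24576
```

The first group gets 64 blocks of /24, the second 128 blocks of /25, and the third 128 blocks of /26; 24,576 addresses remain available.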

Network Address Translation (NAT)
A technology that can provide the mapping between the private and universal addresses, and at the same time support virtual private networks, is Network Address Translation (NAT). The technology allows a site to use a set of private addresses for internal communication and a set of global Internet addresses (at least one) for communication with the rest of the world.

IPv6 ADDRESSING:

The main reason for migration from IPv4 to IPv6 is the small size of the address space in
IPv4. An IPv6 address is 128 bits or 16 bytes (octets) long, four times the address length in IPv4.

Representation
The following shows two of these notations: binary and colon hexadecimal.
Abbreviation
Although the IP address, even in hexadecimal format, is very long, many of the digits are
zeros. In this case, we can abbreviate the address. The leading zeros of a section (four
digits between two colons) can be omitted. Only the leading zeros can be dropped, not
the trailing zeros.
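Python's standard ipaddress module applies exactly this abbreviation rule (drop leading zeros, replace the longest run of zero sections with ::), which makes the rule easy to check by hand:

```python
import ipaddress

full = "FDEC:0074:0000:0000:0000:B0FF:0000:FFF0"
addr = ipaddress.IPv6Address(full)
print(addr.compressed)   # fdec:74::b0ff:0:fff0
print(addr.exploded)     # fdec:0074:0000:0000:0000:b0ff:0000:fff0
```

Note that only the single longest run of zero sections is replaced by ::; the lone zero section near the end stays as 0, because :: may appear at most once in an address.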

Congestion Control
Congestion control refers to techniques and mechanisms that can either prevent congestion
before it happens or remove congestion after it has happened. In general, we can divide
congestion control mechanisms into two broad categories: open-loop congestion control
(prevention) and closed-loop congestion control (removal).

Open-Loop Congestion Control:


In open-loop congestion control, policies are applied to prevent congestion before it happens. In
these mechanisms, congestion control is handled by either the source or the destination.
Retransmission Policy Retransmission is sometimes unavoidable. If the sender feels that a sent
packet is lost or corrupted, the packet needs to be retransmitted. Retransmission in general may
increase congestion in the network. However, a good retransmission policy can prevent
congestion. The retransmission policy and the retransmission timers must be designed to
optimize efficiency and at the same time prevent congestion.

Window Policy The type of window at the sender may also affect congestion. The Selective Repeat
window is better than the Go-Back-N window for congestion control.

Acknowledgment Policy The acknowledgment policy imposed by the receiver may also affect
congestion. If the receiver does not acknowledge every packet it receives, it may slow down the
sender and help prevent congestion. A receiver may send an acknowledgment only if it has a
packet to be sent or a special timer expires. A receiver may decide to acknowledge only N packets
at a time. Note that acknowledgments are also part of the load in a network.

Sending fewer acknowledgments means imposing less load on the network.

Discarding Policy A good discarding policy by the routers may prevent congestion and at the
same time may not harm the integrity of the transmission. For example, in audio transmission, if
the policy is to discard less sensitive packets when congestion is likely to happen, the quality of
sound is still preserved and congestion is prevented or alleviated.

Admission Policy An admission policy, which is a quality-of-service mechanism , can also prevent
congestion in virtual-circuit networks. Switches in a flow first check the resource requirement of a
flow before admitting it to the network. A router can deny establishing a virtual-circuit connection
if there is congestion in the network or if there is a possibility of future congestion.

Closed-Loop Congestion Control


Closed-loop congestion control mechanisms try to alleviate congestion after it happens.

Backpressure The technique of backpressure refers to a congestion control mechanism in which a


congested node stops receiving data from the immediate upstream node or nodes. This may cause
the upstream node or nodes to become congested, and they, in turn, reject data from their
upstream node or nodes, and so on. Backpressure is a node-to-node congestion control that starts
with a node and propagates, in the opposite direction of data flow, to the source. The backpressure
technique can be applied only to virtual circuit networks, in which each node knows the upstream
node from which a flow of data is coming.
Choke Packet A choke packet is a packet sent by a node to the source to inform it of congestion.
Note the difference between the backpressure and choke-packet methods. In backpressure, the
warning is from one node to its upstream node, although the warning may eventually reach the
source station. In the choke-packet method, the warning is from the router, which has
encountered congestion, directly to the source station. The intermediate nodes through which the
packet has traveled are not warned.

Implicit Signaling In implicit signaling, there is no communication between the congested node
or nodes and the source. The source guesses that there is congestion somewhere in the network
from other symptoms. For example, when a source sends several packets and there is no
acknowledgment for a while, one assumption is that the network is congested. The delay in
receiving an acknowledgment is interpreted as congestion in the network; the source should slow
down.

Explicit Signaling The node that experiences congestion can explicitly send a signal to the source
or destination. The explicit-signaling method, however, is different from the choke-packet method.
In the choke-packet method, a separate packet is used for this purpose; in the explicit-signaling
method, the signal is included in the packets that carry data. Explicit signaling can occur in either
the forward or the backward direction. This type of congestion control can be seen in an ATM
network.
ADDRESS MAPPING

A physical address is a local address. It is called a physical address because it is usually (but not
always) implemented in hardware. An example of a physical address is the 48-bit MAC address in
the Ethernet protocol, which is imprinted on the NIC installed in the host or router. The physical
address and the logical address are two different identifiers.

Mapping Logical to Physical Address: ARP

The system on the left (A) has a packet that needs to be delivered to another system (B) with IP
address 141.23.56.23. System A needs to pass the packet to its data link layer for the actual
delivery, but it does not know the physical address of the recipient. It uses the services of ARP by
asking the ARP protocol to send a broadcast ARP request packet to ask for the physical address of
a system with an IP address of 141.23.56.23.

This packet is received by every system on the physical network, but only system B will answer it,
as shown in Figure 21.1 b. System B sends an ARP reply packet that includes its physical address.
Now system A can send all the packets it has for this destination by using the physical address it
received.
Four cases using ARP: The following are four different cases in which the services of ARP can
be used

1. The sender is a host and wants to send a packet to another host on the same network. In this
case, the logical address that must be mapped to a physical address is the destination IP address in
the datagram header.

2. The sender is a host and wants to send a packet to another host on another network. In this
case, the host looks at its routing table and finds the IP address of the next hop (router) for this
destination. If it does not have a routing table, it looks for the IP address of the default router. The
IP address of the router becomes the logical address that must be mapped to a physical address.

3. The sender is a router that has received a datagram destined for a host on another network. It
checks its routing table and finds the IP address of the next router. The IP address of the next
router becomes the logical address that must be mapped to a physical address.
4. The sender is a router that has received a datagram destined for a host on the same network.
The destination IP address of the datagram becomes the logical address that must be mapped to a
physical address.
An ARP request is broadcast; an ARP reply is unicast.

Mapping Physical to Logical Address: RARP, BOOTP, and DHCP


There are occasions in which a host knows its physical address, but needs to know its logical
address. This may happen in two cases:

1. A diskless station is just booted. The station can find its physical address by checking its
interface, but it does not know its IP address.
2. An organization does not have enough IP addresses to assign to each station; it needs to
assign IP addresses on demand. The station can send its physical address and ask for a
short time lease.
RARP
Reverse Address Resolution Protocol (RARP) finds the logical address for a machine that knows
only its physical address. The machine can get its physical address (by reading its NIC, for
example), which is unique locally. It can then use the physical address to get the logical address by
using the RARP protocol. A RARP request is created and broadcast on the local network. Another
machine on the local network that knows all the IP addresses will respond with a RARP reply. The
requesting machine must be running a RARP client program; the responding machine must be
running a RARP server program.
There is a serious problem with RARP: Broadcasting is done at the data link layer. The physical
broadcast address, all 1s in the case of Ethernet, does not pass the boundaries of a network. This
means that if an administrator has several networks or several subnets, it needs to assign a RARP
server for each network or subnet. This is the reason that RARP is almost obsolete. Two protocols,
BOOTP and DHCP, are replacing RARP.

BOOTP
The Bootstrap Protocol (BOOTP) is a client/server protocol designed to provide physical address
to logical address mapping. BOOTP is an application layer protocol. The administrator may put the
client and the server on the same network or on different networks.

How a client can send an IP datagram when it knows neither its own IP address (the source
address) nor the server's IP address (the destination address). The client simply uses all 0s as the
source address and all 1s as the destination address.
One of the advantages of BOOTP over RARP is that the client and server are application-
layer processes. As in other application-layer processes, a client can be in one network and
the server in another, separated by several other networks. However, there is one problem
that must be solved. The BOOTP request is broadcast because the client does not know the
IP address of the server. A broadcast IP datagram cannot pass through any router. To solve
the problem, there is a need for an intermediary.
One of the hosts (or a router that can be configured to operate at the application layer) can
be used as a relay. The host in this case is called a relay agent. The relay agent knows the
unicast address of a BOOTP server. When it receives this type of packet, it encapsulates the
message in a unicast datagram and sends the request to the BOOTP server.
The packet, carrying a unicast destination address, is routed by any router and reaches the
BOOTP server. The BOOTP server knows the message comes from a relay agent because
one of the fields in the request message defines the IP address of the relay agent. The relay
agent, after receiving the reply, sends it to the BOOTP client.

DHCP
BOOTP is not a dynamic configuration protocol. When a client requests its IP address, the
BOOTP server consults a table that matches the physical address of the client with its IP
address. This implies that the binding between the physical address and the IP address of
the client already exists. The binding is predetermined.

However, what if a host moves from one physical network to another? What if a host wants
a temporary IP address? BOOTP cannot handle these situations because the binding
between the physical and IP addresses is static and fixed in a table until changed by the
administrator. BOOTP is a static configuration protocol.

The Dynamic Host Configuration Protocol (DHCP) has been devised to provide static and
dynamic address allocation that can be manual or automatic.

Static Address Allocation In this capacity DHCP acts as BOOTP does. It is backward
compatible with BOOTP, which means a host running the BOOTP client can request a static
address from a DHCP server. A DHCP server has a database that statically binds physical
addresses to IP addresses.

Dynamic Address Allocation DHCP has a second database with a pool of available IP
addresses. This second database makes DHCP dynamic. When a DHCP client requests a
temporary IP address, the DHCP server goes to the pool of available (unused) IP addresses
and assigns an IP address for a negotiable period of time.
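The two databases can be sketched as a toy allocator (illustrative addresses and MACs; real DHCP also involves DISCOVER/OFFER/REQUEST/ACK messages, omitted here):

```python
import time

# Static database: BOOTP-style bindings of physical to IP addresses.
static_bindings = {"aa:bb:cc:00:00:01": "192.168.1.10"}

# Dynamic database: a pool of unused addresses plus active leases.
pool = ["192.168.1.100", "192.168.1.101", "192.168.1.102"]
leases = {}   # mac -> (ip, lease expiry time)

def allocate(mac, lease_seconds=3600):
    if mac in static_bindings:                 # static allocation first
        return static_bindings[mac]
    if mac in leases:                          # renew an existing lease
        ip, _ = leases[mac]
    else:
        ip = pool.pop(0)                       # take an unused address
    leases[mac] = (ip, time.time() + lease_seconds)
    return ip

print(allocate("aa:bb:cc:00:00:01"))  # 192.168.1.10 (static)
print(allocate("aa:bb:cc:00:00:99"))  # 192.168.1.100 (dynamic)
```

When a lease expires, a real server returns the address to the pool so it can be reassigned; that expiry sweep is left out of this sketch.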

Routing:
The goal of the network layer is to deliver a datagram from its source to its destination or
destinations. If a datagram is destined for only one destination (one-to-one delivery), we have
unicast routing. If the datagram is destined for several destinations (one-to-many delivery), we
have multicast routing.

A routing table can be either static or dynamic. A static table is one with manual entries.

A dynamic table, on the other hand, is one that is updated automatically when there is a change
somewhere in the internet. The tables need to be updated as soon as there is a change in the
internet. For instance, they need to be updated when a router is down, and they need to be
updated whenever a better route has been found.

Intra and Inter-domain Routing:


An internet is divided into autonomous systems. An autonomous system (AS) is a group of
networks and routers under the authority of a single administration. Routing inside an
autonomous system is referred to as intra-domain routing. Routing between autonomous systems
is referred to as inter-domain routing. Each autonomous system can choose one or more intra-
domain routing protocols to handle routing inside the autonomous system.
Distance Vector Routing:


In distance vector routing, the least-cost route between any two nodes is the route with minimum
distance. In this protocol, as the name implies, each node maintains a vector (table) of minimum
distances to every node.

Initialization
The tables in Figure 22.14 are stable; each node knows how to reach any other node and the cost.
At the beginning, however, this is not the case. Each node can know only the distance between
itself and its immediate neighbors, those directly connected to it. So for the moment, we assume
that each node can send a message to the immediate neighbors and find the distance between
itself and these neighbors. The distance for any entry that is not a neighbor is marked as infinite
(unreachable).
Sharing
The whole idea of distance vector routing is the sharing of information between neighbors.
Although node A does not know about node E, node C does. So if node C shares its routing table
with A, node A can also know how to reach node E. On the other hand, node C does not know how
to reach node D, but node A does. If node A shares its routing table with node C, node C also knows
how to reach node D. In other words, nodes A and C, as immediate neighbors, can improve their
routing tables if they help each other.

In other words, sharing here means sharing only the first two columns.

In distance vector routing, each node shares its routing table with its immediate neighbors
periodically and when there is a change.

Updating

When a node receives a two-column table from a neighbor, it needs to update its routing table.
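The update step is the Bellman-Ford relaxation: the distance to a destination via a neighbor is the cost to that neighbor plus the neighbor's advertised distance, and the node keeps the minimum. A sketch (the table layout is our own):

```python
INF = float("inf")

def dv_update(my_table, neighbor, cost_to_neighbor, neighbor_table):
    """Merge a neighbor's two-column table (destination -> distance)
    into my_table, which maps destination -> (distance, next hop)."""
    changed = False
    for dest, d_cost in neighbor_table.items():
        via = cost_to_neighbor + d_cost
        if via < my_table.get(dest, (INF, None))[0]:
            my_table[dest] = (via, neighbor)   # shorter path found via neighbor
            changed = True
    return changed   # a change would trigger a triggered update

A = {"A": (0, None), "B": (2, "B")}
B_table = {"A": 2, "B": 0, "E": 4}      # two columns: destination, distance
dv_update(A, "B", 2, B_table)
print(A["E"])   # (6, 'B') -- A learns a route to E via B
```

This is exactly the sharing scenario described above: A did not know about E, but after receiving C's or B's table it can compute a route through that neighbor.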

When to Share
The question now is, When does a node send its partial routing table (only two columns) to all its
immediate neighbors? The table is sent both periodically and when there is a change in the table.

Periodic Update A node sends its routing table, normally every 30 s, in a periodic update. The
period depends on the protocol that is using distance vector routing.
Triggered Update A node sends its two-column routing table to its neighbors anytime there is a
change in its routing table. This is called a triggered update. The change can result from the
following.

1. A node receives a table from a neighbor, resulting in changes in its own table after
updating.
2. A node detects some failure in the neighboring links which results in a distance change to
infinity.

Count to Infinity:

A problem with distance-vector routing is that any decrease in cost (good news) propagates
quickly, but any increase in cost (bad news) will propagate slowly. For a routing protocol to work
properly, if a link is broken (cost becomes infinity), every other router should be aware of it
immediately, but in distance-vector routing, this takes some time. The problem is referred to as
count to infinity.

Two-Node Loop Instability


A problem with distance vector routing is instability, which means that a network using this
protocol can become unstable.
At the beginning, both nodes A and B know how to reach node X. But suddenly, the link between A
and X fails. Node A changes its table. If A can send its table to B immediately, everything is fine.
However, the system becomes unstable if B sends its forwarding table to A before receiving A’s
forwarding table. Node A receives the update and, assuming that B has found a way to reach X,
immediately updates its forwarding table. Now A sends its new update to B. Now B thinks that
something has been changed around A and updates its forwarding table. The cost of reaching X
increases gradually until it reaches infinity. At this moment, both A and B know that X cannot be
reached. However, during this time the system is not stable. Node A thinks that the route to X is via
B; node B thinks that the route to X is via A. If A receives a packet destined for X, the packet goes to
B and then comes back to A. Similarly, if B receives a packet destined for X, it goes to A and comes
back to B. Packets bounce between A and B, creating a two-node loop problem. A few solutions
have been proposed for instability of this kind.

Split Horizon
One solution to instability is called split horizon. In this strategy, instead of flooding the table
through each interface, each node sends only part of its table through each interface. If, according
to its table, node B thinks that the optimum route to reach X is via A, it does not need to advertise
this piece of information to A; the information has come from A (A already knows). Taking
information from node A, modifying it, and sending it back to node A is what creates the confusion.
In our scenario, node B eliminates the last line of its forwarding table before it sends it to A. In this
case, node A keeps the value of infinity as the distance to X. Later, when node A sends its
forwarding table to B, node B also corrects its forwarding table. The system becomes stable after
the first update: both node A and node B know that X is not reachable.

Poison Reverse
Using the split-horizon strategy has one drawback. Normally, the corresponding protocol uses a
timer, and if there is no news about a route, the node deletes the route from its table. When node B
in the previous scenario eliminates the route to X from its advertisement to A, node A cannot
guess whether this is due to the split-horizon strategy (the source of information was A) or
because B has not received any news about X recently.

In the poison reverse strategy B can still advertise the value for X, but if the source of information
is A, it can replace the distance with infinity as a warning: “Do not use this value; what I know
about this route comes from you.”
Link-State Routing:
In Link State Routing algorithm the cost associated with an edge defines the state of the link. Links
with lower costs are preferred to links with higher costs; if the cost of a link is infinity, it means
that the link does not exist or has been broken.

Link-State Database (LSDB)


To create a least-cost tree with this method, each node needs to have a complete map of the
network, which means it needs to know the state of each link. The collection of states for all links
is called the link-state database (LSDB). There is only one LSDB for the whole internet; each node
needs to have a duplicate of it to be able to create the least-cost tree. The LSDB can be represented
as a two-dimensional array (matrix) in which the value of each cell defines the cost of the
corresponding link.

Now the question is how each node can create this LSDB that contains information about the
whole internet. This can be done by a process called flooding. Each node can send some greeting
messages to all its immediate neighbors (those nodes to which it is connected directly) to collect
two pieces of information for each neighboring node: the identity of the node and the cost of the
link. The combination of these two pieces of information is called the LS packet (LSP); the LSP is
sent out of each interface. When a node receives an LSP from one of its interfaces, it compares the
LSP with the copy it may already have. If the newly arrived LSP is older than the one it has (found
by checking the sequence number), it discards the LSP. If it is newer or the first one received, the
node discards the old LSP (if there is one) and keeps the received one. It then sends a copy of it out
of each interface except the one from which the packet arrived. This guarantees that flooding stops
somewhere in the network (where a node has only one interface).
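The accept-or-discard decision during flooding can be sketched as follows (the LSP layout and interface names are illustrative):

```python
def receive_lsp(known, lsp, arrived_on, interfaces):
    """Flooding rule: keep an LSP only if it is newer than the stored copy
    (higher sequence number), then forward it on every other interface."""
    node, seq, links = lsp
    old = known.get(node)
    if old is not None and old[0] >= seq:
        return []                       # older or duplicate: discard
    known[node] = (seq, links)          # replace the old LSP, keep the new one
    return [i for i in interfaces if i != arrived_on]   # forward copies

known = {}
print(receive_lsp(known, ("A", 1, {"B": 2}), "if0", ["if0", "if1", "if2"]))
# ['if1', 'if2']
print(receive_lsp(known, ("A", 1, {"B": 2}), "if1", ["if0", "if1", "if2"]))
# []  (duplicate, so flooding stops)
```

Because duplicates are discarded and packets are never sent back out the arrival interface, every node ends up with one copy of each LSP and the flood terminates.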

In other words, a node can make the whole map if it needs to, using this LSDB.
In the distance-vector routing algorithm, each router tells its neighbors what it knows
about the whole internet; in the link-state routing algorithm, each router tells the whole
internet what it knows about its neighbors.

Formation of Least-Cost Trees


To create a least-cost tree for itself, using the shared LSDB, each node needs to run the famous
Dijkstra Algorithm. This iterative algorithm uses the following steps:

1. The node chooses itself as the root of the tree, creating a tree with a single node, and
sets the total cost of each node based on the information in the LSDB.

2. The node selects one node, among all nodes not in the tree, which is closest to the root,
and adds this to the tree. After this node is added to the tree, the cost of all other nodes
not in the tree needs to be updated because the paths may have been changed.

3. The node repeats step 2 until all nodes are added to the tree.
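The three steps above can be sketched directly over an LSDB stored as a cost matrix. The node numbering and the sample matrix are illustrative assumptions; `INF` marks a pair of nodes with no direct link.

```python
# Minimal Dijkstra sketch over an LSDB cost matrix, following the
# three steps above. The sample 4-node LSDB is an illustrative assumption.
INF = float('inf')

def dijkstra(lsdb, root):
    n = len(lsdb)
    cost = [INF] * n              # best known cost from root to each node
    cost[root] = 0                # step 1: the node chooses itself as root
    in_tree = [False] * n
    for _ in range(n):
        # Step 2: pick the closest node not yet in the tree.
        u = min((i for i in range(n) if not in_tree[i]),
                key=lambda i: cost[i], default=None)
        if u is None or cost[u] == INF:
            break
        in_tree[u] = True
        # Update the cost of nodes still outside the tree (paths may change).
        for v in range(n):
            if not in_tree[v] and cost[u] + lsdb[u][v] < cost[v]:
                cost[v] = cost[u] + lsdb[u][v]
    return cost                   # step 3 done: all reachable nodes added

lsdb = [[0,   2,   INF, 7],
        [2,   0,   3,   INF],
        [INF, 3,   0,   1],
        [7,   INF, 1,   0]]
print(dijkstra(lsdb, 0))          # [0, 2, 5, 6]
```

Note how the direct cost of 7 from node 0 to node 3 is replaced by the cheaper path 0-1-2-3 of cost 6 as nodes join the tree.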
TECHNIQUES TO IMPROVE QoS
Several techniques can be used to improve the quality of service. Four common methods are
scheduling, traffic shaping, admission control, and resource reservation.

Scheduling
Packets from different flows arrive at a switch or router for processing. A good scheduling
technique treats the different flows in a fair and appropriate manner. Several scheduling
techniques are designed to improve the quality of service.

FIFO Queuing
In first-in, first-out (FIFO) queuing, packets wait in a buffer (queue) until the node (router
or switch) is ready to process them. If the average arrival rate is higher than the average
processing rate, the queue will fill up and new packets will be discarded. A FIFO queue is
familiar to those who have had to wait for a bus at a bus stop.

Priority Queuing
In priority queuing, packets are first assigned to a priority class. Each priority class has its own
queue. The packets in the highest-priority queue are processed first. Packets in the lowest-priority
queue are processed last. Note that the system does not stop serving a queue until it is empty.
A priority queue can provide better QoS than the FIFO queue because higher priority traffic, such
as multimedia, can reach the destination with less delay. However, there is a potential drawback. If
there is a continuous flow in a high-priority queue, the packets in the lower-priority queues will
never have a chance to be processed. This is a condition called starvation.

Weighted Fair Queuing


A better scheduling method is weighted fair queuing. In this technique, the packets are still
assigned to different classes and admitted to different queues. The queues, however, are
weighted based on the priority of the queues; higher priority means a higher weight. The
system processes packets in each queue in a round-robin fashion with the number of
packets selected from each queue based on the corresponding weight. For example, if the
weights are 3, 2, and 1, three packets are processed from the first queue, two from the
second queue, and one from the third queue. If the system does not impose priority on the
classes, all weights can be equal. In this way, we have fair queuing with priority.
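The weighted round-robin behaviour described above can be sketched as follows. The queue contents and names are illustrative assumptions; with weights 3, 2, and 1, three packets are drained from the first queue, two from the second, and one from the third per round.

```python
# Weighted fair queuing sketch: drain packets from the queues in
# weighted round-robin order. Queue contents are illustrative assumptions.
from collections import deque

def weighted_fair_schedule(queues, weights, rounds=1):
    """Return the order in which packets are processed."""
    order = []
    for _ in range(rounds):
        for q, w in zip(queues, weights):
            for _ in range(w):       # take up to 'w' packets from this queue
                if q:
                    order.append(q.popleft())
    return order

q1 = deque(['a1', 'a2', 'a3'])
q2 = deque(['b1', 'b2'])
q3 = deque(['c1'])
print(weighted_fair_schedule([q1, q2, q3], [3, 2, 1]))
# ['a1', 'a2', 'a3', 'b1', 'b2', 'c1']
```

With all weights equal, this reduces to plain fair queuing, as the text notes.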

Traffic Shaping
Traffic shaping is a mechanism to control the amount and the rate of the traffic sent to the
network. Two techniques can shape traffic: leaky bucket and token bucket.

Leaky Bucket
If a bucket has a small hole at the bottom, the water leaks from the bucket at a constant rate
as long as there is water in the bucket. The rate at which the water leaks does not depend

on the rate at which the water is input to the bucket unless the bucket is empty. The input
rate can vary, but the output rate remains constant. Similarly, in networking, a technique
called leaky bucket can smooth out bursty traffic. Bursty chunks are stored in the bucket
and sent out at an average rate.

A leaky bucket algorithm shapes bursty traffic into fixed-rate traffic by averaging the
data rate. It may drop the packets if the bucket is full.
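A discrete-time sketch of this behaviour follows. The capacity, output rate, and arrival pattern are illustrative assumptions; the point is that a burst is stored and drained at a constant rate, with excess dropped when the bucket is full.

```python
# Discrete-time leaky bucket sketch: bursty input is stored in a bucket
# of limited capacity and drained at a constant rate; packets arriving
# when the bucket is full are dropped. All numbers are illustrative.
def leaky_bucket(arrivals, capacity, out_rate):
    level, sent, dropped = 0, [], 0
    for tick_input in arrivals:
        if level + tick_input <= capacity:
            level += tick_input                  # store the burst
        else:
            dropped += tick_input - (capacity - level)
            level = capacity                     # bucket full: excess dropped
        out = min(level, out_rate)               # constant-rate drain
        sent.append(out)
        level -= out
    return sent, dropped

# Bursty input: 10 packets at once, then nothing, then 4 more.
sent, dropped = leaky_bucket([10, 0, 0, 4, 0, 0], capacity=8, out_rate=2)
print(sent, dropped)   # [2, 2, 2, 2, 2, 2] 2
```

The bursty input of 10-then-4 comes out as a steady 2 packets per tick, with 2 packets dropped when the burst overflowed the bucket.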

Token Bucket
The leaky bucket is very restrictive. It does not credit an idle host. For example, if a host is
not sending for a while, its bucket becomes empty. Now if the host has bursty data, the
leaky bucket allows only an average rate. The time when the host was idle is not taken into
account. On the other hand, the token bucket algorithm allows idle hosts to accumulate
credit for the future in the form of tokens.

Assume the capacity of the bucket is c tokens and tokens enter the bucket at the rate of r
tokens per second. The system removes one token for every cell of data sent. The maximum
number of cells that can enter the network during any time interval of length t is shown
below.

Maximum number of cells = c + r × t

The maximum average rate for the token bucket is shown below.

Maximum average rate = (c + r × t)/t

This means that the token bucket limits the average packet rate to the network.

Example 30.2
Let’s assume that the bucket capacity is 10,000 tokens and tokens are added at the rate of 1000
tokens per second. If the system is idle for 10 seconds (or more), the bucket collects 10,000 tokens
and becomes full. Any additional tokens will be discarded. The maximum average rate is shown
below.
Maximum average rate = (1000t + 10,000)/t

The token bucket can easily be implemented with a counter. The counter is initialized to
zero. Each time a token is added, the counter is incremented by 1. Each time a unit of data
is sent, the counter is decremented by 1. When the counter is zero, the host cannot send
data.

The token bucket allows bursty traffic at a regulated maximum rate.
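The counter-based implementation described above can be sketched as follows. The capacity and rate are scaled-down illustrative assumptions (not the values of Example 30.2); an idle host accumulates tokens and can later send a burst up to its saved credit.

```python
# Counter-based token bucket sketch: tokens accumulate at rate r up to
# capacity c, and each unit of data sent removes one token. The numbers
# below are illustrative assumptions.
def token_bucket(arrivals, c, r):
    tokens, sent = 0, []
    for data in arrivals:
        tokens = min(c, tokens + r)   # add r tokens per tick, capped at c
        out = min(data, tokens)       # send only what the tokens allow
        tokens -= out
        sent.append(out)
    return sent

# Host idle for 3 ticks (accumulates credit), then offers two bursts of 8.
print(token_bucket([0, 0, 0, 8, 8], c=10, r=2))   # [0, 0, 0, 8, 2]
```

The first burst of 8 goes out at once thanks to the saved credit; the second is throttled back to the token rate, showing how the token bucket credits an idle host while still bounding the average rate.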

Combining Token Bucket and Leaky Bucket


The two techniques can be combined to credit an idle host and at the same time regulate
the traffic. The leaky bucket is applied after the token bucket; the rate of the leaky bucket
needs to be higher than the rate of tokens dropped in the bucket.
Unit 4 Transport Layer

USER DATAGRAM PROTOCOL

The User Datagram Protocol (UDP) is a connectionless, unreliable transport protocol. If a
process wants to send a small message and does not care much about reliability, it can use UDP.

User Datagram
UDP packets, called user datagrams, have a fixed-size header of 8 bytes made of four fields, each
of 2 bytes (16 bits). The first two fields define the source and destination port numbers. The third
field defines the total length of the user datagram, header plus data. The 16 bits can define a total
length of 0 to 65,535 bytes. However, the total length needs to be less because a UDP user
datagram is stored in an IP datagram whose own total length is limited to 65,535 bytes. The last field carries
the optional checksum.
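The fixed 8-byte header described above can be built with Python's `struct` module: four 16-bit fields in network byte order. The port numbers and payload are illustrative assumptions; the checksum is left as 0, which UDP over IPv4 permits (the checksum is optional there).

```python
# Packing and unpacking the 8-byte UDP header: source port, destination
# port, total length (header + data), and optional checksum, each 16 bits.
# Port numbers and payload are illustrative assumptions.
import struct

payload = b"hello"
src_port, dst_port = 5000, 53
length = 8 + len(payload)              # total length = header + data
checksum = 0                           # optional over IPv4: 0 means unused
header = struct.pack("!HHHH", src_port, dst_port, length, checksum)

print(len(header))                     # 8
print(struct.unpack("!HHHH", header))  # (5000, 53, 13, 0)
```

The `!` in the format string selects network (big-endian) byte order, which is how all header fields travel on the wire.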
UDP Services:

Process-to-Process Communication
UDP provides process-to-process communication using socket addresses, a combination of IP
addresses and port numbers.

Connectionless Services

UDP provides a connectionless service. This means that each user datagram sent by UDP is an
independent datagram. There is no relationship between the different user datagrams even if they
are coming from the same source process and going to the same destination process. The user
datagrams are not numbered. Also, unlike TCP, there is no connection establishment and no
connection termination. This means that each user datagram can travel on a different path.

Flow Control
UDP is a very simple protocol. There is no flow control, and hence no window mechanism. The
receiver may overflow with incoming messages. The lack of flow control means that the process
using UDP should provide for this service, if needed.

Error Control
There is no error control mechanism in UDP except for the checksum. This means that the sender
does not know if a message has been lost or duplicated. When the receiver detects an error
through the checksum, the user datagram is silently discarded. The lack of error control means
that the process using UDP should provide for this service, if needed.

Congestion Control
Since UDP is a connectionless protocol, it does not provide congestion control. UDP assumes that
the packets sent are small and sporadic and cannot create congestion in the network.

Encapsulation and Decapsulation


To send a message from one process to another, the UDP protocol encapsulates and decapsulates
messages.

TRANSMISSION CONTROL PROTOCOL


Transmission Control Protocol (TCP) is a connection-oriented, reliable protocol. TCP explicitly
defines connection establishment, data transfer, and connection teardown phases to provide a
connection-oriented service.

TCP Services:
Process-to-Process Communication
As with UDP, TCP provides process-to-process communication using port numbers.
Stream Delivery Service
TCP, unlike UDP, is a stream-oriented protocol: it allows the sending process
to deliver data as a stream of bytes and the receiving process to obtain data as a stream of
bytes. TCP creates an environment in which the two processes seem to be connected by an
imaginary “tube” that carries their bytes across the Internet. The sending process produces
(writes to) the stream and the receiving process consumes (reads from) it.
Sending and Receiving Buffers
Because the sending and the receiving processes may not necessarily write or read data at the
same rate, TCP needs buffers for storage. There are two buffers, the sending buffer and the
receiving buffer, one for each direction. These buffers are also necessary for the flow- and
error-control mechanisms used by TCP. One way to implement a buffer is to use a circular
array of 1-byte locations, as shown in the following figure.

At the sender, the buffer has three types of chambers. The white section contains empty chambers
that can be filled by the sending process (producer). The colored area holds bytes that have been
sent but not yet acknowledged. The TCP sender keeps these bytes in the buffer until it receives an
acknowledgment. The shaded area contains bytes to be sent by the sending TCP.

The operation of the buffer at the receiver is simpler. The circular buffer is divided into two areas
(shown as white and colored). The white area contains empty chambers to be filled by bytes
received from the network. The colored sections contain received bytes that can be read by the
receiving process. When a byte is read by the receiving process, the chamber is recycled and added
to the pool of empty chambers.

Segments:

The network layer, as a service provider for TCP, needs to send data in packets, not as a stream of
bytes. At the transport layer, TCP groups a number of bytes together into a packet called a segment.
TCP adds a header to each segment (for control purposes) and delivers the segment to the
network layer for transmission. The segments are encapsulated in an IP datagram and
transmitted. This entire operation is transparent to the receiving process.
Full-Duplex Communication
TCP offers full-duplex service, in which data can flow in both directions at the same time. Each TCP
endpoint then has its own sending and receiving buffer, and segments move in both directions.

Multiplexing and Demultiplexing


Like UDP, TCP performs multiplexing at the sender and demultiplexing at the receiver.

Connection-Oriented Service
TCP, unlike UDP, is a connection-oriented protocol. When a process at site A wants to send data
to and receive data from another process at site B, the following three phases occur:

1. The two TCPs establish a logical connection between them.


2. Data are exchanged in both directions.
3. The connection is terminated.

Reliable Service
TCP is a reliable transport protocol. It uses an acknowledgment mechanism to check the safe and
sound arrival of data.
TCP Segment
A packet in TCP is called a segment.

Source port address. This is a 16-bit field that defines the port number of the application program
in the host that is sending the segment.

Destination port address. This is a 16-bit field that defines the port number of the application
program in the host that is receiving the segment.

Sequence number. This 32-bit field defines the number assigned to the first byte of data contained
in this segment. TCP is a stream transport protocol. To ensure connectivity, each byte to be
transmitted is numbered. The sequence number tells the destination which byte in this sequence
is the first byte in the segment. During connection establishment (discussed later) each party uses
a random number generator to create an initial sequence number (ISN), which is usually
different in each direction.
Acknowledgment number. This 32-bit field defines the byte number that the receiver of the
segment is expecting to receive from the other party. If the receiver of the segment has successfully
received byte number x from the other party, it returns x+1 as the acknowledgment number.
Acknowledgment and data can be piggybacked together.

Header length. This 4-bit field indicates the number of 4-byte words in the TCP header. The length
of the header can be between 20 and 60 bytes. Therefore, the value of this field is always between
5 (5 × 4 = 20) and 15 (15 × 4 = 60).

Control. This field defines 6 different control bits or flags, as shown in Figure 24.8. One or more of
these bits can be set at a time.

Window size. This field defines the window size of the sending TCP in bytes. Note that the length
of this field is 16 bits, which means that the maximum size of the window is 65,535 bytes. This
value is normally referred to as the receiving window (rwnd) and is determined by the receiver.
The sender must obey the dictation of the receiver in this case.

Checksum. This 16-bit field contains the checksum. The calculation of the checksum for TCP
follows the same procedure as the one described for UDP.

Urgent pointer. This 16-bit field, which is valid only if the urgent flag is set, is used when the
segment contains urgent data.

Options. There can be up to 40 bytes of optional information in the TCP header.
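The fixed 20-byte part of the header, as described field by field above, can be packed and decoded with `struct`. The sample values (ports, ISN, a SYN segment with header length 5) are illustrative assumptions; the data offset occupies the top 4 bits of the 16-bit word that also holds the flags, with SYN as bit 0x02.

```python
# Packing and decoding the fixed 20-byte TCP header, following the field
# layout described above. Sample values are illustrative assumptions.
import struct

# src=49152, dst=80, seq=1000, ack=0, HLEN=5 words (<<12), flags=SYN (0x02),
# window=65535, checksum=0, urgent pointer=0
raw = struct.pack("!HHIIHHHH", 49152, 80, 1000, 0,
                  (5 << 12) | 0x02, 65535, 0, 0)

src, dst, seq, ack, off_flags, window, checksum, urgent = \
    struct.unpack("!HHIIHHHH", raw)
hlen = (off_flags >> 12) * 4     # header length in bytes: 5*4=20 .. 15*4=60
syn = bool(off_flags & 0x02)     # is the SYN flag set?
print(src, dst, seq, hlen, syn)  # 49152 80 1000 20 True
```

A header length value of 5 means no options are present; any value above 5 indicates up to 40 bytes of options following the fixed part.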


TCP Connection

TCP is connection-oriented: a connection-oriented transport protocol establishes a logical path
between the source and destination.
In TCP, connection-oriented transmission requires three phases: connection establishment, data
transfer, and connection termination.

Connection Establishment

TCP transmits data in full-duplex mode. When two TCPs in two machines are connected, they are
able to send segments to each other simultaneously.
Three-Way Handshaking: The connection establishment in TCP is called three-way
handshaking. The process starts with the server. The server program tells its TCP that it is ready
to accept a connection. This request is called a passive open. Although the server TCP is ready to
accept a connection from any machine in the world, it cannot make the connection itself.

The client program issues a request for an active open. A client that wishes to connect to an open
server tells its TCP to connect to a particular server. TCP can now start the three-way
handshaking process.

1. The client sends the first segment, a SYN segment, in which only the SYN flag is set. This
segment is for synchronization of sequence numbers. The client in our example chooses a
random number as the first sequence number and sends this number to the server. This
sequence number is called the initial sequence number (ISN).
A SYN segment cannot carry data, but it consumes one sequence number.

2. The server sends the second segment, a SYN + ACK segment, with the SYN and ACK
flag bits set. This segment has a dual purpose.
A SYN + ACK segment cannot carry data, but it does consume one sequence number.

3. The client sends the third segment. This is just an ACK segment. It acknowledges the
receipt of the second segment with the ACK flag and acknowledgment number field.
An ACK segment, if carrying no data, consumes no sequence number.
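The sequence-number accounting in the three steps above can be traced in a short sketch. The ISNs used here are illustrative assumptions (each side picks its own random ISN in practice); note that SYN and SYN + ACK each consume one sequence number, while a bare ACK consumes none.

```python
# Sequence-number accounting for the three-way handshake described above.
# The ISNs are illustrative assumptions.
client_isn, server_isn = 8000, 15000

# 1. Client -> Server: SYN, seq = client ISN (consumes one number).
syn = {'seq': client_isn, 'flags': {'SYN'}}

# 2. Server -> Client: SYN + ACK, acknowledging the client's SYN.
syn_ack = {'seq': server_isn, 'ack': syn['seq'] + 1,
           'flags': {'SYN', 'ACK'}}

# 3. Client -> Server: bare ACK (consumes no sequence number).
ack = {'seq': client_isn + 1, 'ack': syn_ack['seq'] + 1,
       'flags': {'ACK'}}

print(syn_ack['ack'], ack['seq'], ack['ack'])   # 8001 8001 15001
```

After the handshake, the client's first data byte carries sequence number 8001 and the server's carries 15001, exactly because the SYN on each side consumed one number.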

SYN Flooding Attack:


The connection establishment procedure in TCP is susceptible to a serious security problem
called SYN flooding attack. This happens when one or more malicious attackers send a large
number of SYN segments to a server pretending that each of them is coming from a different
client by faking the source IP addresses in the datagrams. The server, assuming that the clients
are issuing an active open, allocates the necessary resources, such as creating transfer control
block (TCB) tables and setting timers.

The server then sends the SYN + ACK segments to the fake clients, which are lost. When the
server waits for the third leg of the handshaking process, however, resources are allocated
without being used. If, during this short period of time, the number of SYN segments is large, the
server eventually runs out of resources and may be unable to accept connection requests from
valid clients. This SYN flooding attack belongs to a group of security attacks known as a denial of
service attack, in which an attacker monopolizes a system with so many service requests that
the system overloads and denies service to valid requests.
Data Transfer:
Connection Termination:
Using Three-Way Handshaking:

Using Half Close:


Silly Window Syndrome:
A serious problem can arise in the sliding window operation when either the sending application
program creates data slowly or the receiving application program consumes data slowly, or both.
Any of these situations results in the sending of data in very small segments, which reduces the
efficiency of the operation. For example, if TCP sends segments containing only 1 byte of data, it
means that a 41-byte datagram (20 bytes of TCP header and 20 bytes of IP header) transfers only
1 byte of user data. Here the overhead is 41/1, which indicates that we are using the capacity of
the network very inefficiently. The inefficiency is even worse after accounting for the data-link
layer and physical-layer overhead. This problem is called the silly window syndrome.

Syndrome Created by the Sender


The sending TCP may create a silly window syndrome if it is serving an application program that
creates data slowly, for example, 1 byte at a time. The application program writes 1 byte at a time
into the buffer of the sending TCP. If the sending TCP does not have any specific instructions, it
may create segments containing 1 byte of data. The result is a lot of 41-byte segments that are
travelling through an internet.

Nagle found an elegant solution. Nagle’s algorithm is simple:

1. The sending TCP sends the first piece of data it receives from the sending application
program even if it is only 1 byte.
2. After sending the first segment, the sending TCP accumulates data in the output buffer
and waits until either the receiving TCP sends an acknowledgment or until enough data
have accumulated to fill a maximum-size segment. At this time, the sending TCP can send
the segment.
3. Step 2 is repeated for the rest of the transmission. Segment 3 is sent immediately if an
acknowledgment is received for segment 2, or if enough data have accumulated to fill a
maximum-size segment.
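The three steps of Nagle's algorithm above can be sketched as a simple event loop. The event names, the tick-by-tick model, and the MSS value are illustrative assumptions; the point is that only the first piece of data goes out immediately, and everything after that waits for an ACK or a full segment.

```python
# Simplified sketch of Nagle's algorithm as listed above. Events are
# ('data', bytes) from the application or ('ack', None) from the peer;
# this simplified model assumes one unacknowledged segment at a time.
def nagle(events, mss):
    buffer, sent, unacked = b"", [], False
    for kind, value in events:
        if kind == 'data':
            buffer += value
            # Step 1: send the first piece immediately; step 2: otherwise
            # wait until a full maximum-size segment has accumulated.
            if not unacked or len(buffer) >= mss:
                sent.append(buffer)
                buffer = b""
                unacked = True
        elif kind == 'ack':
            unacked = False
            if buffer:                 # an ACK releases accumulated data
                sent.append(buffer)
                buffer = b""
                unacked = True
    return sent

events = [('data', b'a'), ('data', b'b'), ('data', b'c'),
          ('ack', None), ('data', b'd')]
print(nagle(events, mss=536))          # [b'a', b'bc']
```

Instead of three 41-byte datagrams for `a`, `b`, and `c`, the sender emits `a` at once and coalesces `b` and `c` into one segment released by the ACK.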

Syndrome Created by the Receiver


The receiving TCP may create a silly window syndrome if it is serving an application program
that consumes data slowly, for example, 1 byte at a time. Suppose that the sending application
program creates data in blocks of 1 kilobyte, but the receiving application program consumes
data 1 byte at a time. Also suppose that the input buffer of the receiving TCP is 4 kilobytes. The
sender sends the first 4 kilobytes of data. The receiver stores it in its buffer. Now its buffer is full.
It advertises a window size of zero, which means the sender should stop sending data. The
receiving application reads the first byte of data from the input buffer of the receiving TCP. Now
there is 1 byte of space in the incoming buffer. The receiving TCP announces a window size of 1
byte, which means that the sending TCP, which is eagerly waiting to send data, takes this
advertisement as good news and sends a segment carrying only 1 byte of data. The procedure
will continue. One byte of data is consumed and a segment carrying 1 byte of data is sent. Again
we have an efficiency problem and the silly window syndrome.
1. Clark’s solution is to send an acknowledgment as soon as the data arrive, but to
announce a window size of zero until either there is enough space to accommodate a
segment of maximum size or until at least half of the receive buffer is empty.

2. The second solution is to delay sending the acknowledgment. This means that when a
segment arrives, it is not acknowledged immediately. The receiver waits until there is a
decent amount of space in its incoming buffer before acknowledging the arrived
segments. The delayed acknowledgment prevents the sending TCP from sliding its
window. After the sending TCP has sent the data in the window, it stops. This kills the
syndrome.

TCP Congestion Control:


The sender's window size is determined not only by the receiver but also by congestion in the
network. The sender has two pieces of information: the receiver-advertised window size and the
congestion window size. The actual size of the window is the minimum of these two.

Actual window size = minimum (rwnd, cwnd)

Congestion Policy

TCP's general policy for handling congestion is based on three phases: slow start, congestion
avoidance, and congestion detection. In the slow-start phase, the sender starts with a very slow
rate of transmission, but increases the rate rapidly to reach a threshold. When the threshold is
reached, the data rate is reduced to avoid congestion. Finally if congestion is detected, the sender
goes back to the slow-start or congestion avoidance phase based on how the congestion is
detected.

Slow Start: Exponential Increase:


The slow-start algorithm is based on the idea that the size of the congestion window (cwnd)
starts with one maximum segment size (MSS), but it increases one MSS each time an
acknowledgment arrives.
Congestion Avoidance: Additive Increase
To avoid congestion before it happens, we must slow down this exponential growth. TCP defines
another algorithm called congestion avoidance, which increases the cwnd additively instead of
exponentially. When the size of the congestion window reaches the slow-start threshold
(cwnd = ssthresh), the slow-start phase stops and the additive phase begins. In this algorithm,
each time the whole “window” of segments is acknowledged, the size of the congestion window
is increased by one. A window here is the number of segments transmitted during one RTT (round-trip time).
Congestion Detection: Multiplicative Decrease:
If congestion occurs, the congestion window size must be decreased. The only way the sender
can guess that congestion has occurred is by the need to retransmit a segment. However,
retransmission can occur in one of two cases: when a timer times out or when three duplicate
ACKs are received. In both cases, the size of the threshold is dropped to one-half, a multiplicative
decrease.
TCP implementations have two reactions:

1. If a time-out occurs, there is a stronger possibility of congestion; a segment has probably
been dropped in the network, and there is no news about the sent segments.

In this case TCP reacts strongly:


a. It sets the value of the threshold to one-half of the current window size.
b. It sets cwnd to the size of one segment.
c. It starts the slow-start phase again.

2. If three ACKs are received, there is a weaker possibility of congestion; a segment may
have been dropped, but some segments after that may have arrived safely since three
ACKs are received. This is called fast retransmission and fast recovery.

In this case, TCP has a weaker reaction:


a. It sets the value of the threshold to one-half of the current window size.
b. It sets cwnd to the value of the threshold.
c. It starts the congestion avoidance phase.
An implementation reacts to congestion detection in one of the following ways:
If detection is by time-out, a new slow-start phase starts.
If detection is by three duplicate ACKs, a new congestion avoidance phase starts.

Example: We assume that the maximum window size is 32 segments. The threshold is set to 16
segments (one-half of the maximum window size). In the slow-start phase the window size starts
from 1 and grows exponentially until it reaches the threshold. After it reaches the threshold, the
congestion avoidance (additive increase) procedure allows the window size to increase linearly
until a timeout occurs or the maximum window size is reached. In Figure 24.11, the time-out
occurs when the window size is 20. At this moment, the multiplicative decrease procedure takes
over and reduces the threshold to one-half of the previous window size. The previous window
size was 20 when the time-out happened so the new threshold is now 10.

TCP moves to slow start again and starts with a window size of 1, and TCP moves to additive
increase when the new threshold is reached. When the window size is 12, a three duplicate ACKs
event happens. The multiplicative decrease procedure takes over again. The threshold is set to 6
and TCP goes to the additive increase phase this time. It remains in this phase until another time-
out or another three duplicate ACKs happen.
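The walk-through above can be reproduced by a small simulation. This is a sketch under the example's assumptions: the initial threshold is 16 segments, a time-out occurs when cwnd reaches 20, three duplicate ACKs arrive when cwnd reaches 12, and all sizes are in segments; the event names are illustrative.

```python
# Sketch of TCP's congestion policy: slow start (exponential increase),
# congestion avoidance (additive increase), and multiplicative decrease on
# congestion detection. Event names and timing are illustrative assumptions.
def tcp_congestion(events):
    cwnd, ssthresh, trace = 1, 16, []
    for event in events:
        trace.append(cwnd)
        if event == 'timeout':
            ssthresh, cwnd = cwnd // 2, 1       # strong reaction: slow start
        elif event == '3dupacks':
            ssthresh = cwnd // 2                # weak reaction: avoidance
            cwnd = ssthresh
        elif cwnd < ssthresh:
            cwnd = min(cwnd * 2, ssthresh)      # slow start: one MSS per ACK
        else:
            cwnd += 1                           # avoidance: one MSS per RTT

    return trace

events = ['rtt'] * 8 + ['timeout'] + ['rtt'] * 6 + ['3dupacks'] + ['rtt']
print(tcp_congestion(events))
# [1, 2, 4, 8, 16, 17, 18, 19, 20, 1, 2, 4, 8, 10, 11, 12, 6]
```

The trace matches the example: exponential growth to the threshold of 16, linear growth to 20, collapse to 1 after the time-out with the new threshold at 10, and a drop to 6 (half of 12) after the three duplicate ACKs.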

Sockets:
Communication between a client process and a server process is communication between two
sockets, created at the two ends. The client thinks that the socket is the entity that receives the
request and gives the response; the server thinks that the socket is the one that has a request and
needs the response. If we create two sockets, one at each end, and define the source and
destination addresses correctly, we can use the available instructions to send and receive data.
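A minimal client/server pair over sockets illustrates this: the server performs a passive open (bind, listen, accept) and the client an active open (connect), after which each side sends and receives through its own socket. The loopback address, OS-assigned port, and message text are illustrative assumptions.

```python
# Minimal echo client/server over TCP sockets: the server does a passive
# open, the client an active open, and data flows through the two sockets.
# Address, port choice, and message are illustrative assumptions.
import socket
import threading

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(('127.0.0.1', 0))          # port 0: let the OS pick a free port
server.listen(1)                       # passive open: ready to accept
port = server.getsockname()[1]

def serve():
    conn, _ = server.accept()          # handshake completes here
    data = conn.recv(1024)
    conn.sendall(b'echo: ' + data)     # response goes back on the same socket
    conn.close()

t = threading.Thread(target=serve)
t.start()

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(('127.0.0.1', port))    # active open: three-way handshake
client.sendall(b'hello')
print(client.recv(1024))               # b'echo: hello'
client.close()
t.join()
server.close()
```

Each endpoint only ever talks to its own socket; the pairing of the two socket addresses is what identifies the connection.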

Assignment:
Question: Consider the effect of using slow start on a line with a 10-ms RTT and no
congestion. The receive window is 30 KB and the maximum segment size is 2 KB. How long does it
take before the first full window can be sent?
Question: Consider the effect of using congestion avoidance on a line with a 10-ms RTT and
no congestion. The receive window is 30 KB and the maximum segment size is 2 KB. How long
does it take before the first full window can be sent?
Socket Addresses: A socket address is the combination of an IP address and a port number. A pair
of socket addresses, one for the client and one for the server, uniquely defines a connection.
