
DEPARTMENT OF COMPUTER SCIENCE AND

ENGINEERING

DIGITAL NOTES
ON

COMPUTER NETWORKS
[R22A0512]

B.TECH III YEAR – II SEM


(R22) REGULATION
(2024-25)

Prepared by
P.L. Shailaja, Asst. Professor

MALLA REDDY COLLEGE OF ENGINEERING & TECHNOLOGY


(Autonomous Institution – UGC, Govt. of India)
Recognized under 2(f) and 12 (B) of UGC ACT 1956
(Affiliated to JNTUH, Hyderabad, Approved by AICTE - Accredited by NBA & NAAC – ‘A’ Grade - ISO 9001:2015 Certified)
Maisammaguda, Dhulapally (Post Via. Hakimpet), Secunderabad – 500100, Telangana State, India
DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING
Vision
To acknowledge quality education and instill high patterns of discipline, making the students technologically superior and ethically strong, which involves the improvement in the quality of life of the human race.

Mission
To achieve and impart holistic technical education using the best of infrastructure, outstanding technical and teaching expertise to establish the students into competent and confident engineers.
Evolving the center of excellence through creative and innovative teaching-learning practices for promoting academic achievement to produce internationally accepted, competitive and world-class professionals.
PROGRAMME EDUCATIONAL OBJECTIVES (PEOs)

PEO1 – ANALYTICAL SKILLS
To facilitate the graduates with the ability to visualize, gather information, articulate, analyze, solve complex problems, and make decisions. These are essential to address the challenges of complex and computation-intensive problems, increasing their productivity.
PEO2 – TECHNICAL SKILLS
To facilitate the graduates with the technical skills that prepare them for immediate employment and pursue certification, providing a deeper understanding of the technology in advanced areas of computer science and related fields, thus encouraging pursuing higher education and research based on their interest.
PEO3 – SOFT SKILLS
To facilitate the graduates with the soft skills that include fulfilling the mission, setting goals, showing self-confidence by communicating effectively, having a positive attitude, getting involved in team-work, being a leader, and managing their career and their life.
PEO4 – PROFESSIONAL ETHICS
To facilitate the graduates with the knowledge of professional and ethical responsibilities by paying attention to grooming, being conservative with style, following dress codes, safety codes, and adapting them to technological advancements.

PROGRAM SPECIFIC OUTCOMES (PSOs)


After the completion of the course, B.Tech Computer Science and Engineering, the graduates
will have the following Program Specific Outcomes:

1. Fundamentals and critical knowledge of the Computer System: Able to understand the working principles of the computer system and its components; apply the knowledge to build, assess, and analyze the software and hardware aspects of it.

2. The comprehensive and applicative knowledge of Software Development: Comprehensive skills of programming languages, software process models, and methodologies; able to plan, develop, test, analyze, and manage software- and hardware-intensive systems on heterogeneous platforms, individually or working in teams.

3. Applications of Computing Domain & Research: Able to use the professional, managerial, and interdisciplinary skill set, and domain-specific tools in development processes, identify the research gaps, and provide innovative solutions to them.
PROGRAM OUTCOMES (POs)
Engineering Graduates should possess the following:

1. Engineering knowledge: Apply the knowledge of mathematics, science, engineering fundamentals, and an engineering specialization to the solution of complex engineering problems.

2. Problem analysis: Identify, formulate, review research literature, and analyze complex engineering problems reaching substantiated conclusions using first principles of mathematics, natural sciences, and engineering sciences.

3. Design/development of solutions: Design solutions for complex engineering problems and design system components or processes that meet the specified needs with appropriate consideration for the public health and safety, and the cultural, societal, and environmental considerations.

4. Conduct investigations of complex problems: Use research-based knowledge and research


methods including design of experiments, analysis and interpretation of data, and synthesis of
the information to provide valid conclusions.

5. Modern tool usage: Create, select, and apply appropriate techniques, resources, and modern
engineering and IT tools including prediction and modeling to complex engineering activities
with an understanding of the limitations.

6. The engineer and society: Apply reasoning informed by the contextual knowledge to assess
societal, health, safety, legal and cultural issues and the consequent responsibilities relevant to
the professional engineering practice.

7. Environment and sustainability: Understand the impact of the professional engineering


solutions in societal and environmental contexts, and demonstrate the knowledge of, and need
for sustainable development.

8. Ethics: Apply ethical principles and commit to professional ethics and responsibilities and
norms of the engineering practice.

9. Individual and team work: Function effectively as an individual, and as member or leader in
diverse teams, and in multidisciplinary settings.

10. Communication: Communicate effectively on complex engineering activities with the


engineering community and with society at large, such as, being able to comprehend and write
effective reports and design documentation, make effective presentations, and give and
receive clear instructions.

11. Project management and finance: Demonstrate knowledge and understanding of the
engineering and management principles and apply these to one’s own work, as a member and
leader in a team, to manage projects and in multidisciplinary environments.

12. Life-long learning: Recognize the need for, and have the preparation and ability to engage
in independent and life-long learning in the broadest context of technological change.
MALLA REDDY COLLEGE OF ENGINEERING & TECHNOLOGY

III Year B.Tech. CSE – I Sem    L/T/P/C: 3/0/0/3

(R22A0512)

Computer Networks

Course Objectives:
1. To understand the fundamentals of computer networks, TCP/IP & OSI model.
2. To analyze Data link layer Issues, Protocols.
3. To explain Network layer Protocols, IP addressing.
4. To identify end-to-end communication and the various protocols of the Transport layer.
5. To describe various user services in a network.

UNIT - I:
Introduction: Network, Uses of Networks, Types of Networks, Reference Models: TCP/IP
Model, The OSI Model, Comparison of the OSI and TCP/IP reference model. Physical Layer:
Guided transmission media, Wireless transmission media

UNIT - II:
Data Link Layer - Design issues, Error Detection & Correction, Elementary Data Link Layer
Protocols, Sliding window protocols, Multiple Access Protocols - ALOHA, CSMA,CSMA/CD,
CSMA/CA, Collision free protocols, Ethernet- Physical Layer, Ethernet Mac Sub layer.
UNIT - III:
Network Layer: Network Layer Design issues, store and forward packet switching
connection less and connection oriented networks-routing algorithms-optimality principle,
shortest path, flooding, Distance Vector Routing, Count to Infinity Problem, Link State
Routing, Path Vector Routing, Hierarchical Routing; Congestion control algorithms, IP
addresses, CIDR, Subnetting, SuperNetting, IPv4, Packet Fragmentation, IPv6 Protocol,
Transition from IPv4 to IPv6, ARP, RARP.

UNIT - IV:
Transport Layer: Services provided to the upper layers elements of transport protocol
addressing connection establishment, Connection release, Error Control & Flow Control,
Crash Recovery.The Internet Transport Protocols: UDP, Introduction to TCP, The TCP
Service Model, The TCP Segment Header, The Connection Establishment, The TCP
Connection Release, The TCP Sliding Window, The TCP Congestion Control Algorithm.

UNIT - V:
Application Layer- Introduction, providing services, Applications layer paradigms: Client
server model, HTTP, E-mail, WWW, TELNET, DNS.

TEXT BOOKS:

1. Computer Networks - Andrew S Tanenbaum, 4th Edition, Pearson Education.


2. Data Communications and Networking - Behrouz A. Forouzan, Fifth Edition TMH,
2013.
REFERENCES BOOKS:
1. An Engineering Approach to Computer Networks - S. Keshav, 2nd Edition, Pearson
Education.
2. Understanding communications and Networks, 3rd Edition, W. A. Shay, Cengage
Learning.
3. Computer Networking: A Top-Down Approach Featuring the Internet, James F.
Kurose, K. W. Ross, 3rd Edition, Pearson Education.

Course Outcomes:
At the end of this course, students will be able to:
1. Understand basics of Computer Networks and Reference Models.
2. Understand the Data Link Layer concepts.
3. Know allotment of IP addresses and best routing path calculations in a network.
4. Analyze TCP and UDP working and know how to handle congestion.
5. Get an idea of the various services in the Application Layer.
UNIT NO   TOPICS
I         INTRODUCTION TO NETWORKS; TYPES OF NETWORKS; INTRODUCTION TO PHYSICAL LAYER; COMPARISON OF OSI AND TCP/IP PROTOCOLS
II        DATA LINK LAYER DESIGN ISSUES; SLIDING WINDOW PROTOCOLS; MULTIPLE ACCESS PROTOCOLS
III       NETWORK LAYER DESIGN ISSUES; CONNECTION LESS AND CONNECTION ORIENTED PROTOCOLS; ROUTING PROTOCOLS, IP ADDRESS
IV        TRANSPORT LAYER SERVICES PROVIDED; THE INTERNET TRANSPORT PROTOCOLS
V         APPLICATION LAYER SERVICES; APPLICATIONS LAYER PARADIGMS
UNIT-I

Introduction: Network, Uses of Networks, Types of Networks, Reference Models: TCP/IP Model, The OSI Model, Comparison of the OSI and TCP/IP reference model.
Physical Layer: Guided transmission media, Wireless transmission media

INTRODUCTION

A network is a set of devices (often referred to as nodes) connected by communication links. A


node can be a computer, printer, or any other device capable of sending and/or receiving data
generated by other nodes on the network.

The term “computer network” means a collection of autonomous computers interconnected by a single technology. Two computers are said to be interconnected if they are able to exchange information.

The connection need not be via a copper wire; fiber optics, microwaves, infrared, and
communication satellites can also be used.

Networks come in many sizes, shapes and forms, as we will see later. They are usually
connected together to make larger networks, with the Internet being the most well-known
example of a network of networks.

There is considerable confusion in the literature between a computer network and a


distributed system. The key distinction is that in a distributed system, a collection of
independent computers appears to its users as a single coherent system. Usually, it has a
single model or paradigm that it presents to the users. Often a layer of software on top of
the operating system, called middleware, is responsible for implementing this model. A
well-known example of a distributed system is the World Wide Web. It runs on top of the
Internet and presents a model in which everything looks like a document (Web page).

USES OF COMPUTER NETWORKS

1. Business Applications

Networks are used to distribute information throughout the company (resource sharing). Sharing physical resources, such as printers and tape backup systems, matters less than sharing information, for which the client-server model is widely used and forms the basis of much network usage. A network also serves as a communication medium among employees: email (electronic mail), which employees generally use for a great deal of daily communication, and telephone calls between employees, which may be carried by the computer network instead of by the phone company. This technology is called IP telephony or Voice over IP (VoIP) when Internet technology is used. Desktop sharing lets remote workers see and interact with a graphical computer screen.

Doing business electronically, especially with customers and suppliers, is another major use. This new model is called e-commerce (electronic commerce) and it has grown rapidly in recent years.

2. Home Applications

Peer-to-peer communication, person-to-person communication, electronic commerce, and entertainment (game playing).

3. Mobile Users

Text messaging or texting, smartphones, GPS (Global Positioning System), m-commerce, NFC (Near Field Communication).

4. Social Issues

With the good comes the bad, as this new-found freedom brings with it many unsolved
social, political, and ethical issues.

Social networks, message boards, content sharing sites, and a host of other applications
allow people to share their views with like-minded individuals. As long as the subjects are
restricted to technical topics or hobbies like gardening, not too many problems will arise.

The trouble comes with topics that people actually care about, like politics, religion, or sex.
Views that are publicly posted may be deeply offensive to some people. Worse yet, they
may not be politically correct. Furthermore, opinions need not be limited to text; high-
resolution color photographs and video clips are easily shared over computer networks.
Some people take a live-and-let-live view, but others feel that posting certain material (e.g.,
verbal attacks on particular countries or religions, pornography, etc.) is simply
unacceptable and that such content must be censored. Different countries have different
and conflicting laws in this area. Thus, the debate rages.

Computer networks make it very easy to communicate. They also make it easy for the
people who run the network to snoop on the traffic. This sets up conflicts over issues such
as employee rights versus employer rights. Many people read and write email at work.
Many employers have claimed the right to read and possibly censor employee messages,
including messages sent from a home computer outside working hours. Not all employees
agree with this, especially the latter part.


Another conflict is centered around government versus citizen’s rights.

A new twist with mobile devices is location privacy. As part of the process of providing
service to your mobile device the network operators learn where you are at different times
of day. This allows them to track your movements. They may know which nightclub you
frequent and which medical center you visit.

Phishing ATTACK: Phishing is a type of social engineering attack often used to steal user data, including login credentials and credit card numbers. It occurs when an attacker, masquerading as a trusted entity, dupes a victim into opening an email, instant message, or text message.

BOTNET ATTACK: Botnets can be used to perform distributed denial-of-service (DDoS) attacks, steal data, send spam, and allow the attacker to access the device and its connection.

The effectiveness of a data communications system depends on four fundamental


characteristics: delivery, accuracy, timeliness, and jitter.

1. Delivery. The system must deliver data to the correct destination. Data must be received by the intended device or user and only by that device or user.

2. Accuracy. The system must deliver the data accurately. Data that have been altered in transmission and left uncorrected are unusable.

3. Timeliness. The system must deliver data in a timely manner. Data delivered late are
useless. In the case of video and audio, timely delivery means delivering data as they are
produced, in the same order that they are produced, and without significant delay. This
kind of delivery is called real- time transmission.

4. Jitter. Jitter refers to the variation in the packet arrival time. It is the uneven delay in
the delivery of audio or video packets. For example, let us assume that video packets are
sent every 30 ms. If some of the packets arrive with 30-ms delay and others with 40-ms
delay, an uneven quality in the video is the result.

A data communications system has five components

1. Message. The message is the information (data) to be communicated. Popular forms of information include text, numbers, pictures, audio, and video.

2. Sender. The sender is the device that sends the data message. It can be a computer, workstation, telephone handset, video camera, and so on.


3. Receiver. The receiver is the device that receives the message. It can be a computer, workstation, telephone handset, television, and so on.

4. Transmission medium. The transmission medium is the physical path by which a


message travels from sender to receiver. Some examples of transmission media include
twisted-pair wire, coaxial cable, fiber-optic cable, and radio waves.

5. Protocol. A protocol is a set of rules that govern data communications. It represents


an agreement between the communicating devices. Without a protocol, two devices may be
connected but not communicating, just as a person speaking French cannot be understood
by a person who speaks only Japanese.


TYPES OF NETWORK

Data Flow

Communication between two devices can be simplex, half-duplex, or full-duplex, as shown in Figure.

Simplex: In simplex mode, the communication is unidirectional, as on a one-way street. Only one of the two devices on a link can transmit; the other can only receive (Figure a). Keyboards and traditional monitors are examples of simplex devices.

Half-Duplex

In half-duplex mode, each station can both transmit and receive, but not at the same time. When one device is sending, the other can only receive, and vice versa (Figure b). Walkie-talkies and CB (citizens band) radios are both half-duplex systems.

Full-Duplex

In full-duplex mode, both stations can transmit and receive simultaneously (Figure c). One common example of full-duplex communication is the telephone network. When two people are communicating by a telephone line, both can talk and listen at the same time. The full-duplex mode is used when communication in both directions is required all the time.

Network Criteria

A network must be able to meet a certain number of criteria. The most important of these
are performance, reliability, and security.

Performance


Performance can be measured in many ways, including transit time and response time.
Transit time is the amount of time required for a message to travel from one device to
another. Response time is the elapsed time between an inquiry and a response. The
performance of a network depends on a number of factors, including the number of users,
the type of transmission medium, the capabilities of the connected hardware, and the
efficiency of the software.

Performance is often evaluated by two networking metrics: throughput and delay. We


often need more throughput and less delay. However, these two criteria are often
contradictory. If we try to send more data to the network, we may increase throughput but
we increase the delay because of traffic congestion in the network.

Reliability: In addition to accuracy of delivery, network reliability is measured by the frequency of failure, the time it takes a link to recover from a failure, and the network's robustness in a catastrophe.

Security: Network security issues include protecting data from


unauthorized access, protecting data from damage and development, and implementing
policies and procedures for recovery from breaches and data losses

Physical Structures

Before discussing networks, we need to define some network attributes.

Type of Connection

A network is two or more devices connected through links. A link


is a communications pathway that transfers data from one device to another.

There are two possible types of connections: point-to-point and multipoint.

Point-to-Point: A point-to-point connection provides a dedicated link between two devices. The entire capacity of the link is reserved for transmission between those two devices. Most point-to-point connections use an actual length of wire or cable to connect the two ends, but other options, such as microwave or satellite links, are also possible.

When you change television channels by infrared remote control, you are establishing a point-to-point connection between the remote control and the television's control system.

Multipoint: A multipoint (also called multidrop) connection is one in which more than two specific devices share a single link.


In a multipoint environment, the capacity of the channel is shared, either spatially or temporally. If several devices can use the link simultaneously, it is a spatially shared connection. If users must take turns, it is a timeshared connection.

Physical Topology

The term physical topology refers to the way in which a network is laid out physically.

Two or more devices connect to a link; two or more links form a topology. The topology of
a network is the geometric representation of the relationship of all the links and linking
devices (usually called nodes) to one another.

There are four basic topologies possible: mesh, star, bus, and ring


MESH:

A mesh topology is one where every node is connected to every other node in the network.

A mesh topology can be a full mesh topology or a partially connected mesh topology.

In a full mesh topology, every computer in the network has a connection to each of the other computers in that network. The number of connections in this network can be calculated using the following formula (n is the number of computers in the network): n(n-1)/2
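The formula can be checked with a short calculation. The Python sketch below is only an illustration added to these notes (the function name is arbitrary); it computes the full-mesh link count for a given number of computers.

def full_mesh_links(n):
    # Number of dedicated point-to-point links in a full mesh of n computers: n(n-1)/2
    return n * (n - 1) // 2

print(full_mesh_links(5))   # 10 links are needed to fully mesh 5 computers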

In a partially connected mesh topology, at least two of the computers in the network have connections to multiple other computers in that network. It is an inexpensive way to implement redundancy in a network. In the event that one of the primary computers or connections in the network fails, the rest of the network continues to operate normally.

Advantages of a mesh topology

Can handle high amounts of traffic, because multiple devices can transmit data simultaneously.

A failure of one device does not cause a break in the network or transmission of data.

Adding additional devices does not disrupt data transmission between other devices.

Disadvantages of a mesh topology

The cost to implement is higher than other network topologies, making it a less desirable option.

Building and maintaining the topology is difficult and time consuming.

The chance of redundant connections is high, which adds to the high costs and potential for reduced efficiency.


STAR:

A star network, or star topology, is one of the most common network setups. In this configuration, every node connects to a central network device, like a hub, switch, or computer. The central network device acts as a server and the peripheral devices act as clients. Depending on the type of network card used in each computer of the star topology, a coaxial cable or an RJ-45 network cable is used to connect computers together.

Advantages of star topology

Centralized management of the network, through the use of the central computer, hub, or
switch.

Easy to add another computer to the network.

If one computer on the network fails, the rest of the network continues to function normally.

The star topology is used in local-area networks (LANs); high-speed LANs often use a star topology with a central hub.

Disadvantages of star topology

Can have a higher cost to implement, especially when using a switch or router as the central network device.

The central network device determines the performance and number of nodes the network can handle.

If the central computer, hub, or switch fails, the entire network goes down and all computers are disconnected from the network.


BUS:

Also known as a line topology, a bus topology is a network setup in which each computer and network device is connected to a single cable or backbone.

Advantages of bus topology

It works well when you have a small network.

It's the easiest network topology for connecting computers or peripherals in a linear fashion.

It requires less cable length than a star topology.

Disadvantages of bus topology.

It can be difficult to identify the problems if the whole network goes down.

It can be hard to troubleshoot individual device issues. Bus topology is not great for
large networks.

Terminators are required for both ends of the main cable. Additional devices slow the
network down.

If a main cable is damaged, the network fails or splits into two.


RING:

A ring topology is a network configuration in which device connections create a circular


data path. In a ring network, packets of data travel from one device to the next until they
reach their destination. Most ring topologies allow packets to travel only in one direction,
called a unidirectional ring network. Others permit data to move in either direction, called bidirectional. The major disadvantage of a ring topology is that if any individual connection
in the ring is broken, the entire network is affected.

Ring topologies may be used in either local area networks (LANs) or wide area networks
(WANs).

Advantages of ring topology

All data flows in one direction, reducing the chance of packet collisions.

A network server is not needed to control network connectivity between each workstation.

Data can transfer between workstations at high speeds. Additional workstations can be added without impacting performance of the network.

Disadvantages of ring topology

All data being transferred over the network must pass through each workstation on the
network, which can make it slower than a star topology.

The entire network will be impacted if one workstation shuts down.


The hardware needed to connect each workstation to the network is more expensive than Ethernet cards and hubs/switches.

Hybrid Topology A network can be hybrid. For example, we can have a main star topology
with each branch connecting several stations in a bus topology as shown in Figure

Types of Network based on size

The types of network are classified based upon the size, the area it covers and its physical
architecture. The three primary network categories are LAN, WAN and MAN. Each network
differs in their characteristics such as distance, transmission speed, cables and cost.

Basic types

LAN (Local Area Network)

Group of interconnected computers within a small area. (room, building, campus)

Two or more PCs can form a LAN to share files, folders, printers, applications and other devices.

Coaxial or CAT 5 cables are normally used for connections. Due to short distances, errors and noise are minimum.

Data transfer rate is 10 to 100 Mbps. Example: A computer lab in a school.

MAN (Metropolitan Area Network)
Designed to extend over a large area.

Connecting a number of LANs to form a larger network, so that resources can be shared.


Networks can be 5 to 50 km across. Owned by an organization or an individual.

Data transfer rate is low compared to LAN.

Example: Organization with different branches located in the city.

WAN (Wide Area Network)
Country-wide and worldwide networks. Contains multiple LANs and MANs. Distinguished in terms of geographical range. Uses satellites and microwave relays.

Data transfer rate depends upon the ISP and varies with location. The best example is the Internet.

Other types

WLAN (Wireless LAN)

A LAN that uses high-frequency radio waves for communication. Provides short-range connectivity with high-speed data transmission.

PAN (Personal Area Network)

Network organized by the individual user for its personal use.

SAN (Storage Area Network)

Connects servers to data storage devices via fiber-optic cables. E.g.: Used for daily backup of an organization or a mirror copy.

A transmission medium can be broadly defined as anything that can carry information from
a source to a destination.

Classes of transmission media


Guided Media: Guided media, which are those that provide a medium from one device to
another, include twisted-pair cable, coaxial cable, and fiber-optic cable.

Twisted-Pair Cable: A twisted pair consists of two conductors (normally copper), each with
its own plastic insulation, twisted together. One of the wires is used to carry signals to the
receiver, and the other is used only as a ground reference.

Unshielded Versus Shielded Twisted-Pair Cable

The most common twisted-pair cable used in communications is referred to as unshielded twisted-pair (UTP). Shielded twisted-pair (STP) cable has a metal foil or braided mesh covering that encases each pair of insulated conductors. Although the metal casing improves the quality of cable by preventing the penetration of noise or crosstalk, it is bulkier and more expensive.

The most common UTP connector is RJ45 (RJ stands for registered jack).

Applications

Twisted-pair cables are used in telephone lines to provide voice and data channels.

Local-area networks, such as 10Base-T and 100Base-T, also use twisted-pair cables.


Coaxial Cable

Coaxial cable (or coax) carries signals of higher frequency ranges than those in twisted pair
cable. coax has a central core conductor of solid or stranded wire (usually copper) enclosed
in an insulating sheath, which is, in turn, encased in an outer conductor of metal foil, braid,
or a combination of the two. The outer metallic wrapping serves both as a shield against
noise and as the second conductor, which completes the circuit. This outer conductor is
also

enclosed in an insulating sheath, and the whole cable is protected by a plastic cover.

The most common type of connector used today is the Bayonet Neill-Concelman (BNC) connector.

Applications

Coaxial cable was widely used in analog telephone networks and digital telephone networks.

Cable TV networks also use coaxial cables.

Another common application of coaxial cable is in traditional Ethernet LANs

Fiber-Optic Cable

A fiber-optic cable is made of glass or plastic and transmits signals in the form of light. Light travels in a straight line as long as it is moving through a single uniform substance.

If a ray of light traveling through one substance suddenly enters another substance (of a different density), the ray changes direction.

Bending of light ray

Optical fibers use reflection to guide light through a channel. A glass or plastic core is
surrounded by a cladding of less dense glass or plastic.

Propagation Modes

Multimode is so named because multiple beams from a light source move through the core
in different paths. How these beams move within the cable depends on the structure of the
core, as shown in Figure.


In multimode step-index fiber, the density of the core remains constant from the
center to the edges. A beam of light moves through this constant density in a straight line
until it reaches the interface of the core and the cladding. The term step index refers to the
suddenness of this change, which contributes to the distortion of the signal as it passes
through the fiber.

A second type of fiber, called multimode graded-index fiber, decreases this distortion of the
signal through the cable. The word index here refers to the index of refraction.

Single-Mode: Single-mode uses step-index fiber and a highly focused source of light that limits beams to a small range of angles, all close to the horizontal.

Fiber Construction

The subscriber channel (SC) connector, the straight-tip (ST) connector, and the MT-RJ (mechanical transfer registered jack) connector are common fiber-optic cable connectors.

Applications

Fiber-optic cable is often found in backbone networks because its wide bandwidth is cost-effective.

Some cable TV companies use a combination of optical fiber and coaxial cable, thus creating a hybrid network.


Local-area networks such as 100Base-FX (Fast Ethernet) and 1000Base-X also use fiber-optic cable.

Advantages and Disadvantages of Optical Fiber

Advantages Fiber-optic cable has several advantages over metallic cable (twisted pair or
coaxial).

1. Higher bandwidth.

2. Less signal attenuation. Fiber-optic transmission distance is significantly greater than that of other guided media. A signal can run for 50 km without requiring regeneration. We need repeaters every 5 km for coaxial or twisted-pair cable.

3. Immunity to electromagnetic interference. Electromagnetic noise cannot affect fiber-optic cables.

4. Resistance to corrosive materials. Glass is more resistant to corrosive materials than copper.

5. Light weight. Fiber-optic cables are much lighter than copper cables.

6. Greater immunity to tapping. Fiber-optic cables are more immune to tapping than copper cables. Copper cables create antenna effects that can easily be tapped.

Disadvantages: There are some disadvantages in the use of optical fiber.

1. Installation and maintenance.

2. Unidirectional light propagation. Propagation of light is unidirectional. If we need bidirectional communication, two fibers are needed.

3. Cost. The cable and the interfaces are relatively more expensive than those of other guided media. If the demand for bandwidth is not high, often the use of optical fiber cannot be justified.

UNGUIDED MEDIA: WIRELESS

Unguided media transport electromagnetic waves without using a physical conductor. This type of communication is often referred to as wireless communication. The three types of unguided media are radio waves, microwaves, and infrared.


Unguided signals can travel from the source to destination in several ways: ground
propagation, sky propagation, and line-of-sight propagation, as shown in Figure

Radio Waves

Electromagnetic waves ranging in frequencies between 3 kHz and 1 GHz are normally
called radio waves. Radio waves are omni directional. When an antenna transmits radio
waves, they are propagated in all directions. This means that the sending and receiving
antennas do not have to be aligned. A sending antenna sends waves that can be received by
any receiving antenna. The omni directional property has a disadvantage, too. The radio
waves transmitted by one antenna are susceptible to interference by another antenna that
may send signals using the same frequency or band.

Omni directional Antenna


Radio waves use omnidirectional antennas that send out signals in all directions. Based on
the wavelength, strength, and the purpose of transmission, we can have several types of

antennas. Figure shows an omnidirectional antenna.

Applications

The Omni directional characteristics of radio waves make them useful for multicasting, in
which there is one sender but many receivers. AM and FM radio, television, maritime radio,
cordless phones, and paging are examples of multicasting.

Microwaves

Electromagnetic waves having frequencies between 1 and 300 GHz are called microwaves.
Microwaves are unidirectional. The sending and receiving antennas need to be aligned. The
unidirectional property has an obvious advantage. A pair of antennas can be aligned
without interfering with another pair of aligned antennas

Unidirectional Antenna

Microwaves need unidirectional antennas that send out signals in one direction. Two types
of antennas are used for microwave communications: the parabolic dish and the horn


Applications:

Microwaves are used for unicast communication such as cellular telephones, satellite networks, and wireless LANs.

Infrared

Infrared waves, with frequencies from 300 GHz to 400 THz (wavelengths from 1 mm to
770 nm), can be used for short-range communication. Infrared waves, having high
frequencies, cannot penetrate walls. This advantageous characteristic prevents interference between one system and another; a short-range
communication system in one room cannot be affected by another system in the next room.

When we use our infrared remote control, we do not interfere with the use of the remote by our neighbors. Infrared signals are useless for long-range communication. In addition, we cannot use infrared waves outside a building because the sun's rays contain infrared waves that can interfere with the communication.

Applications:

Infrared signals can be used for short-range communication in a closed area using line-of-sight propagation.

REFERENCE MODELS

OSI

OSI stands for Open Systems Interconnection

Created by International Standards Organization (ISO)

Was created as a framework and reference model to explain how different networking technologies work together and interact.

It is not a standard that networking protocols must follow.
Each layer has specific functions it is responsible for.


All layers work together in the correct order to move data around a network.

Top to bottom: All People Seem To Need Data Processing
Bottom to top: Please Do Not Throw Sausage Pizza Away

Physical Layer

Deals with all aspects of physically moving data from one computer to the next.
Converts data from the upper layers into 1s and 0s for transmission over media.

Defines how data is encoded onto the media to transmit the data.
Defined on this layer: cable standards, wireless standards, and fiber optic standards.


Copper wiring, fiber optic cable, radio frequencies: anything that can be used to transmit data is defined on the Physical layer of the OSI Model.

Device example: Hub (used to transmit data)

Data Link Layer

Is responsible for moving frames from node to node or computer to computer.
Can move frames from one adjacent computer to another; cannot move frames across routers.

Encapsulation = frame

Requires MAC address or physical address

Protocols defined include Ethernet Protocol and Point-to-Point Protocol (PPP).
Device example: Switch

Two sublayers: Logical Link Control (LLC) and the Media Access Control (MAC)

o Logical Link Control (LLC)

–Data Link layer addressing, flow control, address notification, error control

o Media Access Control (MAC)

–Determines which computer has access to the network media at any given time

–Determines where one frame ends and the next one starts, called frame

synchronization

Network Layer

Responsible for moving packets (data) from one end of the network to the other, called
end- to-end communications

Requires logical addresses such as IP addresses. Device example: Router

–Routing is the ability of various network devices and their related software to move data packets from source to destination

Transport Layer

Takes data from higher levels of OSI Model and breaks it into segments that

can be sent to lower-level layers for data transmission

Conversely, reassembles data segments into data that higher-level protocols and
applications can use


Also puts segments in correct order (called sequencing) so they can be reassembled in correct order at destination.
Concerned with the reliability of the transport of sent data.

May use a connection-oriented protocol such as TCP to ensure the destination received the segments.

May use a connectionless protocol such as UDP to send segments without assurance of delivery.
Uses port addressing.

Session Layer

Responsible for managing the dialog between networked devices.
Establishes, manages, and terminates connections.

Provides duplex, half-duplex, or simplex communications between devices.
Provides procedures for establishing checkpoints, adjournment, termination, and restart or recovery procedures.

Presentation Layer

Concerned with how data is presented to the network

Handles three primary tasks: translation, compression, and encryption.

Application Layer

Contains all services or protocols needed by application software or operating system to communicate on the network.

Examples

o –Firefox web browser uses HTTP (Hypertext Transfer Protocol)


o –E-mail program may use POP3 (Post Office Protocol version 3) to read e-mails and SMTP (Simple Mail Transfer Protocol) to send e-mails

The interaction between layers in the OSI model

An exchange using the OSI model


Summary

TCP/IP Model (Transmission Control Protocol/Internet Protocol)
–A protocol suite is a large number of related protocols that work together to allow networked computers to communicate

Relationship of layers and addresses in TCP/IP

Application Layer

Application layer protocols define the rules when implementing specific network applications

Rely on the underlying layers to provide accurate and efficient data delivery


Typical protocols:

o FTP – File Transfer Protocol: for file transfer

o Telnet – Remote terminal protocol: for remote login on any other computer on the network

o SMTP – Simple Mail Transfer Protocol: for mail transfer

o HTTP – Hypertext Transfer Protocol: for Web browsing

Encompasses the same functions as these OSI Model layers: Application, Presentation, Session

Transport Layer TCP &UDP

TCP is a connection-oriented protocol

o Does not mean it has a physical connection between sender and receiver

o TCP provides the function to allow a connection to virtually exist – also called a virtual circuit

TCP provides the functions:

o Dividing a chunk of data into segments

o Reassembling segments into the original chunk

o Further functions such as reordering and data resend, offering a reliable byte-stream delivery service

UDP, by contrast, sends independent datagrams without these reliability functions

Functions the same as the Transport layer in OSI

Synchronize source and destination computers to set up the session between the respective
computers

Internet Layer

The network layer, also called the internet layer, deals with packets and connects
independent networks to transport the packets across network boundaries. The network
layer protocols are the IP and the Internet Control Message Protocol (ICMP), which is used
for error reporting.

Host-to-network layer


The Host-to-network layer is the lowest layer of the TCP/IP reference model. It combines
the link layer and the physical layer of the ISO/OSI model. At this layer, data is transferred
between adjacent network nodes in a WAN or between nodes on the same LAN.


UNIT- II

Data Link Layer - Design issues, Error Detection & Correction, Elementary Data Link Layer
Protocols, Sliding window protocols, Multiple Access Protocols - ALOHA, CSMA,CSMA/CD,
CSMA/CA, Collision free protocols, Ethernet- Physical Layer, Ethernet Mac Sub layer

DATA LINK LAYER (SERVICES)

1. Providing services to the network layer:

1. Unacknowledged connectionless service.
Appropriate for low error rate and real-time traffic. Ex: Ethernet

2. Acknowledged connectionless service.
Useful on unreliable channels, e.g., WiFi. Ack/Timer/Resend.

3. Acknowledged connection-oriented service.
Guarantees frames are received exactly once and in the right order. Appropriate over long, unreliable links such as a satellite channel or a long-distance telephone circuit.

2. Framing: The Data Link Layer divides the stream of bits received from the network layer into manageable data units called frames.

3. Physical Addressing: The Data Link layer adds a header to the frame in order to
define physical address of the sender or receiver of the frame, if the frames are to be
distributed to different systems on the network.

4. Flow Control: A receiving node can receive the frames at a faster rate than it can
process the frame. Without flow control, the receiver's buffer can overflow, and frames can
get lost. To overcome this problem, the data link layer uses the flow control to prevent the
sending node on one side of the link from overwhelming the receiving node on another side
of the link. This prevents traffic jam at the receiver side.

5. Error Control: Error control is achieved by adding a trailer at the end of the frame. The Data Link Layer also adds a mechanism to prevent duplication of frames.

Error detection: Errors can be introduced by signal attenuation and noise. The Data Link Layer protocol provides a mechanism to detect one or more errors. This is achieved by adding error-detection bits to the frame so that the receiving node can perform an error check (a minimal parity-bit sketch follows this list).

Error correction: Error correction is similar to error detection, except that the receiving node not only detects the errors but also determines where the errors have occurred in the frame.

6. Access Control: Protocols of this layer determine which of the devices has control
over the link at any given time, when two or more devices are connected to the same link.

7. Reliable delivery: The Data Link Layer provides a reliable delivery service, i.e., it transmits the network layer datagram without any error. A reliable delivery service is accomplished with retransmissions and acknowledgements. A data link layer mainly provides the reliable delivery service over links that have higher error rates, so that an error can be corrected locally, on the link at which it occurs, rather than forcing an end-to-end retransmission of the data.

8. Half-Duplex & Full-Duplex: In a Full-Duplex mode, both the nodes can transmit the
data at the same time. In a Half-Duplex mode, only one node can transmit the data at the
same time.
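As promised in the error-detection item above, here is a deliberately minimal parity-bit sketch in Python (an illustration added to these notes, not the scheme a real data link layer standardizes; real links use stronger codes such as CRCs, and the function names here are arbitrary).

def add_parity(bits: str) -> str:
    # Append one even-parity bit so the total number of 1s in the frame is even.
    return bits + ('1' if bits.count('1') % 2 else '0')

def check_parity(frame: str) -> bool:
    # Receiver side: accept the frame only if the count of 1s is still even.
    return frame.count('1') % 2 == 0

frame = add_parity('1011001')        # -> '10110010'
print(check_parity(frame))           # True: no error detected
corrupted = '0' + frame[1:]          # flip the first bit in transit
print(check_parity(corrupted))       # False: the single-bit error is detected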

FRAMING:

To provide service to the network layer, the data link layer must use the service provided
to it by the physical layer. What the physical layer does is accept a raw bit stream and
attempt to deliver it to the destination. This bit stream is not guaranteed to be error free.
The number of bits received may be less than, equal to, or more than the number of bits
transmitted, and they may have different values. It is up to the data link layer to detect and,
if necessary, correct errors. The usual approach is for the data link layer to break the bit
stream up into discrete frames and compute the checksum for each frame (framing). When
a frame arrives at the destination, the checksum is recomputed. If the newly computed
checksum is different from the one contained in the frame, the data link layer knows that
an error has occurred and takes steps to deal with it (e.g., discarding the bad frame and
possibly also sending back an error report).We will look at four framing methods:

1. Character count.

2. Flag bytes with byte stuffing.

3. Starting and ending flags, with bit stuffing.

4. Physical layer coding violations.


Character count method uses a field in the header to specify the number of characters in
the frame. When the data link layer at the destination sees the character count, it knows
how many characters follow and hence where the end of the frame is. This technique is
shown in Fig. (a) For four frames of sizes 5, 5, 8, and 8 characters, respectively.

A character stream. (a) Without errors. (b) With one error

The trouble with this algorithm is that the count can be garbled by a transmission error.
For example, if the character count of 5 in the second frame of Fig. (b) becomes a 7, the
destination will get out of synchronization and will be unable to locate the start of the next
frame. Even if the checksum is incorrect so the destination knows that the frame is bad, it
still has no way of telling where the next frame starts. Sending a frame back to the source
asking for a retransmission does not help either, since the destination does not know how
many characters to skip over to get to the start of the retransmission. For this reason, the
character count method is rarely used anymore.
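A tiny Python sketch (illustrative only; the exact framing format and function name are assumptions, not taken from the text) makes this failure mode concrete: each frame begins with a one-byte count that includes the count byte itself, and corrupting a single count desynchronizes every frame that follows.

def parse_by_count(stream: bytes):
    # Split a byte stream into frames; each frame starts with a one-byte length
    # field that counts the length byte itself plus the data bytes.
    frames, i = [], 0
    while i < len(stream):
        count = stream[i]
        frames.append(stream[i + 1:i + count])
        i += count
    return frames

stream = bytes([5, 1, 2, 3, 4, 5, 6, 7, 8, 9])   # two frames, each of size 5
print(parse_by_count(stream))                    # [b'\x01\x02\x03\x04', b'\x06\x07\x08\t']
# If the first count is garbled from 5 to 7, every later frame boundary is wrong:
print(parse_by_count(bytes([7]) + stream[1:]))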

Flag bytes with byte stuffing method gets around the problem of resynchronization after an
error by having each frame start and end with special bytes.


(a) A frame delimited by flag bytes. (b) Four examples of byte sequences before and after byte stuffing.

In the past, the starting and ending bytes were different, but in recent years most protocols
have used the same byte, called a flag byte, as both the starting and ending delimiter, as
shown in Fig. (a) as FLAG. In this way, if the receiver ever loses synchronization, it can just
search for the flag byte to find the end of the current frame. Two consecutive flag bytes
indicate the end of one frame and start of the next one.

It may easily happen that the flag byte's bit pattern occurs in the data. This situation will
usually interfere with the framing. One way to solve this problem is to have the sender's
data link layer insert a special escape byte (ESC) just before each ''accidental'' flag byte in
the data. The data link layer on the receiving end removes the escape byte before the data
are given to the network layer. This technique is called byte stuffing or character stuffing.

Thus, a framing flag byte can be distinguished from one in the data by the absence or
presence of an escape byte before it.

What happens if an escape byte occurs in the middle of the data? The answer is that it, too, is stuffed with an escape byte. Thus, any single escape byte is part of an escape sequence, whereas a doubled one indicates that a single escape occurred naturally in the data. Some examples are shown in Fig. (b). In all cases, the byte sequence delivered after destuffing is exactly the same as the original byte sequence.
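A minimal byte-stuffing sketch in Python (an illustration added to these notes; the particular FLAG and ESC byte values are placeholders rather than values taken from the text) shows the sender escaping accidental flag and escape bytes and the receiver removing them.

FLAG, ESC = 0x7E, 0x7D    # placeholder delimiter and escape byte values

def byte_stuff(payload: bytes) -> bytes:
    # Sender side: precede every accidental FLAG or ESC in the data with ESC.
    out = bytearray([FLAG])               # opening flag
    for b in payload:
        if b in (FLAG, ESC):
            out.append(ESC)               # stuff an escape byte
        out.append(b)
    out.append(FLAG)                      # closing flag
    return bytes(out)

def byte_unstuff(frame: bytes) -> bytes:
    # Receiver side: drop the two flags and remove the stuffed escape bytes.
    body, out, i = frame[1:-1], bytearray(), 0
    while i < len(body):
        if body[i] == ESC:
            i += 1                        # skip the escape, keep the byte after it
        out.append(body[i])
        i += 1
    return bytes(out)

data = bytes([0x41, FLAG, 0x42, ESC, 0x43])
assert byte_unstuff(byte_stuff(data)) == data   # destuffed data equals the original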

A major disadvantage of using this framing method is that it is closely tied to the use of 8-bit characters. Not all character codes use 8-bit characters. For example, UNICODE uses 16-bit characters, so a new technique had to be developed to allow arbitrary-sized characters.

Starting and ending flags, with bit stuffing allows data frames to contain an arbitrary
number of bits and allows character codes with an arbitrary number of bits per character.
It works like this. Each frame begins and ends with a special bit pattern, 01111110 (in fact,
a flag byte). Whenever the sender's data link layer encounters five consecutive 1s in the
data, it automatically stuffs a 0 bit into the outgoing bit stream. This bit stuffing is
analogous to byte stuffing, in which an escape byte is stuffed into the outgoing character
stream before a flag byte in the data.

When the receiver sees five consecutive incoming 1 bits, followed by a 0 bit, it automatically de-stuffs (i.e., deletes) the 0 bit. Just as byte stuffing is completely transparent to the network layer in both computers, so is bit stuffing. If the user data contain the flag pattern, 01111110, this flag is transmitted as 011111010 but stored in the receiver's memory as 01111110.

Fig: Bit stuffing. (a) The original data. (b) The data as they appear on the line. (c) The data as they are stored in the receiver's memory after destuffing.

With bit stuffing, the boundary between two frames can be unambiguously recognized by the flag pattern. Thus, if the receiver loses track of where it is, all it has to do is scan the input for flag sequences, since they can only occur at frame boundaries and never within the data.
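The bit-level rule can be sketched in Python on strings of '0'/'1' characters (illustrative only, assuming a well-formed input stream): the sender inserts a 0 after five consecutive 1s, and the receiver deletes the 0 that follows five consecutive 1s.

def bit_stuff(bits: str) -> str:
    # Insert a 0 after every run of five consecutive 1s in the outgoing stream.
    out, ones = [], 0
    for b in bits:
        out.append(b)
        ones = ones + 1 if b == '1' else 0
        if ones == 5:
            out.append('0')               # stuffed bit
            ones = 0
    return ''.join(out)

def bit_unstuff(bits: str) -> str:
    # Delete the 0 that follows five consecutive 1s in the incoming stream.
    out, ones, skip = [], 0, False
    for b in bits:
        if skip:                          # this is the stuffed 0; drop it
            skip = False
            ones = 0
            continue
        out.append(b)
        ones = ones + 1 if b == '1' else 0
        if ones == 5:
            skip = True
    return ''.join(out)

data = '01111110'                          # flag-like pattern inside user data
assert bit_stuff(data) == '011111010'      # matches the example in the text
assert bit_unstuff(bit_stuff(data)) == data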

Physical layer coding violations method of framing is only applicable to networks in which the encoding on the physical medium contains some redundancy. For example, some LANs encode 1 bit of data by using 2 physical bits. Normally, a 1 bit is a high-low pair and a 0 bit is a low-high pair. The scheme means that every data bit has a transition in the middle, making it easy for the receiver to locate the bit boundaries. The combinations high-high and low-low are not used for data but are used for delimiting frames in some protocols.


As a final note on framing, many data link protocols use a combination of a character count with one of the other methods for extra safety. When a frame arrives, the count field is used to locate the end of the frame. Only if the appropriate delimiter is present at that position and the checksum is correct is the frame accepted as valid. Otherwise, the input stream is scanned for the next delimiter.

ELEMENTARY DATA LINK PROTOCOLS

Simplest Protocol

It is very simple. The sender sends a sequence of frames without even thinking about the receiver. Data are transmitted in one direction only. Both sender and receiver are always ready. Processing time can be ignored. Infinite buffer space is available. And best of all, the communication channel between the data link layers never damages or loses frames. This thoroughly unrealistic protocol, which we will nickname ‘‘Utopia,’’ does not handle either flow control or error correction.


Stop-and-wait Protocol

It is still very simple. The sender sends one frame and waits for feedback from the receiver. When the ACK arrives, the sender sends the next frame. It is called the Stop-and-Wait Protocol because the sender sends one frame, stops until it receives confirmation from the receiver (okay to go ahead), and then sends the next frame. We still have unidirectional communication for data frames, but auxiliary ACK frames (simple tokens of acknowledgment) travel from the other direction. We add flow control to our previous protocol.

NOISY CHANNELS

Although the Stop-and-Wait Protocol gives us an idea of how to add flow control to its
predecessor, noiseless channels are nonexistent. We can ignore the error (as we
sometimes do), or we need to add error control to our protocols. We discuss three
protocols in this section that use error control.

Sliding Window Protocols:

1 Stop-and-Wait Automatic Repeat Request

2 Go-Back-N Automatic Repeat Request

3 Selective Repeat Automatic Repeat Request

1 Stop-and-Wait Automatic Repeat Request

To detect and correct corrupted frames, we need to add redundancy bits to our data frame.
When the frame arrives at the receiver site, it is checked and if it is corrupted, it is silently discarded. The detection of errors in this protocol is manifested by the silence of the
receiver.

Lost frames are more difficult to handle than corrupted ones. In our previous protocols,
there was no way to identify a frame. The received frame could be the correct one, or a
duplicate, or a frame out of order. The solution is to number the frames. When the receiver
receives a data frame that is out of order, this means that frames were either lost or
duplicated

The lost frames need to be resent in this protocol. If the receiver does not respond when
there is an error, how can the sender know which frame to resend? To remedy this
problem, the sender keeps a copy of the sent frame. At the same time, it starts a timer. If the
timer expires and there is no ACK for the sent frame, the frame is resent, the copy is held,
and the timer is restarted. Since the protocol uses the stop-and-wait mechanism, there is
only one specific frame that needs an ACK

Error correction in Stop-and-Wait ARQ is done by keeping a copy of the sent frame and retransmitting it when the timer expires.

In Stop-and-Wait ARQ, we use sequence numbers to number the frames. The sequence
numbers are based on modulo-2 arithmetic.

In Stop-and-Wait ARQ, the acknowledgment number always announces in modulo-2


arithmetic the sequence number of the next frame expected.
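The resend and duplicate-discard rules above can be traced with a small deterministic Python sketch (an illustration added to these notes; the loss-schedule mechanism and names are assumptions): a dropped frame or dropped ACK appears to the sender as an expired timer, and the receiver silently discards duplicates while still acknowledging them.

def stop_and_wait(frames, losses):
    # Toy Stop-and-Wait ARQ trace. `losses` holds the indices of transmissions
    # (data frames and ACKs counted together) that the channel drops.
    tx = 0           # global transmission counter, used to index `losses`
    seq = 0          # sender's sequence number (modulo 2)
    expected = 0     # receiver's Rn: sequence number of the next frame expected
    delivered = []
    for payload in frames:
        while True:
            frame_lost = tx in losses
            tx += 1
            if frame_lost:
                continue                          # timer expires: resend the saved copy
            if seq == expected:                   # in-order frame: deliver it
                delivered.append(payload)
                expected = (expected + 1) % 2
            # (a duplicate is silently discarded here, but it is still acknowledged)
            ack_lost = tx in losses
            tx += 1
            if ack_lost:
                continue                          # timer expires again: resend the frame
            seq = (seq + 1) % 2                   # ACK announced the next expected frame
            break
    return delivered

# Drop the first ACK (transmission 1) and the retransmission's ACK (transmission 3);
# the receiver still delivers each frame exactly once and in order:
print(stop_and_wait(["F0", "F1", "F2"], losses={1, 3}))   # ['F0', 'F1', 'F2']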


Bandwidth Delay Product:

Assume that, in a Stop-and-Wait ARQ system, the bandwidth of the line is 1 Mbps, and 1 bit takes 20 ms to make a round trip. What is the bandwidth-delay product? If the system data frames are 1000 bits in length, what is the utilization percentage of the link?

The bandwidth-delay product is 1 Mbps × 20 ms = 20,000 bits; this is the number of bits the link can hold during one round trip. Because the sender keeps only one 1000-bit frame outstanding, the link utilization is only 1000/20,000, or 5 percent. For this reason, for a link with a high bandwidth or long delay, the use of Stop-and-Wait ARQ wastes the capacity of the link.
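A minimal Python sketch of this calculation (illustrative only, using the figures from the example above):

# Bandwidth-delay product and link utilization for Stop-and-Wait ARQ
bandwidth_bps = 1_000_000        # 1 Mbps
round_trip_s  = 20e-3            # 20 ms round trip
frame_bits    = 1000             # length of one data frame

bdp_bits = bandwidth_bps * round_trip_s      # bits the link can hold in one round trip
utilization = frame_bits / bdp_bits          # only one frame is outstanding at a time

print(f"bandwidth-delay product = {bdp_bits:.0f} bits")   # 20000 bits
print(f"link utilization        = {utilization:.1%}")     # 5.0%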


2. Go-Back-N Automatic Repeat Request

To improve the efficiency of transmission (filling the pipe), multiple frames must be in
transition while waiting for acknowledgment. In other words, we need to let more than one
frame be outstanding to keep the channel busy while the sender is waiting for
acknowledgment.

The first is called Go-Back-N Automatic Repeat Request. In this protocol we can send several frames before receiving acknowledgments; we keep a copy of these frames until the acknowledgments arrive.

In the Go-Back-N Protocol, the sequence numbers are modulo 2^m, where m is the size of the sequence number field in bits. The sequence numbers range from 0 to 2^m − 1. For example, if m is 4, the sequence numbers are 0 through 15 inclusive.

The sender window at any time divides the possible sequence numbers into four regions.

The first region, from the far left to the left wall of the window, defines the sequence
numbers belonging to frames that are already acknowledged. The sender does not worry
about these frames and keeps no copies of them.

The second region, colored in Figure (a), defines the range of sequence numbers belonging
to the frames that are sent and have an unknown status. The sender needs to wait to find
out if these frames have been received or were lost. We call these outstanding frames.


The third range, white in the figure, defines the range of sequence numbers for frames that
can be sent; however, the corresponding data packets have not yet been received from the
network layer.

Finally, the fourth region defines sequence numbers that cannot be used until the window
slides

The send window is an abstract concept defining an imaginary box of size 2^m − 1 with three variables: Sf, Sn, and Ssize. The variable Sf defines the sequence number of the first (oldest) outstanding frame. The variable Sn holds the sequence number that will be assigned to the next frame to be sent. Finally, the variable Ssize defines the size of the window.

Figure (b) shows how a send window can slide one or more slots to the right when an acknowledgment arrives from the other end. The acknowledgments in this protocol are cumulative, meaning that more than one frame can be acknowledged by an ACK frame. In the figure, frames 0, 1, and 2 are acknowledged, so the window has slid to the right three slots. Note that the value of Sf is 3 because frame 3 is now the first outstanding frame. The send window can slide one or more slots when a valid acknowledgment arrives.

Receiver window: the receiver keeps a single variable Rn (receive window, next frame expected). The sequence numbers to the left of the window belong to the frames already received and acknowledged; the sequence numbers to the right of this window define the frames that cannot be received. Any received frame with a sequence number in these two regions is discarded. Only a frame with a sequence number matching the value of Rn is accepted and acknowledged. The receive window also slides, but only one slot at a time. When a correct frame is received (and frames are received only one at a time), the window slides (see the figure below for the receive window).


The receive window is an abstract concept defining an imaginary box of size 1 with one
single variable Rn. The window slides when a correct frame has arrived; sliding occurs one
slot at a time

Fig: Receiver window (before sliding (a), After sliding (b))

Timers

Although there can be a timer for each frame that is sent, in our protocol we use only one.
The reason is that the timer for the first outstanding frame always expires first; we send all
outstanding frames when this timer expires.

Acknowledgment

The receiver sends a positive acknowledgment if a frame has arrived safe and sound and in
order. If a frame is damaged or is received out of order, the receiver is silent and will
discard all subsequent frames until it receives the one it is expecting. The silence of the
receiver causes the timer of the unacknowledged frame at the sender side to expire. This, in
turn, causes the sender to go back and resend all frames, beginning with the one with the
expired timer. The receiver does not have to acknowledge each frame received. It can send
one cumulative acknowledgment for several frames.

Resending a Frame

When the timer expires, the sender resends all outstanding frames. For example, suppose
the sender has already sent frame 6, but the timer for frame 3 expires. This means that
frame 3 has not been acknowledged; the sender goes back and sends frames 3,4,5, and 6
again. That is why the protocol is called Go-Back-N ARQ.
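A short Python sketch (an illustration, not part of the original notes) of this resend rule: on a timeout, every outstanding frame from Sf up to, but not including, Sn is retransmitted. The function send_frame is a hypothetical placeholder for putting a frame on the line.

# Go-Back-N resend rule (variable names follow the text: Sf, Sn, m)
m = 3                      # sequence-number field size in bits
Sf, Sn = 3, 7              # first outstanding frame, next frame to send (example values)

def send_frame(seq):       # hypothetical placeholder for transmitting one frame
    print(f"(re)sending frame {seq % 2**m}")

def on_timeout():
    # resend every outstanding frame, starting with the oldest one (Sf)
    for seq in range(Sf, Sn):
        send_frame(seq)

on_timeout()               # with Sf = 3 and Sn = 7 this resends frames 3, 4, 5 and 6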


The figure below shows an example (a lost ACK) of a case where the forward channel is reliable but the reverse channel is not. No data frames are lost, but some ACKs are delayed and one is lost. The example also shows how cumulative acknowledgments can help if acknowledgments are delayed or lost.


The figure below shows an example of a lost frame.

Stop-and-Wait ARQ is a special case of Go-Back-N ARQ in which the size of the send
window is 1.

3 Selective Repeat Automatic Repeat Request

In Go-Back-N ARQ, the receiver keeps track of only one variable, and there is no need to buffer out-of-order frames; they are simply discarded. However, this protocol is very inefficient for a noisy link.

In a noisy link a frame has a higher probability of damage, which means the resending of multiple frames. This resending uses up the bandwidth and slows down the transmission.

For noisy links, there is another mechanism that does not resend N frames when just one
frame is damaged; only the damaged frame is resent. This mechanism is called Selective
Repeat ARQ.

It is more efficient for noisy links, but the processing at the receiver is more complex.


Sender Window: the send window works exactly as in Go-Back-N (before and after sliding); the only difference between Go-Back-N and Selective Repeat is the window size.

Receiver window

The receiver window in Selective Repeat is totally different from the one in Go-Back-N. First, the size of the receive window is the same as the size of the send window (2^(m−1)).

The Selective Repeat Protocol allows as many frames as the size of the receive window to arrive out of order and be kept until there is a set of in-order frames to be delivered to the network layer. Because the sizes of the send window and receive window are the same, all the frames in the send window can arrive out of order and be stored until they can be delivered. However, the receiver never delivers packets out of order to the network layer. The figure shows the receive window; the colored slots inside the window define frames that have arrived out of order and are waiting for their neighbors to arrive before delivery to the network layer.

In Selective Repeat ARQ, the size of the sender and receiver window must be at most one-half of 2^m, that is, 2^(m−1).
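A quick illustration (a sketch, not from the notes) of the two window-size rules for an m-bit sequence-number field:

# Window-size limits for Go-Back-N and Selective Repeat
for m in (2, 3, 4):
    seq_space = 2**m
    gbn_window = seq_space - 1          # Go-Back-N:        at most 2^m - 1
    sr_window  = seq_space // 2         # Selective Repeat: at most 2^(m-1)
    print(f"m={m}: sequence numbers 0..{seq_space - 1}, "
          f"GBN window <= {gbn_window}, SR window <= {sr_window}")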

Delivery of Data in Selective Repeat ARQ:


Flow Diagram

Differences between Go-Back N & Selective Repeat

One main difference is the number of timers. Here, each frame sent or resent needs a timer, which means that the timers need to be numbered (0, 1, 2, and 3). The timer for frame 0 starts at the first request, but stops when the ACK for this frame arrives.

There are two conditions for the delivery of frames to the network layer: First, a set of
consecutive frames must have arrived. Second, the set starts from the beginning of the
window. After the first arrival, there was only one frame and it started from the beginning
of the window. After the last arrival, there are three frames and the first one starts from the
beginning of the window.

Another important point is that a NAK is sent when a frame is found to be missing, so that only that frame is resent.

The next point is about the ACKs. Notice that only two ACKs are sent here. The first one
acknowledges only the first frame; the second one acknowledges three frames. In Selective
Repeat, ACKs are sent when data are delivered to the network layer. If the data belonging
to n frames are delivered in one shot, only one ACK is sent for all of them.


Piggybacking

A technique called piggybacking is used to improve the efficiency of the bidirectional protocols. When a frame is carrying data from A to B, it can also carry control information about arrived (or lost) frames from B; when a frame is carrying data from B to A, it can also carry control information about the arrived (or lost) frames from A.

RANDOM ACCESS PROTOCOLS

We can consider the data link layer as two sublayers. The upper sublayer is responsible for data link control, and the lower sublayer is responsible for resolving access to the shared media.

The upper sublayer that is responsible for flow and error control is called the logical link control (LLC) layer; the lower sublayer that is mostly responsible for multiple-access resolution is called the media access control (MAC) layer. When nodes or stations are connected and use a common link, called a multipoint or broadcast link, we need a multiple-access protocol to coordinate access to the link.

Taxonomy of multiple-access protocols


RANDOM ACCESS

In random access or contention methods, no station is superior to another station and none
is assigned the control over another.

Two features give this method its name. First, there is no scheduled time for a station to
transmit. Transmission is random among the stations. That is why these methods are called
random access. Second, no rules specify which station should send next. Stations compete
with one another to access the medium. That is why these methods are also called
contention methods.

ALOHA

1 Pure ALOHA

The original ALOHA protocol is called pure ALOHA. This is a simple, but elegant protocol.
The idea is that each station sends a frame whenever it has a frame to send. However, since
there is only one channel to share, there is the possibility of collision between frames from
different stations. Below Figure shows an example of frame collisions in pure ALOHA.

Frames in a pure ALOHA network

In pure ALOHA, the stations transmit frames whenever they have data to send.

When two or more stations transmit simultaneously, there is collision and the frames are
destroyed.

In pure ALOHA, whenever any station transmits a frame, it expects an acknowledgement from the receiver. If the acknowledgement is not received within the specified time, the station assumes that the frame (or the acknowledgement) has been destroyed. If the frame is destroyed because of a collision, the station waits for a random amount of time and sends it again. This waiting time must be random; otherwise the same frames will collide again and again.

Therefore pure ALOHA dictates that, when the time-out period passes, each station must wait for a random amount of time before resending its frame. This randomness will help avoid more collisions.

Vulnerable time: Let us find the length of time, the vulnerable time, in which there is a possibility of collision. We assume that the stations send fixed-length frames, with each frame taking Tfr seconds to send. The figure below shows the vulnerable time for station A.

Station A sends a frame at time t. Now imagine station B has already sent a frame between t
- Tfr and t. This leads to a collision between the frames from station A and station B. The
end of B's frame collides with the beginning of A's frame. On the other hand, suppose that
station C sends a frame between t and t + Tfr . Here, there is a collision between frames
from station A and station C. The beginning of C's frame collides with the end of A's frame


Looking at the figure, we see that the vulnerable time, during which a collision may occur in pure ALOHA, is two times the frame transmission time: pure ALOHA vulnerable time = 2 × Tfr.

Procedure for pure ALOHA protocol

Example

A pure ALOHA network transmits 200-bit frames on a shared channel of 200 kbps. What is the requirement to make this frame collision-free?

Solution

The average frame transmission time Tfr is 200 bits/200 kbps, or 1 ms. The vulnerable time is 2 × 1 ms = 2 ms. This means no station should send later than 1 ms before this station starts transmission, and no station should start sending during the 1-ms period that this station is sending.

The throughput for pure ALOHA is S = G × e^(−2G). The maximum throughput is Smax = 0.184 when G = 1/2.

PROBLEM

A pure ALOHA network transmits 200-bit frames on a shared channel of 200 kbps. What is the throughput if the system (all stations together) produces a. 1000 frames per second, b. 500 frames per second, c. 250 frames per second?

The frame transmission time is 200/200 kbps or 1 ms.


a. If the system creates 1000 frames per second, this is 1 frame per millisecond. The load is 1. In this case S = G × e^(−2G), or S = 0.135 (13.5 percent). This means that the throughput is 1000 × 0.135 = 135 frames. Only 135 frames out of 1000 will probably survive.

b. If the system creates 500 frames per second, this is 1/2 frame per millisecond. The load is 1/2. In this case S = G × e^(−2G), or S = 0.184 (18.4 percent). This means that the throughput is 500 × 0.184 = 92, and only 92 frames out of 500 will probably survive. Note that this is the maximum throughput case, percentage-wise.

c. If the system creates 250 frames per second, this is 1/4 frame per millisecond. The load is 1/4. In this case S = G × e^(−2G), or S = 0.152 (15.2 percent). This means that the throughput is 250 × 0.152 = 38. Only 38 frames out of 250 will probably survive.
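The three cases above can be reproduced with a few lines of Python (an illustrative sketch, following the same arithmetic as the notes):

import math

Tfr = 1e-3                                   # frame transmission time: 200 bits / 200 kbps = 1 ms
for offered in (1000, 500, 250):             # frames generated per second
    G = offered * Tfr                        # load in frames per frame time
    S = G * math.exp(-2 * G)                 # pure ALOHA throughput
    print(f"offered={offered:4d}  G={G:.2f}  S={S:.3f}  surviving ≈ {offered * S:.0f}")
# prints S = 0.135, 0.184, 0.152 and roughly 135, 92 and 38 surviving frames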

2 Slotted ALOHA

Pure ALOHA has a vulnerable time of 2 × Tfr.

This is so because there is no rule that defines when the station can send. A station may send soon after another station has started or soon before another station has finished. Slotted ALOHA was invented to improve the efficiency of pure ALOHA.

In slotted ALOHA we divide the time into slots of Tfr seconds and force the station to send only at the beginning of a time slot. Figure 3 shows an example of frame collisions in slotted ALOHA.


FIG:3

Because a station is allowed to send only at the beginning of the synchronized time slot, if a
station misses this moment, it must wait until the beginning of the next time slot. This
means that the station which started at the beginning of this slot has already finished
sending its frame. Of course, there is still the possibility of collision if two stations try to
send at the beginning of the same time slot. However, the vulnerable time is now reduced
to one-half, equal to Tfr Figure 4 shows the situation

The figure below shows that the vulnerable time for slotted ALOHA is one-half that of pure ALOHA: slotted ALOHA vulnerable time = Tfr.

The throughput for slotted ALOHA is S = G × e^(−G). The maximum throughput is Smax = 0.368 when G = 1.

A slotted ALOHA network transmits 200-bit frames using a shared channel with a 200-kbps bandwidth. Find the throughput if the system (all stations together) produces a. 1000 frames per second, b. 500 frames per second, c. 250 frames per second.

Solution

This situation is similar to the previous exercise except that the network is using slotted
ALOHA instead of pure ALOHA. The frame transmission time is 200/200 kbps or 1 ms.

a. In this case G is 1, so S = G × e^(−G), or S = 0.368 (36.8 percent). This means that the throughput is 1000 × 0.368 = 368 frames. Only 368 out of 1000 frames will probably survive. Note that this is the maximum throughput case, percentage-wise.

b. Here G is 1/2. In this case S = G × e^(−G), or S = 0.303 (30.3 percent). This means that the throughput is 500 × 0.303 = 151. Only 151 frames out of 500 will probably survive.

c. Now G is 1/4. In this case S = G × e^(−G), or S = 0.195 (19.5 percent). This means that the throughput is 250 × 0.195 = 49. Only 49 frames out of 250 will probably survive.
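The same sketch, switched to the slotted-ALOHA formula S = G × e^(−G) (illustrative only; the notes round S to three decimals first, which gives 368, 151 and 49 surviving frames):

import math

Tfr = 1e-3                                   # 200 bits / 200 kbps = 1 ms
for offered in (1000, 500, 250):
    G = offered * Tfr
    S = G * math.exp(-G)                     # slotted ALOHA throughput
    print(f"offered={offered:4d}  G={G:.2f}  S={S:.3f}  surviving ≈ {offered * S:.1f}")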

Comparison between Pure Aloha & Slotted Aloha

Carrier Sense Multiple Access (CSMA)

To minimize the chance of collision and, therefore, increase the performance, the CSMA
method was developed. The chance of collision can be reduced if a station senses the
medium before trying to use it. Carrier sense multiple access (CSMA) requires that each
station first listen to the medium (or check the state of the medium) before sending. In
other words, CSMA is based on the principle "sense before transmit" or "listen before talk."

CSMA can reduce the possibility of collision, but it cannot eliminate it. The reason for this is shown in the figure below. Stations are connected to a shared channel (usually a dedicated medium).

The possibility of collision still exists because of propagation delay: a station may sense the medium and find it idle only because the first bit sent by another station has not yet been received.

At time t1, station B senses the medium and finds it idle, so it sends a frame. At time t2 (t2 > t1), station C senses the medium and finds it idle because, at this time, the first bits from station B have not reached station C. Station C also sends a frame. The two signals collide and both frames are destroyed.

Space/time model of the collision in CSMA


Vulnerable Time

The vulnerable time for CSMA is the propagation time Tp. This is the time needed for a signal to propagate from one end of the medium to the other. When a station sends a frame and any other station tries to send a frame during this time, a collision will result. But once the first bit of the frame reaches the end of the medium, every station will already have heard the bit and will refrain from sending.

Vulnerable time in CSMA

Persistence Methods

What should a station do if the channel is busy? What should a station do if the channel is idle? Three methods have been devised to answer these questions: the 1-persistent method, the non-persistent method, and the p-persistent method.

1-Persistent: In this method, after the station finds the line idle, it sends its frame immediately (with probability 1). This method has the highest chance of collision because two or more stations may find the line idle and send their frames immediately.

Non-persistent: A station that has a frame to send senses the line. If the line is idle, it sends immediately. If the line is not idle, it waits a random amount of time and then senses the line again. This approach reduces the chance of collision because it is unlikely that two or more stations will wait the same amount of time and retry to send simultaneously. However, this method reduces the efficiency of the network because the medium remains idle when there may be stations with frames to send.

p-Persistent: This is used if the channel has time slots with a slot duration equal to or greater than the maximum propagation time. The p-persistent approach combines the advantages of the other two strategies. It reduces the chance of collision and improves efficiency.

In this method, after the station finds the line idle, it follows these steps (a short sketch in code follows the list):

1. With probability p, the station sends its frame.

2. With probability q = 1 − p, the station waits for the beginning of the next time slot and checks the line again.

a. If the line is idle, it goes to step 1.

b. If the line is busy, it acts as though a collision has occurred and uses the backoff procedure.
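A minimal Python sketch of this procedure (illustrative only; channel_idle, send_frame, wait_one_slot and backoff are hypothetical placeholders, not a real API):

import random

def p_persistent_send(p, channel_idle, send_frame, wait_one_slot, backoff):
    while True:
        if not channel_idle():
            wait_one_slot()              # keep sensing until the line becomes idle
            continue
        if random.random() < p:          # step 1: with probability p, transmit
            send_frame()
            return
        wait_one_slot()                  # step 2: with probability q = 1 - p, wait one slot
        if channel_idle():
            continue                     # 2a: line still idle, go back to step 1
        backoff()                        # 2b: line busy, behave as if a collision occurred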

Carrier Sense Multiple Access with Collision Detection (CSMA/CD)

The CSMA method does not specify the procedure following a collision. Carrier sense
multiple access with collision detection (CSMA/CD) augments the algorithm to handle the
collision.

In this method, a station monitors the medium after it sends a frame to see if the transmission was successful. If so, the station is finished. If, however, there is a collision, the frame is sent again.


To better understand CSMA/CD, let us look at the first bits transmitted by the two stations
involved in the collision. Although each station continues to send bits in the frame until it
detects the collision, we show what happens as the first bits collide. In below Figure,
stations A and C are involved in the collision.

Collision of the first bit in CSMA/CD

At time t1, station A has executed its persistence procedure and starts sending the bits of its frame. At time t2, station C has not yet sensed the first bit sent by A. Station C executes its persistence procedure and starts sending the bits in its frame, which propagate both to the left and to the right. The collision occurs sometime after time t2. Station C detects a collision at time t3 when it receives the first bit of A's frame. Station C immediately (or after a short time, but we assume immediately) aborts transmission.

Station A detects the collision at time t4 when it receives the first bit of C's frame; it also immediately aborts transmission. Looking at the figure, we see that A transmits for the duration t4 − t1, and C transmits for the duration t3 − t2.


Minimum Frame Size

For CSMA/CD to work, we need a restriction on the frame size. Before sending the last bit of
the frame, the sending station must detect a collision, if any, and abort the transmission.
This is so because the station, once the entire frame is sent, does not keep a copy of the
frame and does not monitor the line for collision detection. Therefore, the frame
transmission time Tfr must be at least two times the maximum propagation time Tp. To
understand the reason, let us think about the worst-case scenario. If the two stations
involved in a collision are the maximum distance apart, the signal from the first takes time
Tp to reach the second, and the effect of the collision takes another time Tp to reach the
first. So the requirement is that the first station must still be transmitting after 2Tp .

Collision and abortion in CSMA/CD

Flow diagram for the CSMA/CD

PROBLEM


A network using CSMA/CD has a bandwidth of 10 Mbps. If the maximum propagation time
(including the delays in the devices and ignoring the time needed to send a jamming signal,
as we see later) is 25.6 μs, what is the minimum size of the frame?

Solution

The frame transmission time is Tfr = 2 × Tp = 51.2 μs. This means, in the worst case, a
station needs to transmit for a period of 51.2 μs to detect the collision. The minimum size of
the frame is 10 Mbps × 51.2 μs = 512 bits or 64 bytes. This is actually the minimum size of
the frame for Standard Ethernet.
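The same arithmetic in a short Python sketch (illustrative, using the figures from the problem):

bandwidth_bps = 10_000_000            # 10 Mbps
Tp = 25.6e-6                          # maximum propagation time in seconds

Tfr_min = 2 * Tp                                  # sender must still be transmitting after 2*Tp
min_frame_bits = bandwidth_bps * Tfr_min          # 512 bits
print(f"Tfr(min) = {Tfr_min * 1e6:.1f} us, minimum frame = {min_frame_bits:.0f} bits "
      f"= {min_frame_bits / 8:.0f} bytes")        # 51.2 us -> 512 bits = 64 bytes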

DIFFERENCES BETWEEN ALOHA & CSMA/CD

The first difference is the addition of the persistence process. We need to sense the channel before we start sending the frame, by using one of the persistence processes.

The second difference is the frame transmission. In ALOHA, we first transmit the entire frame and then wait for an acknowledgment. In CSMA/CD, transmission and collision detection form a continuous process. We do not send the entire frame and then look for a collision; the station transmits and receives continuously and simultaneously.

The third difference is the sending of a short jamming signal that enforces the collision in case other stations have not yet sensed the collision.

Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA)

We need to avoid collisions on wireless networks because they cannot be detected. Carrier sense multiple access with collision avoidance (CSMA/CA) was invented for wireless networks. Collisions are avoided through the use of CSMA/CA's three strategies: the interframe space, the contention window, and acknowledgments, as shown in the figure.

Timing in CSMA/CA


Inter frame Space (IFS)

First, collisions are avoided by deferring transmission even if the channel is found idle.
When an idle channel is found, the station does not send immediately. It waits for a period
of time called the inter frame space or IFS.

Even though the channel may appear idle when it is sensed, a distant station may have
already started transmitting. The distant station's signal has not yet reached this station.
The IFS time allows the front of the transmitted signal by the distant station to reach this
station. If after the IFS time the channel is still idle, the station can send, but it still needs to
wait a time equal to the contention time. The IFS variable can also be used to prioritize
stations or frame types. For example, a station that is assigned shorter IFS has a higher
priority.

In CSMA/CA, the IFS can also be used to define the priority of a station or a frame.

Contention Window

The contention window is an amount of time divided into slots. A station that is ready to
send chooses a random number of slots as its wait time. The number of slots in the window
changes according to the binary exponential back-off strategy. This means that it is set to
one slot the first time and then doubles each time the station cannot detect an idle channel
after the IFS time. This is very similar to the p-persistent method except that a random
outcome defines the number of slots taken by the waiting station.

One interesting point about the contention window is that the station needs to sense the
channel after each time slot. However, if the station finds the channel busy, it does not
restart the process; it just stops the timer and restarts it when the channel is sensed as idle.
This gives priority to the station with the longest waiting time.

In CSMA/CA, if the station finds the channel busy, it does not restart the timer of the
contention window; it stops the timer and restarts it when the channel becomes idle.

Acknowledgment

With all these precautions, there still may be a collision resulting in destroyed data. In addition, the data may be corrupted during the transmission. The positive acknowledgment and the time-out timer can help guarantee that the receiver has received the frame.

This is the CSMA protocol with collision avoidance.


• The station that is ready to transmit senses the line by using one of the persistent strategies.

• As soon as it finds the line to be idle, the station waits for an IFS (interframe space) amount of time.

• It then waits for some random time and sends the frame.

• After sending the frame, it sets a timer and waits for the acknowledgement from the receiver.

• If the acknowledgement is received before expiry of the timer, then the transmission is successful.

• But if the transmitting station does not receive the expected acknowledgement before the timer expires, then it increments the backoff parameter, waits for the backoff time, and re-senses the line.

Controlled Access Protocols

In controlled access, the stations seek information from one another to find which station has the right to send. It allows only one node to send at a time, to avoid collision of messages on the shared medium. The three controlled-access methods are:

1. Reservation 2. Polling 3. Token Passing

Reservation

• In the reservation method, a station needs to make a reservation before sending data.

• The time line has two kinds of periods:

1. Reservation interval of fixed time length

2. Data transmission period of variable frames.

• If there are M stations, the reservation interval is divided into M slots, and each
station has one slot.


• Suppose station 1 has a frame to send; it transmits 1 bit during slot 1. No other station is allowed to transmit during this slot.

• In general, the i-th station may announce that it has a frame to send by inserting a 1 bit into the i-th slot. After all the slots have been checked, each station knows which stations wish to transmit.

• The stations which have reserved their slots transfer their frames in that order.

• After data transmission period, next reservation interval begins.

• Since everyone agrees on who goes next, there will never be any collisions.

The following figure shows a situation with five stations and a five-slot reservation frame. In the first interval, only stations 1, 3, and 4 have made reservations. In the second interval, only station 1 has made a reservation.

Polling

Polling process is similar to the roll-call performed in class. Just like the teacher, a
controller sends a message to each node in turn.

In this, one acts as a primary station(controller) and the others are secondary stations. All
data exchanges must be made through the controller.

The message sent by the controller contains the address of the node being selected for
granting access.

Although all nodes receive the message, only the addressed one responds to it and sends data, if any. If there is no data, usually a “poll reject” (NAK) message is sent back.


Problems include high overhead of the polling messages and high dependence on the
reliability of the controller.

Token Passing

In the token passing scheme, the stations are connected logically to each other in the form of a ring, and access of stations is governed by tokens.

A token is a special bit pattern or a small message, which circulates from one station to the next in some predefined order.

In a token ring, the token is passed from one station to the adjacent station in the ring, whereas in the case of a token bus, each station uses the bus to send the token to the next station in some predefined order.

In both cases, the token represents permission to send. If a station has a frame queued for transmission when it receives the token, it can send that frame before it passes the token to the next station. If it has no queued frame, it simply passes the token.

After sending a frame, each station must wait for all N stations (including itself) to send the token to their neighbors and the other N − 1 stations to send a frame, if they have one.


Problems such as duplication of the token, loss of the token, insertion of a new station, or removal of a station need to be tackled for correct and reliable operation of this scheme.

Error Detection

Error

An error is a condition in which the receiver's information does not match the sender's information. During transmission, digital signals suffer from noise that can introduce errors in the binary bits travelling from sender to receiver: a 0 bit may change to 1, or a 1 bit may change to 0. Error-detecting codes (implemented either at the data link layer or the transport layer of the OSI model) are redundant bits added to a given digital message to help us detect if any error has occurred during transmission of the message.

Basic approach used for error detection is the use of redundancy bits, where additional bits
are added to facilitate detection of errors. Some popular techniques for error detection are:

1. Simple Parity check

2. Two-dimensional Parity check


3. Checksum

4. Cyclic redundancy check

Simple Parity check

Blocks of data from the source are passed through a parity-bit generator, where a parity bit of 1 is added to the block if it contains an odd number of 1s, and a 0 is added if it contains an even number of 1s.

This scheme makes the total number of 1s even, which is why it is called even parity checking.
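A tiny Python sketch of even parity (illustrative only):

def even_parity_bit(data_bits):
    return sum(data_bits) % 2             # 1 if the data has an odd number of 1s, else 0

def has_error(codeword_bits):
    return sum(codeword_bits) % 2 != 0    # receiver: total number of 1s must be even

data = [1, 0, 1, 1, 0, 0, 0]              # three 1s, so the parity bit will be 1
codeword = data + [even_parity_bit(data)]
print(codeword, has_error(codeword))      # no error detected
codeword[2] ^= 1                          # flip one bit in transit
print(codeword, has_error(codeword))      # the single-bit error is caught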

Two-dimensional Parity check


Parity check bits are calculated for each row, which is equivalent to a simple parity check
bit. Parity check bits are also calculated for all columns, then both are sent along with the
data. At the receiving end these are compared with the parity bits calculated on the
received data.

Checksum

• In the checksum error detection scheme, the data is divided into k segments, each of m bits.

• At the sender's end the segments are added using 1's complement arithmetic to get the sum. The sum is complemented to get the checksum.

• The checksum segment is sent along with the data segments.

• At the receiver's end, all received segments are added using 1's complement arithmetic to get the sum. The sum is complemented.

• If the result is zero, the received data is accepted; otherwise it is discarded.
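A short Python sketch of this 1's-complement checksum (illustrative; the segment values are made up):

def ones_complement_sum(segments, m):
    mask = (1 << m) - 1
    total = 0
    for seg in segments:
        total += seg
        total = (total & mask) + (total >> m)      # wrap any carry back in
    return total

def make_checksum(segments, m=8):
    return ~ones_complement_sum(segments, m) & ((1 << m) - 1)

def accept(segments_plus_checksum, m=8):
    # sum of all segments plus the checksum, complemented, must be zero
    return (~ones_complement_sum(segments_plus_checksum, m) & ((1 << m) - 1)) == 0

data = [0x36, 0xF2, 0x01, 0xAC]            # four 8-bit segments
cks = make_checksum(data)
print(hex(cks), accept(data + [cks]))      # 0x29 True -> received data accepted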


Cyclic redundancy check (CRC)

• Unlike the checksum scheme, which is based on addition, CRC is based on binary division.

• In CRC, a sequence of redundant bits, called cyclic redundancy check bits, is appended to the end of the data unit so that the resulting data unit becomes exactly divisible by a second, predetermined binary number.

• At the destination, the incoming data unit is divided by the same number. If at this step there is no remainder, the data unit is assumed to be correct and is therefore accepted.

• A remainder indicates that the data unit has been damaged in transit and therefore must be rejected.
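An illustrative Python sketch of CRC by modulo-2 (XOR) division; the data and generator below are made-up example values:

def mod2_remainder(bits, divisor):
    bits = bits[:]                                    # work on a copy
    for i in range(len(bits) - len(divisor) + 1):
        if bits[i] == 1:                              # XOR the divisor in wherever a 1 leads
            for j, d in enumerate(divisor):
                bits[i + j] ^= d
    return bits[-(len(divisor) - 1):]                 # the last n-1 bits are the remainder

def crc_encode(data, divisor):
    padded = data + [0] * (len(divisor) - 1)          # append n-1 zero bits, then divide
    return data + mod2_remainder(padded, divisor)

def crc_check(codeword, divisor):
    return all(b == 0 for b in mod2_remainder(codeword, divisor))

data, divisor = [1, 0, 0, 1, 0, 0], [1, 0, 1, 1]      # generator x^3 + x + 1
codeword = crc_encode(data, divisor)
print(codeword, crc_check(codeword, divisor))         # True: no remainder, accepted
codeword[3] ^= 1                                      # a bit is damaged in transit
print(codeword, crc_check(codeword, divisor))         # False: nonzero remainder, rejected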

Error Correction

Error Correction codes are used to detect and correct the errors when data is transmitted
from the sender to the receiver.

Error Correction can be handled in two ways:

Backward error correction: Once the error is discovered, the receiver requests the sender to retransmit the entire data unit.

Forward error correction: In this case, the receiver uses an error-correcting code which automatically corrects the errors.

A single additional bit can detect the error, but cannot correct it.

For correcting the errors, one has to know the exact position of the error. For example, to correct a single-bit error in a seven-bit unit, the error correction code will determine which one of the seven bits is in error. To achieve this, we have to add some additional redundant bits.

Suppose r is the number of redundant bits and d is the total number of data bits. The number of redundant bits r can be calculated by using the formula:

2^r >= d + r + 1

The value of r is calculated by using the above formula. For example, if the value of d is 4, then the smallest value of r that satisfies the above relation is 3.

To determine the position of the bit which is in error, we use a technique developed by R. W. Hamming, the Hamming code, which can be applied to any length of data unit and uses the relationship between data bits and redundant bits.

Hamming Code

Parity bit: The bit which is appended to the original data of binary bits so that the total number of 1s is even or odd.

Even parity: To check for even parity, if the total number of 1s is even, then the

value of the parity bit is 0. If the total number of 1s occurrences is odd, then the value of the
parity bit is 1.


Odd parity: To check for odd parity, if the total number of 1s is even, then the value of the parity bit is 1. If the total number of 1s is odd, then the value of the parity bit is 0.

Algorithm of Hamming code:

Information of d bits is combined with r redundant bits to form a block of d + r bits. The location of each of the (d + r) digits is assigned a decimal value.

The r bits are placed at positions 1, 2, 4, ..., 2^(k−1), that is, the positions that are powers of 2.

At the receiving end, the parity bits are recalculated. The decimal value of the parity bits determines the position of an error.

Relationship b/w Error position & binary number.

Let's understand the concept of Hamming code through an example: Suppose the original
data is 1010 which is to be sent.

Total number of data bits d = 4. Number of redundant bits r: 2^r >= d + r + 1, that is, 2^r >= 4 + r + 1.

Therefore r = 3 satisfies the above relation. Total number of bits = d + r = 4 + 3 = 7.

Determining the position of the redundant bits

The number of redundant bits is 3. The three bits are represented by r1, r2, and r4. The positions of the redundant bits correspond to powers of 2; therefore, their positions are 1 (2^0), 2 (2^1), and 4 (2^2).


The position of r1 = 1, The position of r2 = 2 , The position of r4 = 4

Representation of Data on the addition of parity bits:

Determining the Parity bits

Determining the r1 bit: The r1 bit is calculated by performing a parity check on the bit
positions whose binary representation includes 1 in the first position.

We observe from the above figure that the bit position that includes 1 in the first position
are 1, 3, 5, 7. Now, we perform the even-parity check at these bit positions. The total
number of 1 at these bit positions corresponding to r1 is even, therefore, the value of the r1
bit is 0.

Determining r2 bit: The r2 bit is calculated by performing a parity check on the bit
positions whose binary representation includes 1 in the second position.


We observe from the above figure that the bit positions that include 1 in the second
position are 2, 3, 6, 7. Now, we perform the even-parity check at these bit positions. The
total number of 1 at these bit positions corresponding to r2 is odd, therefore, the value of
the r2 bit is 1.

Determining r4 bit: The r4 bit is calculated by performing a parity check on the bit
positions whose binary representation includes 1 in the third position.

We observe from the above figure that the bit positions that includes 1 in the third position
are 4, 5, 6, 7. Now, we perform the even-parity check at these bit positions. The total
number of 1 at these bit positions corresponding to r4 is even, therefore, the value of the r4
bit is 0.

Data transferred is given below:

Suppose the 4th bit is changed from 0 to 1 at the receiving end, then parity bits are
recalculated.

R1 bit

The bit positions of the r1 bit are 1,3,5,7


We observe from the above figure that the bits at the positions covered by r1 are 1100. Now we perform the even-parity check: the total number of 1s at these positions is even. Therefore, the value of r1 is 0.

R2 bit

The bit positions of r2 bit are 2,3,6,7.

We observe from the above figure that the bits at the positions covered by r2 are 1001. Now we perform the even-parity check: the total number of 1s at these positions is even. Therefore, the value of r2 is 0.

R4 bit

The bit positions of r4 bit are 4,5,6,7.

We observe from the above figure that the bits at the positions covered by r4 are 1011. Now we perform the even-parity check: the total number of 1s at these positions is odd. Therefore, the value of r4 is 1.


The binary representation of redundant bits, i.e., r4r2r1 is 100, and its corresponding
decimal value is 4. Therefore, the error occurs in a 4th bit position. The bit value must be
changed from 1 to 0 to correct the error.
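A small Python sketch (not part of the notes) that reproduces this example: the codeword for data 1010 is built as in the text, bit 4 is flipped, and the recomputed parity bits give the error position.

# positions 1..7 (index 0 unused): pos1=r1, pos2=r2, pos3=d, pos4=r4, pos5..7=d
sent     = [None, 0, 1, 0, 0, 1, 0, 1]     # codeword for data 1010 with r1=0, r2=1, r4=0
received = sent[:]
received[4] ^= 1                           # the 4th bit is changed in transit, as in the example

def syndrome(b):
    r1 = b[1] ^ b[3] ^ b[5] ^ b[7]         # positions whose binary form has a 1 in place 1
    r2 = b[2] ^ b[3] ^ b[6] ^ b[7]         # positions whose binary form has a 1 in place 2
    r4 = b[4] ^ b[5] ^ b[6] ^ b[7]         # positions whose binary form has a 1 in place 4
    return 4 * r4 + 2 * r2 + r1            # r4 r2 r1 read as a binary number

pos = syndrome(received)                   # -> 4
if pos:
    received[pos] ^= 1                     # flip the erroneous bit back
print("error at position", pos, "- corrected:", received == sent)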

Wired LANs: Ethernet

In 1985, the Computer Society of the IEEE started a project, called Project 802, to set
standards to enable intercommunication among equipment from a variety of
manufacturers. Project 802 is a way of specifying functions of the physical layer and the
data link layer of major LAN protocols.

The relationship of the 802 Standard to the traditional OSI model is shown in the figure below. The IEEE has subdivided the data link layer into two sublayers: logical link control (LLC) and media access control (MAC).

IEEE has also created several physical layer standards for different LAN protocols

IEEE standard for LANs

STANDARD ETHERNET

The original Ethernet was created in 1976 at Xerox's Palo Alto Research Center (PARC). Since then, it has gone through four generations.


Standard Ethernet (10 Mbps), Fast Ethernet (100 Mbps), Gigabit Ethernet (1 Gbps), and Ten-Gigabit Ethernet (10 Gbps).

We briefly discuss the Standard (or traditional) Ethernet in this section

Ethernet evolution through four generations

MAC Sublayer

In Standard Ethernet, the MAC sublayer governs the operation of the access method. It also
frames data received from the upper layer and passes them to the physical layer.

Frame Format

The Ethernet frame contains seven fields: preamble, SFD, DA, SA, length or type of protocol
data unit (PDU), upper-layer data, and the CRC. Ethernet does not provide any mechanism
for acknowledging received frames, making it what is known as an unreliable medium.
Acknowledgments must be implemented at the higher layers. The format of the MAC frame
is shown in below figure

802.3 MAC frame

Preamble. The first field of the 802.3 frame contains 7 bytes (56 bits) of alternating 0s and
1s that alerts the receiving system to the coming frame and enables it to synchronize its
input timing. The pattern provides only an alert and a timing pulse. The 56-bit pattern allows the stations to miss some bits at the beginning of the frame. The preamble is actually
added at the physical layer and is not (formally) part of the frame.

Start frame delimiter (SFD). The second field (1 byte: 10101011) signals the beginning of the frame. The SFD warns the station or stations that this is the last chance for synchronization. The last 2 bits are 11 and alert the receiver that the next field is the destination address.

Destination address (DA). The DA field is 6 bytes and contains the physical address of the
destination station or stations to receive the packet.

Source address (SA). The SA field is also 6 bytes and contains the physical address of the sender of the packet.

Length or type. This field is defined as a type field or length field. The original Ethernet
used this field as the type field to define the upper-layer protocol using the MAC frame. The
IEEE standard used it as the length field to define the number of bytes in the data field.
Both uses are common today.

Data. This field carries data encapsulated from the upper-layer protocols. It is a minimum of 46 and a maximum of 1500 bytes.

CRC. The last field contains error detection information, in this case a CRC-32.

Frame Length

Ethernet has imposed restrictions on both the minimum and maximum lengths of a frame, as shown in the figure below.


Minimum and maximum lengths

An Ethernet frame needs to have a minimum length of 512 bits or 64 bytes. Part of this
length is the header and the trailer. If we count 18 bytes of header and trailer (6 bytes of
source address, 6 bytes of destination address, 2 bytes of length or type, and 4 bytes of
CRC), then the minimum length of data from the upper layer is 64 - 18 = 46 bytes. If the
upper-layer packet is less than 46 bytes, padding is added to make up the difference

The standard defines the maximum length of a frame (without the preamble and SFD fields) as 1518 bytes. If we subtract the 18 bytes of header and trailer, the maximum length of the payload is 1500 bytes.
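The same length arithmetic as a small Python sketch (illustrative only):

HEADER_TRAILER = 6 + 6 + 2 + 4            # DA + SA + length/type + CRC = 18 bytes
MIN_FRAME, MAX_FRAME = 64, 1518           # limits without preamble and SFD

MIN_DATA = MIN_FRAME - HEADER_TRAILER     # 46 bytes
MAX_DATA = MAX_FRAME - HEADER_TRAILER     # 1500 bytes

def padding_needed(upper_layer_bytes):
    # bytes of padding added when the upper-layer packet is shorter than 46 bytes
    return max(0, MIN_DATA - upper_layer_bytes)

print(MIN_DATA, MAX_DATA, padding_needed(20))    # 46 1500 26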

The maximum length restriction has two historical reasons.

First, memory was very expensive when Ethernet was designed: a maximum length
restriction helped to reduce the size of the buffer.

Second, the maximum length restriction prevents one station from monopolizing the
shared medium, blocking other stations that have data to send.

Addressing

The Ethernet address is 6 bytes (48 bits), normally written in hexadecimal notation, with a colon between the bytes.

Example of an Ethernet address in hexadecimal notation

Unicast, Multicast, and Broadcast Addresses. A source address is always a unicast address; the frame comes from only one station. The destination address, however, can be unicast, multicast, or broadcast. The figure below shows how to distinguish a unicast address from a multicast address.


If the least significant bit of the first byte in a destination address is 0, the address is
unicast; otherwise, it is multicast.

Unicast and multicast addresses

A unicast destination address defines only one recipient; the relationship between the
sender and the receiver is one-to-one.

A multicast destination address defines a group of addresses; the relationship between the
sender and the receivers is one-to-many.

The broadcast address is a special case of the multicast address; the recipients are all the stations on the LAN. A broadcast destination address is forty-eight 1s.
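A small Python sketch of this rule (illustrative; the sample addresses are examples only):

def address_type(mac: str) -> str:
    if mac.lower() == "ff:ff:ff:ff:ff:ff":
        return "broadcast"                       # all forty-eight bits are 1
    first_byte = int(mac.split(":")[0], 16)
    return "multicast" if first_byte & 1 else "unicast"   # test the least significant bit

print(address_type("4A:30:10:21:10:1A"))         # unicast   (0x4A is even)
print(address_type("47:20:1B:2E:08:EE"))         # multicast (0x47 is odd)
print(address_type("FF:FF:FF:FF:FF:FF"))         # broadcast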

Access Method: CSMA/CD. Standard Ethernet uses 1-persistent CSMA/CD.

Slot Time. In an Ethernet network,

Slot time = round-trip time + time required to send the jam sequence

The slot time in Ethernet is defined in bits: it is the time required for a station to send 512 bits. This means that the actual slot time depends on the data rate; for traditional 10-Mbps Ethernet it is 51.2 μs.

Slot Time and Maximum Network Length. There is a relationship between the slot time and the maximum length of the network (collision domain). It depends on the propagation speed of the signal in the particular medium.

In most transmission media, the signal propagates at 2 × 10^8 m/s (two-thirds of the rate for propagation in air).

For traditional Ethernet, we calculate

MaxLength = PropagationSpeed × (SlotTime / 2)

MaxLength = (2 × 10^8 m/s) × (51.2 × 10^-6 s) / 2 = 5120 m

Of course, we need to consider the delay times in repeaters and interfaces, and the time required to send the jam sequence. These reduce the maximum length of a traditional Ethernet network to 2500 m, just 48 percent of the theoretical calculation:

MaxLength = 2500 m

Physical Layer

The Standard Ethernet defines several physical layer implementations; four of the most common are shown in the figure.

Encoding and Decoding

All standard implementations use digital signaling (baseband) at 10 Mbps. At the sender,
data are converted to a digital signal using the Manchester scheme; at the receiver, the
received signal is interpreted as Manchester and decoded into data. Manchester encoding is
self-synchronous, providing a transition at each bit interval. Figure shows the encoding
scheme for Standard Ethernet

In Manchester encoding, the transition at the middle of the bit is used for synchronization


10Base5: Thick Ethernet

10Base5 implementation

The first implementation is called 10Base5, thick Ethernet, or Thicknet. 10Base5 was the first Ethernet specification to use a bus topology with an external transceiver (transmitter/receiver) connected via a tap to a thick coaxial cable. The figure shows a schematic diagram of a 10Base5 implementation.

10Base2: Thin Ethernet

10Base2 implementation

The second implementation is called 10Base2, thin Ethernet, or Cheapernet. 10Base2 also uses a bus topology, but the cable is much thinner and more flexible. The figure shows the schematic diagram of a 10Base2 implementation. Thin coaxial cable is less expensive than thick coaxial, and installation is simpler because the thin coaxial cable is very flexible. However, the length of each segment cannot exceed 185 m (close to 200 m) due to the high level of attenuation in thin coaxial cable.

10Base-T: Twisted-Pair Ethernet

10Base-T implementation

The third implementation is called 10Base-T or twisted-pair Ethernet. It uses a physical star topology. The stations are connected to a hub via two pairs of twisted cable, as shown in the figure.

The maximum length of the twisted cable here is defined as 100 m, to minimize the effect of
attenuation in the twisted cable

Although there are several types of optical fiber 10-Mbps Ethernet, the most common is called 10Base-F. 10Base-F uses a star topology to connect stations to a hub. The stations are connected to the hub using two fiber-optic cables, as shown in the figure.

10Base-F implementation


UNIT-III

Network Layer: Network Layer Design issues, store and forward packet switching
connection less and connection oriented networks-routing algorithms-optimality principle,
shortest path, flooding, Distance Vector Routing, Count to Infinity Problem, Link State
Routing, Path Vector Routing, Hierarchical Routing; Congestion control algorithms, IP
addresses, CIDR, Subnetting, SuperNetting, IPv4, Packet Fragmentation, IPv6 Protocol,
Transition from IPv4 to IPv6, ARP, RARP.

Network Layer Design Issues

1. Store-and-forward packet switching

2. Services provided to transport layer

3. Implementation of connectionless service

4. Implementation of connection-oriented service

5. Comparison of virtual-circuit and datagram networks

1 Store-and-forward packet switching


A host with a packet to send transmits it to the nearest router, either on its own LAN or
over a point-to-point link to the ISP. The packet is stored there until it has fully arrived and
the link has finished its processing by verifying the checksum. Then it is forwarded to the
next router along the path until it reaches the destination host, where it is delivered. This
mechanism is store-and-forward packet switching.

2 Services provided to transport layer

The network layer provides services to the transport layer at the network layer/transport layer interface. The services need to be carefully designed with the following goals in mind:

1. Services independent of router technology.

2. Transport layer shielded from the number, type, and topology of routers.

3. Network addresses available to the transport layer use a uniform numbering plan, even across LANs and WANs.

3 Implementation of connectionless service

If connectionless service is offered, packets are injected into the network individually and
routed independently of each other. No advance setup is needed. In this context, the
packets are frequently called datagrams (in analogy with telegrams) and the network is
called a datagram network.

A’s table (initially) A’s table (later) C’s Table E’s Table

Let us assume for this example that the message is four times longer than the maximum packet size, so the network layer has to break it into four packets, 1, 2, 3, and 4, and send each of them in turn to router A.

Every router has an internal table telling it where to send packets for each possible destination. Each table entry is a pair (destination, outgoing line). Only directly connected lines can be used.

A’s initial routing table is shown in the figure under the label ‘‘initially.’’

At A, packets 1, 2, and 3 are stored briefly, having arrived on the incoming link.

Then each packet is forwarded according to A’s table, onto the outgoing link to C

within a new frame. Packet 1 is then forwarded to E and then to F.

However, something different happens to packet 4. When it gets to A, it is sent to router B, even though it is also destined for F. For some reason (a traffic jam along the ACE path), A decided to send packet 4 via a different route than that of the first three packets. Router A updated its routing table, as shown under the label ‘‘later.’’

The algorithm that manages the tables and makes the routing decisions is called the
routing algorithm.

4 Implementation of connection-oriented service

A’s table C’s Table E’s Table

If connection-oriented service is used, a path from the source router all the way to the destination router must be established before any data packets can be sent. This connection is called a VC (virtual circuit), and the network is called a virtual-circuit network.

When a connection is established, a route from the source machine to the destination
machine is chosen as part of the connection setup and stored in tables inside the routers.
That route is used for all traffic flowing over the connection, exactly the same way that the
telephone system works. When the connection is released, the virtual circuit is also
terminated. With connection-oriented service, each packet carries an identifier telling
which virtual circuit it belongs to.

As an example, consider the situation shown in Figure. Here, host H1 has established
connection 1 with host H2. This connection is remembered as the first entry in each of the
routing tables. The first line of A’s table says that if a packet bearing connection identifier 1
comes in from H1, it is to be sent to router C and given connection identifier 1. Similarly,
the first entry at C routes the packet to E, also with connection identifier 1.

Now let us consider what happens if H3 also wants to establish a connection to H2. It
chooses connection identifier 1 (because it is initiating the connection and this is its only
connection) and tells the network to establish the virtual circuit.

This leads to the second row in the tables. Note that we have a conflict here because
although A can easily distinguish connection 1 packets from H1 from connection 1 packets
from H3, C cannot do this. For this reason, A assigns a different connection identifier to the
outgoing traffic for the second connection. Avoiding conflicts of this kind is why routers
need the ability to replace connection identifiers in outgoing packets.

In some contexts, this process is called label switching. An example of a connection-oriented network service is MPLS (Multi Protocol Label Switching).

5 Comparison of virtual-circuit and datagram networks

Routing Algorithms

The main function of the network layer is routing packets from the source machine to the destination machine.

There are two processes inside router:

a) One of them handles each packet as it arrives, looking up the outgoing line to use for it in the routing table. This process is forwarding.

b) The other process is responsible for filling in and updating the routing tables. That is where the routing algorithm comes into play. This process is routing.

Regardless of whether routes are chosen independently for each packet or only when new connections are established, certain properties are desirable in a routing algorithm: correctness, simplicity, robustness, stability, fairness, and optimality.

Routing algorithms can be grouped into two major classes:

1) nonadaptive (Static Routing)

2) adaptive. (Dynamic Routing)


Nonadaptive algorithms do not base their routing decisions on measurements or estimates of the current traffic and topology. Instead, the choice of the route to use to get from I to J is computed in advance, offline, and downloaded to the routers when the network is booted. This procedure is sometimes called static routing.

Adaptive algorithms, in contrast, change their routing decisions to reflect changes in the topology, and usually the traffic as well.

Adaptive algorithms differ in

1) Where they get their information (e.g., locally, from adjacent routers, or from all
routers),

2) When they change the routes (e.g., every ∆T seconds, when the load changes, or when the topology changes), and

3) What metric is used for optimization (e.g., distance, number of hops, or estimated transit time).

This procedure is called dynamic routing

Different Routing Algorithms

• Optimality principle

• Shortest path algorithm

• Flooding

• Distance vector routing

• Link state routing

• Hierarchical Routing

The Optimality Principle

One can make a general statement about optimal routes without regard to network topology or traffic. This statement is known as the optimality principle. It states that if router J is on the optimal path from router I to router K, then the optimal path from J to K also falls along the same route.


As a direct consequence of the optimality principle, we can see that the set of optimal routes from all sources to a given destination forms a tree rooted at the destination. Such a tree is called a sink tree. The goal of all routing algorithms is to discover and use the sink trees for all routers.

(a) A network. (b) A sink tree for router B.

Shortest Path Routing (Dijkstra’s)

The idea is to build a graph of the subnet, with each node of the graph representing a router and each arc of the graph representing a communication line or link.

To choose a route between a given pair of routers, the algorithm just finds the shortest path between them on the graph.

1. Start with the local node (router) as the root of the tree. Assign a cost of 0 to this node and make it the first permanent node.

2. Examine each neighbor of the node that was the last permanent node.

3. Assign a cumulative cost to each node and make it tentative

4. Among the list of tentative nodes

a. Find the node with the smallest cost and make it permanent.

b. If a node can be reached from more than one route, then select the route with the shortest cumulative cost.


5. Repeat steps 2 to 4 until every node becomes permanent
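The steps above can be expressed compactly in code. The following Python sketch is an illustrative implementation of the procedure under the assumption that the network is given as a dictionary of neighbor costs; the topology, node names, and costs used in the example are made up and do not come from the text.

import heapq

def dijkstra(graph, source):
    # graph: {node: {neighbor: cost}}; returns (cost, previous hop) for every node
    cost = {node: float('inf') for node in graph}
    prev = {node: None for node in graph}
    cost[source] = 0
    permanent = set()
    tentative = [(0, source)]                    # step 1: the root is the first candidate
    while tentative:
        c, node = heapq.heappop(tentative)       # step 4a: smallest tentative cost
        if node in permanent:
            continue
        permanent.add(node)                      # make it permanent
        for neighbor, link_cost in graph[node].items():   # step 2: examine neighbors
            new_cost = c + link_cost                       # step 3: cumulative cost
            if new_cost < cost[neighbor]:                  # step 4b: keep the cheaper route
                cost[neighbor] = new_cost
                prev[neighbor] = node
                heapq.heappush(tentative, (new_cost, neighbor))
    return cost, prev

# Example with a hypothetical four-router topology:
network = {'A': {'B': 2, 'C': 5}, 'B': {'A': 2, 'C': 1, 'D': 4},
           'C': {'A': 5, 'B': 1, 'D': 1}, 'D': {'B': 4, 'C': 1}}
print(dijkstra(network, 'A'))   # shortest costs and previous hops from router A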


Flooding

• Another static algorithm is flooding, in which every incoming packet is sent out on
every outgoing line except the one it arrived on.

• Flooding obviously generates vast numbers of duplicate packets, in fact, an infinite


number unless some measures are taken to damp the process.

• One such measure is to have a hop counter contained in the header of each
packet, which is decremented at each hop, with the packet being discarded when the
counter reaches zero. Ideally, the hop counter should be initialized to the length of the path
from source to destination.

• A variation of flooding that is slightly more practical is selective flooding. In this


algorithm the routers do not send every incoming packet out on every line, only on those
lines that are going approximately in the right direction.

• Flooding is not practical in most applications.
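A minimal Python sketch of the hop-counter safeguard described above; the packet fields and the send callback are invented for illustration and do not correspond to any real router API.

def flood(packet, arrival_line, outgoing_lines, send):
    # Discard the packet once its hop counter reaches zero.
    if packet['hops'] <= 0:
        return
    packet = dict(packet, hops=packet['hops'] - 1)   # decremented at each hop
    # Send on every outgoing line except the one the packet arrived on.
    for line in outgoing_lines:
        if line != arrival_line:
            send(line, packet)

# Example: a packet allowed to travel at most 4 more hops.
flood({'data': 'hello', 'hops': 4}, arrival_line=1,
      outgoing_lines=[0, 1, 2, 3],
      send=lambda line, pkt: print('out on line', line, pkt))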

Intra- and Inter domain Routing

An autonomous system (AS) is a group of networks and routers under the authority of a single administration.

Routing inside an autonomous system is referred to as intra domain routing (DISTANCE VECTOR, LINK STATE).


Routing between autonomous systems is referred to as inter domain routing (PATH VECTOR). Each autonomous system can choose one or more intra domain routing protocols to handle routing inside the autonomous system. However, only one inter domain routing protocol handles routing between autonomous systems.

Distance Vector Routing

In distance vector routing, the least-cost route between any two nodes is the route with minimum distance. In this protocol, as the name implies, each node maintains a vector (table) of minimum distances to every node.

Distance vector routing involves three main operations: initialization, sharing, and updating.

Initialization

Each node can know only the distance between itself and its immediate neighbors, those
directly connected to it. So for the moment, we assume that each node can send a message
to the immediate neighbors and find the distance between itself and these neighbors.
Below fig shows the initial tables for each node. The distance for any entry that is not a
neighbor is marked as infinite (unreachable).

Initialization of tables in distance vector routing

Sharing

The whole idea of distance vector routing is the sharing of information between neighbors. Although node A does not know about node E, node C does. So if node C shares its routing table with A, node A can also know how to reach node E. On the other hand, node C does not know how to reach node D, but node A does. If node A shares its routing table with node C, node C also knows how to reach node D. In other words, nodes A and C, as immediate neighbors, can improve their routing tables if they help each other.

NOTE: In distance vector routing, each node shares its routing table with its immediate neighbors periodically and when there is a change.

Updating

When a node receives a two-column table from a neighbor, it needs to update its routing table. Updating takes three steps:

1. The receiving node needs to add the cost between itself and the sending node to each value in the second column (x + y).

2. If the receiving node uses information from any row, the sending node is the next node in the route.

3. The receiving node needs to compare each row of its old table with the corresponding row of the modified version of the received table.

a. If the next-node entry is different, the receiving node chooses the row with the smaller cost. If there is a tie, the old one is kept.

b. If the next-node entry is the same, the receiving node chooses the new row.

For example, suppose node C has previously advertised a route to node X with distance 3.
Suppose that now there is no path between C and X; node C now advertises this route with
a distance of infinity. Node A must not ignore this value even though its old entry is smaller.
The old route does not exist anymore. The new route has a distance of infinity.
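The three updating steps can be sketched in Python as follows. The table layout (each destination mapped to a (cost, next hop) pair), the node names, and the costs are assumptions made for this example, not part of any standard.

INF = float('inf')

def dv_update(self_name, my_table, neighbor, link_cost, neighbor_table):
    # my_table / neighbor_table: {destination: (cost, next_hop)}
    updated = dict(my_table)
    for dest, (adv_cost, _) in neighbor_table.items():
        if dest == self_name:
            continue                              # ignore routes back to ourselves
        new_cost = link_cost + adv_cost           # step 1: add the cost to the sender (x + y)
        old_cost, old_next = updated.get(dest, (INF, None))
        if old_next == neighbor:
            # step 3b: same next-node entry -> take the new row, even if it is worse
            # (e.g. the neighbor now advertises infinity for a broken route)
            updated[dest] = (new_cost, neighbor)
        elif new_cost < old_cost:
            # step 3a: different next-node entry -> keep the row with the smaller cost
            updated[dest] = (new_cost, neighbor)  # step 2: the sender becomes the next node
    return updated

# Example with made-up costs: node A hears from neighbor C over a link of cost 2.
A = {'B': (5, 'B'), 'C': (2, 'C'), 'D': (3, 'D'), 'E': (INF, None)}
C = {'A': (2, 'A'), 'B': (4, 'B'), 'D': (INF, None), 'E': (4, 'E')}
print(dv_update('A', A, 'C', 2, C))   # A now reaches E via C with cost 6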


Updating in distance vector routing

Final Diagram

When to Share


The question now is: when does a node send its partial routing table (only two columns) to all its immediate neighbors? The table is sent both periodically and when there is a change in the table.

Periodic Update A node sends its routing table, normally every 30 s, in a periodic update. The period depends on the protocol that is using distance vector routing.

Triggered Update A node sends its two-column routing table to its neighbors anytime there is a change in its routing table. This is called a triggered update. The change can result from the following:

1. A node receives a table from a neighbor, resulting in changes in its own table after updating.

2. A node detects some failure in the neighboring links, which results in a distance change to infinity.

Two-node instability

Three-node instability


SOLUTIONS FOR INSTABILITY

1. Defining Infinity: Redefine infinity to a smaller number, such as 100. For our previous scenario, the system will be stable in less than 20 updates. As a matter of fact, most implementations of the distance vector protocol define the distance between each node to be 1 and define 16 as infinity. However, this means that distance vector routing cannot be used in large systems. The size of the network, in each direction, cannot exceed 15 hops.

2. Split Horizon: In this strategy, instead of flooding the table through each interface, each node sends only part of its table through each interface. If, according to its table, node B thinks that the optimum route to reach X is via A, it does not need to advertise this piece of information to A; the information has come from A (A already knows). Taking information from node A, modifying it, and sending it back to node A creates the confusion. In our scenario, node B eliminates the last line of its routing table before it sends it to A. In this case, node A keeps the value of infinity as the distance to X. Later, when node A sends its routing table to B, node B also corrects its routing table. The system becomes stable after the first update: both nodes A and B know that X is not reachable.

3. Split Horizon and Poison Reverse: Using the split horizon strategy has one drawback. Normally, the distance vector protocol uses a timer, and if there is no news about a route, the node deletes the route from its table. When node B in the previous scenario eliminates the route to X from its advertisement to A, node A cannot tell whether this is due to the split horizon strategy (the source of the information was A) or because B has not received any news about X recently. The split horizon strategy can be combined with the poison reverse strategy. Node B can still advertise the value for X, but if the source of information is A, it can replace the distance with infinity as a warning: "Do not use this value; what I know about this route comes from you."

The Count-to-Infinity Problem


Link State Routing

Link state routing is based on the assumption that, although the global knowledge about the topology is not clear, each node has partial knowledge: it knows the state (type, condition, and cost) of its links. In other words, the whole topology can be compiled from the partial knowledge of each node.

Building Routing Tables

1. Creation of the states of the links by each node, called the link state packet (LSP).

2. Dissemination of LSPs to every other router, called flooding, in an efficient and reliable way.

3. Formation of a shortest path tree for each node.

4. Calculation of a routing table based on the shortest path tree

I. Creation of Link State Packet (LSP)

A link state packet can carry a large amount of information. For the moment, we assume
that it carries a minimum amount of data: the node identity, the list of links, a sequence
number, and age. The first two, node identity and the list of links, are needed to make the
topology. The third, sequence number, facilitates flooding and distinguishes new LSPs from
old ones. The fourth, age, prevents old LSPs from remaining in the domain for a long time.

LSPs are generated on two occasions:


1. When there is a change in the topology of the domain

2. On a periodic basis: The period in this case is much longer compared to distance vector routing. The timer set for periodic dissemination is normally in the range of 60 min or 2 h, based on the implementation. A longer period ensures that flooding does not create too much traffic on the network.

II. Flooding of LSPs:

After a node has prepared an LSP, it must be disseminated to all other nodes, not only to its neighbors. The process is called flooding and is based on the following:

1. The creating node sends a copy of the LSP out of each interface

2. A node that receives an LSP compares it with the copy it may already have. If the newly arrived LSP is older than the one it has (found by checking the sequence number), it discards the LSP. If it is newer, the node does the following:

a. It discards the old LSP and keeps the new one.

b. It sends a copy of it out of each interface except the one from which the packet
arrived. This guarantees that flooding stops somewhere in the domain (where a node has
only one interface).

III. Formation of Shortest Path Tree: Dijkstra Algorithm

A shortest path tree is a tree in which the path between the root and every other node is
the shortest.


The Dijkstra algorithm creates a shortest path tree from a graph. The algorithm divides the
nodes into two sets: tentative and permanent. It finds the neighbors of a current node,
makes them tentative, examines them, and if they pass the criteria, makes them permanent.

IV. Calculation of a routing table

routing table for node A

Path Vector Routing


Distance vector and link state routing are both intra domain routing protocols. They can be used inside an autonomous system, but not between autonomous systems. These two protocols are not suitable for inter domain routing, mostly because of scalability. Both of these routing protocols become intractable when the domain of operation becomes large. Distance vector routing is subject to instability in the domain of operation. Link state routing needs a huge amount of resources to calculate routing tables. It also creates heavy traffic because of flooding. There is a need for a third routing protocol, which we call path vector routing.

Path vector routing proved to be useful for inter domain routing. The principle of path
vector routing is similar to that of distance vector routing. In path vector routing, we
assume that there is one node (there can be more, but one is enough for our conceptual
discussion) in each AS that acts on behalf of the entire AS. Let us call it the speaker node.
The speaker node in an AS creates a routing table and advertises it to speaker nodes in the
neighboring ASs. The idea is the same as for distance vector routing except that only
speaker nodes in each AS can communicate with each other. However, what is advertised is
different. A speaker node advertises the path, not the metric of the nodes, in its
autonomous system or other autonomous systems

Initialization

Initial routing tables in path vector routing


Sharing

Just as in distance vector routing, in path vector routing a speaker in an autonomous system shares its table with its immediate neighbors. In the figure, node A1 shares its table with nodes B1 and C1. Node C1 shares its table with nodes D1, B1, and A1. Node B1 shares its table with C1 and A1. Node D1 shares its table with C1.

Updating

When a speaker node receives a two-column table from a neighbor, it updates its own table by adding the nodes that are not in its routing table and adding its own autonomous system and the autonomous system that sent the table. After a while, each speaker has a table and knows how to reach each node in other ASs.

a) Loop prevention. The instability of distance vector routing and the creation of loops can be avoided in path vector routing. When a router receives a message, it checks to see if its AS is in the path list to the destination. If it is, looping is involved and the message is ignored (see the sketch after this list).

b) Policy routing. Policy routing can be easily implemented through path vector routing. When a router receives a message, it can check the path. If one of the ASs listed in the path is against its policy, it can ignore that path and that destination. It does not update its routing table with this path, and it does not send this message to its neighbors.


c) Optimum path. What is the optimum path in path vector routing? We are looking for a path to a destination that is the best for the organization that runs the AS. One system may use RIP, which defines hop count as the metric; another may use OSPF with minimum delay defined as the metric. In our previous figure, each AS may have more than one path to a destination. For example, a path from AS4 to AS1 can be AS4-AS3-AS2-AS1, or it can be AS4-AS3-AS1. For the tables, we chose the one that had the smaller number of ASs, but this is not always the case. Other criteria, such as security, safety, and reliability, can also be applied.
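As referenced in item a) above, here is a small Python sketch of the loop-prevention and policy checks a speaker node might apply to an advertised path. The data layout (a list of AS names) and the banned-AS policy are invented for illustration and are not taken from any particular protocol implementation.

def accept_advertisement(my_as, banned_as_list, advertised_path):
    # advertised_path is the list of ASs the route traverses, e.g. ['AS3', 'AS2', 'AS1'].
    if my_as in advertised_path:
        return False            # loop prevention: our own AS already appears in the path
    if any(asn in banned_as_list for asn in advertised_path):
        return False            # policy routing: the path crosses an AS we refuse to use
    return True                 # acceptable; prepend my_as before re-advertising it

print(accept_advertisement('AS4', ['AS5'], ['AS3', 'AS2', 'AS1']))   # True
print(accept_advertisement('AS4', ['AS5'], ['AS3', 'AS4', 'AS1']))   # False (loop)
print(accept_advertisement('AS4', ['AS2'], ['AS3', 'AS2', 'AS1']))   # False (policy)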

Hierarchical Routing

As networks grow in size, the router routing tables grow proportionally. Not only is router
memory consumed by ever-increasing tables, but more CPU time is needed to scan them
and more bandwidth is needed to send status reports about them.

At a certain point, the network may grow to the point where it is no longer feasible for
every router to have an entry for every other router, so the routing will have to be done
hierarchically, as it is in the telephone network.

When hierarchical routing is used, the routers are divided into what we will call regions.
Each router knows all the details about how to route packets to destinations within its own
region but knows nothing about the internal structure of other regions.

For huge networks, a two-level hierarchy may be insufficient; it may be necessary to group the regions into clusters, the clusters into zones, the zones into groups, and so on, until we run out of names for aggregations.


When a single network becomes very large, an interesting question is ‘‘how many levels should the hierarchy have?’’

For example, consider a network with 720 routers. If there is no hierarchy, each router needs 720 routing table entries.

If the network is partitioned into 24 regions of 30 routers each, each router needs 30 local entries plus 23 remote entries, for a total of 53 entries.

If a three-level hierarchy is chosen, with 8 clusters each containing 9 regions of 10 routers, each router needs 10 entries for local routers, 8 entries for routing to other regions within its own cluster, and 7 entries for distant clusters, for a total of 25 entries.

Kamoun and Kleinrock (1979) discovered that the optimal number of levels for an N-router network is ln N, requiring a total of e ln N entries per router.
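A short Python check of the table sizes quoted above for the 720-router example, together with the Kamoun and Kleinrock estimate; the grouping numbers are the ones used in the text.

import math

N = 720

flat = N                                 # no hierarchy: one entry per router
two_level = 30 + (24 - 1)                # 30 local entries + 23 remote regions = 53
three_level = 10 + (9 - 1) + (8 - 1)     # 10 local + 8 regions in cluster + 7 clusters = 25

print(flat, two_level, three_level)      # 720 53 25
print(math.log(N))                       # optimal number of levels, ln N, about 6.58
print(math.e * math.log(N))              # about e ln N = 17.9 entries per router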

CONGESTION CONTROL ALGORITHMS

Too many packets present in (a part of) the network cause packet delay and loss that degrade performance. This situation is called congestion.

The network and transport layers share the responsibility for handling congestion. Since
congestion occurs within the network, it is the network layer that directly experiences it
and must ultimately determine what to do with the excess packets.


However, the most effective way to control congestion is to reduce the load that the
transport layer is placing on the network. This requires the network and transport layers
to work together. In this chapter we will look at the network aspects of congestion.

When too much traffic is offered, congestion sets in and performance degrades sharply.

The figure above depicts the onset of congestion. When the number of packets hosts send into the network is well within its carrying capacity, the number delivered is proportional to the number sent. If twice as many are sent, twice as many are delivered. However, as the offered load approaches the carrying capacity, bursts of traffic occasionally fill up the buffers inside routers and some packets are lost. These lost packets consume some of the capacity, so the number of delivered packets falls below the ideal curve. The network is now congested. Unless the network is well designed, it may experience a congestion collapse.

Difference between congestion control and flow control

Congestion control has to do with making sure the network is able to carry the offered
traffic. It is a global issue, involving the behavior of all the hosts and routers.

Flow control, in contrast, relates to the traffic between a particular sender and a particular
receiver. Its job is to make sure that a fast sender cannot continually transmit data faster
than the receiver is able to absorb it.

To see the difference between these two concepts, consider a network made up of 100-Gbps fiber optic links on which a supercomputer is trying to force feed a large file to a personal computer that is capable of handling only 1 Gbps. Although there is no congestion (the network itself is not in trouble), flow control is needed to force the supercomputer to stop frequently to give the personal computer a chance to breathe.

At the other extreme, consider a network with 1-Mbps lines and 1000 large computers, half
of which are trying to transfer files at 100 kbps to the other half. Here, the problem is not
that of fast senders overpowering slow receivers, but that the total offered traffic exceeds
what the network can handle.

The reason congestion control and flow control are often confused is that the best way to
handle both problems is to get the host to slow down. Thus, a host can get a ‘‘slow down’’
message either because the receiver cannot handle the load or because the network cannot
handle it.

Several techniques can be employed. These include:


1. Warning bit

2. Choke packets

3. Load shedding

4. Random early discard

5. Traffic shaping

The first 3 deal with congestion detection and recovery. The last 2 deal with

Congestion avoidance

Warning Bit

1. A special bit in the packet header is set by the router to warn the source when congestion is detected.

2. The bit is copied and piggy-backed on the ACK and sent to the sender.

3. The sender monitors the number of ACK packets it receives with the warning bit set
and adjusts its transmission rate accordingly.

Choke Packets

1. A more direct way of telling the source to slow down.

2. A choke packet is a control packet generated at a congested node and transmitted to


restrict traffic flow.

3. The source, on receiving the choke packet must reduce its transmission rate by a
certain percentage.

4. An example of a choke packet is the ICMP Source Quench packet.

Hop-by-Hop Choke Packets

1. Over long distances or at high speeds, choke packets are not very effective.

2. A more efficient method is to send choke packets hop-by-hop.

3. This requires each hop to reduce its transmission even before the choke packet arrives at the source.

Load Shedding

1. When buffers become full, routers simply discard packets.


2. Which packet is chosen to be the victim depends on the application and on the error strategy used in the data link layer.

3. For a file transfer, for example, older packets cannot be discarded, since this will cause a gap in the received data.

4. For real-time voice or video, it is probably better to throw away old data and keep new packets.

5. Get the application to mark packets with a discard priority.

Random Early Discard (RED)

1. This is a proactive approach in which the router discards one or more packets before the buffer becomes completely full.

2. Each time a packet arrives, the RED algorithm computes the average queue length, avg.

3. If avg is lower than some lower threshold, congestion is assumed to be minimal or non-existent and the packet is queued.

4. If avg is greater than some upper threshold, congestion is assumed to be serious and the packet is discarded.

5. If avg is between the two thresholds, this might indicate the onset of congestion. The probability of discarding the packet is then calculated.
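The threshold logic above can be sketched in Python as follows. The thresholds, the weight used for the moving average, and the drop-probability formula are illustrative assumptions, not the exact constants of any particular RED implementation.

import random

def red_decision(avg, min_th, max_th, max_p=0.1):
    # Decide whether to queue or discard an arriving packet, given the average queue length.
    if avg < min_th:
        return 'queue'                       # little or no congestion
    if avg >= max_th:
        return 'discard'                     # serious congestion
    # Between the thresholds: discard with a probability that grows with avg.
    p = max_p * (avg - min_th) / (max_th - min_th)
    return 'discard' if random.random() < p else 'queue'

def update_avg(avg, queue_len, weight=0.002):
    # avg is usually an exponentially weighted moving average of the instantaneous queue length.
    return (1 - weight) * avg + weight * queue_len

print(red_decision(avg=12, min_th=5, max_th=15))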

Traffic Shaping

1. Another method of congestion control is to “shape” the traffic before it enters the network.

2. Traffic shaping controls the rate at which packets are sent (not just how many). It is used in ATM and Integrated Services networks.

3. At connection set-up time, the sender and carrier negotiate a traffic pattern (shape).

Two traffic shaping algorithms are: Leaky Bucket and Token Bucket.


The Leaky Bucket Algorithm is used to control the rate in a network. It is implemented as a single-server queue with constant service time. If the bucket (buffer) overflows, then packets are discarded.

(a) A leaky bucket with water. (b) A leaky bucket with packets.

1. The leaky bucket enforces a constant output rate (average rate) regardless of the burstiness of the input. It does nothing when the input is idle.

2. The host injects one packet per clock tick onto the network. This results in a uniform flow of packets, smoothing out bursts and reducing congestion.

3. When packets are the same size (as in ATM cells), one packet per tick is okay. For variable-length packets, though, it is better to allow a fixed number of bytes per tick. For example, 1024 bytes per tick will allow one 1024-byte packet, two 512-byte packets, or four 256-byte packets per tick.
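A byte-counting leaky bucket, sketched in Python under the assumptions described above (a fixed byte budget per clock tick, packets dropped when the bucket is full). The queue capacity and per-tick budget are made up for the example.

from collections import deque

class LeakyBucket:
    def __init__(self, capacity_bytes, bytes_per_tick):
        self.capacity = capacity_bytes        # bucket (buffer) size
        self.rate = bytes_per_tick            # constant output rate
        self.queue = deque()
        self.queued_bytes = 0

    def arrive(self, packet_len):
        # A packet that would overflow the bucket is discarded.
        if self.queued_bytes + packet_len > self.capacity:
            return False
        self.queue.append(packet_len)
        self.queued_bytes += packet_len
        return True

    def tick(self):
        # Once per clock tick, release up to 'rate' bytes onto the network.
        budget = self.rate
        sent = []
        while self.queue and self.queue[0] <= budget:
            pkt = self.queue.popleft()
            budget -= pkt
            self.queued_bytes -= pkt
            sent.append(pkt)
        return sent

lb = LeakyBucket(capacity_bytes=4096, bytes_per_tick=1024)
for size in (1024, 512, 512, 256):
    lb.arrive(size)
print(lb.tick())   # [1024]      one 1024-byte packet this tick
print(lb.tick())   # [512, 512]  or two 512-byte packets the next tick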

Token Bucket Algorithm

1. In contrast to the LB, the Token Bucket Algorithm allows the output rate to vary, depending on the size of the burst.

2. In the TB algorithm, the bucket holds tokens. To transmit a packet, the host must capture and destroy one token.

3. Tokens are generated by a clock at the rate of one token every t sec.

4. Idle hosts can capture and save up tokens (up to the maximum size of the bucket) in order to send larger bursts later.

(a) Before. (b) After.
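A minimal token bucket in the same style as the leaky bucket sketch, assuming one token per transmitted packet as in the description above; the bucket size and the number of idle ticks are example values.

class TokenBucket:
    def __init__(self, max_tokens):
        self.max_tokens = max_tokens   # bucket size: how many tokens an idle host may save
        self.tokens = 0

    def tick(self):
        # One token is generated every t sec (each call models one such interval).
        self.tokens = min(self.max_tokens, self.tokens + 1)

    def try_send(self):
        # To transmit a packet, the host must capture and destroy one token.
        if self.tokens > 0:
            self.tokens -= 1
            return True
        return False

tb = TokenBucket(max_tokens=8)
for _ in range(5):
    tb.tick()                          # the host stays idle and saves up 5 tokens
burst = sum(tb.try_send() for _ in range(10))
print(burst)                           # 5: a burst of 5 packets goes out back to back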

Leaky Bucket vs. Token Bucket

1. LB discards packets; TB does not. TB discards tokens.

2. With TB, a packet can only be transmitted if there are enough tokens to cover its length in bytes.

3. LB sends packets at an average rate. TB allows for large bursts to be sent faster by speeding up the output.

4. TB allows saving up tokens (permissions) to send large bursts. LB does not allow
saving.

Every device connected to the Internet needs to have an identifier. Internet Protocol (IP)
addresses are the numerical addresses used to identify a particular piece of hardware
connected to the Internet.


The two most common versions of IP in use today are Internet Protocol version 4 (IPv4)
and Internet Protocol version 6 (IPv6).

For IPv4, this pool is 32 bits in size and contains 4,294,967,296 IPv4 addresses. The IPv6 address space is 128 bits (2^128) in size, containing 340,282,366,920,938,463,463,374,607,431,768,211,456 IPv6 addresses.


UNIT-IV

Transport Layer: Services provided to the upper layers, elements of transport protocol, addressing, connection establishment, connection release, error control & flow control, crash recovery. The Internet Transport Protocols: UDP, Introduction to TCP, The TCP Service Model, The TCP Segment Header, The Connection Establishment, The TCP Connection Release, The TCP Sliding Window, The TCP Congestion Control Algorithm.

TRANSPORT LAYER
The network layer provides end-to-end packet delivery using datagrams or virtual circuits.

The transport layer builds on the network layer to provide data transport from a process
on a source machine to a process on a destination machine with a desired level of reliability
that is independent of the physical networks currently in use.

Services Provided to the Upper Layers

The ultimate goal of the transport layer is to provide efficient, reliable, and cost-effective data transmission service to its users, normally processes in the application layer.

To achieve this, the transport layer makes use of the services provided by the network
layer. The software and/or hardware within the transport layer that does the work is
called the transport entity.

The network, transport, and application layers.

The connection-oriented transport service: connections have three phases: establishment, data transfer, and release.

Addressing and flow control

The connectionless transport service.


Figure 23.6 Multiplexing and demultiplexing


Figure 23.7 Error control
IP addresses versus port numbers

Figure 23.8 Position of UDP, TCP, and SCTP in the TCP/IP suite


Transport Service Primitives

The primitives for a simple transport service.

To start with, the server executes a LISTEN primitive, typically by calling a library
procedure that makes a system call that blocks the server until a client turns up.

When a client wants to talk to the server, it executes a CONNECT primitive. The transport
entity carries out this primitive by blocking the caller and sending a packet to the server.
The client’s CONNECT call causes a CONNECTION REQUEST segment to be sent to the
server. When it arrives, the transport entity checks to see that the server is blocked on a
LISTEN (i.e., is interested in handling requests). If so, it then unblocks the server and sends
a CONNECTION ACCEPTED segment back to the client. When this segment arrives, the
client is unblocked and the connection is established.

Data can now be exchanged using the SEND and RECEIVE primitives. In the simplest form,
either party can do a (blocking) RECEIVE to wait for the other party to do a SEND. When
the segment arrives, the receiver is unblocked. It can then process the segment and send a
reply. As long as both sides can keep track of whose turn it is to send, this scheme works
fine.

When a connection is no longer needed, it must be released to free up table space within
the two transport entities. Disconnection has two variants: asymmetric and symmetric.


In the asymmetric variant, either transport user can issue a DISCONNECT primitive, which
results in a DISCONNECT segment being sent to the remote transport entity. Upon its
arrival, the connection is released.

In the symmetric variant, each direction is closed separately, independently of the other
one. When one side does a DISCONNECT, that means it has no more data to send but it is
still willing to accept data from its partner. In this model, a connection is released when
both sides have done a DISCONNECT
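The Berkeley socket API used on the Internet exposes essentially these primitives. The sketch below maps LISTEN, CONNECT, SEND, RECEIVE, and DISCONNECT onto Python's standard socket calls; the port number 6000 and the messages are arbitrary choices for the example, not anything prescribed by the text.

import socket

def server():
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.bind(('127.0.0.1', 6000))
    s.listen(1)                          # LISTEN: block until a client turns up
    conn, addr = s.accept()
    data = conn.recv(1024)               # RECEIVE: wait for the peer to SEND
    conn.sendall(b'reply to ' + data)    # SEND
    conn.close()                         # DISCONNECT (our direction)
    s.close()

def client():
    c = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    c.connect(('127.0.0.1', 6000))       # CONNECT: triggers connection establishment
    c.sendall(b'hello')                  # SEND
    print(c.recv(1024))                  # RECEIVE
    c.close()                            # DISCONNECT

# Run server() in one process (or thread) and client() in another.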


Transport Service Primitives

A state diagram for a simple connection management scheme. Transitions labeled in italics
are caused by packet arrivals. The solid lines show the client's state sequence.

The dashed lines show the server's state sequence.

Elements of Transport Protocols

• Addressing

• Connection Establishment

• Connection Release

• Flow Control and Buffering

• Multiplexing

• Crash Recovery

Transport Protocol

(a) Environment of the data link layer.

(b) Environment of the transport layer.


1 Over point-to-point links such as wires or optical fiber, it is usually not necessary for
a router to specify which router it wants to talk to—each outgoing line leads directly to a
particular router. In the transport layer, explicit addressing of destinations is required.

2 The process of establishing a connection over the wire of Fig. (a) is simple: the other end is always there (unless it has crashed, in which case it is not there). Either way, there is not much to do. Even on wireless links the process is not much different. Just sending a message is sufficient to have it reach all other destinations. If the message is not acknowledged due to an error, it can be resent. In the transport layer, initial connection establishment is complicated, as we will see.

3 Another (exceedingly annoying) difference between the data link layer and the transport layer is the potential existence of storage capacity in the network. The consequences of the network's ability to delay and duplicate packets can sometimes be disastrous and can require the use of special protocols to correctly transport information.

Addressing: When an application (e.g., a user) process wishes to set up a connection to a remote application process, it must specify which one to connect to. (Connectionless transport has the same problem: to whom should each message be sent?) The method normally used is to define transport addresses to which processes can listen for connection requests. In the Internet, these endpoints are called ports. We will use the generic term TSAP (Transport Service Access Point) to mean a specific endpoint in the transport layer. The analogous endpoints in the network layer (i.e., network layer addresses) are not surprisingly called NSAPs (Network Service Access Points). IP addresses are examples of NSAPs.

Addressing

A possible scenario for a transport connection is as follows:


1. A mail server process attaches itself to TSAP 1522 on host 2 to wait for an incoming call. A call such as our LISTEN might be used, for example.

2. An application process on host 1 wants to send an email message, so it attaches itself to TSAP 1208 and issues a CONNECT request. The request specifies TSAP 1208 on host 1 as the source and TSAP 1522 on host 2 as the destination. This action ultimately results in a transport connection being established between the application process and the server.

3. The application process sends over the mail message.

4. The mail server responds to say that it will deliver the message.

5. The transport connection is released.

(A special process called a portmapper can be used by a client to find the TSAP address of a given service.)

CONNECTION ESTABLISHMENT

Establishing a connection sounds easy, but it is actually surprisingly tricky. At first glance, it would seem sufficient for one transport entity to just send a CONNECTION REQUEST segment to the destination and wait for a CONNECTION ACCEPTED reply. The problem occurs when the network can lose, delay, corrupt, and duplicate packets. This behavior causes serious complications.

To solve this specific problem (delayed duplicates), Tomlinson (1975) introduced the three-way handshake. This establishment protocol involves one peer checking with the other that the connection request is indeed current. The normal setup procedure when host 1 initiates is shown in Fig. (a). Host 1 chooses a sequence number, x, and sends a CONNECTION REQUEST segment containing it to host 2. Host 2 replies with an ACK segment acknowledging x and announcing its own initial sequence number, y. Finally, host 1 acknowledges host 2's choice of an initial sequence number in the first data segment that it sends.

Connection Establishment

Three protocol scenarios for establishing a connection using a three-way handshake. CR denotes CONNECTION REQUEST. (a) Normal operation. (b) Old CONNECTION REQUEST appearing out of nowhere. (c) Duplicate CONNECTION REQUEST and duplicate ACK.

In Fig.(b), the first segment is a delayed duplicate CONNECTION REQUEST from an old
connection. This segment arrives at host 2 without host 1’s knowledge. Host 2 reacts to this
segment by sending host 1 an ACK segment, in effect asking for verification that host 1 was
indeed trying to set up a new connection. When host 1 rejects host 2’s attempt to establish
a connection, host 2 realizes that it was tricked by a delayed duplicate and abandons the
connection. In this way, a delayed duplicate does no damage.

The worst case is when both a delayed CONNECTION REQUEST and an ACK are floating around in the subnet. This case is shown in Fig. (c). As in the previous example, host 2 gets a delayed CONNECTION REQUEST and replies to it. At this point, it is crucial to realize that host 2 has proposed using y as the initial sequence number for host 2 to host 1 traffic, knowing full well that no segments containing sequence number y or acknowledgements to y are still in existence. When the second delayed segment arrives at host 2, the fact that z has been acknowledged rather than y tells host 2 that this, too, is an old duplicate. The important thing to realize here is that there is no combination of old segments that can cause the protocol to fail and have a connection set up by accident when no one wants it.

Connection Release

Abrupt disconnection with loss of data.

There are two styles of terminating a connection: asymmetric release and symmetric release. Asymmetric release is the way the telephone system works: when one party hangs up, the connection is broken. Symmetric release treats the connection as two separate unidirectional connections and requires each one to be released separately.

Asymmetric release is abrupt and may result in data loss. Consider the scenario of Fig. After
the connection is established, host 1 sends a segment that arrives properly at host 2. Then
host 1 sends another segment. Unfortunately, host 2 issues a DISCONNECT before the
second segment arrives. The result is that the connection is released and data are lost.


Clearly, a more sophisticated release protocol is needed to avoid data loss. One way is to
use symmetric release, in which each direction is released independently of the other one.
Here, a host can continue to receive data even after it has sent a DISCONNECT segment.

Symmetric release does the job when each process has a fixed amount of data to send and clearly knows when it has sent it. One can envision a protocol in which host 1 says ‘‘I am done. Are you done too?’’ If host 2 responds ‘‘I am done too. Goodbye,’’ the connection can be safely released.

Connection Release

The two-army problem.


(c) Response lost. (d) Response lost and subsequent DRs lost.

In Fig. (a), we see the normal case in which one of the users sends a DR (DISCONNECTION
REQUEST) segment to initiate the connection release. When it arrives, the recipient sends
back a DR segment and starts a timer, just in case its DR is lost. When this DR arrives, the
original sender sends back an ACK segment and releases the connection. Finally, when the
ACK segment arrives, the receiver also releases the connection.

If the final ACK segment is lost, as shown in Fig.(b), the situation is saved by the timer.
When the timer expires, the connection is released anyway. Now consider the case of the
second DR being lost. The user initiating the disconnection will not receive the expected
response, will time out, and will start all over again.

In Fig.(c), we see how this works, assuming that the second time no segments
are lost and all segments are delivered correctly and on time.

The last scenario, Fig. (d), is the same as Fig. (c) except that now we assume all the repeated attempts to retransmit the DR also fail due to lost segments. After N retries, the sender just gives up and releases the connection. Meanwhile, the receiver times out and exits.

TCP

TCP is a connection oriented protocol; it creates a virtual connection between two TCPs to
send data. In addition, TCP uses flow and error control mechanisms at the transport level.
In brief, TCP is called a connection- oriented, reliable transport protocol. It adds
connection-oriented and reliability features to the services of IP.


Topics: TCP Services, TCP Features, Segment, A TCP Connection, Flow Control, Error Control

TCP Services

1 Process-to-Process Communication

TCP provides process-to-process communication using port numbers.

Below Table lists some well-known port numbers used by TCP.

2 Stream Delivery Service

TCP, on the other hand, allows the sending process to deliver data as a stream of bytes and allows the receiving process to obtain data as a stream of bytes. TCP creates an environment in which the two processes seem to be connected by an imaginary "tube" that carries their data across the Internet. This imaginary environment is shown in the figure below. The sending process produces (writes to) the stream of bytes, and the receiving process consumes (reads from) them.

3 Sending and Receiving Buffers

Because the sending and the receiving processes may not write or read data at the same speed, TCP needs buffers for storage. There are two buffers, the sending buffer and the receiving buffer, one for each direction. One way to implement a buffer is to use a circular array of 1-byte locations as shown in the figure. For simplicity, we have shown two buffers of 20 bytes each. Normally the buffers are hundreds or thousands of bytes, depending on the implementation. We also show the buffers as the same size, which is not always the case.


Figure shows the movement of the data in one direction. At the sending site, the buffer has
three types of chambers. The white section contains empty chambers that can be filled by
the sending process (producer). The gray area holds bytes that have been sent but not yet
acknowledged. TCP keeps these bytes in the buffer until it receives an acknowledgment.
The colored area contains bytes to be sent by the sending TCP.

However, as we will see later in this chapter, TCP may be able to send only part of this
colored section. This could be due to the slowness of the receiving process or perhaps to
congestion in the network. Also note that after the bytes in the gray chambers are
acknowledged, the chambers are recycled and available for use by the sending process.

This is why we show a circular buffer.

The operation of the buffer at the receiver site is simpler. The circular buffer is divided into
two areas (shown as white and colored). The white area contains empty chambers to be
filled by bytes received from the network. The colored sections contain received bytes that
can be read by the receiving process. When a byte is read by the receiving process, the
chamber is recycled and added to the pool of empty chambers.

4 TCP segments

At the transport layer, TCP groups a number of bytes together into a packet called a
segment. TCP adds a header to each segment (for control purposes) and delivers the
segment to the IP layer for transmission. The segments are encapsulated in IP datagrams
and transmitted.


This entire operation is transparent to the receiving process. Later we will see that segments may be received out of order, lost, or corrupted and resent. All these are handled by TCP with the receiving process unaware of any activities. The figure above shows how segments are created from the bytes in the buffers.

5 Full-Duplex Communication

TCP offers full-duplex service, in which data can flow in both directions at the same time.
Each TCP then has a sending and receiving buffer, and segments move in both directions

6 Connection-Oriented Service

TCP is a connection-oriented protocol. When a process at site A wants to send and receive
data from another process at site B, the following occurs:

1. The two TCPs establish a connection between them.

2. Data are exchanged in both directions.

3. The connection is terminated.

7 Reliable Service

TCP is a reliable transport protocol. It uses an acknowledgment mechanism to check the safe and sound arrival of data. We will discuss this feature further in the section on error control.

TCP Features

1 Numbering System

There are two fields called the sequence number and the acknowledgment number. These two fields refer to the byte number and not the segment number.

Byte Number The bytes of data being transferred in each connection are numbered by TCP. The numbering starts with a randomly generated number. For example, if the random number happens to be 1057 and the total data to be sent are 6000 bytes, the bytes are numbered from 1057 to 7056. We will see that byte numbering is used for flow and error control.

Sequence Number After the bytes have been numbered, TCP assigns a sequence number to each segment that is being sent. The sequence number for each segment is the number of the first byte carried in that segment.


Acknowledgment Number The value of the acknowledgment field in a segment defines the
number of the next byte a party expects to receive. The acknowledgment number is
cumulative.
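Using the numbers from the example above (a starting byte number of 1057 and 6000 bytes of data), the following Python sketch shows how the sequence number of each segment and the receiver's cumulative acknowledgment are derived. The split into five segments of the sizes shown is an arbitrary choice for illustration.

isn = 1057                                        # first byte number (random in real TCP)
segment_sizes = [1000, 1000, 2000, 1000, 1000]    # 6000 bytes split into five segments

seq = isn
for i, size in enumerate(segment_sizes, start=1):
    last_byte = seq + size - 1
    print(f'Segment {i}: sequence number {seq}, carries bytes {seq}..{last_byte}')
    seq = last_byte + 1

# Cumulative acknowledgment: the receiver acknowledges the next byte it expects.
print('ACK after receiving everything:', seq)     # 7057 = 7056 + 1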

2 Flow Control

TCP, provides flow control. The receiver of the data controls the amount of data that are to
be sent by the sender. This is done to prevent the receiver from being overwhelmed with
data. The numbering system allows TCP to use a byte-oriented flow control.

3 Error Control

To provide reliable service, TCP implements an error control mechanism. Although error
control considers a segment as the unit of data for error detection (loss or corrupted
segments), error control is byte-oriented, as we will see later.

4 Congestion Control

TCP takes into account congestion in the network. The amount of data sent by a sender is
not only controlled by the receiver (flow control), but is also determined by the level of
congestion in the network


TCP segment format

The segment consists of a 20- to 60-byte header, followed by data from the application program.

Source port address. This is a 16-bit field that defines the port number of the application
program in the host that is sending the segment.

Destination port address. This is a 16-bit field that defines the port number of the
application program in the host that is receiving the segment.

Sequence number. This 32-bit field defines the number assigned to the first byte of data
contained in this segment. As we said before, TCP is a stream transport protocol. To ensure
connectivity, each byte to be transmitted is numbered. The sequence number tells the
destination which byte in this sequence comprises the first byte in the segment. During
connection establishment, each party uses a random number generator to create an initial
sequence number (ISN), which is usually different in each direction.


Acknowledgment number. This 32-bit field defines the byte number that the receiver of the segment is expecting to receive from the other party. If the receiver of the segment has successfully received byte number x from the other party, it defines x + 1 as the acknowledgment number. Acknowledgment and data can be piggybacked together.

Header length. This 4-bit field indicates the number of 4-byte words in the TCP header. The length of the header can be between 20 and 60 bytes. Therefore, the value of this field can be between 5 (5 x 4 = 20) and 15 (15 x 4 = 60).

Reserved. This is a 6-bit field reserved for future use.

Control. This field defines 6 different control bits or flags, as shown in the figure. One or more of these bits can be set at a time. These bits enable flow control, connection establishment and termination, connection abortion, and the mode of data transfer in TCP.

Window size. This field defines the size of the window, in bytes, that the other party must
maintain. Note that the length of this field is 16 bits, which means that the maximum size of
the window is 65,535 bytes. This value is normally referred to as the receiving window
(rwnd) and is determined by the receiver. The sender must obey the dictation of the
receiver in this case.

Checksum. This 16-bit field contains the checksum. The calculation of the checksum for TCP
follows the same procedure as the one described for UDP. However, the inclusion of the
checksum in the UDP datagram is optional, whereas the inclusion of the checksum for TCP
is mandatory. The same pseudoheader, serving the same purpose, is added to the segment.
For the TCP pseudoheader, the value for the protocol field is 6.

Urgent pointer. This 16-bit field, which is valid only if the urgent flag is set, is used when the segment contains urgent data. It defines the number that must be added to the sequence number to obtain the number of the last urgent byte in the data section of the segment. This will be discussed later in this chapter.

Options. There can be up to 40 bytes of optional information in the TCP header. We will not
discuss these options here; please refer to the reference list for more information.
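The fixed 20-byte part of the header described above can be unpacked with Python's struct module. This is only a parsing sketch of the field layout (ports, sequence and acknowledgment numbers, header length, flags, window, checksum, urgent pointer); the sample bytes are fabricated, and any options beyond the first 20 bytes are ignored.

import struct

def parse_tcp_header(segment):
    # !HHIIBBHHH = src port, dst port, seq, ack, data-offset/reserved, flags,
    #              window, checksum, urgent pointer (network byte order, 20 bytes)
    (src, dst, seq, ack, offset_reserved, flags,
     window, checksum, urgent) = struct.unpack('!HHIIBBHHH', segment[:20])
    header_len = (offset_reserved >> 4) * 4        # the 4-bit field counts 4-byte words
    return {
        'src_port': src, 'dst_port': dst,
        'seq': seq, 'ack': ack,
        'header_len': header_len,                  # 20..60 bytes
        'URG': bool(flags & 0x20), 'ACK': bool(flags & 0x10),
        'PSH': bool(flags & 0x08), 'RST': bool(flags & 0x04),
        'SYN': bool(flags & 0x02), 'FIN': bool(flags & 0x01),
        'window': window, 'checksum': checksum, 'urgent_ptr': urgent,
    }

# Fabricated example: a SYN from port 1208 to port 1522 with a 20-byte header.
sample = struct.pack('!HHIIBBHHH', 1208, 1522, 1057, 0, 5 << 4, 0x02, 65535, 0, 0)
print(parse_tcp_header(sample))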

A TCP Connection

TCP is connection-oriented. A connection-oriented transport protocol establishes a virtual


path between the source and destination. All the segments belonging to a message are then
sent over this virtual path. Using a single virtual pathway for the entire message facilitates
the acknowledgment process as well as retransmission of damaged or lost frames.

In TCP, connection-oriented transmission requires three phases:

1. connection establishment,

2. data transfer,

3. connection termination.
TCP connection establishment (3-way handshaking)

1. The client sends the first segment, a SYN segment, in which only the SYN flag is set.

NOTE: A SYN segment cannot carry data, but it consumes one sequence number.

2. The server sends the second segment, a SYN + ACK segment, with 2 flag bits set: SYN and ACK. This segment has a dual purpose. It is a SYN segment for communication in the other direction and serves as the acknowledgment for the SYN segment.

NOTE: A SYN + ACK segment cannot carry data, but it does consume one sequence number.

3. The client sends the third segment. This is just an ACK segment. It acknowledges the receipt of the second segment with the ACK flag and acknowledgment number field. Note that the sequence number in this segment is the same as the one in the SYN segment; the ACK segment does not consume any sequence numbers.

NOTE: An ACK segment, if carrying no data, consumes no sequence number.


SYN Flooding Attack

This happens when a malicious attacker sends a large number of SYN segments to a server, pretending that each of them is coming from a different client by faking the source IP addresses in the datagrams.

The server, assuming that the clients are issuing an active open, allocates the necessary resources, such as creating communication tables and setting timers. The TCP server then sends the SYN + ACK segments to the fake clients, which are lost. During this time, however, a lot of resources are occupied without being used. If, during this short time, the number of SYN segments is large, the server eventually runs out of resources and may crash. This SYN flooding attack belongs to a type of security attack known as a denial-of-service attack, in which an attacker monopolizes a system with so many service requests that the system collapses and denies service to every request.

SOLUTIONS:

1. Some have imposed a limit on connection requests during a specified period of time.

2. Others filter out datagrams coming from unwanted source addresses.

3. One recent strategy is to postpone resource allocation until the entire connection is set up, using what is called a cookie.

Data Transfer

After the connection is established, bidirectional data transfer can take place. The client and server can both send data and acknowledgments. Data traveling in the same direction as an acknowledgment are carried on the same segment; the acknowledgment is piggybacked with the data.

In this example, after the connection is established (not shown in the figure), the client sends 2000 bytes of data in two segments. The server then sends 2000 bytes in one segment. The client sends one more segment. The first three segments carry both data and acknowledgment, but the last segment carries only an acknowledgment because there are no more data to be sent.

Note the values of the sequence and acknowledgment numbers. The data segments sent by the client have the PSH (push) flag set so that the server TCP knows to deliver data to the server process as soon as they are received.


Data transfer

PUSHING DATA: Delayed transmission and delayed delivery of data may not be acceptable
by the application program.

TCP can handle such a situation. The application program at the sending site can request a
push operation. This means that the sending TCP must not wait for the window to be filled.
It must create a segment and send it immediately. The sending TCP must also set the push
bit (PSH) to let the receiving TCP know that the segment includes data that must be
delivered to the receiving application program as soon as possible and not to wait for more
data to come.

Urgent Data: TCP is a stream-oriented protocol. This means that the data are presented from the application program to TCP as a stream of bytes. Each byte of data has a position in the stream. However, sometimes the sending application program wants a piece of data to be read out of order by the receiving application program.


Connection Termination (three-way handshaking and four-way handshaking with a half-close option)

1. In a normal situation, the client TCP, after receiving a close command from the client process, sends the first segment, a FIN segment in which the FIN flag is set.

Note that a FIN segment can include the last chunk of data sent by the client, or it can be just a control segment as shown in the figure. If it is only a control segment, it consumes only one sequence number.

NOTE: The FIN segment consumes one sequence number if it does not carry data.

2. The server TCP, after receiving the FIN segment, informs its process of the situation and sends the second segment, a FIN + ACK segment, to confirm the receipt of the FIN segment from the client and at the same time to announce the closing of the connection in the other direction. This segment can also contain the last chunk of data from the server. If it does not carry data, it consumes only one sequence number.

NOTE: The FIN +ACK segment consumes one sequence number if it does not carry data.

3. The client TCP sends the last segment, an ACK segment, to confirm the receipt of the FIN segment from the TCP server. This segment contains the acknowledgment number, which is 1 plus the sequence number received in the FIN segment from the server. This segment cannot carry data and consumes no sequence numbers.

Half-Close In TCP, one end can stop sending data while still receiving data. This is called a
half-close. Although either end can issue a half-close, it is normally initiated by the client. It
can occur when the server needs all the data before processing can begin.

A good example is sorting. When the client sends data to the server to be sorted, the server
needs to receive all the data before sorting can start. This means the client, after sending all
the data, can close the connection in the outbound direction. However, the inbound
direction must remain open to receive the sorted data. The server, after receiving the data,
still needs time for sorting; its outbound direction must remain open


Connection termination using three-way handshaking

Figure 23.21 Half-close


Flow Control or TCP Sliding Window

TCP uses a sliding window, to handle flow control. The sliding window protocol used by
TCP, however, is something between the Go-Back-N and Selective Repeat sliding window.

The sliding window protocol in TCP looks like the Go-Back-N protocol because it does not
use NAKs;

it looks like Selective Repeat because the receiver holds the out-of-order segments until the
missing ones arrive.

There are two big differences between this sliding window and the one we used at the data
link layer.

1. The sliding window of TCP is byte-oriented; the one we discussed in the data link layer is frame-oriented.

2. The sliding window of TCP is of variable size; the one we discussed in the data link layer was of fixed size.

The window is opened, closed, or shrunk. These three activities, as we will see, are in the
control of the receiver (and depend on congestion in the network), not the sender.

The sender must obey the commands of the receiver in this matter. Opening a window
means moving the right wall to the right. This allows more new bytes in the buffer that are
eligible for sending.

Closing the window means moving the left wall to the right. This means that some bytes
have been acknowledged and the sender need not worry about them anymore.


Shrinking the window means moving the right wall to the left.

The size of the window at one end is determined by the lesser of two values:

receiver window (rwnd) or congestion window (cwnd).

The receiver window is the value advertised by the opposite end in a segment containing
acknowledgment. It is the number of bytes the other end can accept before its buffer
overflows and data are discarded.

The congestion window is a value determined by the network to avoid congestion
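A one-line way to express this rule, sketched in Python; the byte counts in the example call are invented.

def usable_window(rwnd, cwnd, bytes_in_flight):
    # The sender may have at most min(rwnd, cwnd) unacknowledged bytes outstanding.
    return max(0, min(rwnd, cwnd) - bytes_in_flight)

print(usable_window(rwnd=16384, cwnd=8192, bytes_in_flight=3000))   # 5192 bytes may still be sent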

Window management in TCP

When the window is 0, the sender may not normally send segments, with two exceptions.

1) urgent data may be sent, for example, to allow the user to kill the process running
on the remote machine.

2) the sender may send a 1-byte segment to force the receiver to reannounce the next
byte expected and the window size. This packet is called a window probe.

The TCP standard explicitly provides this option to prevent deadlock if a window update
ever gets lost.

Senders are not required to transmit data as soon as they come in from the application.
Neither are receivers required to send acknowledgements as soon as possible.


For example, in the figure, when the first 2 KB of data came in, TCP, knowing that it had a 4-KB window, would have been completely correct in just buffering the data until another 2 KB came in, to be able to transmit a segment with a 4-KB payload. This freedom can be used to improve performance.

Remote terminal applications (e.g., Telnet) send characters to a server. The server
interprets each character and sends the output generated at the server back to the client.

For each character typed, you see three packets:

Client to Server: the typed character

Server to Client: echo of the character (or user output) and the acknowledgement for the first packet

Client to Server: the acknowledgement for the second packet

Delayed Acknowledgement

• TCP delays transmission of ACKs for up to 500 ms.

• This avoids sending ACK packets that carry no data.

– The hope is that, within the delay, the receiver will have data ready to be sent back to the
sender. Then the ACK can be piggybacked with a data segment.

Exceptions:

• An ACK should be sent for every full-sized segment.

• Delayed ACK is not used when packets arrive out of order.

Although delayed acknowledgements reduce the load placed on the network by the
receiver, a sender that sends multiple short packets (e.g., 41-byte packets containing 1
byte of data) is still operating inefficiently. A way to reduce this usage is known as Nagle’s
algorithm (Nagle, 1984).

Nagle’s Rule

Send one byte and buffer all subsequent bytes until acknowledgement is received. Then
send all buffered bytes in a single TCP segment and start buffering again until the sent
segment is acknowledged.

Nagle’s algorithm will put the many pieces in one segment, greatly reducing the bandwidth
used.

Nagle’s algorithm is widely used by TCP implementations, but there are times when it is
better to disable it, in particular for interactive games that are run over the Internet.

A more subtle problem is that Nagle’s algorithm can sometimes interact with delayed
acknowledgements to cause a temporary deadlock: the receiver waits for data on which to
piggyback an acknowledgement, and the sender waits on the acknowledgement to send
more data.

Because of these problems, Nagle’s algorithm can be disabled (using the TCP_NODELAY
socket option).
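As an illustration, the standard TCP_NODELAY socket option can be set from Python; the server address in the comment below is only a placeholder, not a value from this text.

import socket

# Hedged example: disabling Nagle's algorithm on a TCP socket with the
# standard TCP_NODELAY option. The option can be set before or after connect().
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)

# After sock.connect(("server.example", 80)), small writes such as
# sock.sendall(b"x") are sent immediately instead of being coalesced.
print(sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY))  # non-zero when enabled
sock.close()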

Another problem that can degrade TCP performance is the silly window syndrome (Clark,
1982).

Clark’s solution is to prevent the receiver from sending a window update for 1 byte.
Instead, it is forced to wait until it has a decent amount of space available and advertise
that instead. Specifically, the receiver should not send a window update until it can handle
the maximum segment size it advertised when the connection was established or until its
buffer is half empty, whichever is smaller.

Furthermore, the sender can also help by not sending tiny segments. Instead, it should wait
until it can send a full segment, or at least one containing half of the receiver’s buffer size.
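A minimal sketch of the receiver-side rule just described (Clark’s solution); the function name and the parameter values are illustrative only.

# Only advertise a window update once the free buffer space is at least one
# full MSS or half of the receive buffer, whichever is smaller.

def should_send_window_update(free_space: int, mss: int, buffer_size: int) -> bool:
    return free_space >= min(mss, buffer_size // 2)

print(should_send_window_update(free_space=100,  mss=1460, buffer_size=8192))  # False
print(should_send_window_update(free_space=4096, mss=1460, buffer_size=8192))  # True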

The goal is for the sender not to send small segments and for the receiver not to ask for
them (Nagle + Clark). Both are used to improve TCP performance.

The receiver will buffer the data until it can be passed up to the application in order
(handling out-of-order segments).

Cumulative acknowledgements

Error Control

TCP is a reliable transport layer protocol. This means that an application program that
delivers a stream of data to TCP relies on TCP to deliver the entire stream to the
application program on the other end in order, without error, and without any part lost or
duplicated.

TCP provides reliability using error control. Error control includes mechanisms for
detecting corrupted segments, lost segments, out-of-order segments, and duplicated
segments. Error control also includes a mechanism for correcting errors after they are
detected. Error detection and correction in TCP is achieved through the use of three simple
tools: checksum, acknowledgment, and time-out.

Checksum

Each segment includes a checksum field which is used to check for a corrupted segment. If
the segment is corrupted, it is discarded by the destination TCP and is considered as lost.
TCP uses a 16-bit checksum that is mandatory in every segment.


Acknowledgment

TCP uses acknowledgments to confirm the receipt of data segments. Control segments that
carry no data but consume a sequence number are also acknowledged. ACK segments
themselves do not consume sequence numbers and are not acknowledged.

Retransmission

The heart of the error control mechanism is the retransmission of segments. When a
segment is corrupted, lost, or delayed, it is retransmitted.

In modern implementations, a retransmission occurs if the retransmission timer expires or
three duplicate ACK segments have arrived.

Retransmission After RTO (retransmission time-out)

Retransmission After Three Duplicate ACK Segments (also called fast retransmission)
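The following simplified Python sketch illustrates the three-duplicate-ACK rule described above; it is not a real TCP implementation, and the function name and threshold constant are illustrative.

DUPLICATE_THRESHOLD = 3

def process_acks(acks: list[int]) -> list[int]:
    """Return the sequence numbers that would be fast-retransmitted."""
    retransmitted = []
    last_ack, dup_count = None, 0
    for ack in acks:
        if ack == last_ack:
            dup_count += 1
            if dup_count == DUPLICATE_THRESHOLD:
                retransmitted.append(ack)   # ack = next byte the receiver expects
        else:
            last_ack, dup_count = ack, 0
    return retransmitted

# Segment starting at byte 4000 was lost: the receiver keeps ACKing 4000.
print(process_acks([1000, 2000, 3000, 4000, 4000, 4000, 4000]))  # [4000]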

Out-of-Order Segments

Data may arrive out of order and be temporarily stored by the receiving TCP, but TCP
guarantees that no out-of-order segment is delivered to the process.

TCP Congestion Control

When the load offered to any network is more than it can handle, congestion builds up.

The network layer detects congestion when queues grow large at routers and tries to
manage it, if only by dropping packets. It is up to the transport layer to receive congestion
feedback from the network layer and slow down the rate of traffic that it is sending into the
network.

For congestion control, the transport protocol uses an AIMD (Additive Increase
Multiplicative Decrease) control law.

TCP congestion control is based on implementing this approach using a window called
congestion window. TCP adjusts the size of the window according to the AIMD rule.

The window size at the sender is set as follows:

Send Window = MIN (flow control window, congestion window)

where the flow control window is the window advertised by the receiver (rwnd) and the
congestion window (cwnd) is adjusted based on feedback from the network.
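A tiny sketch of the formula above; the window values are only examples.

def send_window(rwnd: int, cwnd: int) -> int:
    # usable window: the sender may not exceed either limit
    return min(rwnd, cwnd)

print(send_window(rwnd=65535, cwnd=8760))  # limited by congestion: 8760
print(send_window(rwnd=4096,  cwnd=8760))  # limited by the receiver: 4096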

Modern congestion control was added to TCP largely through the efforts of Van Jacobson
(1988). It is a fascinating story. Starting in 1986, the growing popularity of the early
Internet led to the first occurrence of what became known as a congestion collapse, a
prolonged period during which goodput dropped suddenly (i.e., by more than a factor of
100) due to congestion in the network. Jacobson (and many others) set out to understand
what was happening and remedy the situation.

To start, he observed that packet loss is a suitable signal of congestion. This signal comes a
little late (as the network is already congested) but it is quite dependable.


At the beginning, how does the sender know at what speed the receiver can receive packets?

The key observation is this: the acknowledgements return to the sender at about the rate
that packets can be sent over the slowest link in the path. This is precisely the rate that the
sender wants to use. If it injects new packets into the network at this rate, they will be sent
as fast as the slow link permits, but they will not queue up and congest any router along the
path. This timing is known as an ack clock. It is an essential part of TCP. By using an ack
clock, TCP smooths out traffic and avoids unnecessary queues at routers. This is the first
consideration.

A second consideration is that the AIMD rule will take a very long time to reach a good
operating point on fast networks if the congestion window is started from a small size.

Instead, the solution Jacobson chose to handle both of these considerations is a mix of
linear and multiplicative increase.


SLOW-START

In slow start, the congestion window (cwnd) begins at a small value (one segment) and
doubles every round-trip time, growing exponentially until it reaches the slow start
threshold (ssthresh).

Whenever a packet loss is detected, for example by a timeout, the slow start threshold is
set to half of the congestion window and the entire process is restarted.

The congestion avoidance phase is started once cwnd has reached the slow start threshold value.

Whenever the slow start threshold is crossed, TCP switches from slow start to additive
increase. In this mode, the congestion window is increased by one segment every round-
trip time.
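A rough Python sketch of the slow start and additive increase behaviour described above, counting the window in segments for simplicity; real TCP works in bytes and includes many refinements not shown here. All names and values are illustrative.

MSS = 1  # count the window in segments for simplicity

def on_round_trip(cwnd: int, ssthresh: int) -> int:
    if cwnd < ssthresh:
        return cwnd * 2           # slow start: exponential growth per RTT
    return cwnd + MSS             # congestion avoidance: +1 segment per RTT

def on_timeout(cwnd: int) -> tuple[int, int]:
    ssthresh = max(cwnd // 2, 2)  # threshold set to half the current window
    return 1, ssthresh            # restart slow start from one segment

cwnd, ssthresh = 1, 16
for _ in range(6):
    cwnd = on_round_trip(cwnd, ssthresh)
print(cwnd)                       # grows 1 -> 2 -> 4 -> 8 -> 16 -> 17 -> 18

cwnd, ssthresh = on_timeout(cwnd)
print(cwnd, ssthresh)             # 1 9: restart slow start, threshold halved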


Responses to Congestion

• So, TCP assumes there is congestion if it detects a packet loss.

• A TCP sender can detect lost packets via:

• Timeout of a retransmission timer

• Receipt of a duplicate ACK

• TCP interprets a timeout as a binary congestion signal. When a timeout occurs, the
sender performs:

– ssthresh is set to half the current size of the congestion window: ssthresh = cwnd / 2

Fast Retransmit


Flavors of TCP Congestion Control

• TCP Tahoe (1988)

– Slow Start

– Congestion Avoidance

– Fast Retransmit

• TCP Reno (1990) (TCP Tahoe+FR)

– Fast Recovery

• New Reno (1996)

• SACK(1996)

(Selective ACKnowledgements)

• RED (Floyd and Jacobson 1993)


The use of ECN (Explicit Congestion Notification) in addition to packet loss as a congestion
signal. ECN is an IP layer mechanism to notify hosts of congestion.

The sender tells the receiver that it has heard the signal by using the CWR (Congestion
Window Reduced) flag.

Internet transport protocols

USER DATAGRAM PROTOCOL (UDP)

The User Datagram Protocol (UDP) is called a connectionless, unreliable transport
protocol. It does not add anything to the services of IP except to provide process-to-process
communication instead of host-to-host communication.


Table 23.1 Well-known ports used with UDP


Checksum (optional; if the checksum is not used, the field is set to all 0s)

The UDP checksum calculation is different from the one for IP and ICMP. Here the
checksum includes three sections: a pseudo header, the UDP header, and the data coming
from the application layer.

The pseudo header is the part of the header of the IP packet in which the user datagram is
to be encapsulated, with some fields filled with 0s.

If the checksum does not include the pseudo header, a user datagram may arrive safe and
sound. However, if the IP header is corrupted, it may be delivered to the wrong host.

The protocol field is added to ensure that the packet belongs to UDP, and not to other
transport-layer protocols.

Figure 23.10 Pseudoheader for checksum calculation

UDP Operation

Connectionless Services

UDP provides a connectionless service. This means that each user datagram sent by UDP is
an independent datagram. There is no relationship between the different user datagrams
even if they are coming from the same source process and going to the same destination
program. The user datagrams are not numbered. Also, there is no connection establishment
and no connection termination, as is the case for TCP. This means that each user datagram
can travel on a different path.
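A minimal sketch of connectionless, process-to-process delivery with UDP sockets: each sendto() is an independent user datagram, with no connection setup or teardown. The loopback address is used only for the demonstration.

import socket

receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))             # let the OS pick a free port
addr = receiver.getsockname()

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"first datagram", addr)      # independent datagram
sender.sendto(b"second datagram", addr)     # independent datagram

for _ in range(2):
    data, source = receiver.recvfrom(2048)  # each datagram arrives on its own
    print(source, data)

sender.close()
receiver.close()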

Flow and Error Control

UDP is a very simple, unreliable transport protocol. There is no flow control and hence no
window mechanism. The receiver may overflow with incoming messages. There is no error
control mechanism in UDP except for the checksum. This means that the sender does not
know if a message has been lost or duplicated. When the receiver detects an error through
the checksum, the user datagram is silently discarded. The lack of flow control and error
control means that the process using UDP should provide these mechanisms itself.

Encapsulation and Decapsulation

Figure 23.11 Checksum calculation of a simple UDP user datagram

To send a message from one process to another, the UDP protocol encapsulates and
decapsulates messages in an IP datagram.

Figure 23.11 shows the checksum calculation for a very small user datagram with only 7
bytes of data. Because the number of bytes of data is odd, padding is added for checksum
calculation. The pseudoheader as well as the padding will be dropped when the user
datagram is delivered to IP.
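A hedged sketch of the checksum computation over the pseudo header, UDP header, and (padded) data described above; the addresses, ports, and 7-byte payload are illustrative values only.

import struct

def ones_complement_sum16(data: bytes) -> int:
    if len(data) % 2:                              # pad odd-length data with a zero byte
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # wrap the carry around
    return total

def udp_checksum(src_ip: bytes, dst_ip: bytes, udp_segment: bytes) -> int:
    # pseudo header: source IP, destination IP, zero byte, protocol 17, UDP length
    pseudo = src_ip + dst_ip + struct.pack("!BBH", 0, 17, len(udp_segment))
    return (~ones_complement_sum16(pseudo + udp_segment)) & 0xFFFF

src = bytes([153, 18, 8, 105])                     # example source IP
dst = bytes([171, 2, 14, 10])                      # example destination IP
header = struct.pack("!HHHH", 1087, 13, 8 + 7, 0)  # ports, length, checksum = 0
segment = header + b"TESTING"                      # 7 bytes of data (odd, so padded)
print(hex(udp_checksum(src, dst, segment)))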


Figure 23.12 Queues in UDP


Remote Procedure Call

The key work was done by Birrell and Nelson (1984). In a nutshell, what Birrell and Nelson
suggested was allowing programs to call procedures located on remote hosts. When a
process on machine 1 calls a procedure on machine 2, the calling process on 1 is suspended
and execution of the called procedure takes place on 2. Information can be transported
from the caller to the callee in the parameters and can come back in the procedure result.
No message passing is visible to the application programmer. This technique is known as
RPC (Remote Procedure Call). Traditionally, the calling procedure is known as the client
and the called procedure is known as the server, and we will use those names here too.

To call a remote procedure, the client program must be bound with a small library
procedure, called the client stub, that represents the server procedure in the client’s
address space. Similarly, the server is bound with a procedure called the server stub. These
procedures hide the fact that the procedure call from the client to the server is not local.

Step 1 is the client calling the client stub. This call is a local procedure call, with the
parameters pushed onto the stack in the normal way.

Step 2 is the client stub packing the parameters into a message and making a system call to
send the message. Packing the parameters is called marshaling.

Step 3 is the operating system sending the message from the client machine to the server
machine.

Step 4 is the operating system passing the incoming packet to the server stub.

Finally, step 5 is the server stub calling the server procedure with the unmarshaled
parameters. The reply traces the same path in the other direction.
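As an illustration of the client stub/server stub idea, Python's standard xmlrpc library can stand in for an RPC system: the client calls what looks like a local procedure while the stubs marshal the parameters and carry the call to the server. The host, port, and procedure name below are placeholders.

import threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

def add(a, b):                  # the "remote" procedure on the server
    return a + b

server = SimpleXMLRPCServer(("127.0.0.1", 8000), logRequests=False)
server.register_function(add, "add")
threading.Thread(target=server.serve_forever, daemon=True).start()

client = ServerProxy("http://127.0.0.1:8000")  # the client-side stub
print(client.add(2, 3))         # looks like a local call; prints 5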


Problems with RPC:

1 With RPC, passing pointers is impossible because the client and server are in
different address spaces.

2 It is essentially impossible for the client stub to marshal the parameters: it has no
way of determining how large they are.

3 A third problem is that it is not always possible to deduce the types of the
parameters, not even from a formal specification or the code itself (example: printf).

4 A fourth problem relates to the use of global variables. Normally, the calling and
called procedure can communicate by using global variables, in addition to communicating
via parameters. But if the called procedure is moved to a remote machine, the code will fail
because the global variables are no longer shared

TCP

TCP is a connection-oriented protocol; it creates a virtual connection between two TCPs to
send data. In addition, TCP uses flow and error control mechanisms at the transport level.
In brief, TCP is called a connection-oriented, reliable transport protocol. It adds
connection-oriented and reliability features to the services of IP.


UNIT-V
Application Layer- Introduction, providing services, Applications layer
paradigms: Client server model, HTTP, E-mail, WWW, TELNET, DNS.

Application Layer
The application layer provides the following services.

The application layer guarantees that the receiver is recognized, accessible, and ready to
receive data from the sender.

It enables authentication between devices for an extra layer of network security. It
determines the protocol and data syntax rules at the application level.

The protocols of the application layer also define the basic syntax of the message being
forwarded or retrieved.

It also checks whether the sender's computer has the necessary communication interfaces,
such as an Ethernet or Wi-Fi interface.

Finally, the data on the receiving end is presented to the user application.

ARCHITECTURE

The WWW today is a distributed client/server service, in which a client using a browser
can access a service using a server. However, the service provided is distributed over many
locations called sites, as shown in the figure.

Figure 27.1 Architecture of WWW



Applications layer paradigms:

Client (Browser)

A variety of vendors offer commercial browsers that interpret and display a Web
document, and all use nearly the same architecture.

Each browser usually consists of three parts: a controller, client protocol, and interpreters.

The controller receives input from the keyboard or the mouse and uses the client programs
to access the document.

After the document has been accessed, the controller uses one of the interpreters to display
the document on the screen. The interpreter can be HTML, Java, or JavaScript, depending
on the type of document

The client protocol can be one of the protocols described previously, such as FTP or HTTP.

Server

The Web page is stored at the server. Each time a client request arrives, the corresponding
document is sent to the client. To improve efficiency, servers normally store requested files
in a cache in memory; memory is faster to access than disk. A server can also become more
efficient through multithreading or multiprocessing. In this case, a server can answer more
than one request at a time.

Figure 27.2 Browser


Uniform Resource Locator

A client that wants to access a Web page needs the address. To facilitate the access of
documents distributed throughout the world, HTTP uses locators. The uniform resource
locator (URL) is a standard for specifying any kind of information on the Internet. The URL
defines four things: protocol, host computer, port, and path.

The protocol is the client/server program used to retrieve the document. Many different
protocols can retrieve a document; among them are FTP or HTTP. The most common today
is HTTP.

The host is the computer on which the information is located, although the name of the
computer can be an alias. Web pages are usually stored in computers, and computers are
given alias names that usually begin with the characters "www".

The URL can optionally contain the port number of the server. If the port is included, it is
inserted between the host and the path, and it is separated from the host by a colon.

Path is the pathname of the file where the information is located. Note that the path can
itself contain slashes that, in the UNIX operating system, separate the directories from the
subdirectories and files.

Figure 27.3 URL
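A small illustration of the four URL components described above, using Python's standard urllib; the URL itself is only an example.

from urllib.parse import urlparse

url = urlparse("http://www.example.com:8080/docs/notes/cn.html")
print(url.scheme)    # protocol -> 'http'
print(url.hostname)  # host     -> 'www.example.com'
print(url.port)      # port     -> 8080
print(url.path)      # path     -> '/docs/notes/cn.html'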

An HTTP cookie (also called a web cookie, Internet cookie, browser cookie, or simply a
cookie) is a small piece of data sent from a website and stored in a user's web browser
while the user is browsing that website.

WEB DOCUMENTS

The documents in the WWW can be grouped into three broad categories: static, dynamic,
and active. The category is based on the time at which the contents of the document are
determined.

Static Documents
Static documents are fixed-content documents that are created and stored in a server. The
client can get only a copy of the document. When a client accesses the document, a copy of
the document is sent. The user can then use a browsing program to display the document


Figure 27.5 Boldface tags

HTML

Hypertext Markup Language (HTML) is a language for creating Web pages.

Figure 27.7 Beginning and ending tags


Dynamic Documents

A dynamic document is created by a Web server whenever a browser requests the
document. When a request arrives, the Web server runs an application program or a script
that creates the dynamic document. The server returns the output of the program or script
as a response to the browser that requested the document.

A very simple example of a dynamic document is the retrieval of the time and date from a
server. Time and date are kinds of information that are dynamic in that they change from
moment to moment. The client can ask the server to run a program such as the date
program in UNIX and send the result of the program to the client.

Common Gateway Interface (CGI)

The Common Gateway Interface (CGI) is a technology that creates and handles dynamic
documents.
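A minimal sketch of a CGI-style program that produces the kind of dynamic document described above (the current date and time); how the script is installed and invoked depends on the web server configuration, which is not shown here.

#!/usr/bin/env python3
# The server runs this program and returns its output to the browser.

from datetime import datetime

print("Content-Type: text/plain")  # CGI response header
print()                            # blank line ends the headers
print("Current date and time:", datetime.now().isoformat())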

Other technologies used to create dynamic documents include Hypertext Preprocessor
(PHP), whose scripting syntax resembles Perl; Java Server Pages (JSP), which uses the Java
language for scripting; Active Server Pages (ASP), a Microsoft product that uses the Visual
Basic language for scripting; and ColdFusion, which embeds SQL database queries in the
HTML document.


Figure 27.8 Dynamic document using CGI

Figure 27.9 Dynamic document using server-site script


Dynamic documents are sometimes referred to as server-site dynamic documents.

Figure 27.10 Active document using Java applet

Active Documents

For many applications, we need a program or a script to be run at the client site. These are
called active documents.

Figure 27.11 Active document using client-site script


Active documents are sometimes referred to as client-site dynamic documents.

DNS (Domain Name System)

To identify an entity, TCP/IP protocols use the IP address, which uniquely identifies the
connection of a host to the Internet. However, people prefer to use names instead of
numeric addresses. Therefore, we need a system that can map a name to an address or an
address to a name.

NAME SPACE

A name space that maps each address to a unique name can be organized in two ways: flat
or hierarchical.

Flat Name Space

In a flat name space, a name is assigned to an address. A name in this space is a sequence of
characters without structure.

Hierarchical Name Space

In a hierarchical name space, each name is made of several parts. The first part can define
the nature of the organization, the second part can define the name of an organization, the
third part can define departments in the organization, and so on.

Examples: challenger.jhda.edu, challenger.berkeley.edu, and challenger.smart.com


DOMAIN NAME SPACE

To have a hierarchical name space, a domain name space was designed. In this design the
names are defined in an inverted-tree structure with the root at the top. The tree can have
only 128 levels: level 0 (root) to level 127.

Label

Each node in the tree has a label, which is a string with a maximum of 63 characters. The
root label is a null string (empty string). DNS requires that children of a node (nodes that
branch from the same node) have different labels, which guarantees the uniqueness of the
domain names.


Domain Name

Each node in the tree has a domain name. A full domain name is a sequence of labels
separated by dots (.). The domain names are always read from the node up to the root. The
last label is the label of the root (null). This means that a full domain name always ends in a
null label, which means the last character is a dot, because the null string is nothing. The
figure below shows some domain names.

Domain names and labels


Domain

A domain is a subtree of the domain name space. The name of the domain is the domain
name of the node at the top of the subtree.

DISTRIBUTION OF NAME SPACE:

The information contained in the domain name space must be stored. However, it is very
inefficient and also unreliable to have just one computer store such a huge amount of
information. In this section, we discuss the distribution of the domain name space

1 Hierarchy of Name Servers


We distribute the information among many computers called DNS servers. We let the root
stand alone and create as many domains (subtrees) as there are first-level nodes.

2 Zone

Since the complete domain name hierarchy cannot be stored on a single server, it is divided
among many servers. What a server is responsible for or has authority over is called a zone.
We can define a zone as a contiguous part of the entire tree

3 Root Server

A root server is a server whose zone consists of the whole tree. A root server usually does
not store any information about domains but delegates its authority to other servers,
keeping references to those servers. There are several root servers, each covering the whole
domain name space. The servers are distributed all around the world.

4 Primary and Secondary Servers

A primary server is a server that stores a file about the zone for which it is an authority. It
is responsible for creating, maintaining, and updating the zone file. It stores the zone file on
a local disk

A secondary server is a server that transfers the complete information about a zone from
another server (primary or secondary) and stores the file on its local disk. The secondary
server neither creates nor updates the zone files.


DNS IN THE INTERNET

DNS is a protocol that can be used in different platforms. In the Internet, the domain name
space (tree) is divided into three different sections: generic domains, country domains, and
the inverse domain

1 Generic Domains


The generic domains define registered hosts according to their generic behavior. Each node
in the tree defines a domain, which is an index to the domain name space database


2 Country Domains

The country domains section uses two-character country abbreviations (e.g., us for United
States). Second labels can be organizational, or they can be more specific, national
designations. The United States, for example, uses state abbreviations as a subdivision of us
(e.g., ca.us.).


3 Inverse Domain

The inverse domain is used to map an address to a name.

RESOLUTION

Mapping a name to an address or an address to a name is called name-address resolution.

1 Resolver

DNS is designed as a client/server application. A host that needs to map an address to a
name or a name to an address calls a DNS client called a resolver. The resolver accesses the
closest DNS server with a mapping request. If the server has the information, it satisfies the
resolver; otherwise, it either refers the resolver to other servers or asks other servers to
provide the information.

2 Mapping Names to Addresses

In this case, the server checks the generic domains or the country domains to find the
mapping.


3 Mapping Addresses to Names

To answer queries of this kind, DNS uses the inverse domain.
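As an illustration, both directions of resolution can be exercised through the local resolver with Python's standard socket module; the name and address below are only examples, and the results depend on live DNS data.

import socket

# Mapping a name to an address (forward lookup)
print(socket.gethostbyname("www.example.com"))

# Mapping an address to a name (inverse domain / reverse lookup);
# the address is illustrative and may not have a reverse mapping.
try:
    print(socket.gethostbyaddr("8.8.8.8"))
except socket.herror:
    print("no reverse mapping found")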


4 Recursive Resolution

The client (resolver) can ask for a recursive answer from a name server. This means that
the resolver expects the server to supply the final answer. When the query is finally
resolved, the response travels back until it finally reaches the requesting client. This is
called recursive resolution, as shown in the figure.

Recursive resolution


5 Iterative Resolution

If the client does not ask for a recursive answer, the mapping can be done iteratively. If the
server is an authority for the name, it sends the answer. If it is not, it returns (to the client)
the IP address of the server that it thinks can resolve the query.

6 Caching

Each time a server receives a query for a name that is not in its domain, it needs to search
its database for a server IP address. Reduction of this search time would increase efficiency.
DNS handles this with a mechanism called caching.


DNS MESSAGES

DNS has two types of messages: query and response. Both types have the same format.

The query message consists of a header and question records; the response message
consists of a header, question records, answer records, authoritative records, and
additional records.

Header

Both query and response messages have the same header format, with some fields set to
zero for the query messages. The header is 12 bytes.
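A sketch of packing such a 12-byte header with Python's struct module; the identification and flag values are illustrative (a standard query with one question and recursion desired).

import struct

ident   = 0x1234        # chosen by the client to match replies to queries
flags   = 0x0100        # standard query with recursion desired
qdcount = 1             # one question record
ancount = nscount = arcount = 0

header = struct.pack("!HHHHHH", ident, flags, qdcount, ancount, nscount, arcount)
print(len(header), header.hex())   # 12 bytes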


TYPES OF RECORDS

The question records are used in the question section of the query and response messages.
The resource records are used in the answer, authoritative and additional information
sections of the response message.

Question Record

A question record is used by the client to get information from a server.

Resource Record

Each domain name (each node on the tree) is associated with a record called the resource
record. The server database consists of resource records. Resource records are also what is
returned by the server to the client.

REGISTRARS

How are new domains added to DNS? This is done through a registrar, a commercial entity
accredited by ICANN. A registrar first verifies that the requested domain name is unique
and then enters it into the DNS database. A fee is charged. Today, there are many registrars;
their names and addresses can be found at http://www.intenic.net

DYNAMIC DOMAIN NAME SYSTEM (DDNS)

In DNS, when there is a change, such as adding a new host, removing a host, or changing an
IP address, the change must be made to the DNS master file. The size of today's Internet
does not allow for this kind of manual operation.

The DNS master file must be updated dynamically. The Dynamic Domain Name System
(DDNS) therefore was devised to respond to this need.

ENCAPSULATION

DNS can use either UDP or TCP. In both cases the well-known port used by the server is
port 53.

