CN Units 1-5

The document provides an overview of computer networks, detailing types such as LAN, MAN, WAN, and their respective topologies including bus, ring, star, tree, mesh, and hybrid. It discusses the uses of computer networks in business, home, and mobile applications, as well as social issues related to networking. Additionally, it covers the OSI and TCP/IP reference models, physical media, and the advantages and disadvantages of various network topologies.


COMPUTER NETWORKS

UNIT I:
Introduction: Network Types, LAN, MAN, WAN, Network Topologies
Reference models- The OSI Reference Model- the TCP/IP Reference Model - A
Comparison of the OSI and TCP/IP Reference Models, OSI vs TCP/IP, Lack of the OSI
model's success, Internet History.
Physical Layer – Introduction to Guided Media: Twisted-pair cable, Coaxial cable and Fiber
optic cable; and Unguided Media: Wireless – Radio waves, microwaves, infrared.

NETWORK: A network is a set of devices (often referred to as nodes) connected by
communication links. A node can be a computer, printer, or any other device capable of
sending and/or receiving data generated by other nodes on the network. The term "computer
network" means a collection of autonomous computers interconnected by a single
technology. Two computers are said to be interconnected if they are able to exchange
information. The connection need not be via a copper wire; fiber optics, microwaves,
infrared, and communication satellites can also be used.

USES OF COMPUTER NETWORKS


1. Business Applications
 Resource sharing: distributing information throughout the company.
 Sharing physical resources, such as printers and tape backup systems, as well as
sharing information.
 The client-server model: it is widely used and forms the basis of much network
usage.
 A communication medium among employees, such as email (electronic
mail), which employees generally use for a great deal of daily
communication.
 Telephone calls between employees may be carried by the computer network
instead of by the phone company. This technology is called IP telephony, or
Voice over IP (VoIP) when Internet technology is used.
 Desktop sharing lets remote workers see and interact with a graphical
computer screen.
 Doing business electronically, especially with customers and suppliers. This
model is called e-commerce (electronic commerce) and it has grown rapidly
in recent years.
2. Home Applications
 peer-to-peer communication
 person-to-person communication
 electronic commerce
 entertainment (e.g., game playing)

3.Mobile Users
 Text messaging or texting
 Smartphones
 GPS (Global Positioning System)
 m-commerce
 NFC (Near Field Communication)
4.Social Issues
With the good comes the bad, as this new-found freedom brings with it many
unsolved social, political, and ethical issues.
Network Definition – A group of computers which are connected to each other and follow
similar usage protocols for the purpose of sharing information and having communications
provided by the networking nodes is called a Computer Network.
A network may be small, including just one system, or as large as one may want. The nodes
may further be classified into various types. These include:
1. Personal Computers
2. Servers
3. Networking Hardware
4. General Hosts
 Networking can be classified into three types:
1. Types of Computer Networks
2. Topology
3. Interpreters

Each of these is described in detail below.


1.Types of Computer Networks
There are five main types of Computer Networks:
1.LAN (Local Area Network) –
 Systems connected in a small network like in a building or a small office
 It is inexpensive
 It uses Ethernet or Token-ring technology
 Two or more personal computers can be connected through wires or cables acting as
nodes
 Transfer of data is fast and highly secure
2.PAN (Personal Area Network) –
 The smallest computer network
 Devices may be connected through Bluetooth or other infrared-enabled devices
 It has a connectivity range of up to 10 metres
 It covers an area of up to 30 feet
 Personal devices belonging to a single person can be connected to each other using a PAN
3.MAN (Metropolitan Area Network) –
 A network that spans a city, for example, a cable TV network
 It can be in the form of Ethernet, ATM, Token Ring or FDDI
 It has a higher range than a LAN
 This type of network can be used to connect citizens with various organisations
4.WAN (Wide Area Network) –
 A network which covers a country or a larger region
 Telephone lines are also connected through a WAN
 The Internet is the biggest WAN in the world
 Mostly used by government organisations to manage data and information

5.VPN (Virtual Private Network): –


 A network which is constructed by using public wires to connect to a private network
 There are a number of systems which enable you to create networks using the Internet as
a medium for transporting data
 These systems use encryption and other security mechanisms to ensure that only
authorised users can access the network

Topology:
Topology defines the structure of the network, i.e., how all the components are interconnected
to each other. There are two types of topology: physical and logical topology.
Physical topology is the geometric representation of all the nodes in a network.
Network Topologies
Given below are the eight types of Network Topologies:
1. Point to Point Topology – Point to Point topology is the simplest topology that connects
two nodes directly together with a common link.
2. Bus Topology – A bus topology is such that there is a single line to which all nodes are
connected and the nodes connect only to the bus
3. Mesh Topology – This type of topology contains at least two nodes with two or more
paths between them
4. Ring Topology – In this topology every node has exactly two branches connected to it.
The ring is broken and cannot work if one of the nodes on the ring fails
5. Star Topology – In this network topology, the peripheral nodes are connected to a
central node, which rebroadcasts all the transmissions received from any peripheral node
to all peripheral nodes on the network, including the originating node

6. Tree Topology – In this type of topology nodes are connected in the form of a tree. The
function of the central node in this topology may be distributed
7. Line Topology – in this topology all the nodes are connected in a straight line
8. Hybrid Topology – When two or more types of topologies combine together, they form a
Hybrid topology

1.Bus Topology

1. The bus topology is designed in such a way that all the stations are connected through a
single cable known as a backbone cable.
2. Each node is either connected to the backbone cable by drop cable or directly connected
to the backbone cable.
3. When a node wants to send a message over the network, it puts the message on the
network. All the stations on the network will receive the message, whether it is
addressed to them or not.
4. The bus topology is mainly used in 802.3 (Ethernet) and 802.4 standard networks.
5. The configuration of a bus topology is quite simple compared to other topologies.
6. The backbone cable is considered a "single lane" through which the message is
broadcast to all the stations.
 Advantages of Bus topology:
1. Low-cost cable: In bus topology, nodes are directly connected to the cable without
passing through a hub. Therefore, the initial cost of installation is low.
2. Moderate data speeds: Coaxial or twisted pair cables are mainly used in bus-based
networks that support upto 10 Mbps.
3. Familiar technology: Bus topology is a familiar technology as the installation and
troubleshooting techniques are well known, and hardware components are easily
available.
4. Limited failure: A failure in one node will not have any effect on other nodes.
 Disadvantages of Bus topology:
1. Extensive cabling: A bus topology is quite simple, but it still requires a lot of cabling.
2. Difficult troubleshooting: It requires specialized test equipment to determine the cable
faults. If any fault occurs in the cable, then it would disrupt the communication for all the
nodes.
3. Signal interference: If two nodes send the messages simultaneously, then the signals of
both the nodes collide with each other.
4. Reconfiguration difficult: Adding new devices to the network would slow down the
network.
5. Attenuation: Attenuation is a loss of signal strength that leads to communication issues.
Repeaters are used to regenerate the signal.

2.Ring Topology:

1. Ring topology is like a bus topology, but with connected ends.


2. The node that receives the message from the previous computer will retransmit to the
next node.
3. The data flows in one direction, i.e., it is unidirectional.
4. The data flows continuously in a single loop, known as an endless loop.
5. It has no terminated ends, i.e., each node is connected to another node and there is no
termination point.
6. The data in a ring topology flows in a clockwise direction.

7. The most common access method of the ring topology is token passing.
 Token passing: It is a network access method in which a token is passed from one node to
another.
 Token: It is a frame that circulates around the network.
 Advantages of Ring topology:
1. Network Management: Faulty devices can be removed from the network without
bringing the network down.
2. Product availability: Many hardware and software tools for network operation and
monitoring are available.
3. Cost: Twisted pair cabling is inexpensive and easily available. Therefore, the installation
cost is very low.
4. Reliable: It is a more reliable network because the communication system is not
dependent on the single host computer.
 Disadvantages of Ring topology:
1. Difficult troubleshooting: It requires specialized test equipment to determine the cable
faults. If any fault occurs in the cable, then it would disrupt the communication for all the
nodes.
2. Failure: The breakdown in one station leads to the failure of the overall network.
3. Reconfiguration difficult: Adding new devices to the network would slow down the
network.
4. Delay: Communication delay is directly proportional to the number of nodes. Adding
new devices increases the communication delay.

3.Star Topology

1. Star topology is an arrangement of the network in which every node is connected to the
central hub, switch or a central computer.
2. The central computer is known as a server, and the peripheral devices attached to the
server are known as clients.

3. Coaxial cable or RJ-45 cables are used to connect the computers.
4. Hubs or Switches are mainly used as connection devices in a physical star topology.
5. Star topology is the most popular topology in network implementation.
 Advantages of Star topology:
1. Efficient troubleshooting: Troubleshooting is quite efficient in a star topology as
compared to bus topology. In a bus topology, the manager has to inspect the kilometers
of cable. In a star topology, all the stations are connected to the centralized network.
Therefore, the network administrator has to go to a single station to troubleshoot the
problem.
2. Network control: Complex network control features can be easily implemented in the
star topology. Any changes made in the star topology are automatically accommodated.
3. Limited failure: As each station is connected to the central hub with its own cable,
therefore failure in one cable will not affect the entire network.
4. Familiar technology: Star topology is a familiar technology as its tools are cost-
effective.
5. Easily expandable: It is easily expandable as new stations can be added to the open ports
on the hub.
6. Cost effective: Star topology networks are cost-effective as it uses inexpensive coaxial
cable.
7. High data speeds: It supports a bandwidth of approximately 100 Mbps. Ethernet 100BaseT is
one of the most popular Star topology networks.
 Disadvantages of Star topology
1. A Central point of failure: If the central hub or switch goes down, then all the
connected nodes will not be able to communicate with each other.
2. Cable: Sometimes cable routing becomes difficult when a significant amount of routing
is required.

4.Tree topology

1. Tree topology combines the characteristics of bus topology and star topology.
2. A tree topology is a type of structure in which all the computers are connected with each
other in a hierarchical fashion.
3. The top-most node in tree topology is known as a root node, and all other nodes are the
descendants of the root node.
4. Only one path exists between any two nodes for data transmission. Thus, it forms
a parent-child hierarchy.
 Advantages of Tree topology
1. Support for broadband transmission: Tree topology is mainly used to provide
broadband transmission, i.e., signals are sent over long distances without being
attenuated.
2. Easily expandable: We can add the new device to the existing network. Therefore, we
can say that tree topology is easily expandable.
3. Easily manageable: In tree topology, the whole network is divided into segments known
as star networks which can be easily managed and maintained.
4. Error detection: Error detection and error correction are very easy in a tree topology.
5. Limited failure: The breakdown in one station does not affect the entire network.
6. Point-to-point wiring: It has point-to-point wiring for individual segments.
 Disadvantages of Tree topology
1. Difficult troubleshooting: If any fault occurs in the node, then it becomes difficult to
troubleshoot the problem.
2. High cost: Devices required for broadband transmission are very costly.
3. Failure: A tree topology mainly relies on main bus cable and failure in main bus cable
will damage the overall network.
4. Reconfiguration difficult: If new devices are added, then it becomes difficult to
reconfigure.

5.Mesh topology:

1. Mesh topology is an arrangement of the network in which computers are
interconnected with each other through various redundant connections.
2. There are multiple paths from one computer to another computer.
3. It does not contain the switch, hub or any central computer which acts as a central point
of communication.
4. The Internet is an example of the mesh topology.
5. Mesh topology is mainly used for WAN implementations where communication failures
are a critical concern.
6. Mesh topology is mainly used for wireless networks.
7. The number of cables in a full mesh topology can be found using the formula:
Number of cables = (n*(n-1))/2
where n is the number of nodes in the network.
8. Mesh topology is divided into two categories:
 Fully connected mesh topology
 Partially connected mesh topology

1.Full Mesh Topology: In a full mesh topology, each computer is connected to all the computers
available in the network.
2.Partial Mesh Topology: In a partial mesh topology, not all but certain computers are
connected to those computers with which they communicate frequently.
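The cable-count formula above can be checked with a short sketch (the function name is illustrative, not from the text):

```python
def full_mesh_cables(n: int) -> int:
    """Links needed to connect each of n nodes directly to every other node."""
    return n * (n - 1) // 2

# Each new node must be wired to every existing node, so the link count
# grows quadratically -- this is the main cost of a full mesh.
assert full_mesh_cables(2) == 1
assert full_mesh_cables(5) == 10
assert full_mesh_cables(10) == 45
```

For example, fully meshing 10 nodes already requires 45 separate cables, which is why partial meshes are common in practice.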
 Advantages of Mesh topology:
1. Reliable: Mesh topology networks are very reliable, as the breakdown of any one link does
not affect the communication between connected computers.
2. Fast Communication: Communication is very fast between the nodes.
3. Easier Reconfiguration: Adding new devices would not disrupt the communication
between other devices.
 Disadvantages of Mesh topology
1. Cost: A mesh topology requires a large number of connecting devices, such as routers,
and more transmission media than other topologies.
2. Management: Mesh topology networks are very large and very difficult to maintain and
manage. If the network is not monitored carefully, then the communication link failure
goes undetected.
3. Efficiency: In this topology, the large number of redundant connections reduces the
efficiency of the network.

Hybrid Topology:

1. The combination of various different topologies is known as Hybrid topology.
2. A Hybrid topology is a connection between different links and nodes to transfer the data.
3. When two or more different topologies are combined together, the result is termed a Hybrid
topology; connecting similar topologies with each other does not result in a Hybrid
topology. For example, if there exists a ring topology in one branch of ICICI bank and a bus
topology in another branch of ICICI bank, connecting these two topologies will result in a
Hybrid topology.
 Advantages of Hybrid Topology:
1. Reliable: A fault occurring in any part of the network will not affect the functioning of the
rest of the network.
2. Scalable: Size of the network can be easily expanded by adding new devices without
affecting the functionality of the existing network.
3. Flexible: This topology is very flexible as it can be designed according to the
requirements of the organization.
4. Effective: Hybrid topology is very effective as it can be designed in such a way that the
strength of the network is maximized and weakness of the network is minimized.
 Disadvantages of Hybrid topology:
1. Complex design: The major drawback of the Hybrid topology is the design of the Hybrid
network. It is very difficult to design the architecture of the Hybrid network.
2. Costly Hub: The Hubs used in the Hybrid topology are very expensive as these hubs are
different from usual Hubs used in other topologies.
3. Costly infrastructure: The infrastructure cost is very high as a hybrid network requires a
lot of cabling, network devices, etc.

3. REFERENCE MODELS:
Computer Network Models
A communication subsystem is a complex piece of hardware and software.

Reference models
The reference models give a conceptual framework that standardizes the communication
between two heterogeneous networks.

There are mainly two types of reference models:

1. OSI Reference model


2. TCP/IP Reference model

OSI Model

 OSI stands for Open System Interconnection.


 The OSI model was developed by the International Organization for Standardization (ISO) in
1984, and it is now considered an architectural model for inter-computer
communications.
 This reference model describes how information from a software application in one
computer moves through a physical medium to a software application in another
computer.
 OSI consists of seven layers, and each layer performs a particular network function. Each
layer is self-contained, so that task assigned to each layer can be performed
independently.

The layers of the OSI model are divided into two groups:

1. Upper layers
2. Lower layers

 The upper layer of the OSI model mainly deals with the application related issues, and
they are implemented only in the software. The application layer is closest to the end
user. Both the end user and the application layer interact with the software
applications. An upper layer refers to the layer just above another layer.
 The lower layer of the OSI model deals with the data transport issues. The data link layer
and the physical layer are implemented in hardware and software. The physical layer is
the lowest layer of the OSI model and is closest to the physical medium. The physical
layer is mainly responsible for placing the information on the physical medium.
The OSI model has seven layers, and each layer has different functions. A list of the seven
layers is given below:

1. Physical Layer
2. Data-Link Layer
3. Network Layer
4. Transport Layer
5. Session Layer
6. Presentation Layer
7. Application Layer

1. Physical Layer
The lowest layer of the OSI reference model is the physical layer. It is responsible for the actual
physical connection between the devices. The physical layer contains information in the form
of bits. It is responsible for transmitting individual bits from one node to the next. When
receiving data, this layer will get the signal received and convert it into 0s and 1s and send
them to the Data Link layer, which will put the frame back together.

Functions of a Physical layer:

 Data Transmission: It defines the transmission mode, whether simplex, half-duplex
or full-duplex, between the two devices on the network.
 Topology: It defines how network devices are arranged.
 Bit synchronization: The physical layer provides the synchronization of the bits by
providing a clock. This clock controls both sender and receiver thus providing
synchronization at the bit level.
 Bit rate control: The Physical layer also defines the transmission rate i.e. the number of
bits sent per second.

Note: 1. Hub, Repeater, Modem, and Cables are Physical Layer devices.

2. Data Link Layer (DLL)

The data link layer is responsible for the node-to-node delivery of the
message. The main function of this layer is to make sure data transfer is error-free from one
node to another, over the physical layer. When a packet arrives in a network, it is the
responsibility of the DLL to transmit it to the Host using its MAC address.

The Data Link Layer is divided into two sub layers:

1. Logical Link Control (LLC)


2. Media Access Control (MAC)
The packet received from the Network layer is further divided into frames depending on the
frame size of the NIC (Network Interface Card). The DLL also encapsulates the Sender's and
Receiver's MAC addresses in the header.

The Receiver's MAC address is obtained by placing an ARP (Address Resolution Protocol)
request onto the wire asking "Who has that IP address?", and the destination host will reply
with its MAC address.

The Functions of the Data Link Layer

 Framing: Framing is a function of the data link layer. It provides a way for a sender to
transmit a set of bits that are meaningful to the receiver. This can be accomplished by
attaching special bit patterns to the beginning and end of the frame.
 Physical addressing: After creating frames, the Data link layer adds physical addresses
(MAC addresses) of the sender and/or receiver in the header of each frame.
 Error control: The data link layer provides the mechanism of error control in which it
detects and retransmits damaged or lost frames.
 Flow Control: The data rate must be constant on both sides, else the data may get
corrupted; thus, flow control coordinates the amount of data that can be sent before
receiving an acknowledgment.
 Access control: When a single communication channel is shared by multiple devices, the
MAC sub-layer of the data link layer helps to determine which device has control over
the channel at a given time.
Note: 1. Packet in the Data Link layer is referred to as Frame.

2. Data Link layer is handled by the NIC (Network Interface Card) and device drivers of host
machines.

3. Switch & Bridge are Data Link Layer devices.
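The framing function described above, attaching special bit patterns to the beginning and end of the frame, can be sketched with a simplified byte-stuffing scheme. The flag and escape values below are borrowed from HDLC-style framing, but this is only an illustration; real protocols also transform the escaped byte rather than copying it:

```python
FLAG = b'\x7e'  # marks the start and end of a frame (HDLC-style value)
ESC = b'\x7d'   # escape byte, so FLAG can safely appear inside the payload

def frame(payload: bytes) -> bytes:
    """Byte-stuff the payload and wrap it in start/end flag bytes."""
    stuffed = payload.replace(ESC, ESC + ESC).replace(FLAG, ESC + FLAG)
    return FLAG + stuffed + FLAG

def deframe(data: bytes) -> bytes:
    """Strip the flags and undo the stuffing at the receiver."""
    body = data[1:-1]
    out = bytearray()
    i = 0
    while i < len(body):
        if body[i:i + 1] == ESC:
            i += 1  # the next byte is literal payload, not a delimiter
        out.append(body[i])
        i += 1
    return bytes(out)

msg = b'data with a \x7e flag inside'
assert deframe(frame(msg)) == msg
```

The receiver simply scans for the flag bytes to find frame boundaries, which is exactly the purpose of the special patterns mentioned in the text.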

3. Network Layer

The network layer works for the transmission of data from one host to another located in a
different network. It also takes care of packet routing, i.e., the selection of the shortest path to
transmit the packet from the number of routes available.

The sender & receiver’s IP addresses are placed in the header by the network layer.

The Functions of the Network Layer

 Routing: The network layer protocols determine which route is suitable from source to
destination. This function of the network layer is known as routing.
 Logical Addressing: To identify each device on the internetwork uniquely, the network layer
defines an addressing scheme. The sender & receiver’s IP addresses are placed in the
header by the network layer. Such an address distinguishes each device uniquely and
universally.

Note: 1. Segment in the Network layer is referred to as Packet.

2. Network layer is implemented by networking devices such as routers.


4. Transport Layer
The transport layer provides services to the application layer and takes services from the
network layer. The data in the transport layer is referred to as Segments. It is responsible for
the End to End Delivery of the complete message.

The transport layer also provides the acknowledgment of the successful data transmission
and re-transmits the data if an error is found.

At the sender’s side: The transport layer receives the formatted data from the upper layers,
performs Segmentation, and also implements Flow & Error control to ensure proper data
transmission. It also adds Source and Destination port numbers in its header and forwards the
segmented data to the Network Layer.

Note: The sender needs to know the port number associated with the receiver’s application.

Generally, this destination port number is configured, either by default or manually. For
example, when a web application requests a web server, it typically uses port number 80,
because this is the default port assigned to web applications. Many applications have default
ports assigned.
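A minimal sketch of how a port identifies the receiving application: binding to port 0 asks the operating system for any free ephemeral port, whereas a real web server would bind to its well-known port 80 so clients can find it.

```python
import socket

# A server process must listen on a known port so that the transport
# layer can hand incoming segments to the right application.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(('127.0.0.1', 0))   # port 0 = "give me any free port"
host, port = srv.getsockname()
print(f'server reachable at {host}:{port}')
srv.close()
```

A client would then need exactly this (host, port) pair, which is why default port numbers such as 80 for web servers are agreed in advance.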

At the receiver’s side: Transport Layer reads the port number from its header and forwards the
Data which it has received to the respective application. It also performs sequencing and
reassembling of the segmented data.

The Functions of the Transport Layer


 Segmentation and Reassembly: This layer accepts the message from the (session) layer,
and breaks the message into smaller units. Each of the segments produced has a header
associated with it. The transport layer at the destination station reassembles the
message.
 Service Point Addressing: To deliver the message to the correct process, the transport
layer header includes a type of address called service point address or port address.
Thus by specifying this address, the transport layer makes sure that the message is
delivered to the correct process.

Services Provided by Transport Layer

1. Connection-Oriented Service
2. Connectionless Service

1. Connection-Oriented Service: It is a three-phase process that includes

 Connection Establishment
 Data Transfer
 Termination/disconnection

In this type of transmission, the receiving device sends an acknowledgment, back to the source
after a packet or group of packets is received. This type of transmission is reliable and secure.

2. Connectionless service: It is a one-phase process and includes Data Transfer. In this type of
transmission, the receiver does not acknowledge receipt of a packet. This approach allows for
much faster communication between devices. Connection-oriented service is more reliable
than connectionless Service.
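The three phases of a connection-oriented service can be sketched over the loopback interface; this is a minimal illustration, not production code:

```python
import socket
import threading

# Connection-oriented transfer: establishment, data transfer, termination.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(('127.0.0.1', 0))
srv.listen(1)
port = srv.getsockname()[1]

def server():
    conn, _ = srv.accept()    # phase 1: connection establishment
    data = conn.recv(1024)    # phase 2: data transfer (TCP acknowledges it)
    conn.sendall(data.upper())
    conn.close()              # phase 3: termination

t = threading.Thread(target=server)
t.start()

cli = socket.create_connection(('127.0.0.1', port))  # phase 1, client side
cli.sendall(b'hello')                                # phase 2
reply = cli.recv(1024)
cli.close()                                          # phase 3
t.join()
srv.close()
assert reply == b'HELLO'
```

Note that the client cannot send anything until the connection is established, which is the overhead a connectionless service avoids.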

Note:

1. Data in the Transport Layer is called Segments.


2. Transport layer is operated by the Operating System. It is a part of the OS and
communicates with the Application Layer by making system calls.

3. The transport layer is called the heart of the OSI model.

5. Session Layer

This layer is responsible for the establishment of connection, maintenance of sessions, and
authentication, and also ensures security.

The Functions of the Session Layer

 Session establishment, maintenance, and termination: The layer allows the two
processes to establish, use and terminate a connection.
 Synchronization: This layer allows a process to add checkpoints that are considered
synchronization points in the data. These synchronization points help to identify the
error so that the data is re-synchronized properly, and ends of the messages are not cut
prematurely and data loss is avoided.
 Dialog Controller: The session layer allows two systems to start communication with
each other in half-duplex or full-duplex.

Note: 1. The top three layers of the OSI model (Session, Presentation, and Application) are
integrated as a single layer in the TCP/IP model, the "Application Layer".
2. Implementation of these 3 layers is done by the network application itself. These are also
known as Upper Layers or Software Layers.

6. Presentation Layer

The presentation layer is also called the Translation layer. The data from the application layer
is extracted here and manipulated as per the required format to transmit over the network.

The Functions of the Presentation Layer are

 Translation: For example, ASCII to EBCDIC.


 Encryption/ Decryption: Data encryption translates the data into another form or code.
The encrypted data is known as the cipher text and the decrypted data is known as plain
text. A key value is used for encrypting as well as decrypting data.
 Compression: Reduces the number of bits that need to be transmitted on the network.
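The translation and compression functions above can be sketched with Python's standard library; cp500 is one common EBCDIC code page, though real systems may use another variant:

```python
import zlib

text = 'HELLO'
ebcdic = text.encode('cp500')           # translation: text -> EBCDIC bytes
assert ebcdic != text.encode('ascii')   # the two character codes differ
assert ebcdic.decode('cp500') == text   # and the translation is reversible

payload = b'abc' * 100
squeezed = zlib.compress(payload)       # compression: fewer bits on the wire
assert zlib.decompress(squeezed) == payload
assert len(squeezed) < len(payload)
```

Both steps change only the representation of the data, not its meaning, which is exactly the presentation layer's job.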

7. Application Layer
At the very top of the OSI Reference Model stack of layers, we find the Application layer which
is implemented by the network applications. These applications produce the data, which has to
be transferred over the network. This layer also serves as a window for the application services
to access the network and for displaying the received information to the user.

Example: Application – Browsers, Skype Messenger, etc.

Note: The application Layer is also called Desktop Layer.

The Functions of the Application Layer are

 Network Virtual Terminal


 FTAM- File transfer access and management
 Mail Services
 Directory Services

OSI model acts as a reference model and is not implemented on the Internet because of its
late invention. The current model being used is the TCP/IP model.
OSI Model in a Nutshell

Layer 7 (Application Layer): Helps in identifying the client and synchronizing
communication. Data unit: Message. Device: none.
Layer 6 (Presentation Layer): Data from the application layer is extracted and manipulated
in the required format for transmission. Data unit: Message. Device: none.
Layer 5 (Session Layer): Establishes connections, maintains sessions, and ensures
authentication and security. Data unit: Message. Device: Gateway.
Layer 4 (Transport Layer): Takes service from the Network Layer and provides it to the
Application Layer. Data unit: Segment. Device: Firewall.
Layer 3 (Network Layer): Transmission of data from one host to another, located in
different networks. Data unit: Packet. Device: Router.
Layer 2 (Data Link Layer): Node-to-node delivery of the message. Data unit: Frame.
Device: Switch, Bridge.
Layer 1 (Physical Layer): Establishing physical connections between devices. Data unit:
Bits. Device: Hub, Repeater, Modem, Cables.

TCP / IP Reference Model


 The TCP/IP model was developed prior to the OSI model.
 The TCP/IP model is not exactly similar to the OSI model.
 The TCP/IP model consists of five layers: the application layer, transport layer,
network layer, data link layer and physical layer.
 The first four layers provide physical standards, network interface,
internetworking, and transport functions that correspond to the first four layers
of the OSI model; the top three layers of the OSI model are represented in the
TCP/IP model by a single layer called the application layer.
 TCP/IP is a hierarchical protocol made up of interactive modules, and each of
them provides specific functionality.
The TCP/IP Reference Model is also commonly described as a four-layered suite of
communication protocols (with the physical and data link layers combined into a single link
layer). It was developed for the DoD (Department of Defense) ARPANET project that began in
the 1960s. It is named after the two main protocols used in the model, namely TCP and IP. TCP
stands for Transmission Control Protocol and IP stands for Internet Protocol.

The four layers in the TCP/IP protocol suite are

1. Link Layer (Network Interface)
2. Internet Layer
3. Transport Layer
4. Application Layer

1. Link Layer - The link layer is the lowest layer in the TCP/IP model. It is compared with the
combination of the data link layer and the physical layer of the OSI Model. They are similar but
not identical. This layer comprises the communication protocols operating on the link to which
the host is physically connected. It is mainly concerned with the physical transmission of the data.

The protocols used in this layer are:

 ETHERNET
 FDDI
 Token Ring
 Frame relay

2. Internet Layer - The Internet layer corresponds to the network layer of the OSI model. The
main responsibility of this layer is to transport data packets from the source to the
destination host across the entire network. The transmission done by the internet layer is
best-effort, and therefore less reliable.

The main protocols used in this layer are:

 IP - It is the primary protocol in the internet layer. It stands for Internet Protocol. It is
responsible for the transmission of data packets from the source to the destination host.
It is implemented in two versions, IPv4 and IPv6.
 ARP - It stands for Address Resolution Protocol. Its main responsibility is to find the
physical address of the host using the internet address or IP address.
 ICMP - It stands for Internet Control Message Protocol. It is used for reporting error
messages back to the host.
 IGMP - It stands for Internet Group Management Protocol. It is used for managing the
membership of multicast groups, e.g., for online streaming.

3. Transport Layer - The transport layer is responsible for end-to-end communication and
error-free delivery of data. It provides services that include connection-oriented
communication, flow control, reliability, and multiplexing. This layer is similar to the transport
layer of the OSI model.

The main protocols of this layer are:

 TCP - It stands for Transmission Control Protocol. It is a connection-oriented protocol


and provides reliable communication and error-free delivery of data from the source to
the destination host. It is optimized for accurate delivery rather than timely delivery. It is used
by many internet applications including World Wide Web(WWW), email.
 UDP - It stands for User Datagram Protocol. It provides a simple, low-overhead but
unreliable service. It prioritizes speed over the accuracy of delivery.

UDP consists of the following fields


Source port address: The source port address is the address of the application program
that has created the message.
Destination port address: The destination port address is the address of the application
program that receives the message.
Total length: It defines the total length of the user datagram (header plus data) in bytes.
Checksum: The checksum is a 16-bit field used in error detection.
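As an illustration of the field layout just described, the following sketch packs a UDP header and computes the 16-bit one's complement Internet checksum. The port numbers and payload are made-up example values, and a real UDP checksum is also computed over an IP pseudo-header, which this simplified example omits.

```python
import struct

def internet_checksum(data: bytes) -> int:
    """One's complement of the one's-complement sum of 16-bit words."""
    if len(data) % 2:
        data += b"\x00"                # pad odd-length data with a zero byte
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # fold any carry back in
    return ~total & 0xFFFF

def build_udp_header(src_port: int, dst_port: int, payload: bytes) -> bytes:
    """Pack the four 16-bit UDP header fields (checksum left as 0 here)."""
    total_length = 8 + len(payload)    # the UDP header is always 8 bytes
    return struct.pack("!HHHH", src_port, dst_port, total_length, 0)

header = build_udp_header(5000, 53, b"hello")      # example ports and payload
src, dst, length, chk = struct.unpack("!HHHH", header)
```

Unpacking the header back gives (5000, 53, 13, 0): the total length counts the 8 header bytes plus the 5 payload bytes.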

4. Application Layer - It is the topmost layer of the TCP/IP model. Its functions are similar to
the combination of the application layer, session layer, and presentation layer of the OSI
model. It is responsible for user interface specifications. It contains the communication
protocols used for process-to-process communication across an Internet Protocol computer
network.

Some of the protocols used in this layer are:

 HTTP - It stands for Hypertext Transfer Protocol. It is the foundation of data


communication for the World Wide Web. The hypertext includes hyperlinks to other
resources that can be accessed easily by the user.
 SMTP - It stands for Simple Mail Transfer Protocol. It is used for sending and receiving
electronic mails.
 TELNET - It is an abbreviation for Terminal Network. It provides a bidirectional
interactive text-oriented communication facility to the hosts over a network. It
establishes the connection between the local computer and a remote computer in such
a way that the local terminal appears to be a terminal at the remote system.
 FTP - It stands for File Transfer Protocol. It is a standard communication protocol used
for transferring files from one computer to another over a network.

 SNMP: SNMP stands for Simple Network Management Protocol. It is a framework used
for managing the devices on the internet by using the TCP/IP protocol suite.
Advantages
 Many Routing protocols are supported.
 It is highly scalable and uses a client-server architecture.
 It is lightweight.
Disadvantages
 Little difficult to set up.
 Delivery of packets is not guaranteed by the transport layer.
 Vulnerable to a synchronization attack.

*Differences/Comparisons between OSI & TCP/IP

Parameters | OSI Model | TCP/IP Model

Full Form | OSI stands for Open Systems Interconnection. | TCP/IP stands for Transmission Control Protocol/Internet Protocol.

Layers | It has 7 layers. | It has 4 layers.

Usage | It is low in usage. | It is mostly used.

Approach | It is vertically approached. | It is horizontally approached.

Delivery | Delivery of the package is guaranteed in the OSI Model. | Delivery of the package is not guaranteed in the TCP/IP Model.

Replacement | Replacement of tools and changes can easily be done in this model. | Replacing the tools is not as easy as it is in the OSI Model.

Reliability | It is less reliable than the TCP/IP Model. | It is more reliable than the OSI Model.

Protocol hiding | The protocols of the OSI model are well hidden and can be replaced with another appropriate protocol quickly. | The TCP/IP model protocols are not hidden, and we cannot fit a new protocol stack into it.

Connection services | It provides both connection-oriented and connectionless transmission in the network layer, but only connection-oriented transmission in the transport layer. | It provides connectionless transmission in the network layer and supports both connection-oriented and connectionless transmission in the transport layer.

Minimum header size | The smallest size of the OSI header is 5 bytes. | The smallest size of the TCP/IP header is 20 bytes.

Similarities between OSI Model and TCP/IP Model

OSI and TCP/IP both are logical models. One of the main similarities between the OSI and
TCP/IP models is that they both describe how information is transmitted between two
devices across a network. Both models define a set of layers. Each layer performs a specific
set of functions to enable the transmission of data.

Another similarity between the two models is that they both use the concept of
encapsulation, in which data is packaged into a series of headers and trailers that contain
information about the data being transmitted and how it should be handled by the network.
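A minimal sketch of this encapsulation idea, using readable text tags in place of the real binary headers each layer would add:

```python
def encapsulate(payload: bytes) -> bytes:
    """Wrap application data in illustrative per-layer headers/trailers."""
    segment = b"TCP|" + payload            # transport layer adds its header
    packet = b"IP|" + segment              # network layer wraps the segment
    frame = b"ETH|" + packet + b"|FCS"     # link layer adds header and trailer
    return frame

def decapsulate(frame: bytes) -> bytes:
    """Strip the headers in reverse order, recovering the original data."""
    assert frame.startswith(b"ETH|") and frame.endswith(b"|FCS")
    packet = frame[4:-4]                   # remove link-layer header/trailer
    assert packet.startswith(b"IP|")
    segment = packet[3:]                   # remove network-layer header
    assert segment.startswith(b"TCP|")
    return segment[4:]                     # remove transport-layer header
```

Each layer on the receiving side removes exactly the header its peer added, so decapsulate(encapsulate(data)) returns the original data.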

*Lack of OSI model Success

The Open System Interconnection (OSI) model is a reference model that is used to describe
and explain how information from a software application on one computer moves through a
physical medium to a software application on another computer. This model consists of a total
of seven layers, and each layer performs a specific task or particular network function.
However, neither the OSI model and its protocols nor the TCP/IP model and its protocols are
perfect in every respect, and a fair amount of criticism has been directed at both of them. The
most striking and unfortunate issue concerning the OSI model is that, although it is perhaps
the most-studied and most widely accepted network structure, it is not the model that is
actually implemented and largely used. The important reasons why this happened are given
below:

1. Bad Timing: It is essential to write standards in the trough between the two bursts of
activity, i.e., between the "apocalypse of the two elephants": the wave of research and the
wave of industry investment. The timing of a standard is critical; sometimes standards are
written too early, before the research is complete and the problem is properly understood.
The timing of the OSI model was bad because it was completed only after a huge and
significant amount of research time had already passed, by which point companies simply
ignored the standards.
When the OSI model came around, it was well grounded in research, but by then the TCP/IP
model was already receiving huge amounts of investment from companies, and
manufacturers did not feel like investing in the OSI model. There were no initial offerings
using the OSI technique; every company waited for some other company to go first, but none
did. This is the first reason why OSI never caught on.

2. Bad Technology: The second reason the OSI model never caught on is that both the model
and its protocols are flawed, i.e., they have fundamental weaknesses or defects in character,
performance or design. The choice of seven layers was based more on political considerations
than on technical ones: the layers are more political than technical. Two of the layers (session
and presentation) are nearly empty, whereas two others (data link and network) are overfull.
The OSI model, along with all of its associated service definitions and protocols, is highly
complex. The documentation is also so complex that it is very difficult to implement and is
not very efficient in operation. Error control and flow control are duplicated, i.e., they
reappear again and again in multiple layers. The most serious criticism is that the model is
dominated by a communications mentality. It also faced competition from TCP/IP protocols
that were already in widespread use.

3. Bad Implementations: The OSI model is extraordinarily complex, due to which the initial
implementations were slow, huge, and unwieldy. This is the third reason OSI became
synonymous with poor quality in its early days. It turned out not to be necessary for all seven
layers to be designed together simply to make things work.
On the other hand, the implementations of TCP/IP were more reliable, so people started
using TCP/IP very quickly, which led to a large community of users. In simple words,
complexity leads to poor implementation: the OSI model was too complex to be implemented
effectively and properly.

4. Bad Politics: The fourth reason is that TCP/IP was largely and closely associated with Unix,
which helped TCP/IP become popular in academia, whereas OSI did not have this association
at the time.
OSI, on the other hand, was associated with European telecommunications ministries, the
European Community, and the government of the USA, and it was widely considered to be
technically inferior to TCP/IP. People on the ground reacted badly to all of this and supported
TCP/IP instead.
Even after all these problems, the OSI model is still the general standard of reference for
almost all networking documentation, and many organizations remain highly interested in it.
Networking terminology that refers to numbered layers, such as "layer 3 switching", generally
refers to the OSI model. An effort has even been made to update it, resulting in a revised
model published in 1994.

Internet History

The Internet, commonly referred to as "the Net," is a global wide area network, a network of
networks that links computer systems all over the world. It is a worldwide system of
computer networks connected by high-bandwidth data lines, including the Internet
"backbone." Users at any computer can access information from any other computer via the
Internet (assuming they have authorization). The Internet was first known as the ARPANET,
conceived in 1969 by ARPA, the Advanced Research Projects Agency. The primary objective in
creating the network was to allow communication between users and devices at any
distance. To connect to the Internet you need an Internet service provider (ISP), which acts as
a middleman between you and the Internet; most ISPs offer a DSL, cable, or fiber connection.
Below is a table that contains an overall history of the internet.

Year Event

1960 This is the year in which the internet started as a way for government researchers to
share information. Also, the first known MODEM and dataphone were introduced by AT&T.

1961 On May 31, 1961, Leonard Kleinrock released his first paper, "Information Flow in Large
Communication Nets."

1962 A paper talking about packetization was released by Leonard Kleinrock. Also, this year, a
suggestion was given by Paul Baran for the transmission of data with the help of using
fixed-size message blocks

1964 Baran produced a study on distributed communications in 1964. In the same year,
Leonard Kleinrock released Communication Nets Stochastic Message Flow and Design,
the first book on packet nets.

1965 The first long-distance dial-up link was established between a TX-2 computer at MIT and
a Q-32 at SDC in California, by Lawrence G. Roberts of MIT and Tom Marill of SDC. Also,
the word "packet" was coined by Donald Davies in this year.
1966 After getting success at connecting over dial-up, a paper about this was published by
Tom Marill and Lawrence G. Roberts.
In the same year, Robert Taylor brought Larry Roberts and joined ARPA to develop
ARPANET.

1967 In 1967, 1-node NPL packet net was created by Donald Davies. For packet switch, the use
of a minicomputer was suggested by Wes Clark.

1968 On 9 December 1968, Hypertext was publicly demonstrated by Doug Engelbart. The first
meeting regarding NWG (Network Working Group) was also held this year, and on June
3, 1968, the ARPANET program plan was published by Larry Roberts.

1969 On 1 April 1969, talking about the IMP software and introducing the Host-to-Host, RFC
#1 was released by Steve Crocker. On 3 July 1969, a press was released for announcing
the public to the Internet by UCLA. On August 29, 1969, UCLA received the first network
equipment and the first network switch. CompuServe, the first commercial internet
service, was founded the same year.

1970 This is the year in which NCP was released by the UCLA team and Steve Crocker.

1971 In 1971, Ray Tomlinson sent the first e-mail via a network to other users.

1972 In 1972, the ARPANET was initially demonstrated to the general public.

1973 TCP was created by Vinton Cerf in 1973, and it was released in December 1974 with the
help of Yogen Dalal and Carl Sunshine. ARPA also launched the first international link,
SATNET, this year. And, the Ethernet was created by Robert Metcalfe at the Xerox Palo
Alto Research Center.

1974 In 1974, the Telenet, a commercial version of ARPANET, was introduced. Many consider it
to be the first Internet service provider.

1978 In 1978, to support real-time traffic, TCP split into TCP/IP, which was driven by John
Shoch, David Reed, and Danny Cohen. Later on, on 1 January 1983, the creation of TCP/IP
was standardized into ARPANET and helped create UDP. Also, in the same year, the first
worm was developed by Jon Hupp and John Shoch at Xerox PARC.

1981 BITNET was established in 1981. It is a time network that was formerly a network of IBM
mainframe computers in the United States.

1983 In 1983, the TCP/IP was standardized by ARPANET, and the IAB, short for Internet
Activities Board was also founded in the same year.

1984 The DNS was introduced by Jon Postel and Paul Mockapetris.

1986 The first Listserv was developed by Eric Thomas, and NSFNET was also created in 1986.
Additionally, BITNET II was created in the same year 1986.

1988 The first T1 backbone was added to ARPANET, and BITNET and CSNET merged to create
CREN.

1989 A proposal for a distributed system was submitted by Tim Berners-Lee at CERN on 12
March 1989 that would later become the WWW.

1990 This year, NSFNET replaced the ARPANET. On 10 September 1990, Mike Parker, Bill
Heelan, and Alan Emtage released the first search engine Archie at McGill University in
Montreal, Canada.

1991 Tim Berners-Lee introduced the WWW (World Wide Web) on August 6, 1991. On August
6, 1991, he also unveiled the first web page and website to the general public. Also, this
year, the internet started to be available to the public by NSF. Outside of Europe, the first
web server came on 1 December 1991.

1992 The main revolution came in the field of the internet that the internet Society was formed,
and NSFNET upgraded to a T3 backbone.

1993 CERN submitted the Web source code to the public domain on April 30, 1993. This
caused the Web to experience massive growth. Also, this year, the United Nations and the
White House came online, which helped to begin top-level domains, such as .gov and .org. On
22 April 1993, the first widely-used graphical World Wide Web browser, Mosaic, was
released by the NCSA with the help of Eric Bina and Marc Andreessen.

1994 On April 4, 1994, James H. Clark and Marc Andreessen found the Mosaic Communications
Corporation, Netscape. On 13 October 1994, the first Netscape browser, Mosaic Netscape
0.9, was released, which also introduced the Internet to cookies. On 7 November 1994, a
radio station, WXYC, announced broadcasting on the Internet, becoming the first
traditional radio station to do so. Also, in the same year, the W3C was established by Tim
Berners-Lee.

1995 In February 1995, Netscape introduced the SSL (Secure sockets layer), and the dot-com
boom began. Also, the Opera web browser was introduced to browsing web pages on 1
April 1995, and to make voice calls over the Internet, Vocaltec, the first VoIP software,
was introduced.
Later, the Internet Explorer web browser was introduced by Microsoft on 16 August 1995.
In RFC 1866, the next version of HTML 2.0 was released on 24 November 1995.
In 1995, JavaScript, originally known as LiveScript, was created by Brendan Eich. At that
time, he was an employee at Netscape Communications Corporation. Later LiveScript was
renamed to JavaScript with Netscape 2.0B3 on December 4, 1995. In the same year, they
also introduced Java.

1996 This year, Telecom Act took a big Decision and deregulated data networks. Also,
Macromedia Flash that is now known as Adobe Flash was released in 1996.
In December 1996, the W3C published CSS 1, the first CSS specification. As compared to
postal mail, more e-mail was sent in the USA. This is the year in which the network has
ceased to exist as CREN ended its support.

1997 In 1997, the 802.11 (Wi-Fi) standard was introduced by IEEE, and the internet2 consortium
was also established.

1998 The first Internet weblogs arose in this year, and on February 10, 1998, XML became a
W3C recommendation.

1999 In September 1999, Napster began sharing files, and on 1 December 1999, Marc
Ostrofsky sold business.com, then the most expensive Internet domain name, for $7.5
million. Later on, on 26 July 2007, this domain was sold for $345 million to R.H. Donnelley.

2000 The craze of dot-com began to decrease.

2003 The members of CREN took the decision to dissolve the organization on 7 January 2003.
Also, this year, the Safari web browser came into the market on 30 June 2003.

2004 The Mozilla Firefox web browser was released by Mozilla on 9 November 2004.

2008 On 1 March 2008, the support by AOL for the Netscape Internet browser was ended. Then,
the Google Chrome web browser was introduced by Google on 11 December 2008, and
gradually it became a popular web browser.

2009 A person using the fictitious name Satoshi Nakamoto published the internet money
Bitcoin on 3 January 2009.

2014 On 28 October 2014, W3C recommended and released the HTML5 programming
language to the public.
Introduction to Transmission Media

 Transmission media is a communication channel that carries the information from the
sender to the receiver. Data is transmitted through electromagnetic signals.
 The main functionality of the transmission media is to carry the information in the form
of bits through a LAN (Local Area Network).
 It is a physical path between transmitter and receiver in data communication.
 In a copper-based network, the bits travel in the form of electrical signals.
 In a fibre-based network, the bits travel in the form of light pulses.
 In the OSI (Open System Interconnection) model, transmission media supports Layer 1.
Therefore, it is considered to be a Layer 1 component.
 The signals can be sent through copper wire, fibre optics, the atmosphere, water, and
vacuum.
 The characteristics and quality of data transmission are determined by the
characteristics of the medium and the signal.
 Transmission media is of two types: wired media and wireless media. In wired media,
medium characteristics are more important, whereas in wireless media, signal
characteristics are more important.
 Different transmission media have different properties such as bandwidth, delay, cost,
and ease of installation and maintenance.
 The transmission media is available in the lowest layer of the OSI reference model, i.e.,
the Physical layer.

Classification of transmission media

There are two types of classification

1. Guided Transmission
2. Unguided Transmission
1. Guided Media: It is also referred to as Wired or Bounded transmission media. Signals being
transmitted are directed and confined in a narrow pathway by using physical links.
Features:

 High Speed
 Secure
 Used for comparatively shorter distances

There are 3 major types of Guided Media:

(i) Twisted Pair Cable – Twisted Pair Cables are created by twisting two different
protected cables around each other to make a single cable. Shields are often built of
insulated materials that allow both cables to transmit independently. This twisted
wire is then enclosed inside a protective coating to make it easier to use.
Twisted pair cables are generally of two types

 Unshielded Twisted Pair (UTP): UTP consists of two insulated copper wires twisted
around one another. This type of cable has the ability to block interference and does not
depend on a physical shield for this purpose. It is used for telephonic applications.

Advantages:
⇢ Least expensive
⇢ Easy to install
⇢ High-speed capacity

Disadvantages:
⇢ Susceptible to external interference
⇢ Lower capacity and performance in comparison to STP
⇢ Short-distance transmission due to attenuation

Applications:
⇢ Used in telephone connections and LAN networks

Shielded Twisted Pair (STP): This type of cable consists of a special jacket (a copper braid
covering or a foil shield) to block external interference. It is used in fast-data-rate Ethernet and
in voice and data channels of telephone lines.

Advantages of STP

 High speed than UTP.


 Better performance.
 No crosstalk (or interference).

Disadvantages of STP

 Difficult to install and manufacture.


 Expensive.
Coaxial Cable – It has an outer plastic covering containing an insulation layer made of PVC or
Teflon around two concentric conductors, an inner conductor and an outer conducting shield,
each with its own insulated protection cover. The coaxial cable transmits information in two
modes: Baseband mode (dedicated cable bandwidth) and Broadband mode (cable bandwidth
is split into separate ranges). Cable TVs and analog television networks widely use coaxial
cables.

Advantages of Coaxial Cable

 Easy to install and expand the range.


 High Bandwidth.
 Less expensive.
 Less noise

Disadvantages of Coaxial Cable

 Single cable failure can disturb the entire network.


 Bulky
 Very expensive over long distances
 They do not support high-speed transmission, this is another disadvantage of coaxial
cable

Optical Fiber Cable:- Optical fibre cables are glass-based cables that transmit light signals.
Total internal reflection is employed for light-signal transmission over these cables. They are
recognized for allowing larger volumes of data to be delivered with more bandwidth and
reduced electromagnetic interference during transmission. Because the material is
non-corrosive and lightweight, these cables are preferable to twisted cables in most instances.

Advantages of Optical Fiber Cable

 High bandwidth.
 Lightweight
 Negligible attenuation.
 Fastest data transmission.

Disadvantages of Optical Fiber Cable

 Very costly.
 Hard to install and maintain.
 Fragile.
 They are unidirectional, which means they will need another fiber for bidirectional
communication.

Stripline

Stripline is a transverse electromagnetic (TEM) transmission line medium invented by


Robert M. Barrett of the Air Force Cambridge Research Centre in the 1950s. Stripline is
the earliest form of the planar transmission line. It uses a conducting material to
transmit high-frequency waves, and is also called a waveguide. This conducting material
is sandwiched between two layers of ground plane, which are usually shorted together
to provide EMI immunity.

Microstripline

In this, the conducting material is separated from the ground plane by a layer of dielectric.
Unguided Media: It is also referred to as Wireless or Unbounded transmission media. No
physical medium is required for the transmission of electromagnetic signals.

Features:

 The signal is broadcasted through air


 Less Secure
 Used for larger distances

There are 3 types of Signals transmitted through unguided media:

(i) Radio waves – These are easy to generate and can penetrate through buildings. The
sending and receiving antennas need not be aligned. Frequency range: 3 KHz – 1 GHz.
AM and FM radios and cordless phones use radio waves for transmission.

 Radio waves are a kind of electromagnetic wave whose wavelength falls in the
electromagnetic spectrum. Radio waves have the longest wavelengths among
electromagnetic waves. Like all other electromagnetic waves, radio waves travel at
the speed of light. Radio waves are usually generated by charged particles while
accelerating.
 Radio waves are generated artificially by transmitters and received by antennas or
radio receivers. They are usually used for fixed or mobile radio communication,
broadcasting, radar, and communication satellites.
 Radio waves are electromagnetic waves that are transmitted in all directions of free
space.
 Radio waves are omnidirectional, i.e., the signals are propagated in all directions.
 The frequency range of radio waves is from 3 KHz to 1 GHz.
 In the case of radio waves, the sending and receiving antennas do not have to be
aligned, i.e., the wave sent by the sending antenna can be received by any receiving
antenna.
 An example of a radio wave is FM radio.

Further Categorized as (i) Terrestrial and (ii) Satellite

(ii) Microwaves – It is a line of sight transmission i.e. the sending and receiving antennas need
to be properly aligned with each other. The distance covered by the signal is directly
proportional to the height of the antenna. Frequency Range:1GHz – 300GHz. These are majorly
used for mobile phone communication and television distribution.

Microwave Transmission

(iii) Infrared – Infrared waves are used for very short distance communication. They
cannot penetrate through obstacles. This prevents interference between systems.
Frequency range: 300 GHz – 400 THz. It is used in TV remotes, wireless mouse,
keyboard, printer, etc.

 Infrared technology uses diffuse light reflected at walls, furniture etc., or a directed
light if a line of sight (LOS) exists between sender and receiver.
 Infrared light is part of the electromagnetic spectrum and is an electromagnetic
form of radiation. It comes from heat and thermal radiation, and it is not visible to
the naked eye.
 In infrared transmission, senders can be simple light emitting diodes (LEDs) or laser
diodes. Photodiodes act as receivers.
 Infrared is used in wireless technology devices or systems that convey data through
infrared radiation. Infrared is electromagnetic energy at a wavelength or
wavelengths somewhat longer than those of red light.
 Infrared wireless is used for medium- and short-range communications and control.
Infrared technology is used in intrusion detectors, robot control systems,
medium-range line-of-sight laser communication, cordless microphones, headsets,
modems, and other peripheral devices.
 Infrared radiation is used in scientific, industrial, and medical applications. Night
vision devices using active near-infrared illumination allow people and animals to be
observed without the observer being detected.
 Infrared transmission technology refers to energy in the region of the
electromagnetic radiation spectrum at wavelengths longer than those of visible light
but shorter than those of radio waves.
 Infrared technology allows computing devices to communicate via short-range
wireless signals. With infrared transmission, computers can transfer files and other
digital data bidirectionally.
UNIT II : Data Link Layer

Data link layer: Design issues, Framing: fixed size framing,


variable size framing, flow control, error control, error detection
and correction codes, CRC, Checksum: idea, one’s complement
internet checksum, services provided to Network Layer,

Elementary Data Link Layer protocols: simplex protocol, Simplex


stop and wait, Simplex protocol for Noisy Channel.

Sliding window protocol: One bit, Go back N, Selective repeat-Stop


and wait protocol.

Data link layer in HDLC: configuration and transfer modes, frames,


control field, point to point protocol (PPP): framing transition
phase, multiplexing, multi link PPP.

The data link layer is the hardware layer, and information at this
layer is in the form of frames. The data link layer is mainly used
to define the format of the data. The data link layer is the second
layer in the internet model and sits between the network layer and
the physical layer.

Data-link layer is the second layer after the physical layer. The
data link layer is responsible for maintaining the data link
between two hosts or nodes.
It takes services from the physical layer and provides services to
the network layer. The primary function of this layer is data
synchronization.

The data link layer is further divided into two sub-layers as


follows.

 Logical link control sub-layer


 Media access control sub-layer.

1. Logical Link Control Sub-layer (LLC) –

Provides the logic for the data link; thus it controls the
synchronization, flow control, and error-checking functions of
the data link layer.

Functions are –
 (i) Error Recovery.
 (ii) It performs the flow control operations.
 (iii) User addressing.

2. Media Access Control Sub-layer (MAC) –


It is the second sub-layer of data-link layer. It controls the
flow and multiplexing for transmission medium. Transmission of
data packets is controlled by this layer. This layer is
responsible for sending the data over the network interface
card.
Functions are –
 (i) To perform the control of access to media.
 (ii) It performs the unique addressing to stations directly
connected to LAN.
 (iii) Detection of errors.
Design issues with data link layer are:

The data link layer is supposed to carry out many specified


functions. It is responsible for effective data communication
between two directly connected transmitting and receiving stations.

The data link layer has to carry out several specific functions and
the following are the main design issues of data link layer:

 Data transfer
 Frame synchronization
 Flow control
 Error control
 Addressing
 Link management.

1. Services provided to the network layer – The data link layer


provides a well-defined service interface to the network layer. The
principle of services is to transfer data from the network layer on
the source machine to the network layer on the destination
machine.

The transfer is done through the DLL (Data Link Layer). The data
link layer takes services from the physical layer and provides
well-defined services to the network layer.

Datalink layer provides three types of services as follows:

 Unacknowledged connectionless service
 Acknowledged connectionless service
 Acknowledged connection-oriented service.

2. Frame synchronization – The source machine sends data in the
form of blocks called frames to the destination machine. The
starting and ending of each frame should be identified so that the
frame can be recognized by the destination machine.
3. Flow control – Flow control is done to prevent the overflow of
data frames at the receiver end. The source machine must not send
data frames at a rate faster than the capacity of the destination
machine to accept them.
4. Error control – Error control is done to prevent duplication of
frames. The errors introduced during transmission from source to
destination machines must be detected and corrected at the
destination machine.

Framing:

Frames are the units of digital transmission, particularly in
computer networks and telecommunications. A frame is comparable to
a packet of energy, a photon, in the case of light energy. Frames
are used continuously in the time-division multiplexing process.

In a point-to-point connection between two computers or devices,
data is transmitted over a wire as a stream of bits. However, these
bits must be framed into discernible blocks of information. Framing
is a function of the data link layer. It provides a way for a
sender to transmit a set of bits that are meaningful to the
receiver. Ethernet, token ring, frame relay, and other data link
layer technologies have their own frame structures. Frames have
headers that contain information such as error-checking codes.
Parts of a Frame
A frame has the following parts −

 Frame Header − It contains the source and the destination


addresses of the frame.
 Payload field − It contains the message to be delivered.
 Trailer − It contains the error detection and error correction
bits.
 Flag − It marks the beginning and end of the frame.

The data link layer takes the message from the sender and delivers
it to the receiver, adding the sender’s and receiver’s addresses.
The advantage of using frames is that data is broken up into
recoverable chunks that can easily be checked for corruption.

The process of dividing the data into frames and reassembling it


is transparent to the user and is handled by the data link layer.

Framing is an important aspect of data link layer protocol design


because it allows the transmission of data to be organized and
controlled, ensuring that the data is delivered accurately and
efficiently.

Types of framing

There are two types of framing:


1. Fixed-size: The frame is of fixed size and there is no need to
provide boundaries to the frame, the length of the frame itself
acts as a delimiter.
 Drawback: It suffers from internal fragmentation if the data
size is less than the frame size
 Solution: Padding
2. Variable size: In this, there is a need to define the end of
the frame as well as the beginning of the next frame to
distinguish. This can be done in two ways:
1. Length field – We can introduce a length field in the frame to
indicate the length of the frame. Used in Ethernet(802.3). The
problem with this is that sometimes the length field might get
corrupted.
2. End Delimiter (ED) – We can introduce an ED(pattern) to indicate
the end of the frame. Used in Token Ring. The problem with this
is that ED can occur in the data. This can be solved by:
1. Character/Byte Stuffing: Used when frames consist of
characters. If data contains ED then, a byte is stuffed into
data to differentiate it from ED.
Let ED = “$” –> if data contains ‘$’ anywhere, it is escaped using
the ‘\O’ character.
–> if data contains ‘\O$’ then, ‘\O\O\O$’ is sent ($ is escaped
using \O and \O is itself escaped using \O).
Disadvantage – It is a costly and largely obsolete method.
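The escaping rule above can be sketched in Python. The '$' flag is taken from the example; '\' is used here as the escape character in place of the text's '\O' notation (an illustrative assumption):

```python
FLAG = "$"   # end delimiter (ED) from the example above
ESC = "\\"   # escape character; the text writes this as '\O'

def byte_stuff(data: str) -> str:
    """Sender: stuff an escape byte before any flag or escape in the payload."""
    out = []
    for ch in data:
        if ch in (FLAG, ESC):
            out.append(ESC)        # stuffed escape byte
        out.append(ch)
    return "".join(out)

def byte_unstuff(stuffed: str) -> str:
    """Receiver: drop each escape byte and keep the character it protects."""
    out, escaped = [], False
    for ch in stuffed:
        if escaped:
            out.append(ch)         # protected character, keep it
            escaped = False
        elif ch == ESC:
            escaped = True         # next character was stuffed over
        else:
            out.append(ch)
    return "".join(out)
```

For example, `byte_stuff("a$b")` yields `a\$b`, and unstuffing restores the original payload exactly.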

Bit Stuffing: Let ED = 01111. If the data would otherwise
reproduce this pattern, the sender stuffs a bit to break it,
i.e. appends a 0 inside the run of 1s.
–> The receiver receives the frame.
–> If the received data contains 01111101, the receiver removes the
stuffed 0 and reads the data.

Examples:
 If Data –> 011100011110 and ED –> 0111 then, find data after bit
stuffing.
--> 011010001101100
 If Data –> 110001001 and ED –> 1000 then, find data after bit
stuffing?
--> 11001010011
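The worked examples above use a course-specific delimiter. The standard HDLC-style rule (flag 01111110: stuff a 0 after every run of five consecutive 1s) can be sketched as:

```python
def bit_stuff(bits: str) -> str:
    """Sender: insert a 0 after every run of five consecutive 1s."""
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            out.append("0")   # stuffed bit breaks the run
            run = 0
    return "".join(out)

def bit_unstuff(bits: str) -> str:
    """Receiver: remove the 0 that follows every run of five 1s."""
    out, run, skip = [], 0, False
    for b in bits:
        if skip:              # this bit is the stuffed 0: drop it
            skip = False
            run = 0
            continue
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            skip = True       # next bit must be the stuffed 0
    return "".join(out)
```

With this rule the flag pattern 01111110 can never appear inside stuffed data, so the receiver can always find frame boundaries unambiguously.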
Advantages of Framing in Data Link Layer
 Frames are used continuously in the process of time-division
multiplexing.
 It facilitates a form to the sender for transmitting a group of
valid bits to a receiver.
 Frames also contain headers that include information such as
error-checking codes.
 A Frame relay, token ring, ethernet, and other types of data
link layer methods have their frame structures.
 Frames allow the data to be divided into multiple recoverable
parts that can be inspected further for corruption.
 It provides a flow control mechanism that manages the frame flow
such that the data congestion does not occur on slow receivers
due to fast senders.
 It provides reliable data transfer services between peer
network entities.

Flow Control

Flow control is a design issue at the data link layer. It is a
technique that allows two stations working at different speeds to
communicate with each other: a set of measures taken to regulate
the amount of data that a sender sends so that a fast sender does
not overwhelm a slow receiver. In the data link layer, flow control
restricts the number of frames the sender can send before it waits
for an acknowledgment from the receiver.

Flow control can also be understood as a speed-matching mechanism
for two stations.

Approaches to Flow Control : Flow Control is classified into two


categories:

 Feedback-based Flow Control: In this technique, the sender
transmits a frame to the receiver, and the receiver sends
feedback back to the sender, allowing the sender to transmit
more data or telling the sender how the receiver is doing. In
short, the sender transmits further frames only after it has
received an acknowledgement from the receiver.
 Rate-based Flow Control: In this technique, when a sender
transmits data faster than the receiver can receive it, a
built-in mechanism in the protocol limits the overall rate at
which data is transmitted by the sender, without any feedback
or acknowledgement from the receiver.

Techniques of Flow Control in Data Link Layer: There are basically


two types of techniques being developed to control the flow of data

1. Stop-and-Wait Flow Control : This method is the easiest and
simplest form of flow control. The message or data is broken down
into multiple frames, and the receiver indicates its readiness to
receive each frame of data. Only when an acknowledgement is
received will the sender transmit the next frame. This process
continues until the sender transmits an EOT (End of Transmission)
frame. Only one frame can be in transmission at a time, which leads
to inefficiency (less productivity) if the propagation delay is
much longer than the transmission delay. Ultimately, the sender
sends a single frame, the receiver takes one frame at a time, and
the receiver sends an acknowledgement (carrying the next expected
frame number) before a new frame is sent.
Advantages –

 This method is very easy and simple, and each frame is
checked and acknowledged well.
 This method is also very accurate.

Disadvantages –

 This method is fairly slow.


 In this, only one packet or frame can be sent at a time.
 It is very inefficient and makes the transmission process very
slow.

2. Sliding Window Flow Control : This method is used where
reliable, in-order delivery of packets or frames is needed, as in
the data link layer. It is a point-to-point protocol that assumes
that no other entity tries to communicate until the current data or
frame transfer is complete. In this method, the sender transmits
several frames or packets before receiving any acknowledgement.
Both the sender and receiver agree upon the total number of data
frames after which an acknowledgement must be transmitted. The data
link layer uses this method because it allows the sender to have
more than one unacknowledged packet “in-flight” at a time, which
increases and improves network throughput. Ultimately, the sender
sends multiple frames, and the receiver takes them one by one,
acknowledging each completed frame (with the next expected frame
number).

Advantages –

 It performs much better than stop-and-wait flow control.


 This method increases efficiency.
 Multiples frames can be sent one after another.

Disadvantages –
 The main issue is complexity at the sender and receiver due to
the transferring of multiple frames.
 The receiver might receive data frames or packets out of
sequence.

Error Control

The data link layer uses error control techniques to ensure and
confirm that all data frames or packets are delivered to the
destination without error.

Error control in data link layer is the process of detecting and


correcting data frames that have been corrupted or lost during
transmission.

In case of lost or corrupted frames, the receiver does not receive


the correct data-frame and sender is ignorant about the loss. Data
link layer follows a technique to detect transit errors and take
necessary actions, which is retransmission of frames whenever error
is detected or frame is lost. The process is called Automatic
Repeat Request (ARQ).

Phases in Error Control

The error control mechanism in the data link layer involves the
following phases −

 Detection of Error − Transmission error, if any, is detected


by either the sender or the receiver.
 Acknowledgment − acknowledgment may be positive or negative.
o Positive ACK − On receiving a correct frame, the receiver
sends a positive acknowledge.
o Negative ACK − On receiving a damaged frame or a
duplicate frame, the receiver sends a negative
acknowledgment back to the sender.
 Retransmission − The sender maintains a clock and sets a timeout
period. If an acknowledgment of a data-frame previously transmitted
does not arrive before the timeout, or a negative acknowledgment is
received, the sender retransmits the frame.

Ways of doing Error Control : There are basically two ways of doing
error control, as given below :

1. Error Detection : Error detection, as the name suggests,
simply means detection or identification of errors. These
errors may occur due to noise or other impairments during
transmission from the transmitter to the receiver in a
communication system. It is a class of techniques for
detecting garbled, i.e. unclear and distorted, data or
messages.
2. Error Correction : Error correction, as the name suggests,
simply means correcting or fixing errors: reconstructing the
original, error-free data. Error correction, however, is very
costly and hard to achieve.

Various Techniques for error & flow Control : There are various
techniques of error control as given below :
1. Stop-and-Wait ARQ : Stop-and-Wait ARQ is also known as
alternating bit protocol. It is one of the simplest flow and error
control techniques or mechanisms. This mechanism is generally
required in telecommunications to transmit data or information
between two connected devices. Receiver simply indicates its
readiness to receive data for each frame. In these, sender sends
information or data packets to receiver. Sender then stops and
waits for ACK (Acknowledgment) from receiver. Further, if ACK does
not arrive within given time period i.e., time-out, sender then
again resends frame and waits for ACK. But, if sender receives ACK,
then it will transmit the next data packet to receiver and then
again wait for ACK from receiver. This process to stop and wait
continues until sender has no data frame or packet to send.
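The stop-and-wait ARQ loop described above can be simulated with a toy lossy channel. The frame names, loss probability, and seed below are illustrative assumptions, not part of any protocol standard:

```python
import random

def stop_and_wait(frames, loss_prob=0.3, seed=42):
    """Simulate stop-and-wait ARQ over a channel that drops frames.

    For each frame: send, wait for ACK; on (simulated) timeout,
    retransmit until the frame gets through.
    Returns (delivered_frames, total_transmissions).
    """
    rng = random.Random(seed)
    delivered, transmissions = [], 0
    for seq, frame in enumerate(frames):
        while True:
            transmissions += 1                # send (or resend) this frame
            if rng.random() >= loss_prob:     # frame arrived intact
                delivered.append(frame)       # receiver accepts and ACKs
                break
            # otherwise: ACK never comes, timeout fires, loop resends
    return delivered, transmissions

data, sent = stop_and_wait(["f0", "f1", "f2"])
```

Every frame is eventually delivered in order, but the transmission count grows with the loss rate, which is exactly the inefficiency the sliding-window schemes below address.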
2. Sliding Window ARQ : This technique is generally used for
continuous transmission error control. It is further categorized
into two categories as given below :

 Go-Back-N ARQ : Go-Back-N ARQ is form of ARQ protocol in which


transmission process continues to send or transmit total
number of frames that are specified by window size even
without receiving an ACK (Acknowledgement) packet from the
receiver. It uses sliding window flow control protocol. If no
errors occur, then operation is identical to sliding window.

 Selective Repeat ARQ : Selective Repeat ARQ is also form of


ARQ protocol in which only suspected or damaged or lost data
frames are only retransmitted. This technique is similar to
Go-Back-N ARQ though much more efficient than the Go-Back-N
ARQ technique due to reason that it reduces number of
retransmission. In this, the sender only retransmits frames
for which NAK is received. But this technique is used less
because of more complexity between sender and receiver and
each frame must be needed to be acknowledged individually.
The main difference between Go-Back-N ARQ and Selective Repeat ARQ
is that in Go-Back-N ARQ, the sender has to retransmit the whole
window of frames again if any frame is lost, but in Selective
Repeat ARQ only the data frame that is lost is retransmitted.
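That retransmission difference can be illustrated by listing what each scheme resends when one frame in a window is lost. This is a simplified sketch that ignores timers and sequence-number wrap; the function and scheme names are illustrative:

```python
def retransmissions(window, lost_index, scheme):
    """Frames resent when the frame at lost_index in a window is lost.

    Go-Back-N resends the lost frame and every frame after it in the
    window; Selective Repeat resends only the lost frame.
    """
    if scheme == "go-back-n":
        return window[lost_index:]
    if scheme == "selective-repeat":
        return [window[lost_index]]
    raise ValueError(f"unknown scheme: {scheme}")

window = [0, 1, 2, 3]
# frame 1 lost: Go-Back-N resends 1, 2, 3; Selective Repeat resends only 1
gbn = retransmissions(window, 1, "go-back-n")
sr = retransmissions(window, 1, "selective-repeat")
```

The trade-off visible here is bandwidth versus complexity: Selective Repeat wastes fewer retransmissions but requires the receiver to buffer and individually acknowledge out-of-order frames.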

Error Detection in computer networks

Error is a condition when the receiver’s information does not match


the sender’s information. During transmission, digital signals
suffer from noise that can introduce errors in the binary bits
traveling from sender to receiver. That means a 0 bit may change to
1 or a 1 bit may change to 0.

Data may get scrambled by noise or corrupted whenever a message is
transmitted. To prevent such errors, error-detection codes
(implemented either at the data link layer or the transport layer
of the OSI model) are added as extra data to digital messages. This
helps in detecting any errors that may have occurred during message
transmission.

Types of Errors

Single-Bit Error

A single-bit error refers to a type of data transmission error that


occurs when one bit (i.e., a single binary digit) of a transmitted
data unit is altered during transmission, resulting in an incorrect
or corrupted data unit.
Multiple-Bit Error

A multiple-bit error is an error type that arises when more than


one bit in a data transmission is affected. Although multiple-bit
errors are relatively rare when compared to single-bit errors, they
can still occur, particularly in high-noise or high-interference
digital environments.

Burst Error

When several consecutive bits are flipped mistakenly in digital


transmission, it creates a burst error. This error causes a
sequence of consecutive incorrect values.
To detect errors, a common technique is to introduce redundancy
bits that provide additional information. Various techniques for
error detection include:

1. Simple Parity Check


2. Two-dimensional Parity Check
3. Checksum
4. Cyclic Redundancy Check (CRC)

Simple Parity Check

Simple-bit parity is a simple error detection method that involves


adding an extra bit to a data transmission. It works as:

 1 is added to the block if it contains an odd number of 1’s,


and
 0 is added if it contains an even number of 1’s

This scheme makes the total number of 1’s even, that is why it is
called even parity checking.

Disadvantages

 A single parity check is not able to detect an even number of
bit errors.
 For example, the Data to be transmitted is 101010. Codeword
transmitted to the receiver is 1010101 (we have used even
parity).
Let’s assume that during transmission, two of the bits of code
word flipped to 1111101.
On receiving the code word, the receiver finds the no. of ones
to be even and hence no error, which is a wrong assumption.
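The even-parity scheme and its blind spot can be reproduced in a few lines (function names are illustrative):

```python
def add_even_parity(data: str) -> str:
    """Append a parity bit so that the total number of 1s is even."""
    parity = "1" if data.count("1") % 2 else "0"
    return data + parity

def parity_ok(codeword: str) -> bool:
    """Receiver check: the total number of 1s must be even."""
    return codeword.count("1") % 2 == 0

cw = add_even_parity("101010")   # the example above: codeword 1010101
# A single flipped bit is caught, but if two bits flip (e.g. 1111101)
# the count of 1s stays even and the error goes undetected.
```

This is why simple parity is used only where single-bit errors dominate; multi-bit and burst errors call for the stronger schemes below.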

Two-dimensional Parity Check

 Two-dimensional Parity check bits are calculated for each row,


which is equivalent to a simple parity check bit. Parity check
bits are also calculated for all columns, then both are sent
along with the data. At the receiving end, these are compared
with the parity bits calculated on the received data.
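A sketch of the sender-side calculation, assuming the block is given as equal-length rows of bits (the function name and input layout are illustrative):

```python
def two_d_parity(rows):
    """Even-parity bits per row and per column of a bit block.

    rows: list of equal-length bit strings.
    Returns (row_parities, column_parities).
    """
    row_par = ["1" if r.count("1") % 2 else "0" for r in rows]
    cols = zip(*rows)  # transpose the block to iterate columns
    col_par = ["1" if "".join(c).count("1") % 2 else "0" for c in cols]
    return row_par, col_par

rp, cp = two_d_parity(["1100", "1010"])
```

The receiver recomputes both sets of parities; a single-bit error shows up as one failing row parity and one failing column parity, which together locate the bad bit.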

Checksum

Checksum error detection is a method used to identify errors in


transmitted data. The process involves dividing the data into
equally sized segments and using a 1’s complement to calculate the
sum of these segments. The calculated sum is then sent along with
the data to the receiver. At the receiver’s end, the same process
is repeated and if all zeroes are obtained in the sum, it means
that the data is correct.

Checksum – Operation at Sender’s Side

 Firstly, the data is divided into k segments each of m bits.


 On the sender’s end, the segments are added using 1’s
complement arithmetic to get the sum. The sum is complemented
to get the checksum.
 The checksum segment is sent along with the data segments.

Checksum – Operation at Receiver’s Side

 At the receiver’s end, all received segments are added using


1’s complement arithmetic to get the sum. The sum is
complemented.
 If the result is zero, the received data is accepted;
otherwise discarded.

Disadvantages

 If one or more bits of a segment are damaged and the
corresponding bit or bits of opposite value in a second
segment are also damaged, the sum stays the same and the error
is not detected.

Cyclic Redundancy Check (CRC)

 Unlike the checksum scheme, which is based on addition, CRC is


based on binary division.
 In CRC, a sequence of redundant bits, called cyclic redundancy
check bits, are appended to the end of the data unit so that
the resulting data unit becomes exactly divisible by a second,
predetermined binary number.
 At the destination, the incoming data unit is divided by the
same number. If at this step there is no remainder, the data
unit is assumed to be correct and is therefore accepted.
 A remainder indicates that the data unit has been damaged in
transit and therefore must be rejected.
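The division-based scheme can be sketched with modulo-2 (XOR) long division. The generator 1011 used below is an illustrative choice, not one of the standardized CRC polynomials:

```python
def crc_remainder(data: str, generator: str) -> str:
    """Modulo-2 remainder of data, appended with len(generator)-1 zeros."""
    pad = len(generator) - 1
    bits = list(data + "0" * pad)           # append r zero bits
    for i in range(len(data)):
        if bits[i] == "1":                  # XOR the generator in here
            for j, g in enumerate(generator):
                bits[i + j] = str(int(bits[i + j]) ^ int(g))
    return "".join(bits[-pad:])

def crc_encode(data: str, generator: str) -> str:
    """Sender: append the CRC bits so the codeword is exactly divisible."""
    return data + crc_remainder(data, generator)

def crc_check(codeword: str, generator: str) -> bool:
    """Receiver: divide by the same generator; zero remainder = accept."""
    return set(crc_remainder(codeword, generator)) <= {"0"}
```

For data 1101 and generator 1011 the remainder works out to 001, so 1101001 is transmitted; any single-bit corruption of that codeword leaves a nonzero remainder and is rejected.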

Advantages:
 Increased Data Reliability
 Improved Network Performance
 Enhanced Data Security
Disadvantages:
Overhead Error detection requires additional resources and
processing power, which can lead to increased overhead on the
network. This can result in slower network performance and
increased latency.

False Positives: Error detection mechanisms can sometimes generate


false positives, which can result in unnecessary retransmission of
data. This can further increase the overhead on the network.

Limited Error Correction: Error detection can only identify errors


but cannot correct them. This means that the recipient must rely on
the sender to retransmit the data, which can lead to further delays
and increased network overhead.

Hamming Code:

What Is Hamming Code?

The Hamming Code method is a network technique designed by


R.W.Hamming, for damage and error detection during data
transmission between multiple network channels.

The Hamming Code method is one of the most effective ways to detect
single-data bit errors in the original data at the receiver end. It
is not only used for error detection but is also for correcting
errors in the data bit.

Important Terms for Hamming Code

To begin with, the steps involved in the detection and correction


of data using hamming code, we need to understand some important
terms and expressions, which are:

1. Redundant Bits - These are the extra binary bits added


externally into the original data bit to prevent damage to the
transmitted data and are also needed to recover the original data.
The expression applied to deduce the redundant value is,

2^r >= d+r+1

Where,

d - “Data Bits”

r - “Redundant Bits”, r = {1, 2, 3, …….. n}

Example: Assuming the number of data bits is 7, find the number of


redundant bits.

 2^r >= 7+r+1
 2^4 = 16 >= 7+4+1 = 12 [r = 4]

The number of redundant bits = 4.
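The expression 2^r >= d + r + 1 can be evaluated by simply searching for the smallest r that satisfies it:

```python
def redundant_bits(d: int) -> int:
    """Smallest r with 2**r >= d + r + 1 (Hamming redundancy rule)."""
    r = 0
    while 2 ** r < d + r + 1:
        r += 1
    return r
```

For d = 7 this returns 4, matching the example above (2^4 = 16 >= 12).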

2. Parity Bits - A parity bit is a binary bit appended to the data
to ensure that the total count of 1’s in the data is even or odd.
It is also used to detect errors on the receiver side and correct
them.

Types of parity bits:

 Odd Parity bits - In this parity type, the total number of 1’s in
the data should be odd; if it already is, the parity value is 0,
otherwise it is 1.

 Even Parity bits - In this parity type, the total number of 1’s in
the data should be even; if it already is, the parity value is 0,
otherwise it is 1.

Working of Hamming Code

To solve the data bit issue with the hamming code method, some
steps need to be followed:
 Step 1 - Determine the positions of the data bits and the
number of redundant bits in the original data. The number of
redundant bits is deduced from the expression [2^r >= d+r+1].
 Step 2 - Fill in the data bits and redundant bits, placing the
parity bits at positions 2^p, where p = {0, 1, 2, …… n}, and
find each parity bit value.
 Step 3 - Fill the parity bit obtained in the original data and
transmit the data to the receiver side.
 Step 4 - Check the received data using the parity bit and
detect any error in the data, and in case damage is present,
use the parity bit value to correct the error.

Next, we will solve an example using hamming code to clarify any


doubts regarding the working steps.

Example : The data bit sequence to be transmitted is 1011010, to be
solved using the hamming code method.

Determining the Number of Redundant Bits and Position in the Data

The data bits = 7

The redundant bit,

 2^r >= d+r+1


 2^4 >= 7+4+1
 16 >= 12, [So, the value of r = 4.]

Position of the redundant bit, applying the 2^p expression:

 2^0 - P1
 2^1 - P2
 2^2 - P4
 2^3 - P8

 Finding the Parity Bits, for even parity:

1. P1 parity bit is deduced by checking all the bits with 1’s in


the least significant location.

P1: 1, 3, 5, 7, 9, 11

 P1 - P1, 0, 1, 1, 1, 1
 P1 - 0

2. P2 parity bit is deduced by checking all the bits with 1’s in


the second significant location.

P2: 2, 3, 6, 7, 10, 11

 P2 - P2, 0, 0, 1, 0, 1
 P2 - 0

3. P4 parity bit is deduced by checking all the bits with 1’s in


the third significant location.

P4: 4, 5, 6, 7

 P4 - P4, 1, 0, 1
 P4 - 0

4. P8 parity bit is deduced by checking all the bits with 1’s in


the fourth significant location.

P8: 8, 9, 10, 11
 P8 - P8, 1, 0, 1
 P8 - 0

So, the data transmitted to the receiver side is the 11-bit
codeword with the parity values P1 = P2 = P4 = P8 = 0 filled into
the parity positions.

Suppose the bit at position 7 is flipped during transmission. The
receiver recomputes the parity bits on the received data; the
parity values obtained vary from the originally deduced parity
values, proving that an error occurred during data transmission.

To identify the position of the error bit, combine the new parity
values as,

 [0·2^3 + 1·2^2 + 1·2^1 + 1·2^0]
 = 7, i.e., the same as the assumed error position.
To correct the error, simply reverse the error bit to its
complement, i.e., for this case, change 0 back to 1, to obtain the
original data bit.
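The four steps can be put together in a generic sketch. One assumption to note: here the data bits are filled into the non-power-of-two positions most-significant first, so the intermediate parity values differ from the worked example's layout, but the detection and correction mechanics are identical:

```python
def hamming_encode(data_bits):
    """Even-parity Hamming encoding.

    data_bits: list of 0/1, filled into non-power-of-two positions
    1..n; parity bits occupy positions 1, 2, 4, 8, ...
    """
    r = 0
    while 2 ** r < len(data_bits) + r + 1:   # Step 1: 2^r >= d+r+1
        r += 1
    n = len(data_bits) + r
    code = [0] * (n + 1)                     # 1-indexed for clarity
    it = iter(data_bits)
    for pos in range(1, n + 1):
        if pos & (pos - 1):                  # not a power of two: data slot
            code[pos] = next(it)
    for p in range(r):                       # Step 2: fill parity bits
        mask = 1 << p
        parity = 0
        for pos in range(1, n + 1):
            if pos & mask:
                parity ^= code[pos]
        code[mask] = parity
    return code[1:]

def hamming_syndrome(code):
    """Step 4: 0 means no single-bit error; else the 1-based error position."""
    syndrome = 0
    for pos, bit in enumerate(code, start=1):
        if bit:
            syndrome ^= pos
    return syndrome

code = hamming_encode([1, 0, 1, 1, 0, 1, 0])   # the 7 data bits from above
original = list(code)
code[6] ^= 1                                   # flip position 7 in transit
err = hamming_syndrome(code)                   # locates the error position
code[err - 1] ^= 1                             # reverse the bit to correct
```

Flipping any single bit of the codeword makes the syndrome equal to that bit's position, so the receiver can correct it without a retransmission.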


Elementary Data link layer Protocols :

These protocols are also called primary data link layer protocols.
Protocols in the data link layer are designed so that this layer
can perform its basic functions: framing, error control and flow
control.

Elementary Data Link protocols are classified into three


categories, as given below −

 Protocol 1 − Unrestricted simplex protocol


 Protocol 2 − Simplex stop and wait protocol
 Protocol 3 − Simplex protocol for noisy channels.

1. Unrestricted Simplex Protocol

Data transmission is carried out in one direction only. The
transmitter (Tx) and receiver (Rx) are always ready, and the
processing time can be ignored. In this protocol, infinite buffer
space is available, and no errors occur; that is, there are no
damaged frames and no lost frames.
2.Simplex Stop and Wait protocol

In this protocol we assume that data is transmitted in one
direction only and that no errors occur; the receiver can only
process the received information at a finite rate. These
assumptions imply that the transmitter cannot send frames at a rate
faster than the receiver can process them.

The main problem here is how to prevent the sender from flooding
the receiver. The general solution for this problem is to have the
receiver send some sort of feedback to sender, the process is as
follows −

Step 1 − The receiver sends an acknowledgement frame back to the
sender telling the sender that the last received frame has been
processed and passed to the host.

Step 2 − Permission to send the next frame is granted.

Step 3 − The sender after sending the sent frame has to wait for an
acknowledge frame from the receiver before sending another frame.
This protocol is called Simplex Stop and wait protocol, the sender
sends one frame and waits for feedback from the receiver. When the
ACK arrives, the sender sends the next frame.

3.Simplex Protocol for Noisy Channel

Data transfer is only in one direction, consider separate sender


and receiver, finite processing capacity and speed at the receiver,
since it is a noisy channel, errors in data frames or
acknowledgement frames are expected. Every frame has a unique
sequence number.

After a frame has been transmitted, a timer is started for a finite
time. If the acknowledgement is not received before the timer
expires, the frame is retransmitted. Without such a timer, when the
acknowledgement gets corrupted or the sent data frame gets damaged,
the sender would wait indefinitely before transmitting the next
frame.
HDLC (High level Data Link Control)
HDLC (High-level Data Link Control) is a group of protocols or
rules for transmitting data between network points (sometimes
called nodes).

HDLC is a bit-oriented, synchronous data link layer protocol


created by the International Organization for Standardization
(ISO).

Since it is a data link protocol, data is organized into frames. A


frame is transmitted via the network to the destination that
verifies its successful arrival.

It is a bit - oriented protocol that is applicable for both point -


to - point and multipoint communications.

In HDLC, different types of stations operate on the channel:

 Primary Station - It handles establishing and tearing down the
data link used to share frames in the network; the frames it
transmits are known as commands.
 Secondary Station - They work under the command of the primary
station, and the frames transmitted by this station are known
as responses.

 Combined Station - This network station can work as both the


primary and secondary stations and handle commands and
responses.

HDLC Transfer Models

HDLC supports two types of transfer modes, normal response mode and
asynchronous balanced mode

 Normal Response Mode (NRM)


 Asynchronous Balanced Mode (ABM)

Normal Response Mode (NRM) − Here, there are two types of
stations: a primary station that sends commands and secondary
stations that respond to received commands. It is used for
both point - to - point and multipoint communications.
Asynchronous Balanced Mode (ABM) − Here, the configuration is
balanced, i.e. each station can both send commands and respond to
commands. It is used for only point - to - point communications.

HDLC Frame

HDLC is a bit - oriented protocol where each frame contains up to


six fields. The structure varies according to the type of frame.
The fields of a HDLC frame are −

 Flag − It is an 8-bit sequence that marks the beginning and


the end of the frame. The bit pattern of the flag is 01111110.
 Address − It contains the address of the receiver. If the
frame is sent by the primary station, it contains the
address(es) of the secondary station(s). If it is sent by the
secondary station, it contains the address of the primary
station. The address field may be from 1 byte to several
bytes.
 Control − It is 1 or 2 bytes containing flow and error control
information.
 Payload − This carries the data from the network layer. Its
length may vary from one network to another.
 FCS − It is a 2 byte or 4 bytes frame check sequence for error
detection. The standard code used is CRC (cyclic redundancy
code)

Types of HDLC Frames

There are three types of HDLC frames. The type of frame is


determined by the control field of the frame –

 I-frame − I-frames or Information frames carry user data from


the network layer. They also include flow and error control
information that is piggybacked on user data. The first bit of
control field of I-frame is 0.
 S-frame − S-frames or Supervisory frames do not contain
information field. They are used for flow and error control
when piggybacking is not required. The first two bits of
control field of S-frame is 10.
 U-frame − U-frames or Un-numbered frames are used for myriad
miscellaneous functions, like link management. It may contain
an information field, if required. The first two bits of
control field of U-frame is 11.
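The frame-type rules above amount to a check on the leading control-field bits. A sketch, taking the control field as a bit string in transmission order (the function name is illustrative):

```python
def hdlc_frame_type(control_bits: str) -> str:
    """Classify an HDLC frame from its control field bits.

    First bit 0            -> I-frame (carries user data)
    First two bits 10      -> S-frame (supervisory: flow/error control)
    First two bits 11      -> U-frame (unnumbered: link management)
    """
    if control_bits[0] == "0":
        return "I-frame"
    if control_bits[:2] == "10":
        return "S-frame"
    return "U-frame"
```

Because the I-frame case is decided by a single bit, seven bits of its control field remain free for sequence numbers and piggybacked acknowledgements.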

Point- to- point Protocol (PPP)


Point - to - Point Protocol (PPP) is a communication protocol of
the data link layer that is used to transmit multiprotocol data
between two directly connected (point-to-point) computers.

It is a byte - oriented protocol that is widely used in broadband


communications having heavy loads and high speeds. Since it is a
data link layer protocol, data is transmitted in frames. It is also
known as RFC 1661.

Services Provided by PPP

The main services provided by Point-to-Point Protocol are −

 Defining the frame format of the data to be transmitted.


 Defining the procedure of establishing link between two points
and exchange of data.
 Stating the method of encapsulation of network layer data in
the frame.
 Stating authentication rules of the communicating devices.
 Providing address for network communication.
 Providing connections over multiple links.
 Supporting a variety of network layer protocols by providing a
range of services.
Components of PPP

Point - to - Point Protocol is a layered protocol having three


components −

 Encapsulation Component − It encapsulates the datagram so that


it can be transmitted over the specified physical layer.
 Link Control Protocol (LCP) − It is responsible for
establishing, configuring, testing, maintaining and
terminating links for transmission. It also imparts
negotiation for set up of options and use of features by the
two endpoints of the links.
 Authentication Protocols (AP) − These protocols authenticate
endpoints for use of services. The two authentication
protocols of PPP are −
o Password Authentication Protocol (PAP)
o Challenge Handshake Authentication Protocol (CHAP)
 Network Control Protocols (NCPs) − These protocols are used
for negotiating the parameters and facilities for the network
layer. For every higher-layer protocol supported by PPP, there
is one NCP.

Some of the NCPs of PPP are −

 Internet Protocol Control Protocol (IPCP)


 OSI Network Layer Control Protocol (OSINLCP)
 Internetwork Packet Exchange Control Protocol (IPXCP)
 DECnet Phase IV Control Protocol (DNCP)
 NetBIOS Frames Control Protocol (NBFCP)
 IPv6 Control Protocol (IPV6CP)
Differences between Point-to-Point and Multi-point Communication

1. Point-to-point: the channel is shared between two devices.
   Multipoint: the channel is shared among multiple devices or
   nodes.
2. Point-to-point: there is a dedicated link between the two nodes.
   Multipoint: a link is provided at all times for sharing the
   connection among the nodes.
3. Point-to-point: the entire capacity is reserved between the two
   connected devices, with the possibility of wasted network
   bandwidth/resources.
   Multipoint: the entire capacity is not reserved by any two
   nodes, and the network bandwidth is maximally utilized.
4. Point-to-point: there is one transmitter and one receiver.
   Multipoint: there is one transmitter and many receivers.
5. Point-to-point: the smallest distance is most important to reach
   the receiver.
   Multipoint: the smallest distance is not important to reach the
   receiver.
6. Point-to-point: provides security and privacy because the
   communication channel is not shared.
   Multipoint: does not provide security and privacy because the
   communication channel is shared.
UNIT - III
Media Access Control
1. Random Access: ALOHA, Carrier sense multiple access (CSMA), CSMA with Collision
Detection, CSMA with Collision Avoidance,
2. Controlled Access: Reservation, Polling, Token Passing,
3. Channelization: frequency division multiple Access (FDMA), time division multiple
access (TDMA), code division multiple access (CDMA).

Wired LANs: Ethernet, Ethernet Protocol, Standard Ethernet, Fast Ethernet (100 Mbps),
Gigabit Ethernet, 10 Gigabit Ethernet.

The data link layer is used in a computer network to transmit data between two devices or nodes. It is divided into two sublayers: data link control and media access resolution/control. The upper sublayer is responsible for flow control and error control, and hence it is termed logical (data) link control. The lower sublayer handles and reduces collisions caused by multiple access on a shared channel, and hence it is termed media access control or multiple access resolution.

Data Link Layer

Data Link Control: A data link control is a reliable channel for transmitting data over a
dedicated link using various techniques such as framing, error control and flow control of
data packets in the computer network.
1. What is a multiple access protocol?

When a sender and receiver have a dedicated link to transmit data packets, data link control is enough to handle the channel. But when there is no dedicated path between two devices, multiple stations access the channel and may transmit data over it simultaneously, which can create collisions and crosstalk. Hence, a multiple access protocol is required to reduce collisions and avoid crosstalk between the channels.

For example, suppose that there is a classroom full of students. When a teacher asks a question, all the students (small channels) start answering at the same time (transmitting data simultaneously), so the answers overlap and information is lost. It is the responsibility of the teacher (the multiple access protocol) to manage the students and make them answer one at a time.

Following are the types of multiple access protocols, subdivided into different methods:
1.1 Random Access Protocol

In this protocol, all stations have equal priority to send data over the channel. In a random access protocol, no station depends on, or is controlled by, any other station. Depending on the channel's state (idle or busy), each station transmits its data frame. However, if more than one station sends data over the channel at the same time, there may be a collision or data conflict. Due to the collision, data frame packets may be lost or corrupted, and hence are not received correctly at the receiver end.

The different random-access methods for broadcasting frames on the channel are:

 ALOHA
 CSMA
 CSMA/CD
 CSMA/CA

1.1.1 ALOHA
Aloha means “Hello”, ALOHA is a multiple-access protocol that allows data to be
transmitted via a shared network channel. It was developed by Norman Abramson and his
associates in the 1970s at the University of Hawaii.

The basic operation of ALOHA is as follows

 Devices can transmit data whenever they have a message to send


 If two or more devices transmit simultaneously, their messages will collide and be
corrupted
 Devices that detect a collision will wait for a random amount of time before trying to
transmit again

Aloha Rules

1. Any station can transmit data to a channel at any time.


2. It does not require any carrier sensing.
3. Collision and data frames may be lost during the transmission of data through
multiple stations.
4. Aloha relies on acknowledgment of the frames; there is no collision detection.
5. It requires retransmission of data after some random amount of time.

Types of ALOHA in Computer Network

The two protocols of Aloha in computer networks are:

1. Pure Aloha
2. Slotted Aloha

Pure Aloha

In the case of Pure Aloha, transmission time remains continuous. When a station has a frame available, it sends the frame. If there is a collision and the frame gets destroyed, the sender waits for a random amount of time before retransmission. Let us now understand this technique:

Step 1: In the case of Pure ALOHA, the nodes transmit frames whenever the data is
available for sending.

Step 2: Whenever two or more nodes transmit data simultaneously, there is a chance of
collision and frames get destroyed.

Step 3: The sender will expect an acknowledgement from the receiver.

Step 4: When the acknowledgement is not received within a specified time, the sender
node will assume that the frame has been destroyed.

Step 5: If the frame is destroyed by a collision, the node waits for a random amount of time and sends it again. The waiting time must be random, since otherwise the same frames would collide again and again.
Step 6: As per Pure ALOHA, when the time-out period passes, every station must wait for a random time before resending the frame. This randomness helps avoid further collisions.

Figure shows an example of frame collisions in pure ALOHA.

In the figure there are four stations that contend with one another for access to the shared channel. All these stations are transmitting frames. Some of these frames collide because multiple frames are in contention for the shared channel. Only two frames, frame 1.1 and frame 3.2, survive. All other frames are destroyed.

To analyze pure ALOHA, its throughput and rate of successful frame transmission need to be predicted.
For that, let’s make some assumptions:

 All the frames should be the same length.


 Stations cannot generate new frames while transmitting or trying to transmit frames.
 The population of stations attempts to transmit (both new frames and old frames
that collided) according to a Poisson distribution.

 Vulnerable Time = 2 * Tt

The efficiency of Pure ALOHA:


 Pure Aloha: S = G * e^(-2G)
where G is the average number of frames generated by the stations during one frame transmission time Tt.
Maximum Efficiency:
Maximum Efficiency will be obtained when G=1/2
(pure aloha)max = 1/2 * e^-1
= 0.184
Which means, in Pure ALOHA, only about 18.4% of the time is used for successful
transmissions.

Slotted ALOHA
Slotted ALOHA was invented to improve the efficiency of pure ALOHA as chances of collision
in pure ALOHA are very high.
• In slotted ALOHA, the time of the shared channel is divided into discrete intervals called
slots.
• The stations can send a frame only at the beginning of the slot and only one frame is sent
in each slot.

In slotted ALOHA, if any station is not able to place the frame onto the channel at the
beginning of the slot i.e. it misses the time slot then the station has to wait until the
beginning of the next time slot.
• In slotted ALOHA, there is still a possibility of collision if two stations try to send at the
beginning of the same time slot as shown in fig.

• Slotted ALOHA still has an edge over pure ALOHA as chances of collision are reduced to
one-half.

Protocol Flow Chart for ALOHA

Explanation:

• A station which has a frame ready will send it.

• Then it waits for some time.

• If it receives the acknowledgement then the transmission is successful.

• Otherwise the station uses a backoff strategy, and sends the packet again.

• After many times if there is no acknowledgement then the station aborts the idea of
transmission.

Collision is possible only in the current slot. Therefore, Vulnerable Time is Tt.
The efficiency of Slotted Aloha

 Slotted Aloha: S = G * e^(-G)


Maximum Efficiency:
(slotted)max = 1 * e^-1
= 1/e = 0.368
Maximum Efficiency, in Slotted ALOHA, is 36.8%.
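The two throughput formulas above can be checked numerically. A minimal sketch in Python (function names are illustrative):

```python
import math

def pure_aloha_throughput(G):
    # S = G * e^(-2G): frames generated per frame time, times the probability
    # that no other frame overlaps the 2*Tt vulnerable window
    return G * math.exp(-2 * G)

def slotted_aloha_throughput(G):
    # S = G * e^(-G): the vulnerable time shrinks to a single slot (Tt)
    return G * math.exp(-G)

# Maxima occur at G = 1/2 (pure) and G = 1 (slotted)
print(round(pure_aloha_throughput(0.5), 3))    # ~0.184
print(round(slotted_aloha_throughput(1.0), 3))  # ~0.368
```

Evaluating at the maximizing loads reproduces the 18.4% and 36.8% figures quoted above.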

1.1.2 CSMA (Carrier Sense Multiple Access)

Carrier Sense Multiple Access (CSMA) is a media access protocol in which a station senses the traffic on the channel (idle or busy) before transmitting data. If the channel is idle, the station can send data on the channel; otherwise, it must wait until the channel becomes idle. Hence, it reduces the chances of a collision on the transmission medium.

CSMA Access Modes

1-Persistent: In the 1-persistent mode of CSMA, each node first senses the shared channel and, if the channel is idle, immediately sends the data. Otherwise it keeps sensing the channel continuously and transmits the frame unconditionally as soon as the channel becomes idle.

Non-Persistent: In this mode, each node senses the channel before transmitting and, if the channel is idle, immediately sends the data. Otherwise, the station waits for a random time (instead of sensing continuously) and then senses the channel again, transmitting when it is found idle.

P-Persistent: It is a combination of the 1-persistent and non-persistent modes, used with slotted channels. Each node senses the channel and, if the channel is idle, sends a frame with probability p. With probability q = 1 - p it defers the frame to the next time slot and repeats the process.

O-Persistent: In the O-persistent method, a transmission order (priority) is defined among the stations before transmission on the shared channel. When the channel is found idle, each station waits for its turn in this order to transmit the data.
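The persistence strategies above can be sketched as small decision loops. A minimal simulation, where `channel_idle` is a hypothetical callback that reports the channel state:

```python
import random

def one_persistent(channel_idle):
    # 1-persistent: sense continuously and transmit the moment the channel is idle.
    while not channel_idle():
        pass  # keep sensing without pause
    return "transmit"

def non_persistent(channel_idle, max_backoff=1.0):
    # Non-persistent: if busy, back off a random time before sensing again.
    while not channel_idle():
        backoff = random.uniform(0, max_backoff)  # a real node would sleep here
    return "transmit"

def p_persistent(channel_idle, p=0.3):
    # p-persistent (slotted channels): when idle, transmit with probability p,
    # otherwise defer to the next time slot (probability q = 1 - p) and sense again.
    while True:
        if channel_idle():
            if random.random() < p:
                return "transmit"
        # else: channel busy, or deferred -> sense again at the next slot
```

The difference between the modes is only in what a station does when it finds the channel busy (or, for p-persistent, even when idle), which is exactly what trades collision probability against channel utilization.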

1.1.3 CSMA/CD

CSMA/CD stands for Carrier Sense Multiple Access/Collision Detection, with collision
detection being an extension of the CSMA protocol. This creates a procedure that regulates
how communication must take place in a network with a shared transmission medium

Carrier Sense Multiple Access with Collision Detection (CSMA/CD) is a network protocol for carrier transmission that operates in the MAC sublayer.

When a collision is identified, the CSMA/CD quickly terminates the transmission by sending
a signal, saving the sender's time to send the data packet. Let's say each station detects a
collision as the packets are transmitted. In such a situation, the CSMA/CD immediately
issues a jam signal to halt the transmission and delays sending another data packet until a
random time has passed. When a free channel is discovered, the data will be sent.

Algorithm

 The station that intends to transmit the data senses whether the carrier is idle or busy. If the carrier is idle, the transmission starts.
 The condition Tt >= 2 * Tp, where Tt is the transmission delay and Tp is the propagation delay, is used by the transmitting station to identify collisions.
 As soon as it notices a collision, the station sends the jamming signal.
 Following a collision, the transmitting station interrupts transmission and waits for a
predetermined period, known as the "back-off time". The station then re-transmits
the signal.
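The condition Tt >= 2 * Tp directly yields a minimum frame size, since Tt = frame_bits / bandwidth. A small illustrative calculation (the 10 Mbps figure and the 25.6 µs worst-case propagation delay are the classic Ethernet example):

```python
def min_frame_bits(bandwidth_bps, prop_delay_s):
    # Collision detection requires Tt >= 2 * Tp: the sender must still be
    # transmitting when a collision signal returns from the far end of the cable.
    # Tt = frame_bits / bandwidth, so frame_bits >= bandwidth * 2 * Tp.
    return bandwidth_bps * 2 * prop_delay_s

# Classic 10 Mbps Ethernet with ~25.6 us worst-case one-way propagation delay:
bits = min_frame_bits(10e6, 25.6e-6)
print(bits, bits / 8)  # 512 bits -> 64 bytes, the Ethernet minimum frame size
```

This is why Ethernet pads short frames up to 64 bytes: anything shorter could finish transmitting before a collision at the far end is heard.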

Workflow

The following illustration builds a comprehensive understanding of the workflow


mechanism of CSMA/CD:
Efficiency

Although CSMA/CD is more efficient than CSMA, a few considerations are discussed below
relating to its effectiveness:

 The efficiency of CSMA/CD decreases with increase in distance.


 Even though efficiency improves with a larger packet size, there is a limit on how large packets can be: at most 1500 bytes of data per frame.

Advantages

The following points discuss the advantages of using CSMA/CD:

 For collision detection, CSMA/CD is better than CSMA.


 It is used for quick collision detection on a shared channel.
 Compared to CSMA/CA, it has lesser overhead.
 Using CSMA/CD helps avoid wasted transmissions, since colliding frames are aborted quickly.
 Whenever needed, a station can consume or share all of the bandwidth.

Disadvantages

Disadvantages are given as follows:

 After reaching the maximum distance, collision detection is impossible.


 Not ideal for larger networks, as collisions become difficult to detect.
 The performance gets considerably disrupted when more devices are added.

1.1.4 CSMA/CA

Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA) is a network protocol for carrier transmission that operates in the MAC sublayer.

Compared to wired networks, wireless systems cannot be monitored as reliably. A node beyond the first station's range may cause collisions, since it cannot detect the other's attempts to send data packets over the network. CSMA/CA helps lower the probability of such collisions. Each station in a decentralized network must abide by a set of rules and manage communication among themselves.

LBT principle

The "Listen Before Talk" (LBT) principle is the foundation of CSMA/CA. Before a station can begin transmitting, it must check the line to verify that it is free. However, this is only the first action; collisions are effectively prevented with the help of additional functions that are part of the procedure.

Working

The following illustration shows the workflow of CSMA/CA in detail.

The steps above are followed by wireless networks that adopt CSMA/CA. These are
explained in more detail below.

1. First, the stations sense the transmission medium. This implies that the carrier sense
keeps a close eye on the radio channel and determines whether other stations are
transmitting at the same time.
2. If the transmission medium is already in use, a random backoff is initiated, during which the station waits for a determined period before a new check begins. The same happens at all other stations that are not actively transmitting or receiving: each maintains a network allocation vector, set from the duration fields of overheard frames, which tells the station how long it must wait before accessing the wireless medium.
3. If the network is available, the station starts an interframe-space wait, which regulates how long a station waits before commencing a transmission. During this time the channel is examined thoroughly. If it remains free, a random backoff begins, and the optional request-to-send / clear-to-send exchange can start, which minimizes frame conflicts caused by the hidden node problem. If the request to send is received successfully at the receiver side and there has not been a collision, the reply frame grants the sender permission to use the transmission medium.
4. All other stations are alerted that the network is in use. As a result, they raise their
network allocation vector again and delay making another attempt to verify if the
channel is free.
5. The station then begins the transmission. After waiting for the time required to process a data packet, the receiver confirms that the packet has been successfully received by sending an ACK (acknowledgement) frame to the sender. The stations also reset their network allocation vector to 0, indicating that the medium is now open for a new transmission.
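The steps above can be condensed into a short sketch; the helper callbacks (`channel_idle`, `send_frame`, `ack_received`) and the abstracted timing are hypothetical simplifications, not the actual 802.11 state machine:

```python
import random

def csma_ca_send(channel_idle, send_frame, ack_received, max_attempts=5):
    # Minimal listen-before-talk loop; IFS waits and slot timing are abstracted away.
    for attempt in range(max_attempts):
        while not channel_idle():
            pass                                   # step 1: carrier sense, wait for a free medium
        backoff = random.randint(0, 2 ** attempt)  # steps 2-3: random backoff; window grows per retry
        # (a real station would wait `backoff` idle slots here before sending)
        send_frame()                               # step 5: transmit the data frame
        if ack_received():                         # receiver confirms with an ACK frame
            return True
        # no ACK: assume the frame was lost or collided; retry with a larger window
    return False
```

Calling it with callbacks that always report an idle channel and a received ACK returns success on the first attempt; a permanently missing ACK exhausts the retries and returns failure.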

1.2 Controlled Access Protocols in Computer Network

In controlled access, the stations seek information from one another to find which station
has the right to send. It allows only one node to send at a time, to avoid the collision of
messages on a shared medium. The three controlled-access methods are:

1. Reservation
2. Polling
3. Token Passing

1.2.1 Reservation

 In the reservation method, a station needs to make a reservation before sending


data.
 The timeline has two kinds of periods:
1. Reservation interval of fixed time length
2. Data transmission period of variable frames.
 If there are M stations, the reservation interval is divided into M slots, and each
station has one slot.
 Suppose if station 1 has a frame to send, it transmits 1 bit during the slot 1. No other
station is allowed to transmit during this slot.
 In general, the i-th station may announce that it has a frame to send by inserting a 1 bit into the i-th slot. After all M slots have been checked, each station knows which stations wish to transmit.
 The stations which have reserved their slots transfer their frames in that order.
 After data transmission period, next reservation interval begins.
 Since everyone agrees on who goes next, there will never be any collisions.

The following figure shows a situation with five stations and a five-slot reservation frame. In
the first interval, only stations 1, 3, and 4 have made reservations. In the second interval,
only station 1 has made a reservation.
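The reservation interval can be modeled as a bitmap; a small sketch of the figure's first interval (station numbering follows the figure, the function name is illustrative):

```python
def reservation_round(requests, n_stations=5):
    # Stations are numbered 1..n; station i sets bit i of the reservation frame
    # if it has a frame to send.
    bitmap = [1 if i in requests else 0 for i in range(1, n_stations + 1)]
    # Every station sees the same bitmap, so all agree on the transmission
    # order and no collisions can occur in the data period.
    order = [i for i, bit in zip(range(1, n_stations + 1), bitmap) if bit]
    return bitmap, order

# First interval from the figure: stations 1, 3 and 4 have reservations
print(reservation_round({1, 3, 4}))  # ([1, 0, 1, 1, 0], [1, 3, 4])
```

The second interval of the figure corresponds to `reservation_round({1})`, where only station 1 reserves a slot.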

Advantages of Reservation: The main advantage of reservation is that channel access times and data rates are predictable, since both are fixed.

 Priorities can be set to provide speedier access for selected secondary stations.


 Predictable network performance: Reservation-based access methods can provide
predictable network performance, which is important in applications where latency
and jitter must be minimized, such as in real-time video or audio streaming.
 Reduced contention: Reservation-based access methods can reduce contention for
network resources, as access to the network is pre-allocated based on reservation
requests. This can improve network efficiency and reduce packet loss.
 Quality of Service (QoS) support: Reservation-based access methods can support
QoS requirements, by providing different reservation types for different types of
traffic, such as voice, video, or data. This can ensure that high-priority traffic is given
preferential treatment over lower-priority traffic.
 Efficient use of bandwidth: Reservation-based access methods can enable more
efficient use of available bandwidth, as they allow for time and frequency
multiplexing of different reservation requests on the same channel.
 Support for multimedia applications: Reservation-based access methods are well-
suited to support multimedia applications that require guaranteed network
resources, such as bandwidth and latency, to ensure high-quality performance.

Disadvantages of Reservation:

 High dependence on the reliability of the controller.


 Decrease in capacity and channel data rate under light loads; increase in turn-around
time.

1.2.2 Polling

 Polling process is similar to the roll-call performed in class. Just like the teacher, a
controller sends a message to each node in turn.
 In this, one acts as a primary station(controller) and the others are secondary
stations. All data exchanges must be made through the controller.
 The message sent by the controller contains the address of the node being selected
for granting access.
 Although all nodes receive the message the addressed one responds to it and sends
data if any. If there is no data, usually a “poll reject”(NAK) message is sent back.
 Problems include high overhead of the polling messages and high dependence on
the reliability of the controller.
Advantages of Polling:

 The maximum and minimum access times and data rates on the channel are fixed and predictable.
 It has maximum efficiency.
 It has maximum bandwidth.
 No slot is wasted in polling.
 Priorities can be assigned to ensure faster access for some secondary stations.

Disadvantages of Polling

 It consumes more time.


 Link sharing can be biased, since the polling order is decided by the controller.
 A station might have no data to send when it is polled, wasting the poll.
 An increase in the turnaround time leads to a drop in the data rates of the channel
under low loads.

Efficiency Let Tpoll be the time for polling and Tt be the time required for transmission of
data. Then,

Efficiency = Tt/(Tt + Tpoll)
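The efficiency formula can be computed directly; the sample numbers below are arbitrary:

```python
def polling_efficiency(Tt, Tpoll):
    # Fraction of channel time spent on useful data transmission:
    # each data transmission of length Tt costs an extra Tpoll of polling overhead.
    return Tt / (Tt + Tpoll)

print(polling_efficiency(Tt=8, Tpoll=2))  # 0.8
```

As the polling overhead Tpoll grows relative to the transmission time Tt, efficiency falls toward zero, which is the "high overhead of polling messages" problem noted above.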

1.2.3 Token Passing

 In token passing scheme, the stations are connected logically to each other in form
of ring and access to stations is governed by tokens.
 A token is a special bit pattern or a small message, which circulate from one station
to the next in some predefined order.
 In Token ring, token is passed from one station to another adjacent station in the
ring whereas incase of Token bus, each station uses the bus to send the token to the
next station in some predefined order.
 In both cases, the token represents permission to send. If a station has a frame queued for transmission when it receives the token, it can send that frame before it passes the token to the next station. If it has no queued frame, it simply passes the token along.
 After sending a frame, each station must wait for all N stations (including itself) to
send the token to their neighbours and the other N – 1 stations to send a frame, if
they have one.
 There exists problems like duplication of token or token is lost or insertion of new
station, removal of a station, which need be tackled for correct and reliable
operation of this scheme.

Performance of token ring can be concluded by 2 parameters:-

1. Delay, is a measure of time between when a packet is ready and when it is delivered.
So, the average time (delay) required to send a token to the next station = a/N.
2. Throughput, which is a measure of successful traffic.

Throughput, S = 1/(1 + a/N) for a<1

and

S = 1/{a(1 + 1/N)} for a>1.


where N = number of stations
a = Tp/Tt
(Tp = propagation delay and Tt = transmission delay)
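The two throughput formulas can be evaluated for both regimes; the sample values of a and N are illustrative:

```python
def token_ring_throughput(a, N):
    # a = Tp / Tt (propagation delay over transmission delay), N = number of stations
    if a < 1:
        return 1 / (1 + a / N)        # S = 1 / (1 + a/N)      for a < 1
    return 1 / (a * (1 + 1 / N))      # S = 1 / (a * (1 + 1/N)) for a > 1

# Small ring (a < 1): throughput stays close to 1
print(round(token_ring_throughput(a=0.5, N=10), 3))  # 0.952
# Large ring (a > 1): throughput drops as propagation delay dominates
print(round(token_ring_throughput(a=2.0, N=10), 3))  # 0.455
```

The sketch makes the qualitative behavior visible: token passing performs well when the ring is short relative to the frame length (small a) and degrades as propagation delay grows.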

Advantages of Token passing

 It can be used with modern star-wired cabling and includes built-in management features such as fault isolation and automatic reconfiguration.
 It provides good throughput under high-load conditions.

Disadvantages of Token passing

 Its cost is expensive.


 Its components are more expensive than those of other, more widely used standards.
 Token ring hardware is complex, which often ties an installation to a single manufacturer.

1.3 Channelization protocols

Channelization Protocols are broadly classified as follows:

 FDMA(Frequency-Division Multiple Access)


 TDMA(Time-Division Multiple Access)
 CDMA(Code-Division Multiple Access)

Lets first understand the need for channelization protocols using the example given below:

Let's consider a transmission line with four users, namely D1, D2, D3, D4. When data destined for D2 is transmitted from the source, it can also be accessed by D1, D3, and D4 because they are all on the same transmission line. D1, D3, and D4 can knowingly or unknowingly access the data destined for D2, possibly believing that the data was meant for them.

We use Channelization Protocols to solve this problem. Channelization is a way to provide


multiple access by sharing the available bandwidth in time, frequency, or through code
between source and destination nodes. Channelization Protocols can be classified as

 FDMA (Frequency Division Multiple Access)


 TDMA (Time Domain Multiple Access)
 CDMA (Code Division Multiple Access)

1.3.1 FDMA (Frequency Division Multiple Access)

In this technique, the bandwidth is divided into frequency bands, and each frequency band
is allocated to a particular station to transmit its data. The frequency band distributed to the
stations becomes reserved. Each station uses a band-pass filter to confine their data
transmission into their assigned frequency band. Each frequency band has some gap in-
between to prevent interference of multiple bands, and these are called guard bands.
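The band-plus-guard-band allocation can be sketched numerically; the 1 MHz total bandwidth and 10 kHz guard band below are illustrative numbers, not values from any standard:

```python
def fdma_bands(total_bw_hz, n_stations, guard_hz):
    # Split the total bandwidth into n equal bands, separated by guard bands.
    usable = total_bw_hz - (n_stations - 1) * guard_hz  # guard bands are overhead
    band = usable / n_stations
    bands, start = [], 0.0
    for _ in range(n_stations):
        bands.append((start, start + band))  # (low edge, high edge) of each band
        start += band + guard_hz             # skip the guard band before the next one
    return bands

# e.g. 1 MHz shared by 4 stations with 10 kHz guard bands
print(fdma_bands(1_000_000, 4, 10_000))
```

The sketch also shows the disadvantage noted below: the three guard bands consume 30 kHz that no station can use.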

Advantages of FDMA

 FDMA system is easy to implement, and it's not very complex.


 When the traffic is uniform, FDMA becomes very efficient due to its separate
frequency band for each station.
 All stations can run simultaneously at all times without waiting for their turn.
 If the channel is not being used, then it sits idle.
 There is no restriction regarding the baseband or modulation.

Disadvantages of FDMA

 The bandwidth channel is narrow.


 The planning of the network and spectrum is very time-consuming.
 The presence of a guard band reduces the bandwidth available for use.
 Bandwidth is assigned permanently to each station which reduces its flexibility.

1.3.2 TDMA (Time Domain Multiple Access)

TDMA is another technique to enable multiple accesses in a shared medium. In this, the
stations share the channel's bandwidth time-wise. Every station is allocated a fixed time to
transmit its signal. The data link layer tells its physical layer to use the allotted time. TDMA
requires synchronization between stations. There is a time gap between the time intervals,
called guard time, which is assigned for the synchronization between stations. The rate of
data in TDMA is greater than FDMA but lesser than CDMA.
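The round-robin slot assignment can be sketched as a schedule; station labels and the omission of guard time are illustrative simplifications:

```python
def tdma_schedule(stations, n_frames=2):
    # Round-robin time slots: each station transmits only in its own slot,
    # and the pattern repeats every frame. (Guard time between slots is
    # omitted in this sketch.)
    return [stations[i % len(stations)] for i in range(n_frames * len(stations))]

print(tdma_schedule(["A", "B", "C"]))  # ['A', 'B', 'C', 'A', 'B', 'C']
```

Because each station knows exactly which slots are its own, no two stations ever transmit at the same time, which is why TDMA needs synchronization but no frequency guard bands.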

Advantages of TDMA

 TDMA separates users according to time, and this ensures that there is no
interference from the simultaneous transmissions
 No frequency guard band is required in TDMA
 It shares a single carrier frequency with multiple users.
 It saves power as the user is only active while transmitting in its allotted time frame.
 There is no need for precise, narrow band filters as there is no division in the
frequency range.

Disadvantages of TDMA

 If the stations are spread over a wide area, there is a propagation delay, and we use
guard time to counter this.
 Slot allocation in TDMA is complex.
 Synchronization between different channels is difficult to achieve. Each station has
to know the beginning of its slot and its location.
 The stations configured according to TDMA demand high peak power during uplink
in their allotted time slot.

1.3.3 CDMA (Code Division Multiple Access)

In the CDMA technique, communication happens using codes. Different stations can transmit their signals on the same channel simultaneously by using different codes. There is only one channel in CDMA, and it carries all the signals. CDMA is based on a coding technique where each station is assigned a code (a sequence of numbers called chips). It differs from TDMA in that all the stations can transmit simultaneously, as there is no time sharing, and it differs from FDMA in that the single channel occupies the whole bandwidth.
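The chip-code idea can be shown with a toy two-station example using orthogonal codes (the 2-chip codes below are the smallest Walsh codes; real systems use much longer sequences):

```python
# Orthogonal chip sequences for two stations -- a toy example
c1 = [1, 1]
c2 = [1, -1]

def encode(bit, code):
    # A data bit (+1 or -1) is spread by multiplying it with the station's code.
    return [bit * chip for chip in code]

def decode(channel, code):
    # The receiver takes the inner product with the desired station's code and
    # divides by the code length; orthogonality cancels the other station's signal.
    return sum(x * c for x, c in zip(channel, code)) // len(code)

# Both stations transmit at once; the channel carries the chip-wise sum.
channel = [a + b for a, b in zip(encode(1, c1), encode(-1, c2))]
print(decode(channel, c1), decode(channel, c2))  # 1 -1
```

Even though the two transmissions overlap completely in time and frequency, each receiver recovers its own station's bit, which is exactly what distinguishes CDMA from TDMA and FDMA.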
Advantages of CDMA

 CDMA operates at low power than FDMA and TDMA


 The capacity of a CDMA system is higher than FDMA and TDMA
 CDMA is cost-effective
 It provides high voice quality than TDMA and FDMA.
 It has the most outstanding spectrum efficiency.
 It is difficult for eavesdroppers to decode communication on CDMA.

Disadvantages of CDMA

 The performance of CDMA decreases with the increasing number of users.


 The cost of CDMA is higher due to the requirement of types of equipment.
 Incorrect code selection can induce delay.

Ethernet

What is Ethernet?
 Ethernet is a communication protocol that was created at Xerox PARC in 1973 by Robert Metcalfe and others, and connects computers on a network over a wired connection.
 It is a widely used LAN protocol, which was initially known as the Alto Aloha Network. It connects computers within local area networks and wide area networks.
 Numerous devices like printers and laptops can be connected by LAN and WAN within buildings, homes, and even small neighborhoods.

It offers a simple user interface that helps to connect various devices easily, such as
switches, routers, and computers. A local area network (LAN) can be created with the help
of a single router and a few Ethernet cables, which enable communication between all
linked devices.
Wireless networks have replaced Ethernet in many areas; however, Ethernet is still more common for wired networking. Wi-Fi reduces the need for cabling, as it allows users to connect smartphones or laptops to a network without a cable. Compared with Gigabit Ethernet, the 802.11ac Wi-Fi standard provides faster maximum data transfer rates. Still, wired connections are more secure and less prone to interference than a wireless network, which is the main reason many businesses and organizations still use Ethernet.

History of Ethernet

 At the beginning of the 1970s, Ethernet was developed over several years, evolving from ALOHAnet at the University of Hawaii. The work culminated in a scientific paper published in 1976 by Metcalfe together with David Boggs. Late in 1977, Xerox Corporation filed a patent on this technology.
 Ethernet was established as a standard by Xerox, Intel, and Digital Equipment Corporation (DEC); the companies joined forces to improve Ethernet in 1979 and published the first standard in 1980. Other technologies, including the CSMA/CD protocol, were developed through this process, which later became known as IEEE 802.3. The process also led to the creation of token bus (802.4) and token ring (802.5).
 In 1983, the technology became an IEEE standard: 802.3 was born, well before 802.11. Many modern PCs started to include Ethernet on the motherboard, because single-chip Ethernet controllers made Ethernet cards very inexpensive. Consequently, some small companies began using Ethernet networks in the workplace, though telephone-based four-wire lines remained in use.
 Ethernet connections over twisted-pair and fiber-optic cables were not established until the early 1990s, which led to the development of the 100 Mbps Fast Ethernet standard in 1995.

Advantages of Ethernet

 It is not very costly to form an Ethernet network. Compared to other systems of connecting computers, it is relatively inexpensive.
 An Ethernet network provides high security for data, as it can use firewalls for data security.
 A Gigabit network allows users to transmit data at speeds of 1-100 Gbps.
 In this network, the quality of the data transfer is maintained.
 In this network, administration and maintenance are easier.
 The latest versions of gigabit Ethernet and wireless Ethernet have the potential to transmit data at speeds of 1-100 Gbps.

Disadvantages of Ethernet

 The wired Ethernet network restricts you in terms of distance; it is best used over short distances.
 A wired Ethernet network needs cables, hubs, switches, and routers, which increase the cost of installation.
 It is less suitable for interactive applications that need quick transfer of very small amounts of data.
 In an Ethernet network, no acknowledgment is sent by the receiver after accepting a packet.
 Setting up a wireless Ethernet network can be difficult if you have no experience in the networking field.
 Compared with a wired Ethernet network, a wireless network is less secure.
 The full-duplex data communication mode is not supported by the 100Base-T4 version.
 Additionally, troubleshooting is very difficult in an Ethernet network, as it is not easy to determine which node or cable is causing the problem.

Ethernet protocol

The Ethernet protocol is available around us in various forms, but present-day Ethernet is defined by the IEEE 802.3 standard. This section gives an overview of Ethernet protocol basics, its types, and how it works.

What is an Ethernet Protocol?

The most popular and oldest LAN technology is the Ethernet protocol, so it is frequently used in LAN environments: in almost all networks in offices, homes, public places, enterprises, and universities. Ethernet has gained huge popularity because of the high data rates it achieves over long distances using optical media.

The Ethernet protocol uses a star or linear bus topology, which is the foundation of the IEEE 802.3 standard. The main reasons Ethernet is used so widely are that it is simple to understand, maintain, and implement, provides flexibility, and permits low-cost network implementation.

Ethernet Protocol Architecture

In the OSI network model, the Ethernet protocol operates at the first two layers, the Physical and Data Link layers; Ethernet further separates the Data Link layer into two different sublayers, the Logical Link Control layer and the Medium Access Control layer.
The physical layer in the network mainly focuses on hardware elements such as repeaters, cables, and network interface cards (NIC). For instance, an Ethernet designation like 100BaseTX or 10BaseT indicates the type of cable that can be used, the maximum cable length, and the appropriate topology.

The data link layer in the network system mainly addresses the way data packets are transmitted from one node to another. Ethernet uses an access method called CSMA/CD (Carrier Sense Multiple Access/Collision Detection), a system in which every computer listens to the cable before transmitting anything on the network.

Ethernet protocols are available in different flavors and operate at various speeds using different types of media, but all Ethernet versions are compatible with each other. These versions can be mixed and matched on the same network with the help of network devices such as hubs, switches, and bridges that connect network segments using different types of media.

The Ethernet protocol's actual transmission speed is measured in Mbps
(millions of bits per second). Ethernet comes in three speed versions:
10 Mbps, called Standard Ethernet; 100 Mbps, called Fast Ethernet; and
1,000 Mbps, called Gigabit Ethernet. The transmission speed is the maximum
speed attainable over the network under ideal conditions; the throughput of an
Ethernet network rarely reaches this maximum.
Standard Ethernet

The original Ethernet was created in 1976 at Xerox's Palo Alto Research Center (PARC). Since
then, it has gone through four generations:

a. Standard Ethernet (10 Mbps)

b. Fast Ethernet (100 Mbps)

c. Gigabit Ethernet (1 Gbps)

d. Ten-Gigabit Ethernet (10 Gbps)

MAC Sublayer

In Standard Ethernet, the MAC sublayer governs the operation of the access method. It also
frames data received from the upper layer and passes them to the physical layer.

1. Frame Format

The Ethernet frame contains seven fields: preamble, SFD, DA, SA, length or type of protocol
data unit (PDU), upper-layer data, and the CRC. Ethernet does not provide any mechanism
for acknowledging received frames, making it what is known as an unreliable medium.
Acknowledgments must be implemented at the higher layers.

Preamble: The first field of the 802.3 frame contains 7 bytes (56 bits) of
alternating 0s and 1s that alert the receiving system to the coming frame and
enable it to synchronize its input timing. The pattern provides only an alert
and a timing pulse.

Start frame delimiter (SFD). The second field (1 byte: 10101011) signals the
beginning of the frame. The SFD warns the station or stations that this is the
last chance for synchronization. The last 2 bits are 11, alerting the receiver
that the next field is the destination address.

Destination address (DA). The DA field is 6 bytes and contains the physical
address of the destination station or stations to receive the packet.

Source address (SA). The SA field is also 6 bytes and contains the physical
address of the sender of the packet.

Length or type. This field is defined as a type field or length field. The original Ethernet used
this field as the type field to define the upper-layer protocol using the MAC frame.

Data. This field carries data encapsulated from the upper-layer protocols. It is a minimum of
46 and a maximum of 1500 bytes.

CRC. The last field contains error detection information, in this case a CRC-32.
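The field layout above can be illustrated with a short parsing sketch. This is a simplified illustration, not production code: the preamble and SFD are assumed to be stripped by the hardware, and the CRC is read but not verified.

```python
import struct

def parse_ethernet_frame(frame: bytes) -> dict:
    """Parse the MAC-sublayer fields of an 802.3 frame. The preamble and
    SFD are physical-layer signals stripped before this point, so the
    frame starts at the destination address (DA)."""
    if len(frame) < 64:  # minimum frame size, DA through CRC
        raise ValueError("frame shorter than the 64-byte minimum")
    dst = frame[0:6]                          # 6-byte destination address
    src = frame[6:12]                         # 6-byte source address
    (length_or_type,) = struct.unpack("!H", frame[12:14])
    payload = frame[14:-4]                    # 46..1500 bytes of data
    (crc,) = struct.unpack("!I", frame[-4:])  # CRC-32 (not verified here)
    return {
        "dst": dst.hex(":"),
        "src": src.hex(":"),
        "type_or_len": length_or_type,
        "payload_len": len(payload),
        "crc": crc,
    }
```

For a minimum-size frame (46-byte payload), the total from DA through CRC is 6 + 6 + 2 + 46 + 4 = 64 bytes.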

Different Types of Ethernet Networks


An Ethernet device with CAT5/CAT6 copper cabling can be connected to fiber
optic cable through fiber optic media converters. This extension over fiber
significantly increases the distance the network can cover. There are several
kinds of Ethernet networks, discussed below:

 Fast Ethernet: This type of Ethernet is usually supported by twisted-pair
or CAT5 cable and can transfer or receive data at around 100 Mbps. If a
device such as a camera or laptop is connected to the network, it functions
at 100Base on the fiber side of the link and at 10/100Base on the copper
side. Fast Ethernet uses both fiber optic cable and twisted-pair cable for
communication. The three categories of Fast Ethernet are 100BASE-TX,
100BASE-FX, and 100BASE-T4.
 Gigabit Ethernet: This type of Ethernet network is an upgrade from Fast
Ethernet and uses fiber optic cable and twisted-pair cable for
communication. It can transfer data at a rate of 1000 Mbps (1 Gbps). In
modern times, Gigabit Ethernet is the most common. This network type uses
CAT5e or more advanced cables, which can carry data at rates up to 10 Gbps.

The primary intention of developing Gigabit Ethernet was to fulfill user
requirements such as faster data transfer and a faster communication network.

 10-Gigabit Ethernet: This type of network can transmit data at a rate of
10 Gbps and is considered a more advanced, high-speed network. It uses
CAT6a or CAT7 twisted-pair cables as well as fiber optic cables. With
fiber optic cable, this network can be extended up to nearly 10,000 meters.
UNIT – IV : The Network Layer Design Issues – Store and Forward Packet
Switching-Services Provided to the Transport layer- Implementation of
Connectionless Service-Implementation of Connection Oriented Service-
Comparison of Virtual Circuit and Datagram Networks, Routing Algorithms-
The Optimality principle-Shortest path, Flooding, Distance vector, Link state,
Hierarchical, Congestion Control algorithms-General principles of congestion
control, Congestion prevention policies, Approaches to Congestion Control-
Traffic Aware Routing- Admission Control-Traffic Throttling-Load Shedding.
Traffic Control Algorithm-Leaky bucket & Token bucket.

Internet Working: How networks differ- How networks can be connected-


Tunneling, internetwork routing-, Fragmentation, network layer in the internet
– IP protocols-IP Version 4 protocol-IPV4 Header Format, IP addresses,
Classful Addressing, CIDR, NAT, Subnets-IP Version 6-The main IPV6 header,
Transition from IPV4 to IPV6, Comparison of IPV4 & IPV6- Internet control
protocols- ICMP, ARP, DHCP

The network layer is mainly focused on getting packets from the source to the
destination, routing, error handling, and congestion control.
Before learning about design issues in the network layer, let's look at its
various functions.

 Addressing
Maintains the addresses of both the source and the destination in the frame
header and performs addressing to identify the various devices in the network.
 Packetizing
This is performed by the Internet Protocol (IP). The network layer
encapsulates the data received from its upper layer into packets.
 Routing
This is the most important functionality. The network layer chooses the
most suitable path for data transmission from source to destination.
 Inter-networking
It works to deliver a logical connection across multiple networks.
The network layer, or layer 3 of the OSI (Open Systems Interconnection) model,
is concerned with the delivery of data packets from the source to the
destination across multiple hops or links.

It is the lowest layer that is concerned with end-to-end transmission. Its
design issues encompass the services provided to the upper layers as well as
the internal design of the layer.

The design issues can be elaborated under four heads −

1. Store − and − Forward Packet Switching


2. Services to Transport Layer
3. Providing Connection Oriented Service
4. Providing Connectionless Service

1. Store − and − Forward Packet Switching

The network layer operates in an environment that uses store and forward
packet switching. The node which has a packet to send, delivers it to the
nearest router. The packet is stored in the router until it has fully arrived and
its checksum is verified for error detection. Once, this is done, the packet is
forwarded to the next router. Since, each router needs to store the entire
packet before it can forward it to the next hop, the mechanism is called store −
and − forward switching.

2. Services to Transport Layer

The network layer provides services to its immediate upper layer, namely the
transport layer, through the network − transport layer interface. The two
types of services provided are −
 Connection − Oriented Service − In this service, a path is setup between
the source and the destination, and all the data packets belonging to a
message are routed along this path.
 Connectionless Service − In this service, each packet of the message is
considered as an independent entity and is individually routed from the
source to the destination.

The objectives of the network layer while providing these services are −

 The services should not be dependent upon the router technology.


 The router configuration details should not be of a concern to the
transport layer.
 A uniform addressing plan should be made available to the transport
layer, whether the network is a LAN, MAN or WAN.

3. Implementation of Connectionless Service


If connectionless service is offered, packets are injected into the network
individually and routed independently of each other. No advance setup is
needed. In this context, the packets are frequently called datagrams (in
analogy with telegrams) and the network is called a datagram network.
 Let us assume for this example that the message is four times longer
than the maximum packet size, so the network layer has to break it into
four packets, 1, 2, 3, and 4, and send each of them in turn to router A.
 Every router has an internal table telling it where to send packets for
each of the possible destinations.
 Each table entry is a pair (destination, outgoing line). Only
directly connected lines can be used. A’s initial routing table is shown in
the figure under the label ‘‘initially.’’ At A, packets 1, 2, and 3 are stored
briefly, having arrived on the incoming link.
 Then each packet is forwarded according to A’s table, onto the outgoing
link to C within a new frame. Packet 1 is then forwarded to E and then to
F. However, something different happens to packet 4.
 When it gets to A it is sent to router B, even though it is also destined for
F. For some reason (traffic jam along ACE path), A decided to send
packet 4 via a different route than that of the first three packets. Router
A updated its routing table, as shown under the label ‘‘later.’’
 The algorithm that manages the tables and makes the routing decisions
is called the routing algorithm.
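The per-destination table and its update can be sketched as a simple mapping. The router names, lines, and costs below are illustrative, following the A-to-F datagram example above.

```python
# A datagram router's forwarding table maps each destination to an
# outgoing line (a directly connected neighbor). Entries are illustrative.
table_A_initially = {"A": "-", "B": "B", "C": "C",
                     "D": "C", "E": "C", "F": "C"}

def forward(table, packet_dest):
    """Look up the outgoing line for one packet, independently of others."""
    return table[packet_dest]

# Packets 1-3 destined for F leave A on line C (the A-C-E-F path).
assert forward(table_A_initially, "F") == "C"

# Later, the routing algorithm updates A's table (e.g., congestion on the
# A-C-E path), so packet 4 destined for F is sent out on line B instead.
table_A_later = dict(table_A_initially, F="B")
assert forward(table_A_later, "F") == "B"
```

The point of the sketch is that forwarding is a pure table lookup per packet, while a separate routing process rewrites the table over time.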
4. Implementation of Connection Oriented service
To use a connection-oriented service, we first establish a connection, use
it, and then release it. In connection-oriented service, the data packets
are delivered to the receiver in the same order in which they were sent by
the sender.
If connection-oriented service is used, a path from the source router all
the way to the destination router must be established before any data
packets can be sent. This connection is called a VC (virtual circuit), and
the network is called a virtual-circuit network.

It can be done in either of two ways:

 Circuit Switched Connection – A dedicated physical path or circuit is
established between the communicating nodes, and then the data stream is
transferred.

 Virtual Circuit Switched Connection – The data stream is transferred over
a packet-switched network in such a way that it seems to the user that
there is a dedicated path from the sender to the receiver. A virtual path
is established here, while other connections may also be using the same
path.
Comparison of virtual-circuit and datagram networks
Routing Algorithms

The main function of NL (Network Layer) is routing packets from the source
machine to the destination machine.

There are two processes inside router:

a) One of them handles each packet as it arrives, looking up the outgoing
line to use for it in the routing table. This process is forwarding.

b) The other process is responsible for filling in and updating the routing
tables. That is where the routing algorithm comes into play. This process is
routing.

Regardless of whether routes are chosen independently for each packet or only
when new connections are established, certain properties are desirable in a
routing algorithm: correctness, simplicity, robustness, stability, fairness,
and optimality.

Routing algorithms can be grouped into two major classes:

1) nonadaptive (Static Routing)

2) adaptive. (Dynamic Routing)

Nonadaptive algorithms do not base their routing decisions on measurements or
estimates of the current traffic and topology. Instead, the choice of the
route to use to get from I to J is computed in advance, offline, and
downloaded to the routers when the network is booted. This procedure is
sometimes called static routing.

Adaptive algorithms, in contrast, change their routing decisions to reflect
changes in the topology, and usually the traffic as well. Adaptive algorithms
differ in 1) where they get their information (e.g., locally, from adjacent
routers, or from all routers), 2) when they change the routes (e.g., every
∆T sec, when the load changes, or when the topology changes), and 3) what
metric is used for optimization (e.g., distance, number of hops, or estimated
transit time). This procedure is called dynamic routing.

Different Routing Algorithms

1. Optimality principle
2. Shortest path algorithm
3. Flooding
4. Distance vector routing
5. Link state routing
6. Hierarchical Routing

1. Optimality principle
One can make a general statement about optimal routes without regard to
network topology or traffic. This statement is known as the optimality principle.

It states that if router J is on the optimal path from router I to router K,
then the optimal path from J to K also falls along the same route.

As a direct consequence of the optimality principle, we can see that the set of
optimal routes from all sources to a given destination form a tree rooted at the
destination. Such a tree is called a sink tree. The goal of all routing algorithms
is to discover and use the sink trees for all routers
2. Shortest Path Routing (Dijkstra’s) The idea is to build a graph of the
subnet, with each node of the graph representing a router and each arc of
the graph representing a communication line or link. To choose a route
between a given pair of routers, the algorithm just finds the shortest path
between them on the graph
1. Start with the local node (router) as the root of the tree. Assign a cost of 0
to this node and make it the first permanent node.
2. Examine each neighbor of the node that was the last permanent node.
3. Assign a cumulative cost to each node and make it tentative
4. Among the list of tentative nodes
a. Find the node with the smallest cost and make it Permanent
b. If a node can be reached from more than one route then select the
route with the shortest cumulative cost.
5. Repeat steps 2 to 4 until every node becomes permanent.
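The steps above amount to Dijkstra's algorithm. A minimal sketch, using a priority queue to hold the tentative nodes (graph shape and costs are illustrative):

```python
import heapq

def dijkstra(graph, source):
    """Shortest-path distances from source over a weighted graph given as
    {node: {neighbor: cost}}. Tentative nodes live in a priority queue;
    popping the smallest cumulative cost makes a node permanent
    (steps 2-5 above)."""
    dist = {source: 0}          # step 1: root gets cost 0
    permanent = set()
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if u in permanent:
            continue
        permanent.add(u)        # smallest tentative cost becomes permanent
        for v, w in graph.get(u, {}).items():
            nd = d + w          # cumulative cost via u
            if v not in dist or nd < dist[v]:
                dist[v] = nd    # keep the cheaper of competing routes
                heapq.heappush(heap, (nd, v))
    return dist
```

On a triangle with links A-C of cost 8, C-D of cost 4, and D-A of cost 3, the shortest path from A to C goes through D with cumulative cost 3 + 4 = 7.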
Flooding
• Another static algorithm is flooding, in which every incoming packet is sent
out on every outgoing line except the one it arrived on.

• Flooding obviously generates vast numbers of duplicate packets; in fact, an
infinite number unless some measures are taken to damp the process.

• One such measure is to have a hop counter contained in the header of each
packet, which is decremented at each hop, with the packet being discarded
when the counter reaches zero. Ideally, the hop counter should be initialized to
the length of the path from source to destination.

• A variation of flooding that is slightly more practical is selective
flooding. In this algorithm the routers do not send every incoming packet out
on every line, only on those lines that are going approximately in the right
direction.

• Flooding is not practical in most applications.
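A toy simulation shows how the hop counter damps the duplicate-packet explosion; the topology and hop limits here are illustrative.

```python
def flood(topology, start, dest, hop_limit):
    """Naive flooding: forward each copy on every line except the one it
    arrived on, discarding it when the hop counter reaches zero.
    Returns how many packet copies were ever transmitted."""
    copies = 0
    # each entry: (current node, node the copy arrived from, hops left)
    queue = [(start, None, hop_limit)]
    while queue:
        node, came_from, hops = queue.pop()
        if node == dest or hops == 0:
            continue                     # delivered or discarded
        for neighbor in topology[node]:
            if neighbor != came_from:    # never send back out the same line
                copies += 1
                queue.append((neighbor, node, hops - 1))
    return copies

# A triangle of routers: a smaller hop limit means fewer duplicate copies.
triangle = {"A": ["B", "C"], "B": ["A", "C"], "C": ["A", "B"]}
```

With the triangle above, flooding from A to C transmits 3 copies with a hop limit of 3, but only 2 copies with a hop limit of 1.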

Intra- and Inter domain Routing


An autonomous system (AS) is a group of networks and routers under the
authority of a single administration.

 Routing inside an autonomous system is referred to as intra domain


routing. (DISTANCE VECTOR, LINK STATE)
 Routing between autonomous systems is referred to as inter domain
routing. (PATH VECTOR)

1. Distance Vector Routing


 Distance vector routing algorithm is a type of routing algorithm that is
used to determine the best path for data packets to travel through a
network.
 This algorithm is also known as Bellman-Ford Algorithm
 The distance vector routing algorithm works by each router in a network
maintaining a table of the distances to all other routers in the network.
This table is called the distance vector
 The distance vector contains the distance to all the other routers in the
network as well as the next hop router that the data packet should be
sent to in order to reach its destination.
 The distance vector routing algorithm uses the Bellman-Ford equation to
calculate the distance vector table. The Bellman-Ford equation takes into
account the cost of the link between the current router and its
neighbors, as well as the distance vector of the neighboring routers. The
algorithm then selects the shortest path to the destination router based
on the information in the distance vector table.
 In distance vector routing, the least-cost route between any two nodes is
the route with minimum distance. In this protocol, as the name implies,
each node maintains a vector (table) of minimum distances to every
node.

Mainly three things in distance vector routing

 Initialization
 Sharing
 Updating

1. Initialization : Each node can know only the distance between itself
and its immediate neighbors, those directly connected to it. So for the
moment, we assume that each node can send a message to the
immediate neighbors and find the distance between itself and these
neighbors.
2. Sharing : The whole idea of distance vector routing is the sharing of
information between neighbors
Note: In distance vector routing, each node shares its routing table
with its immediate neighbors periodically and when there is a change
3. Updating: When a node receives a two-column table from a neighbor,
it needs to update its routing table.
Updating takes three steps:
1. The receiving node needs to add the cost between itself and the
sending node to each value in the second column. (x+y)
2. If the receiving node uses a row's information, the sending
node becomes the next node in the route.
3. The receiving node needs to compare each row of its old table with
the corresponding row of the modified version of the received table.
a. If the next-node entry is different, the receiving node chooses
the row with the smaller cost. If there is a tie, the old one is kept.
b. If the next-node entry is the same, the receiving node chooses
the new row.
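The three updating steps can be sketched as a table-merge function. The router names and costs are illustrative; each table maps a destination to a (cost, next-hop) pair.

```python
def dv_update(my_table, neighbor, neighbor_table, link_cost):
    """Merge a neighbor's advertised distance vector into our table,
    following the three updating steps above."""
    updated = dict(my_table)
    for dest, (cost, _) in neighbor_table.items():
        new_cost = cost + link_cost          # step 1: add cost to neighbor
        if dest not in updated:
            updated[dest] = (new_cost, neighbor)   # step 2: neighbor is next hop
        else:
            old_cost, old_next = updated[dest]
            if old_next == neighbor:
                # step 3b: same next-hop entry -> always take the fresh value
                updated[dest] = (new_cost, neighbor)
            elif new_cost < old_cost:
                # step 3a: different next hop -> keep the smaller cost
                updated[dest] = (new_cost, neighbor)
    return updated
```

With links AC = 8 and AD = 3, when A receives D's vector advertising C at cost 4, it replaces its direct cost 8 to C with 3 + 4 = 7 via D, matching the example that follows.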
Example (distance vector routing algorithm):

In the network shown below, there are three routers, A, C, and D, with
the following link weights − AC = 8, CD = 4, and DA = 3.

          8
     A ------- C
      \       /
     3 \     / 4
        \   /
          D

Step 1 − Each router in this DVR(Distance Vector Routing) network


shares its routing table with every neighbor. For example, A will share its
routing table with neighbors C and D, and neighbors C and D will share their
routing table with A.
Step 2 − If the path via a neighbor is less expensive, the router adjusts its
local table to send packets to that neighbor. Here, routers A and C each
update the cost of reaching the other from 8 to 7, since the path through D
costs only 3 + 4 = 7.

Step 3 − The final revised routing table with the reduced cost distance vector
routing protocol for all routers A, C, and D is shown below-
Advantages of Distance Vector Routing Algorithm

The distance vector routing algorithm has several advantages:

 Simplicity
 Low overhead
 Flexibility
 Stability
 Compatibility

Limitations of Distance Vector Routing Algorithm

The distance vector routing algorithm has several disadvantages:

 Slow convergence
 Count-to-infinity problem
 Limited scalability
 Limited accuracy
 Limited security
 Limited feasibility in large networks

Link State Routing

Link State Routing (LSR) is a routing algorithm used in computer networks to
determine the best path for data to travel from one node to another. LSR is
considered a more advanced and efficient routing method than the Distance
Vector Routing (DVR) algorithm.

In LSR, each node in the network maintains a map or database, called a link
state database (LSDB), that contains information about the state of all the links
in the network. This information includes the cost of each link, the status of
each link (up or down), and the neighboring nodes that are connected to each
link.
When a node in the network wants to send data to another node, it consults its
LSDB to determine the best path to take. The node selects the path with the
lowest cost, also known as the shortest path, to reach the destination node. To
determine the shortest path, LSR uses Dijkstra’s shortest path algorithm.

Link state routing is based on the assumption that, although the global
knowledge about the topology is not clear, each node has partial knowledge: it
knows the state (type, condition, and cost) of its links

In other words, the whole topology can be compiled from the partial knowledge
of each node

Some Phases of Link State Routing:

Link state routing (LSR) is a routing protocol used in packet-switched
networks that uses a link state database to store information about the
network topology.

The LSR process can be divided into several phases:

1. initialization phase: The first phase is the initialization phase, where each
router in the network learns about its own directly connected links. This
information is then stored in the router’s link state database.

2. flooding phase: The second phase is the flooding phase, where each router
floods its link state information to all other routers in the network. This allows
each router to learn about the entire network topology.

3. path calculation phase: The third phase is the shortest path calculation
phase, where each router uses the link state information to calculate the
shortest path to every other router in the network. This is typically done using
Dijkstra’s algorithm.

4. route installation phase: The fourth and final phase is the route installation
phase, where each router installs the calculated shortest paths in its routing
table. This allows the router to forward packets along the optimal path to their
destination.

One of the main benefits of LSR is that it only requires routers to have
knowledge of their directly connected links.
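The flooding phase can be sketched with sequence-numbered advertisements, so each router accepts a link state advertisement (LSA) only once. This is a simplified illustration; real protocols such as OSPF add aging and acknowledgments.

```python
def flood_lsa(routers, links, origin, lsa):
    """Sketch of the flooding phase: an LSA carrying a sequence number is
    forwarded to all neighbors; a router that has already seen that
    sequence number drops the duplicate. Returns each router's LSDB entry
    for the originating router."""
    seen = {r: -1 for r in routers}          # highest seq accepted so far
    lsdb = {r: None for r in routers}        # per-router link state database
    queue = [(origin, lsa)]
    while queue:
        node, (seq, payload) = queue.pop()
        if seq <= seen[node]:
            continue                         # stale or duplicate: drop
        seen[node] = seq
        lsdb[node] = payload                 # install in the LSDB
        for neighbor in links[node]:
            queue.append((neighbor, (seq, payload)))
    return lsdb
```

After flooding, every router holds the same copy of the advertisement, which is what lets each one run Dijkstra's algorithm locally in the path calculation phase.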

Features of Link State Routing Algorithm

link state routing (LSR) is a routing protocol that uses a link state database to
store information about the network topology.

1. Each router in the network only needs to know about its directly connected
links. This allows for a more scalable solution, as routers do not need to
maintain a full view of the entire network topology.

2. LSR uses a link state database to store information about the network
topology. This database is updated by flooding link state information between
routers.

3. LSR uses the shortest path algorithm, such as Dijkstra’s algorithm, to


calculate the shortest path to every other router in the network.

4. LSR can quickly adapt to changes in the network topology. When a link goes
down or a new link is added, the link state information is updated and the
shortest path is recalculated.

5. LSR is less prone to routing loops, as each router only installs the shortest
path in its routing table.

6. LSR supports multiple equal-cost paths, which allows for load balancing and
redundancy.
Advantages of Link State Routing Algorithm

1. One of the main advantages of LSR is that it only needs to know the state
of the links it is directly connected to, as opposed to DVR which needs to
know the entire state of the network. This allows LSR to converge quickly
2. Another advantage of LSR is that it does not suffer from the
count-to-infinity problem, which is prevalent in DVR.

Disadvantages of Link State Routing Algorithm

1. One of the main disadvantages is that it requires more memory and
processing power than DVR.
2. Flooding link state information to every router adds control traffic
overhead to the network.

Hierarchical Routing Algorithm:

 Routers in hierarchical routing are organized into regions.


 Each router knows exactly how to route packets to destinations within
its own zone. However, it has no knowledge of the internal organization
of other areas.
 In both the LS and DV algorithms, each router must preserve certain
data about other routers. The number of routers in a network grows as
the network grows in size. As a result, when the size of the routing
table grows, routers are unable to manage network traffic as
effectively. Hierarchical routing is used to solve this problem.
 In hierarchical routing, routers are categorized into regions.
 Each router only knows about the routers in its own region and has no
knowledge about routers in other regions. As a result, routers keep one
entry in their table for each remote region.
 The system must be hierarchical, with numerous levels interconnected
with one another. As a result, hierarchical routing is often utilized
in such systems.
There are two types of routing protocols in such an Internet system:

 Routing within a domain


 Routing between domains

Each of these protocols is structured in a hierarchical manner. Only the
former routing method is utilized to exchange information within a domain.
Both, however, are utilized for interaction among two or more domains.
 A two-level hierarchy may be inadequate for large networks; thus,
regions may be divided into clusters, clusters into zones, zones into
groups, and so on.

Table 1A
Destination Line Hops

1A - -

1B 1B 1

1C 1C 1
2A 1B 2

2B 1B 3

2C 1B 3

2D 1B 4

3A 1C 3

3B 1C 2

4A 1C 3

4B 1C 4

4C 1C 4

5A 1C 4

5B 1C 5

5C 1B 5

5D 1C 6

5E 1C 5

When routing is accomplished hierarchically, there are only seven entries, as
illustrated below.

1A Hierarchical Table
Destination Line Hops

1A - -

1B 1B 1

1C 1C 1

2 1B 2

3 1C 2

4 1C 3
5 1C 4

This decrease in table size is accompanied by an increase in path length.

Clarification

 In the first step, for example, the optimum path from 1A to 5C is
through region 2, while hierarchical routing sends all traffic to
region 5 through region 3, since that is better for most of region 5's
other destinations.
 Second step: consider a 720-router subnet. If no hierarchy is employed,
each router's routing table will have 720 entries.
 Third step: If the subnet is divided into 24 regions of 30 routers each,
each router will require 30 local and 23 distant entries, for a total of 53
entries.

If the same 720-router subnet is divided into eight clusters, each containing
nine regions of 10 routers, the total number of table entries in each router
is then:
10 local entries + 8 remote regions + 7 remote clusters = 25 entries.
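The table-size arithmetic above can be checked with a small sketch. This is a simplified model: "local" entries include the router's own entry, and every outer level contributes (group size − 1) aggregate entries.

```python
def table_entries(level_sizes):
    """Routing-table entries per router in a hierarchy. level_sizes lists
    the group size at each level, innermost first: local routers, then
    regions per cluster, then clusters, and so on."""
    local = level_sizes[0]
    # each outer level adds one entry per *other* group at that level
    return local + sum(size - 1 for size in level_sizes[1:])

# Flat 720-router subnet: 720 entries per router.
assert table_entries([720]) == 720
# 24 regions of 30 routers: 30 local + 23 remote regions = 53 entries.
assert table_entries([30, 24]) == 53
# 8 clusters x 9 regions x 10 routers: 10 + 8 + 7 = 25 entries.
assert table_entries([10, 9, 8]) == 25
```

The sketch makes the trade-off concrete: adding levels shrinks the table roughly geometrically, at the price of longer routes.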

Advantages of Hierarchical Routing Algorithm

1. Scalability
2. Better Traffic Control
3. Simple to administer

Disadvantages of Hierarchical Routing Algorithm

1. Complexity
2. Latency
Congestion Control algorithms

What is congestion?

Congestion is a state occurring in the network layer when the message traffic
is so heavy that it slows down the network response time.

Effects of Congestion

 As delay increases, performance decreases.


 If delay increases, retransmission occurs, making the situation worse
 Too many packets present in (a part of) the network causes packet delay
and loss that degrades performance. This situation is called congestion.
 The network and transport layers share the responsibility for handling
congestion, since congestion occurs within the network.
 However, the most effective way to control congestion is to reduce the
load that the transport layer is placing on the network. This requires the
network and transport layers to work together.

General Principles of Congestion control

 Congestion Control is a mechanism that controls the entry of data packets


into the network, enabling a better use of a shared network infrastructure
and avoiding congestive collapse.
 Congestive-Avoidance Algorithms (CAA) are implemented at the TCP layer as
the mechanism to avoid congestive collapse in a network.
 The computer network is divided into two groups: open-loop and closed-loop
solutions.
 Open-loop solutions provide an excellent design to ensure that the problem
does not occur in the first place.
 Closed-loop solutions make the decision based on the concept of a feedback
loop.
Congestion prevention policies

 Congestion control refers to the techniques used to control or prevent


congestion.
 Congestion control techniques can be broadly classified into two
categories:

Open Loop Congestion Control


 Open loop congestion control policies are applied to prevent congestion
before it happens.
 The congestion control is handled either by the source or the
destination.

Policies adopted by open loop congestion control –

1. Retransmission Policy: If the sender feels that a sent packet is lost or
corrupted, the packet needs to be retransmitted. This retransmission may
increase congestion in the network. To prevent this, retransmission timers
must be designed carefully.
2. Window Policy : The type of window at the sender’s side may also affect
the congestion. Several packets in the Go-back-n window are re-sent,
although some packets may be received successfully at the receiver side.
This duplication may increase the congestion in the network and make it
worse.
3. Discarding Policy : A good discarding policy adopted by the routers is
that the routers may prevent congestion and at the same time partially
discard the corrupted or less sensitive packages and also be able to
maintain the quality of a message.
4. Acknowledgment Policy : Since acknowledgements are also the part of
the load in the network, the acknowledgment policy imposed by the
receiver may also affect congestion. Several approaches can be used to
prevent congestion related to acknowledgment.
5. Admission Policy : In the admission policy, a mechanism should be used to
prevent congestion. Switches in a flow should first check the resource
requirement of a network flow before transmitting it further. If there is a
chance of congestion, or there is already congestion in the network, the
router should deny establishing the connection.

Closed Loop Congestion Control


Closed loop congestion control techniques are used to treat or alleviate
congestion after it happens.
Several techniques are used by different protocols;
1. Backpressure : Backpressure is a technique in which a congested
node stops receiving packets from its upstream node. This may cause
the upstream node or nodes to become congested and to reject data
from the nodes above them.
Backpressure is a node-to-node congestion control technique that
propagates in the opposite direction of the data flow. The backpressure
technique can be applied only to virtual circuits, where each node has
information about its upstream node.
2. Choke Packet Technique : Choke packet technique is applicable to
both virtual networks as well as datagram subnets. A choke packet is
a packet sent by a node to the source to inform it of congestion. Each
router monitors its resources and the utilization at each of its output
lines.
3. Implicit Signaling : In implicit signaling, there is no communication
between the congested nodes and the source. The source guesses that
there is congestion in a network. For example when sender sends
several packets and there is no acknowledgment for a while, one
assumption is that there is a congestion.
4. Explicit Signaling : In explicit signaling, if a node experiences
congestion it can explicitly sends a packet to the source or destination
to inform about congestion. The difference between choke packet and
explicit signaling is that the signal is included in the packets that
carry data rather than creating a different packet as in case of choke
packet technique.

Approaches to Congestion Control

The presence of congestion means the load is greater than the resources
available over a network to handle. Generally we will get an idea to reduce the
congestion by trying to increase the resources or decrease the load.

There are some approaches for congestion control over a network which are
usually applied on different time scales to either prevent congestion or react to
it once it has occurred.

Let us understand these approaches step wise


Step 1 − The basic way to avoid congestion is to build a network that is well
matched to the traffic that it carries. If more traffic is directed at a link than
its bandwidth can handle, congestion is certain to occur.

Step 2 − Sometimes resources such as routers and links can be added
dynamically when there is serious congestion. This is called provisioning, and it
happens on a timescale of months, driven by long-term trends.

Step 3 − To make the most of the existing network capacity, routes can be
tailored to traffic patterns that change during the day, as network users in
different time zones wake and sleep.

Step 4 − Some local radio stations have helicopters flying around their cities
to report on road congestion, making it possible for their mobile listeners to
route their packets (cars) around hotspots. This is called traffic-aware routing.

Step 5 − Sometimes it is not possible to increase capacity. The only way to


reduce the congestion is to decrease the load. In a virtual circuit network, new
connections can be refused if they would cause the network to become
congested. This is called admission control.

Step 6 − Routers can monitor the average load, queueing delay, or packet loss;
a rising number in any of these indicates growing congestion. When the
network is forced to discard packets that it cannot deliver, this is called
load shedding. A good policy for choosing which packets to discard can
help to prevent congestion collapse.

Traffic Control Algorithms


Congestion Control is a mechanism that controls the entry of data packets into
the network, enabling a better use of a shared network infrastructure and
avoiding congestive collapse.
Congestive-Avoidance Algorithms (CAA) are implemented at the TCP layer as
the mechanism to avoid congestive collapse in a network.

There are two congestion control algorithm which are as follows:

1. Leaky Bucket Algorithm


2. Token bucket Algorithm

Leaky Bucket Algorithm

 The leaky bucket algorithm finds use in the context of network
traffic shaping or rate-limiting.
 Leaky bucket and token bucket implementations are the two
predominant traffic shaping algorithms.
 This algorithm is used to control the rate at which traffic is sent to the
network and shape the burst traffic to a steady traffic stream.

Let us consider an example to understand: imagine a bucket with a small hole
in the bottom. No matter the rate at which water is poured in, it leaks out at a
constant rate as long as there is water in the bucket; once the bucket is full,
any additional water overflows and is lost.

Similarly, each network interface contains a leaky bucket, and the following
steps are involved in the leaky bucket algorithm:
 When host wants to send packet, packet is thrown into the bucket.
 The bucket leaks at a constant rate, meaning the network interface
transmits packets at a constant rate.
 Bursty traffic is converted to a uniform traffic by the leaky bucket.
 In practice the bucket is a finite queue that outputs at a finite rate.
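The steps above can be sketched as a small simulation; this is an illustrative model only (the tick-based timing and the `capacity` and `leak_rate` parameters are assumptions for demonstration, not part of any standard):

```python
from collections import deque

def leaky_bucket(arrivals, capacity, leak_rate):
    """Simulate a leaky bucket. arrivals[t] = packets arriving at tick t.
    Returns (transmitted, dropped): packets sent per tick, total drops."""
    queue = deque()          # the bucket is a finite FIFO queue
    dropped = 0
    transmitted = []
    for t, burst in enumerate(arrivals):
        for pkt in range(burst):
            if len(queue) < capacity:
                queue.append((t, pkt))   # packet fits in the bucket
            else:
                dropped += 1             # bucket overflows -> drop
        sent = 0
        while queue and sent < leak_rate:  # leak at a constant rate
            queue.popleft()
            sent += 1
        transmitted.append(sent)
    return transmitted, dropped
```

With a burst of 5 packets arriving in one tick, a bucket of capacity 3 and a leak rate of 1 packet per tick transmits [1, 1, 1, 0] over four ticks and drops 2 packets: the bursty input is smoothed into a steady output stream.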

Token bucket Algorithm

 The leaky bucket algorithm has a rigid output design at an average rate
independent of the bursty traffic.
 In some applications, when large bursts arrive, the output is allowed to
speed up. This calls for a more flexible algorithm, preferably one that
never loses information. Therefore, a token bucket algorithm finds its
uses in network traffic shaping or rate-limiting.
 It is a control algorithm that indicates when traffic may be sent,
based on the presence of tokens in the bucket.
 The bucket holds tokens, which are generated at a fixed rate. To
transmit a packet of a predetermined size, a token must be removed
from the bucket.
 If tokens are present in the bucket, the flow may transmit its traffic.
 No token means the flow cannot send its packets. Hence, a flow can
transmit traffic up to its peak burst rate only if enough tokens have
accumulated in the bucket.

Need of token bucket Algorithm:-

The leaky bucket algorithm enforces output pattern at the average rate, no
matter how bursty the traffic is. So in order to deal with the bursty traffic we
need a flexible algorithm so that the data is not lost. One such algorithm is
token bucket algorithm.
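A minimal tick-based sketch of the token bucket idea (the function name and the `rate`/`burst` parameters are illustrative assumptions, with one token per packet):

```python
def token_bucket(arrivals, rate, burst):
    """Simulate a token bucket shaper. Tokens accrue at `rate` per tick,
    up to `burst` tokens; sending one packet consumes one token."""
    tokens = burst          # bucket starts full, allowing an initial burst
    backlog = 0             # packets waiting for tokens
    sent_per_tick = []
    for pkts in arrivals:
        tokens = min(burst, tokens + rate)   # add tokens, cap at bucket size
        backlog += pkts
        sent = min(backlog, tokens)          # one token per packet sent
        tokens -= sent
        backlog -= sent
        sent_per_tick.append(sent)
    return sent_per_tick
```

Called as `token_bucket([5, 0, 0], rate=1, burst=3)`, the shaper sends 3 packets immediately (the saved-up burst) and the remaining 2 at the token rate, whereas a leaky bucket would never exceed its constant rate.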
UNIT– IV : The Network Layer Design Issues – Store and Forward Packet Switching-Services
Provided to the Transport layer- Implementation of Connectionless Service-Implementation of
Connection Oriented Service- Comparison of Virtual Circuit and Datagram Networks, Routing
Algorithms-The Optimality principle-Shortest path, Flooding, Distance vector, Link state,
Hierarchical, Congestion Control algorithms-General principles of congestion control,
Congestion prevention polices, Approaches to Congestion Control-Traffic Aware Routing-
Admission Control-Traffic Throttling-Load Shedding. Traffic Control Algorithm-Leaky bucket &
Token bucket.

Internet Working: How networks differ- How networks can be connected- Tunneling,
internetwork routing-, Fragmentation, network layer in the internet – IP protocols-IP Version 4
protocol-IPV4 Header Format, IP addresses, Classful Addressing, CIDR, NAT-, Subnets-IP
Version 6-The main IPV6 header, Transition from IPV4 to IPV6, Comparison of IPV4 & IPV6-
Internet control protocols- ICMP, ARP, DHCP

Internet Working
Introduction

 The term 'internetworking' is a fusion of 'inter' and 'networking' which


indicates a link between different nodes or segments. This connection is
facilitated by intermediary hardware such as routers or gateways.
 The original term for internetwork was 'catenet'.
 Networks of various natures including private, public, commercial,
industrial, and governmental often interconnect. Therefore,
internetworking refers to a conglomerate of multiple networks that
function as one large network, connected by intermediary networking
devices.
 The industry, products, and methodologies that address the
challenge of creating and administering internetworks are collectively
known as internetworking.
How Internetworking Works

 Every node or segment of a network is constructed using a similar
protocol or communication logic, such as Transmission Control Protocol
(TCP) or Internet Protocol (IP), to enable communication.
 When a network communicates with another network using consistent
communication procedures, it is called internetworking. Internetworking
was designed to resolve the problem of delivering a packet of information
through many links.
 Internetworking is implemented in Layer 3 (the Network Layer) of the
ISO OSI model. The most notable example of internetworking is the Internet.

Types of Internetworking

Internetworking primarily comprises three components:

1. Extranet
2. Internet
3. Intranet

Internet connections may or may not exist on intranets and extranets.

Extranet

An extranet is a subset of an internetwork that is limited to a single
organization or institution but occasionally has limited connections to one or
more external networks. It is the lowest level of internetworking, and access
to it is typically restricted. An extranet could be a Metropolitan
Area Network (MAN), Wide Area Network (WAN), or another type of network.
However, it must include at least one connection to an external network and
cannot consist of a single Local Area Network (LAN).

Internet

The Internet is a specific form of internetworking that globally connects


governmental, academic, public, and private networks. It originated from the
ARPANET, developed by the Advanced Research Projects Agency (ARPA) of the
U.S. Defense Department. The World Wide Web (WWW) is also hosted on the
Internet. To distinguish it from other generic internetworks, it is referred to as
the 'Internet'.

Intranet

An Intranet is a collection of interconnected networks that use the Internet


Protocol and IP-based tools like web browsers and FTP tools, all of which are
under the control of a single entity. This entity restricts access to the Intranet
for the outside world and allows only a select group of users. This network is
often referred to as the internal network of a company or other enterprise. A
large intranet typically has its own web server to provide users with browseable
data.

Advantages and Disadvantages of Internetworking

Advantages

 Global Connectivity
 Scalability
 Resource Sharing
 Remote Access
 Redundancy and Failover
Disadvantages

 Security Risks
 Network Congestion
 Privacy Concerns
 Dependency on Infrastructure
 Complexity

Tunneling

 A tunnel is a connection between two computer networks, in which data


is sent from one network to another through an encrypted link.
 This encrypted link ensures that the data passing through the tunnel
cannot be read or intercepted by anyone who does not have the proper
encryption key.
 Tunnels are commonly used to secure data communications between two
networks or to connect two networks that use different protocols.
 Tunnelling is a protocol for transferring data securely from one network
to another, using a method known as encapsulation.
 Tunnelling allows private network communications to be sent across a
public network, such as the Internet.
 Port forwarding is another name for Tunnelling.
 When data is tunnelled, it is split into smaller parts called packets as it
travels through the tunnel. The packets are encrypted, and a process
known as encapsulation wraps each packet for transit.
 Tunnels can also be used to bypass firewalls and other security
measures that may be in place on a network. By encapsulating data in
an encrypted tunnel, it can be passed through a firewall without being
detected or blocked.
 Tunnels are typically created using software that runs on both ends of
the connection. The software establishes the tunnel and handles the
encryption and decryption of data passing through it.

There are a number of different protocols that can be used to create a tunnel.
The most common protocols are PPTP, L2TP, and IPSec.

1. Point-to-Point Tunneling Protocol (PPTP):

PPTP is a tunneling protocol that allows data to be passed through an


encrypted link between two networks. PPTP is commonly used to connect a
user’s PC to a remote server or to connect two remote servers.

2. Layer 2 Tunneling Protocol (L2TP):

L2TP is a tunneling protocol that allows data to be passed through an


encrypted link between two networks. L2TP is commonly used to connect a
user’s PC to a remote server or to connect two remote servers.

3. Internet Protocol Security (IPSec):

IPSec is a security protocol that can be used to create a secure tunnel between
two networks. IPSec is commonly used to secure data communications between
two networks or to connect two networks that use different protocols.

4. Secure Sockets Layer (SSL):


SSL is a security protocol that can be used to create a secure tunnel between
two networks. SSL is commonly used to secure data communications between
a web server and a web browser or to connect two networks that use different
protocols.

5. Transport Layer Security (TLS):

TLS is a security protocol that can be used to create a secure tunnel between
two networks. TLS is commonly used to secure data communications between
a web server and a web browser or to connect two networks that use different
protocols.

Internetwork routing

Routing through an internetwork is similar to routing within a single subnet, but with
some added complications

 Consider the internetwork of Fig. 3.8(a) in which five networks are


connected by six (possibly multiprotocol) routers.
 Making a graph model of this situation is complicated by the fact that
every router can directly access (i.e., send packets to) every other router
connected to any network to which it is connected.
 For example, B in Fig. 3.8(a) can directly access A and C via network 2
and also D via network 3.
 This leads to the graph of Fig. 3.8(b).
 Once the graph has been constructed, known routing algorithms, such
as the distance vector and link state algorithms, can be applied to the
set of multiprotocol routers.
 This gives a two-level routing algorithm: within each network an interior
gateway protocol is used, but between the networks, an exterior gateway
protocol is used (''gateway'' is an older term for ''router'').
 In fact, since each network is independent, they may all use different
algorithms.

How a packet reaches its destination?

 Each network in internetwork is often referred to as an Autonomous


System (AS).
 A typical internet packet starts out on its LAN addressed to the local
multiprotocol router (in the MAC layer header).
 After it gets there, the network layer code decides which multiprotocol
router to forward the packet to, using its own routing tables.
 If that router can be reached using the packet's native network protocol,
the packet is forwarded there directly.
 Otherwise it is tunneled there, encapsulated in the protocol required by
the intervening network. This process is repeated until the packet
reaches the destination network.

Fragmentation
The term fragmentation in computer networks refers to the breaking
up of a data packet into smaller pieces in order to fit it through a network with
a smaller maximum transmission unit (MTU) than the initial packet size.

What is Fragmentation in Computer Networks?

Fragmentation in networking takes place in the network layer. It occurs when
the size of a datagram exceeds the maximum amount of data that a frame can
carry, i.e., the Maximum Transmission Unit (MTU).

Let’s take an example to understand fragmentation in computer networks:
suppose a protocol data unit of 6000 bytes is divided into four equal-sized
(1500-byte) protocol data units. These fragments are well suited for
transferring the data because, for campus and modern Ethernet-based offices,
an MTU of 1500 bytes is the standard.

 The maximum size of an IP datagram is 2^16 – 1 = 65,535 bytes, since
the total-length field in the IP header is 16 bits.
 The network layer performs fragmentation at intermediate routers along
the path; reassembly is performed only at the destination.
 The identification (16 bits) field in the IP header is used by the receiver to
identify the frame. A frame’s individual fragments all share the same
identification number.
 The fragment offset field of size 13 bits in the IP header is used by the
receiver to identify the position of each fragment within the frame.
 Fragmentation is usually not required at the source side because the
transport layer already segments the data appropriately.
 The extra header created by fragmentation results in overhead at the
network layer.

There are some factors that arise the requirement of fragmentation in computer
networks:

 Maximum Transmission Unit (MTU): The Maximum Transmission Unit


(MTU) is the largest size of a data packet that can be transmitted over a
particular network. It represents the maximum amount of data that can
be carried in a single packet from one network device to another without
the packet being fragmented. A data packet must be divided into smaller
fragments before being transmitted over the network if its size exceeds
the MTU.
 Bandwidth Utilization: In a network sometimes, large-sized data
packets may end up consuming a large amount of network bandwidth.
To solve this problem, fragmentation divides the large-sized data packet
into small fragments which helps to improve the bandwidth utilization in
the network.
 Network Performance: The network performance can be affected by the
large-sized data packets. Fragmentation can help to solve this critical
problem by dividing the large-sized data packets into small fragments
which can be transferred more effectively.

IP Header Fields for Fragmentation

Now, let’s look at some important fields in the IP header for fragmentation.

 Identification Field (16 bits): To identify the fragments of the same


frame this field is used.
 Fragment offset field (13 bits): This field is used to identify the
position of a fragment within the datagram. It denotes the number of
data bytes ahead of (i.e., preceding) the fragment. The largest byte
offset possible is (65535 – 20) = 65515, where 65535 is the largest
datagram size possible and 20 is the smallest IP header size.
Representing 65515 directly would require ceil(log2 65515) = 16 bits,
but the fragment offset field has only 13 bits. The offset is therefore
scaled down by a factor of 2^16 / 2^13 = 8; that is, it is stored in
units of 8 bytes. As a result, all fragments other than the final
fragment must carry data in multiples of 8 bytes.
 More Fragments Field (MF): The size of this field is 1 bit. It indicates
whether any fragments follow this one. If the value of MF is 1, more
fragments follow this fragment. If the value of MF is 0, this fragment
is the last fragment.
 Don’t Fragment Field (DF): This is also a 1-bit field. If the value of
the DF field is 1, the packet must not be fragmented.
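The offset-in-units-of-8 rule can be demonstrated with a short calculation; this is a simplified sketch (it assumes a fixed header length per fragment and ignores IP options):

```python
def fragment(payload_len, mtu, header_len=20):
    """Split an IP payload into fragments that fit the MTU.
    The data size of every fragment except the last must be a multiple
    of 8, because the fragment-offset field counts 8-byte units."""
    max_data = (mtu - header_len) // 8 * 8   # largest multiple of 8 that fits
    fragments = []
    offset = 0
    while offset < payload_len:
        data = min(max_data, payload_len - offset)
        mf = 1 if offset + data < payload_len else 0   # more-fragments flag
        fragments.append({"offset_units": offset // 8, "data": data, "MF": mf})
        offset += data
    return fragments
```

For a 4000-byte payload and a 1500-byte MTU with 20-byte headers, this yields three fragments of 1480, 1480, and 1040 data bytes at offsets 0, 185, and 370 (in 8-byte units), with MF set on all but the last.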

Reassemble the Fragments

Because the packets may take different paths through the routers, reassembly
of the fragments takes place at the destination, not at the routers. Two facts
must be taken into consideration while reassembling the fragments: first, all
the fragments may not pass through the same router, and second, the
fragments may not arrive in sequence.

Advantages of Fragmentation in Computer Networks

The advantages of fragmentation in computer networks include:


 It enables efficient use of network resources by allowing large data
packets to be transmitted over networks with smaller MTUs.
 It improves network reliability by reducing the likelihood of
packet loss or corruption during transmission.
 It provides compatibility with different network technologies and protocols.
 It offers flexibility in accommodating varying network conditions and
transmission requirements.

Disadvantages of Fragmentation in Computer networks

The disadvantages of fragmentation in computer networks include:

 Increased overhead due to the additional packet headers needed for


reassembly.
 Longer transmission times due to the need for reassembly at the
destination.
 Increased likelihood of network congestion due to the higher number of
packets generated.
 Greater susceptibility to security threats such as packet fragmentation
attacks.

IPV4
 IP stands for Internet Protocol and v4 stands for Version Four (IPv4).
 IPv4 was the primary version brought into action for production within
the ARPANET in 1983.
 Internet Protocol Version 4 (IPv4) is the fourth revision of the Internet
Protocol and a widely used protocol in data communication over different
kinds of networks.
 IPv4 is a connectionless protocol used in packet-switched layer networks,
such as Ethernet.
 It provides the logical connection between network devices by providing
identification for each device.

 IPv4 is a 32-bit IP address.
 IPv4 is a numeric address whose bytes are separated by dots.
 The header has twelve fields, with a minimum header length of twenty
bytes.
 It has unicast, broadcast, and multicast styles of addresses.
 IPv4 supports VLSM (Variable Length Subnet Mask).
 IPv4 uses the Address Resolution Protocol (ARP) to map IP addresses
to MAC addresses.
 RIP is a routing protocol supported by the routed daemon.
 Networks must be configured either manually or with DHCP.
 Packet fragmentation is permitted at routers and at the sending host.
 IPv4 addresses are 32-bit integers that are expressed in dotted
decimal notation.
Example: 192.0.2.126 is an IPv4 address.

Parts of IPv4

 Network part
The network part is the unique number assigned to the network. The
network part also identifies the class of the network.
 Host part
The host part uniquely identifies a machine on the network. This part
of the IPv4 address is assigned to every host.
For each host on the network, the network part is the same, but the
host part must differ.
 Subnet number
This is an optional part of IPv4. Local networks that have large
numbers of hosts can be divided into subnets, and subnet numbers are
assigned to them.

Advantages of IPv4

 IPv4 security permits encryption to maintain privacy and security.
 IPv4 network allocation is significant, with a very large installed base
of routers in practical use.
 It is easy to connect multiple devices across a large network
without NAT.
 It is a mature communication model that provides quality of service as
well as efficient data transfer.
 IPv4 addresses are well defined and permit straightforward encoding.

Limitations of IPv4

 IP relies on network-layer addresses to identify end-points on a
network, and each network has a unique IP address range.
 The world’s supply of unique IPv4 addresses is dwindling, and it could
theoretically run out.
 If a network must support more hosts than its class allows, IP
addresses from the next larger class are needed.

IPV4 Classification

IPv4 uses 32-bit addresses for Ethernet communication in five classes:

1. class A
2. class B
3. class C
4. class D
5. class E
 Classes A, B and C have a different bit length for addressing the network
host. Class D addresses are reserved for multicasting, while class E
addresses are reserved for future use.
 Class A has subnet mask 255.0.0.0 or /8
 Class B has subnet mask 255.255.0.0 or /16
 Class C has subnet mask 255.255.255.0 or /24.
 For example, with a /16 subnet mask, the network 192.168.0.0 may use
the address range of 192.168.0.0 to 192.168.255.255.
 Network hosts can take any address from this range; however, address
192.168.255.255 is reserved for broadcast within the network.
 The maximum number of addresses IPv4 can assign to end users is
2^32 (about 4.3 billion).
 IPv6 presents a standardized solution to overcome IPv4’s limitations.
Because of its 128-bit address length, it can define up to 2^128
addresses.
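The /16 example above can be checked with Python's standard ipaddress module:

```python
import ipaddress

# A /16 subnet mask: the first 16 bits identify the network,
# the remaining 16 bits identify hosts within it.
net = ipaddress.ip_network("192.168.0.0/16")
print(net.netmask)             # 255.255.0.0
print(net.num_addresses)       # 65536 addresses in the block
print(net.broadcast_address)   # 192.168.255.255, reserved for broadcast
print(2 ** 32)                 # 4294967296: the whole IPv4 address space
```

The broadcast address is the highest address in the block, which is why 192.168.255.255 cannot be assigned to a host.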

IPV4 Header Frame Format

Due to the presence of options, the datagram header can be of variable
length (20 bytes to 60 bytes).
VERSION: Version of the IP protocol (4 bits), which is 4 for IPv4

HLEN: IP header length (4 bits), which is the number of 32 bit words in the
header. The minimum value for this field is 5 and the maximum is 15.

Type of service: Low Delay, High Throughput, Reliability (8 bits)

Total Length: Length of header + Data (16 bits), which has a minimum value
20 bytes and the maximum is 65,535 bytes.

Identification: Unique Packet Id for identifying the group of fragments of a


single IP datagram (16 bits)

Flags: 3 flags of 1 bit each: reserved bit (must be zero), don’t fragment (DF)
flag, and more fragments (MF) flag, in that order

Fragment Offset: Represents the number of data bytes ahead of this
fragment in the datagram, specified in units of 8 bytes; the maximum
representable offset is 65,528 bytes (13 bits)

Time to live: Datagram’s lifetime (8 bits). It prevents the datagram from
looping through the network by restricting the number of hops a packet
may take before being delivered to the destination.

Protocol: Name of the protocol to which the data is to be passed (8 bits)

Header Checksum: 16 bits header checksum for checking errors in the


datagram header

Source IP address: 32 bits IP address of the sender

Destination IP address: 32 bits IP address of the receiver

Option: Optional information such as source route, record route. Used by the
Network administrator to check whether a path is working or not.
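As a sketch of how these fields are laid out on the wire, the fixed 20-byte part of the header can be unpacked with Python's struct module (the dictionary keys are our own naming choices, not standard terms):

```python
import struct

def parse_ipv4_header(raw: bytes) -> dict:
    """Parse the fixed 20-byte IPv4 header (network byte order, no options)."""
    (ver_ihl, tos, total_len, ident, flags_frag,
     ttl, proto, checksum, src, dst) = struct.unpack("!BBHHHBBHII", raw[:20])
    return {
        "version": ver_ihl >> 4,                 # high nibble of byte 0
        "ihl_words": ver_ihl & 0x0F,             # header length in 32-bit words
        "total_length": total_len,
        "identification": ident,
        "DF": (flags_frag >> 14) & 1,            # don't-fragment flag
        "MF": (flags_frag >> 13) & 1,            # more-fragments flag
        "fragment_offset": flags_frag & 0x1FFF,  # low 13 bits, 8-byte units
        "ttl": ttl,
        "protocol": proto,                       # e.g. 6 = TCP, 17 = UDP
        "src": ".".join(str((src >> s) & 0xFF) for s in (24, 16, 8, 0)),
        "dst": ".".join(str((dst >> s) & 0xFF) for s in (24, 16, 8, 0)),
    }
```

Feeding it a hand-built header (version 4, IHL 5, DF set, TTL 64, protocol 6) returns the same field values, which confirms the bit positions described above.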
IP Addresses
An IP address represents a unique address that distinguishes any device on the
internet or any network from another. IP or Internet Protocol defines the set of
commands directing the setup of data transferred through the internet or any
other local network.

An IP address is the identifier that enables your device to send or receive data
packets across the internet

An IP address is represented by a series of numbers separated by periods (.).
It is expressed as four numbers (octets) - an example address might be
255.255.255.255, wherein each number can range from 0 to 255.

IP addresses are not produced randomly. They are generated mathematically


and are further assigned by the IANA (Internet Assigned Numbers Authority), a
department of the ICANN

Types of IP addresses

There are various classifications of IP addresses, and each category further


contains some types.

Consumer IP addresses

Every individual or firm with an active internet service uses two types of IP
addresses: private IP (Internet Protocol) addresses and public IP (Internet
Protocol) addresses. Public and private relate to the network location: a
private IP address is used inside a network, whereas a public IP address is
used outside a network.
1. Private IP addresses

All the devices that are linked to your internet network are allocated a
private IP address. These include computers, desktops, laptops, smartphones,
tablets, and even Wi-Fi-enabled gadgets such as speakers, printers, or smart
televisions. With the expansion of the IoT (Internet of Things), the demand for
private IP addresses in individual homes is also growing.

2. Public IP addresses

A public IP address, or primary address, represents the whole network of
devices associated with it. Every device covered by your primary address
has its own private IP address. Your ISP provides your public IP address
to your router. Typically, ISPs maintain a large pool of IP addresses that
they dispense to their clients. Your public IP address is used by every
device outside your internet network to identify your network.

3.Dynamic IP addresses
As the name suggests, dynamic IP addresses change automatically and
frequently. With this type of IP address, ISPs purchase a bulk pool of
IP addresses and allocate them in some order to their customers. Periodically,
they re-allocate the IP addresses and place the used ones back into the IP
address pool so they can be reused later for other clients. The rationale
for this method is cost savings for the ISP.

4. Static IP addresses
In comparison to dynamic IP addresses, static addresses are constant in
nature. The network assigns the IP address to the device only once, and it
remains consistent. Though most firms or individuals do not need a static
IP address, it is essential for an organization that wants to host its own
network server, as it gives the websites and email addresses linked with it
a constant, reachable IP address.

Classful Addressing


An IP address is an address having information about how to reach a specific host, especially
outside the LAN. An IPv4 address is a 32-bit unique address, giving an address space of 2^32.

Generally, there are two notations in which the IP address is written, dotted decimal notation and
hexadecimal notation.

Dotted Decimal Notation

Hexadecimal Notation

Some points to be noted about dotted decimal notation:

1. The value of any segment (byte) is between 0 and 255 (both included).

2. No zeroes are preceding the value in any segment (054 is wrong, 54 is correct).
Classful Addressing
The 32-bit IP address is divided into five sub-classes. These are:

 Class A

 Class B

 Class C

 Class D

 Class E

Each of these classes has a valid range of IP addresses. Classes D and E are
reserved for multicast and experimental purposes respectively. The order of bits
in the first octet determines the classes of the IP address.

The IPv4 address is divided into two parts:

 Network ID

 Host ID

The class of IP address is used to determine the bits used for network ID and
host ID and the number of total networks and hosts possible in that particular
class. Each ISP or network administrator assigns an IP address to each device
that is connected to its network.
Class A

IP addresses belonging to class A are assigned to the networks that contain a


large number of hosts.

 The network ID is 8 bits long.

 The host ID is 24 bits long.

The higher-order bit of the first octet in class A is always set to 0. The
remaining 7 bits in the first octet are used to determine network ID. The 24
bits of host ID are used to determine the host in any network. The default
subnet mask for Class A is 255.x.x.x. Therefore, class A has a total of:

 2^7 – 2 = 126 network ID (0 is unused and 127 is reserved for loopback)

 2^24 – 2 = 16,777,214 host ID

IP addresses belonging to class A range from 1.0.0.0 – 126.255.255.255.

Class B

IP address belonging to class B is assigned to networks that range from


medium-sized to large-sized networks.

 The network ID is 16 bits long.

 The host ID is 16 bits long.

The higher-order bits of the first octet of IP addresses of class B are always set
to 10. The remaining 14 bits are used to determine the network ID. The 16 bits
of host ID are used to determine the host in any network. The default subnet
mask for class B is 255.255.x.x. Class B has a total of:

 2^14 = 16384 network address

 2^16 – 2 = 65534 host address

IP addresses belonging to class B ranges from 128.0.0.0 – 191.255.255.255.

Class C

IP addresses belonging to class C are assigned to small-sized networks.

 The network ID is 24 bits long.


 The host ID is 8 bits long.

The higher-order bits of the first octet of IP addresses of class C is always set to
110. The remaining 21 bits are used to determine the network ID. The 8 bits of
host ID are used to determine the host in any network. The default subnet
mask for class C is 255.255.255.x. Class C has a total of:

 2^21 = 2097152 network address

 2^8 – 2 = 254 host address

IP addresses belonging to class C range from 192.0.0.0 – 223.255.255.255.

Class D

IP address belonging to class D is reserved for multi-casting. The higher-order


bits of the first octet of IP addresses belonging to class D is always set to 1110.
The remaining bits are for the address that interested hosts recognize.

Class D does not possess any subnet mask. IP addresses belonging to class D
range from 224.0.0.0 – 239.255.255.255.

Class E

IP addresses belonging to class E are reserved for experimental and research


purposes. IP addresses of class E range from 240.0.0.0 – 255.255.255.254.
This class doesn’t have any subnet mask. The higher-order bits of the first
octet of class E are always set to 1111.
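Since the class is determined by the leading bits of the first octet, a small helper can classify any dotted-quad address (the function name and the "reserved" label for the 0.x.x.x and 127.x.x.x ranges are our own conventions):

```python
def ipv4_class(address: str) -> str:
    """Return the classful category of an IPv4 address from its first octet."""
    first = int(address.split(".")[0])
    if 1 <= first <= 126:
        return "A"        # leading bit 0
    if 128 <= first <= 191:
        return "B"        # leading bits 10
    if 192 <= first <= 223:
        return "C"        # leading bits 110
    if 224 <= first <= 239:
        return "D"        # leading bits 1110 (multicast)
    if 240 <= first <= 255:
        return "E"        # leading bits 1111 (experimental)
    return "reserved"     # 0.x.x.x, plus 127.x.x.x (loopback) falls here too
```

For example, 10.0.0.1 is class A, 172.16.0.1 is class B, and 224.0.0.5 is a class D multicast address.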

Range of Special IP Addresses

169.254.0.0 – 169.254.255.255 : Link-local addresses

127.0.0.0 – 127.255.255.255 : Loop-back addresses
0.0.0.0 – 0.255.255.255 : used to communicate within the current network.
CIDR (Classless Inter-Domain Routing or supernetting)
CIDR (Classless Inter-Domain Routing or supernetting) is a method of
assigning IP addresses that improves the efficiency of address distribution and
replaces the previous system based on Class A, Class B and Class C networks.

The initial goal of CIDR was to slow the increase of routing tables on routers
across the internet and decrease the rapid exhaustion of IPv4 addresses. As a
result, the number of available internet addresses has greatly increased.

The original classful network design of the internet included inefficiencies that
drained the pool of unassigned IPv4 addresses faster than necessary. The
classful design included the following:

 Class A, with over 16 million host identifiers per network

 Class B, with 65,534 host identifiers

 Class C, with 254 host identifiers

CIDR is based on variable-length subnet masking (VLSM), which enables


network engineers to divide an IP address space into a hierarchy of subnets of
different sizes. This makes it possible to create subnetworks with different host
counts without wasting large numbers of addresses.

CIDR addresses are made up of two sets of numbers:

1. Prefix. The prefix is the binary representation of the network address --


similar to what would be seen in a normal IP address.

2. Suffix. The suffix declares how many of the leading bits form the network
prefix. For example, CIDR notation might look like: 192.168.129.23/17 -- with
17 being the number of bits in the network prefix. IPv4 addresses support a
maximum of 32 bits.

The same CIDR notation can be applied to IPv6 addresses. The only difference
is IPv6 addresses can contain up to 128 bits.
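As an illustration, Python's `ipaddress` module can decode the /17 example from the text, showing the network that the prefix identifies and how many addresses the remaining suffix bits cover:

```python
import ipaddress

# Interpret the CIDR notation 192.168.129.23/17:
# the first 17 bits are the network prefix, the last 15 identify hosts.
iface = ipaddress.ip_interface("192.168.129.23/17")

print(iface.network)                # network derived from the prefix
print(iface.network.num_addresses)  # 2**(32-17) addresses in the block
print(iface.netmask)                # dotted-decimal form of the /17 mask
```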

CIDR blocks
CIDR blocks are groups of addresses that share the same prefix and contain
the same number of bits. Supernetting is the combination of multiple
connecting CIDR blocks into a larger whole, all of which share a common
network prefix.

The length of a prefix determines the size of CIDR blocks. A short prefix
supports more addresses -- and, therefore, forms a bigger block -- while a
longer prefix indicates fewer addresses and a smaller block.
The Internet Assigned Numbers Authority (IANA) initially handles CIDR blocks.
IANA is responsible for distributing large blocks of IP addresses to Regional
Internet Registries (RIRs).

CIDR notation

IP sets aside some addresses for specific purposes. For example, several ranges
-- such as 192.168.0.0/16 -- are set aside as nonroutable and are
used to define a private network. Most home broadband routers assign
addresses from the 192.168 network for systems inside the home.

Originally, IP addresses were assigned in three major address classes: A through
C. Each class allocated one portion of a 32-bit IP address to identify the
network -- the first 8 bits for Class A, the first 16 for
Class B, the first 24 for Class C. Bits not used for the network identifier were
available for specifying host identifiers for systems on that network.

It helps to think of the binary representation of the network addresses. For


IPv4, the 32-bit address is broken into four groups of 8 bits each -- called
a dotted quad of numbers.

A dotted quad looks like this in decimal form: 192.168.0.0. In binary form, it
looks like this: 11000000.10101000.00000000.00000000.
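A small helper (illustrative only) reproduces that binary form, converting each octet of the dotted quad to 8 bits:

```python
# Convert a dotted quad to its 32-bit binary form, octet by octet.
def to_binary(dotted: str) -> str:
    return ".".join(f"{int(octet):08b}" for octet in dotted.split("."))

print(to_binary("192.168.0.0"))
# 11000000.10101000.00000000.00000000
```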

Advantages of CIDR
 CIDR reduced the problem of wasted IPv4 address space without causing
an explosion in the number of entries in a routing table
 CIDR is now the routing system on the internet's backbone network, and
every ISP uses it. It is supported by the Border Gateway Protocol (BGP),
the prevailing exterior (interdomain) gateway protocol and the Open
Shortest Path First (OSPF) gateway protocol.
 Older gateway protocols, such as Exterior Gateway Protocol and Routing
Information Protocol, do not support CIDR.

NAT (NETWORK ADDRESS TRANSLATION)

 To access the Internet, one public IP address is needed, but we


can use a private IP address in our private network. The idea of
NAT is to allow multiple devices to access the Internet through a
single public address.
 To achieve this, the translation of a private IP address to a public
IP address is required.
Network Address Translation (NAT) is a process in which one or more
local IP address is translated into one or more Global IP address and
vice versa in order to provide Internet access to the local hosts. Also, it
does the translation of port numbers
Network Address Translation (NAT) working –
Generally, the border router is configured for NAT, i.e., the router which
has one interface in the local (inside) network and one interface in the
global (outside) network.
When a packet travels outside the local (inside) network, NAT
converts that local (private) IP address to a global (public) IP address.
When a packet enters the local network, the global (public) IP address is
converted to a local (private) IP address.
 If NAT runs out of addresses, i.e., no address is left in the configured
pool, then the packets will be dropped and an Internet Control
Message Protocol (ICMP) host-unreachable message is sent back to
the source.
NAT inside and outside addresses

Inside refers to the addresses which must be translated. Outside refers to the
addresses which are not in control of an organization.

 Inside local address – An IP address that is assigned to a host on the


Inside (local) network. The address is probably not an IP address assigned
by the service provider i.e., these are private IP addresses. This is the
inside host seen from the inside network.

 Inside global address – IP address that represents one or more inside


local IP addresses to the outside world. This is the inside host as seen from
the outside network.

 Outside local address – This is the actual IP address of the destination


host in the local network after translation.

 Outside global address – This is the outside host as seen from the outside
network. It is the IP address of the outside destination host before
translation.
Network Address Translation (NAT) Types

There are 3 ways to configure NAT:

1. Static NAT – In this, a single unregistered (Private) IP address is mapped


with a legally registered (Public) IP address i.e one-to-one mapping between
local and global addresses. This is generally used for Web hosting.

Suppose, if there are 3000 devices that need access to the Internet, the
organization has to buy 3000 public addresses that will be very costly.

2. Dynamic NAT – In this type of NAT, an unregistered IP address is


translated into a registered (Public) IP address from a pool of public IP
addresses. If the IP address of the pool is not free, then the packet will be
dropped as only a fixed number of private IP addresses can be translated
to public addresses.
3. Port Address Translation (PAT) – This is also known as NAT overload. In
this, many local (private) IP addresses can be translated to a single
registered IP address. Port numbers are used to distinguish the traffic i.e.,
which traffic belongs to which IP address. This is most frequently used as
it is cost-effective as thousands of users can be connected to the Internet
by using only one real global (public) IP address.
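As a rough sketch (not a real router implementation; the public IP and port range are invented for illustration), PAT can be modeled as a table keyed by private (IP, port) pairs, all mapped onto one public address:

```python
import itertools

# Toy model of PAT (NAT overload): many private (ip, port) pairs share one
# public IP and are distinguished only by the translated port number.
PUBLIC_IP = "203.0.113.5"            # assumed single public address
_next_port = itertools.count(40000)  # translated ports handed out in order
nat_table = {}                       # (private_ip, private_port) -> public_port

def translate_outbound(private_ip, private_port):
    """Return the (public_ip, public_port) pair used on the outside."""
    key = (private_ip, private_port)
    if key not in nat_table:                 # new flow: allocate a port
        nat_table[key] = next(_next_port)
    return PUBLIC_IP, nat_table[key]

print(translate_outbound("192.168.1.10", 5555))  # first flow
print(translate_outbound("192.168.1.11", 5555))  # second flow, new port
print(translate_outbound("192.168.1.10", 5555))  # same flow, same mapping
```

Reply traffic would be routed the other way by looking up the public port in the same table, which is how thousands of users share one global address.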

IPV6

 IPv6 or Internet Protocol Version 6 is a network layer protocol that


allows communication to take place over the network.
 IPv6 was developed by Internet Engineering Task Force (IETF) to deal
with the problem of IPv4 exhaustion.
 IPv6 uses 128-bit addresses, giving an address space of 2^128, which is
far bigger than IPv4. IPv6 addresses are written in hexadecimal, with
groups separated by colons (:).
 The common type of IP address is known as IPv4, for “version 4”.
Here’s an example of what an IPv4 address might look like:

25.59.209.224

 An IPv6 address consists of eight groups of four hexadecimal digits.


Here’s an example IPv6 address:

3001:0da8:75a3:0000:0000:8a2e:0370:7334
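Python's `ipaddress` module can parse such an address and show both its compressed form (leading zeros and one run of zero groups dropped) and its fully expanded eight-group form:

```python
import ipaddress

# Parse the IPv6 example address and show its two standard notations.
addr = ipaddress.ip_address("3001:0da8:75a3:0000:0000:8a2e:0370:7334")

print(addr.compressed)  # shortest valid notation (:: replaces zero groups)
print(addr.exploded)    # all eight groups of four hex digits
print(addr.version)     # 6
```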

Types of IPv6 Address


Now that we know about what is IPv6 address let’s take a look at its different
types.

 Unicast addresses: identify a unique node on a network and usually
refer to a single sender or a single receiver.
 Multicast addresses: represent a group of IP devices and can only be
used as the destination of a datagram.
 Anycast addresses: assigned to a set of interfaces that typically belong
to different nodes.
Advantages of IPv6
 Reliability
 Faster Speeds
 Stronger Security
 Routing efficiency
 Most importantly it’s the final solution for growing nodes in Global-
network.
Disadvantages of IPv6
 Conversion
 Communication
IPV6 Header Format

IP version 6 is the new version of Internet Protocol, which is way better than
IP version 4 in terms of complexity and efficiency. Let’s look at the header of
IP version 6 and understand how it is different from the IPv4 header.

Version (4-bits): Indicates version of Internet Protocol which contains


bit sequence 0110.

Traffic Class (8-bits): The Traffic Class field indicates class or priority
of IPv6 packet which is similar to Service Field in IPv4 packet.

Priority assignment: IPv6 distinguishes congestion-controlled traffic
from uncontrolled traffic. Uncontrolled data traffic is mainly used for
audio/video data, so uncontrolled traffic is given higher priority.

Flow Label (20-bits): Flow Label field is used by a source to label the
packets belonging to the same flow in order to request special handling
by intermediate IPv6 routers, such as non-default quality of service or
real-time service.
Payload Length (16-bits): It is a 16-bit (unsigned integer) field,
indicates the total size of the payload which tells routers about the
amount of information a particular packet contains in its payload. The
payload Length field includes extension headers(if any) and an upper-
layer packet.
Next Header (8-bits): Next Header indicates the type of extension
header(if present) immediately following the IPv6 header. Whereas In
some cases it indicates the protocols contained within upper-layer
packets, such as TCP, UDP.
Hop Limit (8-bits): Hop Limit field is the same as TTL in IPv4 packets.
It indicates the maximum number of intermediate nodes IPv6 packet is
allowed to travel. Its value gets decremented by one, by each node that
forwards the packet and the packet is discarded if the value decrements
to 0. This is used to discard the packets that are stuck in an infinite
loop because of some routing error.
Source Address (128-bits): Source Address is the 128-bit IPv6 address
of the original source of the packet.
Destination Address (128-bits): The destination Address field indicates
the IPv6 address of the final destination(in most cases). All the
intermediate nodes can use this information in order to correctly route
the packet.
Extension Headers: In order to rectify the limitations of the IPv4 Option
Field, Extension Headers are introduced in IP version 6. The extension
header mechanism is a very important part of the IPv6 architecture.
Transition From IPv4 to IPv6

Various organizations are currently working with IPv4 technology, and we
cannot switch directly from IPv4 to IPv6 in one day. Instead of using only
IPv6, we use a combination of both; transition means not replacing IPv4 but
the co-existence of both.

Sometimes we want to send a request from an IPv4 address to an IPv6
address, but this is not directly possible because IPv4 and IPv6 are not
compatible. As a solution to this problem, we use some transition
technologies: Dual Stack Routers, Tunneling, and NAT Protocol Translation.

1. Dual-Stack Routers

In a dual-stack router, the router's interfaces are configured with both
IPv4 and IPv6 addresses in order to support the transition from IPv4 to
IPv6.

A server with both IPv4 and IPv6 addresses configured can communicate with
all IPv4 and IPv6 hosts via a dual-stack router (DSR). The dual-stack
router gives a path for all the hosts to communicate with the server
without changing their IP addresses.
2. Tunneling

 Tunneling is used as a medium to communicate the transit network


with the different IP versions.

 For example, two IPv4 networks can communicate across a transit or
intermediate IPv6 network with the help of a tunnel. Likewise, IPv6
networks can communicate with each other across IPv4 networks with the
help of a tunnel.
3. NAT Protocol Translation
 With the help of the NAT Protocol Translation technique, IPv4 and
IPv6 networks can communicate with each other even though neither
understands the addresses of the other IP version.
 Generally, one IP version doesn’t understand the addresses of a
different IP version. To solve this problem we use a NAT-PT device,
which removes the header of the first (sender) IP version and adds the
header of the second (receiver) IP version.
When an IPv4 host communicates with an IPv6 host via a NAT-PT device, the
IPv6 host sees the request as coming from the same IP version (IPv6) and
so it responds.

Comparisons (Differences) between IPv4 and IPv6

 Address length: IPv4 has a 32-bit address length; IPv6 has a 128-bit
address length.
 Address configuration: IPv4 supports manual and DHCP address
configuration; IPv6 supports auto-configuration and renumbering.
 Connection integrity: In IPv4, end-to-end connection integrity is
unachievable; in IPv6 it is achievable.
 Address space: IPv4 can generate a 4.29×10^9 address space; the address
space of IPv6 is quite large, as it can produce 3.4×10^38 addresses.
 Security: In IPv4, the security feature is dependent on the application;
IPSEC is an inbuilt security feature in the IPv6 protocol.
 Representation: IPv4 addresses are represented in decimal; IPv6
addresses are represented in hexadecimal.
 Fragmentation: In IPv4, fragmentation is performed by the sender and
forwarding routers; in IPv6, fragmentation is performed only by the
sender.
 Packet flow identification: Not available in IPv4; available in IPv6,
which uses the Flow Label field in the header.
 Checksum field: Available in IPv4; not available in IPv6.
 Transmission scheme: IPv4 has a broadcast message transmission scheme;
IPv6 uses multicast and anycast message transmission schemes.
 Encryption and authentication: Not provided in IPv4; provided in IPv6.
 Header size: IPv4 has a header of 20-60 bytes; IPv6 has a fixed header
of 40 bytes.
 Conversion: IPv4 can be converted to IPv6, but not all IPv6 addresses
can be converted to IPv4.
 Notation: IPv4 consists of 4 fields separated by dots (.); IPv6
consists of 8 fields separated by colons (:).
 Classes: IPv4 addresses are divided into five classes (Class A, Class
B, Class C, Class D, Class E); IPv6 does not have any classes of IP
address.
 VLSM: IPv4 supports VLSM (Variable Length Subnet Mask); IPv6 does not
support VLSM.
 Example of IPv4: 66.94.29.13
 Example of IPv6: 2001:0000:3238:DFE1:0063:0000:0000:FEFB
ICMP (Internet Control Message Protocol)
 Internet Control Message Protocol (ICMP) is a network layer protocol
used to diagnose communication errors by performing an error control
mechanism.
 Since IP does not have an inbuilt mechanism for sending error and
control messages. It depends on Internet Control Message
Protocol(ICMP) to provide error control.
 ICMP is used for reporting errors and management queries. It is a
supporting protocol and is used by network devices like routers for
sending error messages and operations information.

Uses of ICMP
ICMP is used for error reporting: if two devices communicate over the
internet and some error occurs, the router sends an ICMP error message to
the source informing it about the error.

Another important use of the ICMP protocol is to perform network diagnosis
with the traceroute and ping utilities. We will discuss them one by one.

 Traceroute: The traceroute utility is used to discover the route
between two devices connected over the internet. It records the journey
from one router to the next, and a traceroute is often performed to
check for network issues before data transfer.
 Ping: Ping is a simpler utility based on the ICMP echo-request message.
It is used to measure the time taken by data to reach the destination
and return to the source; the replies are known as echo-reply messages.

ICMP Working principle


 ICMP is an important supporting protocol of the IP suite, and it is a
connectionless protocol.
 The working of ICMP contrasts with TCP: TCP is connection-oriented, so
a connection is established before any message is sent and both devices
must be ready through a TCP handshake, whereas ICMP requires no
connection.
 ICMP packets are transmitted in the form of datagrams that contain an
IP header with ICMP data. An ICMP datagram, like any packet, is an
independent data entity.

ICMP Packet Format

ICMPv4 Packet Format

In the ICMP packet format, the first 32 bits of the packet contain three fields:

Type (8-bit): The initial 8-bit of the packet is for message type, it provides a
brief description of the message.
Some common message types are as follows:
 Type 0 – Echo reply
 Type 3 – Destination unreachable
 Type 5 – Redirect Message
 Type 8 – Echo Request
 Type 11 – Time Exceeded
 Type 12 – Parameter problem
Code (8-bit): Code is the next 8 bits of the ICMP packet format, this field
carries some additional information about the error message and type.
Checksum (16-bit): The last 16 bits of the first word are the checksum
field in the ICMP packet header. The checksum is computed over the
complete ICMP message and enables the receiver to verify that the data
arrived intact.
The next 32 bits of the ICMP header form the Extended Header, which points
out the problem in the IP message. A pointer identifies the byte location
that caused the problem, and the receiving device looks there to locate
the error.

The last part of the ICMP packet is the Data or Payload, of variable
length. The datagram is limited to 576 bytes in IPv4 and 1280 bytes in
IPv6.
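The checksum mentioned above is the standard one's-complement internet checksum. A minimal sketch builds an echo-request header (type 8, code 0) and fills the checksum in; the identifier (0x1234) and sequence number (1) are arbitrary values chosen for the example:

```python
import struct

def checksum(data: bytes) -> int:
    """One's-complement sum of all 16-bit words, complemented."""
    if len(data) % 2:
        data += b"\x00"                     # pad odd-length data
    total = sum(struct.unpack(f"!{len(data)//2}H", data))
    total = (total & 0xFFFF) + (total >> 16)  # fold carry bits back in
    total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

# Echo request: type=8, code=0, checksum=0 placeholder, id, sequence.
header = struct.pack("!BBHHH", 8, 0, 0, 0x1234, 1)
packet = struct.pack("!BBHHH", 8, 0, checksum(header), 0x1234, 1)
print(packet.hex())  # 0800e5ca12340001
```

A receiver verifies the packet by summing all words including the checksum; a valid packet folds to 0xFFFF, whose complement is 0.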

Types of ICMP Messages

Type 0 – Echo Reply
  Code 0: Echo reply

Type 3 – Destination Unreachable
  Code 0: Destination network unreachable
  Code 1: Destination host unreachable
  Code 2: Destination protocol unreachable
  Code 3: Destination port unreachable
  Code 4: Fragmentation is needed and the DF flag set
  Code 5: Source route failed

Type 5 – Redirect Message
  Code 0: Redirect the datagram for the network
  Code 1: Redirect datagram for the host
  Code 2: Redirect the datagram for the Type of Service and network
  Code 3: Redirect datagram for the Type of Service and host

Type 8 – Echo Request
  Code 0: Echo request

Type 9 – Router Advertisement / Type 10 – Router Solicitation
  Code 0: Used to discover the addresses of operational routers

Type 11 – Time Exceeded
  Code 0: Time to live exceeded in transit
  Code 1: Fragment reassembly time exceeded

Type 12 – Parameter Problem
  Code 0: The pointer indicates an error
  Code 1: Missing required option
  Code 2: Bad length

Type 13 – Timestamp
  Code 0: Used for time synchronization

Type 14 – Timestamp Reply
  Code 0: Reply to Timestamp message

Source Quench Message

A source quench message is a request to decrease the traffic rate of
messages sent to a host (destination). In other words, when the receiving
host detects that the rate at which packets are being sent to it (the
traffic rate) is too fast, it sends a source quench message to the source
to slow the pace down so that no packet is lost.

ICMP takes the source IP from the discarded packet and informs the source
by sending a source quench message. The source then reduces the speed of
transmission so that the router is relieved from congestion.
UNIT - 5

The Transport Layer: Transport layer protocols: Introduction-services- port number-User data
gram protocol-User datagram-UDP services-UDP applications-Transmission control protocol:
TCP services TCP features- Segment- A TCP connection- windows in TCP- flow control-Error
control, Congestion control in TCP.

Application Layer –- World Wide Web: HTTP, Electronic mail-Architecture- web based mail-
email security- TELNET-local versus remote Logging-Domain Name System: Name Space,
DNS in Internet,- Resolution-Caching- Resource Records- DNS messages- Registrars-security
of DNS Name Servers, SNMP.

What is the transport layer?


The transport layer is Layer 4 of the Open Systems Interconnection (OSI) communications model
and the transport layer of the TCP/IP model. It is responsible for ensuring that data packets arrive
accurately and reliably between sender and receiver. The transport layer most often uses TCP or
User Datagram Protocol (UDP).

In the OSI model, the transport layer sits between the network layer and the session layer. The
network layer is responsible for taking the data packets and sending them to the correct computer.
The transport layer then takes the received packets, checks them for errors and sorts them. Then,
it sends them to the session layer of the correct program running on the computer.

The unit of data encapsulation in the Transport Layer is a segment.

Working of Transport Layer


The transport layer takes services from the Application layer and provides services to
the Network layer.
At the sender’s side: The transport layer receives data (message) from the Application layer
and then performs Segmentation, divides the actual message into segments, adds the source and
destination’s port numbers into the header of the segment, and transfers the message to the
Network layer.

At the receiver’s side: The transport layer receives data from the Network layer, reassembles
the segmented data, reads its header, identifies the port number, and forwards the message to the
appropriate port in the Application layer.

5. Transport layer protocols


 There are two common protocols in transport layer
 UDP
 TCP
 UDP is a simple and fast transport protocol. It is for connectionless transmissions. It is
considered unreliable because it does not use acknowledgements or retransmissions, so
packets may be lost. UDP is best for real-time data where speed of delivery is more
important than reliability, such as for video conferencing.
 TCP is the more feature-rich transport protocol. It is connection-oriented. It
uses synchronization and acknowledgement messages to ensure delivery. It retransmits
and reorders packets if needed. It can negotiate sending and receiving rates. TCP is slower
than UDP. TCP is the most common protocol on the internet.

5.1 Services
The transport layer is responsible for providing services to the application layer; it receives
services from the network layer.
The services provided by the transport layer are similar to those of the data link layer. The data
link layer provides the services within a single network while the transport layer provides the
services across an internetwork made up of many networks. The data link layer controls the
physical layer while the transport layer controls all the lower layers.

The services provided by the transport layer protocols can be divided into the following categories:
 Process to Process Communications
 Addressing : Port numbers
 Encapsulation and decapsulations
 Multiplexing & Demultiplexing

 Flow control
 Error control
 Congestion control

 Process to Process Communications

 The Transport Layer is responsible for delivering data to the appropriate application
process on the host computers.
 This involves multiplexing of data from different application processes, i.e. forming data
packets, and adding source and destination port numbers in the header of each Transport
Layer data packet.
 Together with the source and destination IP address, the port numbers constitute a network
socket, i.e. an identification address of the process-to-process communication.

 Flow Control

 Flow Control is the process of managing the rate of data transmission between two nodes to
prevent a fast sender from overwhelming a slow receiver.

 It provides a mechanism for the receiver to control the transmission speed, so that the receiving
node is not overwhelmed with data from transmitting node.

 There are two categories of flow control

 Stop and Wait -- send one frame at a time

 Sliding Window -- Send many frames at a time

 Addressing
 The ability to communicate with the correct application on the computer. Addressing
typically uses network ports to assign each sending and receiving application a specific
port number on the machine. By combining the IP address used in the network layer and
the port on the transport layer, each application can have a unique address.
 Ports are the essential ways to address multiple entities in the same location.
 Using port addressing it is possible to use more than one network-based application at the
same time.
 Three types of Port numbers are used

 Well-known ports - These are permanent port numbers. They range between 0 to
1023.These port numbers are used by Server Process.

 Registered ports - The ports ranging from 1024 to 49,151 are not assigned or
controlled.

 Ephemeral ports (Dynamic Ports) – These are temporary port numbers. They
range between 49152–65535.These port numbers are used by Client Process.
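The three ranges above can be captured in a small classifier (a sketch; the labels simply follow the list above):

```python
# Classify a port number into the three IANA ranges described above.
def port_class(port: int) -> str:
    if not 0 <= port <= 65535:
        raise ValueError("port numbers are 16-bit: 0-65535")
    if port <= 1023:
        return "well-known"          # permanent, used by server processes
    if port <= 49151:
        return "registered"          # 1024-49151, not assigned or controlled
    return "ephemeral (dynamic)"     # 49152-65535, used by client processes

print(port_class(13))     # well-known (the daytime server from the text)
print(port_class(8080))   # registered
print(port_class(52000))  # ephemeral (dynamic)
```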

 Encapsulation and Decapsulation

To send a message from one process to another, the transport-layer protocol encapsulates and
decapsulates messages.
 Encapsulation happens at the sender site. The transport layer receives the data and adds the
transport-layer header.
 Decapsulation happens at the receiver site. When the message arrives at the destination
transport layer, the header is dropped and the transport layer delivers the message to the
process running at the application layer.

 Multiplexing

 The transport layer uses the multiplexing to improve transmission efficiency.

Multiplexing can occur in two ways:


 Upward multiplexing: Upward multiplexing means multiple transport layer connections
use the same network connection. To make more cost-effective, the transport layer sends
several transmissions bound for the same destination along the same path; this is achieved
through upward multiplexing.

 Downward multiplexing: Downward multiplexing means one transport layer connection


uses the multiple network connections. Downward multiplexing allows the transport layer
to split a connection among several paths to improve the throughput. This type of
multiplexing is used when networks have a low or slow capacity.

 Error Control

 Error control at the transport layer is responsible for


1. Detecting and discarding corrupted packets.
2. Keeping track of lost and discarded packets and resending them.
3. Recognizing duplicate packets and discarding them.
4. Buffering out-of-order packets until the missing packets arrive.

 Error Control involves two methodologies


1. Error Detection
2. Error Correction

 Congestion Control
Congestion in a network may occur if the load on the network (the number of packets sent to the
network) is greater than the capacity of the network (the number of packets a network can handle).
 Congestion control refers to the mechanisms and techniques that control the congestion
and keep the load below the capacity.
 Congestion Control refers to techniques and mechanisms that can either prevent
congestion, before it happens, or remove congestion, after it has happened
 Congestion control mechanisms are divided into two categories,
1. Open loop - prevent the congestion before it happens.
2. Closed loop - remove the congestion after it happens.
5.2 PORT NUMBERS
What is a port number?
 Port number identifies a specific process to which an Internet or other network message is
to be forwarded when it arrives at a server. Ports are identified for each protocol and It is
considered as a communication endpoint.
 Port numbers consist of 16-bit numbers. There are 2^16 port numbers i.e 65536 available.

 These are divided into three ranges:


1. Well-known ports (0 to 1023).
2. Registered ports (1,024 to 49,151).
3. Dynamic and/or private ports (49,152 to 65,535).

 Well-known ports

 These are permanent port numbers used by the servers.


 They range between 0 to 1023. registered with IANA for a specific service
 This port number cannot be chosen randomly.
 These port numbers are universal port numbers for servers.
 Every client process knows the well-known port number of the corresponding server
process.
 For example, while the daytime client process, a well-known client program, can use an
ephemeral (temporary) port number, 52,000, to identify itself, the daytime server process
must use the well-known (permanent) port number 13.

 Ephemeral ports (dynamic ports)


 The client program defines itself with a port number, called the ephemeral port number.
 The word ephemeral means “short-lived” and is used because the life of a client is normally
short.
 An ephemeral port number is recommended to be greater than 1023.
 These port number ranges from 49,152 to 65,535 .
 They are neither controlled nor registered.
 They can be used as temporary or private port numbers.

 Registered ports
 The ports ranging from 1024 to 49,151 are not assigned or controlled
Top 25 Most Popular Ports

From the top 100 these are the 25 most popular ports (most frequently used).

1. Port 80 TCP – (HTTP)


2. Port 443 TCP – (HTTPS)
3. Port 67-68 UDP – (DHCP)
4. Port 20-21 – (FTP)
5. Port 23 – (Telnet)
6. Port 22 – (SSH)
7. Port 53 TCP/UDP – (DNS)
8. Port 8080
9. Port 123 UDP – (NTP)
10. Port 25 TCP – (SMTP)
11. Port 3389 TCP (RDP)
12. Port 110 TCP – (POP3)
13. Port 554 TCP/UDP – (RTSP)
14. Port 445 – (SMB/Cifs)
15. Port 587 TCP – (SMTP submission)
16. Port 993 TCP – (IMAPS)
17. Port 137-139 – (NetBIOS)
18. Port 8008
19. Port 500 UDP – (IKE/ISAKMP)
20. Port 143 TCP – (IMAP)
21. Port 161-162 UDP – (SNMP)
22. Port 389 TCP – (LDAP)
23. Port 1434 UDP – (Microsoft SQL)
24. Port 5900 TCP – (VNC)

5.3 User Datagram Protocol


 Three protocols are associated with the Transport layer. They are
(1) UDP –User Datagram Protocol
(2) TCP – Transmission Control Protocol
(3) SCTP - Stream Control Transmission Protocol
 Each protocol provides a different type of service and should be used appropriately
 UDP - UDP is an unreliable connectionless transport-layer protocol used for its simplicity
and efficiency in applications where error control can be provided by the application-layer
process.
 TCP - TCP is a reliable connection-oriented protocol that can be used in any application
where reliability is important.
 SCTP - SCTP is a new transport-layer protocol designed to combine some features of UDP
and TCP in an effort to create a better protocol for multimedia communication.

USER DATAGRAM PROTOCOL (UDP)

The User Datagram Protocol (UDP) is the simplest transport layer communication protocol
available in the TCP/IP protocol suite.

The transport layer provides process-to-process communication, and UDP is one of the
simplest protocols that does so.

 User Datagram Protocol was developed by David P. Reed in 1980.


 UDP protocol is the connectionless and unreliable protocol.
 Since UDP is a connectionless protocol there is no need to establish a connection before
transmitting data.
 User Datagram Protocol gives us a set of rules for transmitting data over the internet.
 UDP packets are called User Datagram.
 User Datagram has 8 bytes fixed-size header.
 UDP protocol will work just like an alternative to TCP (Transmission Control Protocol).
 Process can use UDP protocol if they don’t care much about the reliability of transmission
and want to send a small message.

UDP FRAME FORMAT


UDP packets are known as user datagrams.
These user datagrams, have a fixed-size header of 8 bytes made of four fields, each of 2 bytes (16
bits)

UDP header contains four main parameters:

 Source Port - This 16 bits information is used to identify the source port of the packet.
 Destination Port - This 16 bits information is used to identify the application-level
service on the destination machine.
 Length - Length field specifies the entire length of UDP packet (including header). It is
16-bits field and minimum value is 8-byte, i.e. the size of UDP header itself.
 Checksum - This field stores the checksum value generated by the sender before sending.
IPv4 has this field as optional so when checksum field does not contain any value it is made
0 and all its bits are set to zero.
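The four 2-byte fields above can be packed and unpacked with Python's `struct` module in network byte order; the port numbers here are arbitrary examples, and the checksum is left as 0 (unused):

```python
import struct

# Pack the fixed 8-byte UDP header: source port, destination port,
# length (header + data), and checksum, each a 16-bit big-endian field.
payload = b"hello"
length = 8 + len(payload)                  # minimum value is 8 (header only)
header = struct.pack("!HHHH", 52000, 53, length, 0)  # checksum 0 = unused

src, dst, ln, ck = struct.unpack("!HHHH", header)
print(src, dst, ln, ck)   # 52000 53 13 0
print(len(header))        # 8
```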

Features OR Services of UDP protocol

Transport layer protocol

 User Datagram Protocol is a transport layer protocol.


 UDP is considered as an unreliable and connection-less protocol
Connectionless service
 UDP provides a connectionless service.
 There is no connection establishment and no connection termination.
 Each user datagram sent by UDP is an independent datagram.
 There is no relationship between the different user datagrams even if
they are coming from the same source process and going to the same
destination program.
 The user datagrams are not numbered.
 Each user datagram can travel on a different path. Ordered delivery of data is not
guaranteed.

Order of delivery not guaranteed

 UDP does not guarantee the order of the datagrams. A datagram can be
received in any order.
 The UDP protocol utilizes port numbers to deliver data to the correct
destination process.
 Well-known UDP server ports fall in the range 0 - 1023.

Faster transmission

 UDP provides a faster service of data transmission, as there is no prior connection
establishment before transmitting the data.
 UDP does not require any virtual path for data transmission.
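UDP's connectionless, no-handshake behaviour can be seen in a minimal sketch using Python's socket module: the sender transmits a datagram immediately, with no connection establishment or termination (the loopback address and message text are illustrative):

```python
import socket

# Receiver: bind to an ephemeral port on loopback; no listen()/accept() needed.
recv_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv_sock.bind(("127.0.0.1", 0))
recv_sock.settimeout(5)                  # avoid blocking forever if the datagram is lost
port = recv_sock.getsockname()[1]

# Sender: transmit immediately -- there is no connection establishment phase.
send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_sock.sendto(b"small message", ("127.0.0.1", port))

data, addr = recv_sock.recvfrom(1024)    # each datagram arrives independently
print(data)                              # b'small message'
send_sock.close()
recv_sock.close()
```

Note that, unlike the TCP example later in the unit, there is no three-way handshake and no acknowledgment: the datagram either arrives or is silently lost.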

Acknowledgment mechanism

There is no acknowledgment mechanism in UDP; since UDP is a connectionless protocol, there is
no handshaking.

Segments are handled independently.

Every UDP datagram may take a different path to reach the destination. So, every UDP packet is
handled independently of other UDP packets.
Stateless

UDP protocol is a stateless protocol which means that the sender does not wait for an
acknowledgment after sending the packet.

Flow Control
 UDP is a very simple protocol.
 There is no flow control, and hence no window mechanism.
 The receiver may overflow with incoming messages.
 The lack of flow control means that the process using UDP should provide for this service,
if needed.

Error Control
 There is no error control mechanism in UDP except for the checksum.
 This means that the sender does not know if a message has been lost or duplicated.
 When the receiver detects an error through the checksum, the user datagram is silently
discarded.
Congestion Control
Since UDP is a connectionless protocol, it does not provide congestion control.
 UDP assumes that the packets sent are small and sporadic (sent occasionally or at irregular
intervals) and cannot create congestion in the network.
 This assumption may or may not be true, when UDP is used for interactive real-time
transfer of audio and video.
Encapsulation and Decapsulation
Queuing
Multiplexing and Demultiplexing

UDP applications

Here are few applications where UDP is used to transmit data:

 Domain Name Services


 Simple Network Management Protocol
 Trivial File Transfer Protocol
 Routing Information Protocol
 Kerberos

5.4 Transmission control protocol


What is TCP?

TCP stands for Transmission Control Protocol.


Transmission Control Protocol (TCP) is a communications standard that enables
application programs and computing devices to exchange messages over a network. It is
designed to send packets across the internet and ensure the successful delivery of data and
messages over networks.

 TCP is a reliable, connection-oriented, byte-stream protocol.


 TCP guarantees the reliable, in-order delivery of a stream of bytes. It is a full-
duplex protocol, meaning that each TCP connection supports a pair of byte streams,
one flowing in each direction.
 TCP includes a flow-control mechanism for each of these byte streams that allow
the receiver to limit how much data the sender can transmit at a given time.
 TCP supports a demultiplexing mechanism that allows multiple application
programs on any given host to simultaneously carry on a conversation with their
peers.
 TCP also implements a congestion-control mechanism. The idea of this mechanism
is to prevent the sender from overloading the network.
 Flow control is an end-to-end issue, whereas congestion control is concerned with
how hosts and the network interact.

5.4.1 TCP SERVICES


The various services provided by the TCP to the application layer are as follows:
1. Process-to-Process Communication –
TCP provides process-to-process communication, i.e., the transfer of data that takes place
between individual processes executing on end systems. This is done using port numbers or
port addresses. Port numbers are 16 bits long and identify which process is sending or
receiving data on a host.

2. Stream oriented
This means that data is sent and received as a stream of bytes (unlike UDP or IP, which
divide the bits into datagrams or packets). However, the network layer, which provides
service to TCP, sends packets of information, not streams of bytes. Hence, TCP groups
a number of bytes together into a segment, adds a header to each of these segments, and
then delivers these segments to the network layer. At the network layer, each of these
segments is encapsulated in an IP packet for transmission. The TCP header carries the
information required for control purposes, which will be discussed along with the segment
structure.

3. Full-duplex service
This means that the communication can take place in both directions at the same time.

4. Connection-oriented service
Unlike UDP, TCP provides a connection-oriented service. It defines 3 different phases:
 Connection establishment
 Data transfer
 Connection termination
5. Reliability
TCP is reliable as it uses checksum for error detection, attempts to recover lost or corrupted
packets by re-transmission, acknowledgement policy and timers. It uses features like byte
number and sequence number and acknowledgement number so as to ensure reliability.
Also, it uses congestion control mechanisms.
6. Multiplexing
TCP does multiplexing and de-multiplexing at the sender and receiver ends respectively as
a number of logical connections can be established between port numbers over a physical
connection.

5.4.2 TCP Features

1. Numbering System
 Although the TCP software keeps track of the segments being transmitted or received, there
is no field for a segment number value in the segment header. Instead, there are two fields
called the sequence number and the acknowledgment number. These two fields refer to the
byte number and not the segment number.
 Byte Number TCP numbers all data bytes that are transmitted in a connection. Numbering
is independent in each direction. When TCP receives bytes of data from a process, it stores
them in the sending buffer and numbers them. The numbering does not necessarily start
from 0. Instead, TCP generates a random number between 0 and 2^32 - 1 for the number of
the first byte.
 For example, if the random number happens to be 1057 and the total data to be sent are
6000 bytes, the bytes are numbered from 1057 to 7056. We will see that byte numbering
is used for flow and error control.
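The byte-numbering example above can be checked in a few lines (the 1000-byte segment size used to split the stream is an assumed value, not part of the example in the text):

```python
isn = 1057            # randomly chosen initial byte number (example from the text)
total = 6000          # total data bytes to send
first, last = isn, isn + total - 1
assert (first, last) == (1057, 7056)   # bytes numbered 1057 to 7056

# With an assumed segment size of 1000 bytes, the per-segment byte ranges are:
mss = 1000
for seq in range(first, last + 1, mss):
    print(seq, min(seq + mss - 1, last))
# prints 1057 2056, 2057 3056, ..., 6057 7056
```

The first byte number of each segment is what TCP later places in the sequence-number field of that segment's header.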
2. Flow Control
TCP, unlike UDP, provides flow control. The receiver of the data controls the amount of data that
are to be sent by the sender. This is done to prevent the receiver from being overwhelmed with
data. The numbering system allows TCP to use a byte-oriented flow control.
3. Error Control
To provide reliable service, TCP implements an error control mechanism. Although error control
considers a segment as the unit of data for error detection (loss or corrupted segments), error
control is byte-oriented, as we will see later.
4. Congestion Control
TCP, unlike UDP, takes into account congestion in the network. The amount of data sent by a
sender is not only controlled by the receiver (flow control) but is also determined by the level of
congestion in the network.

5.4.3 Segment

A packet in TCP is called a segment. The format of a segment is shown in the following figure
The segment consists of a 20- to 60-byte header, followed by data from the application program.
The header is 20 bytes if there are no options and up to 60 bytes if it contains options. The different
sections of the Header are as follows

Source port address. This is a 16-bit field that defines the port number of the application program
in the host that is sending the segment. This serves the same purpose as the source port address in
the UDP header.
Destination port address. This is a 16-bit field that defines the port number of the application
program in the host that is receiving the segment. This serves the same purpose as the destination
port address in the UDP header.
Sequence number. This 32-bit field defines the number assigned to the first byte of data contained
in this segment. As we said before, TCP is a stream transport protocol. To ensure connectivity,
each byte to be transmitted is numbered. The sequence number tells the destination which byte in
this sequence comprises the first byte in the segment. During connection establishment, each party
uses a random number generator to create an initial sequence number (ISN), which is usually
different in each direction.
Acknowledgment number. This 32-bit field defines the byte number that the receiver of the
segment is expecting to receive from the other party. If the receiver of the segment has successfully
received byte number x from the other party, it defines x + 1 as the acknowledgment number.
Acknowledgment and data can be piggybacked together.
Header length. This 4-bit field indicates the number of 4-byte words in the TCP header. The length
of the header can be between 20 and 60 bytes. Therefore, the value of this field can be between 5
(5 x 4 = 20) and 15 (15 x 4 = 60).
Reserved. This is a 6-bit field reserved for future use.
Control. This field defines 6 different control bits or flags as shown in below figure. One or more
of these bits can be set at a time

Window size. This field defines the size of the window, in bytes, that the other party must maintain.
Note that the length of this field is 16 bits, which means that the maximum size of the window is
65,535 bytes. This value is normally referred to as the receiving window (rwnd) and is determined
by the receiver. The sender must obey the dictation of the receiver in this case.
Checksum. This 16-bit field contains the checksum. The calculation of the checksum for TCP
follows the same procedure as the one described for UDP. However, the inclusion of the checksum
in the UDP datagram is optional, whereas the inclusion of the checksum for TCP is mandatory.
The same pseudoheader, serving the same purpose, is added to the segment. For the TCP
pseudoheader, the value for the protocol field is 6.
Urgent pointer. This 16-bit field, which is valid only if the urgent flag is set, is used when the
segment contains urgent data. It defines the number that must be added to the sequence number to
obtain the number of the last urgent byte in the data section of the segment.
Options. There can be up to 40 bytes of optional information in the TCP header
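As a sketch, the fixed 20-byte header can be built and parsed with struct, following the field widths described above (all field values here are made up for illustration):

```python
import struct

src, dst = 12345, 80           # 16-bit source and destination ports (hypothetical)
seq, ack = 1057, 0             # 32-bit sequence and acknowledgment numbers
hlen = 5                       # header length in 4-byte words: 5 x 4 = 20 bytes, no options
flags = 0b000010               # SYN flag set (bit order: URG ACK PSH RST SYN FIN)
offset_flags = (hlen << 12) | flags   # 4-bit header length, 6 reserved bits, 6 flag bits
window, checksum, urgent = 65535, 0, 0

hdr = struct.pack("!HHIIHHHH", src, dst, seq, ack,
                  offset_flags, window, checksum, urgent)
assert len(hdr) == 20          # 20-byte header when there are no options

fields = struct.unpack("!HHIIHHHH", hdr)
print(fields[2], fields[4] >> 12)   # sequence number 1057, header length 5
```

Shifting the header-length value into the top 4 bits mirrors how the 4-bit field, 6 reserved bits, and 6 control flags share one 16-bit word.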

5.5 A TCP Connection


TCP is connection-oriented. A connection-oriented transport protocol establishes a virtual path
between the source and destination. All the segments belonging to a message are then sent over
this virtual path. Using a single virtual pathway for the entire message facilitates the
acknowledgment process as well as retransmission of damaged or lost frames.
In TCP, connection-oriented transmission requires three phases: connection establishment, data
transfer, and connection termination.
1. Connection Establishment
TCP transmits data in full-duplex mode. When two TCPs in two machines are connected, they are
able to send segments to each other simultaneously. This implies that each party must initialize
communication and get approval from the other party before any data are transferred.
Three-Way Handshaking The connection establishment in TCP is called three-way handshaking.
In our example, an application program, called the client, wants to make a connection with another
application program, called the server, using TCP as the transport layer protocol. The process starts
with the server. The server program tells its TCP that it is ready to accept a connection. This is
called a request for a passive open. The client program issues a request for an active open. A client
that wishes to connect to an open server tells its TCP that it needs to be connected to that particular
server. TCP can now start the three-way handshaking process as shown in below figure
1. The client sends the first segment, a SYN segment, in which only the SYN flag is set. This
segment is for synchronization of sequence numbers. It consumes one sequence number. When
the data transfer starts, the sequence number is incremented by 1. We can say that the SYN segment
carries no real data, but we can think of it as containing 1 imaginary byte.
2. The server sends the second segment, a SYN +ACK segment, with 2 flag bits set: SYN and
ACK. This segment has a dual purpose. It is a SYN segment for communication in the other
direction and serves as the acknowledgment for the SYN segment. It consumes one sequence
number.
3. The client sends the third segment. This is just an ACK segment. It acknowledges the receipt of
the second segment with the ACK flag and acknowledgment number field. Note that the sequence
number in this segment is the same as the one in the SYN segment; the ACK segment does not
consume any sequence numbers.
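The sequence/acknowledgment arithmetic of the three handshake segments can be sketched as follows (the two initial sequence numbers are assumed values; recall that a SYN consumes one sequence number while a bare ACK consumes none):

```python
client_isn, server_isn = 8000, 15000   # assumed initial sequence numbers

# 1. SYN: consumes one sequence number.
syn = {"seq": client_isn, "flags": {"SYN"}}

# 2. SYN+ACK: acknowledges client_isn + 1 and consumes one sequence number.
syn_ack = {"seq": server_isn, "ack": syn["seq"] + 1, "flags": {"SYN", "ACK"}}

# 3. ACK: acknowledges server_isn + 1; it consumes no sequence number,
#    so its sequence number is client_isn + 1 (the same as the first data byte).
ack = {"seq": client_isn + 1, "ack": syn_ack["seq"] + 1, "flags": {"ACK"}}

print(syn_ack["ack"], ack["ack"])   # 8001 15001
```

After these three segments, both sides agree on where each byte stream begins.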

2. Data Transfer
After connection is established, bidirectional data transfer can take place. The client and server
can both send data and acknowledgments. for the moment, it is enough to know that data traveling
in the same direction as an acknowledgment are carried on the same segment. The
acknowledgment is piggybacked with the data.
Below figure shows an example. In this example, after connection is established (not shown in the
figure), the client sends 2000 bytes of data in two segments. The server then sends 2000 bytes in
one segment. The client sends one more segment. The first three segments carry both data and
acknowledgment, but the last segment carries only an acknowledgment because there are no more
data to be sent. Note the values of the sequence and acknowledgment numbers. The data segments
sent by the client have the PSH (push) flag set so that the server TCP knows to deliver data to the
server process as soon as they are received. We discuss the use of this flag in greater detail later.
The segment from the server, on the other hand, does not set the push flag. Most TCP
implementations have the option to set or not set this flag.
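The sequence and acknowledgment numbers in this kind of exchange can be traced with a short sketch (the starting byte numbers after the handshake are assumed values):

```python
# Client sends 2000 bytes in two 1000-byte segments; server replies with
# 2000 bytes in one segment; client then sends an ACK-only segment.
c_seq, s_seq = 8001, 15001          # assumed first data byte numbers

seg1 = {"seq": c_seq,        "bytes": 1000, "ack": s_seq}         # data + piggybacked ack
seg2 = {"seq": c_seq + 1000, "bytes": 1000, "ack": s_seq}         # data + piggybacked ack
seg3 = {"seq": s_seq,        "bytes": 2000, "ack": c_seq + 2000}  # server data + ack
seg4 = {"seq": c_seq + 2000, "bytes": 0,    "ack": s_seq + 2000}  # ack only, no data

print(seg3["ack"], seg4["ack"])     # 10001 17001
```

Each acknowledgment number is simply the number of the next byte the acknowledging side expects to receive.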
3. Connection Termination
Any of the two parties involved in exchanging data (client or server) can close the connection,
although it is usually initiated by the client. Most implementations today allow two options for
connection termination: three-way handshaking and four-way handshaking with a half-close
option.Three-Way Handshaking for connection termination as shown in below figure.
1. In a normal situation, the client TCP, after receiving a close command from the client process,
sends the first segment, a FIN segment in which the FIN flag is set. Note that a FIN segment can
include the last chunk of data sent by the client, or it can be just a control segment as shown in
below Figure. If it is only a control segment, it consumes only one sequence number.
2. The server TCP, after receiving the FIN segment, informs its process of the situation and sends
the second segment, a FIN +ACK segment, to confirm the receipt of the FIN segment from the
client and at the same time to announce the closing of the connection in the other direction. This
segment can also contain the last chunk of data from the server. If it does not carry data, it consumes
only one sequence number.
3. The client TCP sends the last segment, an ACK segment, to confirm the receipt of the FIN
segment from the TCP server. This segment contains the acknowledgment number, which is 1 plus
the sequence number received in the FIN segment from the server. This segment cannot carry data
and consumes no sequence numbers

5.6 Flow Control


TCP uses a sliding window, to handle flow control. The sliding window protocol used by TCP,
however, is something between the Go-Back-N and Selective Repeat sliding window. The sliding
window protocol in TCP looks like the Go-Back-N protocol because it does not use NAKs; it looks
like Selective Repeat because the receiver holds the out-of-order segments until the missing ones
arrive.
There are two big differences between this sliding window and the one we used at the data link
layer. First, the sliding window of TCP is byte-oriented; the one we discussed in the data link layer
is frame-oriented. Second, the TCP's sliding window is of variable size; the one we discussed in
the data link layer was of fixed size.
Below figure shows the sliding window in TCP. The window spans a portion of the buffer
containing bytes received from the process. The bytes inside the window are the bytes that can be
in transit; they can be sent without worrying about acknowledgment. The imaginary window has
two walls: one left and one right.
The window is opened, closed, or shrunk. These three activities, as we will see, are in the control
of the receiver (and depend on congestion in the network), not the sender. The sender must obey
the commands of the receiver in this matter.
Opening a window means moving the right wall to the right. This allows more new bytes in the
buffer that are eligible for sending. Closing the window means moving the left wall to the right.
This means that some bytes have been acknowledged and the sender need not worry about them
anymore. Shrinking the window means moving the right wall to the left. This is strongly
discouraged and not allowed in some implementations because it means revoking the eligibility of
some bytes for sending

The size of the window is the lesser of rwnd and cwnd.


• The source does not have to send a full window's worth of data.
• The window can be opened or closed by the receiver, but should not be shrunk.
• The destination can send an acknowledgment at any time as long as it does not result in a
shrinking window.
• The receiver can temporarily shut down the window; the sender, however, can always send a
segment of 1 byte after the window is shut down.
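The wall movements described above can be modelled in a few lines (the byte positions are arbitrary illustrative values):

```python
# Model the TCP sliding window as two walls over the byte stream.
left, right = 200, 260          # bytes 200..259 are currently eligible for sending

def close_window(left, n):
    """Receiver acknowledged n bytes: the left wall moves right."""
    return left + n

def open_window(right, n):
    """Receiver granted n more bytes of credit: the right wall moves right."""
    return right + n

left = close_window(left, 20)   # 20 bytes acknowledged
right = open_window(right, 30)  # window opened by 30 bytes
print(left, right, right - left)   # 220 290 70

# Shrinking (moving the right wall to the left) would revoke bytes that were
# already eligible for sending, which is why it is strongly discouraged.
```

Both movements are controlled by the receiver; the sender merely obeys the advertised window.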

5.7 Error Control


TCP provides reliability using error control. Error control includes mechanisms for detecting
corrupted segments, lost segments, out-of-order segments, and duplicated segments. Error control
also includes a mechanism for correcting errors after they are detected. Error detection and
correction in TCP is achieved through the use of three simple tools: checksum, acknowledgment,
and time-out.
1. Checksum
Each segment includes a checksum field which is used to check for a corrupted segment. If the
segment is corrupted, it is discarded by the destination TCP and is considered as lost. TCP uses a
16-bit checksum that is mandatory in every segment.
2. Acknowledgment
TCP uses acknowledgments to confirm the receipt of data segments. Control segments that carry
no data but consume a sequence number are also acknowledged. ACK segments are never
acknowledged.
3. Retransmission
The heart of the error control mechanism is the retransmission of segments. When a segment is
corrupted, lost, or delayed, it is retransmitted. In modern implementations, a segment is
retransmitted on two occasions: when a retransmission timer expires or when the sender receives
three duplicate ACKs.
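The two retransmission triggers can be sketched as a simple predicate (the RTO value is an assumed constant):

```python
DUP_ACK_THRESHOLD = 3
RTO = 2.0                       # retransmission timeout in seconds (assumed value)

def should_retransmit(send_time, now, dup_acks):
    """Retransmit on timer expiry OR on three duplicate ACKs (fast retransmit)."""
    return (now - send_time) >= RTO or dup_acks >= DUP_ACK_THRESHOLD

t0 = 100.0
print(should_retransmit(t0, 101.0, 0))   # False: timer running, no duplicate ACKs
print(should_retransmit(t0, 103.0, 0))   # True: retransmission timer expired
print(should_retransmit(t0, 101.0, 3))   # True: three duplicate ACKs received
```

Real implementations also adapt the RTO from measured round-trip times; the constant here only illustrates the decision logic.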

5.8 Congestion Control in TCP


The size of the sender window is determined by the following two factors
1. Receiver window size
2. Congestion window size

1. Receiver Window Size


• Sender should not send data greater than receiver window size.
• Otherwise, it leads to dropping the TCP segments which causes TCP Retransmission.
• So, sender should always send data less than or equal to receiver window size.
• Receiver dictates its window size to the sender through TCP Header.
2. Congestion Window
• Sender should not send data greater than congestion window size.
• Otherwise, it leads to dropping the TCP segments which causes TCP Retransmission.
• So, sender should always send data less than or equal to congestion window size.
• Different variants of TCP use different approaches to calculate the size of congestion
window.
• Congestion window is known only to the sender and is not sent over the links.
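Combining the two factors, the sender's usable window is the minimum of the receiver window and the congestion window; a one-line sketch with illustrative values:

```python
def sender_window(rwnd, cwnd):
    # The sender may have at most min(rwnd, cwnd) unacknowledged bytes in flight.
    return min(rwnd, cwnd)

print(sender_window(rwnd=65535, cwnd=4000))    # 4000: the network is the bottleneck
print(sender_window(rwnd=3000,  cwnd=16000))   # 3000: the receiver is the bottleneck
```

Whichever constraint is smaller governs the transmission rate at any moment.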
UNIT – 5 (PART -B)
APPLICATION LAYER

World Wide Web: HTTP, Electronic mail-Architecture- web based mail- email security- TELNET-local
versus remote Logging-Domain Name System: Name Space, DNS in Internet,- Resolution-Caching-
Resource Records- DNS messages- Registrars-security of DNS Name Servers, SNMP.

5.1 Introduction to World Wide Web

 The World Wide Web (WWW) is a collection of documents and other web
resources which are identified by URLs, interlinked by hypertext links, and can
be accessed and searched by browsers via the Internet.
 World Wide Web is also called the Web and it was invented by Tim Berners-Lee
in 1989.
 Website is a collection of web pages belonging to a particular organization.
 The pages can be retrieved and viewed by using browser.

Let us go through the scenario shown in above fig.


 The client wants to see some information that belongs to site 1.
 It sends a request through its browser to the server at site 1.
 The server at site 1 finds the document and sends it to the client.

5.1.1 Client (Browser)

 Web browser is a program, which is used to communicate with web server on


the Internet.
 Each browser consists of three parts: a controller, client protocol and interpreter.
 The controller receives input from the input device and uses the client programs to access
the documents.
 After accessing the document, the controller uses one of the interpreters to
display the document on the screen.

5.1.2 Server

 A computer which is available for the network resources and provides service
to the other computer on request is known as server.
 The web pages are stored at the server.
 Server accepts a TCP connection from a client browser.
 It gets the name of the file required.
 Server gets the stored file, returns the file to the client, and releases the TCP
connection.
5.1.3 Uniform Resource Locator (URL)

 The URL is a standard for specifying any kind of information on the Internet.
 The URL consists of four parts: protocol, host computer, port and path.
 The protocol is the client or server program which is used to retrieve the
document or file. The protocol can be ftp or http.
 The host is the name of computer on which the information is located.
 The URL can optionally contain the port number and it is separated from the
host name by a colon.
 Path is the pathname of the file where the file is stored.
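As a sketch, Python's urllib.parse can pull these four parts out of a URL (the URL itself is hypothetical):

```python
from urllib.parse import urlparse

url = "http://www.example.com:8080/docs/page.html"   # hypothetical URL
parts = urlparse(url)

print(parts.scheme)     # 'http'            -- the protocol
print(parts.hostname)   # 'www.example.com' -- the host computer
print(parts.port)       # 8080              -- the optional port, after the colon
print(parts.path)       # '/docs/page.html' -- the pathname of the file
```

Omitting the `:8080` would make `parts.port` evaluate to `None`, reflecting that the port is optional.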

5.2 Hyper Text Transfer Protocol (HTTP)


o HTTP stands for HyperText Transfer Protocol.
o It is a protocol used to access the data on the World Wide Web (www).
o The HTTP protocol can be used to transfer the data in the form of plain text,
hypertext, audio, video, and so on.
o This protocol is known as HyperText Transfer Protocol because of its
efficiency, which allows us to use it in a hypertext environment where there
are rapid jumps from one document to another.
o HTTP is similar to the FTP as it also transfers the files from one host to
another host. But, HTTP is simpler than FTP as HTTP uses only one
connection, i.e., no control connection to transfer the files.
o HTTP is used to carry the data in the form of MIME-like format.
o HTTP is similar to SMTP as the data is transferred between client and
server. The HTTP differs from the SMTP in the way the messages are sent
from the client to the server and from server to the client. SMTP messages
are stored and forwarded while HTTP messages are delivered immediately.
5.2.1 Features of HTTP
o Connectionless protocol: HTTP is a connectionless protocol. HTTP client
initiates a request and waits for a response from the server. When the
server receives the request, the server processes the request and sends
back the response to the HTTP client after which the client disconnects the
connection. The connection between client and server exist only during the
current request and response time only.
o Media independent: HTTP protocol is a media independent as data can
be sent as long as both the client and server know how to handle the data
content. It is required for both the client and server to specify the content
type in MIME-type header.
o Stateless: HTTP is a stateless protocol as both the client and server know
each other only during the current request. Due to this nature of the
protocol, both the client and server do not retain the information between
various requests of the web pages.

5.2.2 HTTP Transactions

The figure shows the HTTP transaction between client and server. The client
initiates a transaction by sending a request message to the server. The server
replies to the request message by sending a response message.
5.2.3 Messages

HTTP messages are of two types: request and response. Both the message types
follow the same message format.

Request Message: The request message is sent by the client and consists of a
request line, headers, and sometimes a body.

Response Message: The response message is sent by the server to the client and
consists of a status line, headers, and sometimes a body.

5.2.3.1 Request and Status Lines. The first line in a request message is called
a request line; the first line in the response message is called the status line.
There is one common field, as shown in below figure.
5.2.3.1.1 Request type. This field is used in the request message. In version
1.1 of HTTP, several request types are defined. The request type is categorized
into methods as defined in below table.

5.2.3.2 Header. The header exchanges additional information between the client
and the server. For example, the client can request that the document be sent in
a special format, or the server can send extra information about the document.
The header can consist of one or more header lines. Each header line has a header
name, a colon, a space, and a header value.

A header line belongs to one of four categories: general header, request header,
response header, and entity header.
General header The general header gives general information about the
message and can be present in both a request and a response. Below table lists
some general headers with their descriptions.
Request header The request header can be present only in a request
message. It specifies the client's configuration and the client's preferred
document format. See below Table for a list of some request headers and their
descriptions
Response header The response header can be present only in a response
message. It specifies the server's configuration and special information about the
request. See below Table for a list of some response headers with their
descriptions
Entity header The entity header gives information about the body of
the document. Although it is mostly present in response messages, some request
messages, such as POST or PUT methods, that contain a body also use this type
of header. See below Table for a list of some entity headers and their descriptions.
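Putting the request line and header lines together, a complete request message can be composed by hand (the host, path, and header values below are hypothetical):

```python
# Build an HTTP/1.1 request message: request line + header lines + blank line.
request_line = "GET /docs/page.html HTTP/1.1"
headers = {
    "Host": "www.example.com",      # request header: target host
    "Accept": "text/html",          # request header: preferred document format
    "Connection": "close",          # general header: close after the response
}

message = request_line + "\r\n"
for name, value in headers.items():
    message += f"{name}: {value}\r\n"   # header name, colon, space, header value
message += "\r\n"                        # blank line ends the header section

print(message)
```

Sending this text over a TCP connection to port 80 of the named host would yield a response message with a status line and its own headers.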

5.3 Electronic Mail-Architecture

 Electronic mail is often referred to as E-mail; it is a method used for
exchanging digital messages.
 Electronic mail is mainly designed for human use.
 It allows a message to includes text, image, audio as well as video.
 This service allows one message to be sent to one or more than one
recipient.
 The E-mail systems are mainly based on the store-and-forward model, where
the E-mail server system accepts, forwards, delivers, and stores messages on
behalf of users, who only need to connect to the E-mail infrastructure.
 The Person who sends the email is referred to as the Sender while the
person who receives an email is referred to as the Recipient.

Need of an Email

 By making use of Email, we can send any message at any time to anyone.
 We can send the same message to several peoples at the same time.
 It is a very fast and efficient way of transferring information.
 The email system is very fast as compared to the Postal system.
 Information can be easily forwarded to coworkers without retyping it.

Components of E-mail System

The basic Components of an Email system are as follows:

1. User Agent (UA)

It is a program that is mainly used to send and receive an email. It is also known
as an email reader. User-Agent is used to compose, send and receive emails.

 It is the first component of an Email.


 User-agent also handles the mailboxes.
 The User-agent mainly provides the services to the user in order to make
the sending and receiving process of message easier.

Given below are some services provided by the User-Agent:

1. Reading the Message
2. Replying to the Message
3. Composing the Message
4. Forwarding the Message
5. Handling the Message

2. Message Transfer Agent

The actual process of transferring the email is done through the Message Transfer
Agent(MTA).

 In order to send an Email, a system must have an MTA client.


 In order to receive an email, a system must have an MTA server.
 The protocol that is mainly used to define the MTA client and MTA server
on the internet is called SMTP(Simple Mail Transfer Protocol).
 The SMTP mainly defines how the commands and responses must be sent
back and forth

3. Message Access Agent

In the first and second stages of email delivery, we make use of SMTP.

 SMTP is basically a Push protocol.


 The third stage of the email delivery mainly needs the pull protocol, and at
this stage, the message access agent is used.
 The two protocols used to access messages are POP and IMAP4.

Architecture of Email

Now its time to take a look at the architecture of e-mail with the help of four scenarios:
First Scenario

When the sender and the receiver of an E-mail are on the same system, then
there is the need for only two user agents.

Second Scenario

In this scenario, the sender and receiver of an e-mail are users on two different
systems, and the message needs to be sent over the Internet. In this case, we
need to make use of User Agents and Message Transfer Agents (MTA).
Third Scenario

In this scenario, the sender is connected to the system via a point-to-point WAN;
it can be either a dial-up modem or a cable modem. The receiver is directly
connected to the system, as it was in the second scenario.

Also, in this case, the sender needs a User Agent (UA) in order to prepare the
message. After preparing the message, the sender sends it via a pair of MTAs
through a LAN or WAN.

Fourth Scenario

In this scenario, the receiver is also connected to his mail server with the help of
WAN or LAN.

When the message arrives, the receiver needs to retrieve it; thus there is a need
for another set of client/server agents. The recipient makes use of an
MAA (Message Access Agent) client in order to retrieve the message.

In this case, the client sends a request to the Message Access Agent (MAA)
server for the transfer of messages.
This scenario is most commonly used today.

Structure of Email

The message mainly consists of two parts:

1. Header

2. Body

Header

The header part of the email generally contains the sender's address as well as
the receiver's address and the subject of the message.

Body

The Body of the message contains the actual information that is meant for the
receiver.

Email Address

In order to deliver the email, the mail handling system must make use of an
addressing system with unique addresses.

The address consists of two parts:

 Local part
 Domain Name

Local Part

It is used to define the name of a special file, commonly called the user
mailbox; it is the place where all the mail received for the user is stored for
retrieval by the Message Access Agent.
Domain Name

The domain name is the second part of the address.

Both local part and domain name are separated with the help of @.
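Splitting an address at the @ sign recovers the two parts (the address itself is hypothetical):

```python
address = "alice@example.com"            # hypothetical e-mail address
local_part, domain_name = address.split("@")

print(local_part)    # 'alice'       -- names the user's mailbox
print(domain_name)   # 'example.com' -- the destination mail domain
```

The mail system routes on the domain name first, then delivers into the mailbox named by the local part.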

5.4 Web-Based Mail

Web-Based Mail E-mail is such a common application that some websites today
provide this service to anyone who accesses the site. Two common sites are
Hotmail and Yahoo. The idea is very simple. Mail transfer from Alice's browser to
her mail server is done through HTTP. The transfer of the message from the
sending mail server to the receiving mail server is still through SMTP.

Finally, the message from the receiving server (the Web server) to Bob's browser
is done through HTTP. The last phase is very interesting. Instead of POP3 or
IMAP4, HTTP is normally used. When Bob needs to retrieve his e-mails, he sends
a message to the website (Hotmail, for example).

The website sends a form to be filled in by Bob, which includes the log-in name
and the password. If the log-in name and password match, the e-mail is
transferred from the Web server to Bob's browser in HTML format.

5.5 TELNET

TELNET stands for Teletype Network. It is a protocol that enables one
computer to connect to another computer over a TCP/IP network. It is used as
the standard TCP/IP protocol for virtual terminal service, as proposed by ISO.
The computer that starts the connection is known as the local computer.
The computer being connected to, i.e. the one that accepts the connection, is
known as the remote computer.
During a telnet session, whatever is performed on the remote computer is
displayed on the local computer. Telnet operates on the client/server
principle: the local computer runs a telnet client program and the remote
computer runs a telnet server program.

5.5.1 Logging
The logging process can be further categorized into two parts:

1. Local Login
2. Remote Login

1. Local Login: Whenever a user logs into its local system, it is known as local
login.
The Procedure of Local Login
 Keystrokes are accepted by the terminal driver when the user types at the
terminal.
 Terminal Driver passes these characters to OS.
 Now, OS validates the combination of characters and opens the required
application.
2. Remote Login: Remote Login is a process in which a user can log in to a
remote site, i.e. a remote computer, and use the services available on that
computer. With the help of remote login, the user runs programs on the remote
computer, and the results of the processing are transferred back to the local
computer.
5.5.2 Network Virtual Terminal(NVT)
NVT (Network Virtual Terminal) is a virtual terminal in TELNET that has a
fundamental structure that is shared by many different types of real terminals.
NVT (Network Virtual Terminal) was created to make communication viable
between different types of terminals with different operating systems.
5.5.3 TELNET Commands
Commands of Telnet are identified by a prefix character, Interpret As Command
(IAC) with code 255. IAC is followed by command and option codes.

The basic format of the command is as shown in the following figure:

Following are some of the important TELNET commands:

Character   Decimal   Binary     Meaning
WILL        251       11111011   1. Offering to enable.
                                 2. Accepting a request to enable.
WON'T       252       11111100   1. Rejecting a request to enable.
                                 2. Offering to disable.
                                 3. Accepting a request to disable.
DO          253       11111101   1. Approving a request to enable.
                                 2. Requesting to enable.
DON'T       254       11111110   1. Disapproving a request to enable.
                                 2. Approving an offer to disable.
                                 3. Requesting to disable.
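The command format above (IAC, then a command code, then an option code) can be sketched as three raw bytes. The ECHO option code used here comes from RFC 857; the negotiation shown is an illustrative example, not a full TELNET implementation.

```python
# TELNET option negotiation: IAC (255) prefixes every command,
# followed by the command code and the option code being negotiated.
IAC, WILL, WONT, DO, DONT = 255, 251, 252, 253, 254
ECHO = 1  # option code 1 = ECHO (RFC 857)

def negotiate(command: int, option: int) -> bytes:
    """Build a three-byte TELNET negotiation sequence."""
    return bytes([IAC, command, option])

print(negotiate(DO, ECHO).hex())    # → 'fffd01' (IAC DO ECHO)
print(negotiate(WILL, ECHO).hex())  # → 'fffb01' (IAC WILL ECHO)
```

A telnet client would write such a sequence on the TCP connection to ask the server to enable or disable an option; the server answers with its own IAC sequence from the table above.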

Advantages of Telnet
1. It provides remote access to another computer system.
2. Telnet allows the user more access with fewer problems in data
transmission.
3. Telnet saves a lot of time.
4. Older systems can be connected to newer systems with telnet, even when
they run different operating systems.
Disadvantages of Telnet
1. Because it is somewhat complex, it can be difficult for beginners to understand.
2. Data is sent in the form of plain text, so it is not secure.
3. Some capabilities are disabled when the remote and local devices are not
properly interlinked.

5.6 Domain Name System

5.6.1 What is DNS?

DNS stands for Domain Name System. DNS is a directory service that provides a
mapping between the name of a host on the network and its numerical address.

The domain name system (DNS) is a naming database in which internet domain
names are located and translated into Internet Protocol (IP) addresses. The
domain name system maps the names people use to locate a website to the IP
address that a computer uses to locate that website. This database is also
called a name space.

For example, if someone types "example.com" into a web browser, a server
behind the scenes maps that name to the corresponding IP address. An IP address
is similar in structure to 203.0.113.72.

5.6.2 How DNS works

DNS servers convert URLs and domain names into IP addresses that computers
can understand and use. They translate what a user types into a browser into
something the machine can use to find a webpage. This process of translation
and lookup is called DNS resolution.

The basic process of a DNS resolution follows these steps:

1. The user enters a web address or domain name into a browser.


2. The browser sends a message, called a recursive DNS query, to the network
to find out which IP or network address the domain corresponds to.

3. The query goes to a recursive DNS server, which is also called a recursive
resolver, and is usually managed by the internet service provider (ISP). If the
recursive resolver has the address, it will return the address to the user, and
the webpage will load.

4. If the recursive DNS server does not have an answer, it will query a series of
other servers in the following order: DNS root name servers, top-level domain
(TLD) name servers and authoritative name servers.

5. The three server types work together and continue redirecting until they
retrieve a DNS record that contains the queried IP address. It sends this
information to the recursive DNS server, and the webpage the user is looking
for loads. DNS root name servers and TLD servers primarily redirect queries
and rarely provide the resolution themselves.

6. The recursive server stores, or caches, the A record for the domain name,
which contains the IP address. The next time it receives a request for that
domain name, it can respond directly to the user instead of querying other
servers.

7. If the query reaches the authoritative server and it cannot find the
information, it returns an error message.

The entire process of querying the various servers takes a fraction of a second
and is usually imperceptible to the user.
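From an application's point of view, this whole chain is hidden behind a single stub-resolver call. A minimal sketch in Python (the name `localhost` is used here because it resolves via the local hosts file, without needing network access):

```python
import socket

# The stub resolver hands the query to the operating system, which
# forwards it to its configured recursive resolver; that resolver
# performs the root/TLD/authoritative chain described above and
# returns the A record's IP address.
ip = socket.gethostbyname("localhost")
print(ip)  # typically '127.0.0.1'
```

Calling `socket.gethostbyname("example.com")` instead would trigger the full recursive resolution (or hit the resolver's cache, per step 6 above).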

5.6.3 NAME SPACE

A name space that maps each address to a unique name can be organized in two
ways:
 Flat
 Hierarchical

1. Flat Name Space

In a flat name space, a name is assigned to an address. A name in this space is
a sequence of characters without structure. The names may or may not have a
common section; if they do, it has no meaning. The main disadvantage of a flat
name space is that it cannot be used in a large system such as the Internet,
because it must be centrally controlled to avoid ambiguity and duplication.

2. Hierarchical Name Space

In a hierarchical name space, each name is made of several parts. The first part
can define the nature of the organization, the second part can define the name
of an organization, and the third part can define departments in the organization,
and so on. In this case, the authority to assign and control the name spaces can
be decentralized.

A central authority can assign the part of the name that defines the nature of the
organization and the name of the organization. The responsibility of the rest of
the name can be given to the organization itself. The organization can add suffixes
(or prefixes) to the name to define its host or resources.

5.6.4 DOMAIN NAME SPACE

To have a hierarchical name space, a domain name space was designed. In this
design the names are defined in an inverted-tree structure with the root at the
top. The tree can have only 128 levels: level 0 (root) to level 127

Label
Each node in the tree has a label, which is a string with a maximum of 63
characters. The root label is a null string (empty string). DNS requires that
children of a node (nodes that branch from the same node) have different labels,
which guarantees the uniqueness of the domain names.

Domain Name: Each node in the tree has a domain name. A full domain name
is a sequence of labels separated by dots (.). The domain names are always read
from the node up to the root. The last label is the label of the root (null). This
means that a full domain name always ends in a null label, which means the last
character is a dot because the null string is nothing.

Fully Qualified Domain Name: If a label is terminated by a null string, it is
called a fully qualified domain name (FQDN). An FQDN is a domain name that
contains the full name of a host. It contains all labels, from the most specific
to the most general, that uniquely define the name of the host.
Partially Qualified Domain Name: If a label is not terminated by a null string,
it is called a partially qualified domain name (PQDN). A PQDN starts from a node,
but it does not reach the root. It is used when the name to be resolved belongs
to the same site as the client. Here the resolver can supply the missing part,
called the suffix, to create an FQDN.
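The FQDN/PQDN distinction comes down to whether the name ends in the root's null label, i.e. a trailing dot. A small sketch, using the hypothetical names `challenger` and the suffix `atc.fhda.edu.`:

```python
def is_fqdn(name: str) -> bool:
    """A fully qualified domain name ends with the root's null label,
    written as a trailing dot; otherwise it is partially qualified."""
    return name.endswith(".")

def qualify(pqdn: str, suffix: str) -> str:
    """Let the resolver supply the missing suffix to turn a PQDN
    into an FQDN (names here are illustrative)."""
    return pqdn.rstrip(".") + "." + suffix

print(is_fqdn("challenger.atc.fhda.edu."))     # → True
print(is_fqdn("challenger"))                   # → False
print(qualify("challenger", "atc.fhda.edu."))  # → 'challenger.atc.fhda.edu.'
```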
Domain
A domain is a subtree of the domain name space. The name of the domain is the
domain name of the node at the top of the subtree. Figure 25.5 shows some
domains. Note that a domain may itself be divided into domains (or subdomains,
as they are sometimes called).

5.6.5 DNS IN THE INTERNET

DNS is a protocol that can be used on different platforms. In the Internet, the
domain name space (tree) is divided into three different sections: generic
domains, country domains, and the inverse domain.

Generic Domains: The generic domains define registered hosts according to
their generic behavior. Each node in the tree defines a domain, which is an index
to the domain name space database.

Looking at the tree, we see that the first level in the generic domains section
allows 14 possible labels. These labels describe the organization types as listed
in the table.
Country Domains: The country domains section uses two-character country
abbreviations (e.g., us for United States). Second labels can be organizational,
or they can be more specific, national designations. The United States, for
example, uses state abbreviations as a subdivision of us (e.g., ca.us). The
figure below shows the country domains section. The address anza.cup.ca.us can
be translated to De Anza College in Cupertino, California, in the United States.

Inverse Domain: The inverse domain is used to map an address to a name. The
server asks its resolver to send a query to the DNS server to map an address to
a name to determine if the client is on the authorized list. This type of query
is called an inverse or pointer (PTR) query. To handle a pointer query, the
inverse domain is added to the domain name space with the first-level node
called arpa (for historical reasons). The second level is also one single node
named in-addr (for inverse address). The rest of the domain defines IP
addresses.

The servers that handle the inverse domain are also hierarchical. This means the
netid part of the address should be at a higher level than the subnetid part,
and the subnetid part higher than the hostid part. In this way, a server serving
the whole site is at a higher level than the servers serving each subnet. This
configuration makes the domain look inverted when compared to a generic or
country domain. To follow the convention of reading the domain labels from the
bottom to the top, an IP address such as 132.34.45.121 (a class B address with
netid 132.34) is read as 121.45.34.132.in-addr.arpa.
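Building the pointer-query name is just a matter of reversing the octets and appending the in-addr.arpa suffix. A small sketch reproducing the worked example:

```python
def ptr_name(ip: str) -> str:
    """Build the inverse-domain name for an IPv4 address: the octets
    are reversed and the in-addr.arpa suffix is appended, so the netid
    part of the address ends up at the higher level of the tree."""
    return ".".join(reversed(ip.split("."))) + ".in-addr.arpa"

print(ptr_name("132.34.45.121"))  # → '121.45.34.132.in-addr.arpa'
```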
