CN Units 1-5
UNIT I:
Introduction: Network Types, LAN, MAN, WAN, Network Topologies
Reference models- The OSI Reference Model- the TCP/IP Reference Model - A
Comparison of the OSI and TCP/IP Reference Models, OSI Vs TCP/IP, Lack of OSI
models success, Internet History.
Physical Layer –Introduction to Guided Media- Twisted-pair cable, Coaxial cable and Fiber
optic cable and unguided media: Wireless-Radio waves, microwaves, infrared.
peer-to-peer communication
person-to-person communication
electronic commerce
entertainment (e.g., game playing)
3.Mobile Users
Text messaging or texting
Smart phones,
GPS (Global Positioning System)
m-commerce
NFC (Near Field Communication)
4.Social Issues
With the good comes the bad, as this new-found freedom brings with it many
unsolved social, political, and ethical issues.
Network Definition – A group of computers that are connected to each other and follow common protocols for the purpose of sharing information and communicating through the networking nodes is called a Computer Network.
A network may be small, including just a few systems, or as large as one may want. The nodes may further be classified into various types. These include:
1. Personal Computers
2. Servers
3. Networking Hardware
4. General Hosts
Networking can be classified into three types:
1. Types of Computer Networks
2. Topology
3. Interpreters
2.PAN (Personal Area Network) –
The smallest computer network
Devices may be connected through Bluetooth or other infrared-enabled devices
It has a connectivity range of up to 10 metres
It covers an area of up to about 30 feet
Personal devices belonging to a single person can be connected to each other using a PAN
3.MAN (Metropolitan Area Network) –
A network that spans a city, for example, a cable TV network
It can be in the form of Ethernet, ATM, Token Ring and FDDI
It has a higher range than a LAN
This type of network can be used to connect citizens with various organisations
4.WAN (Wide Area Network) –
A network which covers a country or a larger region
Telephone lines are also connected through a WAN
The Internet is the biggest WAN in the world
Mostly used by government organisations to manage data and information
Topology:
Topology defines the structure of the network of how all the components are interconnected to
each other. There are two types of topology: physical and logical topology.
Physical topology is the geometric representation of all the nodes in a network.
Network Topologies
Given below are the eight types of Network Topologies:
1. Point to Point Topology – Point to Point topology is the simplest topology that connects
two nodes directly together with a common link.
2. Bus Topology – A bus topology is such that there is a single line to which all nodes are
connected and the nodes connect only to the bus
3. Mesh Topology – This type of topology contains at least two nodes with two or more
paths between them
4. Ring Topology – In this topology every node has exactly two branches connected to it.
The ring is broken and cannot work if one of the nodes on the ring fails
5. Star Topology – In this network topology, the peripheral nodes are connected to a
central node, which rebroadcasts all the transmissions received from any peripheral node
to all peripheral nodes on the network, including the originating node
6. Tree Topology – In this type of topology nodes are connected in the form of a tree. The
function of the central node in this topology may be distributed
7. Line Topology – In this topology all the nodes are connected in a straight line
8. Hybrid Topology – When two or more types of topologies are combined together, they form a
Hybrid topology
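The definitions above imply simple link-count formulas, which the following sketch (with a hypothetical function name) summarizes:

```python
# Hypothetical helper: number of links needed to connect n nodes in each
# basic topology. The formulas follow from the definitions above (a ring
# gives every node two neighbours; a star links each node to the hub).

def links_needed(topology: str, n: int) -> int:
    """Return the number of links/cables for n >= 2 nodes."""
    if n < 2:
        raise ValueError("need at least two nodes")
    if topology == "bus":        # one shared backbone cable
        return 1
    if topology == "ring":       # each node has exactly two neighbours
        return n
    if topology == "star":       # every node links to the central hub
        return n - 1
    if topology == "line":       # nodes chained in a straight line
        return n - 1
    if topology == "full-mesh":  # every pair of nodes is connected
        return n * (n - 1) // 2
    raise ValueError(f"unknown topology: {topology}")

print(links_needed("ring", 5))       # 5
print(links_needed("full-mesh", 5))  # 10
```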
1.Bus Topology
1. The bus topology is designed in such a way that all the stations are connected through a
single cable known as a backbone cable.
2. Each node is either connected to the backbone cable by drop cable or directly connected
to the backbone cable.
3. When a node wants to send a message over the network, it puts a message over the
network. All the stations available in the network will receive the message whether it has
been addressed or not.
4. The bus topology is mainly used in 802.3 (Ethernet) and 802.4 (Token Bus) standard networks.
5. The configuration of a bus topology is quite simple compared to other topologies.
6. The backbone cable is considered as a "single lane" through which the message is
broadcast to all the stations.
Advantages of Bus topology:
1. Low-cost cable: In bus topology, nodes are directly connected to the cable without
passing through a hub. Therefore, the initial cost of installation is low.
2. Moderate data speeds: Coaxial or twisted-pair cables are mainly used in bus-based
networks that support up to 10 Mbps.
3. Familiar technology: Bus topology is a familiar technology as the installation and
troubleshooting techniques are well known, and hardware components are easily
available.
4. Limited failure: A failure in one node will not have any effect on other nodes.
Disadvantages of Bus topology:
1. Extensive cabling: A bus topology is quite simple, but it still requires a lot of cabling.
2. Difficult troubleshooting: It requires specialized test equipment to determine the cable
faults. If any fault occurs in the cable, then it would disrupt the communication for all the
nodes.
3. Signal interference: If two nodes send the messages simultaneously, then the signals of
both the nodes collide with each other.
4. Reconfiguration difficult: Adding new devices to the network would slow down the
network.
5. Attenuation: Attenuation is a loss of signal strength that leads to communication issues.
Repeaters are used to regenerate the signal.
2.Ring Topology:
7. The most common access method of the ring topology is token passing.
Token passing: It is a network access method in which token is passed from one node to
another node.
Token: It is a frame that circulates around the network.
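Token passing can be illustrated with a toy simulation in which the token circulates around the ring and only the current holder may transmit (a simplification; real Token Ring adds priorities, monitors, and timers):

```python
# Toy token-passing simulation. The token moves node to node around the
# ring; a node transmits only while it holds the token.

from collections import deque

def token_ring(nodes, messages, rounds=10):
    """messages: dict mapping node -> payload it wants to send.
    Returns the order in which messages were transmitted."""
    ring = deque(nodes)
    sent = []
    for _ in range(rounds * len(nodes)):
        holder = ring[0]                # node currently holding the token
        if holder in messages:          # the holder transmits its frame
            sent.append((holder, messages.pop(holder)))
        if not messages:                # nothing left to send: stop
            break
        ring.rotate(-1)                 # pass the token to the next node
    return sent

order = token_ring(["A", "B", "C", "D"], {"C": "hello", "A": "hi"})
print(order)  # [('A', 'hi'), ('C', 'hello')]
```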
Advantages of Ring topology:
1. Network Management: Faulty devices can be removed from the network without
bringing the network down.
2. Product availability: Many hardware and software tools for network operation and
monitoring are available.
3. Cost: Twisted pair cabling is inexpensive and easily available. Therefore, the installation
cost is very low.
4. Reliable: It is a more reliable network because the communication system is not
dependent on the single host computer.
Disadvantages of Ring topology:
1. Difficult troubleshooting: It requires specialized test equipment to determine the cable
faults. If any fault occurs in the cable, then it would disrupt the communication for all the
nodes.
2. Failure: The breakdown in one station leads to the failure of the overall network.
3. Reconfiguration difficult: Adding new devices to the network would slow down the
network.
4. Delay: Communication delay is directly proportional to the number of nodes. Adding
new devices increases the communication delay.
3.Star Topology
1. Star topology is an arrangement of the network in which every node is connected to the
central hub, switch or a central computer.
2. The central computer is known as a server, and the peripheral devices attached to the
server are known as clients.
3. Coaxial cable or twisted-pair (RJ-45) cables are used to connect the computers.
4. Hubs or Switches are mainly used as connection devices in a physical star topology.
5. Star topology is the most popular topology in network implementation.
Advantages of Star topology:
1. Efficient troubleshooting: Troubleshooting is quite efficient in a star topology as
compared to bus topology. In a bus topology, the manager has to inspect the kilometers
of cable. In a star topology, all the stations are connected to the centralized network.
Therefore, the network administrator has to go to the single station to troubleshoot the
problem.
2. Network control: Complex network control features can be easily implemented in the
star topology. Any changes made in the star topology are automatically accommodated.
3. Limited failure: As each station is connected to the central hub with its own cable,
therefore failure in one cable will not affect the entire network.
4. Familiar technology: Star topology is a familiar technology as its tools are cost-
effective.
5. Easily expandable: It is easily expandable as new stations can be added to the open ports
on the hub.
6. Cost effective: Star topology networks are cost-effective as it uses inexpensive coaxial
cable.
7. High data speeds: It supports a bandwidth of approximately 100 Mbps. Ethernet 100BaseT is
one of the most popular star topology networks.
Disadvantages of Star topology
1. A Central point of failure: If the central hub or switch goes down, then all the
connected nodes will not be able to communicate with each other.
2. Cable: Sometimes cable routing becomes difficult when a significant amount of routing
is required.
4.Tree topology
1. Tree topology combines the characteristics of bus topology and star topology.
2. A tree topology is a type of structure in which all the computers are connected with each
other in a hierarchical fashion.
3. The top-most node in tree topology is known as a root node, and all other nodes are the
descendants of the root node.
4. Only one path exists between any two nodes for data transmission. Thus, it forms
a parent-child hierarchy.
Advantages of Tree topology
1. Support for broadband transmission: Tree topology is mainly used to provide
broadband transmission, i.e., signals are sent over long distances without being
attenuated.
2. Easily expandable: We can add the new device to the existing network. Therefore, we
can say that tree topology is easily expandable.
3. Easily manageable: In tree topology, the whole network is divided into segments known
as star networks which can be easily managed and maintained.
4. Error detection: Error detection and error correction are very easy in a tree topology.
5. Limited failure: The breakdown in one station does not affect the entire network.
6. Point-to-point wiring: It has point-to-point wiring for individual segments.
Disadvantages of Tree topology
1. Difficult troubleshooting: If any fault occurs in the node, then it becomes difficult to
troubleshoot the problem.
2. High cost: Devices required for broadband transmission are very costly.
3. Failure: A tree topology mainly relies on main bus cable and failure in main bus cable
will damage the overall network.
4. Reconfiguration difficult: If new devices are added, then it becomes difficult to
reconfigure.
5.Mesh topology:
5. Mesh topology is mainly used for WAN implementations where communication failures
are a critical concern.
6. Mesh topology is mainly used for wireless networks.
7. The number of cables needed in a mesh topology is given by the formula:
Number of cables = n*(n-1)/2, where n is the number of nodes in the network.
8. Mesh topology is divided into two categories: fully connected mesh topology and
partially connected mesh topology.
1.Full Mesh Topology: In a full mesh topology, each computer is connected to all the computers
available in the network.
2.Partial Mesh Topology: In a partial mesh topology, not all but certain computers are
connected to those computers with which they communicate frequently.
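The cable-count formula above translates directly into code:

```python
# Number of cables in a full mesh of n nodes: every pair of nodes gets
# its own cable, so the count is n*(n-1)/2.

def mesh_cables(n: int) -> int:
    """Cables needed for a fully connected mesh of n nodes."""
    return n * (n - 1) // 2

for n in (2, 3, 4, 5):
    print(n, "nodes ->", mesh_cables(n), "cables")
# 2 nodes -> 1, 3 nodes -> 3, 4 nodes -> 6, 5 nodes -> 10
```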
Advantages of Mesh topology:
1. Reliable: Mesh topology networks are very reliable: a breakdown of any one link will not
affect the communication between the connected computers.
2. Fast Communication: Communication is very fast between the nodes.
3. Easier Reconfiguration: Adding new devices would not disrupt the communication
between other devices.
Disadvantages of Mesh topology
1. Cost: A mesh topology contains a large number of connected devices, such as routers,
and more transmission media than other topologies.
2. Management: Mesh topology networks are very large and very difficult to maintain and
manage. If the network is not monitored carefully, then the communication link failure
goes undetected.
3. Efficiency: In this topology, the number of redundant connections is high, which reduces
the efficiency of the network.
Hybrid Topology:
1. The combination of various different topologies is known as Hybrid topology.
2. A Hybrid topology is a connection between different links and nodes to transfer the data.
3. When two or more different topologies are combined together, the result is termed a
Hybrid topology; connecting similar topologies with each other does not result in a Hybrid
topology. For example, if a ring topology exists in one branch of ICICI Bank and a bus
topology in another branch, connecting these two topologies will result in a
Hybrid topology.
Advantages of Hybrid Topology:
1. Reliable: A fault occurring in any part of the network will not affect the functioning of the
rest of the network.
2. Scalable: Size of the network can be easily expanded by adding new devices without
affecting the functionality of the existing network.
3. Flexible: This topology is very flexible as it can be designed according to the
requirements of the organization.
4. Effective: Hybrid topology is very effective as it can be designed in such a way that the
strength of the network is maximized and weakness of the network is minimized.
Disadvantages of Hybrid topology:
1. Complex design: The major drawback of the Hybrid topology is the design of the Hybrid
network. It is very difficult to design the architecture of the Hybrid network.
2. Costly Hub: The Hubs used in the Hybrid topology are very expensive as these hubs are
different from usual Hubs used in other topologies.
3. Costly infrastructure: The infrastructure cost is very high as a hybrid network requires a
lot of cabling, network devices, etc.
3 REFERENCE MODELS:
Computer Network Models
A communication subsystem is a complex piece of hardware and software. Early attempts to
implement the software for such subsystems were based on a single, complex, unstructured
program, which proved very difficult to test and modify; this motivated layered reference models.
Reference models
The reference models give a conceptual framework that standardizes communication
between two heterogeneous networks.
OSI Model
The seven OSI layers can be divided into two groups:
1. Upper layers
2. Lower layers
The upper layers of the OSI model mainly deal with application-related issues, and
they are implemented only in software. The application layer is closest to the end
user. Both the end user and the application layer interact with the software
applications. An upper layer refers to the layer just above another layer.
The lower layers of the OSI model deal with data-transport issues. The data link layer
and the physical layer are implemented in hardware and software. The physical layer is
the lowest layer of the OSI model and is closest to the physical medium. The physical
layer is mainly responsible for placing the information on the physical medium.
There are seven OSI layers, and each layer has different functions. A list of the seven layers is
given below:
1. Physical Layer
2. Data-Link Layer
3. Network Layer
4. Transport Layer
5. Session Layer
6. Presentation Layer
7. Application Layer
1. Physical Layer
The lowest layer of the OSI reference model is the physical layer. It is responsible for the actual
physical connection between the devices. The physical layer contains information in the form
of bits. It is responsible for transmitting individual bits from one node to the next. When
receiving data, this layer will get the signal received and convert it into 0s and 1s and send
them to the Data Link layer, which will put the frame back together.
Note: 1. Hub, Repeater, Modem, and Cables are Physical Layer devices.
2. Data-Link Layer (DLL)
The data link layer is responsible for the node-to-node delivery of the
message. The main function of this layer is to make sure data transfer is error-free from one
node to another, over the physical layer. When a packet arrives in a network, it is the
responsibility of the DLL to transmit it to the host using its MAC address.
Framing: Framing is a function of the data link layer. It provides a way for a sender to
transmit a set of bits that are meaningful to the receiver. This can be accomplished by
attaching special bit patterns to the beginning and end of the frame.
Physical addressing: After creating frames, the Data link layer adds physical addresses
(MAC addresses) of the sender and/or receiver in the header of each frame.
Error control: The data link layer provides the mechanism of error control in which it
detects and retransmits damaged or lost frames.
Flow Control: The data rate must be constant on both sides else the data may get
corrupted thus, flow control coordinates the amount of data that can be sent before
receiving an acknowledgment.
Access control: When a single communication channel is shared by multiple devices, the
MAC sub-layer of the data link layer helps to determine which device has control over
the channel at a given time.
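A minimal sketch of two of these data-link functions, framing and error control, in Python. The flag/escape bytes and the one-byte additive checksum are simplifications chosen for illustration; real protocols such as HDLC or Ethernet use standardized formats and stronger CRCs.

```python
# Framing with special flag bytes plus byte stuffing (so the flag can
# appear inside the payload), and a simple additive checksum so a
# damaged frame can be detected on receipt.

FLAG, ESC = 0x7E, 0x7D

def frame(payload: bytes) -> bytes:
    """Append a 1-byte checksum, escape special bytes, add FLAG delimiters."""
    body = payload + bytes([sum(payload) % 256])
    stuffed = bytearray()
    for b in body:
        if b in (FLAG, ESC):
            stuffed.append(ESC)        # byte stuffing: escape special bytes
        stuffed.append(b)
    return bytes([FLAG]) + bytes(stuffed) + bytes([FLAG])

def deframe(data: bytes) -> bytes:
    """Undo frame(); raise if the checksum shows the frame was damaged."""
    raw, body, i = data[1:-1], bytearray(), 0
    while i < len(raw):
        if raw[i] == ESC:
            i += 1                     # the next byte is a literal
        body.append(raw[i])
        i += 1
    payload, checksum = bytes(body[:-1]), body[-1]
    if sum(payload) % 256 != checksum:
        raise ValueError("damaged frame: checksum mismatch")
    return payload

print(deframe(frame(b"hello")))  # b'hello'
```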
Note: 1. Packet in the Data Link layer is referred to as Frame.
2. Data Link layer is handled by the NIC (Network Interface Card) and device drivers of host
machines.
3. Network Layer
The network layer works for the transmission of data from one host to another located in
different networks. The sender's and receiver's IP addresses are placed in the header by the
network layer.
Routing: The network layer protocols determine which route is suitable from source to
destination. This function of the network layer is known as routing.
Logical Addressing: To identify each device on Internetwork uniquely, the network layer
defines an addressing scheme. The sender & receiver’s IP addresses are placed in the
header by the network layer. Such an address distinguishes each device uniquely and
universally.
4. Transport Layer
The transport layer provides services to the application layer and takes services from the
network layer. It also provides the acknowledgment of successful data transmission
and re-transmits the data if an error is found.
At the sender’s side: The transport layer receives the formatted data from the upper layers,
performs Segmentation, and also implements Flow & Error control to ensure proper data
transmission. It also adds Source and Destination port numbers in its header and forwards the
segmented data to the Network Layer.
Note: The sender needs to know the port number associated with the receiver’s application.
Generally, this destination port number is configured, either by default or manually. For
example, when a web application requests a web server, it typically uses port number 80,
because this is the default port assigned to web applications. Many applications have default
ports assigned.
At the receiver’s side: Transport Layer reads the port number from its header and forwards the
Data which it has received to the respective application. It also performs sequencing and
reassembling of the segmented data.
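Port-based delivery at the receiver's side can be modeled as a lookup from destination port to the listening application. The ports and handler behaviour below are purely illustrative:

```python
# Toy model of transport-layer demultiplexing: the destination port in
# the segment header selects which application receives the payload.
# Port numbers and handlers here are made up for illustration.

handlers = {
    80: lambda data: f"web server got: {data}",
    25: lambda data: f"mail server got: {data}",
}

def deliver(segment: dict) -> str:
    """Read the destination port from the segment header and hand the
    payload to the application listening on that port."""
    port = segment["dst_port"]
    if port not in handlers:
        raise ConnectionRefusedError(f"no application listening on port {port}")
    return handlers[port](segment["payload"])

print(deliver({"dst_port": 80, "payload": "GET /"}))  # web server got: GET /
```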
The transport layer provides two types of services:
1. Connection-Oriented Service
2. Connectionless Service
1. Connection-oriented service: It is a three-phase process that includes Connection
Establishment, Data Transfer, and Termination/Disconnection. In this type of transmission,
the receiving device sends an acknowledgment back to the source after a packet or group of
packets is received. This type of transmission is reliable and secure.
2. Connectionless service: It is a one-phase process and includes Data Transfer. In this type of
transmission, the receiver does not acknowledge receipt of a packet. This approach allows for
much faster communication between devices. Connection-oriented service is more reliable
than connectionless Service.
5. Session Layer
This layer is responsible for the establishment of connection, maintenance of sessions, and
authentication, and also ensures security.
Session establishment, maintenance, and termination: The layer allows the two
processes to establish, use and terminate a connection.
Synchronization: This layer allows a process to add checkpoints that are considered
synchronization points in the data. These synchronization points help to identify the
error so that the data is re-synchronized properly, and ends of the messages are not cut
prematurely and data loss is avoided.
Dialog Controller: The session layer allows two systems to start communication with
each other in half-duplex or full-duplex.
Note: 1. The Session, Presentation, and Application layers (the top three layers) are integrated
as a single layer in the TCP/IP model, the "Application Layer".
2. Implementation of these 3 layers is done by the network application itself. These are also
known as Upper Layers or Software Layers.
6. Presentation Layer
The presentation layer is also called the Translation layer. The data from the application layer
is extracted here and manipulated as per the required format to transmit over the network.
7. Application Layer
At the very top of the OSI Reference Model stack of layers, we find the Application layer which
is implemented by the network applications. These applications produce the data, which has to
be transferred over the network. This layer also serves as a window for the application services
to access the network and for displaying the received information to the user.
OSI model acts as a reference model and is not implemented on the Internet because of its
late invention. The current model being used is the TCP/IP model.
OSI Model in a Nutshell

Layer No | Layer Name | Responsibility | Data Unit | Device
7 | Application Layer | Helps in identifying the client and synchronizing communication. | Message | –
6 | Presentation Layer | Data from the application layer is extracted and manipulated into the required format for transmission. | Message | –
5 | Session Layer | Establishes connection and maintenance; ensures authentication and security. | Message | Gateway
4 | Transport Layer | Takes service from the Network Layer and provides it to the Application Layer. | Segment | Firewall
3 | Network Layer | Transmission of data from one host to another, located in different networks. | Packet | Router
2 | Data Link Layer | Node-to-node delivery of the message. | Frame | Switch, Bridge
1 | Physical Layer | Establishing physical connections between devices. | Bits | Hub, Repeater, Modem, Cables
The TCP/IP Reference Model
The TCP/IP model consists of four layers:
1. Network Interface (Link) Layer
2. Internet Layer
3. Transport Layer
4. Application Layer
1. Link Layer - The link layer is the lowest layer in the TCP/IP model. It is compared with the
combination of the data link layer and the physical layer of the OSI Model. They are similar but
not identical. This layer is the group of communication protocols that acts as a link to which the
host is connected physically. It is mainly concerned with the physical transmission of the data.
Protocols used at this layer include:
Ethernet
FDDI
Token Ring
Frame relay
2. Internet Layer - The Internet layer is compared to the network layer of the OSI model. The
main responsibility of the network layer is to transport data packets from the source to the
destination host across the entire network. The transmission done by the internet layer is less
reliable.
IP - It is the primary protocol in the internet layer. It stands for Internet Protocol. It is
responsible for the transmission of data packets from the source to the destination host.
It is implemented in two versions, IPv4 and IPv6.
ARP - It stands for Address Resolution Protocol. Its main responsibility is to find the
physical address of the host using the internet address or IP address.
ICMP - It is used for providing messages about errors to the host.
IGMP - It is used for managing multicast groups, i.e., transmission of data to a group of
hosts, e.g., online streaming.
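The logical (IP) addressing handled at this layer can be explored with Python's standard `ipaddress` module (the addresses below are reserved documentation examples, not real hosts):

```python
# Parsing and inspecting IPv4/IPv6 addresses with the standard library.

import ipaddress

v4 = ipaddress.ip_address("192.0.2.10")       # 32-bit IPv4 address
v6 = ipaddress.ip_address("2001:db8::1")      # 128-bit IPv6 address
net = ipaddress.ip_network("192.0.2.0/24")    # a network of 256 addresses

print(v4.version, v6.version)   # 4 6
print(v4 in net)                # True: the host belongs to this network
print(net.num_addresses)        # 256
```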
3. Transport Layer - The transport layer is responsible for end-to-end communication and
error-free delivery of data. It provides services that include connection-oriented
communication, flow control, reliability, and multiplexing. This layer is similar to the transport
layer of the OSI model.
4. Application Layer - It is the topmost layer of the TCP/IP model. Its functions are similar to
the combination of the application layer, session layer, and presentation layer of the OSI model.
It is responsible for user interface specifications. It contains the communication protocols used
in process-to-process communication across an Internet Protocol computer network.
SNMP: SNMP stands for Simple Network Management Protocol. It is a framework used
for managing the devices on the internet by using the TCP/IP protocol suite.
Advantages
Many Routing protocols are supported.
It is highly scalable and uses a client-server architecture.
It is lightweight.
Disadvantages
It is a little difficult to set up.
Delivery of packets is not guaranteed by the transport layer.
Vulnerable to a synchronization attack.
Parameters | OSI Model | TCP/IP Model
Reliability | It is less reliable than the TCP/IP model. | It is more reliable than the OSI model.
Protocols | The protocols of the OSI model are better hidden and can be replaced with another appropriate protocol easily. | The TCP/IP model protocols are not hidden, and we cannot fit a new protocol stack into it.
Header size | The smallest size of the OSI header is 5 bytes. | The smallest size of the TCP/IP header is 20 bytes.
OSI and TCP/IP both are logical models. One of the main similarities between the OSI and
TCP/IP models is that they both describe how information is transmitted between two
devices across a network. Both models define a set of layers. Each layer performs a specific
set of functions to enable the transmission of data.
Another similarity between the two models is that they both use the concept of
encapsulation, in which data is packaged into a series of headers and trailers that contain
information about the data being transmitted and how it should be handled by the network.
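Encapsulation can be sketched as each layer wrapping the data from the layer above in its own header (the header strings below are placeholders, not real header formats):

```python
# Toy encapsulation/decapsulation: transport, network, and data-link
# layers each add their own header (and, at the data-link layer, a
# trailer) on the way down, and strip it on the way up.

def encapsulate(data: str) -> str:
    segment = "[TCP hdr]" + data                     # transport layer
    packet = "[IP hdr]" + segment                    # network layer
    frame = "[ETH hdr]" + packet + "[ETH trailer]"   # data-link layer
    return frame

def decapsulate(frame: str) -> str:
    packet = frame.removeprefix("[ETH hdr]").removesuffix("[ETH trailer]")
    segment = packet.removeprefix("[IP hdr]")
    return segment.removeprefix("[TCP hdr]")

wire = encapsulate("GET /index.html")
print(wire)
print(decapsulate(wire))  # GET /index.html
```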
The Open Systems Interconnection (OSI) model is a reference model that is used to describe and
explain how information from a software application in one computer moves through a
physical medium to a software application on another computer. This model consists
of seven layers, and each layer performs a specific task or particular network
function.
Lack of OSI Model's Success
Neither the OSI model and its protocols nor the TCP/IP model and its protocols are perfect in
every manner, and a fair amount of criticism has been directed at both of
them. The most striking and unfortunate issue concerning the OSI model is that, although it is
perhaps the most-studied and most widely taught network architecture, it is not the model that
is actually implemented and widely used. The important reasons why this happened are given below:
1. Bad Timing: For a standard to succeed, it must be written in the trough between the two
"elephants": the burst of research activity and the wave of industry investment (the so-called
apocalypse of the two elephants). The timing of a standard is critical: written too early, before
the research is complete, the subject is still poorly understood; written too late, companies
have already invested in other approaches. The OSI standards were completed only after a
huge amount of research time, by which point companies had committed elsewhere, and so
the standards were ignored.
By the time OSI came around, the model was well grounded in research, but the TCP/IP model
was already receiving huge amounts of investment from companies, and manufacturers did not
feel like investing in the OSI model. So there were no initial offerings of OSI products. Every
company waited for some other company to go first, but no company did. This is the first
reason why OSI never caught on.
2. Bad Technology: OSI was also passed over because of competition from TCP/IP protocols
that were already in wide use. The second reason is that the OSI model and its protocols are
flawed; that is, they have fundamental weaknesses or defects in design and performance. The
choice of the seven layers of the OSI model was based more on political issues than on
technical ones; the layers are more political than technical.
The OSI model, along with all of its associated service definitions and protocols, is highly
complex. Two of its layers (session and presentation) are nearly empty, while two others (the
data link and network layers) are overfull. The documentation is also highly complex, which
makes the model very difficult to implement and inefficient in operation. Error control and
flow control are duplicated, reappearing again and again in multiple layers. The most serious
criticism is that the model is dominated by a communications mentality.
3. Bad Implementations: The OSI model is extraordinarily complex, and as a result the initial
implementations were slow, huge, and unwieldy. This is the third reason OSI became
synonymous with poor quality in its early days. It also turned out not to be necessary for all
seven layers to be designed together simply to make things work.
The implementations of TCP/IP, on the other hand, were more reliable, so people started
using TCP/IP quickly, which led to a large community of users. In short, complexity leads to
poor implementation: the OSI model is too complex to be implemented effectively and
properly.
4. Bad Politics: The fourth reason is that the OSI model was not associated with UNIX.
TCP/IP was closely associated with UNIX, which helped TCP/IP become popular in
academia, whereas OSI did not have this association at the time.
OSI, by contrast, was associated with the European telecommunications ministries, the
European Community, and the government of the USA, and it was widely considered to be
technically inferior to TCP/IP. People on the ground reacted badly to these associations and
supported the use of TCP/IP instead.
Even after all these setbacks, the OSI model is still the general standard reference for almost
all networking documentation, and many organizations remain highly interested in it.
Networking terminology that refers to numbered layers, such as "layer 3 switching", generally
refers to the OSI model. An effort has even been made to update it, resulting in a
revised model published in 1994.
Internet History
The Internet, commonly referred to as "the Net", is a global network of networks that links
computer systems all over the world. Generally, it is a worldwide system of computer networks
connected by high-bandwidth data lines, which include the Internet "backbone". Users at any
computer can access information from any other computer via the Internet (assuming they
have authorization). It was first known as the ARPANET; in 1969, ARPA (the Advanced
Research Projects Agency) conceived it, with the primary objective of allowing communication
between users and devices at any distance. To connect to the Internet you need an Internet
service provider (ISP), which acts as a middleman between you and the Internet; most ISPs
offer a DSL, cable, or fiber connection. Below is a table that contains an overall history of the
Internet.
Year Event
1960 This is the year in which the internet started, as a way for government researchers to
share information. Also, the first known modem and dataphone were introduced by AT&T.
1961 On May 31, 1961, Leonard Kleinrock released his first paper, "Information Flow in Large
Communication Nets."
1962 A paper talking about packetization was released by Leonard Kleinrock. Also, this year, a
suggestion was given by Paul Baran for the transmission of data with the help of using
fixed-size message blocks
1964 Baran produced a study on distributed communications in 1964. In the same year,
Leonard Kleinrock released Communication Nets Stochastic Message Flow and Design,
the first book on packet nets.
1965 The first long-distance dial-up link was established between a TX-2 computer at MIT
and a Q-32 at SDC in California by Lawrence G. Roberts of MIT and Tom Marill of SDC.
Also, the word "packet" was coined by Donald Davies in this year.
1966 After getting success at connecting over dial-up, a paper about this was published by
Tom Marill and Lawrence G. Roberts.
In the same year, Robert Taylor brought Larry Roberts and joined ARPA to develop
ARPANET.
1967 In 1967, 1-node NPL packet net was created by Donald Davies. For packet switch, the use
of a minicomputer was suggested by Wes Clark.
1968 On 9 December 1968, Hypertext was publicly demonstrated by Doug Engelbart. The first
meeting regarding NWG (Network Working Group) was also held this year, and on June
3, 1968, the ARPANET program plan was published by Larry Roberts.
1969 On 1 April 1969, RFC #1, describing the IMP software and introducing the
Host-to-Host protocol, was released by Steve Crocker. On 3 July 1969, UCLA issued a
press release introducing the Internet to the public. On August 29, 1969, UCLA
received the first network equipment and the first network switch. CompuServe, the
first commercial online service, was founded the same year.
1970 This is the year in which NCP was released by the UCLA team and Steve Crocker.
1971 In 1971, Ray Tomlinson sent the first e-mail via a network to other users.
1972 In 1972, the ARPANET was initially demonstrated to the general public.
1973 TCP was created by Vinton Cerf in 1973, and it was released in December 1974 with the
help of Yogen Dalal and Carl Sunshine. ARPA also launched the first international link,
SATNET, this year. And, the Ethernet was created by Robert Metcalfe at the Xerox Palo
Alto Research Center.
1974 In 1974, the Telenet, a commercial version of ARPANET, was introduced. Many consider it
to be the first Internet service provider.
1978 In 1978, to support real-time traffic, TCP was split into TCP and IP, driven by
John Shoch, David Reed, and Danny Cohen; the split also helped create UDP. Later,
on 1 January 1983, TCP/IP was standardized on ARPANET. Also, in the same year,
the first worm was developed by Jon Hupp and John Shoch at Xerox PARC.
1981 BITNET ("Because It's Time Network") was established in 1981. It was formerly
a network of IBM mainframe computers in the United States.
1983 In 1983, the TCP/IP was standardized by ARPANET, and the IAB, short for Internet
Activities Board was also founded in the same year.
1984 The DNS was introduced by Jon Postel and Paul Mockapetris.
1986 The first Listserv was developed by Eric Thomas, and NSFNET was also created in 1986.
Additionally, BITNET II was created in the same year 1986.
1988 The first T1 backbone was added to ARPANET, and BITNET and CSNET merged
to create CREN.
1989 A proposal for a distributed system was submitted by Tim Berners-Lee at CERN on 12
March 1989 that would later become the WWW.
1990 This year, NSFNET replaced the ARPANET. On 10 September 1990, Mike Parker, Bill
Heelan, and Alan Emtage released the first search engine Archie at McGill University in
Montreal, Canada.
1991 Tim Berners-Lee introduced the WWW (World Wide Web) to the general public
on August 6, 1991, unveiling the first web page and website the same day. Also, this
year, the internet started to become available to the public through NSF. The first
web server outside Europe came online on 1 December 1991.
1992 A major milestone for the internet: the Internet Society was formed, and
NSFNET was upgraded to a T3 backbone.
1993 CERN released the Web source code into the public domain on April 30, 1993.
This caused the Web to experience massive growth. Also this year, the United
Nations and the White House came online, helping to popularize top-level domains
such as .gov and .org. On 22 April 1993, Mosaic, the first widely used graphical
World Wide Web browser, was released by the NCSA with the help of Eric Bina and
Marc Andreessen.
1994 On April 4, 1994, James H. Clark and Marc Andreessen founded the Mosaic
Communications Corporation, later Netscape. On 13 October 1994, the first Netscape
browser, Mosaic Netscape 0.9, was released, which also introduced the Internet to
cookies. On 7 November 1994, the radio station WXYC announced it was
broadcasting on the Internet, becoming the first traditional radio station to do so.
Also, in the same year, the W3C was established by Tim Berners-Lee.
1995 In February 1995, Netscape introduced SSL (Secure Sockets Layer), and the
dot-com boom began. The Opera web browser was introduced on 1 April 1995, and
VocalTec, the first VoIP software for making voice calls over the Internet, was
introduced.
Later, the Internet Explorer web browser was introduced by Microsoft on 16 August 1995.
In RFC 1866, the next version of HTML 2.0 was released on 24 November 1995.
In 1995, JavaScript, originally known as LiveScript, was created by Brendan Eich. At that
time, he was an employee at Netscape Communications Corporation. Later LiveScript was
renamed to JavaScript with Netscape 2.0B3 on December 4, 1995. In the same year, they
also introduced Java.
1996 This year, the Telecom Act deregulated data networks. Also, Macromedia Flash,
now known as Adobe Flash, was released in 1996.
In December 1996, the W3C published CSS 1, the first CSS specification. For the first
time, more e-mail than postal mail was sent in the USA. This is also the year in which
the network ceased to exist, as CREN ended its support.
1997 In 1997, the 802.11 (Wi-Fi) standard was introduced by IEEE, and the internet2 consortium
was also established.
1998 The first Internet weblogs arose in this year, and on February 10, 1998, XML became a
W3C recommendation.
1999 In September 1999, Napster began sharing files. Marc Ostrofsky sold
business.com, then the most expensive Internet domain name, for $7.5 million on 1
December 1999. Later, on 26 July 2007, the domain was resold for $345 million to
R.H. Donnelley.
2003 The members of CERN took the decision to dissolve the organization on 7 January 2003.
Also, this year, the Safari web browser came into the market on 30 June 2003.
2004 The Mozilla Firefox web browser was released by Mozilla on 9 November 2004.
2008 On 1 March 2008, AOL ended its support for the Netscape Internet browser.
Then, the Google Chrome web browser was introduced by Google on 11 December
2008, and gradually it became a popular web browser.
2009 On 3 January 2009, a person using the pseudonym Satoshi Nakamoto launched
Bitcoin, the internet currency.
2014 On 28 October 2014, the W3C released HTML5, the latest version of the HTML
markup language, as an official recommendation.
Introduction to Transmission Media
Transmission media is a communication channel that carries the information from the
sender to the receiver. Data is transmitted through the electromagnetic signals.
The main functionality of the transmission media is to carry the information in the form
of bits through LAN(Local Area Network).
It is a physical path between transmitter and receiver in data communication.
In a copper-based network, the bits travel in the form of electrical signals.
In a fibre-based network, the bits travel in the form of light pulses.
In the OSI (Open System Interconnection) model, transmission media supports Layer 1.
Therefore, it is considered to be a Layer 1 component.
The electrical signals can be sent through the copper wire, fibre optics, atmosphere,
water, and vacuum.
The characteristics and quality of data transmission are determined by the
characteristics of medium and signal.
Transmission media is of two types: wired media and wireless media. In wired media,
medium characteristics are more important, whereas in wireless media, signal
characteristics are more important.
Different transmission media have different properties such as bandwidth, delay, cost
and ease of installation and maintenance.
The transmission media is available in the lowest layer of the OSI reference model, i.e., the
Physical layer. It is classified into two types:
1. Guided transmission
2. Unguided transmission
1. Guided Media: It is also referred to as Wired or Bounded transmission media. Signals being
transmitted are directed and confined in a narrow pathway by using physical links.
Features:
High Speed
Secure
Used for comparatively shorter distances
(i) Twisted Pair Cable – Twisted pair cables are created by twisting two insulated
copper wires around each other to form a single cable. The insulation allows both
wires to carry signals independently. The twisted pair is then enclosed inside a
protective coating to make it easier to use.
Twisted pair cables are generally of two types
Unshielded Twisted Pair (UTP): UTP consists of two insulated copper wires twisted
around one another. This type of cable has the ability to block interference and does not
depend on a physical shield for this purpose. It is used for telephonic applications.
Advantages:
⇢ Easy to install
⇢ High-speed capacity
Disadvantages:
⇢ Susceptible to external interference
⇢ Lower capacity and performance in comparison to STP
⇢ Short distance transmission due to attenuation
Applications: telephone lines and LAN cabling (Ethernet).
Shielded Twisted Pair (STP): This type of cable consists of a special jacket (a copper braid
covering or a foil shield) to block external interference. It is used in fast-data-rate Ethernet and
in voice and data channels of telephone lines.
Advantages of STP: better immunity to noise and external interference than UTP;
supports higher data rates.
Disadvantages of STP: costlier and bulkier than UTP; the shield must be grounded,
which makes installation harder.
Optical Fiber Cable:- Optical Fibre Cables are glass-based cables that transmit light signals.
The reflection concepts are employed for light signal transmission over cables. It is recognized
for allowing bulkier data to be delivered with more bandwidth and reduced electromagnetic
interference during transmission. Because the material is non-corrosive and weightless, these
cables are preferable to twisted cables in most instances.
Advantages:
High bandwidth.
Lightweight.
Negligible attenuation.
Fastest data transmission.
Disadvantages:
Very costly.
Hard to install and maintain.
Fragile.
They are unidirectional, which means they will need another fiber for bidirectional
communication.
Stripline: a transmission line in which the conducting strip is sandwiched between two
ground planes, separated by dielectric.
Microstripline: in this, the conducting material is separated from a single ground plane by a
layer of dielectric.
Unguided Media: It is also referred to as Wireless or Unbounded transmission media. No
physical medium is required for the transmission of electromagnetic signals.
Features:
The signal is broadcast through open space, so anyone with a suitable antenna can receive it.
Less secure than guided media.
Used for larger distances.
(i) Radio waves – These are easy to generate and can penetrate through buildings. The
sending and receiving antennas need not be aligned. Frequency range: 3 kHz – 1 GHz.
AM and FM radio and cordless phones use radio waves for transmission.
Radio waves are a kind of electromagnetic wave and have the longest wavelengths
in the electromagnetic spectrum. Like all other electromagnetic waves, radio waves
travel at the speed of light. They are usually generated by accelerating charged
particles.
Radio waves are generated artificially by transmitters and received by antennas or
radio receivers. They are usually used for fixed or mobile radio communication,
broadcasting, radar, and communication satellites.
Radio waves are omnidirectional, i.e., the signals are propagated in all directions of
free space. Because of this, the sending and receiving antennas need not be
aligned: the wave sent by the sending antenna can be received by any receiving
antenna.
An example of radio-wave use is FM radio.
(ii) Microwaves – It is a line of sight transmission i.e. the sending and receiving antennas need
to be properly aligned with each other. The distance covered by the signal is directly
proportional to the height of the antenna. Frequency Range:1GHz – 300GHz. These are majorly
used for mobile phone communication and television distribution.
Microwave Transmission
(iii) Infrared – Infrared waves are used for very short distance communication. They
cannot penetrate through obstacles, which prevents interference between systems.
Frequency range: 300 GHz – 400 THz. It is used in TV remotes, wireless mice,
keyboards, printers, etc.
Infrared technology uses diffuse light reflected off walls, furniture, etc., or a
directed light if a line of sight (LOS) exists between sender and receiver.
Infrared light is a part of the electromagnetic spectrum, at wavelengths somewhat
longer than those of visible red light but shorter than those of radio waves. It is
associated with heat and thermal radiation and is not visible to the naked eye.
In infrared transmission, senders can be simple light-emitting diodes (LEDs) or
laser diodes; photodiodes act as receivers.
Infrared wireless is used for short- and medium-range communications and
control: intrusion detectors, robot control systems, medium-range line-of-sight
laser communication, cordless microphones, headsets, modems, and other
peripheral devices.
Infrared radiation is also used in scientific, industrial, and medical applications.
Night-vision devices using active near-infrared illumination allow people and
animals to be observed without the observer being detected.
Infrared technology allows computing devices to communicate via short-range
wireless signals; computers can transfer files and other digital data
bidirectionally.
UNIT II : Data Link Layer
The data link layer sits directly above the hardware (physical) layer, and information
at this layer is in the form of frames. The data link layer is mainly used to define the
format of the data. It is the second layer in the internet model and lies between the
network and physical layers, as you can see in the following diagram.
Data-link layer is the second layer after the physical layer. The
data link layer is responsible for maintaining the data link
between two hosts or nodes.
It takes services from the physical layer and provides services to
the network layer. The primary function of this layer is data
synchronization.
It provides the logic for the data link; thus it controls the synchronization, flow
control, and error checking functions of the data link layer.
Functions are –
(i) Error Recovery.
(ii) It performs the flow control operations.
(iii) User addressing.
The data link layer has to carry out several specific functions and
the following are the main design issues of data link layer:
Data transfer
Frame synchronization
Flow control
Error control
Addressing
Link management.
Framing:
The data link layer encapsulates the message from the sender into frames and
delivers it to the receiver, adding the sender's and receiver's addresses. The
advantage of using frames is that data is broken up into recoverable chunks that
can easily be checked for corruption.
Types of framing
1. Fixed-size framing – the frame length is fixed, so no boundary delimiter is needed.
2. Variable-size framing – frame boundaries must be marked, typically with
character (byte) stuffing or bit stuffing.
Examples:
If Data –> 011100011110 and ED –> 0111 then, find data after bit
stuffing.
--> 011010001101100
If Data –> 110001001 and ED –> 1000 then, find data after bit
stuffing?
--> 11001010011
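The exercises above use an exercise-specific ending delimiter (ED). The classic HDLC form of bit stuffing instead uses the fixed flag 01111110 and inserts a 0 after every run of five consecutive 1s in the data; a minimal sketch of that rule (function names are illustrative):

```python
def bit_stuff(bits: str) -> str:
    """HDLC-style bit stuffing: insert a '0' after five consecutive '1's."""
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == '1' else 0
        if run == 5:
            out.append('0')   # stuffed bit, removed again by the receiver
            run = 0
    return ''.join(out)

def bit_unstuff(bits: str) -> str:
    """Remove the '0' that follows every run of five '1's."""
    out, run, skip = [], 0, False
    for b in bits:
        if skip:              # this bit is the stuffed '0': drop it
            skip = False
            run = 0
            continue
        out.append(b)
        run = run + 1 if b == '1' else 0
        if run == 5:
            skip = True
    return ''.join(out)
```

Because the stuffed 0 breaks any run of six 1s, the flag pattern can never appear inside the payload, and `bit_unstuff(bit_stuff(data)) == data` for any bit string.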
Advantages of Framing in Data Link Layer
Frames are used continuously in the process of time-division
multiplexing.
It facilitates a form to the sender for transmitting a group of
valid bits to a receiver.
Frames also contain headers that include information such as
error-checking codes.
Frame Relay, Token Ring, Ethernet, and other data link layer protocols have
their own frame structures.
Frames allow the data to be divided into multiple recoverable
parts that can be inspected further for corruption.
It provides a flow control mechanism that manages the frame flow
such that the data congestion does not occur on slow receivers
due to fast senders.
It provides reliable data transfer services between the layers
of the peer network
Flow Control
Flow control restricts the rate at which the sender transmits so that a fast sender
cannot overwhelm a slow receiver. Two basic approaches are used:
1. Stop-and-Wait – the sender transmits one frame and waits for its
acknowledgement before sending the next.
Advantages –
This method is very easy and simple, and each frame is checked and
acknowledged.
This method is also very accurate.
Disadvantages –
It is slow: only one frame can be in transit at a time, so the channel is poorly
utilized.
2. Sliding Window – the sender may transmit several frames before an
acknowledgement arrives.
Advantages –
Better channel utilization, since multiple frames are in transit at once.
Disadvantages –
The main issue is complexity at the sender and receiver due to the
transferring of multiple frames.
The receiver might receive data frames or packets out of sequence.
Error Control
Various Techniques for error & flow Control : There are various
techniques of error control as given below :
1. Stop-and-Wait ARQ : Stop-and-Wait ARQ is also known as
alternating bit protocol. It is one of the simplest flow and error
control techniques or mechanisms. This mechanism is generally
required in telecommunications to transmit data or information
between two connected devices. Receiver simply indicates its
readiness to receive data for each frame. In this method, the sender sends
information or data packets to the receiver, then stops and
waits for ACK (Acknowledgment) from receiver. Further, if ACK does
not arrive within given time period i.e., time-out, sender then
again resends frame and waits for ACK. But, if sender receives ACK,
then it will transmit the next data packet to receiver and then
again wait for ACK from receiver. This process to stop and wait
continues until sender has no data frame or packet to send.
2. Sliding Window ARQ : This technique is generally used for
continuous transmission error control. It is further categorized
into two categories: Go-Back-N ARQ and Selective Repeat ARQ.
Types of Errors
Single-Bit Error – only one bit of the data unit is changed.
Burst Error – two or more bits of the data unit are changed.
Parity Check
A redundant parity bit is appended so that the total number of 1's becomes
even; that is why it is called even parity checking.
Disadvantages
It cannot detect an error when an even number of bits are flipped, since
the parity remains unchanged.
Checksum
The data is divided into equal segments; the segments are added, the sum
is complemented, and the result (the checksum) is sent along with the
data. The receiver adds all segments including the checksum; if the
complement of the result is zero, the data is accepted.
Disadvantages
It may fail to detect errors that leave the sum unchanged, for example
when segments are reordered or when compensating bit errors occur.
Advantages:
Increased Data Reliability
Improved Network Performance
Enhanced Data Security
Disadvantages:
Overhead: Error detection requires additional resources and
processing power, which can lead to increased overhead on the
network. This can result in slower network performance and
increased latency.
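As an illustration of the checksum idea above, here is a sketch of the 16-bit one's-complement Internet checksum, the variant used by IP, UDP, and TCP (the function name is illustrative):

```python
def internet_checksum(data: bytes) -> int:
    """16-bit one's-complement Internet checksum (RFC 1071 style)."""
    if len(data) % 2:
        data += b'\x00'                           # pad to an even length
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]     # add 16-bit words
        total = (total & 0xFFFF) + (total >> 16)  # fold any carry back in
    return ~total & 0xFFFF                        # one's complement of the sum
```

The receiver runs the same sum over the data plus the received checksum; if the result is 0, the data is accepted.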
Hamming Code:
The Hamming Code method is one of the most effective ways to detect
single-data bit errors in the original data at the receiver end. It
is not only used for error detection but is also for correcting
errors in the data bit.
2^r >= d + r + 1
Where,
d - number of data bits
r - number of redundant (parity) bits
2. Parity Bits - A parity bit is a binary bit appended to the data so
that the total count of 1's in the transmitted data is even or odd.
It is used on the receiver side to detect errors and correct them.
Odd parity bits - the parity bit is chosen so that the total number
of 1's, data plus parity, is odd.
Even parity bits - the parity bit is chosen so that the total number
of 1's, data plus parity, is even.
To solve the data bit issue with the hamming code method, some
steps need to be followed:
Step 1 - Determine the positions of the data bits and the number of
redundant bits in the original data. The number of redundant
bits r is the smallest value satisfying [2^r >= d+r+1].
Step 2 - Place the parity bits at the positions that are powers of
two [2^p, where p = 0, 1, 2, ... n], fill in the data bits at the
remaining positions, and compute each parity bit's value.
Step 3 - Fill the parity bit obtained in the original data and
transmit the data to the receiver side.
Step 4 - Check the received data using the parity bit and
detect any error in the data, and in case damage is present,
use the parity bit value to correct the error.
2^0 - P1
2^1 - P2
2^2 - P4
2^3 - P8
P1: 1, 3, 5, 7, 9, 11
P1 - P1, 0, 1, 1, 1, 1
P1 - 0
P2: 2, 3, 6, 7, 10, 11
P2 - P2, 0, 0, 1, 0, 1
P2 - 0
P4: 4, 5, 6, 7
P4 - P4, 1, 0, 1
P4 - 0
P8: 8, 9, 10, 11
P8 - P8, 1, 0, 1
P8 - 0
To identify the position of the error bit, read the new parity
values P8 P4 P2 P1 as a binary number:
[0x2^3 + 1x2^2 + 1x2^1 + 1x2^0]
= 7, i.e., the same as the assumed error position.
To correct the error, simply reverse the error bit to its
complement, i.e., for this case, change 0 to 1, to obtain the
original data bit.
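The worked example above can be generalized. A minimal sketch of even-parity Hamming encoding and checking, with parity bits at the power-of-two positions (function names are illustrative):

```python
def hamming_encode(data_bits):
    """Even-parity Hamming code; parity bits sit at positions 1, 2, 4, 8, ..."""
    d = len(data_bits)
    r = 0
    while 2 ** r < d + r + 1:          # smallest r with 2^r >= d + r + 1
        r += 1
    n = d + r
    code = [0] * (n + 1)               # 1-indexed; index 0 unused
    it = iter(data_bits)
    for pos in range(1, n + 1):
        if pos & (pos - 1):            # not a power of two: data position
            code[pos] = next(it)
    for p in range(r):                 # compute each parity bit
        mask = 1 << p
        parity = 0
        for pos in range(1, n + 1):
            if pos & mask and pos != mask:
                parity ^= code[pos]
        code[mask] = parity
    return code[1:]

def hamming_check(code):
    """Return 0 if intact, else the 1-based position of the flipped bit."""
    syndrome = 0
    for pos in range(1, len(code) + 1):
        if code[pos - 1]:
            syndrome ^= pos            # XOR of positions holding a 1
    return syndrome
```

`hamming_check` returns 0 for an undamaged codeword; otherwise the syndrome is exactly the error position, and complementing that bit restores the original data.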
The main problem here is how to prevent the sender from flooding
the receiver. The general solution for this problem is to have the
receiver send some sort of feedback to the sender, as follows −
The sender, after sending a frame, has to wait for an
acknowledgement frame from the receiver before sending another frame.
This protocol is called Simplex Stop and wait protocol, the sender
sends one frame and waits for feedback from the receiver. When the
ACK arrives, the sender sends the next frame.
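The stop-and-wait behaviour described above can be sketched as a small simulation; the loss probability and helper names are illustrative, and the 1-bit sequence number is what gives the protocol its "alternating bit" name:

```python
import random

def stop_and_wait(frames, loss_prob=0.3, rng=random.Random(42)):
    """Sketch of stop-and-wait over a lossy link (loss_prob is assumed)."""
    delivered, attempts = [], 0
    for seq, frame in enumerate(frames):
        while True:
            attempts += 1
            if rng.random() >= loss_prob:           # frame and ACK both arrive
                delivered.append((seq % 2, frame))  # 1-bit sequence number
                break                               # ACK received: next frame
            # otherwise: timeout, retransmit the same frame

    return delivered, attempts

frames = ["F0", "F1", "F2"]
out, tries = stop_and_wait(frames)
```

Every frame is eventually delivered in order, at the cost of retransmissions whenever a frame or its ACK is lost.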
HDLC supports two types of transfer modes: normal response mode (NRM) and
asynchronous balanced mode (ABM).
HDLC Frame
Wired LANs: Ethernet, Ethernet Protocol, Standard Ethernet, Fast Ethernet (100 Mbps),
Gigabit Ethernet, 10 Gigabit Ethernet.
The data link layer is used in a computer network to transmit the data between two devices
or nodes. It is divided into two sublayers: data link control and multiple access
resolution/protocol. The upper sublayer is responsible for flow control and error control,
and hence it is termed logical link control (LLC). The lower sublayer is used to handle and
reduce collisions from multiple access on a channel, and hence it is termed media access
control (MAC) or multiple access resolution.
Data Link Control: A data link control is a reliable channel for transmitting data over a
dedicated link using various techniques such as framing, error control and flow control of
data packets in the computer network.
1. What is a multiple access protocol?
When a sender and receiver have a dedicated link to transmit data packets, data link
control is enough to handle the channel. Suppose there is no dedicated path to
communicate or transfer the data between two devices. In that case, multiple stations
access the channel and simultaneously transmit data over it, which may create
collisions and crosstalk. Hence, a multiple access protocol is required to reduce
collisions and avoid crosstalk between the channels.
For example, suppose that a classroom is full of students. When a teacher asks a
question, all the students (small channels) in the class start answering at the same
time (transferring the data simultaneously). Because all the students respond at once,
the answers overlap and information is lost. It is therefore the responsibility of the
teacher (the multiple access protocol) to manage the students and let them answer
one at a time.
The types of multiple access protocol are subdivided into the following
processes:
1.1 Random Access Protocol
In this protocol, all stations have equal priority to send data over the channel. In a
random access protocol, no station depends on or controls another station. Depending
on the channel's state (idle or busy), each station transmits its data frame. However, if
more than one station sends data over the channel at the same time, there may be a
collision or data conflict. Due to the collision, the data frame packets may be lost or
changed, and hence may not be received correctly by the receiver.
The different random-access methods for broadcasting frames on the channel are:
ALOHA
CSMA
CSMA/CD
CSMA/CA
1.1.1 ALOHA
Aloha means “Hello”. ALOHA is a multiple-access protocol that allows data to be
transmitted via a shared network channel. It was developed by Norman Abramson and his
associates in the 1970s at the University of Hawaii.
Aloha Rules
1. Pure Aloha
2. Slotted Aloha
Pure Aloha
In the case of Pure Aloha, transmission time is continuous. Whenever a station has a
frame available, it sends the frame. If there is a collision and the frame gets
destroyed, the sender waits a random amount of time before retransmission. Let
us now understand this technique:
Step 1: In the case of Pure ALOHA, the nodes transmit frames whenever the data is
available for sending.
Step 2: Whenever two or more nodes transmit data simultaneously, there is a chance of
collision and frames get destroyed.
Step 4: When the acknowledgement is not received within a specified time, the sender
node will assume that the frame has been destroyed.
Step 5: If the frame is destroyed by a collision, the node will wait for a random amount
of time and sends it again. The waiting time will be random since otherwise same frames
will collide multiple times.
Step 6: As per Pure ALOHA, when the time-out period passes, every station must wait
for a random time before resending frame. This randomness will help in avoiding more
collisions.
In the figure there are four stations that contend with one another for access to the shared
channel. All these stations are transmitting frames. Some of these frames collide because
multiple frames are in contention for the shared channel. Only two frames, frame 1.1 and
frame 3.2, survive. All other frames are destroyed.
To assess Pure ALOHA, its throughput and rate of frame transmission must be
predicted. For that, let’s make some assumptions: all frames take the same
transmission time Tt, and frames arrive randomly (Poisson arrivals).
Vulnerable Time = 2 * Tt
Throughput S = G * e^(-2G), where G is the average number of frames generated per
frame time; the maximum throughput is 0.184, reached at G = 0.5.
Slotted ALOHA
Slotted ALOHA was invented to improve the efficiency of pure ALOHA as chances of collision
in pure ALOHA are very high.
• In slotted ALOHA, the time of the shared channel is divided into discrete intervals called
slots.
• The stations can send a frame only at the beginning of the slot and only one frame is sent
in each slot.
• In slotted ALOHA, if any station is not able to place the frame onto the channel at the
beginning of the slot, i.e. it misses the time slot, then the station has to wait until the
beginning of the next time slot.
• In slotted ALOHA, there is still a possibility of collision if two stations try to send at the
beginning of the same time slot as shown in fig.
• Slotted ALOHA still has an edge over pure ALOHA as chances of collision are reduced to
one-half.
Explanation:
• A station transmits a frame at the beginning of a slot and waits for an
acknowledgement.
• Otherwise the station uses a backoff strategy, and sends the packet again.
• After many attempts, if there is still no acknowledgement, the station aborts the
idea of transmission.
Collision is possible only in the current slot. Therefore, Vulnerable Time = Tt.
The efficiency of Slotted ALOHA is S = G * e^(-G), which reaches its maximum of
0.368 at G = 1, twice that of Pure ALOHA.
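The standard throughput formulas for both ALOHA variants, S = G·e^(−2G) for pure ALOHA and S = G·e^(−G) for slotted ALOHA, where G is the average number of frames per frame time, are easy to compute directly:

```python
import math

def pure_aloha_throughput(G):
    """S = G * e^(-2G); peaks at ~0.184 when G = 0.5."""
    return G * math.exp(-2 * G)

def slotted_aloha_throughput(G):
    """S = G * e^(-G); peaks at ~0.368 when G = 1."""
    return G * math.exp(-G)
```

Evaluating at the peaks shows why halving the vulnerable time doubles the maximum throughput.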
1.1.2 CSMA
Carrier Sense Multiple Access (CSMA) is a media access protocol in which a station
senses the traffic on a channel (idle or busy) before transmitting the data. If the
channel is idle, the station can send data to the channel; otherwise, it must wait until
the channel becomes idle. Sensing first reduces the chances of a collision on the
transmission medium.
1-Persistent: in the 1-persistent mode of CSMA, each node first senses the shared
channel; if the channel is idle, it immediately sends the data. Otherwise, it keeps
sensing the channel continuously and broadcasts the frame unconditionally as soon
as the channel becomes idle.
Non-Persistent: in this access mode of CSMA, before transmitting the data, each node
senses the channel; if the channel is idle, it immediately sends the data. Otherwise,
the station waits for a random time (it does not sense continuously), senses again,
and transmits when the channel is found to be idle.
O-Persistent: in the O-persistent method, a transmission order defines each station's
priority on the shared channel before transmission. When the channel is found to be
idle, each station waits for its assigned turn to transmit the data.
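The difference between 1-persistent and non-persistent sensing can be sketched as follows; `is_idle` stands in for carrier sensing, and the tick-based timing and random wait range are illustrative:

```python
import random

def wait_1_persistent(is_idle, clock=0):
    """1-persistent: sense continuously, transmit the instant the channel is idle."""
    while not is_idle(clock):
        clock += 1                       # keep sensing every tick
    return clock                         # transmit now

def wait_non_persistent(is_idle, clock=0, rng=random.Random(3)):
    """Non-persistent: if busy, back off a random time before sensing again."""
    while not is_idle(clock):
        clock += 1 + rng.randrange(5)    # random wait (range is illustrative)
    return clock
```

The 1-persistent station always grabs the channel at the first idle tick (so two such stations are guaranteed to collide), while the non-persistent station may overshoot the idle moment but spreads contending stations apart in time.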
1.1.3 CSMA/CD
CSMA/CD stands for Carrier Sense Multiple Access/Collision Detection, with collision
detection being an extension of the CSMA protocol. This creates a procedure that regulates
how communication must take place in a network with a shared transmission medium
Carrier Sense Multiple Access with Collision Detection (CSMA/CD) is a network protocol for
carrier transmission that operates at the MAC.
When a collision is identified, the CSMA/CD quickly terminates the transmission by sending
a signal, saving the sender's time to send the data packet. Let's say each station detects a
collision as the packets are transmitted. In such a situation, the CSMA/CD immediately
issues a jam signal to halt the transmission and delays sending another data packet until a
random time has passed. When a free channel is discovered, the data will be sent.
Algorithm
The station that intends to transmit the data detects if the carrier is available or
busy by perceiving its state. If a carrier is empty, the transmission starts.
The condition Tt >= 2 * Tp, where Tt is the transmission delay and Tp is
the propagation delay, is used by the transmitting station to identify collisions.
As soon as it notices a collision, the station sends the jamming signal.
Following a collision, the transmitting station interrupts transmission and waits for a
predetermined period, known as the "back-off time". The station then re-transmits
the signal.
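The "back-off time" in the algorithm above is usually chosen by truncated binary exponential backoff, as in classic Ethernet; a sketch (the 51.2 µs slot time is the classic 10 Mbps Ethernet value, used here as an assumption):

```python
import random

SLOT_TIME = 51.2e-6  # seconds; classic 10 Mbps Ethernet slot time (assumed)

def backoff_delay(collisions, rng):
    """Truncated binary exponential backoff after the n-th collision."""
    k = min(collisions, 10)          # exponent is capped at 10
    slots = rng.randrange(2 ** k)    # pick a random slot count in [0, 2^k - 1]
    return slots * SLOT_TIME
```

Doubling the window after each collision quickly spreads retransmissions apart, so repeated collisions between the same stations become unlikely.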
Workflow
Although CSMA/CD is more efficient than CSMA, a few considerations are discussed below
relating to its effectiveness:
Advantages: low overhead when traffic is light, and collisions are detected quickly, so
little bandwidth is wasted on destroyed frames.
Disadvantages: performance degrades under heavy load as collisions become
frequent; it is not suited to wireless links, where collisions are hard to detect; and the
cable length is limited by the requirement Tt >= 2 * Tp.
1.1.4 CSMA/CA
Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA) is a network protocol
for carrier transmission that operates at the MAC.
LBT principle
The "Listen Before Talk" (LBT) principle is the foundation of CSMA/CA. Before the station
can begin transmitting, the line must check to verify if it is free. However, this is only the
first action. Collisions are effectively prevented with the help of additional functions that are
part of the procedure.
Working
The steps above are followed by wireless networks that adopt CSMA/CA. These are
explained in more detail below.
1. First, the stations sense the transmission medium. This implies that the carrier sense
keeps a close eye on the radio channel and determines whether other stations are
transmitting at the same time.
2. If the transmission medium is already in use, a random backoff is initiated, during
which the station waits for a determined period before checking again. The same
happens at all other stations that are not actively broadcasting or receiving.
Stations also maintain a network allocation vector (NAV), set from the headers of
overheard frames, which tells a station how long it must wait before accessing the
wireless medium.
3. If the network is available, the station starts a waiting procedure that regulates
how long it pauses before commencing a media transmission: the channel is
examined thoroughly for a defined interval before the station may send its request
frame. If it remains free, a random backoff begins, and the optional
request-to-send/clear-to-send (RTS/CTS) exchange starts, which minimizes frame
conflicts caused by the hidden node problem. If the request to send has been
received successfully at the receiver side and there hasn't been a collision, the
receiver's reply grants the sender permission to use the transmission medium.
4. All other stations are alerted that the network is in use. As a result, they raise their
network allocation vector again and delay making another attempt to verify if the
channel is free.
5. The station then begins the transmission. After waiting for the time required to
process a data packet, the receiver confirms that the packet has been successfully
received by sending an ACK (acknowledgement) frame to the sender. It also sets
the network allocation vector to 0, indicating that the network is now open for a
new transmission.
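The random backoff in the steps above is drawn from a contention window that doubles after each failed attempt; a sketch using 802.11-style defaults (the cw_min/cw_max values are illustrative):

```python
import random

def csma_ca_backoff_slots(retries, cw_min=15, cw_max=1023, rng=None):
    """Pick a random backoff slot count from a contention window that
    doubles after each failed attempt (802.11-style sketch)."""
    rng = rng or random.Random()
    cw = min((cw_min + 1) * (2 ** retries) - 1, cw_max)  # current window
    return rng.randrange(cw + 1)                         # slots to wait
```

A fresh transmission waits between 0 and 15 slots; each retry widens the window, up to the cw_max ceiling, trading delay for a lower collision probability.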
1.2 Controlled Access
In controlled access, the stations consult one another to find which station
has the right to send. Only one node may send at a time, to avoid the collision of
messages on a shared medium. The three controlled-access methods are:
1. Reservation
2. Polling
3. Token Passing
1.2.1 Reservation
In the reservation method, a station must make a reservation before sending data.
Time is divided into intervals; each interval begins with a reservation frame
containing one minislot per station, and a station announces its intention to transmit
by setting its own minislot.
The following figure shows a situation with five stations and a five-minislot
reservation frame. In the first interval, only stations 1, 3, and 4 have made
reservations. In the second interval, only station 1 has made a reservation.
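The reservation frame in the figure can be sketched as a minislot bitmap; station indices here are 0-based and the helper name is illustrative:

```python
def reservation_interval(requests, stations=5):
    """Build a reservation minislot frame. `requests` is the set of 0-based
    station indices wanting to transmit in this interval (illustrative)."""
    frame = [1 if s in requests else 0 for s in range(stations)]
    order = [s for s, bit in enumerate(frame) if bit]  # transmission order
    return frame, order
```

For the first interval above (stations 1, 3, and 4, i.e. 0-based indices 0, 2, 3), `reservation_interval({0, 2, 3})` yields the bitmap [1, 0, 1, 1, 0] and the collision-free transmission order [0, 2, 3].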
Advantages of Reservation: the maximum and minimum data access times on the
channel can be predicted easily, since times and rates are fixed, and there are no
collisions during data transmission.
Disadvantages of Reservation: a station must wait for its reserved slot even when no
other station is active, and the reservation minislots add overhead when few stations
have data to send.
1.2.2 Polling
Polling process is similar to the roll-call performed in class. Just like the teacher, a
controller sends a message to each node in turn.
In this, one acts as a primary station(controller) and the others are secondary
stations. All data exchanges must be made through the controller.
The message sent by the controller contains the address of the node being selected
for granting access.
Although all nodes receive the message, only the addressed one responds to it and
sends data, if any. If there is no data, usually a “poll reject” (NAK) message is sent back.
Problems include high overhead of the polling messages and high dependence on
the reliability of the controller.
Advantages of Polling:
The maximum and minimum access times and data rates on the channel are fixed
and predictable.
It has maximum efficiency.
It has maximum bandwidth.
No slot is wasted in polling.
Priorities can be assigned to ensure faster access for some secondary stations.
Disadvantages of Polling
Efficiency: Let Tpoll be the time for polling and Tt be the time required for
transmission of data. Then,
Efficiency = Tt / (Tpoll + Tt)
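Treating efficiency as the fraction of channel time spent on useful transmission, Tt / (Tpoll + Tt), it can be computed directly; the function name is illustrative:

```python
# Polling efficiency: useful transmission time over total (poll + data) time.

def polling_efficiency(t_poll, t_t):
    return t_t / (t_poll + t_t)

# e.g. 1 ms of polling overhead for every 9 ms of data transmission
print(polling_efficiency(1, 9))  # 0.9
```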
In the token passing scheme, the stations are connected logically to each other in
the form of a ring, and access to the stations is governed by tokens.
A token is a special bit pattern or a small message, which circulates from one station
to the next in some predefined order.
In Token Ring, the token is passed from one station to the adjacent station in the
ring, whereas in Token Bus, each station uses the bus to send the token to the
next station in some predefined order.
In both cases, the token represents permission to send. If a station has a frame queued
for transmission when it receives the token, it can send that frame before it passes
the token to the next station. If it has no queued frame, it simply passes the token.
After sending a frame, each station must wait for all N stations (including itself) to
send the token to their neighbours and the other N – 1 stations to send a frame, if
they have one.
There exists problems like duplication of token or token is lost or insertion of new
station, removal of a station, which need be tackled for correct and reliable
operation of this scheme.
1. Delay is a measure of the time between when a packet is ready and when it is
delivered. The average time (delay) required to send a token to the next station is a/N.
2. Throughput is a measure of successful traffic:
S = 1 / (1 + a/N) for a < 1
S = 1 / (a(1 + 1/N)) for a > 1
where N is the number of stations and a is the ratio of propagation delay to
transmission delay.
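These performance measures can be computed directly; here a is the ratio of propagation delay to transmission delay and N is the number of stations, using the standard token-passing throughput formulas:

```python
# Token-passing delay and throughput.
# a = propagation delay / transmission delay, n = number of stations.

def token_delay(a, n):
    return a / n  # average time to pass the token to the next station

def token_throughput(a, n):
    if a < 1:
        return 1 / (1 + a / n)
    return 1 / (a * (1 + 1 / n))

print(token_delay(0.5, 10))                 # 0.05
print(round(token_throughput(0.5, 10), 3))  # 0.952
```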
Token passing may be used with router cabling and includes built-in debugging
features such as protective relays and auto-reconfiguration.
It provides good throughput under conditions of high load.
Let's first understand the need for channelization protocols using the example given below:
Consider a transmission line with four users, namely D1, D2, D3, and D4. When data
destined for D2 is transmitted, it can also be accessed by D1, D3, and D4 because
they are all on the same transmission line. D1, D3, and D4 can knowingly or unknowingly
access the data meant for D2, and may even mistake it for data intended for them.
In this technique, the bandwidth is divided into frequency bands, and each frequency band
is allocated to a particular station to transmit its data. The frequency band distributed to the
stations becomes reserved. Each station uses a band-pass filter to confine their data
transmission into their assigned frequency band. Each frequency band has some gap in-
between to prevent interference of multiple bands, and these are called guard bands.
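The arithmetic of splitting a band among stations with guard bands can be sketched; the function name and numbers are illustrative:

```python
# FDMA band division as described above: (stations - 1) guard bands
# sit between the stations' sub-bands; the rest is split equally.

def fdma_band_per_station(total_bw, stations, guard_bw):
    usable = total_bw - (stations - 1) * guard_bw
    return usable / stations

# 1 MHz shared by 4 stations with 10 kHz guard bands
print(fdma_band_per_station(1_000_000, 4, 10_000))  # 242500.0
```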
Advantages of FDMA
Disadvantages of FDMA
TDMA is another technique to enable multiple accesses in a shared medium. In this, the
stations share the channel's bandwidth time-wise. Every station is allocated a fixed time to
transmit its signal. The data link layer tells its physical layer to use the allotted time. TDMA
requires synchronization between stations. There is a time gap between the time intervals,
called guard time, which is assigned for the synchronization between stations. The rate of
data in TDMA is greater than FDMA but lesser than CDMA.
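The fixed-slot scheme above can be sketched as a round-robin schedule with guard time; the function and timing values are illustrative:

```python
# Round-robin TDMA slot assignment with guard time between slots.

def tdma_schedule(stations, slot_ms, guard_ms, rounds=1):
    """Return (station, start_ms) pairs for a fixed-slot schedule."""
    schedule, t = [], 0.0
    for _ in range(rounds):
        for st in stations:
            schedule.append((st, t))
            t += slot_ms + guard_ms  # guard time separates the slots
    return schedule

print(tdma_schedule(["S1", "S2", "S3"], slot_ms=5, guard_ms=1))
# [('S1', 0.0), ('S2', 6.0), ('S3', 12.0)]
```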
Advantages of TDMA
TDMA separates users according to time, and this ensures that there is no
interference from the simultaneous transmissions
No frequency guard band is required in TDMA
It shares a single carrier frequency with multiple users.
It saves power as the user is only active while transmitting in its allotted time frame.
There is no need for precise, narrow band filters as there is no division in the
frequency range.
Disadvantages of TDMA
If the stations are spread over a wide area, there is a propagation delay, and we use
guard time to counter this.
Slot allocation in TDMA is complex.
Synchronization between different channels is difficult to achieve. Each station has
to know the beginning of its slot and its location.
The stations configured according to TDMA demand high peak power during uplink
in their allotted time slot.
In the CDMA technique, communication happens using codes. Using this technique,
different stations can transmit their signal on the same channel using other codes. There is
only one channel in CDMA that carries all the signals. CDMA is based on the coding
technique, where each station is assigned a code (a sequence of numbers called chips). It
differs from TDMA as all the stations can transmit simultaneously in the channel as there is
no time sharing. And it differs from FDMA as only one channel occupies the whole
bandwidth.
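The chip-code idea can be sketched with two orthogonal (Walsh) codes; the codes and station count are assumptions chosen for illustration:

```python
# CDMA encode/decode: two stations transmit at once on one channel,
# each using its own orthogonal chip sequence.

c1 = [1, 1, 1, 1]      # station 1's chip code
c2 = [1, -1, 1, -1]    # station 2's chip code (orthogonal to c1)

def encode(bit, code):           # bit is +1 or -1
    return [bit * c for c in code]

def decode(channel, code):       # inner product / code length recovers the bit
    return sum(x * c for x, c in zip(channel, code)) // len(code)

# Both stations transmit simultaneously; their signals add on the channel.
channel = [a + b for a, b in zip(encode(1, c1), encode(-1, c2))]
print(decode(channel, c1), decode(channel, c2))  # 1 -1
```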
Advantages of CDMA
Disadvantages of CDMA
Ethernet
What is Ethernet?
Ethernet is a communication protocol created at Xerox PARC in 1973 by Robert
Metcalfe and others, which connects computers on a network over a wired connection.
It is a widely used LAN protocol, originally known as the Alto Aloha Network. It
connects computers within local area networks and wide area networks.
Numerous devices such as printers and laptops can be connected over LANs and WANs
within buildings, homes, and even small neighborhoods.
It offers a simple user interface that helps to connect various devices easily, such as
switches, routers, and computers. A local area network (LAN) can be created with the help
of a single router and a few Ethernet cables, which enable communication between all
linked devices.
Wireless networks have replaced Ethernet in many areas; however, Ethernet is still more
common for wired networking. Wi-Fi reduces the need for cabling, as it allows users to
connect smartphones or laptops to a network without a cable. Compared with Gigabit
Ethernet, the 802.11ac Wi-Fi standard provides faster maximum data transfer rates.
Still, compared with a wireless network, wired connections are more secure and less
prone to interference. This is the main reason many businesses and organizations still
use Ethernet.
History of Ethernet
At the beginning of the 1970s, Ethernet was developed over several years from
ALOHAnet at the University of Hawaii. Testing culminated in a scientific paper,
published in 1976 by Metcalfe together with David Boggs. Late in 1977, a patent on
this technology was filed by Xerox Corporation.
Ethernet was established as a standard by Xerox, Intel, and Digital Equipment
Corporation (DEC); the three companies joined forces to improve Ethernet in 1979 and
published the first standard in 1980. Other technologies, including the CSMA/CD
protocol, were also developed with the help of this process, which later became known
as IEEE 802.3. The process also led to the creation of Token Bus (802.4) and Token
Ring (802.5).
In 1983, Ethernet became the IEEE 802.3 standard (well before 802.11 existed).
Many modern PCs began to include Ethernet on the motherboard, since the invention of
single-chip Ethernet controllers made the Ethernet card very inexpensive.
Consequently, some small companies began using Ethernet networks in the workplace,
though still over telephone-based four-wire lines.
Ethernet connections over twisted-pair and fiber-optic cables were not established
until the early 1990s. That led to the development of the 100 Mb/s standard in 1995.
Advantages of Ethernet
Disadvantages of Ethernet
A wired Ethernet network restricts you in terms of distance; it is best used over
short distances.
A wired Ethernet network needs cables, hubs, switches, and routers, which
increase the cost of installation.
It is not well suited to interactive applications, where data is very small and
needs to be transferred quickly.
In an Ethernet network, no acknowledgement is sent by the receiver after accepting
a packet.
Setting up a wireless Ethernet network can be difficult if you have no experience
in the networking field.
Compared with a wired Ethernet network, a wireless network is less secure.
The 100Base-T4 version does not support the full-duplex data communication mode.
Additionally, troubleshooting is very difficult in an Ethernet network, as it is
not easy to determine which node or cable is causing the problem.
Ethernet protocol
The Ethernet protocol is available around us in various forms, but present-day
Ethernet is defined by the IEEE 802.3 standard. This section gives an overview of
Ethernet protocol basics, its types, and how it works.
Ethernet is the most popular and oldest LAN technology, so it is frequently used in
LAN environments and found in almost all networks, such as offices, homes, public
places, enterprises, and universities. Ethernet has gained huge popularity because of
its high data rates over long distances using optical media.
The Ethernet protocol uses a star or linear bus topology, which is the foundation of
the IEEE 802.3 standard. Ethernet is used so widely because it is simple to
understand, maintain, and implement, provides flexibility, and permits low-cost
network implementation.
In the OSI network model, the Ethernet protocol operates at the first two layers, the
Physical and Data Link layers; however, Ethernet divides the Data Link layer into two
sublayers, the Logical Link Control (LLC) layer and the Medium Access Control (MAC)
layer.
The physical layer in the network mainly focuses on hardware elements such as
repeaters, cables, and network interface cards (NICs). For instance, an Ethernet
standard such as 100BaseTX or 10BaseT specifies the type of cable that can be used,
the length of the cables, and the optimal topology.
The data link layer in the network system mainly addresses the way that data
packets are transmitted from one type of node to another. Ethernet uses an access
method called CSMA/CD (Carrier Sense Multiple Access/Collision Detection). This is a system
where every computer listens to the cable before transmitting anything throughout the
network.
Ethernet protocols come in different flavors and operate at various speeds using
different types of media. However, all Ethernet versions are compatible with each
other. These versions can be mixed and matched on the same network with the help of
network devices such as hubs, switches, and bridges, which connect network segments
that utilize different types of media.
The original Ethernet was created in 1976 at Xerox's Palo Alto Research Center (PARC). Since
then, it has gone through four generations:
MAC Sublayer
In Standard Ethernet, the MAC sublayer governs the operation of the access method. It also
frames data received from the upper layer and passes them to the physical layer.
1. Frame Format
The Ethernet frame contains seven fields: preamble, SFD, DA, SA, length or type of protocol
data unit (PDU), upper-layer data, and the CRC. Ethernet does not provide any mechanism
for acknowledging received frames, making it what is known as an unreliable medium.
Acknowledgments must be implemented at the higher layers.
Preamble: The first field of the 802.3 frame contains 7 bytes (56 bits) of alternating
0s and 1s that alert the receiving system to the coming frame and enable it to
synchronize its input timing. The pattern provides only an alert and a timing pulse.
Start frame delimiter (SFD). The second field (1 byte: 10101011) signals the beginning
of the frame. The SFD warns the station or stations that this is the last chance for
synchronization. The last 2 bits are 11 and alert the receiver that the next field is
the destination address.
Destination address (DA). The DA field is 6 bytes and contains the physical address of
the destination station or stations to receive the packet.
Source address (SA). The SA field is also 6 bytes and contains the physical address of
the sender of the packet.
Length or type. This field is defined as a type field or length field. The original Ethernet used
this field as the type field to define the upper-layer protocol using the MAC frame.
Data. This field carries data encapsulated from the upper-layer protocols. It is a minimum of
46 and a maximum of 1500 bytes.
CRC. The last field contains error detection information, in this case a CRC-32.
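The frame layout above can be assembled byte-for-byte with `struct`; the MAC addresses and payload are made up, and `zlib.crc32` stands in for the Ethernet CRC-32 computation:

```python
# Packing the 802.3 frame fields listed above.

import struct, zlib

preamble = b"\xaa" * 7          # 7 bytes of alternating 1s and 0s (10101010)
sfd      = b"\xab"              # start frame delimiter: 10101011
dst      = bytes.fromhex("ffffffffffff")   # destination MAC (broadcast)
src      = bytes.fromhex("020000000001")   # source MAC (locally administered)
payload  = b"hello".ljust(46, b"\x00")     # pad to the 46-byte minimum
length   = struct.pack("!H", len(payload)) # length/type field, big-endian

body  = dst + src + length + payload
crc   = struct.pack("!I", zlib.crc32(body))
frame = preamble + sfd + body + crc

print(len(frame))  # 72 = 7 + 1 + 6 + 6 + 2 + 46 + 4
```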
Fast Ethernet: This type of Ethernet is usually supported by twisted-pair or CAT5
cable and can transfer or receive data at around 100 Mbps. It functions at 100Base,
or at 10/100Base on the fiber side of the link, when a device such as a camera or
laptop is connected to the network. Fast Ethernet uses both fiber-optic and
twisted-pair cable for communication. 100BASE-TX, 100BASE-FX, and 100BASE-T4 are
the three categories of Fast Ethernet.
Gigabit Ethernet: This type of Ethernet network is an upgrade from Fast Ethernet.
It uses fiber-optic or twisted-pair cable for communication and can transfer data at
a rate of 1000 Mbps (1 Gbps). In modern times, Gigabit Ethernet is more common.
This network type also uses CAT5e or other advanced cables; such cabling can carry
data at rates up to 10 Gbps.
The primary intention behind developing Gigabit Ethernet was to fulfill users'
requirements, such as faster data transfer, faster communication networks, and more.
The network layer is mainly focused on getting packets from the source to the
destination, routing, error handling, and congestion control.
Before learning about design issues in the network layer, let's look at its
various functions.
Addressing
Maintains the addresses of both source and destination in the frame header
and performs addressing to identify various devices in the network.
Packeting
This is performed by the Internet Protocol. The network layer forms packets
from the data handed down by its upper layer.
Routing
This is the most important function. The network layer chooses the most
relevant and best path for data transmission from source to
destination.
Inter-networking
It works to deliver a logical connection across multiple devices.
The network layer, or layer 3 of the OSI (Open Systems Interconnection) model,
is concerned with the delivery of data packets from the source to the destination
across multiple hops or links.
It is the lowest layer that deals with end-to-end transmission. Its design
issues encompass the services provided to the upper layers as well as the internal
design of the layer.
The network layer operates in an environment that uses store-and-forward
packet switching. A node that has a packet to send delivers it to the
nearest router. The packet is stored in the router until it has fully arrived and
its checksum has been verified for error detection. Once this is done, the packet is
forwarded to the next router. Since each router must store the entire
packet before it can forward it to the next hop, the mechanism is called
store-and-forward packet switching.
The network layer provides services to its immediate upper layer, namely the
transport layer, through the network-transport layer interface. The two types of
services provided are:
Connection-Oriented Service: In this service, a path is set up between
the source and the destination, and all the data packets belonging to a
message are routed along this path.
Connectionless Service: In this service, each packet of the message is
treated as an independent entity and is individually routed from the
source to the destination.
The objectives of the network layer while providing these services are:
The main function of the network layer (NL) is routing packets from the source
machine to the destination machine. Two processes can be distinguished:
a) One process handles each arriving packet, looking up the outgoing line to use
for it in the routing tables. This process is forwarding.
b) The other process is responsible for filling in and updating the routing
tables. That is where the routing algorithm comes into play. This process is
routing.
Regardless of whether routes are chosen independently for each packet or only
when new connections are established, certain properties are desirable in a
routing algorithm: correctness, simplicity, robustness, stability, fairness, and
optimality.
1. Optimality principle
2. Shortest path algorithm
3. Flooding
4. Distance vector routing
5. Link state routing
6. Hierarchical Routing
1. Optimality principle
One can make a general statement about optimal routes without regard to
network topology or traffic. This statement is known as the optimality principle.
It states that if router J is on the optimal path from router I to router K, then
the optimal path from J to K also falls along the same route.
As a direct consequence of the optimality principle, we can see that the set of
optimal routes from all sources to a given destination form a tree rooted at the
destination. Such a tree is called a sink tree. The goal of all routing algorithms
is to discover and use the sink trees for all routers
2. Shortest Path Routing (Dijkstra’s) The idea is to build a graph of the
subnet, with each node of the graph representing a router and each arc of
the graph representing a communication line or link. To choose a route
between a given pair of routers, the algorithm just finds the shortest path
between them on the graph
1. Start with the local node (router) as the root of the tree. Assign a cost of 0
to this node and make it the first permanent node.
2. Examine each neighbor of the node that was the last permanent node.
3. Assign a cumulative cost to each node and make it tentative
4. Among the list of tentative nodes
a. Find the node with the smallest cost and make it Permanent
b. If a node can be reached from more than one route then select the
route with the shortest cumulative cost.
5. Repeat steps 2 to 4 until every node becomes permanent.
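The five steps above can be sketched with a heap holding the tentative nodes; the example graph is made up:

```python
# Dijkstra's shortest path algorithm, following the steps above.

import heapq

def dijkstra(graph, root):
    """graph: {node: {neighbor: cost}}; returns shortest cost to each node."""
    cost = {root: 0}                 # step 1: root is permanent at cost 0
    heap = [(0, root)]               # tentative nodes ordered by cost
    while heap:
        c, node = heapq.heappop(heap)        # step 4a: smallest-cost tentative
        if c > cost.get(node, float("inf")):
            continue                         # stale entry: a cheaper route won
        for nbr, w in graph[node].items():   # steps 2-3: examine neighbors
            nc = c + w                       # cumulative cost
            if nc < cost.get(nbr, float("inf")):  # step 4b: keep shortest route
                cost[nbr] = nc
                heapq.heappush(heap, (nc, nbr))
    return cost                      # step 5: all reachable nodes permanent

g = {"A": {"B": 2, "C": 5}, "B": {"A": 2, "C": 1}, "C": {"A": 5, "B": 1}}
print(dijkstra(g, "A"))  # {'A': 0, 'B': 2, 'C': 3}
```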
Flooding
• Another static algorithm is flooding, in which every incoming packet is sent
out on every outgoing line except the one it arrived on.
• One such measure is to have a hop counter contained in the header of each
packet, which is decremented at each hop, with the packet being discarded
when the counter reaches zero. Ideally, the hop counter should be initialized to
the length of the path from source to destination.
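A toy version of flooding with a hop counter, assuming a simple adjacency-list graph, can illustrate the behavior above:

```python
# Flooding with a hop counter: forward a copy on every line except the
# arrival line, discarding the packet when the counter reaches zero.

def flood(graph, node, came_from, hops, delivered):
    if hops == 0:
        return                      # counter exhausted: discard the packet
    for nbr in graph[node]:
        if nbr != came_from:        # every outgoing line but the arrival one
            delivered.append(nbr)
            flood(graph, nbr, node, hops - 1, delivered)

g = {"A": ["B", "C"], "B": ["A", "C"], "C": ["A", "B"]}
seen = []
flood(g, "A", None, 2, seen)
print(seen)  # ['B', 'C', 'C', 'B'] -- duplicates show flooding's overhead
```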
Distance Vector Routing
Distance vector routing is a dynamic routing algorithm used to determine the best
path for data packets to travel through a network.
This algorithm is also known as the Bellman-Ford algorithm.
The distance vector routing algorithm works by each router in a network
maintaining a table of the distances to all other routers in the network.
This table is called the distance vector
The distance vector contains the distance to all the other routers in the
network as well as the next hop router that the data packet should be
sent to in order to reach its destination.
The distance vector routing algorithm uses the Bellman-Ford equation to
calculate the distance vector table. The Bellman-Ford equation takes into
account the cost of the link between the current router and its
neighbors, as well as the distance vector of the neighboring routers. The
algorithm then selects the shortest path to the destination router based
on the information in the distance vector table.
In distance vector routing, the least-cost route between any two nodes is
the route with minimum distance. In this protocol, as the name implies,
each node maintains a vector (table) of minimum distances to every
node.
Initialization
Sharing
Updating
1. Initialization : Each node can know only the distance between itself
and its immediate neighbors, those directly connected to it. So for the
moment, we assume that each node can send a message to the
immediate neighbors and find the distance between itself and these
neighbors.
2. Sharing : The whole idea of distance vector routing is the sharing of
information between neighbors
Note: In distance vector routing, each node shares its routing table
with its immediate neighbors periodically and when there is a change
3. Updating: When a node receives a two-column table from a neighbor,
it needs to update its routing table.
Updating takes three steps:
1. The receiving node needs to add the cost between itself and the
sending node to each value in the second column. (x+y)
2. If the receiving node uses information from any row, the sending
node becomes the next node in the route.
3. The receiving node needs to compare each row of its old table with
the corresponding row of the modified version of the received table.
a. If the next-node entry is different, the receiving node chooses
the row with the smaller cost. If there is a tie, the old one is kept.
b. If the next-node entry is the same, the receiving node chooses
the new row.
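The three update steps can be sketched in Python; the table layout (destination mapped to a (cost, next hop) pair) and the function name are assumptions made for illustration:

```python
# Distance-vector table update, following the steps above.

def dv_update(my_table, neighbor, nbr_table, link_cost):
    for dest, (cost, _) in nbr_table.items():
        new_cost = cost + link_cost          # step 1: add the link cost (x+y)
        if dest not in my_table:
            my_table[dest] = (new_cost, neighbor)
        else:
            old_cost, old_next = my_table[dest]
            if old_next == neighbor:         # step 3b: same next hop: take new row
                my_table[dest] = (new_cost, neighbor)
            elif new_cost < old_cost:        # step 3a: otherwise keep the cheaper
                my_table[dest] = (new_cost, neighbor)
    return my_table

mine   = {"A": (0, "-"), "C": (10, "C")}     # this node is A
from_b = {"A": (2, "A"), "C": (3, "C")}      # table received from neighbor B
print(dv_update(mine, "B", from_b, link_cost=4))
# {'A': (0, '-'), 'C': (7, 'B')} -- the route to C improves via B
```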
Example (distance vector routing algorithm):
In the network shown below, there are three routers, A, C, and D, with
the following link weights: AC = 8, CD = 4, and DA = 3.
[Figure: a triangle of routers A, C, and D with link costs AC = 8, CD = 4, DA = 3]
Step 3 − The final revised routing table, with the reduced-cost distance vectors
for all routers A, C, and D, is shown below.
Advantages of Distance Vector Routing Algorithm
The distance vector routing algorithm, also known as the shortest path routing
algorithm in computer networks, has several advantages:
Simplicity
Low overhead
Flexibility
Stability
Compatibility
Disadvantages of Distance Vector Routing Algorithm
The distance vector routing algorithm also has several disadvantages:
Slow convergence
Count-to-infinity problem
Limited scalability
Limited accuracy
Limited security
Limited feasibility in large networks
Link State Routing (LSR)
In LSR, each node in the network maintains a map or database, called a link
state database (LSDB), that contains information about the state of all the links
in the network. This information includes the cost of each link, the status of
each link (up or down), and the neighboring nodes that are connected to each
link.
When a node in the network wants to send data to another node, it consults its
LSDB to determine the best path to take. The node selects the path with the
lowest cost, also known as the shortest path, to reach the destination node. To
determine the shortest path, LSR uses Dijkstra’s shortest path algorithm.
Link state routing is based on the assumption that, although global
knowledge about the topology is not available at any one node, each node has partial
knowledge: it knows the state (type, condition, and cost) of its links.
In other words, the whole topology can be compiled from the partial knowledge
of each node.
1. initialization phase: The first phase is the initialization phase, where each
router in the network learns about its own directly connected links. This
information is then stored in the router’s link state database.
2. flooding phase: The second phase is the flooding phase, where each router
floods its link state information to all other routers in the network. This allows
each router to learn about the entire network topology.
3. path calculation phase: The third phase is the shortest path calculation
phase, where each router uses the link state information to calculate the
shortest path to every other router in the network. This is typically done using
Dijkstra’s algorithm.
4. route installation phase: The fourth and final phase is the route installation
phase, where each router installs the calculated shortest paths in its routing
table. This allows the router to forward packets along the optimal path to their
destination.
One of the main benefits of LSR is that it only requires routers to have
knowledge of their directly connected links
link state routing (LSR) is a routing protocol that uses a link state database to
store information about the network topology.
1. Each router in the network only needs to know about its directly connected
links. This allows for a more scalable solution, as routers do not need to
maintain a full view of the entire network topology.
2. LSR uses a link state database to store information about the network
topology. This database is updated by flooding link state information between
routers.
4. LSR can quickly adapt to changes in the network topology. When a link goes
down or a new link is added, the link state information is updated and the
shortest path is recalculated.
5. LSR is less prone to routing loops, as each router only installs the shortest
path in its routing table.
6. LSR supports multiple equal-cost paths, which allows for load balancing and
redundancy.
Advantages of Link State Routing Algorithm
1. One of the main advantages of LSR is that it only needs to know the state
of the links it is directly connected to, as opposed to DVR which needs to
know the entire state of the network. This allows LSR to converge quickly
2. Another advantage of LSR is that it does not suffer from the count-to-
infinity problem which is prevalent in DVR.
Table 1A
Destination Line Hops
1A - -
1B 1B 1
1C 1C 1
2A 1B 2
2B 1B 3
2C 1B 3
2D 1B 4
3A 1C 3
3B 1C 2
4A 1C 3
4B 1C 4
4C 1C 4
5A 1C 4
5B 1C 5
5C 1B 5
5D 1C 6
5E 1C 5
1A Hierarchical Table
Destination Line Hops
1A - -
1B 1B 1
1C 1C 1
2 1B 2
3 1C 2
4 1C 3
5 1C 4
Clarification
If the same 720-router network is divided into 8 clusters, each containing 9
regions of 10 routers, then the total number of entries in each router's table
is determined as follows:
10 local entries + 8 entries for the other regions in its own cluster
+ 7 entries for the other clusters = 25 entries.
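This arithmetic can be checked directly; the function name is an illustration:

```python
# Routing-table size for the three-level hierarchy above: 720 routers in
# 8 clusters, each of 9 regions, each region holding 10 routers.

def hierarchical_entries(routers_per_region, regions_per_cluster, clusters):
    local   = routers_per_region            # routers in my own region
    regions = regions_per_cluster - 1       # other regions in my cluster
    others  = clusters - 1                  # the remaining clusters
    return local + regions + others

print(hierarchical_entries(10, 9, 8))  # 25, versus 720 without hierarchy
```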
1. Scalability
2. Better Traffic Control
3. Simple to administer
1. Complexity
2. Latency
Congestion Control algorithms
What is congestion?
Congestion is a state occurring in the network layer when the message traffic is so
heavy that it slows down the network response time.
Effects of Congestion
The presence of congestion means the offered load is greater than the resources
available over the network to handle it. In general, congestion can be reduced by
increasing the resources or decreasing the load.
There are some approaches for congestion control over a network which are
usually applied on different time scales to either prevent congestion or react to
it once it has occurred.
Step 2 − Sometimes resources such as routers and links can be added when there is
serious congestion. This is called provisioning, and it happens on a timescale of
months, driven by long-term trends.
Step 4 − Some local radio stations have helicopters flying around their cities
to report on road congestion, making it possible for mobile listeners to
route their packets (cars) around hotspots. In networks, this is called traffic-aware
routing.
Step 6 − Routers can monitor the average load, queueing delay, or packet loss.
In all these cases, a rising number indicates growing congestion. The network
is forced to discard packets that it cannot deliver. The general name for this is
load shedding. A good technique for choosing which packets to discard can
help prevent congestion collapse.
The leaky bucket algorithm finds its use in network traffic shaping and
rate limiting.
Leaky bucket and token bucket implementations are the two
predominantly used traffic-shaping algorithms.
The leaky bucket algorithm is used to control the rate at which traffic is sent to
the network and to shape bursty traffic into a steady traffic stream.
Just as a bucket with a small hole leaks water at a constant rate regardless of how
fast water is poured in, each network interface contains a leaky bucket, and the
following steps are involved in the leaky bucket algorithm:
When a host wants to send a packet, the packet is thrown into the bucket.
The bucket leaks at a constant rate, meaning the network interface
transmits packets at a constant rate.
Bursty traffic is converted to uniform traffic by the leaky bucket.
In practice, the bucket is a finite queue that outputs at a finite rate.
The leaky bucket algorithm enforces a rigid output rate (the average rate),
independent of how bursty the input traffic is.
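A minimal sketch of the steps above, modelling the bucket as a finite queue drained at a constant rate; the class name, capacity, and rate are illustrative:

```python
# Leaky bucket: packets queue up to a capacity and leave at a fixed rate.

from collections import deque

class LeakyBucket:
    def __init__(self, capacity, leak_rate):
        self.queue, self.capacity, self.leak_rate = deque(), capacity, leak_rate

    def arrive(self, packet):
        if len(self.queue) < self.capacity:
            self.queue.append(packet)     # packet thrown into the bucket
            return True
        return False                      # bucket full: packet dropped

    def leak(self):
        """One tick: transmit up to leak_rate packets at the constant rate."""
        sent = []
        for _ in range(min(self.leak_rate, len(self.queue))):
            sent.append(self.queue.popleft())
        return sent

b = LeakyBucket(capacity=3, leak_rate=1)
print([b.arrive(p) for p in "wxyz"])  # [True, True, True, False]
print(b.leak(), b.leak())             # ['w'] ['x']
```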
In some applications, when large bursts arrive, the output is allowed to
speed up. This calls for a more flexible algorithm, preferably one that
never loses information. Therefore, a token bucket algorithm finds its
uses in network traffic shaping or rate-limiting.
Token Bucket Algorithm
The token bucket is a control algorithm that dictates when traffic can be sent,
based on the presence of tokens in the bucket.
The bucket contains tokens. Each token represents a packet of a
predetermined size. Tokens are removed from the bucket in exchange for
the ability to send a packet.
If tokens are present, a flow is allowed to transmit its traffic.
If there are no tokens, a flow cannot send its packets. Hence, a flow can transmit
traffic up to its peak burst rate only if there are adequate tokens in the bucket.
The leaky bucket algorithm enforces an output pattern at the average rate, no
matter how bursty the traffic is. In order to deal with bursty traffic, we
need a more flexible algorithm that does not lose data. One such algorithm is the
token bucket algorithm.
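A minimal token bucket sketch along the same lines; the class name, rate, and burst values are illustrative:

```python
# Token bucket: tokens accumulate up to a burst limit; sending spends
# tokens, so short bursts pass through while the long-run rate is capped.

class TokenBucket:
    def __init__(self, rate, burst):
        self.rate, self.burst, self.tokens = rate, burst, burst

    def tick(self):
        self.tokens = min(self.burst, self.tokens + self.rate)  # add tokens

    def send(self, packets):
        """Send up to `packets`, limited by the tokens on hand."""
        allowed = min(packets, self.tokens)
        self.tokens -= allowed
        return allowed

tb = TokenBucket(rate=1, burst=3)
print(tb.send(5))   # 3 : the whole burst allowance goes out at once
tb.tick()
print(tb.send(5))   # 1 : afterwards, output is capped at the token rate
```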
UNIT IV: The Network Layer Design Issues – Store and Forward Packet Switching - Services
Provided to the Transport Layer - Implementation of Connectionless Service - Implementation of
Connection-Oriented Service - Comparison of Virtual Circuit and Datagram Networks, Routing
Algorithms - The Optimality Principle - Shortest Path, Flooding, Distance Vector, Link State,
Hierarchical, Congestion Control Algorithms - General Principles of Congestion Control,
Congestion Prevention Policies, Approaches to Congestion Control - Traffic-Aware Routing -
Admission Control - Traffic Throttling - Load Shedding. Traffic Control Algorithms - Leaky Bucket &
Token Bucket.
Internet Working: How networks differ - How networks can be connected - Tunneling,
Internetwork Routing, Fragmentation, Network Layer in the Internet - IP Protocols - IP Version 4
Protocol - IPv4 Header Format, IP Addresses, Classful Addressing, CIDR, NAT, Subnets - IP
Version 6 - The Main IPv6 Header, Transition from IPv4 to IPv6, Comparison of IPv4 & IPv6 -
Internet Control Protocols - ICMP, ARP, DHCP
Internet Working
Introduction
Types of Internetworking
1. Extranet
2. Internet
3. Intranet
Extranet
Internet
Intranet
Advantages
Global Connectivity
Scalability
Resource Sharing
Remote Access:
Redundancy and Failover
Disadvantages
Security Risks
Network Congestion
Privacy Concerns
Dependency on Infrastructure
Complexity
Tunneling
There are a number of different protocols that can be used to create a tunnel.
The most common are PPTP, L2TP, IPSec, and TLS.
IPSec is a security protocol that can be used to create a secure tunnel between
two networks. IPSec is commonly used to secure data communications between
two networks or to connect two networks that use different protocols.
TLS is a security protocol that can be used to create a secure tunnel between
two networks. TLS is commonly used to secure data communications between
a web server and a web browser or to connect two networks that use different
protocols.
Internetwork routing
Routing through an internetwork is similar to routing within a single subnet, but with
some added complications.
Fragmentation
The term fragmentation in computer networks refers to the breaking
up of a data packet into smaller pieces in order to fit it through a network with
a smaller maximum transmission unit (MTU) than the initial packet size.
In the above example, we have divided a protocol data unit of size 6000
bytes into 4 equal-sized (1500-byte) protocol data units. These fragments are well
suited for transferring the data because a 1500-byte MTU is the standard for campus
and modern Ethernet-based offices.
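The split can be reproduced with a short sketch. Note that real IPv4 fragmentation measures the offset in 8-byte units and accounts for the 20-byte header; this simplified sketch, matching the example's equal 1500-byte split, reports plain byte offsets instead:

```python
# Splitting a 6000-byte payload for a 1500-byte MTU, as in the example.
# Each tuple is (byte offset, fragment length, more-fragments flag).

def fragment(total_len, mtu):
    frags, offset = [], 0
    while offset < total_len:
        size = min(mtu, total_len - offset)
        more = offset + size < total_len      # MF flag: more fragments follow?
        frags.append((offset, size, more))
        offset += size
    return frags

print(fragment(6000, 1500))
# [(0, 1500, True), (1500, 1500, True), (3000, 1500, True), (4500, 1500, False)]
```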
There are some factors that give rise to the requirement for fragmentation in computer
networks:
Now, let’s look at some important fields in the IP header for fragmentation.
IPV4
IP stands for Internet Protocol and v4 stands for Version Four (IPv4).
IPv4 was the primary version brought into action for production within
the ARPANET in 1983.
Internet Protocol Version 4 (IPv4) is the fourth revision of the Internet
Protocol and a widely used protocol in data communication over different
kinds of networks.
IPv4 is a connectionless protocol used in packet-switched layer networks,
such as Ethernet.
It provides the logical connection between network devices by providing
identification for each device.
Parts of IPv4
Network part
The network part is the unique number assigned to the network. The
network part also identifies the class of the network.
Host Part
The host part uniquely identifies a machine on the network. This part
of the IPv4 address is assigned to every host.
For every host on a given network, the network part is the same, but the
host part must differ.
Subnet number
This is an optional part of IPv4 addressing. Local networks that have large
numbers of hosts can be divided into subnets, and subnet numbers are
assigned to them.
Advantages of IPv4
Limitations of IPv4
IPV4 Classification
1. Class A
2. Class B
3. Class C
4. Class D
5. Class E
Classes A, B and C have a different bit length for addressing the network
host. Class D addresses are reserved for multicasting, while class E
addresses are reserved for future use.
Class A has subnet mask 255.0.0.0 or /8
Class B has subnet mask 255.255.0.0 or /16
Class C has subnet mask 255.255.255.0 or /24.
For example, with a /16 subnet mask, the network 192.168.0.0 may use
the address range of 192.168.0.0 to 192.168.255.255.
Network hosts can take any address from this range; however, address
192.168.255.255 is reserved for broadcast within the network.
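The /16 example above can be checked with Python's standard `ipaddress` module, a quick sketch:

```python
import ipaddress

# The /16 network from the example above
net = ipaddress.ip_network("192.168.0.0/16")
print(net.network_address)    # 192.168.0.0
print(net.broadcast_address)  # 192.168.255.255 -- reserved for broadcast
print(net.netmask)            # 255.255.0.0, i.e. the /16 subnet mask
print(net.num_addresses)      # 65536 addresses in the range
```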
The maximum number of host addresses IPv4 can assign to end users is
2^32 (about 4.3 billion).
IPv6 presents a standardized solution to overcome IPv4's limitations.
Because of its 128-bit address length, it can define up to 2^128
addresses.
Due to the presence of options, the size of the datagram header is of variable
length (20 bytes to 60 bytes).
VERSION: Version of the IP protocol (4 bits), which is 4 for IPv4
HLEN: IP header length (4 bits), which is the number of 32 bit words in the
header. The minimum value for this field is 5 and the maximum is 15.
Total Length: Length of header + Data (16 bits), which has a minimum value
20 bytes and the maximum is 65,535 bytes.
Flags: 3 flags of 1 bit each : reserved bit (must be zero), do not fragment flag,
more fragments flag (same order)
Option: Optional information such as source route, record route. Used by the
Network administrator to check whether a path is working or not.
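The fields above sit at fixed bit positions in the first 20 bytes of every IPv4 header, so they can be unpacked directly. A minimal sketch using the standard `struct` module (the function name and returned dictionary keys are our own):

```python
import struct

def parse_ipv4_header(raw: bytes) -> dict:
    """Unpack the fixed 20-byte IPv4 header (network byte order)."""
    ver_ihl, tos, total_len, ident, flags_frag, ttl, proto, cksum, src, dst = \
        struct.unpack("!BBHHHBBH4s4s", raw[:20])
    return {
        "version": ver_ihl >> 4,              # VERSION: 4 for IPv4
        "hlen_words": ver_ihl & 0x0F,         # HLEN: header length in 32-bit words (5..15)
        "total_length": total_len,            # header + data, in bytes
        "dont_fragment": bool(flags_frag & 0x4000),   # DF flag
        "more_fragments": bool(flags_frag & 0x2000),  # MF flag
        "fragment_offset": (flags_frag & 0x1FFF) * 8, # offset counted in 8-byte units
        "ttl": ttl,
        "protocol": proto,                    # e.g. 6 = TCP, 17 = UDP
    }

# Example: a 40-byte datagram, no options (HLEN = 5), DF set, protocol TCP
hdr = struct.pack("!BBHHHBBH4s4s", 0x45, 0, 40, 1, 0x4000, 64, 6, 0, bytes(4), bytes(4))
print(parse_ipv4_header(hdr))
```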
IP Addresses
An IP address represents a unique address that distinguishes any device on the
internet or any network from another. IP or Internet Protocol defines the set of
commands directing the setup of data transferred through the internet or any
other local network.
An IP address is the identifier that enables your device to send or receive data
packets across the internet
Types of IP addresses
Consumer IP addresses
Every individual or firm with an active internet service system pursues two
types of IP addresses, i.e., Private IP (Internet Protocol) addresses and public IP
(Internet Protocol) addresses. The public and private correlate to the network
area. Therefore, a private IP address is practiced inside a network, whereas the
other (public IP address) is practiced outside a network.
1. Private IP addresses
All the devices that are linked with your internet network are allocated a
private IP address. It holds computers, desktops, laptops, smartphones,
tablets, or even Wi-Fi-enabled gadgets such as speakers, printers, or smart
Televisions. With the expansion of IoT (internet of things), the demand for
private IP addresses at individual homes is also seemingly growing.
2. Public IP addresses
3.Dynamic IP addresses
As the name suggests, dynamic IP addresses change automatically and
frequently. With this type of IP address, ISPs purchase a bulk stock of
IP addresses and allocate them in some order to their customers. Periodically,
they re-allocate the IP addresses and place the used ones back into the IP
address pool so they can be used later for another client. The motivation for
this method is cost savings for the ISP.
4. Static IP addresses
In comparison to dynamic IP addresses, static addresses are constant in
nature. The network assigns the IP address to the device only once, and it
remains consistent. Though most firms or individuals do not prefer to have a
static IP address, it is essential for an organization
that wants to host its own network server, since it keeps the websites and email addresses
linked with it at a constant IP address.
Generally, there are two notations in which the IP address is written, dotted decimal notation and
hexadecimal notation.
Dotted Decimal Notation
1. The value of any segment (byte) is between 0 and 255 (both included).
2. No zeroes are preceding the value in any segment (054 is wrong, 54 is correct).
Classful Addressing
The 32-bit IP address is divided into five sub-classes. These are:
Class A
Class B
Class C
Class D
Class E
Each of these classes has a valid range of IP addresses. Classes D and E are
reserved for multicast and experimental purposes respectively. The order of bits
in the first octet determines the classes of the IP address.
Network ID
Host ID
The class of IP address is used to determine the bits used for network ID and
host ID and the number of total networks and hosts possible in that particular
class. Each ISP or network administrator assigns an IP address to each device
that is connected to its network.
Class A
The higher-order bit of the first octet in class A is always set to 0. The
remaining 7 bits in the first octet are used to determine network ID. The 24
bits of host ID are used to determine the host in any network. The default
subnet mask for Class A is 255.x.x.x. Therefore, class A has a total of:
Class B
The higher-order bits of the first octet of IP addresses of class B are always set
to 10. The remaining 14 bits are used to determine the network ID. The 16 bits
of host ID are used to determine the host in any network. The default subnet
mask for class B is 255.255.x.x. Class B has a total of:
Class C
The higher-order bits of the first octet of IP addresses of class C is always set to
110. The remaining 21 bits are used to determine the network ID. The 8 bits of
host ID are used to determine the host in any network. The default subnet
mask for class C is 255.255.255.x. Class C has a total of:
Class D
Class D does not possess any subnet mask. IP addresses belonging to class D
range from 224.0.0.0 – 239.255.255.255.
Class E
The initial goal of CIDR was to slow the increase of routing tables on routers
across the internet and decrease the rapid exhaustion of IPv4 addresses. As a
result, the number of available internet addresses has greatly increased.
The original classful network design of the internet included inefficiencies that
drained the pool of unassigned IPv4 addresses faster than necessary. The
classful design included the following:
2. Suffix. The suffix declares the number of bits in the network prefix of the address.
For example, CIDR notation might look like: 192.168.129.23/17 -- with 17
being the number of network-prefix bits. IPv4 addresses support a maximum of
32 bits.
The same CIDR notation can be applied to IPv6 addresses. The only difference
is IPv6 addresses can contain up to 128 bits.
CIDR blocks
CIDR blocks are groups of addresses that share the same prefix and contain
the same number of bits. Supernetting is the combination of multiple
connecting CIDR blocks into a larger whole, all of which share a common
network prefix.
The length of a prefix determines the size of CIDR blocks. A short prefix
supports more addresses -- and, therefore, forms a bigger block -- while a
longer prefix indicates fewer addresses and a smaller block.
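Both ideas can be illustrated with the `ipaddress` module: supernetting merges adjacent blocks that share a prefix, and block size follows directly from prefix length as 2^(32 - prefix). A short sketch:

```python
import ipaddress

# Supernetting: two adjacent /24 blocks share the common prefix 192.168.0.0/23
blocks = [ipaddress.ip_network("192.168.0.0/24"),
          ipaddress.ip_network("192.168.1.0/24")]
merged = list(ipaddress.collapse_addresses(blocks))
print(merged)  # [IPv4Network('192.168.0.0/23')]

# Shorter prefix -> bigger block: a /17 holds 2^(32-17) = 32768 addresses
print(ipaddress.ip_network("192.168.128.0/17").num_addresses)  # 32768
```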
The Internet Assigned Numbers Authority (IANA) initially handles CIDR blocks.
IANA is responsible for distributing large blocks of IP addresses to Regional
Internet Registries (RIRs).
CIDR notation
IP sets aside some addresses for specific purposes. For example, several ranges
-- such as 192.168.0.0/16 -- are set aside as nonroutable and are
used to define a private network. Most home broadband routers assign
addresses from the 192.168 network for systems inside the home.
A dotted quad looks like this in decimal form: 192.168.0.0. In binary form, it
looks like this: 11000000.10101000.00000000.00000000.
Advantages of CIDR
CIDR reduced the problem of wasted IPv4 address space without causing
an explosion in the number of entries in a routing table
CIDR is now the routing system on the internet's backbone network, and
every ISP uses it. It is supported by the Border Gateway Protocol (BGP),
the prevailing exterior (interdomain) gateway protocol and the Open
Shortest Path First (OSPF) gateway protocol.
Older gateway protocols, such as Exterior Gateway Protocol and Routing
Information Protocol, do not support CIDR.
Inside refers to the addresses that must be translated. Outside refers to the
addresses that are not under the control of the organization.
Outside global address – This is the outside host as seen from the outside
network. It is the IP address of the outside destination host before
translation.
Network Address Translation (NAT) Types
Suppose there are 3000 devices that need access to the Internet; the
organization would have to buy 3000 public addresses, which would be very costly.
IPV6
25.59.209.224
3001:0da8:75a3:0000:0000:8a2e:0370:7334
IP version 6 is the new version of the Internet Protocol, which improves on
IP version 4 in terms of complexity and efficiency. Let's look at the header of
IP version 6 and understand how it differs from the IPv4 header.
Traffic Class (8-bits): The Traffic Class field indicates class or priority
of IPv6 packet which is similar to Service Field in IPv4 packet.
Flow Label (20-bits): Flow Label field is used by a source to label the
packets belonging to the same flow in order to request special handling
by intermediate IPv6 routers, such as non-default quality of service or
real-time service.
Payload Length (16-bits): It is a 16-bit (unsigned integer) field,
indicates the total size of the payload which tells routers about the
amount of information a particular packet contains in its payload. The
payload Length field includes extension headers(if any) and an upper-
layer packet.
Next Header (8-bits): Next Header indicates the type of extension
header (if present) immediately following the IPv6 header. In
some cases it indicates the protocol contained within the upper-layer
packet, such as TCP or UDP.
Hop Limit (8-bits): Hop Limit field is the same as TTL in IPv4 packets.
It indicates the maximum number of intermediate nodes IPv6 packet is
allowed to travel. Its value gets decremented by one, by each node that
forwards the packet and the packet is discarded if the value decrements
to 0. This is used to discard the packets that are stuck in an infinite
loop because of some routing error.
Source Address (128-bits): Source Address is the 128-bit IPv6 address
of the original source of the packet.
Destination Address (128-bits): The destination Address field indicates
the IPv6 address of the final destination(in most cases). All the
intermediate nodes can use this information in order to correctly route
the packet.
Extension Headers: In order to rectify the limitations of the IPv4 Option
Field, Extension Headers are introduced in IP version 6. The extension
header mechanism is a very important part of the IPv6 architecture.
Transition From IPv4 to IPv6
Many organizations currently work with IPv4 technology, and we cannot
switch directly from IPv4 to IPv6 in one day. Instead of using only IPv6, we use a
combination of both; transition means not replacing IPv4 but letting both
co-exist.
When we want to send a request from an IPv4 address to an IPv6 address,
it is not directly possible because the two versions are not compatible. To
solve this problem, we use certain technologies. These technologies
are Dual Stack Routers, Tunneling, and NAT Protocol Translation.
1. Dual-Stack Routers
In the above diagram, a server configured with both IPv4 and IPv6 addresses
can communicate with all IPv4 and IPv6 hosts via a dual-stack
router (DSR). The dual-stack router gives every host a path to
communicate with the server without changing its IP address.
2. Tunneling
In the above diagram, both IP versions, IPv4 and IPv6,
are present. The IPv4 networks can communicate with a transit or
intermediate IPv6 network with the help of a tunnel. Likewise, an
IPv6 network can communicate with IPv4
networks with the help of a tunnel.
3. NAT Protocol Translation
With the help of the NAT Protocol Translation technique, IPv4 and
IPv6 networks that do not understand each other's addresses can still
communicate with each other.
Generally, one IP version does not understand the address format of the
other. To solve this problem, we use a NAT-PT device, which
removes the header of the sender's IP version and adds the header of the
receiver's IP version.
In the above diagram, an IPv4 host communicates with an IPv6 host
via a NAT-PT device. In this situation, the IPv6 host
understands the request as if it were sent by the same IP version (IPv6) and
responds.
IPv4 vs IPv6:
Address representation: IPv4 addresses are written in decimal; IPv6 addresses are written in hexadecimal.
Fragmentation: in IPv4 it is performed by the sender and by forwarding routers; in IPv6 it is performed only by the sender.
Packet flow identification: not available in IPv4; available in IPv6, which uses the Flow Label field in the header.
VLSM: IPv4 supports VLSM (Variable Length Subnet Mask); IPv6 does not support VLSM.
Uses of ICMP
ICMP is used for error reporting: if two devices communicate over the internet and
some error occurs, the router sends an ICMP error message to the source
informing it about the error.
Traceroute: the traceroute utility is used to discover the route between two
devices connected over the internet. It traces the journey from one
router to another, and a traceroute is often performed to check for network
issues before data transfer.
Ping: ping is a simple utility based on the echo-request
message. It is used to measure the time taken by data to reach the
destination and return to the source; the replies are known as echo-
reply messages.
In the ICMP packet format, the first 32 bits of the packet contain three fields:
Type (8-bit): The initial 8-bit of the packet is for message type, it provides a
brief description of the message.
Some common message types are as follows:
Type 0 – Echo reply
Type 3 – Destination unreachable
Type 5 – Redirect Message
Type 8 – Echo Request
Type 11 – Time Exceeded
Type 12 – Parameter problem
Code (8-bit): Code is the next 8 bits of the ICMP packet format, this field
carries some additional information about the error message and type.
Checksum (16-bit): Last 16 bits are for the checksum field in the ICMP
packet header. The checksum is used to check the number of bits of the
complete message and enable the ICMP tool to ensure that complete data is
delivered.
The next 32 bits of the ICMP Header are Extended Header which has the work
of pointing out the problem in IP Message. Byte locations are identified by the
pointer which causes the problem message and receiving device looks here for
pointing to the problem.
The last part of the ICMP packet is the Data or Payload, of variable length. It
is sized to fit within the minimum datagram sizes of 576 bytes in IPv4 and 1280 bytes in IPv6.
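The Type/Code/Checksum layout described above can be shown by building an echo-request (Type 8) packet by hand. A minimal sketch; the helper names are our own, and the checksum is the standard internet checksum (one's-complement sum of 16-bit words):

```python
import struct

def inet_checksum(data: bytes) -> int:
    """Internet checksum: one's-complement sum of 16-bit words."""
    if len(data) % 2:              # pad odd-length data with a zero byte
        data += b"\x00"
    total = sum(struct.unpack(f"!{len(data)//2}H", data))
    total = (total >> 16) + (total & 0xFFFF)   # fold carries back in
    total += total >> 16
    return ~total & 0xFFFF

def build_echo_request(ident: int, seq: int, payload: bytes) -> bytes:
    """ICMP echo request: Type=8, Code=0, then checksum, identifier, sequence."""
    header = struct.pack("!BBHHH", 8, 0, 0, ident, seq)   # checksum field = 0 first
    cksum = inet_checksum(header + payload)
    return struct.pack("!BBHHH", 8, 0, cksum, ident, seq) + payload
```

A property worth noting: recomputing the checksum over the finished packet (checksum field included) yields 0, which is exactly how the receiver verifies that the complete data was delivered.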
Type 3 – Destination Unreachable, Code 2: Destination protocol unreachable
Type 5 – Redirect Message, Code 2: Redirect the datagram for the Type of Service and Network
Type 9 – Router Advertisement, Code 0: used to discover the addresses of operational routers
Type 10 – Router Solicitation, Code 0
Type 11 – Time Exceeded
Type 12 – Parameter Problem, Code 1: Missing required option; Code 2: Bad length
ICMP will take the source IP from the discarded packet and inform the source
by sending a source quench message. The source will reduce its speed of
transmission so that the router is relieved of congestion.
UNIT - 5
The Transport Layer: Transport layer protocols: Introduction-services-port numbers-User datagram
protocol-User datagram-UDP services-UDP applications-Transmission control protocol:
TCP services-TCP features-Segment-A TCP connection-windows in TCP-flow control-Error
control, Congestion control in TCP.
Application Layer - World Wide Web: HTTP, Electronic mail-Architecture-web based mail-
email security-TELNET-local versus remote logging-Domain Name System: Name Space,
DNS in Internet-Resolution-Caching-Resource Records-DNS messages-Registrars-security
of DNS Name Servers, SNMP.
In the OSI model, the transport layer sits between the network layer and the session layer. The
network layer is responsible for taking the data packets and sending them to the correct computer.
The transport layer then takes the received packets, checks them for errors and sorts them. Then,
it sends them to the session layer of the correct program running on the computer.
At the receiver’s side: The transport layer receives data from the Network layer, reassembles
the segmented data, reads its header, identifies the port number, and forwards the message to the
appropriate port in the Application layer.
5.1 Services
The transport layer is responsible for providing services to the application layer; it receives
services from the network layer.
The services provided by the transport layer are similar to those of the data link layer. The data
link layer provides the services within a single network while the transport layer provides the
services across an internetwork made up of many networks. The data link layer controls the
physical layer while the transport layer controls all the lower layers.
The services provided by the transport layer protocols can be divided into the following categories:
Process to Process Communications
Addressing : Port numbers
Encapsulation and decapsulation
Multiplexing & Demultiplexing
Flow control
Error control
Congestion control
The Transport Layer is responsible for delivering data to the appropriate application
process on the host computers.
This involves multiplexing of data from different application processes, i.e. forming data
packets, and adding source and destination port numbers in the header of each Transport
Layer data packet.
Together with the source and destination IP address, the port numbers constitute a network
socket, i.e. an identification address of the process-to-process communication.
Flow Control
Flow Control is the process of managing the rate of data transmission between two nodes to
prevent a fast sender from overwhelming a slow receiver.
It provides a mechanism for the receiver to control the transmission speed, so that the receiving
node is not overwhelmed with data from transmitting node.
Addressing
The ability to communicate with the correct application on the computer. Addressing
typically uses network ports to assign each sending and receiving application a specific
port number on the machine. By combining the IP address used in the network layer and
the port on the transport layer, each application can have a unique address.
Ports are the essential ways to address multiple entities in the same location.
Using port addressing it is possible to use more than one network-based application at the
same time.
Three types of Port numbers are used
Well-known ports - These are permanent port numbers. They range from 0 to
1023. These port numbers are used by server processes.
Registered ports - The ports ranging from 1024 to 49,151 are neither assigned nor
controlled.
Ephemeral ports (Dynamic Ports) - These are temporary port numbers. They
range from 49152 to 65535. These port numbers are used by client processes.
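A client process normally never picks its ephemeral port by hand: binding to port 0 asks the operating system to choose one. A small sketch (note the exact dynamic range the OS draws from varies; Linux, for example, defaults to 32768-60999 rather than the full 49152-65535 range):

```python
import socket

# Bind to port 0: the OS assigns an ephemeral port, just as it does
# implicitly for a client socket when it first sends data.
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.bind(("127.0.0.1", 0))
port = s.getsockname()[1]
print(port)   # an OS-chosen port from the dynamic range
s.close()
```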
To send a message from one process to another, the transport-layer protocol encapsulates and
decapsulates messages.
Encapsulation happens at the sender site. The transport layer receives the data and adds the
transport-layer header.
Decapsulation happens at the receiver site. When the message arrives at the destination
transport layer, the header is dropped and the transport layer delivers the message to the
process running at the application layer.
Multiplexing
Error Control
Congestion Control
Congestion in a network may occur if the load on the network (the number of packets sent to the
network) is greater than the capacity of the network (the number of packets a network can handle).
Congestion control refers to the mechanisms and techniques that control the congestion
and keep the load below the capacity.
Congestion Control refers to techniques and mechanisms that can either prevent
congestion, before it happens, or remove congestion, after it has happened
Congestion control mechanisms are divided into two categories,
1. Open loop - prevent the congestion before it happens.
2. Closed loop - remove the congestion after it happens.
5.2 PORT NUMBERS
What is a port number?
A port number identifies a specific process to which an Internet or other network message is
to be forwarded when it arrives at a server. Ports are defined for each protocol, and a port is
considered a communication endpoint.
Port numbers are 16-bit numbers, so there are 2^16 = 65,536 of them available.
Well-known ports
Registered ports
The ports ranging from 1024 to 49,151 are not assigned or controlled
Top 25 Most Popular Ports
From the top 100 these are the 25 most popular ports (most frequently used).
The User Datagram Protocol (UDP) is the simplest transport-layer communication protocol
in the TCP/IP protocol suite.
The transport layer provides process-to-process communication, and UDP is one of its
simplest protocols.
Source Port - This 16-bit field identifies the source port of the packet.
Destination Port - This 16-bit field identifies the application-level service on the
destination machine.
Length - The length field specifies the entire length of the UDP packet (including the header). It is
a 16-bit field, and its minimum value is 8 bytes, i.e. the size of the UDP header itself.
Checksum - This field stores the checksum value generated by the sender before sending.
In IPv4 this field is optional; when the checksum field does not contain any value,
all of its bits are set to zero.
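The four-field header above is simple enough to pack by hand. A minimal sketch using `struct`; the function name is our own, and the checksum is left at 0 ("not computed"), which IPv4 permits:

```python
import struct

def build_udp_datagram(src_port: int, dst_port: int, payload: bytes) -> bytes:
    """Pack the four 16-bit UDP header fields in network byte order."""
    length = 8 + len(payload)      # Length field covers header (8 bytes) + data
    checksum = 0                   # 0 = checksum not computed (allowed over IPv4)
    header = struct.pack("!HHHH", src_port, dst_port, length, checksum)
    return header + payload

datagram = build_udp_datagram(5000, 53, b"query")
print(len(datagram))  # 13: 8-byte header + 5-byte payload
```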
UDP does not guarantee the order of the datagram. A datagram can be received in any
order
The UDP protocol utilizes different port numbers for transmitting data to the correct
destination.
The port numbers range from 0 to 65535.
Faster transmission
No acknowledgment mechanism
Every datagram in UDP may take a different path to reach the destination. So, every UDP packet is
handled independently of other UDP packets.
Stateless
UDP protocol is a stateless protocol which means that the sender does not wait for an
acknowledgment after sending the packet.
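The stateless, no-handshake behaviour can be seen with two sockets on the loopback interface, a small sketch:

```python
import socket

# Receiver: bind to an OS-chosen port on loopback
recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv.bind(("127.0.0.1", 0))

# Sender: no connection setup, no handshake, and no waiting for an ACK --
# sendto() returns as soon as the datagram is handed to the OS.
send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send.sendto(b"hello", recv.getsockname())

data, addr = recv.recvfrom(1024)
print(data)  # b'hello'
send.close(); recv.close()
```

On loopback the datagram arrives reliably; over a real network nothing in UDP itself would tell the sender whether it did.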
Flow Control
UDP is a very simple protocol.
There is no flow control, and hence no window mechanism.
The receiver may overflow with incoming messages.
The lack of flow control means that the process using UDP should provide for this service,
if needed.
Error Control
There is no error control mechanism in UDP except for the checksum.
This means that the sender does not know if a message has been lost or duplicated.
When the receiver detects an error through the checksum, the user datagram is silently
discarded.
Congestion Control
Since UDP is a connectionless protocol, it does not provide congestion control.
UDP assumes that the packets sent are small and sporadic (sent occasionally or at irregular
intervals) and cannot create congestion in the network.
This assumption may or may not be true, when UDP is used for interactive real-time
transfer of audio and video.
Encapsulation and Decapsulation
Queuing
Multiplexing and Demultiplexing
UDP applications
2. Stream oriented
This means that the data is sent and received as a stream of bytes (unlike UDP or IP, which
divide the bits into datagrams or packets). However, the network layer, which provides
service to TCP, sends packets of information, not streams of bytes. Hence, TCP groups
a number of bytes together into a segment, adds a header to each of these segments, and
then delivers these segments to the network layer. At the network layer, each of these
segments is encapsulated in an IP packet for transmission. The TCP header has information
that is required for control purposes, which will be discussed along with the segment
structure.
3. Full-duplex service
This means that the communication can take place in both directions at the same time.
4. Connection-oriented service
Unlike UDP, TCP provides a connection-oriented service. It defines 3 different phases:
Connection establishment
Data transfer
Connection termination
5. Reliability
TCP is reliable as it uses checksum for error detection, attempts to recover lost or corrupted
packets by re-transmission, acknowledgement policy and timers. It uses features like byte
number and sequence number and acknowledgement number so as to ensure reliability.
Also, it uses congestion control mechanisms.
6. Multiplexing
TCP does multiplexing and de-multiplexing at the sender and receiver ends respectively as
a number of logical connections can be established between port numbers over a physical
connection.
1. Numbering System
Although the TCP software keeps track of the segments being transmitted or received, there
is no field for a segment number value in the segment header. Instead, there are two fields
called the sequence number and the acknowledgment number. These two fields refer to the
byte number and not the segment number.
Byte Number: TCP numbers all data bytes that are transmitted in a connection. Numbering
is independent in each direction. When TCP receives bytes of data from a process, it stores
them in the sending buffer and numbers them. The numbering does not necessarily start
from 0. Instead, TCP generates a random number between 0 and 2^32 - 1 for the number of
the first byte.
For example, if the random number happens to be 1057 and the total data to be sent are
6000 bytes, the bytes are numbered from 1057 to 7056. We will see that byte numbering
is used for flow and error control.
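The numbering in the example above can be sketched in code. Assuming a hypothetical maximum segment size of 1000 bytes (the MSS value is our own choice, not from the text), the 6000 bytes starting at byte 1057 split into six segments, the last one ending at byte 7056:

```python
def segment_sequence_numbers(first_byte: int, total_bytes: int, mss: int):
    """Return the (first, last) byte numbers carried by each segment,
    following TCP's byte-oriented numbering."""
    segments, seq = [], first_byte
    end = first_byte + total_bytes        # one past the last numbered byte
    while seq < end:
        last = min(seq + mss, end) - 1    # last byte this segment can carry
        segments.append((seq, last))
        seq = last + 1                    # next segment's sequence number
    return segments

print(segment_sequence_numbers(1057, 6000, 1000))
# six segments: (1057, 2056), (2057, 3056), ..., (6057, 7056)
```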
2. Flow Control
TCP, unlike UDP, provides flow control. The receiver of the data controls the amount of data that
are to be sent by the sender. This is done to prevent the receiver from being overwhelmed with
data. The numbering system allows TCP to use a byte-oriented flow control.
3. Error Control
To provide reliable service, TCP implements an error control mechanism. Although error control
considers a segment as the unit of data for error detection (loss or corrupted segments), error
control is byte-oriented, as we will see later.
4. Congestion Control
TCP, unlike UDP, takes into account congestion in the network. The amount of data sent by a
sender is not only controlled by the receiver (flow control) but is also determined by the level of
congestion in the network.
5.4.3 Segment
A packet in TCP is called a segment. The format of a segment is shown in the following figure
The segment consists of a 20- to 60-byte header, followed by data from the application program.
The header is 20 bytes if there are no options and up to 60 bytes if it contains options. The different
sections of the Header are as follows
Source port address. This is a 16-bit field that defines the port number of the application program
in the host that is sending the segment. This serves the same purpose as the source port address in
the UDP header.
Destination port address. This is a 16-bit field that defines the port number of the application
program in the host that is receiving the segment. This serves the same purpose as the destination
port address in the UDP header.
Sequence number. This 32-bit field defines the number assigned to the first byte of data contained
in this segment. As we said before, TCP is a stream transport protocol. To ensure connectivity,
each byte to be transmitted is numbered. The sequence number tells the destination which byte in
this sequence comprises the first byte in the segment. During connection establishment, each party
uses a random number generator to create an initial sequence number (ISN), which is usually
different in each direction.
Acknowledgment number. This 32-bit field defines the byte number that the receiver of the
segment is expecting to receive from the other party. If the receiver of the segment has successfully
received byte number x from the other party, it defines x + 1 as the acknowledgment number.
Acknowledgment and data can be piggybacked together.
Header length. This 4-bit field indicates the number of 4-byte words in the TCP header. The length
of the header can be between 20 and 60 bytes. Therefore, the value of this field can be between 5
(5 × 4 = 20) and 15 (15 × 4 = 60).
Reserved. This is a 6-bit field reserved for future use.
Control. This field defines 6 different control bits or flags as shown in below figure. One or more
of these bits can be set at a time
Window size. This field defines the size of the window, in bytes, that the other party must maintain.
Note that the length of this field is 16 bits, which means that the maximum size of the window is
65,535 bytes. This value is normally referred to as the receiving window (rwnd) and is determined
by the receiver. The sender must obey the dictation of the receiver in this case.
Checksum. This 16-bit field contains the checksum. The calculation of the checksum for TCP
follows the same procedure as the one described for UDP. However, the inclusion of the checksum
in the UDP datagram is optional, whereas the inclusion of the checksum for TCP is mandatory.
The same pseudoheader, serving the same purpose, is added to the segment. For the TCP
pseudoheader, the value for the protocol field is 6.
Urgent pointer. This 16-bit field, which is valid only if the urgent flag is set, is used when the
segment contains urgent data. It defines the number that must be added to the sequence number to
obtain the number of the last urgent byte in the data section of the segment.
Options. There can be up to 40 bytes of optional information in the TCP header
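The fixed 20-byte portion of the header described above can be unpacked directly; the options, if any, follow it. A minimal sketch (function name and dictionary keys are our own):

```python
import struct

def parse_tcp_header(raw: bytes) -> dict:
    """Unpack the fixed 20-byte TCP header (network byte order)."""
    src, dst, seq, ack, off_flags, window, cksum, urg = \
        struct.unpack("!HHIIHHHH", raw[:20])
    data_offset = (off_flags >> 12) & 0xF        # header length in 4-byte words (5..15)
    return {
        "src_port": src, "dst_port": dst,
        "seq": seq, "ack": ack,
        "header_len_bytes": data_offset * 4,
        "flags": {                               # the six control bits
            "URG": bool(off_flags & 0x20), "ACK": bool(off_flags & 0x10),
            "PSH": bool(off_flags & 0x08), "RST": bool(off_flags & 0x04),
            "SYN": bool(off_flags & 0x02), "FIN": bool(off_flags & 0x01),
        },
        "window": window,
    }

# Example: a SYN+ACK segment with no options (data offset = 5 words = 20 bytes)
hdr = struct.pack("!HHIIHHHH", 1234, 80, 1057, 2001, 0x5012, 65535, 0, 0)
print(parse_tcp_header(hdr))
```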
2. Data Transfer
After connection is established, bidirectional data transfer can take place. The client and server
can both send data and acknowledgments. For the moment, it is enough to know that data traveling
in the same direction as an acknowledgment are carried on the same segment. The
acknowledgment is piggybacked with the data.
Below figure shows an example. In this example, after connection is established (not shown in the
figure), the client sends 2000 bytes of data in two segments. The server then sends 2000 bytes in
one segment. The client sends one more segment. The first three segments carry both data and
acknowledgment, but the last segment carries only an acknowledgment because there are no more
data to be sent. Note the values of the sequence and acknowledgment numbers. The data segments
sent by the client have the PSH (push) flag set so that the server TCP knows to deliver data to the
server process as soon as they are received. We discuss the use of this flag in greater detail later.
The segment from the server, on the other hand, does not set the push flag. Most TCP
implementations have the option to set or not set this flag.
3. Connection Termination
Any of the two parties involved in exchanging data (client or server) can close the connection,
although it is usually initiated by the client. Most implementations today allow two options for
connection termination: three-way handshaking and four-way handshaking with a half-close
option.Three-Way Handshaking for connection termination as shown in below figure.
1. In a normal situation, the client TCP, after receiving a close command from the client process,
sends the first segment, a FIN segment in which the FIN flag is set. Note that a FIN segment can
include the last chunk of data sent by the client, or it can be just a control segment as shown in
below Figure. If it is only a control segment, it consumes only one sequence number.
2. The server TCP, after receiving the FIN segment, informs its process of the situation and sends
the second segment, a FIN +ACK segment, to confirm the receipt of the FIN segment from the
client and at the same time to announce the closing of the connection in the other direction. This
segment can also contain the last chunk of data from the server. If it does not carry data, it consumes
only one sequence number.
3. The client TCP sends the last segment, an ACK segment, to confirm the receipt of the FIN
segment from the TCP server. This segment contains the acknowledgment number, which is 1 plus
the sequence number received in the FIN segment from the server. This segment cannot carry data
and consumes no sequence numbers
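The rule in step 3, that the ACK carries one plus the sequence number of the FIN, follows from the fact that a data-less FIN consumes one sequence number. A minimal sketch, with hypothetical sequence numbers:

```python
# Sketch of the acknowledgment arithmetic in three-way termination:
# a FIN consumes one sequence number in addition to any data it carries,
# so the ACK confirming it is (FIN sequence number + data length + 1).

def ack_for_fin(fin_seq, data_len=0):
    # data_len is 0 for a pure control segment.
    return fin_seq + data_len + 1

# Hypothetical numbers: client FIN at seq 2500, server FIN+ACK at seq 7000.
server_ack = ack_for_fin(2500)   # acknowledges the client's FIN: 2501
client_ack = ack_for_fin(7000)   # acknowledges the server's FIN: 7001
print(server_ack, client_ack)
```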
World Wide Web: HTTP, Electronic mail-Architecture- web based mail- email security- TELNET-local
versus remote Logging-Domain Name System: Name Space, DNS in Internet,- Resolution-Caching-
Resource Records- DNS messages- Registrars-security of DNS Name Servers, SNMP.
The World Wide Web (WWW) is a collection of documents and other web
resources which are identified by URLs, interlinked by hypertext links, and can
be accessed and searched by browsers via the Internet.
World Wide Web is also called the Web and it was invented by Tim Berners-Lee
in 1989.
Website is a collection of web pages belonging to a particular organization.
The pages can be retrieved and viewed by using browser.
5.1.2 Server
A computer which is available for the network resources and provides service
to the other computer on request is known as server.
The web pages are stored at the server.
Server accepts a TCP connection from a client browser.
It gets the name of the file required.
Server gets the stored file, returns the file to the client, and releases the TCP
connection.
5.1.3 Uniform Resource Locator (URL)
The URL is a standard for specifying any kind of information on the Internet.
The URL consists of four parts: protocol, host computer, port and path.
The protocol is the client or server program which is used to retrieve the
document or file. The protocol can be ftp or http.
The host is the name of computer on which the information is located.
The URL can optionally contain the port number and it is separated from the
host name by a colon.
Path is the pathname of the file where the information is stored.
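The four URL parts named above can be extracted with Python's standard urllib.parse module. The URL itself is a made-up example:

```python
# Splitting a URL into protocol, host, port, and path with the
# standard library. The example URL is fabricated for illustration.
from urllib.parse import urlparse

u = urlparse("http://www.example.com:8080/docs/index.html")
print(u.scheme)    # protocol: http
print(u.hostname)  # host: www.example.com
print(u.port)      # port: 8080 (optional part, separated by a colon)
print(u.path)      # path: /docs/index.html
```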
HTTP messages are of two types: request and response. Both the message types
follow the same message format.
Response Message: The response message is sent by the server to the client and
consists of a status line, headers, and sometimes a body.
5.2.3.1 Request and Status Lines. The first line in a request message is called
a request line; the first line in the response message is called the status line.
There is one common field, as shown in the figure below.
5.2.3.1.1 Request type. This field is used in the request message. In version
1.1 of HTTP, several request types are defined. The request type is categorized
into methods as defined in below table.
5.3.2.2 Header The header exchanges additional information between the client
and the server. For example, the client can request that the document be sent in
a special format, or the server can send extra information about the document.
The header can consist of one or more header lines. Each header line has a header
name, a colon, a space, and a header value.
A header line belongs to one of four categories: general header, request header,
response header, and entity header.
General header The general header gives general information about the
message and can be present in both a request and a response. Below table lists
some general headers with their descriptions.
Request header The request header can be present only in a request
message. It specifies the client's configuration and the client's preferred
document format. See below Table for a list of some request headers and their
descriptions
Response header The response header can be present only in a response
message. It specifies the server's configuration and special information about the
request. See below Table for a list of some response headers with their
descriptions
Entity header The entity header gives information about the body of
the document. Although it is mostly present in response messages, some request
messages, such as POST or PUT methods, that contain a body also use this type
of header. See below Table for a list of some entity headers and their descriptions.
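The message format described above (a request line, then header lines of the form "name: colon, space, value", then a blank line, then an optional body) can be sketched directly. The host and header values here are illustrative assumptions:

```python
# A minimal sketch of the HTTP/1.1 message format: a request line,
# header lines ("Name: value"), a blank line, and an optional body.
# The host name and header values are made up for illustration.

def build_request(method, path, headers, body=""):
    lines = [f"{method} {path} HTTP/1.1"]           # request line
    lines += [f"{name}: {value}" for name, value in headers]
    return "\r\n".join(lines) + "\r\n\r\n" + body   # blank line ends headers

msg = build_request("GET", "/index.html",
                    [("Host", "www.example.com"),   # request header
                     ("Accept", "text/html")])      # preferred document format
print(msg.splitlines()[0])  # GET /index.html HTTP/1.1
```

A response message follows the same layout, except that its first line is a status line such as "HTTP/1.1 200 OK" instead of a request line.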
Need of an Email
By making use of Email, we can send any message at any time to anyone.
We can send the same message to several people at the same time.
It is a very fast and efficient way of transferring information.
The email system is very fast as compared to the Postal system.
Information can be easily forwarded to coworkers without retyping it.
1.User Agent(UA)
It is a program that is mainly used to send and receive an email. It is also known
as an email reader. User-Agent is used to compose, send and receive emails.
The actual process of transferring the email is done through the Message Transfer
Agent(MTA).
In the first and second stages of email delivery, we make use of SMTP.
Architecture of Email
Now it is time to take a look at the architecture of e-mail with the help of four scenarios:
First Scenario
When the sender and the receiver of an E-mail are on the same system, then
there is the need for only two user agents.
Second Scenario
In this scenario, the sender and receiver of an e-mail are users on two
different systems, and the message needs to be sent over the Internet. In this
case, we need to make use of User Agents (UA) and Message Transfer Agents (MTA).
Third Scenario
In this scenario, the sender is connected to the mail server via a point-to-point
WAN, such as a dial-up modem or a cable modem, while the receiver is directly
connected to the mail server, as in the second scenario.
Also in this case sender needs a User agent (UA) in order to prepare the message.
After preparing the message the sender sends the message via a pair of MTA
through LAN or WAN.
Fourth Scenario
In this scenario, the receiver is also connected to his mail server with the help of
WAN or LAN.
When the message arrives the receiver needs to retrieve the message; thus there
is a need for another set of client/server agents. The recipient makes use of
MAA(Message access agent) client in order to retrieve the message.
In this, the client sends the request to the Mail Access agent(MAA) server and
then makes a request for the transfer of messages.
This scenario is most commonly used today.
Structure of Email
1.Header
2.Body
Header
The header part of the email generally contains the sender's address as well as
the receiver's address and the subject of the message.
Body
The Body of the message contains the actual information that is meant for the
receiver.
Email Address
In order to deliver the email, the mail handling system must make use of an
addressing system with unique addresses.
Local part
Domain Name
Local Part
It is used to define the name of a special file, commonly called the user
mailbox; it is the place where all the mail received for the user is stored for
retrieval by the Message Access Agent.
Domain Name
Both local part and domain name are separated with the help of @.
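The two parts of an address can be separated programmatically at the @ sign. The address below is a made-up example:

```python
# Splitting an e-mail address at the @ sign into the local part
# (the user's mailbox name) and the domain name, as described above.
# The address is fabricated for illustration.

def split_address(addr):
    local, _, domain = addr.partition("@")
    return local, domain

local, domain = split_address("alice@example.com")
print(local)   # alice
print(domain)  # example.com
```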
Web-Based Mail E-mail is such a common application that some websites today
provide this service to anyone who accesses the site. Two common sites are
Hotmail and Yahoo. The idea is very simple. Mail transfer from Alice's browser to
her mail server is done through HTTP. The transfer of the message from the
sending mail server to the receiving mail server is still through SMTP.
Finally, the message from the receiving server (the Web server) to Bob's browser
is done through HTTP. The last phase is very interesting. Instead of POP3 or
IMAP4, HTTP is normally used. When Bob needs to retrieve his e-mails, he sends
a message to the website (Hotmail, for example).
The website sends a form to be filled in by Bob, which includes the log-in name
and the password. If the log-in name and password match, the e-mail is
transferred from the Web server to Bob's browser in HTML format .
5.5 TELNET
TELNET stands for Teletype Network. It is a protocol that enables one
computer to connect to a remote computer. It is used as a standard TCP/IP
protocol for the virtual terminal service proposed by ISO. The computer
that starts the connection is known as the local computer.
The computer being connected to, i.e. the one that accepts the connection, is
known as the remote computer.
During telnet operation, whatever is being performed on the remote computer
will be displayed by the local computer. Telnet operates on a client/server
principle. The local computer uses a telnet client program and the remote
computers use a telnet server program.
5.5.1 Logging
The logging process can be further categorized into two parts:
1. Local Login
2. Remote Login
1. Local Login: Whenever a user logs into its local system, it is known as local
login.
The Procedure of Local Login
Keystrokes are accepted by the terminal driver when the user types at the
terminal.
Terminal Driver passes these characters to OS.
Now, OS validates the combination of characters and opens the required
application.
2. Remote Login: Remote Login is a process in which a user logs in to a
remote site, i.e. a remote computer, and uses the services available on it.
With the help of remote login, the results of processing on the remote
computer are transferred back to the local computer.
5.5.2 Network Virtual Terminal(NVT)
NVT (Network Virtual Terminal) is a virtual terminal in TELNET that has a
fundamental structure that is shared by many different types of real terminals.
NVT (Network Virtual Terminal) was created to make communication viable
between different types of terminals with different operating systems.
5.5.3 TELNET Commands
Commands of Telnet are identified by a prefix character, Interpret As Command
(IAC) with code 255. IAC is followed by command and option codes.
WILL (code 251, binary 11111011) has two meanings:
1. Offering to enable.
2. Accepting a request to enable.
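An IAC-prefixed negotiation can be built as raw bytes. IAC (255) and WILL (251) are named in the text above; DO (253, the request to enable) and the use of option 1 (ECHO) are assumptions added here for illustration:

```python
# Encoding a TELNET option negotiation as raw bytes: each command is
# prefixed by IAC (255), followed by the command code and option code.
# DO (253) and option 1 (ECHO) are illustrative assumptions.

IAC, WILL, DO = 255, 251, 253

def negotiate(command, option):
    return bytes([IAC, command, option])

offer = negotiate(WILL, 1)   # "I offer to enable ECHO"
accept = negotiate(DO, 1)    # "please enable ECHO"
print(offer.hex())   # fffb01
print(accept.hex())  # fffd01
```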
Advantages of Telnet
1. It provides remote access to someone’s computer system.
2. Telnet allows the user for more access with fewer problems in data
transmission.
3. Telnet saves a lot of time.
4. The oldest system can be connected to a newer system with telnet having
different operating systems.
Disadvantages of Telnet
1. As it is somewhat complex, it is difficult for beginners to understand.
2. Data is sent in the form of plain text, so it is not secure.
3. Some capabilities are disabled because of improper interlinking of the
remote and local devices.
DNS stands for Domain Name System. DNS is a directory service that provides a
mapping between the name of a host on the network and its numerical address.
DNS servers convert URLs and domain names into IP addresses that computers
can understand and use. They translate what a user types into a browser into
something the machine can use to find a webpage. This process of translation
and lookup is called DNS resolution
3. The query goes to a recursive DNS server, which is also called a recursive
resolver, and is usually managed by the internet service provider (ISP). If the
recursive resolver has the address, it will return the address to the user, and
the webpage will load.
4. If the recursive DNS server does not have an answer, it will query a series of
other servers in the following order: DNS root name servers, top-level domain
(TLD) name servers and authoritative name servers.
5. The three server types work together and continue redirecting until they
retrieve a DNS record that contains the queried IP address. It sends this
information to the recursive DNS server, and the webpage the user is looking
for loads. DNS root name servers and TLD servers primarily redirect queries
and rarely provide the resolution themselves.
6. The recursive server stores, or caches, the A record for the domain name,
which contains the IP address. The next time it receives a request for that
domain name, it can respond directly to the user instead of querying other
servers.
7. If the query reaches the authoritative server and it cannot find the
information, it returns an error message.
The entire process of querying the various servers takes a fraction of a second and
is usually imperceptible to the user.
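Steps 3 through 7 above can be modeled as a toy recursive resolver with a cache. The dictionary standing in for the root/TLD/authoritative chain, and all names and addresses in it, are fabricated for illustration:

```python
# A toy model of DNS resolution steps 3-7: a recursive resolver checks
# its cache first; on a miss it "queries upstream" (a dict standing in
# for the root/TLD/authoritative server chain) and caches the A record.
# All names and addresses are fabricated for illustration.

AUTHORITATIVE = {"www.example.com": "93.184.216.34"}

class RecursiveResolver:
    def __init__(self):
        self.cache = {}
        self.upstream_queries = 0

    def resolve(self, name):
        if name in self.cache:        # step 3: answer directly from cache
            return self.cache[name]
        self.upstream_queries += 1    # steps 4-5: walk the server hierarchy
        addr = AUTHORITATIVE.get(name)
        if addr is None:              # step 7: authoritative server errors
            raise KeyError(name)
        self.cache[name] = addr       # step 6: cache the A record
        return addr

r = RecursiveResolver()
r.resolve("www.example.com")  # cache miss: queries upstream
r.resolve("www.example.com")  # cache hit: answered locally
print(r.upstream_queries)     # 1
```

The second lookup never leaves the resolver, which is exactly why caching (step 6) makes repeated visits to a site fast.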
A name space that maps each address to a unique name can be organized in two
ways:
Flat
Hierarchical
In a hierarchical name space, each name is made of several parts. The first part
can define the nature of the organization, the second part can define the name
of an organization, and the third part can define departments in the organization,
and so on. In this case, the authority to assign and control the name spaces can
be decentralized.
A central authority can assign the part of the name that defines the nature of the
organization and the name of the organization. The responsibility of the rest of
the name can be given to the organization itself. The organization can add suffixes
(or prefixes) to the name to define its host or resources.
To have a hierarchical name space, a domain name space was designed. In this
design the names are defined in an inverted-tree structure with the root at the
top. The tree can have only 128 levels: level 0 (root) to level 127
Label
Each node in the tree has a label, which is a string with a maximum of 63
characters. The root label is a null string (empty string). DNS requires that
children of a node (nodes that branch from the same node) have different labels,
which guarantees the uniqueness of the domain names.
Domain Name: Each node in the tree has a domain name. A full domain name
is a sequence of labels separated by dots (.). The domain names are always read
from the node up to the root. The last label is the label of the root (null). This
means that a full domain name always ends in a null label, which means the last
character is a dot because the null string is nothing.
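The rule above (labels joined by dots, read from the node up to the root, ending in the root's null label) can be sketched as a short function. The labels in the example are illustrative:

```python
# Building a full domain name from the labels along the path from a
# node up to the root: labels joined by dots, ending with the null
# root label (hence the trailing dot). Example labels are illustrative.

def full_domain_name(labels):
    for label in labels:
        # DNS limits each label to a maximum of 63 characters.
        assert len(label) <= 63, "a label is at most 63 characters"
    return ".".join(labels) + "."   # trailing dot is the null root label

print(full_domain_name(["mail", "cs", "example", "edu"]))
# mail.cs.example.edu.
```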
Looking at the tree, we see that the first level in the generic domains section
allows 14 possible labels. These labels describe the organization types as listed
in the table below.
Country Domains: The country domains section uses two-character country
abbreviations (e.g., us for United States). Second labels can be organizational,
or they can be more specific, national designations. The United States, for
example, uses state abbreviations as a subdivision of us (e.g., ca.us.). The
figure below shows the country domains section. The address anza.cup.ca.us can
be translated to De Anza College in Cupertino, California, in
the United States.
Inverse Domain: The inverse domain is used to map an address to a name. The
server asks its resolver to send a query to the DNS server to map an address to
a name to determine if the client is on the authorized list. This type of query
is called an inverse or pointer (PTR) query. To handle a pointer query, the
inverse domain is added to the domain name space with the first-level node
called arpa (for historical reasons).
The second level is also one single node named in-addr (for inverse address).
The rest of the domain defines IP addresses. The servers that handle the inverse
domain are also hierarchical. This means the netid part of the address should be
at a higher level than the subnetid part, and the subnetid part higher than the
hostid part. In this way, a server serving the whole site is at a higher level
than the servers serving each subnet. This configuration makes the domain look
inverted when compared to a generic or country domain. To follow the convention
of reading the domain labels from the bottom to the top, an IP address such as
132.34.45.121 (a class B address with netid 132.34) is read as
121.45.34.132.in-addr.arpa.
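The reversal described above, hostid first and netid last, placed under in-addr.arpa, is a simple string manipulation:

```python
# Forming the pointer-query (PTR) name for an IPv4 address: the four
# octets are reversed (hostid first, netid last) and placed under the
# in-addr.arpa branch of the name space, matching the example above.

def ptr_name(ip):
    octets = ip.split(".")
    return ".".join(reversed(octets)) + ".in-addr.arpa"

print(ptr_name("132.34.45.121"))  # 121.45.34.132.in-addr.arpa
```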