CN Material Unit-4 2023

The document discusses routing algorithms at the network layer. It describes two main styles of internetworking - concatenated virtual circuits and datagrams. Concatenated virtual circuits establish end-to-end connections by concatenating virtual circuits across subnets, ensuring packets arrive in order. Datagrams are independent packets that may take different routes, allowing higher bandwidth but with difficulty in congestion control. The document also summarizes non-adaptive routing algorithms like shortest path and flooding, as well as adaptive algorithms like distance vector routing and link state routing.


THE NETWORK LAYER

Services provided to the transport layer:

1. The services are independent of the subnet technology.
2. The transport layer is shielded from the number, type and topology of the subnets present.
3. The network addresses made available to the transport layer use a uniform numbering plan.

Virtual Circuit → a connection-oriented organization of the subnet.

Datagram → independent packets in a connectionless organization.

Internetworking Styles:
1. Concatenated Virtual Circuit Subnets (Connection-Oriented)
2. Datagram Model (Connection-Less)

Concatenated Virtual Circuit Subnets:


A connection to a host in a distant network is set up much as connections are normally established. The subnet sees that the destination is remote and builds a virtual circuit to the router nearest the destination network. It then constructs a virtual circuit from that router to an external gateway, which records the existence of the virtual circuit in its tables and proceeds to build another virtual circuit to a router in the next subnet. This process continues until the destination host has been reached.

Once data packets begin flowing along the path, each gateway relays incoming packets, converting between packet formats and virtual circuit numbers as needed. Clearly, all data packets must traverse the same sequence of gateways, and thus arrive in order.
The essential feature of this approach is that a sequence of virtual circuits is set up from the
source through one or more gateways to the destination. Each gateway maintains tables telling which
virtual circuits pass through it, where they are to be routed, and what the new virtual circuit number
is.

Datagram Model:
In this model, the only service the network layer offers to the transport layer is the ability to inject datagrams into the subnet and hope for the best. Datagrams from one host to another may travel over different routes through the internetwork. A routing decision is made separately for each packet, possibly depending on the traffic at the moment the packet is sent. This strategy can use multiple routes and thus achieve a higher bandwidth than the concatenated virtual circuit model.

A major advantage of the datagram model of internetworking is that it can be used over subnets that do not use virtual circuits inside.
Differences between a Virtual Circuit and a Datagram:

S.No  Issue          Datagram subnet                  Virtual circuit subnet

1.    Circuit setup  Not required                     Required

2.    Addressing     Each packet contains the full    Each packet contains a short
                     source and destination address   virtual circuit number

3.    State          Subnet does not hold state       Each virtual circuit requires
      information    information                      subnet table space

4.    Routing        Each packet is routed            Route chosen when the virtual
                     independently                    circuit is set up; all packets
                                                      follow this route

5.    Congestion     Difficult                        Easy, if enough buffers can be
      control                                         allocated in advance for each
                                                      virtual circuit

Routing algorithms:

A routing algorithm is that part of the network layer software responsible for deciding which output line an incoming packet should be transmitted on. Every routing algorithm should possess certain properties:
Correctness, Simplicity, Robustness, Stability, Fairness and Optimality.

Routing algorithms are divided into 2 groups:

1. Non-adaptive Routing Algorithms (static routing).
2. Adaptive Routing Algorithms (dynamic routing).

Non-adaptive routing algorithms are those that do not base their routing decisions on measurements or estimates of the current traffic and topology. The choice of the route to use to get from I to J is computed in advance, off-line, and downloaded to the routers when the network is booted.
Adaptive routing algorithms are those that base their routing decisions on measurements or estimates of the current traffic and topology.
Non-Adaptive Routing Algorithms:

(a) Shortest path routing
(b) Flooding

(a). Shortest Path Routing: The idea is to build a graph of the subnet, with each node of the graph representing a router and each arc representing a communication line. To choose a route between a given pair of routers, the algorithm simply finds the shortest path between them on the graph. The path length can be measured in different ways: the number of hops, geographical distance in km, mean queuing delay, transmission delay, or functions of distance, bandwidth, average traffic, communication cost, etc.

Eg: - To compute shortest path from ‘A’ to ‘D’: -

(i) Node 'A' is made permanent, and the nodes adjacent to 'A' are found in this stage.

(ii) The adjacent nodes of 'A' are relabeled with their distance from 'A', and the node with the minimum value becomes permanent. Here, node 'B'.

(iii) The adjacent nodes of 'B' are relabeled with their distance from 'A', and the node with the minimum value becomes permanent. Here, node 'E'.

(iv) The adjacent nodes of 'E' are relabeled with their distance from 'A', and the node with the minimum value becomes permanent. Here, node 'G'.

(v) The adjacent nodes of 'G' are relabeled with their distance from 'A', and the node with the minimum value becomes permanent. Here, node 'F'.
Finally, Destination ‘D’ is relabeled as D(10,H). The path is (D-H-F-E-B-A) as follows:
D(10,H) = H(8,F)
= F(6,E)
= E(4,B)
= B(2,A)
= A
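This relabeling procedure is Dijkstra's shortest path algorithm. A minimal sketch in Python; the edge weights below are assumed for illustration, chosen to be consistent with the labels in the worked example (so the shortest A-to-D path comes out as A-B-E-F-H-D with cost 10):

```python
import heapq

# Assumed edge weights (not given explicitly in the text), consistent
# with the relabeling steps above.
GRAPH = {
    "A": {"B": 2, "G": 6},
    "B": {"A": 2, "C": 7, "E": 2},
    "C": {"B": 7, "D": 3, "F": 3},
    "D": {"C": 3, "H": 2},
    "E": {"B": 2, "F": 2, "G": 1},
    "F": {"C": 3, "E": 2, "H": 2},
    "G": {"A": 6, "E": 1, "H": 4},
    "H": {"D": 2, "F": 2, "G": 4},
}

def dijkstra(graph, src, dst):
    # Pop the closest tentative node, make it permanent, then relabel
    # its neighbours -- exactly the (i)-(v) procedure above.
    heap = [(0, src, [src])]
    permanent = set()
    while heap:
        dist, node, path = heapq.heappop(heap)
        if node in permanent:
            continue
        permanent.add(node)
        if node == dst:
            return dist, path
        for nbr, cost in graph[node].items():
            if nbr not in permanent:
                heapq.heappush(heap, (dist + cost, nbr, path + [nbr]))
    return float("inf"), []

dist, path = dijkstra(GRAPH, "A", "D")
print(dist, "-".join(path))   # 10 A-B-E-F-H-D
```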

(b). Flooding: If every incoming packet is sent out on every outgoing line except the one it
arrived on, it is called Flooding.

Flooding obviously generates vast numbers of duplicate packets. To damp this process, several techniques can be employed:
(a) Include a hop counter in the header of each packet, decremented at each hop, with the packet being discarded when the counter reaches zero. Ideally, the hop counter should be initialized to the length of the path from source to destination.
(b) Keep track of which packets have been flooded, so that they are not sent a second time. This is achieved by having the source router put a sequence number in each packet it receives from its hosts. Each router then keeps a list per source router telling which sequence numbers originating at that source have already been seen. If an incoming packet is on the list, it is not flooded.
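Technique (b) can be sketched as follows; the router and line names are hypothetical:

```python
# Duplicate suppression during flooding: each router keeps, per source,
# the set of sequence numbers it has already seen.
class FloodingRouter:
    def __init__(self, name, lines):
        self.name = name
        self.lines = lines            # outgoing lines, by identifier
        self.seen = {}                # source -> set of sequence numbers

    def receive(self, source, seq, arrival_line):
        """Return the lines the packet should be forwarded on."""
        if seq in self.seen.setdefault(source, set()):
            return []                 # duplicate: do not flood again
        self.seen[source].add(seq)
        # Flood on every line except the one the packet arrived on.
        return [l for l in self.lines if l != arrival_line]

r = FloodingRouter("B", ["to_A", "to_C", "to_D"])
print(r.receive("A", 1, "to_A"))   # forwarded on the other two lines
print(r.receive("A", 1, "to_C"))   # duplicate, suppressed: []
```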

Selective Flooding:

In this variant of the algorithm, the routers do not send every incoming packet out on every line, but only on those lines that are going approximately in the right direction.

Uses of Flooding:
1. In military applications, where large numbers of routers may be blown to bits at any instant.
2. In distributed database applications, it is sometimes necessary to update all the databases
concurrently.
3. As a metric against which other routing algorithms can be compared.
Adaptive Routing Algorithms:

 Distance Vector Routing
 Link State Routing
 Hierarchical Routing
 Routing for Mobile Hosts
 Broadcast Routing
 Multicast Routing

(1). Distance Vector Routing: This algorithm operates by having each router maintain a table giving the best known distance to each destination and which line to use to get there. These tables are updated by exchanging information with the neighbors. This algorithm is also called the BELLMAN-FORD or FORD-FULKERSON algorithm.

In this algorithm, each router maintains a routing table indexed by, and containing one entry for, each router in the subnet. This entry contains 2 parts:

a. The preferred outgoing line to use for that destination

b. An estimate of time or distance (no of hops, or time delay or queue length) for that
destination

Eg: Consider an example in which delay is used as a metric. Compute a Routing table ‘J’ from
the given subnet.

The new routing table for 'J' can be computed from the vectors received from its neighbors, as follows. Similarly, the routing tables for all the other routers are computed.

New routing table for 'J':    To    Delay from 'J'    Line to be followed


A 8 A
B 20 A
C 28 I
D 20 H
E 17 I
F 30 I
G 18 H
H 12 H
I 10 I
J 0 -
K 6 K
L 15 K
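The update step can be sketched as below. The neighbour delays (A 8, I 10, H 12, K 6) and the neighbours' advertised vectors are assumed values, chosen so that the computation reproduces the table above:

```python
# Distance vector update at router J: for each destination, take the
# minimum over neighbours of (delay to neighbour + neighbour's claim).
ROUTERS = list("ABCDEFGHIJKL")

# J's measured delay to each of its direct neighbours (assumed).
NEIGHBOUR_DELAY = {"A": 8, "I": 10, "H": 12, "K": 6}

# Vectors received from the neighbours: claimed delay to every router,
# in the order A..L (assumed values for illustration).
VECTORS = {
    "A": [0, 12, 25, 40, 14, 23, 18, 17, 21, 9, 24, 29],
    "I": [24, 36, 18, 27, 7, 20, 31, 20, 0, 11, 22, 33],
    "H": [20, 31, 19, 8, 30, 19, 6, 0, 14, 7, 22, 9],
    "K": [21, 28, 36, 24, 22, 40, 31, 19, 22, 10, 0, 9],
}

def update_table():
    table = {}
    for i, dest in enumerate(ROUTERS):
        if dest == "J":
            table[dest] = (0, "-")
            continue
        # Best estimate and the neighbour (outgoing line) achieving it.
        table[dest] = min((NEIGHBOUR_DELAY[n] + VECTORS[n][i], n)
                          for n in NEIGHBOUR_DELAY)
    return table

for dest, (delay, line) in update_table().items():
    print(dest, delay, line)
```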

Drawbacks:

It reacts rapidly to good news, but leisurely to bad news.

Good news propagation

Suppose ‘A’ is down initially, and all other routers know this. When ‘A’ comes up, other routers learn
about it via the vector changes.

The Good news is spreading at the rate of one hop per exchange.
Bad news propagation
(Count to infinity problem):

Suppose initially, all the lines and routers are up. Suddenly ‘A’ goes down.

From the above, it is clear that the bad news travels slowly.

Split Horizon Hack:

Suppose the line C-D goes down. With the split-horizon hack, this bad news is normally propagated at a rate of one hop per exchange. However, the hack can fail: both A and B tell 'C' that they cannot get to D, so 'C' immediately concludes that 'D' is unreachable and reports this to both A and B. Unfortunately, 'A' hears that B has a path of length 2 to D, so it assumes it can get to 'D' via 'B' in 3 hops. Similarly, 'B' concludes it can get to D via 'A' in 3 hops. On the next exchange, they each set their distance to 4, and so on.
(2). Hierarchical Routing:
Different levels are used to compute the routes:

 Level 1 - Routers
 Level 2 - Regions
 Level 3 - Clusters
 Level 4 - Zones
 Level 5 - Zonal regions

E.g.: When a 2-level hierarchy is considered, one region is kept along with its local routers, and each of the other regions is condensed into a single entry.

With this 2-level hierarchy, hierarchical routing has reduced the table from 17 to 7 entries:

 Region 1 is considered along with its local routers.
 Regions 2, 3, 4 and 5 are each condensed to a single entry.

Hierarchical table for 1A:

E.g.: Consider a subnet with 720 routers. (Routing table entries can be reduced further using higher levels in hierarchical routing.)

(3). Link State Routing:

The idea behind link state routing is simple and can be stated in five parts. Each router must do the following:

1. Discover its neighbors and learn their network addresses.


2. Measure the delay or cost to each of its neighbors.
3. Construct a packet telling all it has just learned.
4. Send this packet to all other routers.
5. Compute the shortest path to every other router.

Learning about the Neighbors

When a router is booted, its first task is to learn who its neighbors are. It accomplishes this goal
by sending a special HELLO packet on each point-to-point line. The router on the other end is
expected to send back a reply telling who it is with its address. These names must be globally
unique because when a distant router later hears that three routers are all connected to F, it is
essential that it can determine whether all three mean the same F.
Measuring Line Cost

The link state routing algorithm requires each router to know, or at least have a reasonable estimate
of the delay to each of its neighbors. The most direct way to determine this delay is to send over
the line a special ECHO packet that the other side is required to send back immediately. By
measuring the round-trip time and dividing it by two, the sending router can get a reasonable
estimate of the delay. For even better results, the test can be conducted several times, and the
average used.
An interesting issue is whether to take the load into account when measuring the delay. To factor
the load in, the round-trip timer must be started when the ECHO packet is queued. To ignore the
load, the timer should be started when the ECHO packet reaches the front of the queue.
Building Link State Packets

Once the information needed for the exchange has been collected, the next step is for each router
to build a packet containing all the data. The packet starts with the identity of the sender, followed
by a sequence number (to avoid duplication) and age (to tell how long a packet can stay alive), and
a list of neighbors. For each neighbor, the delay to that neighbor is given. An example subnet is
given in Figure(a) below with delays shown as labels on the lines. The corresponding link state
packets for all six routers are shown in Fig.(b).
(a) A subnet. (b) The link state packets for this subnet.

Building the link state packets is easy. The hard part is determining when to build them. One
possibility is to build them periodically, that is, at regular intervals. Another possibility is to
build them when some significant event occurs, such as a line or neighbor going down or coming
back up again or changing its properties appreciably.

Distributing the Link State Packets

The fundamental idea is to use flooding to distribute the link state packets. To keep the flood
in check, each packet contains a sequence number that is incremented for each new packet sent.
Routers keep track of all the (source router, sequence) pairs they see. When a new link state packet
comes in, it is checked against the list of packets already seen. If it is new, it is forwarded on all lines
except the one it arrived on. If it is a duplicate, it is discarded. If a packet with a sequence number
lower than the highest one seen so far ever arrives, it is rejected as being obsolete since the router
has more recent data.
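The acceptance rule described above can be sketched as a small database keyed by source router; the router names and sequence numbers are illustrative:

```python
# Link state packet acceptance: keep only the highest sequence number
# seen per source; duplicates and obsolete packets are discarded.
class LSPDatabase:
    def __init__(self):
        self.highest = {}       # source router -> highest sequence seen

    def accept(self, source, seq):
        """True if the packet is new and should be flooded onward."""
        seen = self.highest.get(source)
        if seen is not None and seq <= seen:
            return False        # duplicate or obsolete: discard
        self.highest[source] = seq
        return True

db = LSPDatabase()
print(db.accept("F", 1))   # True: first packet from F
print(db.accept("F", 1))   # False: duplicate
print(db.accept("F", 3))   # True: newer data
print(db.accept("F", 2))   # False: obsolete
```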

[Some refinements to this algorithm make it more robust. When a link state packet comes in
to a router for flooding, it is not queued for transmission immediately. Instead it is first put in a
holding area to wait a short while. If another link state packet from the same source comes in before
the first packet is transmitted, their sequence numbers are compared. If they are equal, the duplicate
is discarded. If they are different, the older one is thrown out. To guard against errors on the router-
router lines, all link state packets are acknowledged. When a line goes idle, the holding area is
scanned in round-robin order to select a packet or acknowledgement to send.]
Computing the New Routes

Once a router has accumulated a full set of link state packets, it can construct the entire subnet
graph because every link is represented. Every link is, in fact, represented twice, once for each
direction. The two values can be averaged or used separately.

Now Dijkstra's algorithm can be run locally to construct the shortest path to all possible
destinations. The results of this algorithm can be installed in the routing tables, and normal
operation resumed.
Congestion: When more packets are sent into a subnet than its capacity can handle, the situation that arises is called congestion.

Reasons for Congestion:

 If packets arriving on 3 or 4 input lines all require the same output line, a queue builds up. Even if routers are supplied with an infinite amount of memory, packets take so long to reach the front of the queue that they time out and duplicates are generated.
 Slow processors cause congestion.
 Low-bandwidth lines also cause congestion.
 Congestion feeds upon itself and causes more congestion.

Congestion Control Algorithms:

These algorithms control congestion. These are mainly divided into two groups (Also called general
principles of congestion):

1. Open Loop Solutions.
2. Closed Loop Solutions.

Open Loop Solutions attempt to solve the problem by good design, making sure congestion does not occur in the first place. Once the system is up and running, mid-course corrections are not made.
Closed Loop Solutions are based on the concept of a feedback loop. It has 3 parts:
1. Monitor the system to detect when and where congestion occurs.
2. Pass this information to places where action can be taken.
3. Adjust system operation to correct the problem.
These closed loop algorithms are further divided into two categories:

 Implicit feedback: The source deduces the existence of congestion by making local observations.
 Explicit feedback: Packets are sent back from the point of congestion to warn the source.
Open Loop Systems:
 Congestion Prevention Policies
 Traffic Shaping
 Flow Specifications

1. Congestion prevention policies:


Congestion is prevented using appropriate policies at various levels.

Layer Policies
Transport 1. Retransmission policy
2. Out-of-order caching policy
3. Acknowledgement policy
4. Flow control policy
5. Timeout Determination
Network 1. Virtual circuits versus datagrams inside the
subnet
2. Packet queuing service policy
3. Packet discard policy
4. Routing algorithm
5. Packet lifetime Management
Data Link 1. Retransmission policy
2. Out-of-order caching policy
3. Acknowledgement policy
4. Flow control policy

Retransmission policy: Deals with how fast a sender times out and what it transmits
upon time out.
Out-of-order Caching policy: If receivers routinely discard all out-of-order packets (packets that arrive out of order), those packets will have to be retransmitted.
Acknowledgement policy: If each packet is acknowledged immediately, the acknowledgement packets generate extra traffic. This policy deals with piggybacking.
Flow Control policy: A tight flow control scheme (ex: a small window) reduces the
data rate and thus helps fight congestion.
Timeout Determination: It is harder as transit time across the network is less
predictable than transit time over a wire between two routers.
Virtual Circuits vs Data grams: This affects congestion as many algorithms work
only with virtual circuits.
Packet queuing and service policy: Relates to whether routers have one queue per input line, one queue per output line, or both.
Packet Discard policy: Tells which packet is dropped when there is no space.
Routing Algorithm: With a good routing algorithm, traffic is spread over all the lines.
Packet lifetime management: Deals with how long a packet may live before being
discarded.

2. Traffic Shaping (Traffic control algorithms):


It is the process of forcing the packets to be transmitted at a more predictable rate. This
approach is widely used in ATM Networks to manage congestion.
When a virtual circuit is set up, the user and the subnet agree on a certain traffic pattern for that circuit. Monitoring a traffic flow against the agreement made is called "Traffic Policing".

Traffic shaping can be implemented with any of the two techniques:

1. Leaky Bucket Algorithm
2. Token Bucket Algorithm

The Leaky Bucket Algorithm:

Imagine a bucket with a small hole in the bottom. No matter at what rate water enters the bucket, the outflow is at a constant rate, ρ, whenever there is any water in the bucket, and zero when the bucket is empty. Also, once the bucket is full, any additional water entering it spills over the sides and is lost.

The same idea can be applied to packets. Conceptually, each host is connected to the network by an interface containing a leaky bucket, i.e., a finite internal queue. If a packet arrives when the queue is full, it is discarded. In other words, if one or more processes within the host try to send a packet when the maximum number are already queued, the new packet is unceremoniously discarded. This arrangement can be built into the hardware interface or simulated by the host operating system. It was first proposed by TURNER and is called the "LEAKY BUCKET ALGORITHM".

The host is allowed to put one packet per clock tick onto the network, which turns an uneven
flow of packets from the user processes inside the host into an even flow of packets onto the network,
smoothing out bursts and greatly reducing the chances of congestion.
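A minimal sketch of this finite queue drained at one packet per clock tick; the capacity of 3 is a hypothetical value:

```python
from collections import deque

# Leaky bucket: a finite queue, drained at a constant rate; arrivals
# to a full queue spill over and are discarded.
class LeakyBucket:
    def __init__(self, capacity):
        self.queue = deque()
        self.capacity = capacity
        self.dropped = 0

    def arrive(self, packet):
        if len(self.queue) >= self.capacity:
            self.dropped += 1          # bucket full: packet is lost
        else:
            self.queue.append(packet)

    def tick(self):
        # One packet leaks onto the network per clock tick.
        return self.queue.popleft() if self.queue else None

bucket = LeakyBucket(capacity=3)
for p in range(5):                     # a burst of 5 packets arrives
    bucket.arrive(p)
print(bucket.dropped)                  # 2 packets spilled over
```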
The Token Bucket Algorithm:
The algorithm that allows the output to speed up when large bursts arrive, and that never loses data, is the TOKEN BUCKET ALGORITHM.

In this algorithm, the bucket holds tokens, generated by a clock at the rate of one token every ΔT sec. This algorithm allows hosts to save up permissions, up to the maximum size of the bucket, 'n', i.e., bursts of up to 'n' packets can be sent at once, allowing some burstiness in the output stream and giving faster response to sudden bursts of input.
In the diagram, we see a bucket holding 3 tokens, with 5 packets waiting to be transmitted. For a packet to be transmitted, it must capture and destroy one token. In this example, 3 out of 5 packets have gotten through by capturing the 3 tokens in the bucket, but the other 2 are stuck waiting for 2 more tokens to be generated.

The major advantage of the token bucket algorithm is that it throws away tokens instead of
packets, when the bucket fills up.
The implementation of the token bucket algorithm is just a variable that counts tokens. The counter is incremented by 1 every ΔT and decremented by 1 whenever a packet is sent. When the counter hits 0, no packets may be sent.
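The counter implementation can be sketched as follows; the bucket size of 3 is a hypothetical value matching the example above:

```python
# Token bucket: a counter incremented once per tick (capped at the
# bucket size n) and decremented once per packet sent.
class TokenBucket:
    def __init__(self, size):
        self.tokens = size      # bucket starts full
        self.size = size

    def tick(self):
        # One token generated per clock tick, up to the bucket size.
        self.tokens = min(self.size, self.tokens + 1)

    def try_send(self):
        # A packet must capture and destroy one token to be sent.
        if self.tokens > 0:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(size=3)
sent = sum(bucket.try_send() for _ in range(5))  # burst of 5 packets
print(sent)   # 3 go through at once; the other 2 wait for new tokens
```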

Closed loop algorithms:

 Congestion control in virtual circuit subnets
 Choke packets
 Load Shedding
 Jitter Control
 Congestion control for Multicasting
1. Congestion control in virtual circuit subnets:
Congestion can be controlled using any of the following techniques:
(a) Admission control
(b) Careful Routing
(c) Agreement Negotiation
Admission control: The idea is that, once congestion has been signaled, no more virtual circuits are set up until the problem has gone away. Thus, attempts to set up new transport layer connections fail.
Careful routing: New virtual circuits are allowed, but they are carefully routed around the problem areas.
Agreement Negotiation: An agreement is to be negotiated between the host and subnet when a virtual
circuit is set up. This agreement normally specifies the volume and shape of the traffic, quality of
service required and other parameters. To keep its part of agreement, the subnet will typically reserve
resources along the path when the circuit is set up, and thus avoids congestion.

2. Choke packets:
This is an approach that can be used in both virtual circuit and datagram subnets. Each router can easily monitor the utilization of its output lines and other resources using the formula

u_new = a * u_old + (1 - a) * f

where
u → a variable that reflects the recent utilization of a line; its value lies between 0.0 and 1.0
a → a constant that determines how fast the router forgets recent history
f → a sample of the instantaneous line utilization (either 0 or 1)

Working:

Whenever 'u' moves above the threshold, the output line enters a "Warning" state. Each newly arriving packet is checked to see if its output line is in the warning state. If so, the router sends a choke packet back to the source host, giving it the destination found in the packet. The original packet is tagged so that it will not generate any more choke packets further along the path, and is then forwarded in the usual way.
When the source host gets the choke packet, it is required to reduce the traffic sent to the specified destination by X percent. Since other choke packets for that destination are probably already in the pipeline, the host should ignore further choke packets referring to that destination for a fixed time interval. After that period has expired, the host listens for more choke packets for another interval. If no choke packets arrive during the listening period, the host may increase the flow again.
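The utilization estimate u_new = a*u_old + (1 - a)*f can be sketched directly; the threshold of 0.8 is a hypothetical value:

```python
# Exponentially weighted utilization estimate from the formula above.
def update_utilisation(u_old, f, a=0.9):
    # a close to 1.0 means the router forgets recent history slowly.
    return a * u_old + (1 - a) * f

THRESHOLD = 0.8          # assumed warning threshold
u = 0.0
for f in [1, 1, 1, 1, 1]:        # line busy at every sampling instant
    u = update_utilisation(u, f)
print(round(u, 3), u > THRESHOLD)   # rises slowly toward 1.0
```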

Disadvantage: The honest host gets an ever-smaller share of the bandwidth than it had before.

To get around this problem of choke packets, Nagle proposed the "FAIR QUEUEING ALGORITHM". The essence of this algorithm is that routers have multiple queues for each output line, one for each source. When a line becomes idle, the router scans the queues round robin, taking the first packet on the next queue. In this way, with 'n' hosts competing for a given output line, each host gets to send one out of every 'n' packets.
Drawback: With this algorithm, more bandwidth is given to hosts that use large packets than to hosts that use small packets.

To get around the problem of the algorithm proposed by Nagle, Demers suggested an improvement in which the round robin is done in such a way as to simulate a byte-by-byte round robin, instead of a packet-by-packet round robin. It scans the queues repeatedly, byte for byte, until it finds the tick on which each packet will be finished. The packets are then sorted in the order of their finishing times and sent in that order.

In the above example:

At clock tick 1 - the first byte of the packet in the queue on line 'A' is sent.
At clock tick 2 - the first byte of the packet in the queue on line 'B' is sent.
...
At clock tick 8 - 'C' finishes its first packet; then B, D, E and A finish after 16, 17, 18 and 20 clock ticks respectively.
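The byte-by-byte scan can be sketched by simulating one byte served per tick, round robin over the queues. The packet lengths below are assumed for illustration (the original figure is not reproduced here), so the finish ticks differ from the example above:

```python
# Byte-by-byte fair queueing: serve one byte per clock tick, round
# robin over the non-empty queues, and record each packet's finish
# tick. Packets would then be sent in finish-tick order.
def finish_order(lengths):
    remaining = dict(lengths)     # queue -> bytes left in its packet
    finished = []
    tick = 0
    while remaining:
        for q in sorted(remaining):   # one byte per queue per round
            tick += 1
            remaining[q] -= 1
            if remaining[q] == 0:
                finished.append((q, tick))
                del remaining[q]
    return finished

# Assumed packet lengths on lines A-E.
for q, t in finish_order({"A": 5, "B": 3, "C": 2, "D": 4, "E": 3}):
    print(q, "finishes at tick", t)
```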

Problem: It gives all hosts the same priority.

To overcome this problem of the FAIR QUEUEING algorithm, the WEIGHTED FAIR QUEUEING algorithm is widely used, in which selected hosts can be assigned a higher priority or bandwidth, so that they are given 2 or more bytes per tick.
Hop-by-Hop Choke Packets:

At high speeds and over long distances, sending a choke packet to the source host does not work well because the reaction is too slow.

Eg: Consider fig(i). Host 'A' is sending traffic to host 'B', located a very long distance from 'A'. Choke packets are released at the time of congestion, and as the 2 hosts are situated far from each other, it takes a long time for the choke packets to reach host 'A', so the reaction is slow.

(i). A Choke packet that effects source (ii).A Choke packet that effects each
hop it passes through
An alternative approach that reduces the delay is to have the choke packet take effect at every hop it passes through, as shown in the sequence of fig(ii). Here, as soon as the choke packet reaches F, 'F' is required to reduce the flow to 'D'. Doing so will require 'F' to devote more buffers to the flow, since the source is still sending away at full blast, but it gives 'D' immediate relief. In the next step, the choke packet reaches E, which tells E to reduce the flow to F. This action puts a greater demand on E's buffers but gives 'F' immediate relief. Finally, the choke packet reaches A and the flow genuinely slows down.

3. Load Shedding:
This is a fancy way of saying that when routers are being inundated by packets that they cannot handle, they just throw them away. Which packets to discard depends on the application running:

 If a new packet is more important than an old one, the old packet is discarded. This policy is called 'MILK' (fresh is better than stale).
 If an old packet is more important than a new one, the new packet is discarded. This policy is called 'WINE' (old is better than new).

(4). Jitter Control:

Jitter is the variation in packet arrival times. The jitter can be bounded by computing the expected transit time for each hop along the path. When a packet arrives at a router, the router checks to see how much the packet is behind or ahead of its schedule. This information is stored in the packet and updated at each hop. If the packet is ahead of schedule, it is held just long enough to get it back on schedule. If it is behind schedule, the router tries to get it out the door quickly.

In fact, the algorithm for determining which of several packets competing for an o/p line
should go next can always choose the packet furthest behind its schedule. In this way, packets that
are ahead of schedule get slowed down and packets that are behind its schedule get speeded up, in
both cases reducing the amount of Jitter.
The Network Layer in the Internet

At the network layer, the Internet can be viewed as a collection of subnetworks, or Autonomous Systems, that are connected together. Several backbones exist, constructed from high-bandwidth lines and fast routers. LANs at many universities, companies and Internet Service Providers are attached to regional networks, which in turn are attached to the backbones.

The glue that holds the Internet together is the network layer protocol, IP. The job of IP is to provide a best-efforts way to transport datagrams from source to destination, without regard to whether these machines are on the same network or whether there are other networks in between them.

Communication in Internet:
The transport layer takes data streams and breaks them up into datagrams, which are transmitted through the Internet, possibly being fragmented into smaller units as they go. When all the pieces finally get to the destination machine, they are reassembled by the network layer into the original datagram, which is then handed to the transport layer.
Tunneling

Tunneling is a solution when the source and destination hosts are on the same type of network,
but there is a different network in between. As an example, think of an international bank with
a TCP/IP-based Ethernet in Paris, a TCP/IP-based Ethernet in London, and a non-IP wide area
network (e.g., ATM) in between, as shown in Figure below.

To send an IP packet to host 2, host 1 constructs the packet containing the IP address of
host 2, inserts it into an Ethernet frame addressed to the Paris multiprotocol router, and puts it
on the Ethernet. When the multiprotocol router gets the frame, it removes the IP packet, inserts
it in the payload field of the WAN network layer packet, and addresses the latter to the WAN
address of the London multiprotocol router. When it gets there, the London router removes the
IP packet and sends it to host 2 inside an Ethernet frame.

The WAN can be seen as a big tunnel extending from one multiprotocol router to the other. The
IP packet just travels from one end of the tunnel to the other. It does not have to worry about
dealing with the WAN at all. Neither do the hosts on either Ethernet. Only the multiprotocol
router has to understand IP and WAN packets.
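The encapsulation at the tunnel endpoints can be sketched as below; the router names and packet fields are hypothetical:

```python
# Tunneling sketch: the Paris multiprotocol router wraps the IP packet
# inside a WAN-layer packet; the London router unwraps it. The IP
# packet itself is never touched in between.
def enter_tunnel(ip_packet, wan_dst):
    # Insert the IP packet into the payload field of a WAN packet.
    return {"wan_dst": wan_dst, "payload": ip_packet}

def exit_tunnel(wan_packet):
    # Remove the IP packet at the far end of the tunnel.
    return wan_packet["payload"]

ip_packet = {"src": "host1", "dst": "host2", "data": "hello"}
wan = enter_tunnel(ip_packet, wan_dst="london_router")
print(exit_tunnel(wan) == ip_packet)   # True: unchanged end to end
```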
Fragmentation

Each network imposes some maximum size on its packets. These limits have various causes,
among them:

1. Hardware (e.g., the size of an Ethernet frame).


2. Operating system (e.g., all buffers are 512 bytes).
3. Protocols (e.g., the number of bits in the packet length field).
4. Compliance with some (inter)national standard.
5. Desire to reduce error-induced retransmissions to some level.
6. Desire to prevent one packet from occupying the channel too long.

A problem appears when a large packet wants to travel through a network whose maximum packet size is too small. The solution is to allow gateways to break packets up into fragments, sending each fragment as a separate internet packet. Two opposing strategies exist for recombining the fragments back into the original packet: (a) transparent fragmentation and (b) nontransparent fragmentation.

Transparent fragmentation: When an oversized packet arrives at a gateway, the gateway breaks
it up into fragments. Each fragment is addressed to the same exit gateway, where the pieces are
recombined. In this way passage through the small-packet network has been made transparent.
Subsequent networks are not even aware that fragmentation has occurred.

Figure (a) Transparent fragmentation. (b) Nontransparent fragmentation.

In transparent fragmentation, the exit gateway must know when it has received all the pieces, so either a count field or an ''end of packet'' bit must be provided. Furthermore, all fragments must exit via the same gateway: by not allowing some fragments to follow one route to the ultimate destination and other fragments a disjoint route, some performance may be lost. A last problem is the overhead required to repeatedly reassemble and then refragment a large packet passing through a series of small-packet networks. ATM requires transparent fragmentation.

Nontransparent fragmentation:

In nontransparent fragmentation, once a packet has been fragmented, each fragment is treated as though it were an original packet. All fragments are passed through the exit gateway (or gateways), as shown in Fig. (b); recombination occurs only at the destination host.
Nontransparent fragmentation also has some problems. For example, it requires every host to be able to do reassembly. Another problem is that when a large packet is fragmented, the total overhead increases, because each fragment must have a header.

Fragmentation when the elementary data size is 1 byte. (c) Original packet, containing 10 data
bytes. (d) Fragments after passing through a network with maximum packet size of 8 payload
bytes plus header. (e) Fragments after passing through a size 5 gateway.
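The 1-byte elementary-unit example in the figure can be sketched in code. This is an illustrative Python sketch (not the book's own figure): a packet of 10 data bytes is fragmented for a network allowing 8 payload bytes, then refragmented for a size-5 network, with each fragment carrying its offset so the destination can reassemble.

```python
def fragment(data, offset, max_payload):
    """Split `data` into (offset, bytes) fragments of at most max_payload each."""
    frags = []
    while data:
        frags.append((offset, data[:max_payload]))
        offset += len(data[:max_payload])
        data = data[max_payload:]
    return frags

# Original packet: 10 data bytes (labelled A..J for readability).
packet = b"ABCDEFGHIJ"

# First network: at most 8 payload bytes per packet.
stage1 = fragment(packet, 0, 8)

# Second network: at most 5 payload bytes -> refragment each piece,
# keeping the original offsets (nontransparent style).
stage2 = [f for off, d in stage1 for f in fragment(d, off, 5)]

# Reassembly at the destination: sort by offset and concatenate.
reassembled = b"".join(d for _, d in sorted(stage2))
assert reassembled == packet
```

Note how the offsets survive refragmentation unchanged; this is what lets the destination host reassemble regardless of how many small-packet networks were crossed.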

The IP Protocol: An IP datagram consists of two parts, a header and a data (payload) part.


1. Header part : The header has a 20-byte fixed part and a variable-length optional part. It is transmitted in big-endian order, i.e., from left to right.

Version: Keeps track of which version of the protocol the datagram belongs to.
IHL: Tells how long the header is, in 32-bit words. The minimum and maximum values are 5 and 15, i.e., header lengths of 20 to 60 bytes (always a multiple of 4).
Type of service: Allows the host to tell the subnet what kind of service it wants.
The field itself contains:
 A 3-bit precedence field, giving the packet a priority from 0 to 7.
 Three flags, D, T, and R (Delay, Throughput, and Reliability), marking which parameter the host cares about most.
 2 unused bits.
Total Length: The total length of both header and data; the maximum is 65,535 bytes.
Identification: Allows the destination host to determine to which datagram a newly arrived
fragment belongs.
DF (Don't Fragment): An order to the routers not to fragment the datagram, because the destination is incapable of putting the pieces back together again.
MF( More Fragments ): All fragments except the last one have this bit set.
Fragment Offset: Tells where in the current datagram this fragment belongs.
Time to live: A counter used to limit packet lifetimes. It counts time in seconds, allowing a maximum lifetime of 255 sec. It is decremented at each hop till it reaches zero, at which point the packet is discarded.
Header Checksum: Verifies the header only.
Protocol: Tells to which transport process (e.g., TCP, UDP) the datagram is to be given.
SA (Source Address): Indicates the network number and host number from which the datagram came.
DA (Destination Address): Indicates the network number and host number of the destination to which the datagram is to be delivered.
Options: Designed to provide an escape, allowing subsequent versions of the protocol to include information not present in the original design, permitting experimenters to try out new ideas, and avoiding the allocation of header bits to information that is rarely needed.
Option Description
Security Specifies how secret the datagram is.
Strict Source Routing Gives the complete path to be followed.
Loose Source Routing Gives a list of routers not to be missed.
Record Route Makes each router append its IP address.
Time Stamp Makes each router append its address and time stamp.
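The fixed 20-byte header described above can be decoded mechanically. This is a minimal Python sketch using the standard `struct` module; the sample header is hand-built for illustration (its addresses encode 192.31.65.5 and 192.31.63.8, matching the ARP example later in these notes).

```python
import struct

def parse_ipv4_header(raw):
    """Decode the 20-byte fixed IPv4 header (big-endian, as the text says)."""
    (ver_ihl, tos, total_len, ident, flags_frag,
     ttl, proto, checksum, src, dst) = struct.unpack("!BBHHHBBHII", raw[:20])
    return {
        "version":  ver_ihl >> 4,            # should be 4
        "ihl":      ver_ihl & 0x0F,          # header length in 32-bit words
        "total_length": total_len,           # header + data, max 65,535
        "identification": ident,
        "df": (flags_frag >> 14) & 1,        # Don't Fragment bit
        "mf": (flags_frag >> 13) & 1,        # More Fragments bit
        "fragment_offset": flags_frag & 0x1FFF,
        "ttl": ttl,
        "protocol": proto,                   # e.g. 6 = TCP, 17 = UDP
        "src": src, "dst": dst,
    }

# Hand-built sample: version 4, IHL 5, DF set, TTL 64, protocol 6 (TCP),
# src 192.31.65.5 (0xC01F4105), dst 192.31.63.8 (0xC01F3F08).
sample = struct.pack("!BBHHHBBHII", 0x45, 0, 40, 1, 0x4000,
                     64, 6, 0, 0xC01F4105, 0xC01F3F08)
h = parse_ipv4_header(sample)
assert h["version"] == 4 and h["ihl"] == 5 and h["df"] == 1
```

The `!` in the format string forces big-endian interpretation, matching the network byte order the header is transmitted in.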
IP Addresses :

IP addresses = Network Number + Host Number. All are 32-bits long.

Different address formats:

Network numbers are assigned by the NIC (Network Information Center) and are usually written in dotted decimal notation, in which each of the 4 bytes is written in decimal, from 0 to 255.
The lowest IP address is 0.0.0.0.
The highest IP address is 255.255.255.255.
The value '0' means this network or this host.
The value '-1' (all 1s) means broadcast to all hosts on the indicated network.
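Dotted decimal notation is just a rendering of the underlying 32-bit number; a minimal sketch of the conversion in both directions:

```python
def dotted_to_int(addr):
    """Convert dotted decimal notation to a 32-bit integer."""
    a, b, c, d = (int(x) for x in addr.split("."))
    return (a << 24) | (b << 16) | (c << 8) | d

def int_to_dotted(n):
    """Convert a 32-bit integer back to dotted decimal notation."""
    return ".".join(str((n >> s) & 0xFF) for s in (24, 16, 8, 0))

assert dotted_to_int("0.0.0.0") == 0                    # lowest address
assert dotted_to_int("255.255.255.255") == 0xFFFFFFFF   # highest (broadcast)
assert int_to_dotted(dotted_to_int("130.50.4.1")) == "130.50.4.1"
```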

Special IP addresses :
Problems :
 As time goes on, a network of any class may acquire more than the permitted number of hosts, which requires another network of the same class with a separate IP address.
 As the number of distinct local networks grows, managing them can become a serious headache.

To overcome all these problems:


A network is allowed to split into several parts for internal use but still act like a single network to the outside world. In the Internet literature, these parts are called SUBNETS.
Eg: A company started up with a class B address and grew over time, requiring a second LAN. The 16-bit host number is then split into a 6-bit subnet number and a 10-bit host number. This split allows 62 LANs (0 and -1 are reserved), each with up to 1022 hosts.

Subnet mask

In this example, the first subnet might use IP addresses starting at 130.50.4.1, the second subnet might start at 130.50.8.1, and so on.
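The 6-bit/10-bit split of the class-B example above can be checked in a few lines of Python (the 130.50.x.x numbers follow the text's example; the subnet mask it implies is 255.255.252.0):

```python
# Class-B split from the text: 6 subnet bits + 10 host bits.
SUBNET_BITS, HOST_BITS = 6, 10

mask = (0xFFFFFFFF << HOST_BITS) & 0xFFFFFFFF   # subnet mask
subnets = 2 ** SUBNET_BITS - 2                   # 0 and -1 reserved -> 62
hosts_per_subnet = 2 ** HOST_BITS - 2            # -> 1022

def dotted(n):
    return ".".join(str((n >> s) & 0xFF) for s in (24, 16, 8, 0))

base = (130 << 24) | (50 << 16)                  # network 130.50.0.0

def first_host_on(k):
    """First usable address on subnet k: subnet bits shifted past host bits."""
    return dotted(base | (k << HOST_BITS) | 1)

assert dotted(mask) == "255.255.252.0"
assert first_host_on(1) == "130.50.4.1"          # matches the text
assert first_host_on(2) == "130.50.8.1"
assert (subnets, hosts_per_subnet) == (62, 1022)
```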

Subnet Working:
Each router has a table listing some number of (network, 0) IP addresses and some number of (this-
n/w, host) IP addresses.
(network, 0) – Tells how to get to distant networks
(this – n/w, host) – Tells how to get to local hosts

Associated with each table entry is the network interface to use to reach the destination, and certain other information. When an IP packet arrives, its destination address is looked up in the routing table. If the packet is for a distant network, it is forwarded to the next router on the interface given in the table. If it is for a local host, it is sent directly to the destination. If the network is not present, the packet is forwarded to a default router.

When subnetting is introduced, the routing tables are changed, adding entries of the form (this-network, subnet, 0) and (this-network, this-subnet, host). Thus, a router on subnet k knows how to get to all other subnets and also to all the hosts on subnet k. Each router performs a Boolean AND with the network's subnet mask to get rid of the host number and looks up the resulting address in its tables.
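The AND-then-lookup step just described can be sketched as follows. The table entries and interface names here are made-up illustrations; the mask is the 255.255.252.0 mask of the earlier class-B example.

```python
MASK = 0xFFFFFC00          # 255.255.252.0 (6 subnet bits in a class B)

# Illustrative routing table: (subnet base address AND-ed clean) -> interface.
routing_table = {
    0x82320400: "eth1",    # 130.50.4.0 -> subnet 1's interface
    0x82320800: "eth2",    # 130.50.8.0 -> subnet 2's interface
}
DEFAULT = "ppp0"           # default router for everything else

def route(dst):
    """AND the destination with the subnet mask, then look it up."""
    return routing_table.get(dst & MASK, DEFAULT)

assert route(0x82320401) == "eth1"   # 130.50.4.1 -> subnet 1
assert route(0x82320801) == "eth2"   # 130.50.8.1 -> subnet 2
assert route(0x01020304) == "ppp0"   # unknown network -> default router
```

The mask discards the host bits, so every host on a subnet hashes to the same table key: one entry per subnet instead of one per host.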
Internet Control protocols:
a. ICMP (Internet Control Message Protocol)
b. ARP (Address Resolution Protocol)
c. RARP (Reverse Address Resolution Protocol)
d. DHCP (Dynamic Host Configuration Protocol)

ICMP: When something unexpected occurs in the Internet, the event is reported by ICMP, which is also used to test the Internet. Each ICMP message type is encapsulated in an IP packet.

Message type Description


Destination Unreachable Used when the subnet or a router cannot locate the destination,
or when a packet with the DF bit set cannot be delivered
Time Exceeded Sent when a packet is dropped because its counter reached
zero (time to live = 0)
Source Quench Used to throttle hosts that are sending too many packets;
on receiving it, hosts are expected to slow down
Echo Request To see if a given destination is reachable and alive; the
Echo Reply destination machine responds with an Echo Reply
Time Stamp Request Used to record the arrival time of the message and the
Time Stamp Reply departure time of the reply
Parameter Problem Indicates that an illegal value has been detected in a header field
Redirect Used when a router notices that a packet seems to be routed wrongly
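Every ICMP message begins with a type, a code, and an Internet checksum. As a sketch (type 8, code 0 is Echo Request per RFC 792), the following builds an echo request and checks the checksum property: recomputing the checksum over a correctly checksummed message yields zero.

```python
import struct

def inet_checksum(data):
    """Standard Internet checksum: one's-complement sum of 16-bit words."""
    if len(data) % 2:
        data += b"\x00"
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    while total >> 16:                       # fold carries back in
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def echo_request(ident, seq, payload=b"ping"):
    """ICMP Echo Request: type 8, code 0, checksum over the whole message."""
    header = struct.pack("!BBHHH", 8, 0, 0, ident, seq)   # checksum = 0 first
    csum = inet_checksum(header + payload)
    return struct.pack("!BBHHH", 8, 0, csum, ident, seq) + payload

msg = echo_request(ident=1, seq=1)
assert inet_checksum(msg) == 0               # checksum verifies
```

The same checksum routine is what "Header Checksum" in the IP header uses, there computed over the header only.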
ARP: Used when an IP address is given and the corresponding Ethernet address must be found. IP addresses are mapped onto data link layer addresses in the following way:

Here, we have 2 Ethernets, one in dept-1 with IP address 192.31.65.0 and the other in dept-2 with IP
address 192.31.63.0 in a university, which are connected by a campus FDDI ring with IP address
192.31.60.0. Each machine on the Ethernet has a unique Ethernet address, labeled E1 through E6 and
each machine on the FDDI ring has an FDDI address, labeled F1 through F3.

Let us assume the data transfer from user on host1 to user on host2, in which the sender knows
the name of the intended receiver, say, “Mary @ eagle.cs.uni.edu”

1. Find the IP address for host-2, eagle.cs.uni.edu, which is performed by DNS


2. The IP address for host-2 is returned (192.31.65.5)
3. The upper layer on host-1 now builds a packet with 192.31.65.5 in Destination Address field
and gives it to the IP software to transmit.
4. The IP s/w finds that the destination is on its own n/w, but it does not know the destination's Ethernet address. To find it, host-1 puts a broadcast packet onto the Ethernet asking "Who owns IP address 192.31.65.5?"
5. The broadcast arrives at every machine on Ethernet 192.31.65.0, and each one checks its IP address; host-2 responds with its Ethernet address (E2).
( The protocol for asking this question and getting the reply is called Address Resolution
Protocol, which is defined in RFC826)
6. Now, the IP s/w on host-1 builds an Ethernet frame addressed to E2, puts the IP packet in the
payload field, and dumps it onto the Ethernet, which is detected and recognized by host2 as a
frame for itself and it causes an interrupt.
7. The Ethernet driver extracts the IP packet from the payload and passes it to the IP s/w, which
sees that it is correctly addressed, and processes it.
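The cache-then-broadcast logic of steps 4-7 can be sketched as follows. The dictionary standing in for the LAN is an illustrative stand-in, not a real packet exchange; in practice the "broadcast" is a frame every host receives and only the owner answers.

```python
# IP -> Ethernet mapping of the hosts on this LAN (illustrative values,
# using the addresses from the example above).
lan_hosts = {"192.31.65.5": "E2", "192.31.65.7": "E1"}
arp_cache = {}

def resolve(ip):
    """Return the Ethernet address for `ip`, consulting the ARP cache first."""
    if ip in arp_cache:                      # cache hit: no broadcast needed
        return arp_cache[ip]
    # Simulated broadcast: "Who owns IP address <ip>?" -- the owner replies.
    eth = lan_hosts.get(ip)
    if eth is not None:
        arp_cache[ip] = eth                  # remember the answer
    return eth

assert resolve("192.31.65.5") == "E2"        # answered via broadcast
assert "192.31.65.5" in arp_cache            # now cached for next time
assert resolve("192.31.65.9") is None        # no owner on this LAN
```

Caching is what makes ARP cheap: only the first packet to a given host triggers a broadcast.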

Let us assume the data transfer from a user on host-1 to a user on host-4, on the dept-2 Ethernet, where the sender again knows the name of the intended receiver.

1. ARP in this case will fail, as host-4 will not see the broadcast (routers do not forward Ethernet-level broadcasts). Two solutions are possible:
 The dept-1 router could be configured to respond to ARP requests for n/w 192.31.63.0, in which case host-1 will make an ARP cache entry of (192.31.63.8, E3) and happily send all traffic for host-4 to the local router. This solution is called "PROXY ARP".
 Have host-1 immediately see that the destination is on a remote n/w and send all such traffic to a default Ethernet address that handles all remote traffic, in this case E3.
2. In either case, host-1 packs the IP packet into the payload field of an Ethernet frame addressed to E3.
3. When the dept-1 router gets the Ethernet frame, it removes the IP packet from the payload field and looks up the IP address in its routing tables.
4. It discovers that packets for n/w 192.31.63.0 are supposed to go to router 192.31.60.7. If that router's FDDI address is not already known, the ARP technique is used to find the ring address, F3, after which the packet is inserted into the payload field of an FDDI frame addressed to F3 and put on the ring.
5. At the dept-2 router, the FDDI driver removes the packet from payload field and gives it to IP
software, which sees that it needs to send the packet to 192.31.63.8
6. If this address is not in ARP cache, it broadcasts an ARP request on dept-2 Ethernet and finds
the address as E6.
7. An Ethernet frame addressed to E6 is built, packet is put in payload field, sent over Ethernet
8. When the Ethernet frame arrives at host-4, the packet is extracted from the frame and passed to
IP software for processing.

RARP: RARP is used when an Ethernet address is given and the corresponding IP address must be found. This protocol allows a newly booted workstation to broadcast its Ethernet address and say: "My 48-bit Ethernet address is 14.04.05.18.01.25. Does anyone know my IP address?". The RARP server sees
this request, looks up the Ethernet address in its configuration files and sends back the corresponding IP
address.
CIDR – Classless Inter Domain Routing:

Three Bears Problem:


For most organizations, a class-A network with 16 million addresses is too big and a class-C network with 256 addresses is too small, but a class-B network with 65,536 addresses is just right. This situation is called the "three bears problem".

Problems in the Internet:

 In reality, a class-B address is far too large for most organizations.


 As routers have to know about the networks, every router must maintain a table with an entry per network, which requires more physical storage and is expensive.
 Various routing algorithms require each router to transmit its tables periodically. The larger the tables, the greater the routing instabilities.
 Using a deeper hierarchy for routing (i.e., structuring each IP address as country -> state -> city -> n/w -> host) would require more than 32 bits for IP addresses.
 All these problems can be solved using CIDR. The basic idea behind CIDR is to allocate the remaining class-C networks (almost 2 million of them) in variable-sized blocks.
Eg: If a site needs 2000 addresses, it is given a block of 2048 addresses (8 contiguous class-C networks); if a site needs 8000 addresses, it is given a block of 8192 addresses (32 contiguous class-C networks).
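The rounding rule in the example above can be sketched in a few lines: a request is rounded up to the next power of two, and the block is expressed as that many contiguous 256-address class-C networks.

```python
def cidr_block(addresses_needed):
    """Round a request up to a power-of-two block; return (block size,
    number of contiguous class-C networks the block spans)."""
    size = 1
    while size < addresses_needed:
        size *= 2
    return size, size // 256

assert cidr_block(2000) == (2048, 8)    # the text's first example
assert cidr_block(8000) == (8192, 32)   # the text's second example
```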
In addition to using blocks of contiguous class-C networks as units, the allocation rules for class-C addresses are implemented as follows:
 The world was partitioned into 4 zones.
 Each zone is given a portion of the class-C address space.

Address Range Zone


194.0.0.0 to 195.255.255.255 Europe
198.0.0.0 to 199.255.255.255 North America
200.0.0.0 to 201.255.255.255 Central & South America
202.0.0.0 to 203.255.255.255 Asia & The pacific.
In this way, each region was given about 32 million addresses to allocate, with another 320 million class-C addresses from 204.0.0.0 through 223.255.255.255 reserved for future use. To handle the routing table explosion more precisely, CIDR attaches a 32-bit mask to each routing table entry. When a packet comes in, its destination address is first extracted; then the routing table is scanned entry by entry, masking the destination address and comparing it to the table entry, looking for a match.

If a packet comes in addressed to 194.24.17.4, i.e., 1100 0010 0001 1000 0001 0001 0000 0100:
1. It is Boolean ANDed with U-1's mask to get 1100 0010 0001 1000 0001 0000 0000 0000, which does not match the U-1 base address.
2. The original address is ANDed with U-2's mask to get 1100 0010 0001 1000 0001 0000 0000 0000, which matches the U-2 base address, so the packet is delivered via that entry.
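A sketch of this entry-by-entry mask-and-compare lookup follows. The U-1 and U-2 base addresses and prefix lengths are assumptions (194.24.8.0/22 and 194.24.16.0/20), since the figure defining them is not reproduced here; the mechanism is what the text describes.

```python
def dotted_to_int(addr):
    a, b, c, d = (int(x) for x in addr.split("."))
    return (a << 24) | (b << 16) | (c << 8) | d

def mask(prefix_len):
    """32-bit mask with `prefix_len` leading one bits."""
    return (0xFFFFFFFF << (32 - prefix_len)) & 0xFFFFFFFF

# Assumed table entries (name, base address, mask):
table = [
    ("U-1", dotted_to_int("194.24.8.0"),  mask(22)),
    ("U-2", dotted_to_int("194.24.16.0"), mask(20)),
]

def lookup(addr):
    """Scan entry by entry, AND-ing the destination with each entry's mask."""
    dst = dotted_to_int(addr)
    for name, base, m in table:
        if dst & m == base:
            return name
    return None

assert lookup("194.24.17.4") == "U-2"   # fails U-1's mask, matches U-2
```

Real routers apply longest-prefix matching over such entries; scanning in order suffices for this two-entry illustration.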

IP-V6:

The driving motivation for the adoption of a new version of IP was the limitation imposed by the 32-bit address field in IPv4. Reasons for the inadequacy of 32-bit addresses include the following:
 The 2-level structure of the IP address (network number, host number) is wasteful of address space.
 The IP addressing model generally requires that a unique network number be assigned to each network, whether or not it is actually connected to the Internet.
 Networks are proliferating rapidly.
 Growth of TCP/IP usage into new areas will result in a rapid growth in the demand for unique IP addresses.
 A single host may need more than one IP address (e.g., one per network interface), increasing demand further.
Comparison of IPV6 and IPV4:

Parameter IPV6 IPV4


1.Address 128-bit 32-bit
2.Data transfer elements packets datagrams
3.Optional header separate not separate
4.Address assignment dynamic static
5.data transfer speed faster slower
6.Security provides greater security provides weaker security
7.Addressing flexibility more less
8.support for resource allocation yes no
Eg: real time video

IP-V6 Header: It has a fixed length of 40 octets and consists of following fields:

 Version (4 bits): internet protocol version number=6


 Priority (4 bits): used to distinguish between packets whose sources can be flow-controlled and those that cannot (values 0-7 are for transmissions capable of slowing down in the event of congestion, while values 8-15 are for real-time traffic whose sending rate is constant).
 Flow label (20 bits): may be used by a host to label packets for which it is requesting special handling by routers within a network.
 Payload length (16 bits): length of the remainder of the IPv6 packet following the header, in octets.
 Next header: tells which transport protocol handler (e.g., TCP, UDP) to pass the packet to.
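A sketch of packing the 40-octet fixed header, with field widths as given in the text (4-bit priority, 20-bit flow label; the later RFC 2460 layout widens priority into an 8-bit traffic class, so treat these widths as the text's version):

```python
import struct

def ipv6_fixed_header(priority, flow_label, payload_len, next_header,
                      hop_limit, src16, dst16):
    """Pack the 40-octet IPv6 fixed header. src16/dst16 are 16-byte
    (128-bit) addresses."""
    word0 = (6 << 28) | ((priority & 0xF) << 24) | (flow_label & 0xFFFFF)
    return (struct.pack("!IHBB", word0, payload_len, next_header, hop_limit)
            + src16 + dst16)

hdr = ipv6_fixed_header(priority=0, flow_label=0, payload_len=20,
                        next_header=6,            # 6 = TCP
                        hop_limit=64,
                        src16=bytes(16),          # placeholder addresses
                        dst16=bytes(16))
assert len(hdr) == 40                             # fixed 40-octet header
assert hdr[0] >> 4 == 6                           # version = 6
```

Note what is absent compared with IPv4: no header checksum, no fragmentation fields, and no header-length field, since the fixed part is always 40 octets.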

Dynamic host configuration protocol (DHCP):

Dynamic Host Configuration Protocol (DHCP) is a client/server protocol that automatically provides
an Internet Protocol (IP) host with its IP address and other related configuration information such as
the subnet mask and default gateway.
