CN Material Unit-4 2023
Internetworking Styles:
1. Concatenated Virtual Circuit Subnets (Connection-Oriented)
2. Datagram Model (Connection-Less)
Once data packets begin flowing along the path, each gateway relays incoming packets,
converting between packet formats and virtual circuit numbers as needed. Clearly, all data packets
must traverse the same sequence of gateways, and thus arrive in order.
The essential feature of this approach is that a sequence of virtual circuits is set up from the
source through one or more gateways to the destination. Each gateway maintains tables telling which
virtual circuits pass through it, where they are to be routed, and what the new virtual circuit number
is.
Datagram Model:
In this model, the only service the network layer offers to the transport layer is the ability to inject
datagrams into the subnet and hope for the best. Datagrams from one host to another may travel over
different routes through the internetwork. A routing decision is made separately for each packet, possibly
depending on the traffic at the moment the packet is sent. This strategy can use multiple routes and
thus achieve a higher bandwidth than the concatenated virtual circuit model.
A major advantage of the datagram model of internetworking is that it can be used over subnets
that do not use virtual circuits inside.
Differences between a Virtual Circuit and a Datagram:
Routing algorithms:
A routing algorithm is that part of the network layer software responsible for deciding which
output line an incoming packet should be transmitted on. Every routing algorithm should possess
certain properties:
1. Correctness
2. Simplicity
3. Robustness
4. Stability
5. Fairness
6. Optimality
Non-adaptive routing algorithms are those that do not base their routing decisions on
measurements or estimates of the current traffic and topology. The choice of the route to use to get from
I to J is computed in advance, off-line, and downloaded to the routers when the network is booted.
Adaptive routing algorithms are those that base their routing decisions on measurements, or
estimates of the current traffic and topology.
Non-Adaptive Routing Algorithms:
(a). Shortest Path Routing: It is used to build a graph of the subnet, with each node of graph
representing a router and each arc of the graph representing a communication line. To choose a route
between a given pair of routers, the algorithm just finds the shortest path between them on the graph.
The path length can be measured in different ways: the number of hops, geographical distance in
kilometers, mean queuing delay, transmission delay, or functions of distance, bandwidth, average
traffic, communication cost, etc.
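As a concrete sketch, the shortest path computation can be done with Dijkstra's algorithm on a small made-up subnet (the routers, links, and weights below are hypothetical; any of the metrics listed above could serve as the weights):

```python
import heapq

def shortest_path(graph, src, dst):
    """Dijkstra's algorithm on a weighted graph given as
    {node: {neighbor: line_cost}}. Returns (total_cost, path)."""
    pq = [(0, src, [src])]          # (cost so far, current node, path taken)
    visited = set()
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == dst:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, weight in graph[node].items():
            if neighbor not in visited:
                heapq.heappush(pq, (cost + weight, neighbor, path + [neighbor]))
    return float('inf'), []

# Hypothetical subnet: nodes are routers, arc weights are line costs.
subnet = {
    'A': {'B': 2, 'C': 6},
    'B': {'A': 2, 'D': 5},
    'C': {'A': 6, 'D': 8},
    'D': {'B': 5, 'C': 8},
}
print(shortest_path(subnet, 'A', 'D'))  # (7, ['A', 'B', 'D'])
```

Changing the weights (hops vs. delay vs. cost) changes which path the algorithm reports, without changing the algorithm itself.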
(b). Flooding: If every incoming packet is sent out on every outgoing line except the one it
arrived on, it is called Flooding.
Flooding obviously generates vast numbers of duplicate packets. To damp this process, several
techniques can be employed:
(a) Have a hop counter in the header of each packet, which is decremented at each
hop, with the packet being discarded when the counter reaches zero. Ideally, the hop counter
should be initialized to the length of the path from source to destination.
(b) Keep track of which packets have been flooded, to avoid sending them a second
time. This is achieved by having the source router put a sequence number in each packet it
receives from its hosts. Each router then needs a list per source router telling which sequence
numbers originating at that source have already been seen. If an incoming packet is on the list,
it is not flooded.
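The two damping techniques can be sketched together in one forwarding check; the packet fields and per-router state below are hypothetical, not from any particular protocol:

```python
def flood(packet, seen):
    """Decide whether a router forwards a flooded packet.
    packet: dict with 'src', 'seq', and 'hops' (remaining hop count).
    seen: this router's set of (source, sequence) pairs already flooded."""
    packet['hops'] -= 1                      # technique (a): decrement hop counter
    if packet['hops'] <= 0:
        return False                         # counter reached zero: discard
    key = (packet['src'], packet['seq'])
    if key in seen:                          # technique (b): already flooded once
        return False
    seen.add(key)
    return True                              # forward on all other lines

seen = set()
p = {'src': 'A', 'seq': 7, 'hops': 3}
print(flood(p, seen))        # True: first time seen, hops remaining
print(flood(dict(p), seen))  # False: same (src, seq) pair is a duplicate
```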
Selective Flooding:
An algorithm in which the routers do not send every incoming packet out on every
line, but only on those lines that are going approximately in the right direction.
Uses of Flooding:
1. In military applications, where a large number of routers may be blown to bits at any instant.
2. In distributed database applications, it is sometimes necessary to update all the databases
concurrently.
3. As a metric against which other routing algorithms can be compared.
Adaptive Routing Algorithms
Hierarchical Routing
Multicast Routing
(1). Distance Vector Routing: This algorithm operates by having each router maintain a table
giving the best known distance to each destination and which line to use to get there. These tables are
updated by exchanging information with the neighbors. This algorithm is also called the BELLMAN-
FORD or FORD-FULKERSON algorithm.
In this algorithm, each router maintains a routing table indexed by, and containing one entry for, each
router in the subnet. This entry contains 2 parts:
a. The preferred outgoing line to use for that destination
b. An estimate of the time or distance (number of hops, time delay, or queue length) to that
destination
Eg: Consider an example in which delay is used as the metric. Compute the routing table for router
'J' from the given subnet.
The new routing table for 'J' can be computed from the vectors received from its neighbors as follows:
Similarly, the routing tables for A, B, C, D, E, F, G, H, I, K, and L are computed.
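Since the figure with the actual delays is not reproduced here, the table computation can be sketched with made-up numbers. Each destination's new entry is the minimum, over all neighbors, of (measured delay to that neighbor + the neighbor's claimed delay to the destination):

```python
def update_table(delays_to_neighbors, neighbor_vectors):
    """Recompute router J's distance vector table from its neighbors.
    delays_to_neighbors: {neighbor: measured delay from J to that neighbor}
    neighbor_vectors: {neighbor: {destination: neighbor's claimed delay}}
    Returns {destination: (best_delay, via_neighbor)}."""
    table = {}
    for n, d_to_n in delays_to_neighbors.items():
        for dest, d_rest in neighbor_vectors[n].items():
            total = d_to_n + d_rest          # go via neighbor n
            if dest not in table or total < table[dest][0]:
                table[dest] = (total, n)     # keep the cheapest route found
    return table

# Hypothetical numbers, not the ones in the missing figure:
delays = {'A': 8, 'I': 10}
vectors = {'A': {'D': 40, 'G': 18}, 'I': {'D': 27, 'G': 31}}
print(update_table(delays, vectors))
# {'D': (37, 'I'), 'G': (26, 'A')}
```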
Drawbacks:
Good news propagation:
Suppose 'A' is down initially, and all the other routers know this. When 'A' comes up, the other
routers learn about it via the vector exchanges.
The good news spreads at the rate of one hop per exchange.
Bad news propagation
(Count to infinity problem):
Suppose initially, all the lines and routers are up. Suddenly ‘A’ goes down.
From the above, it is clear that the bad news travels slowly.
Both A and B tell 'C' that they cannot get to D. Thus 'C' immediately concludes that 'D' is
unreachable and reports this to both A and B. Unfortunately, 'A' hears that B has a path of length
2 to D, so it assumes it can get to 'D' via 'B' in 3 hops. Similarly, 'B' concludes it can get to D via
'A' in 3 hops. On the next exchange, they each set their distance to 4, and so on, counting upward
toward infinity.
(3). Hierarchical Routing :
Different levels are used to compute the routes:
• Level 1 - Routers
• Level 2 - Regions
• Level 3 - Clusters
• Level 4 - Zones
• Level 5 - Zonal regions
E.g:
• With a 2-level hierarchy, hierarchical routing has reduced the table from 17 entries to 7 entries.
(2). Link State Routing:
Learning about the Neighbors:
When a router is booted, its first task is to learn who its neighbors are. It accomplishes this goal
by sending a special HELLO packet on each point-to-point line. The router on the other end is
expected to send back a reply telling who it is, with its address. These names must be globally
unique, because when a distant router later hears that three routers are all connected to F, it is
essential that it can determine whether all three mean the same F.
Measuring Line Cost
The link state routing algorithm requires each router to know, or at least have a reasonable estimate
of the delay to each of its neighbors. The most direct way to determine this delay is to send over
the line a special ECHO packet that the other side is required to send back immediately. By
measuring the round-trip time and dividing it by two, the sending router can get a reasonable
estimate of the delay. For even better results, the test can be conducted several times, and the
average used.
An interesting issue is whether to take the load into account when measuring the delay. To factor
the load in, the round-trip timer must be started when the ECHO packet is queued. To ignore the
load, the timer should be started when the ECHO packet reaches the front of the queue.
Building Link State Packets
Once the information needed for the exchange has been collected, the next step is for each router
to build a packet containing all the data. The packet starts with the identity of the sender, followed
by a sequence number (to avoid duplication) and age (to tell how long a packet can stay alive), and
a list of neighbors. For each neighbor, the delay to that neighbor is given. An example subnet is
given in Figure(a) below with delays shown as labels on the lines. The corresponding link state
packets for all six routers are shown in Fig.(b).
(a) A subnet. (b) The link state packets for this subnet.
Building the link state packets is easy. The hard part is determining when to build them. One
possibility is to build them periodically, that is, at regular intervals. Another possibility is to
build them when some significant event occurs, such as a line or neighbor going down or coming
back up again or changing its properties appreciably.
The fundamental idea is to use flooding to distribute the link state packets. To keep the flood
in check, each packet contains a sequence number that is incremented for each new packet sent.
Routers keep track of all the (source router, sequence) pairs they see. When a new link state packet
comes in, it is checked against the list of packets already seen. If it is new, it is forwarded on all lines
except the one it arrived on. If it is a duplicate, it is discarded. If a packet with a sequence number
lower than the highest one seen so far ever arrives, it is rejected as being obsolete since the router
has more recent data.
[Some refinements to this algorithm make it more robust. When a link state packet comes in
to a router for flooding, it is not queued for transmission immediately. Instead it is first put in a
holding area to wait a short while. If another link state packet from the same source comes in before
the first packet is transmitted, their sequence numbers are compared. If they are equal, the duplicate
is discarded. If they are different, the older one is thrown out. To guard against errors on the router-
router lines, all link state packets are acknowledged. When a line goes idle, the holding area is
scanned in round-robin order to select a packet or acknowledgement to send.]
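The build-and-flood logic above can be sketched as follows; the packet layout is a simplified stand-in for a real link state packet format:

```python
def build_lsp(router, seq, age, neighbor_delays):
    """Build a link state packet: sender identity, sequence number,
    age, and the list of neighbors with the delay to each."""
    return {'src': router, 'seq': seq, 'age': age, 'links': dict(neighbor_delays)}

def accept_lsp(lsp, highest_seen):
    """Flooding check: accept (and forward) only packets with a sequence
    number higher than the highest already seen from that source;
    duplicates and obsolete packets are rejected."""
    if lsp['seq'] <= highest_seen.get(lsp['src'], -1):
        return False                       # duplicate or obsolete: discard
    highest_seen[lsp['src']] = lsp['seq']
    return True                            # new: forward on all other lines

seen = {}
a1 = build_lsp('A', seq=1, age=60, neighbor_delays={'B': 4, 'E': 5})
print(accept_lsp(a1, seen))                        # True: new, flood it
print(accept_lsp(build_lsp('A', 1, 60, {}), seen)) # False: duplicate
print(accept_lsp(build_lsp('A', 0, 60, {}), seen)) # False: obsolete
```

A real implementation would also decrement the age field and discard packets whose age hits zero, which is omitted here for brevity.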
Computing the New Routes
Once a router has accumulated a full set of link state packets, it can construct the entire subnet
graph because every link is represented. Every link is, in fact, represented twice, once for each
direction. The two values can be averaged or used separately.
Now Dijkstra's algorithm can be run locally to construct the shortest path to all possible
destinations. The results of this algorithm can be installed in the routing tables, and normal
operation resumed.
Congestion: When more packets are sent into a subnet than its capacity can handle,
the situation that arises is called congestion.
Causes of congestion:
If packets arriving on 3 or 4 input lines all need the same output line, a queue builds up. If
routers are supplied with an infinite amount of memory, packets take so long to reach the
front of the queue that they time out and duplicates are generated, making things worse.
Slow processors can cause congestion.
Low-bandwidth lines can also cause congestion.
Congestion feeds upon itself, causing still more congestion.
Congestion control algorithms are mainly divided into two groups (also called the general
principles of congestion control):
Open loop solutions attempt to solve the problem by good design, making sure congestion does not
occur in the first place. Once the system is up and running, midcourse corrections are not made.
Closed loop solutions are based on the concept of a feedback loop, which has 3 parts:
1. Monitor the system to detect when and where congestion occurs.
2. Pass this information to places where action can be taken.
3. Adjust system operation to correct the problem.
These closed loop algorithms are further divided into two categories:
Implicit feedback: The source deduces the existence of congestion by making local
observations.
Explicit feedback: Packets are sent back from the point of congestion to warn the source.
Open Loop Systems:
Congestion Prevention Policies
Traffic Shaping
Flow Specifications
Layer       Policies
Transport   1. Retransmission policy
            2. Out-of-order caching policy
            3. Acknowledgement policy
            4. Flow control policy
            5. Timeout determination
Network     1. Virtual circuits versus datagrams inside the subnet
            2. Packet queuing and service policy
            3. Packet discard policy
            4. Routing algorithm
            5. Packet lifetime management
Data Link   1. Retransmission policy
            2. Out-of-order caching policy
            3. Acknowledgement policy
            4. Flow control policy
Retransmission policy: Deals with how fast a sender times out and what it transmits
upon timeout.
Out-of-order caching policy: If receivers routinely discard all out-of-order packets
(packets that arrive out of order), those packets will have to be retransmitted.
Acknowledgement policy: If each packet is acknowledged immediately, the
acknowledgements generate extra traffic. This policy deals with piggybacking.
Flow control policy: A tight flow control scheme (e.g., a small window) reduces the
data rate and thus helps fight congestion.
Timeout determination: This is harder, as transit time across the network is less
predictable than transit time over a wire between two routers.
Virtual circuits vs datagrams: This affects congestion, as many congestion control
algorithms work only with virtual circuits.
Packet queuing and service policy: Relates to whether routers have one queue per
input line, one queue per output line, or both.
Packet discard policy: Tells which packet is dropped when there is no space.
Routing algorithm: A good routing algorithm spreads the traffic over all the lines.
Packet lifetime management: Deals with how long a packet may live before being
discarded.
1. Traffic Shaping:
The Leaky Bucket Algorithm:
Imagine a bucket with a small hole in the bottom. No matter at what rate water enters the bucket, the
outflow is at a constant rate, ρ, when there is any water in the bucket, and zero when the bucket is
empty. Also, once the bucket is full, any additional water entering it spills over the sides and is lost.
The same idea can be applied to packets. Conceptually, each host is connected to the network
by an interface containing a leaky bucket, i.e., a finite internal queue. If a packet arrives at the queue
when it is full, it is discarded. In other words, if one or more processes within the host try to send a
packet when the maximum number are already queued, the new packet is unceremoniously discarded.
This arrangement can be built into the h/w interface or simulated by the host operating system. It was
first proposed by TURNER and is called the “ LEAKY BUCKET ALGORITHM ”.
The host is allowed to put one packet per clock tick onto the network, which turns an uneven
flow of packets from the user processes inside the host into an even flow of packets onto the network,
smoothing out bursts and greatly reducing the chances of congestion.
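A minimal per-tick simulation of the leaky bucket, with made-up numbers (a burst of 6 packets into a queue of capacity 4, draining 1 packet per tick):

```python
def leaky_bucket(arrivals, capacity, rate):
    """Simulate a leaky bucket, one clock tick per list entry.
    arrivals: packets arriving at each tick; capacity: finite queue size;
    rate: packets released onto the network per tick.
    Returns (packets_sent_per_tick, total_dropped)."""
    queue, dropped, sent = 0, 0, []
    for a in arrivals:
        accepted = min(a, capacity - queue)
        dropped += a - accepted        # bucket full: excess spills over, lost
        queue += accepted
        out = min(rate, queue)         # constant outflow while queue non-empty
        queue -= out
        sent.append(out)
    return sent, dropped

# Burst of 6 packets into a bucket of capacity 4, draining 1 per tick:
print(leaky_bucket([6, 0, 0, 0, 0], capacity=4, rate=1))
# ([1, 1, 1, 1, 0], 2)
```

The uneven input burst leaves the bucket as a smooth 1-packet-per-tick flow, with the 2 packets that overflowed the finite queue discarded.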
The Token Bucket Algorithm:
An algorithm that allows the output to speed up when large bursts arrive, and that never
loses data, is the TOKEN BUCKET ALGORITHM.
In this algorithm, the leaky bucket holds tokens, generated by a clock at the rate of one token
every T sec. This algorithm allows hosts to save up permissions, up to the maximum size of the
bucket, 'n', i.e., bursts of up to 'n' packets can be sent at once, allowing some burstiness in the output
stream and giving faster response to sudden bursts of input.
In the above diagram, we see a bucket holding 3 tokens, with 5 packets waiting to be
transmitted. For a packet to be transmitted, it must capture and destroy one token. In the above
example, 3 out of 5 packets have gotten through by capturing the 3 tokens in the bucket, but the other
2 are stuck waiting for 2 more tokens to be generated.
The major advantage of the token bucket algorithm is that it throws away tokens instead of
packets, when the bucket fills up.
The implementation of the token bucket algorithm is just a variable that counts tokens. The
counter is incremented by 1 every T sec and decremented by 1 when a packet is sent. When the counter
hits ‘0’, no packets may be sent.
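The counter implementation can be sketched directly; the numbers below reproduce the situation described above (3 saved-up tokens, a burst of 5 packets):

```python
def token_bucket(arrivals, bucket_size, tokens_per_tick=1):
    """Token bucket as a simple counter: +1 token every tick (capped at
    bucket_size), -1 per packet sent; packets wait when the counter is 0.
    Returns the number of packets sent at each tick."""
    tokens, waiting, sent = 0, 0, []
    for a in arrivals:
        tokens = min(bucket_size, tokens + tokens_per_tick)  # save up permissions
        waiting += a
        out = min(waiting, tokens)     # each packet captures and destroys a token
        tokens -= out
        waiting -= out
        sent.append(out)
    return sent

# Three idle ticks accumulate 3 tokens; then a burst of 5 packets arrives:
print(token_bucket([0, 0, 0, 5, 0, 0], bucket_size=3))
# [0, 0, 0, 3, 1, 1]
```

Note the contrast with the leaky bucket: 3 packets go out at once when the burst arrives, and no packet is ever discarded; the other 2 merely wait for tokens.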
2. Choke packets:
This is an approach that can be used in both virtual circuit and datagram subnets. Each router
can easily monitor the utilization of its output lines and other resources using the formula:

    u_new = a * u_old + (1 - a) * f

where:
u - a variable that reflects the recent utilization of a line; its value lies between 0.0 and 1.0
a - a constant that determines how fast the router forgets recent history
f - a sample of the instantaneous line utilization (either 0 or 1)
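A sketch of the utilization estimate, using an assumed a = 0.9 (the constant's actual value is a tuning choice):

```python
def update_utilization(u_old, f, a=0.9):
    """Exponentially weighted moving average of line utilization:
    u_new = a*u_old + (1-a)*f, where f is the instantaneous sample (0 or 1)
    and a controls how fast the router forgets recent history."""
    return a * u_old + (1 - a) * f

u = 0.0
for sample in [1, 1, 1, 1, 1]:   # the line is busy on five consecutive samples
    u = update_utilization(u, sample)
print(round(u, 3))  # 0.41 -- u creeps toward 1.0 while the line stays busy
```

When u crosses a chosen threshold, the line enters the warning state and choke packets are generated, as described below.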
Working:
Whenever ‘u’ moves above the threshold, the output line enters a “Warning” state. Each newly
arriving packet is checked to see if its output line is in warning state. If so, the router sends a choke
packet back to the source host, giving it the destination found in the packet. The original packet is
tagged so that it will not generate any more choke packets further along the path and is then forwarded
in the usual way.
When the source host gets the choke packet, it is required to reduce the traffic sent to
the specified destination by X percent. The host then ignores further choke packets referring to that
destination for a fixed time interval. After that period has expired, the host listens for more choke
packets for another interval. If no choke packets arrive during the listening period, the host may
increase the flow again.
Disadvantage: The honest host that cuts back gets an ever-smaller share of the bandwidth than it had before.
To get around this problem of choke packets, Nagle proposed the "FAIR QUEUEING
ALGORITHM". The essence of this algorithm is that routers have multiple queues for each output
line, one for each source. When a line becomes idle, the router scans the queues round robin, taking
the first packet from the next queue. In this way, with 'n' hosts competing for a given output line,
each host gets to send one out of every 'n' packets.
Drawback: With this algorithm, more bandwidth is given to hosts that use large packets
than to hosts that use small packets.
To get around the problem with Nagle's algorithm, Demers suggested an
improvement in which the round robin is done in such a way as to simulate a byte-by-byte round
robin, instead of a packet-by-packet round robin. It scans the queues repeatedly, byte by byte, until it
finds the tick on which each packet will be finished. The packets are then sorted in order of their
finishing ticks and sent in that order.
At clock tick 1 - The first byte of the packet in the queue on line 'A' is sent.
At clock tick 2 - The first byte of the packet in the queue on line 'B' is sent.
...
At clock tick 8 - 'C' finishes its first packet; similarly, B, D, E and A
finish after 16, 17, 18 and 20 clock ticks respectively.
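The byte-by-byte scheme can be sketched by computing each packet's virtual finishing tick (the running byte total of its own queue) and sorting; the packet sizes below are chosen to reproduce the finishing ticks quoted above:

```python
def fair_queue_order(queues):
    """Byte-by-byte round robin (Demers): a packet's virtual finishing
    tick is the running byte total of its own queue; packets are then
    sent whole, in increasing order of finishing tick."""
    finish = []
    for line, packet_sizes in queues.items():
        total = 0
        for size in packet_sizes:
            total += size              # ticks until this packet's last byte
            finish.append((total, line, size))
    return [(line, size) for _, line, size in sorted(finish)]

# One queued packet per input line, sizes in bytes:
queues = {'A': [20], 'B': [16], 'C': [8], 'D': [17], 'E': [18]}
print(fair_queue_order(queues))
# [('C', 8), ('B', 16), ('D', 17), ('E', 18), ('A', 20)]
```

The send order matches the text: C's packet finishes first at tick 8, then B, D, E, and finally A at tick 20, so large packets no longer crowd out small ones.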
At high speeds and over long distances, sending a choke packet back to the source host does not work
well. Consider host 'A' sending traffic to host 'B', located very far from 'A'. Choke packets are
released at the time of congestion, but because the two hosts are so far apart, it takes a long time
for the choke packets to reach host 'A', and longer still for the reduced flow to take effect.
(i). A choke packet that affects only the source (ii). A choke packet that affects each
hop it passes through
An alternative approach that reduces this delay is to have the choke packet take effect at every hop it
passes through, as shown in the sequence of fig(ii). Here, as soon as the choke packet reaches F, 'F' is
required to reduce the flow to 'D'. Doing so will require 'F' to devote more buffers to the flow, since
the source is still sending away at full blast, but it gives 'D' immediate relief. In the next step, the choke
packet reaches E, which tells E to reduce the flow to F. This action puts a greater demand on E's buffers
but gives 'F' immediate relief. Finally, the choke packet reaches A and the flow genuinely slows
down.
3. Load Shedding:
It is a fancy way of saying that when routers are being flooded by packets that they cannot handle,
they just throw them away. Which packets to discard depends on the application running.
4. Jitter Control:
The jitter can be bounded by computing the expected transit time for each hop along the path.
When a packet arrives at a router, the router checks to see how much the packet is behind or ahead
of its schedule. This information is stored in the packet and updated at each hop. If the packet is
ahead of schedule, it is held just long enough to get it back on schedule. If it is behind schedule,
the router tries to get it out the door quickly.
In fact, the algorithm for determining which of several packets competing for an o/p line
should go next can always choose the packet furthest behind its schedule. In this way, packets that
are ahead of schedule get slowed down and packets that are behind its schedule get speeded up, in
both cases reducing the amount of Jitter.
The Network Layer in the Internet
At the network layer, the Internet can be viewed as a collection of subnetworks, or Autonomous
Systems, that are connected together. Several backbones exist, constructed from high-bandwidth
lines and fast routers. LANs at many universities, companies, and Internet Service Providers are
attached to regional networks, which in turn are attached to the backbones.
The glue that holds the Internet together is the network layer protocol, IP. The job of IP is to
provide a best-effort way to transport datagrams from source to destination, without regard to
whether these machines are on the same network or there are other networks in between them.
Communication in Internet:
The transport layer takes data streams and breaks them up into datagrams, which are
transmitted through the Internet, possibly being fragmented into smaller units along the way. When
all the pieces finally get to the destination, they are reassembled by the network layer into the
original datagram, which is then handed to the transport layer.
Tunneling
Tunneling is a solution when the source and destination hosts are on the same type of network,
but there is a different network in between. As an example, think of an international bank with
a TCP/IP-based Ethernet in Paris, a TCP/IP-based Ethernet in London, and a non-IP wide area
network (e.g., ATM) in between, as shown in Figure below.
To send an IP packet to host 2, host 1 constructs the packet containing the IP address of
host 2, inserts it into an Ethernet frame addressed to the Paris multiprotocol router, and puts it
on the Ethernet. When the multiprotocol router gets the frame, it removes the IP packet, inserts
it in the payload field of the WAN network layer packet, and addresses the latter to the WAN
address of the London multiprotocol router. When it gets there, the London router removes the
IP packet and sends it to host 2 inside an Ethernet frame.
The WAN can be seen as a big tunnel extending from one multiprotocol router to the other. The
IP packet just travels from one end of the tunnel to the other. It does not have to worry about
dealing with the WAN at all. Neither do the hosts on either Ethernet. Only the multiprotocol
router has to understand IP and WAN packets.
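The encapsulation at the two multiprotocol routers can be sketched schematically (dictionaries stand in for real frame and packet formats; the addresses are placeholders):

```python
def tunnel_send(ip_packet, wan_addr_remote):
    """Near-end multiprotocol router: wrap the whole IP packet in the
    payload field of a WAN-layer packet addressed to the far-end router."""
    return {'wan_dst': wan_addr_remote, 'payload': ip_packet}

def tunnel_receive(wan_packet):
    """Far-end multiprotocol router: unwrap and recover the original
    IP packet, untouched by its trip through the WAN tunnel."""
    return wan_packet['payload']

ip_packet = {'dst': 'host2-ip', 'data': b'hello'}
wan_packet = tunnel_send(ip_packet, wan_addr_remote='london-router')
assert tunnel_receive(wan_packet) == ip_packet  # IP packet survives intact
```

Only the two routers ever look at the WAN addressing; the hosts and the IP packet itself are oblivious to the tunnel, which is the point of the technique.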
Fragmentation
Each network imposes some maximum size on its packets. These limits have various causes,
among them hardware limits, operating system constraints, and protocol standards.
Problem appears when a large packet wants to travel through a network whose maximum packet
size is too small. The only solution to the problem is to allow gateways to break up packets into
fragments, sending each fragment as a separate internet packet. Two opposing strategies exist for
recombining the fragments back into the original packet.
There are two strategies. (a) Transparent fragmentation. (b) Nontransparent fragmentation
Transparent fragmentation: When an oversized packet arrives at a gateway, the gateway breaks
it up into fragments. Each fragment is addressed to the same exit gateway, where the pieces are
recombined. In this way passage through the small-packet network has been made transparent.
Subsequent networks are not even aware that fragmentation has occurred.
In transparent fragmentation, the exit gateway must know when it has received all the pieces, so
either a count field or an ''end of packet'' bit must be provided. Another problem is that all packets
must exit via the same gateway: by not allowing some fragments to follow one route to the ultimate
destination and other fragments a disjoint route, some performance may be lost. A last problem is
the overhead required to repeatedly reassemble and then refragment a large packet passing through
a series of small-packet networks. ATM requires transparent fragmentation.
Nontransparent fragmentation:
In Nontransparent fragmentation, once a packet has been fragmented, each fragment is treated as
though it were an original packet. All fragments are passed through the exit gateway (or gateways),
as shown in Fig.(b) Recombination occurs only at the destination host.
Nontransparent fragmentation also has some problems. For example, it requires every host to be able
to do reassembly. Yet another problem is that when a large packet is fragmented, the total overhead
increases because each fragment must have a header.
Fragmentation when the elementary data size is 1 byte. (c) Original packet, containing 10 data
bytes. (d) Fragments after passing through a network with maximum packet size of 8 payload
bytes plus header. (e) Fragments after passing through a size 5 gateway.
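The fragment bookkeeping (offset and More Fragments flag) can be sketched for the 10-byte example above:

```python
def fragment(data, offset, max_payload):
    """Nontransparent fragmentation sketch: split a packet's data into
    fragments of at most max_payload elementary units, each carrying its
    offset into the original datagram and a More Fragments (MF) flag."""
    frags = []
    for i in range(0, len(data), max_payload):
        frags.append({'offset': offset + i,
                      'mf': i + max_payload < len(data),  # more pieces follow?
                      'data': data[i:i + max_payload]})
    return frags

packet = list(range(10))                # 10 data bytes, as in the figure
first_pass = fragment(packet, 0, 8)     # network with 8-byte max payload
print([(f['offset'], len(f['data'])) for f in first_pass])   # [(0, 8), (8, 2)]
# A later size-5 network refragments the first piece without reassembly:
second_pass = fragment(first_pass[0]['data'], first_pass[0]['offset'], 5)
print([(f['offset'], len(f['data'])) for f in second_pass])  # [(0, 5), (5, 3)]
```

Because each fragment carries its absolute offset, the destination host can reassemble the original 10 bytes no matter how many networks refragmented them along the way.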
Version: Keeps track of which version of the protocol the datagram belongs to.
IHL: Tells how long the header is, in 32-bit words. The minimum and maximum values are 5 and 15
respectively, so the header length in bytes is always a multiple of 4, between 20 and 60.
Type of service: Allows the host to tell the subnet what kind of service it wants.
The field itself contains:
A 3-bit Precedence field, which is a priority from 0 to 7.
Three flag bits, D, T, and R (Delay, Throughput, and Reliability), indicating which parameter the
host cares most about.
2 unused bits.
Total Length: The total length of the datagram, both header and data; the maximum length is 65,535 bytes.
Identification: Allows the destination host to determine to which datagram a newly arrived
fragment belongs.
DF (Don't Fragment): It is an order to the routers not to fragment the datagram, because the
destination is incapable of putting the pieces back together again.
MF( More Fragments ): All fragments except the last one have this bit set.
Fragment Offset: Tells where in the current datagram this fragment belongs.
Time to live: It is a counter used to limit packet lifetimes. It counts time in seconds, allowing a
maximum lifetime of 255 sec. It is decremented at each hop; when it reaches zero,
the packet is discarded.
Header Checksum: Verifies the header only.
Protocol: Tells the network layer to which transport process, the datagram is to be given.
Eg :- TCP, UDP .... etc
SA (Source Address): Indicates the network number and host number from which the datagram came.
DA (Destination Address): Indicates the network number and host number of the destination to
which the datagram is to be delivered.
Options: This field was designed to provide an escape to allow subsequent versions of the protocol to
include information not present in the original design, to permit experimenters to
try out new ideas, and to avoid allocating header bits to information that is rarely
needed.
Option                  Description
Security                Specifies how secret the datagram is.
Strict Source Routing   Gives the complete path to be followed.
Loose Source Routing    Gives a list of routers not to be missed.
Record Route            Makes each router append its IP address.
Time Stamp              Makes each router append its address and a timestamp.
IP Addresses :
Network numbers are assigned by the NIC (Network Information Center) and are usually written in
dotted decimal notation, in which each of the 4 bytes is written in decimal, from 0 to 255.
The lowest IP address is 0.0.0.0.
The highest IP address is 255.255.255.255.
The value '0' means this network or this host.
The value '-1' (all 1s) means broadcast to all hosts on the indicated network.
Special IP addresses :
Problems :
As time goes on, a network of any class may acquire more than its permitted number of hosts,
which requires another network of the same class with a separate IP address.
As the number of distinct local networks grows, managing them can become a serious headache.
Subnet mask
In this example, the first subnet might use IP addresses starting at 130.50.4.1, the second subnet might start
at 130.50.8.1, and so on.
Subnet Working:
Each router has a table listing some number of (network, 0) IP addresses and some number of (this-
n/w, host) IP addresses.
(network, 0) – Tells how to get to distant networks
(this – n/w, host) – Tells how to get to local hosts
Associated with each table entry is the network interface to use to reach the destination, and certain
other information. When an IP packet arrives, its destination address is looked up in the routing
table. If the packet is for a distant network, it is forwarded to the next router on the interface given
in the table. If it is for a local host, it is sent directly to the destination. If the network is not present,
the packet is forwarded to a default router.
When subnetting is introduced, the routing tables are changed, adding entries of the form (this-
n/w, subnet, 0) and (this-n/w, this-subnet, host). Thus, a router on subnet 'k' knows how to get to all
other subnets and also how to get to all the hosts on subnet 'k'. Each router performs a Boolean AND
of the destination address with the network's subnet mask to get rid of the host number and looks up
the resulting address in its tables.
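The mask-and-lookup step can be sketched with Python's ipaddress module; the /22 mask (255.255.252.0) and the interface names are assumptions consistent with the 130.50.4.1 / 130.50.8.1 example above:

```python
import ipaddress

# Hypothetical subnetted class B network 130.50.0.0 with a /22 subnet
# mask, so subnets start 4 apart in the third byte (4.0, 8.0, 12.0, ...).
mask = ipaddress.ip_address('255.255.252.0')
table = {'130.50.4.0': 'interface-1',      # (this-n/w, subnet 1, 0)
         '130.50.8.0': 'interface-2'}      # (this-n/w, subnet 2, 0)

def route(dst):
    """Boolean AND the destination with the subnet mask to strip the
    host number, then look the resulting subnet address up."""
    net = ipaddress.ip_address(int(ipaddress.ip_address(dst)) & int(mask))
    return table.get(str(net), 'default-router')

print(route('130.50.4.200'))   # interface-1
print(route('130.50.9.77'))    # interface-2 (9 & 252 = 8)
print(route('130.50.99.1'))    # default-router (subnet not in the table)
```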
Internet Control protocols:
a. ICMP (Internet Control Message Protocol)
b. ARP (Address Resolution Protocol)
c. RARP (Reverse Address Resolution Protocol)
d. DHCP (Dynamic Host Configuration Protocol)
ICMP: When something unexpected occurs in the Internet, the event is reported by ICMP, which is
also used to test the Internet. Each ICMP message type is encapsulated in an IP packet.
ARP: ARP solves the problem of mapping an IP address onto the corresponding data link layer
(e.g., Ethernet) address.
Here, we have 2 Ethernets, one in dept-1 with IP address 192.31.65.0 and the other in dept-2 with IP
address 192.31.63.0 in a university, connected by a campus FDDI ring with IP address
192.31.60.0. Each machine on an Ethernet has a unique Ethernet address, labeled E1 through E6, and
each machine on the FDDI ring has an FDDI address, labeled F1 through F3.
Let us assume a data transfer from a user on host 1 to a user on host 2, in which the sender knows
the name of the intended receiver, say, "Mary@eagle.cs.uni.edu".
Now let us assume a data transfer from a user on host 1 to a user on host 4, on the distant
dept-2 Ethernet.
1. ARP, in this case, will fail, as host 4 will not see the broadcast (routers do not forward Ethernet-
level broadcasts). Two solutions are possible to deal with this:
The dept-1 router could be configured to respond to ARP requests for network 192.31.63.0,
in which case host 1 will make an ARP cache entry of (192.31.63.8, E3) and happily send all
traffic for host 4 to the local router. This solution is called "PROXY ARP".
Alternatively, host 1 could immediately see that the destination is on a remote network and just
send all such traffic to a default Ethernet address that handles all remote traffic, in this case E3.
2. Either way, host 1 packs the IP packet into the payload field of an Ethernet frame addressed to E3.
3. When the dept-1 router gets the Ethernet frame, it removes the IP packet from the payload field
and looks up the IP address in its routing tables.
4. It discovers that packets for network 192.31.63.0 are supposed to go to router 192.31.60.7. If the
FDDI address of that router is not known in advance, ARP is used again to find the ring address,
F3. The packet is then inserted into the payload field of an FDDI frame addressed to F3 and put
on the ring.
5. At the dept-2 router, the FDDI driver removes the packet from the payload field and gives it to
the IP software, which sees that it needs to send the packet to 192.31.63.8.
6. If this address is not in its ARP cache, it broadcasts an ARP request on the dept-2 Ethernet and
finds the address to be E6.
7. An Ethernet frame addressed to E6 is built, the packet is put in its payload field, and it is sent
over the Ethernet.
8. When the Ethernet frame arrives at host 4, the packet is extracted from the frame and passed to
the IP software for processing.
RARP: RARP is used when an Ethernet address is given and the corresponding IP address is to be found.
This protocol allows a newly-booted workstation to broadcast its Ethernet address and say: "My 48-bit
Ethernet address is 14.04.05.18.01.25. Does anyone know my IP address?". The RARP server sees
this request, looks up the Ethernet address in its configuration files, and sends back the corresponding IP
address.
CIDR – Classless Inter Domain Routing:
If a packet comes in addressed to 194.24.17.4, i.e., 1100 0010 0001 1000 0001 0001 0000 0100:
1. It is Boolean ANDed with the U-1 mask to get 1100 0010 0001 1000 0001 0000 0000 0000,
which does not match the U-1 base address.
2. The original address is then ANDed with the U-2 mask to get 1100 0010 0001 1000 0001 0000 0000 0000,
which matches the U-2 base address, so the packet is delivered.
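Since the actual U-1 and U-2 base addresses and masks are in a figure not reproduced here, the lookup can be sketched with assumed table entries that are consistent with the bit patterns above (194.24.17.4 misses an assumed /21 entry but matches an assumed /20 entry):

```python
import ipaddress

# Hypothetical CIDR table; the real U-1/U-2 entries are in the missing figure.
table = [
    (ipaddress.ip_network('194.24.0.0/21'),  'U-1'),
    (ipaddress.ip_network('194.24.16.0/20'), 'U-2'),
]

def lookup(dst):
    """AND the destination with each entry's mask and compare against the
    base address; if several entries match, CIDR uses the longest prefix."""
    addr = int(ipaddress.ip_address(dst))
    best = (-1, 'default')
    for net, name in table:
        if addr & int(net.netmask) == int(net.network_address):  # Boolean AND
            best = max(best, (net.prefixlen, name))              # longest prefix wins
    return best[1]

print(lookup('194.24.17.4'))   # U-2: the /21 AND result misses U-1's base
print(lookup('194.24.6.9'))    # U-1
```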
IP-V6:
The driving motivation for the adoption of a new version of IP was the limitation imposed by the 32-
bit address field in IP-V4. Reasons for the inadequacy of the 32-bit address include the following:
The 2-level structure of the IP address (network number, host number) is wasteful of address space.
The IP addressing model generally requires that a unique network number be assigned to each
network, whether or not it is actually connected to the internet.
Networks are proliferating rapidly.
Growth of TCP/IP usage into new areas will result in rapid growth in the demand for unique IP
addresses.
An IP address is assigned to each network connection rather than to each host, so a host with
multiple connections needs multiple addresses.
Comparison of IPV6 and IPV4:
IP-V6 Header: It has a fixed length of 40 octets and consists of the following fields:
Dynamic Host Configuration Protocol (DHCP) is a client/server protocol that automatically provides
an Internet Protocol (IP) host with its IP address and other related configuration information such as
the subnet mask and default gateway.