Unit 5 CN

The document discusses various routing protocols, focusing on distance vector routing and the Routing Information Protocol (RIP), explaining their mechanisms, advantages, and potential issues like instability. It also covers concepts such as shortest path trees, multicast routing strategies, and congestion control techniques, particularly open loop methods. Additionally, it addresses flow characteristics such as reliability, delay, jitter, and bandwidth, and the functions and classifications of routing algorithms, including the flooding algorithm.

Uploaded by facapa9164

Unit 5 (Routing Protocols)

1) Describe distance vector routing protocol with one valid example. Explain working
of RIP. (1641038)

In distance vector routing, the least-cost route between any two nodes is the route with
minimum distance. In this protocol, each node maintains a vector (table) of minimum
distances to every node. The table at each node also guides the packets to the desired node
by showing the next stop in the route (next-hop routing).

Initialization

At the beginning, each node can know only the distance between itself and its immediate
neighbors, those directly connected to it. So, for the moment, we assume that each node
can send a message to the immediate neighbors and find the distance between itself and
these neighbors.
Sharing

The whole idea of distance vector routing is the sharing of information between neighbors.
Although node A does not know about node E, node C does. So, if node C shares its routing
table with A, node A can also know how to reach node E. On the other hand, node C does
not know how to reach node D, but node A does. If node A shares its routing table with
node C, node C also knows how to reach node D. In other words, nodes A and C, as
immediate neighbors, can improve their routing tables if they help each other.
Note that a node can send only the first two columns of its table (destination and cost) to
any neighbor; the next-hop column has meaning only for the node itself.
In simple words distance vector routing, each node shares its routing table with its
immediate neighbors periodically and when there is a change.

Updating

When a node receives a two-column table from a neighbor, it needs to update its routing
table. Updating takes three steps:

1. The receiving node needs to add the cost between itself and the sending node to
each value in the second column.
2. The receiving node needs to add the name of the sending node to each row as the
third column if the receiving node uses information from any row. The sending node
is the next node in the route.
3. The receiving node needs to compare each row of its old table with the
corresponding row of the modified version of the received table.
a. If the next-node entry is different, the receiving node chooses the row with
the smaller cost. If there is a tie, the old one is kept.
b. If the next-node entry is the same, the receiving node chooses the new
row.
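The three-step update can be sketched in Python (a hedged illustration, not RIP itself; the function name and table layout are assumptions, with each table mapping destination → (cost, next hop)):

```python
def dv_update(my_table, neighbor, neighbor_table, link_cost):
    """Merge a neighbor's two-column table (destination -> cost) into
    my_table (destination -> (cost, next_hop)), following the three steps."""
    changed = False
    for dest, cost in neighbor_table.items():
        new_cost = cost + link_cost          # step 1: add the link cost
        candidate = (new_cost, neighbor)     # step 2: sender becomes next hop
        old = my_table.get(dest)
        if old is None:
            my_table[dest] = candidate       # previously unknown destination
            changed = True
        elif old[1] == neighbor:
            if old != candidate:             # step 3b: same next hop -> take new row
                my_table[dest] = candidate
                changed = True
        elif new_cost < old[0]:              # step 3a: smaller cost wins (tie keeps old)
            my_table[dest] = candidate
            changed = True
    return changed
```

For example, if A's table is {A: (0, None), C: (2, C)} and C advertises {A: 2, C: 0, E: 4} over a link of cost 2, the update adds E: (6, C) to A's table.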

When to Share?

The table is sent both periodically and when there is a change in the table.
Periodic Update: A node sends its routing table, normally every 30 s, in a periodic update.
The period depends on the protocol that is using distance vector routing.
Triggered Update: A node sends its two-column routing table to its neighbors whenever
there is a change in its routing table.

Two-Node Loop Instability

A problem with distance vector routing is instability, which means that a network using this
protocol can become unstable.
At the beginning, both nodes A and B know how to reach node X. But suddenly, the link
between A and X fails. Node A changes its table. If A can send its table to B immediately,
everything is fine. However, the system becomes unstable if B sends its routing table to A
before receiving A's routing table. Node A receives the update and, assuming that B has
found a way to reach X, immediately updates its routing table. Based on the triggered
update strategy, A sends its new update to B. Now B thinks that something has been
changed around A and updates its routing table. The cost of reaching X increases gradually
until it reaches infinity. At this moment, both A and B know that X cannot be reached.
However, during this time the system is not stable. Node A thinks that the route to X is via B;
node B thinks that the route to X is via A. If A receives a packet destined for X, it goes to B
and then comes back to A. Similarly, if B receives a packet destined for X, it goes to A and
comes back to B. Packets bounce between A and B, creating a two-node loop problem. A
few solutions have been proposed for instability of this kind.

Three-Node Instability

Suppose, after finding that X is not reachable, node A sends a packet to B and C to inform
them of the situation. Node B immediately updates its table, but the packet to C is lost in
the network and never reaches C. Node C remains in the dark and still thinks that there is a
route to X via A with a distance of 5. After a while, node C sends to B its routing table, which
includes the route to X. Node B is totally fooled here. It receives information on the route to
X from C, and according to the algorithm, it updates its table, showing the route to X via C
with a cost of 8. This information has come from C, not from A, so after a while node B may
advertise this route to A. Now A is fooled and updates its table to show that A can reach X
via B with a cost of 12. Of course, the loop continues; now A advertises the route to X to C,
with increased cost, but not to B. Node C then advertises the route to B with an increased
cost. Node B does the same to A. And so on. The loop stops when the cost in each node
reaches infinity.
RIP (Routing Information Protocol)
The Routing Information Protocol (RIP) is an intradomain routing protocol used inside an
autonomous system. It is a very simple protocol based on distance vector routing. RIP
implements distance vector routing directly with some considerations:

1. In an autonomous system, we are dealing with routers and networks (links). The
routers have routing tables; networks do not.

2. The destination in a routing table is a network, which means the first column
defines a network address.

3. The metric used by RIP is very simple; the distance is defined as the number of
links (networks) to reach the destination. For this reason, the metric in RIP is called a
hop count.

4. Infinity is defined as 16, which means that any route in an autonomous system
using RIP cannot have more than 15 hops.

5. The next-node column defines the address of the router to which the packet is to
be sent to reach its destination.

Working of RIP

Consider an autonomous system with seven networks and four routers; the table of each
router is also shown in the figure. Let us look at the routing table for R1. The table has seven
entries to show how to reach each network in the autonomous system. Router R1 is directly
connected to networks 130.10.0.0 and 130.11.0.0, which means that there are no next-hop
entries for these two networks. To send a packet to one of the three networks at the far left,
router R1 needs to deliver the packet to R2. The next-node entry for these three networks is
the interface of router R2 with IP address 130.10.0.1. To send a packet to the two networks
at the far right, router R1 needs to send the packet to the interface of router R4 with IP
address 130.11.0.1. The other tables can be explained similarly.
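R1's table as described can be sketched as a next-hop lookup (a hedged sketch: the far-left and far-right network addresses and exact hop counts are not given in the text, so hypothetical names and values stand in for them):

```python
# Destination network -> (hop_count, next_hop); None means directly connected.
r1_table = {
    "130.10.0.0": (1, None),           # directly connected
    "130.11.0.0": (1, None),           # directly connected
    "net_left_1": (2, "130.10.0.1"),   # three far-left networks via R2
    "net_left_2": (3, "130.10.0.1"),
    "net_left_3": (3, "130.10.0.1"),
    "net_right_1": (2, "130.11.0.1"),  # two far-right networks via R4
    "net_right_2": (3, "130.11.0.1"),
}

def next_hop(table, dest_network):
    """Return the next-hop router address, or None if directly connected."""
    hops, hop = table[dest_network]
    if hops >= 16:                     # 16 is infinity in RIP
        raise ValueError("destination unreachable")
    return hop
```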

2) What is the shortest path tree, source based tree, and group shared tree? Explain
with valid examples. (1641039)

Shortest Path tree


The process of optimal interdomain routing eventually results in the finding of the
shortest path tree. The root of the tree is the source, and the leaves are the
potential destinations. The path from the root to each destination is the shortest
path. However, the number of trees and the formation of the trees in unicast and
multicast routing are different.
Unicast routing: When a router receives a packet to forward, it needs to find the
shortest path to the destination of the packet. The router consults its routing table
for that particular destination. The next-hop entry corresponding to the destination
is the start of the shortest path. The router knows the shortest path for each
destination, which means that the router has a shortest path tree to optimally reach
all destinations. In other words, each line of the routing table is a shortest path; the
whole routing table is a shortest path tree. In unicast routing, each router needs only
one shortest path tree to forward a packet; however each router has its own
shortest path tree.
In unicast routing, each router in the domain has a table that defines a shortest path
tree to possible destinations.
The figure shows the details of the routing table and the shortest path tree for
router R1. Each line in the routing table corresponds to one path from the root to
the corresponding network. The whole table represents the shortest path tree.
Multicast Routing: When a router receives a multicast packet, the situation is
different from when it receives a unicast packet. A multicast packet may have
destinations in more than one network. Forwarding of a single packet to members of
a group requires a shortest path tree. If we have n groups, we may need n shortest
path trees. We can imagine the complexity of multicast routing. Two approaches
have been used to solve the problem:
Source based trees and group shared trees.
Source Based Tree: In this approach, each router needs to have one shortest path
tree for each group. The shortest path tree for a group defines the next hop for each
network that has loyal member(s) for that group.
In the figure we assume that we have only five groups in the domain: G1, G2, G3, G4,
and G5. At the moment G1 has loyal members in four networks, G2 in three, G3 in
two, G4 in two, and G5 in two. We have shown the names of the groups with loyal
members of each network.
The figure also shows the multicasting routing table for router R1. There is one
shortest path tree for each group; therefore there are five shortest path trees for
five groups. If router R1 receives a packet with destination address G1, it needs to
send a copy of the packet to the attached network, a copy to router R2, and a copy
to router R4 so that all members of G1 can receive a copy. In this approach, if the
number of groups is m, each router needs to have m shortest path trees, one for
each group. We can imagine the complexity of the routing table if we have hundreds
or thousands of groups. However, we will show how different protocols manage to
alleviate the situation.
In the source based tree approach, each router needs to have one shortest path tree
for each group.

Group Shared Tree: In this approach, instead of each router having m shortest path
trees, only one designated router, called the core (or rendezvous) router, takes
the responsibility of distributing multicast traffic. The core has m shortest path trees
in its routing table. The rest of the routers in the domain have none. If a router
receives a multicast packet, it encapsulates the packet in a unicast packet and sends
it to the core router. The core router removes the multicast packet from its capsule,
and consults its routing table to route the packet.

In the group shared tree approach, only the core router, which has a shortest path
tree for each group, is involved in multicasting.
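The division of labour between core and non-core routers might be sketched as follows (all names are hypothetical; this is an illustration of the idea, not a protocol implementation):

```python
def handle_multicast(router, packet):
    """Group-shared-tree forwarding: only the core keeps per-group trees."""
    if router["is_core"]:
        # The core consults its shortest path tree for the packet's group.
        next_hops = router["group_trees"][packet["group"]]
        return [("multicast", hop, packet) for hop in next_hops]
    # Any other router encapsulates the packet and unicasts it to the core.
    capsule = {"dst": router["core_addr"], "payload": packet}
    return [("unicast", router["core_addr"], capsule)]
```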
3) Provide the classification of Congestion Control. Explain Open Loop techniques
with details.(1641040)
Congestion is an important issue that can arise in a packet switched network. Congestion is a
situation in communication networks in which too many packets are present in a part of the
subnet and performance degrades. Congestion in a network may occur when the load on the
network (i.e. the number of packets sent to the network) is greater than the capacity of the
network (i.e. the number of packets a network can handle).
In other words, when too much traffic is offered, congestion sets in and performance
degrades sharply.

Causes of Congestion:
• The input traffic rate exceeds the capacity of the output lines.
• The routers are too slow to perform bookkeeping tasks (queuing buffers, updating tables,
etc.).
• The routers' buffer is too limited.
• Congestion in a subnet can occur if the processors are slow.
• Bursty traffic.
Congestion Control:
Congestion Control refers to techniques and mechanisms that can either prevent
congestion, before it happens, or remove congestion, after it has happened. Congestion
control mechanisms are divided into two categories, one category prevents the congestion
from happening and the other category removes congestion after it has taken place.
I) Closed Loop Congestion Control:

• Closed loop congestion control mechanisms try to remove the congestion after it happens.
• It uses some kind of feedback.
II) Open Loop Congestion Control:
• In Open Loop Congestion Control, policies are used to prevent the congestion before it
happens.
• Congestion control is handled either by the source or by the destination.
• The various methods used for open loop congestion control are: -

Retransmission Policy
• The sender retransmits a packet, if it feels that the packet it has sent is lost or corrupted.
Window Policy
• To implement window policy, selective reject window method is used for congestion
control.
• Selective Reject method is preferred over Go-back-n window as in Go-back-n method,
when timer for a packet times out, several packets are resent, although some may have
arrived safely at the receiver. Thus, this duplication may make congestion worse.
• Selective reject method sends only the specific lost or damaged packets.
Acknowledgement Policy
• The acknowledgement policy imposed by the receiver may also affect congestion.
• If the receiver does not acknowledge every packet it receives it may slow down the sender
and help prevent congestion.
• Acknowledgments also add to the traffic load on the network. Thus, by sending fewer
acknowledgements we can reduce load on the network.
• To implement it, several approaches can be used:
1. A receiver may send an acknowledgement only if it has a packet to be sent.
2. A receiver may send an acknowledgement when a timer expires.
3. A receiver may also decide to acknowledge only N packets at a time.
Discarding Policy
• A router may discard less sensitive packets when congestion is likely to happen.
• Such a discarding policy may prevent congestion and at the same time may not harm the
integrity of the transmission.
Admission Policy
• An admission policy, which is a quality-of-service mechanism, can also prevent congestion
in virtual circuit networks.
• Switches in a flow first check the resource requirement of a flow before admitting it to the
network.
• A router can deny establishing a virtual circuit connection if there is congestion in the
network or if there is a possibility of future congestion.

4) Explain flow characteristics. Also explain Reliability, Delay, Jitter and Bandwidth
with example. (1641041)
Traditionally, four types of characteristics are attributed to a flow: reliability, delay, jitter and
bandwidth.
Reliability
• Reliability is an important characteristic of flow.
• Lack of reliability means losing a packet or acknowledgement which then requires
retransmission.
• However, the sensitivity of application programs to reliability is not the same. For example,
it is more important that electronic mail, file transfer, and internet access have reliable
transmissions than audio conferencing or telephony.
Delay
• Source to destination delay is another flow characteristic.
• Applications can tolerate delay in different degrees.
• In this case, telephony, audio conferencing, video conferencing and remote log in need
minimum delay while delay in file transfer or e-mail is less important.
Jitter
• Jitter is defined as the variation in delay for packets belonging to the same flow.
• High Jitter means the difference between delays is large and low jitter means the variation
is small.
• For example, if four packets depart at times 0, 1, 2, 3 and arrive at 20, 21, 22, 23, all have
the same delay, 20 units of time. On the other hand, if the above four packets arrive at 21,
23, 21, and 28, they will have different delays of 21, 22, 19, and 25.
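The example can be checked with a few lines of Python (function names are illustrative; jitter is taken here as the spread between the largest and smallest delay):

```python
def delays(departures, arrivals):
    """Per-packet delay: arrival time minus departure time."""
    return [a - d for d, a in zip(departures, arrivals)]

def jitter(delay_list):
    """Spread between the largest and smallest delay in the flow."""
    return max(delay_list) - min(delay_list)
```

With departures 0, 1, 2, 3 and arrivals 20, 21, 22, 23 every delay is 20 and the jitter is 0; with arrivals 21, 23, 21, 28 the delays are 21, 22, 19, 25 and the jitter is 6.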
Bandwidth
• Different applications need different bandwidths.
• In video conferencing we need to send millions of bits per second to refresh a color screen
while the total number of bits in an email may not reach even a million.

5) What is the function of routing algorithm? How are they classified? Explain flooding
algorithm? (1641042)

Functions of routing algorithm:

 Routing protocols have been created in response to the demand for dynamic routing
tables.
 A routing protocol is a combination of rules and procedures that lets routers in the
internet inform each other of changes.
 It allows routers to share whatever they know about the internet or their
neighborhood.
 The routing protocols also include procedures for combining information received
from other routers.
Classification of routing protocol:
1) Adaptive Routing Algorithm: These algorithms change their routing decisions to reflect
changes in the topology and in traffic as well. These get their routing information from
adjacent routers or from all routers. The optimization parameters are the distance,
number of hops and estimated transit time.
i) Centralized
ii) Isolated
iii) Distributed
2) Non-Adaptive Routing Algorithm: These algorithms do not base their routing decisions
on measurements and estimates of the current traffic and topology. Instead the route to
be taken in going from one node to the other is computed in advance, off-line, and
downloaded to the routers when the network is booted. This is also known as static
routing.
i) Flooding
ii) Random walk
Flooding Algorithm:

A router receives a packet and, without even looking at the destination group address, sends
it out from every interface except the one from which it was received. It accomplishes
multicasting. A network with active and non-active members receives the packet. Flooding
broadcasts packets, but creates loops in the system. A packet that has left the router may
come back again from another interface or the same interface and be forwarded again.
Some flooding protocols keep a copy of the packet for a while and discard any duplicates to
avoid loops.
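Flooding with duplicate suppression can be sketched as follows (a minimal illustration, assuming each packet carries a unique id):

```python
def flood(router, packet, in_interface, seen):
    """Return the interfaces to forward on: every interface except the
    incoming one, dropping duplicates that have looped back."""
    if packet["id"] in seen:
        return []                      # duplicate: already flooded, discard
    seen.add(packet["id"])
    return [i for i in router["interfaces"] if i != in_interface]
```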

6) Write a short note on Scheduling. How many types of Queuing are done in
networks? Explain each with clear figures. (1641043)
In scheduling, packets from different flows arrive at a switch or router for processing. A good
scheduling technique treats different flows in a fair manner. Several scheduling techniques
are designed to improve the quality of service.
The three types of queuing discussed here are:
1.) FIFO queuing
2.) Priority queuing
3.) Weighted fair queuing
1.) FIFO queuing

In first-in, first-out (FIFO) queuing, packets wait in a buffer (queue) until the router or switch
is ready to process them. If the average arrival rate is higher than the average processing
rate, the queue will fill up and new packets will be discarded.

2.) Priority Queuing

In Priority queuing, packets are first assigned to a priority class. Each priority class has its
own queue. The packets in the highest-priority queue are processed first. Packets in the
lowest-priority queue are processed last.
(Note: The system does not stop serving a queue until it is empty.)

3.) Weighted Fair Queuing


In this technique the packets are still assigned to different classes and admitted to
different queues. The queues however are weighted based on the priority of the
queue; higher priority means a higher weight. The system processes packets in each
queue in a round -robin fashion with the number of packets selected from each queue
based on the corresponding weight.
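One round of weighted fair queuing might look like this in Python (a sketch; the weight is taken here as the number of packets served per round):

```python
from collections import deque

def wfq_round(queues, weights):
    """One round-robin pass: serve up to `weight` packets from each queue."""
    sent = []
    for queue, weight in zip(queues, weights):
        for _ in range(weight):
            if not queue:
                break                  # queue emptied before its quota
            sent.append(queue.popleft())
    return sent
```

With two queues holding [A1, A2, A3] and [B1, B2, B3] and weights 2 and 1, one round serves A1, A2, B1.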
7) Explain the working of Link State routing algorithm with respect to OSPF. Provide
valid figures and examples describing its working. (1641045)

Link state protocols, such as IS-IS and OSPF, rely on each router in the network to
advertise the state of each of their links to every other router within the local routing
domain. The result is a complete network topology map, called a shortest path tree,
compiled by each router in the network. As a router receives an advertisement, it
will store this information in a local database, typically referred to as the link state
database, and pass the information on to each of its adjacent peers. This information
is not processed or manipulated in any way before it is passed on to the router's
adjacent peers. The link state information is flooded through the routing domain
unchanged, just as the originating router advertises it.
As each router builds a complete database of the link state information as
advertised by every other router within the network, it uses an algorithm, called the
shortest path first algorithm, to build a tree with itself as the center of that tree. The
shortest path to each reachable destination within the network is found by
traversing the tree. The most common shortest path first algorithm is the Dijkstra
algorithm.
When a link-state router boots, it first needs to discover to which routers it is directly
connected. For this, each router sends a HELLO message every N seconds on all of its
interfaces. This message contains the router’s address. Each router has a unique
address. As its neighboring routers also send HELLO messages, the router
automatically discovers to which neighbors it is connected. These HELLO messages
are only sent to neighbors who are directly connected to a router, and a router never
forwards the HELLO messages that they receive. HELLO messages are also used to
detect link and router failures. A link is considered to have failed if no HELLO
message has been received from the neighboring router for a period of k × N
seconds.
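The shortest path first computation over the link state database is commonly Dijkstra's algorithm, which can be sketched as follows (graph layout assumed: router → {neighbor: link cost}):

```python
import heapq

def shortest_path_first(graph, source):
    """Dijkstra's algorithm: least-cost distance from source to every router."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue                             # stale heap entry, skip
        for neighbor, cost in graph[node].items():
            nd = d + cost
            if nd < dist.get(neighbor, float("inf")):
                dist[neighbor] = nd              # found a shorter path
                heapq.heappush(heap, (nd, neighbor))
    return dist
```

For a triangle A-B (cost 1), B-C (cost 2), A-C (cost 4), the tree rooted at A reaches B at cost 1 and C at cost 3 (via B).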
8) Discuss traffic shaping mechanism with neat diagrams. (1641046)
Traffic shaping is a mechanism to control the amount and rate of traffic sent to the
network. Two techniques can shape traffic: Leaky bucket and token bucket.
Leaky Bucket:
If a bucket has a small hole at the bottom, the water leaks from the bucket at a
constant rate as long as there is water in the bucket. The rate at which the water
leaks does not depend on the rate at which the water is input to the bucket unless
the bucket is empty. The input rate can vary, but the output rate remains constant.
Similarly in networking, a technique called leaky bucket can smooth out bursty
traffic. Bursty chunks are stored in the bucket and sent out at an average rate.

In the figure we assume that the network has committed a bandwidth of 3 Mbps for
a host. The host sends a burst of data at a rate of 12 Mbps for 2 s, for a total of
24 Mbits of data. The host is silent for 5 s and then sends data at a rate of 2 Mbps for
3 s, for a total of 6 Mbits of data. In all, the host has sent 30 Mbits of data in 10 s. The
leaky bucket smooths the traffic by sending out data at a rate of 3 Mbps for the
same 10s. Without the leaky bucket the bursts of data could have hurt the network
by consuming more bandwidth than set aside for this host.
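The numbers in this example can be checked with a small simulation (a hedged sketch; one list entry per second of input rate in Mbps):

```python
def leaky_bucket_output(input_rates, drain_rate):
    """Simulate a leaky bucket: bursty per-second input, constant drain."""
    stored, output = 0, []
    for rate in input_rates:
        stored += rate                 # burst enters the bucket
        sent = min(stored, drain_rate) # at most drain_rate leaves per second
        stored -= sent
        output.append(sent)
    return output

# 12 Mbps for 2 s, silent for 5 s, then 2 Mbps for 3 s = 30 Mbits in 10 s.
out = leaky_bucket_output([12, 12, 0, 0, 0, 0, 0, 2, 2, 2], drain_rate=3)
```

The simulated output is a steady 3 Mbps for all 10 seconds, exactly the committed bandwidth.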

Token bucket:
The leaky bucket is very restrictive. It does not credit an idle host. For example, if a
host is not sending for a while, its bucket becomes empty. Now if the host has bursty
data, the leaky bucket allows only an average rate. The time when the host was idle
is not taken into account. On the other hand, the token bucket algorithm allows idle
hosts to accumulate credit for the future in the form of tokens. For each tick of the
clock, the system sends n tokens to the bucket. For example, if n is 100 and the host
is idle for 100 ticks, the bucket collects 10,000 tokens. Now the host can consume all
these tokens in one tick with 10,000 cells, or the host takes 1,000 ticks with 10 cells
per tick. In other words, the host can send bursty data as long as the bucket is not
empty.
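The token accounting can be sketched as follows (an illustration; `arrivals` maps tick → cells queued at that tick, and one token pays for one cell):

```python
def token_bucket(arrivals, n, ticks):
    """Each tick the bucket gains n tokens; queued cells are sent while
    tokens last, so idle ticks accumulate credit for later bursts."""
    tokens, backlog, sent = 0, 0, []
    for t in range(ticks):
        tokens += n
        backlog += arrivals.get(t, 0)
        out = min(backlog, tokens)     # one token per cell sent
        tokens -= out
        backlog -= out
        sent.append(out)
    return sent
```

With n = 100 and a host idle for 100 ticks, 10,000 tokens accumulate, so a burst of 10,000 cells arriving at tick 100 goes out in that single tick.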
9) Discuss in detail the different categories of congestion control techniques with an
example? (1641047)
Congestion control refers to the techniques used to control or prevent congestion.
Congestion control techniques can be broadly classified into two categories:
1. Open Loop Congestion Control
2. Closed Loop Congestion Control

Open Loop Congestion Control


Open loop congestion control policies are applied to prevent congestion before it happens.
Policies adopted by open loop Congestion control are:
1. Retransmission Policy
2. Window Policy
3. Discarding Policy
4. Acknowledgement Policy
5. Admission Policy

1. Retransmission Policy:
It is the policy in which retransmission of lost or corrupted packets is taken care of. If
the sender feels that a sent packet is lost or corrupted, the packet needs to be
retransmitted. This retransmission may increase the congestion in the network.
To prevent congestion, retransmission timers must be designed both to prevent
congestion and to optimize efficiency.
2. Window Policy:
The type of window at the sender side may also affect congestion. Several packets
in the Go-back-n window are resent, although some packets may have been received
successfully at the receiver side. This duplication may increase the congestion in the
network and make it worse.
Therefore, the Selective Repeat window should be adopted, as it resends only the
specific packets that may have been lost.
3. Discarding Policy:
A good discarding policy adopted by the routers is to discard corrupted or less
sensitive packets when congestion is likely, preventing congestion while still
maintaining the integrity of the message.
In the case of audio file transmission, routers can discard less sensitive packets to
prevent congestion and still maintain the quality of the audio file.
4. Acknowledgment Policy:
Since acknowledgements are also part of the load on the network, the acknowledgment
policy imposed by the receiver may also affect congestion. Several approaches can be
used to prevent congestion related to acknowledgments.
The receiver can send one acknowledgement for N packets rather than an
acknowledgement for every single packet, or send an acknowledgment only when it
has a packet to send or a timer expires.
5. Admission Policy:
In the admission policy, a mechanism is used to prevent congestion. Switches in a
flow should first check the resource requirement of a network flow before
transmitting it further. If there is congestion in the network, or a chance of future
congestion, a router should deny establishing a virtual circuit connection to prevent
further congestion.

Closed Loop Congestion Control


Closed loop congestion control technique is used to treat congestion after it happens.
Several techniques are used by different protocols; some of them are:
1. Backpressure
2. Choke Packet Technique
3. Implicit Signalling
4. Explicit Signalling

1. Backpressure:
Backpressure is a technique in which a congested node stops receiving packets from the
upstream node. This may cause the upstream node or nodes to become congested and to
reject data from the nodes above them. Backpressure is a node-to-node congestion control
technique that propagates in the opposite direction of the data flow. The backpressure
technique can be applied only to virtual circuit networks, where each node has information
about its upstream node.
In the diagram above, the 3rd node is congested and stops receiving packets; as a result
the 2nd node may get congested due to the slowing down of the output data flow.
Similarly, the 1st node may get congested and inform the source to slow down.
2. Choke Packet Technique:
The choke packet technique is applicable to both virtual circuit networks and datagram
subnets. A choke packet is a packet sent by a node to the source to inform it of
congestion. Each router monitors its resources and the utilization at each of its output
lines. Whenever the resource utilization exceeds the threshold value set by the
administrator, the router directly sends a choke packet to the source, giving it
feedback to reduce the traffic. The intermediate nodes through which the packet has
traveled are not warned about congestion.
3. Implicit Signaling:
In implicit signaling, there is no communication between the congested nodes and
the source. The source guesses that there is congestion in the network. For example,
when a sender sends several packets and there is no acknowledgment for a while,
one assumption is that the network is congested.
4. Explicit Signaling:
In explicit signaling, if a node experiences congestion it can explicitly send a packet
to the source or destination to inform it about the congestion. The difference between
the choke packet technique and explicit signaling is that in explicit signaling the signal
is included in the packets that carry data, rather than in a separate packet as in the
choke packet technique. Explicit signaling can occur in either the forward or the
backward direction.
1. Forward Signaling: In forward signaling, a signal is sent in the direction of the
congestion. The destination is warned about congestion, and the receiver in
this case adopts policies to prevent further congestion.
2. Backward Signaling: In backward signaling, a signal is sent in the opposite
direction of the congestion. The source is warned about congestion and
needs to slow down.

10) Explain different techniques to improve Qos? (1641049)

Techniques to Improve QoS:

The techniques that can be used to improve the quality of service are as follows:
scheduling, traffic shaping, admission control, and resource reservation.

 Scheduling :

Packets from different flows arrive at a switch or router for processing. A good
scheduling technique treats the different flows in a fair and appropriate manner.
Several scheduling techniques are designed to improve the quality of service. Three
of them are discussed here: FIFO queuing, priority queuing, and weighted fair queuing.

1) FIFO Queuing: In first-in, first-out (FIFO) queuing, packets wait in a buffer (queue)
until the node (router or switch) is ready to process them. If the average arrival rate
is higher than the average processing rate, the queue will fill up and new packets will
be discarded. Figure 9 shows a conceptual view of a FIFO queue.

2) Priority Queuing: In priority queuing, packets are first assigned to a priority class. Each
priority class has its own queue. The packets in the highest-priority queue are processed
first. Packets in the lowest-priority queue are processed last. Note that the system does not
stop serving a queue until it is empty.
Figure 10 shows priority queuing with two priority levels (for simplicity).

A priority queue can provide better QoS than the FIFO queue because higher priority traffic,
such as multimedia, can reach the destination with less delay.
3) Weighted Fair Queuing: A better scheduling method is weighted fair queuing. In this
technique, the packets are still assigned to different classes and admitted to different
queues. The queues, however, are weighted based on the priority of the queues; higher
priority means a higher weight. The system processes packets in each queue in a round-
robin fashion with the number of packets selected from each queue based on the
corresponding weight.
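As a rough illustration, the weighted round-robin idea behind weighted fair queuing can be sketched in Python; the function name, queue contents, and weight values here are illustrative assumptions, not part of any standard:

```python
from collections import deque

def wfq_round(queues, weights):
    """One round of weighted round-robin: take up to `weight`
    packets from each queue, in order. `queues` and `weights`
    are parallel lists (higher weight = higher priority)."""
    sent = []
    for q, w in zip(queues, weights):
        for _ in range(w):
            if q:
                sent.append(q.popleft())
    return sent

# Two classes: one with weight 2, one with weight 1.
high = deque(["H1", "H2", "H3"])
low = deque(["L1", "L2"])
print(wfq_round([high, low], [2, 1]))  # ['H1', 'H2', 'L1']
```

Note that unlike strict priority queuing, the lower-weight queue is still served every round, so it cannot be starved.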
 Traffic Shaping :
Traffic shaping is a mechanism to control the amount and the rate of the traffic sent
to the network. Two techniques can shape traffic: leaky bucket and token bucket.
1) Leaky Bucket: A technique called leaky bucket can smooth out bursty traffic.
Bursty chunks are stored in the bucket and sent out at an average rate.
A simple leaky bucket implementation is shown in Figure 11. A FIFO queue holds the
packets. If the traffic consists of fixed-size packets, the process removes a fixed
number of packets from the queue at each tick of the clock. If the traffic consists of
variable-length packets, the fixed output rate must be based on the number of bytes
or bits.

The following is an algorithm for variable-length packets:


1. Initialize a counter to n at the tick of the clock.
2. If n is greater than the size of the packet, send the packet and decrement the
counter by the packet size. Repeat this step until n is smaller than the packet size.
3. Reset the counter and go to step 1.
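The variable-length algorithm above can be sketched in Python as a single clock tick; the function name, the byte counts, and the use of a plain list as the queue are illustrative assumptions:

```python
def leaky_bucket_tick(queue, n):
    """One clock tick of the variable-length leaky-bucket algorithm:
    send packets from the front of the queue while the byte counter n
    still covers their size. `queue` is a list of packet sizes in
    bytes; returns the sizes sent this tick."""
    sent = []
    while queue and queue[0] <= n:
        pkt = queue.pop(0)   # remove the packet from the queue
        n -= pkt             # decrement the counter by the packet size
        sent.append(pkt)
    return sent

# Counter initialized to 1000 bytes per tick; the third 400-byte
# packet does not fit this tick and waits for the next one.
q = [400, 400, 400]
print(leaky_bucket_tick(q, 1000))  # [400, 400]
```

Resetting the counter and repeating corresponds to calling the function again on the next tick with a fresh value of n.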
2) Token Bucket:
The token bucket algorithm allows idle hosts to accumulate credit for the future in the form
of tokens. For each tick of the clock, the system sends n tokens to the bucket. The system
removes one token for every cell (or byte) of data sent. For example, if n is 100 and the host
is idle for 100 ticks, the bucket collects 10,000 tokens. Now the host can consume all these
tokens in one tick with 10,000 cells, or the host takes 1,000 ticks with 10 cells per tick. In
other words, the host can send bursty data as long as the bucket is not empty.
Figure 12 shows the idea.
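A minimal Python sketch of this token-accumulation idea, reproducing the numeric example above; the class and method names are assumptions for illustration:

```python
class TokenBucket:
    """Sketch of the token bucket: n tokens are added per clock tick,
    and one token is removed for every cell of data sent."""
    def __init__(self, n):
        self.n = n          # tokens added per tick
        self.tokens = 0     # current credit

    def tick(self):
        self.tokens += self.n

    def send(self, cells):
        """Send up to `cells` cells; returns how many were actually
        sent (limited by the tokens available)."""
        sent = min(cells, self.tokens)
        self.tokens -= sent
        return sent

# A host idle for 100 ticks with n = 100 accumulates 10,000 tokens
# and can then burst 10,000 cells in a single tick, as in the text.
tb = TokenBucket(100)
for _ in range(100):
    tb.tick()
print(tb.send(10000))  # 10000
```

Once the bucket is empty the host is throttled back to the refill rate, which is what distinguishes a burst allowance from unlimited sending.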

 Resource Reservation :
A flow of data needs resources such as a buffer, bandwidth, CPU time, and so on.
The quality of service is improved if these resources are reserved beforehand.
 Admission Control :
Admission control refers to the mechanism used by a router, or a switch, to accept
or reject a flow based on predefined parameters called flow specifications. Before a
router accepts a flow for processing, it checks the flow specifications to see if its
capacity (in terms of bandwidth, buffer size, CPU speed, etc.) and its previous
commitments to other flows can handle the new flow.

11) Define Routing Protocol (1641038)

A router normally connects LANs and WANs in the Internet and has a routing table
that is used for making decisions about the route. The routing tables are normally
dynamic and are updated using routing protocols. Routing protocols are used to
continuously update the routing tables that are consulted for forwarding and
routing.

12) Draw the figure providing protocols used at each layer of TCP/IP suites. (1641039)

13) Define Data Traffic. (1641040)


Network traffic or data traffic is the amount of data moving across a network at a given
point of time. Network data in computer networks is mostly encapsulated in network
packets, which provide the load in the network. Network traffic is the main component
for network traffic measurement, network traffic control and simulation. The proper
organization of network traffic helps in ensuring the quality of service in a given
network.
14) List the types of Static and Dynamic Routing algorithm. (1641041)
Static Routing Algorithms:
 Shortest path routing
 Hot potato routing
 Multipath routing
Dynamic Routing Algorithms:
 Distributed routing
15) What is SYN and ACK? (1641042)

SYN:
A SYN segment is a control segment that carries no data, but it consumes one
sequence number and does not contain an acknowledgment number. A SYN segment
is for synchronization of sequence numbers.
Example: A client chooses a random number as the first sequence number and sends
this number to the server. This sequence number is called the initial sequence
number (ISN).
ACK:
ACK segment acknowledges the receipt of the second segment with the ACK flag and
acknowledgment number field. An ACK segment, if carrying no data, consumes no
sequence number.
16) Write any two functions of a router. (1641043)
(Routers are used to connect networks. Routers process packets, which are units of data at
the Network layer. A Router receives a packet and examines the destination IP
address information to determine what network the packet needs to reach, and then sends
the packet out of the corresponding interface)

Routers carry out two basic functions:

 They select a path between networks.
 They securely transmit information packets across that path towards an intended
destination.

To do this, they draw on routing protocols and algorithms. These algorithms are
designed to plot routes using such criteria as throughput, delay, simplicity, low
overhead, reliability/stability, and flexibility.

17) Write about congestion control. (1641044)


Congestion control refers to techniques and mechanisms that can either prevent congestion
before it happens or remove congestion after it has happened. In general, we can divide
congestion control mechanisms into two broad categories: open-loop congestion control
(prevention) and closed-loop congestion control (removal).

18) What is choke packet?


A choke packet is used in network maintenance and quality management to inform a
specific node or transmitter that its transmitted traffic is creating congestion over
the network. This forces the node or transmitter to reduce its output rate. Choke
packets are used for congestion and flow control over a network. The source node is
addressed directly by the router, forcing it to decrease its sending rate. The source
node acknowledges this by reducing the sending rate by some percentage.

19) Write the formula for average data rate (1641046)

Average data rate = Amount of data / Time
20) Congestion feeds itself. Justify. (1641047)
Congestion tends to feed upon itself and get even worse. Routers respond to
overloading by dropping packets. When these packets contain TCP segments, the
segments don't reach their destination and are left unacknowledged, which
eventually leads to timeout and retransmission. These retransmissions add still
more traffic to an already overloaded network, so the congestion worsens; the
bursty nature of traffic is often what triggers this cycle in the first place.

21) Differentiate TCP and UDP. (1641049)

                     TCP                          UDP

Stands for:          Transmission Control         User Datagram Protocol
                     Protocol

Type of Connection:  It is a connection-          It is a connectionless
                     oriented protocol            protocol

Usage:               TCP is used for              UDP is preferred for
                     applications in which        applications that need to
                     fast transmission of         send data on time and at
                     data is not required         faster rates

Examples:            HTTP, FTP, SMTP,             DNS, DHCP, TFTP, SNMP,
                     Telnet, etc.                 RIP, VoIP, etc.

Ordering of data     It rearranges data           No inherent ordering; data
packets:             packets in the order         packets of the same message
                     specified                    may be ordered differently

22) What is congestion control? (1641044)


Congestion control refers to techniques and mechanisms that can either prevent congestion
before it happens or remove congestion after it has happened. In general, we can divide
congestion control mechanisms into two broad categories: open-loop congestion control
(prevention) and closed-loop congestion control (removal).

Open-Loop Congestion Control: In open-loop congestion control, policies are applied to prevent
congestion before it happens. In these mechanisms, congestion control is handled by either the
source or the destination.

Closed-Loop Congestion: Control Closed-loop congestion control mechanisms try to alleviate


congestion after it happens. Several mechanisms have been used by different protocols.
23) What is the Role of Repeater? (1641049)
In digital communication systems, a repeater is a device that receives a digital signal on an
electromagnetic or optical transmission medium and regenerates the signal along the next
leg of the medium. In electromagnetic media, repeaters overcome the attenuation caused
by free-space electromagnetic-field divergence or cable loss. A series of repeaters make
possible the extension of a signal over a distance.

24) What is the importance of Autonomous System? Provide the classification of Routing
Protocols. Explain each in two statements. (1641038)
Internet can be so large that one routing protocol cannot handle the task of updating the
routing tables of all routers. For this reason, an internet is divided into autonomous
systems. An autonomous system (AS) is a group of networks and routers under the
authority of a single administration. Routing inside an autonomous system is referred to as
intradomain routing. Routing between autonomous systems is referred to as interdomain
routing. Each autonomous system can choose one or more intradomain routing protocols to
handle routing inside the autonomous system.
Multicast Link State Routing (MOSPF)
Multicast link state routing is a direct extension of unicast routing and uses a source-based
tree approach. Although unicast routing is quite involved, the extension to multicast routing
is very simple and straightforward.

Multicast Distance Vector (DVMRP)


The idea is to create a table from scratch by using the information from the unicast distance
vector tables. Multicast distance vector routing uses source-based trees, but the router
never actually makes a routing table. When a router receives a multicast packet, it forwards
the packet as though it is consulting a routing table. To accomplish this, the multicast
distance vector algorithm uses a process based on four decision-making strategies.

Flooding: Flooding is the first strategy that broadcasts packets, but creates loops in the
systems.
Reverse Path Forwarding (RPF). RPF is a modified flooding strategy. To prevent loops, only
one copy is forwarded; the other copies are dropped.
Reverse Path Broadcasting (RPB). RPB guarantees that each network receives only one copy
of the multicast packet without formation of loops.
Reverse Path Multicasting (RPM). As you may have noticed, RPB does not multicast the
packet, it broadcasts it. This is not efficient. To increase efficiency, the multicast packet must
reach only those networks that have active members for that particular group. This is called
reverse path multicasting (RPM).

CBT
The Core-Based Tree (CBT) protocol is a group-shared protocol that uses a core as the root
of the tree. The autonomous system is divided into regions, and a core (center router or
rendezvous router) is chosen for each region.

PIM
Protocol Independent Multicast (PIM) is the name given to two independent multicasts
routing protocols: Protocol Independent Multicast, Dense Mode (PIM-DM) and Protocol
Independent Multicast, Sparse Mode (PIM-SM).
PIM-DM is used when there is a possibility that each router is involved in multicasting (dense
mode). In this environment, the use of a protocol that broadcasts the packet is justified
because almost all routers are involved in the process.
PIM-SM is used when there is a slight possibility that each router is involved in multicasting
(sparse mode). In this environment, the use of a protocol that broadcasts the packet is not
justified; a protocol such as CBT that uses a group-shared tree is more appropriate.
25) Explain distance vector routing algorithm. (16410039)
Distance vector routing:
In distance vector routing, the least-cost route between any two nodes is the route
with minimum distance. In this protocol, as the name implies, each node maintains a vector
(table) of minimum distances to every node. The table at each node also guides the packets
to the desired node by showing the next stop in the route (next-hop routing).

Initialization:
a) The tables in the figure are stable.
b) Each node knows how to reach any node and the cost.
c) At the beginning, each node knows only the cost to itself and to its immediate
neighbours (those nodes directly connected to it).
d) Assume that each node sends a message to its immediate neighbours and finds the
distance between itself and these neighbours.
e) The distance for any entry that is not a neighbour is marked as infinite (unreachable).

Sharing:
a) The idea is to share information between neighbours.
b) Node A does not know the distance to E, but node C does.
c) If node C shares its routing table with A, node A can also learn how to reach node E.
d) On the other hand, node C does not know how to reach node D, but node A does.
e) If node A shares its routing table with C, then node C can also learn how to reach
node D.
f) Nodes A and C are immediate neighbours and can improve their routing tables if they
help each other.

26) Explain Link State Routing Algorithm. (1641040)


Link state routing algorithm is a routing method used by dynamic routers in which every
router maintains a database of its individual autonomous system (AS) topology. The Open
Shortest Path First (OSPF) routing protocol uses the link state routing algorithm to allow
OSPF routers to exchange routing information with each other.
How It Works
An AS or routing domain is a group of networks that use the same routing protocol and are
under common administration. All routers in an AS have identical link state databases,
which contain information about each router’s local state. Routers distribute their local
state by using link state advertisements (LSAs), which contain information about neighbors
and route costs. From these LSAs, each router builds a hierarchical tree containing least-cost
paths to other networks, with the router itself as the root of the tree. Least-cost paths are
determined by preassigned factors such as the number of hops between routers, the speeds
of the network links connecting them, and traffic flow patterns.
The link state routing algorithm used by the OSPF protocol offers the following advantages
over the distance vector routing algorithm used by the Routing Information Protocol (RIP):
RIP routers exchange their entire routing table on a periodic basis, adding to overall network
traffic, while OSPF routers exchange only routing table updates.
RIP routers use only the single metric hop count to create their routing tables, while OSPF
routers can also use link speeds and traffic patterns to establish cost values for routing
traffic.
On the other hand, OSPF requires considerably more processing on the part of the router,
making it more expensive to implement. OSPF is also more complex to configure than RIP.

27) How Congestion Control is done in TCP? What are the techniques used, explain in
short. Draw TCP congestion policy summary figure. (1641041)
TCP’s congestion management is window-based; that is, TCP adjusts its window size to
adapt to congestion. The window size can be thought of as the number of packets out there
in the network; more precisely, it represents the number of packets and ACKs either in
transit or enqueued. An alternative approach often used for real-time systems is rate-based
congestion management, which runs into an unfortunate difficulty if the sending rate
momentarily happens to exceed the available rate.
Congestion policy in TCP –
1. Slow Start Phase: starts slowly increment is exponential to threshold
2. Congestion Avoidance Phase: After reaching the threshold increment is by 1
3. Congestion Detection Phase: Sender goes back to slow start phase or Congestion
avoidance phase.
Slow Start Phase: exponential increment – In this phase after every RTT the congestion
window size increments exponentially.
Congestion Avoidance Phase: additive increment – This phase starts after the threshold
value, also denoted as ssthresh, is reached. The size of cwnd (congestion window) increases
additively. After each RTT, cwnd = cwnd + 1.
Congestion Detection Phase: multiplicative decrement – If congestion occurs, the
congestion window size is decreased. The only way a sender can guess that congestion has
occurred is the need to retransmit a segment. Retransmission is needed to recover a missing
packet which is assumed to have been dropped by a router due to congestion. Retransmission
can occur in one of two cases: when the RTO timer times out or when three duplicate ACKs
are received.
 Case 1: Retransmission due to Timeout – In this case congestion possibility is high.
(a) ssthresh is reduced to half of the current window size.
(b) set cwnd = 1
(c) start with slow start phase again.
 Case 2: Retransmission due to 3 Acknowledgement Duplicates – In this case
congestion possibility is less.
(a) ssthresh value reduces to half of the current window size.
(b) set cwnd= ssthresh
(c) start with congestion avoidance phase
TCP congestion policy summary figure
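The policy summarized above can be sketched as a per-RTT update function in Python. This is a simplified model of the textbook rules, not a real TCP stack; the function name and event labels are illustrative assumptions:

```python
def next_cwnd(cwnd, ssthresh, event):
    """One RTT of the simplified TCP congestion policy.
    event: 'ack' (all segments acknowledged), 'timeout',
    or '3dup' (three duplicate ACKs).
    Returns (new_cwnd, new_ssthresh), in segments."""
    if event == "timeout":            # congestion likely: restart slow start
        return 1, max(cwnd // 2, 1)
    if event == "3dup":               # mild congestion: halve and avoid
        half = max(cwnd // 2, 1)
        return half, half
    if cwnd < ssthresh:               # slow start: exponential increment
        return min(cwnd * 2, ssthresh), ssthresh
    return cwnd + 1, ssthresh         # congestion avoidance: additive

# Starting from cwnd = 1 with ssthresh = 8, four loss-free RTTs
# grow the window 1 -> 2 -> 4 -> 8 (slow start) -> 9 (avoidance).
cwnd, ssthresh = 1, 8
for _ in range(4):
    cwnd, ssthresh = next_cwnd(cwnd, ssthresh, "ack")
print(cwnd)  # 9
```

The two loss cases mirror the text: a timeout resets cwnd to 1 and re-enters slow start, while three duplicate ACKs set cwnd to the new ssthresh and continue in congestion avoidance.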

28) Explain distance vector routing in detail? (1641042)


A least-cost tree is a combination of least-cost paths from the root of the tree to all
destinations. These paths are graphically glued together to form the tree. Distance-vector
routing unglues these paths and creates a distance vector, a one-dimensional array to
represent the tree.
A distance vector does not give the path to the destinations as the least-cost tree does; it
gives only the least costs to the destinations. In an internet each node, when it is booted,
creates a very rudimentary distance vector with the minimum information the node can
obtain from its neighborhood. The node sends some greeting messages out of its interfaces
and discovers the identity of the immediate neighbors and the distance between itself and
each neighbor. It then makes a simple distance vector by inserting the discovered distances
in the corresponding cells and leaves the value of other cells as infinity.
Distance-Vector Routing Algorithm for a Node

Distance_Vector_Routing ( )
{
    // Initialize (create the initial vector for the node)
    D[myself] = 0
    for (y = 1 to N)
    {
        if (y is a neighbor)
            D[y] = c[myself][y]
        else
            D[y] = ∞
    }
    send vector {D[1], D[2], …, D[N]} to all neighbors

    // Update (improve the vector with the vector received from a neighbor)
    repeat (forever)
    {
        wait (for a vector Dw from a neighbor w or any change in the link)
        for (y = 1 to N)
        {
            D[y] = min [D[y], (c[myself][w] + Dw[y])]   // Bellman-Ford equation
        }
        if (any change in the vector)
            send vector {D[1], D[2], …, D[N]} to all neighbors
    }
} // End of Distance_Vector_Routing
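The update step of the pseudocode, the Bellman-Ford equation, can be expressed as runnable Python; the node names, indices, and link costs below are a made-up example:

```python
INF = float("inf")

def update_vector(D, cost_to_w, Dw):
    """Bellman-Ford update from the pseudocode: merge neighbor w's
    advertised vector Dw into our vector D, given the link cost to w.
    D is modified in place; returns True if anything changed."""
    changed = False
    for y in range(len(D)):
        via_w = cost_to_w + Dw[y]        # cost of reaching y through w
        if via_w < D[y]:
            D[y] = via_w
            changed = True
    return changed

# Node A (index 0) with direct links to B (cost 2) and C (cost 5);
# A has no route to D (index 3) yet. B advertises its own vector,
# which includes a route to D of cost 3.
D_A = [0, 2, 5, INF]
D_B = [2, 0, 1, 3]
update_vector(D_A, 2, D_B)
print(D_A)  # [0, 2, 3, 5] — A now reaches C cheaper via B, and D via B
```

When the update changes the vector, the node would re-advertise it to all neighbors, which is how routes propagate hop by hop through the network.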

29) Explain the general principles of congestion control. (1641043)

Congestion in a network may occur if the load on the network (the number of packets sent
to the network) is greater than the capacity of the network (the number of packets the
network can handle). Congestion control refers to the mechanisms and techniques used to
control the congestion and keep the load below the capacity.
When too many packets are present in (a part of) the subnet, performance degrades. This
situation is called congestion. As traffic increases too far, the routers are no longer able to
cope and they begin losing packets. At very high traffic, performance collapses completely
and almost no packets are delivered.
General Principles of Congestion Control:

 Three Step approach to apply congestion control:

1. Monitor the system - detect when and where congestion occurs.


2. Pass information to where action can be taken.
3. Adjust system operation to correct the problem.

 The subnet for congestion should be monitored by:

1) Percentage of all packets discarded for lack of buffer space.


2) Average queue lengths
3) Number of packets that time out and are retransmitted
4) Average packet delay
5) Standard deviation of packet delay (jitter Control).

 Knowledge of congestion will cause the hosts to take appropriate action to reduce
the congestion.
 For a scheme to work correctly, the time scale must be adjusted carefully.
 If every time two packets arrive in a row, a router yells STOP and every time a router
is idle for 20 µsec, it yells GO, the system will oscillate wildly and never converge.
 Dividing all algorithms into open loop or closed loop
 They further divide the open loop algorithms into ones that act at the source versus
ones that act at the destination.
 The closed loop algorithms are also divided into two subcategories:
Explicit feedback & implicit feedback
 In explicit feedback algorithms, packets are sent back from the point of congestion to
warn the source.
 In implicit algorithms, the source deduces the existence of congestion by making
local observations, such as the time needed for acknowledgements to come back.

The presence of congestion means that the load is (temporarily) greater than the
resources can handle. Hence the solution is to increase the resources or decrease the load.
(That is not always possible. So we have to apply some congestion prevention policy.)

30) Explain congestion in a network with an example. (1641045)


A state occurring in network layer when the message traffic is so heavy that it slows down
network response time.
Effects of Congestion
As delay increases, performance decreases.
If delay increases, retransmission occurs, making situation worse
Example: It can be much easier and less costly to increase capacity in communications
networks than in road networks. Also, communications network traffic can be compressed
in many cases, whereas road traffic cannot. In addition, communications network traffic can
often be rerouted at essentially zero cost (when alternative routes exist), whereas large
costs (particularly in terms of lost time) can be incurred from rerouting road traffic. Various
techniques have likewise been developed in attempt to minimize congestion collapse in
communications networks. In addition to increasing capacity and data compression, they
include protocols for informing transmitting devices about the current levels of network
congestion and having them reroute or delay their transmissions according to congestion
levels.

31) Provide the classification of Congestion Control. Explain Closed Loop techniques with
details. (1641046)
Congestion control refers to the techniques and mechanisms that can either prevent
congestion before it happens or remove congestion after it has happened. In
general, we can divide congestion control mechanisms into two broad categories:
Open-loop congestion control (prevention) and closed-loop congestion control
(removal)

Closed loop Congestion Control:


These mechanisms try to alleviate congestion after it happens. Several mechanisms
have been used by different protocols. We describe a few of them here.
Backpressure: The technique of backpressure refers to a congestion control
mechanism in which a congested mode stops receiving data from the immediate
upstream node or nodes. This may cause the upstream nodes to become congested
and they in turn reject data from their upstream nodes. Backpressure is a node to
node congestion control that starts with a node and propagates in the opposite
direction of data flow to the source. The backpressure technique can be applied only
to virtual circuit networks in which each node knows the upstream node from which
a flow of data is coming.

Node 3 in the figure has more input data than it can handle. It drops some packets
and informs node 2 to slow down. Node 2 in turn may become congested, and it
informs node 1 to slow down. Node 1 may also become congested, and if so it informs
the source to slow down. The pressure on node 3 is thus moved backward to the
source to remove the congestion.
Choke Packet: A choke packet is a packet sent by a node to the source to inform it of
congestion. Note the difference between the backpressure and choke packet
methods. In backpressure the warning is from one node to its upstream node,
although the warning may eventually reach the source station. In the choke packet
method the warning is from the router, which has encountered congestion to the
source station directly. The intermediate node through which the packets have
travelled are not warned.

Implicit Signalling: In implicit signalling there is no communication between the
congested node or nodes and the source. The source guesses that there is
congestion somewhere in the network from other symptoms: for example, when a
source sends several packets and there is no acknowledgement for a while, one
assumption is that the network is congested. The source then slows down.
Explicit Signalling: The node that experiences congestion can explicitly send a signal
to the source or destination. The explicit signalling method however is different from
the choke packet method. In the choke packet method a separate packet is used for
this purpose; in the explicit signalling method the signal is included in the packets
that carry data.
32) Explain priority and FIFO Queueing in QoS. (1641047)
FIFO:
First In First Out (FIFO) is a fair queuing method: the first packet to get to the router
will be the first packet to be sent out. With FIFO there is only a single queue in each
direction, one for received traffic and one for traffic being sent out of the router. This
is essentially the best-effort queuing strategy, which gives no priority to any traffic
type and is not recommended for voice and video application deployments. The
default form of queuing on nearly all interfaces is First-In First-Out (FIFO).

Priority Queuing:
There are 4 Queues of Traffic in Priority queuing, and you define what type of traffic
goes into these queues. The 4 types of Queues are based on Priorities which are
High, Medium, Normal and Low Priority Queue.
This is how Priority Queuing works, as long as there is traffic in High Queue, the
other Queues will be neglected, the next to be processed will be traffic in Medium
Queue and as long as there is traffic in Medium Queue, the traffic in normal and low
queues will be neglected. Also, while serving the traffic in Medium Queue, if the
router receives traffic in High Queue, then High Queue will be processed and unless
all traffic has cleared the High Queue the router will not go back to Medium Queue.
This could result in resource starvation for the traffic arriving and sitting on the lower
priority queues like the normal and low queues.
Priority Queuing is a strict Priority method, which will always prefer the traffic in
High Priority Queue to other queues, and the order of processing is
High Priority Queue > Medium Priority Queue > Normal Priority Queue>Low Queue
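The strict-priority behaviour described above can be sketched in Python; the queue contents and function name are illustrative:

```python
from collections import deque

def dequeue_strict(queues):
    """Strict priority dequeue: always serve the highest-priority
    non-empty queue. `queues` is ordered high -> low."""
    for q in queues:
        if q:
            return q.popleft()
    return None   # all queues empty

high, medium, normal, low = deque(), deque(["m1"]), deque(["n1"]), deque(["l1"])
queues = [high, medium, normal, low]
print(dequeue_strict(queues))  # 'm1' — high is empty, so medium is served
high.append("h1")              # high-priority traffic arrives mid-service
print(dequeue_strict(queues))  # 'h1' — high preempts all lower queues
```

The sketch makes the starvation risk visible: as long as anything keeps arriving in `high`, the lower queues are never reached.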
