Unit 5 CN
1) Describe distance vector routing protocol with one valid example. Explain working
of RIP. (1641038)
In distance vector routing, the least-cost route between any two nodes is the route with
minimum distance. In this protocol, each node maintains a vector (table) of minimum
distances to every node. The table at each node also guides the packets to the desired node
by showing the next stop in the route (next-hop routing).
Initialization
At the beginning, each node can know only the distance between itself and its immediate
neighbors, those directly connected to it. So, for the moment, we assume that each node
can send a message to the immediate neighbors and find the distance between itself and
these neighbors.
Sharing
The whole idea of distance vector routing is the sharing of information between neighbors.
Although node A does not know about node E, node C does. So, if node C shares its routing
table with A, node A can also know how to reach node E. On the other hand, node C does
not know how to reach node D, but node A does. If node A shares its routing table with
node C, node C also knows how to reach node D. In other words, nodes A and C, as
immediate neighbors, can improve their routing tables if they help each other.
There is one question, though: how much of the table must be shared? The third column (next hop) is not useful to a neighbor, so a node can send only the first two columns (destination and cost) of its table to any neighbor.
In simple words, in distance vector routing each node shares its routing table with its immediate neighbors periodically and whenever there is a change.
Updating
When a node receives a two-column table from a neighbor, it needs to update its routing
table. Updating takes three steps:
1. The receiving node needs to add the cost between itself and the sending node to
each value in the second column.
2. The receiving node needs to add the name of the sending node to each row as the third column if the receiving node uses information from any row. The sending node is the next node in the route.
3. The receiving node needs to compare each row of its old table with the corresponding row of the modified version of the received table.
a. If the next-node entry is different, the receiving node chooses the row with the smaller cost. If there is a tie, the old one is kept.
b. If the next-node entry is the same, the receiving node chooses the new row.
When to Share?
The table is sent both periodically and when there is a change in the table.
Periodic Update: A node sends its routing table, normally every 30 s, in a periodic update. The period depends on the protocol that is using distance vector routing.
Triggered update: A node sends its two-column routing table to its neighbors anytime there is a change in its routing table.
A problem with distance vector routing is instability, which means that a network using this
protocol can become unstable.
At the beginning, both nodes A and B know how to reach node X. But suddenly, the link
between A and X fails. Node A changes its table. If A can send its table to B immediately,
everything is fine. However, the system becomes unstable if B sends its routing table to A
before receiving A's routing table. Node A receives the update and, assuming that B has
found a way to reach X, immediately updates its routing table. Based on the triggered
update strategy, A sends its new update to B. Now B thinks that something has been
changed around A and updates its routing table. The cost of reaching X increases gradually
until it reaches infinity. At this moment, both A and B know that X cannot be reached.
However, during this time the system is not stable. Node A thinks that the route to X is via B;
node B thinks that the route to X is via A. If A receives a packet destined for X, it goes to B
and then comes back to A. Similarly, if B receives a packet destined for X, it goes to A and
comes back to B. Packets bounce between A and B, creating a two-node loop problem. A
few solutions have been proposed for instability of this kind.
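To make the count-to-infinity behaviour concrete, here is a toy Python simulation of the two-node loop. It is purely illustrative; the starting costs are assumed, and it borrows RIP's convention of treating 16 as infinity so the loop terminates:

```python
INFINITY = 16                        # RIP-style infinity

# Cost to reach X as seen by A and B just after the A-X link fails.
cost_A, cost_B = INFINITY, 2         # B still holds its stale route via A
link_AB = 1                          # cost of the A-B link

round_no = 0
while cost_A < INFINITY or cost_B < INFINITY:
    cost_A = min(INFINITY, cost_B + link_AB)   # A trusts B's stale route
    cost_B = min(INFINITY, cost_A + link_AB)   # B raises its cost in turn
    round_no += 1
    print(f"round {round_no}: A={cost_A}, B={cost_B}")
# Costs climb 3, 4, 5, ... until both reach 16; only then is X unreachable.
```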
Three-Node Instability
Suppose, after finding that X is not reachable, node A sends a packet to B and C to inform them of the situation. Node B immediately updates its table, but the packet to C is lost in the network and never reaches C. Node C remains in the dark and still thinks that there is a route to X via A with a distance of 5. After a while, node C sends to B its routing table, which
includes the route to X. Node B is totally fooled here. It receives information on the route to
X from C, and according to the algorithm, it updates its table, showing the route to X via C
with a cost of 8. This information has come from C, not from A, so after a while node B may
advertise this route to A. Now A is fooled and updates its table to show that A can reach X
via B with a cost of 12. Of course, the loop continues; now A advertises the route to X to C,
with increased cost, but not to B. Node C then advertises the route to B with an increased
cost. Node B does the same to A. And so on. The loop stops when the cost in each node
reaches infinity.
RIP (Routing Information Protocol)
The Routing Information Protocol (RIP) is an intradomain routing protocol used inside an
autonomous system. It is a very simple protocol based on distance vector routing. RIP
implements distance vector routing directly with some considerations:
1. In an autonomous system, we are dealing with routers and networks (links). The
routers have routing tables; networks do not.
2. The destination in a routing table is a network, which means the first column
defines a network address.
3. The metric used by RIP is very simple; the distance is defined as the number of
links (networks) to reach the destination. For this reason, the metric in RIP is called a
hop count.
4. Infinity is defined as 16, which means that any route in an autonomous system
using RIP cannot have more than 15 hops.
5. The next-node column defines the address of the router to which the packet is to
be sent to reach its destination.
Working of RIP
Consider an autonomous system with seven networks and four routers; the table of each router is also shown in the figure. Let us look at the routing table for R1. The table has seven entries to show how to reach each network in the autonomous system. Router R1 is directly connected to networks 130.10.0.0 and 130.11.0.0, which means that there are no next-hop entries for these two networks. To send a packet to one of the three networks at the far left, router R1 needs to deliver the packet to R2. The next-node entry for these three networks is the interface of router R2 with IP address 130.10.0.1. To send a packet to the two networks at the far right, router R1 needs to send the packet to the interface of router R4 with IP address 130.11.0.1. The other tables can be explained similarly.
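Since RIP is distance vector routing with a hop-count metric and 16 as infinity, the update rule specializes as sketched below. This is a hedged illustration; the function name and the network 130.12.0.0 in the demo are assumptions:

```python
RIP_INFINITY = 16   # a route of 16 hops is unreachable

def process_rip_response(table, sender_addr, advertised):
    """Merge a neighbor's advertised (network -> hop_count) pairs.

    table maps network prefix -> (hop_count, next_hop);
    next_hop is None for directly connected networks.
    """
    for network, hops in advertised.items():
        new_hops = min(hops + 1, RIP_INFINITY)   # one more hop via the sender
        old_hops, old_next = table.get(network, (RIP_INFINITY, None))
        if old_next == sender_addr or new_hops < old_hops:
            table[network] = (new_hops, sender_addr)

# R1 starts with its two directly connected networks:
r1 = {"130.10.0.0": (1, None), "130.11.0.0": (1, None)}
process_rip_response(r1, "130.10.0.1", {"130.12.0.0": 1})  # learned via R2
print(r1["130.12.0.0"])   # (2, '130.10.0.1')
```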
2) What is the shortest path tree, source-based tree, and group shared tree? Explain
with valid examples. (1641039)
Group Shared Tree: In this approach, instead of each router having m shortest path
trees, only one designated router, called the core (or center, or rendezvous) router, takes
the responsibility of distributing multicast traffic. The core has m shortest path trees
in its routing table. The rest of the routers in the domain have none. If a router
receives a multicast packet, it encapsulates the packet in a unicast packet and sends
it to the core router. The core router removes the multicast packet from its capsule
and consults its routing table to route the packet.
In the group shared tree approach, only the core router, which has a shortest path
tree for each group, is involved in multicasting.
3) Provide the classification of Congestion Control. Explain Open Loop techniques
with details.(1641040)
Congestion is an important issue that can arise in a packet-switched network. Congestion is a situation in communication networks in which too many packets are present in a part of the subnet and performance degrades. Congestion in a network may occur when the load on the network (i.e. the number of packets sent to the network) is greater than the capacity of the network (i.e. the number of packets a network can handle).
In other words, when too much traffic is offered, congestion sets in and performance degrades sharply.
Causes of Congestion:
• The input traffic rate exceeds the capacity of the output lines.
• The routers are too slow to perform bookkeeping tasks (queuing buffers, updating tables,
etc.).
• The routers' buffer is too limited.
• Congestion in a subnet can occur if the processors are slow.
• Bursty traffic.
Congestion Control:
Congestion Control refers to techniques and mechanisms that can either prevent
congestion, before it happens, or remove congestion, after it has happened. Congestion
control mechanisms are divided into two categories, one category prevents the congestion
from happening and the other category removes congestion after it has taken place.
I) Closed Loop Congestion Control:
• Closed loop congestion control mechanisms try to remove the congestion after it happens.
• They use some kind of feedback.
II) Open Loop Congestion Control:
• In Open Loop Congestion Control, policies are used to prevent the congestion before it
happens.
• Congestion control is handled either by the source or by the destination.
• The various methods used for open loop congestion control are:
Retransmission Policy
• The sender retransmits a packet if it feels that the packet it has sent is lost or corrupted. Retransmission in general may increase congestion, so retransmission timers must be designed to prevent congestion while optimizing efficiency.
Window Policy
• To implement window policy, the selective reject window method is used for congestion control.
• The selective reject method is preferred over the Go-back-n window because in the Go-back-n method, when the timer for a packet times out, several packets are resent, although some may have arrived safely at the receiver. Thus, this duplication may make congestion worse.
• The selective reject method resends only the specific lost or damaged packets.
Acknowledgement Policy
• The acknowledgement policy imposed by the receiver may also affect congestion.
• If the receiver does not acknowledge every packet it receives, it may slow down the sender and help prevent congestion.
• Acknowledgments also add to the traffic load on the network. Thus, by sending fewer
acknowledgements we can reduce load on the network.
• To implement it, several approaches can be used:
1. A receiver may send an acknowledgement only if it has a packet to be sent.
2. A receiver may send an acknowledgement when a timer expires.
3. A receiver may also decide to acknowledge only N packets at a time.
Discarding Policy
• A router may discard less sensitive packets when congestion is likely to happen.
• Such a discarding policy may prevent congestion and at the same time may not harm the
integrity of the transmission.
Admission Policy
• An admission policy, which is a quality-of-service mechanism, can also prevent congestion
in virtual circuit networks.
• Switches in a flow first check the resource requirement of a flow before admitting it to the
network.
• A router can deny establishing a virtual circuit connection if there is congestion in the network or if there is a possibility of future congestion.
4) Explain flow characteristics. Also explain Reliability, Delay, Jitter and Bandwidth
with example. (1641041)
Traditionally, four types of characteristics are attributed to a flow: reliability, delay, jitter and
bandwidth.
Reliability
• Reliability is an important characteristic of flow.
• Lack of reliability means losing a packet or acknowledgement which then requires
retransmission.
• However, the sensitivity of application programs to reliability is not the same. For example,
it is more important that electronic mail, file transfer, and internet access have reliable
transmissions than audio conferencing or telephony.
Delay
• Source to destination delay is another flow characteristic.
• Applications can tolerate delay in different degrees.
• In this case, telephony, audio conferencing, video conferencing and remote log in need
minimum delay while delay in file transfer or e-mail is less important.
Jitter
• Jitter is defined as the variation in delay for packets belonging to the same flow.
• High Jitter means the difference between delays is large and low jitter means the variation
is small.
• For example, if four packets depart at times 0, 1, 2, 3 and arrive at 20, 21, 22, 23, all have the same delay, 20 units of time. On the other hand, if the above four packets arrive at 21, 23, 21, and 28, they will have different delays of 21, 22, 19 and 25.
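The arithmetic in the example can be checked with a few lines of Python; the max-min spread used here is just one simple way of quantifying jitter:

```python
departures = [0, 1, 2, 3]
arrivals_a = [20, 21, 22, 23]   # every packet has the same delay: no jitter
arrivals_b = [21, 23, 21, 28]   # delays differ from packet to packet

for arrivals in (arrivals_a, arrivals_b):
    delays = [a - d for a, d in zip(arrivals, departures)]
    print(delays, "spread:", max(delays) - min(delays))
# [20, 20, 20, 20] spread: 0
# [21, 22, 19, 25] spread: 6
```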
Bandwidth
• Different applications need different bandwidths.
• In video conferencing we need to send millions of bits per second to refresh a color screen
while the total number of bits in an email may not reach even a million.
5) What is the function of routing algorithm? How are they classified? Explain flooding
algorithm? (1641042)
Routing protocols have been created in response to the demand for dynamic routing
tables.
A routing protocol is a combination of rules and procedures that lets routers in the
internet inform each other of changes.
It allows routers to share whatever they know about the internet or their
neighborhood.
The routing protocols also include procedures for combining information received
from other routers.
Classification of routing protocols:
1) Adaptive Routing Algorithm: These algorithms change their routing decisions to reflect
changes in the topology and in traffic as well. They get their routing information from
adjacent routers or from all routers. The optimization parameters are the distance,
number of hops and estimated transit time.
i) Centralized
ii) Isolated
iii) Distributed
2) Non-Adaptive Routing Algorithm: These algorithms do not base their routing decisions
on measurements and estimates of the current traffic and topology. Instead the route to
be taken in going from one node to the other is computed in advance, off-line, and
downloaded to the routers when the network is booted. This is also known as static
routing.
i) Flooding
ii) Random walk
Flooding Algorithm:
A router receives a packet and, without even looking at the destination group address, sends
it out from every interface except the one from which it was received. This accomplishes the
first goal of multicasting: every network with active members receives the packet; however,
networks with no active members receive it as well. Flooding broadcasts packets, but creates
loops in the system: a packet that has left the router may come back again from another
interface or the same interface and be forwarded again. Some flooding protocols keep a copy
of the packet for a while and discard any duplicates to avoid loops.
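A hedged Python sketch of this behaviour follows; the Router class, its fields and the interface labels are illustrative, not taken from any real implementation:

```python
from dataclasses import dataclass, field

@dataclass
class Router:
    name: str
    interfaces: list                      # interface labels, e.g. ["if0", "if1"]
    seen: set = field(default_factory=set)

    def flood(self, packet_id, incoming=None):
        """Send the packet out of every interface except the incoming one.
        Remembering seen packet IDs lets duplicates be discarded, avoiding loops."""
        if packet_id in self.seen:
            return                        # duplicate: drop instead of re-flooding
        self.seen.add(packet_id)
        for iface in self.interfaces:
            if iface != incoming:
                print(f"{self.name}: forwarding packet {packet_id} out of {iface}")

r = Router("R1", ["if0", "if1", "if2"])
r.flood(42, incoming="if0")   # forwarded out of if1 and if2
r.flood(42, incoming="if1")   # seen before: silently dropped
```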
6) Write a short note on Scheduling. How many types of Queuing are done in
networks? Explain each with clear figures. (1641043)
In scheduling, packets from different flows arrive at a switch or router for processing. A good scheduling technique treats different flows in a fair manner. Several scheduling techniques are designed to improve the quality of service.
The three types of queuing discussed here are:
1.) FIFO queuing
2.) Priority queuing
3.) Weighted fair queuing
1.) FIFO queuing
In first-in, first-out (FIFO) queuing, packets wait in a buffer (queue) until the router or switch is ready to process them. If the average arrival rate is higher than the average processing rate, the queue will fill up and new packets will be discarded (a minimal sketch of this tail-drop behaviour follows this answer).
2.) Priority queuing
In priority queuing, packets are first assigned to a priority class. Each priority class has its own queue. The packets in the highest-priority queue are processed first. Packets in the lowest-priority queue are processed last.
(Note: The system does not stop serving a queue until it is empty.)
3.) Weighted fair queuing
In weighted fair queuing, packets are still assigned to different queues, but the queues are weighted by priority and served in round-robin fashion in proportion to their weights (see Q9 below for details).
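As promised above, a minimal sketch of FIFO queuing with tail drop; the class and parameter names are illustrative:

```python
from collections import deque

class FIFOQueue:
    """FIFO queue with tail drop: new packets are discarded when it is full."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.queue = deque()

    def arrive(self, packet):
        if len(self.queue) >= self.capacity:
            return False                 # queue full: the packet is discarded
        self.queue.append(packet)
        return True

    def process(self):
        return self.queue.popleft() if self.queue else None
```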
7) Explain link state routing.
Link state protocols, such as IS-IS and OSPF, rely on each router in the network to
advertise the state of each of its links to every other router within the local routing
domain. The result is a complete network topology map compiled by each router in
the network, from which a shortest path tree is then built. As a router receives an
advertisement, it stores this information in a local database, typically referred to as
the link state database, and passes the information on to each of its adjacent peers.
This information is not processed or manipulated in any way before it is passed on;
the link state information is flooded through the routing domain unchanged, just as
the originating router advertised it.
As each router builds a complete database of the link state information as
advertised by every other router within the network, it uses an algorithm, called the
shortest path first algorithm, to build a tree with itself as the center of that tree. The
shortest path to each reachable destination within the network is found by
traversing the tree. The most common shortest path first algorithm is the Dijkstra
algorithm.
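A compact Python sketch of the shortest path first (Dijkstra) computation, under the assumption that the link state database has already been reduced to a neighbor-cost map:

```python
import heapq

def shortest_path_tree(graph, source):
    """Dijkstra: least-cost route from `source` to every other router.

    graph maps node -> {neighbor: link_cost};
    returns node -> (cost, previous node on the tree).
    """
    tree = {source: (0, None)}
    frontier = [(0, source)]
    while frontier:
        cost, node = heapq.heappop(frontier)
        if cost > tree[node][0]:
            continue                          # stale queue entry, skip it
        for neighbor, link in graph[node].items():
            new_cost = cost + link
            if neighbor not in tree or new_cost < tree[neighbor][0]:
                tree[neighbor] = (new_cost, node)
                heapq.heappush(frontier, (new_cost, neighbor))
    return tree

g = {"A": {"B": 2, "C": 5}, "B": {"A": 2, "C": 1}, "C": {"A": 5, "B": 1}}
print(shortest_path_tree(g, "A"))   # {'A': (0, None), 'B': (2, 'A'), 'C': (3, 'B')}
```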
When a link-state router boots, it first needs to discover the routers to which it is
directly connected. For this, each router sends a HELLO message every N seconds on
all of its interfaces. This message contains the router's address; each router has a
unique address. As its neighboring routers also send HELLO messages, the router
automatically discovers its neighbors. These HELLO messages are only sent to
directly connected neighbors, and a router never forwards the HELLO messages that
it receives. HELLO messages are also used to detect link and router failures: a link is
considered to have failed if no HELLO message has been received from the
neighboring router for a period of k × N seconds.
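The dead-interval rule is easy to sketch. The values of N and k below are assumptions chosen for illustration; real protocols make them configurable:

```python
import time

HELLO_INTERVAL = 10      # N: seconds between HELLO messages (assumed)
DEAD_MULTIPLIER = 4      # k: missed intervals before declaring failure (assumed)

last_hello = {}          # neighbor address -> time its last HELLO arrived

def receive_hello(neighbor):
    last_hello[neighbor] = time.monotonic()

def failed_neighbors():
    """A link is declared down if no HELLO arrived for k * N seconds."""
    deadline = DEAD_MULTIPLIER * HELLO_INTERVAL
    now = time.monotonic()
    return [n for n, t in last_hello.items() if now - t > deadline]
```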
8) Discuss traffic shaping mechanism with neat diagrams. (1641046)
Traffic shaping is a mechanism to control the amount and rate of traffic sent to the
network. Two techniques can shape traffic: Leaky bucket and token bucket.
Leaky Bucket:
If a bucket has a small hole at the bottom, the water leaks from the bucket at a
constant rate as long as there is water in the bucket. The rate at which the water
leaks does not depend on the rate at which the water is input to the bucket unless
the bucket is empty. The input rate can vary, but the output remains constant.
Similarly in networking, a technique called leaky bucket can smooth out bursty
traffic. Bursty chunks are stored in the bucket and sent out at an average rate.
In the figure we assume that the network has committed a bandwidth of 3 Mbps for
a host. The host sends a burst of data at a rate of 12 Mbps for 2 s, for a total of
24 Mbits of data. The host is silent for 5 s and then sends data at a rate of 2 Mbps for
3 s, for a total of 6 Mbits of data. In all, the host has sent 30 Mbits of data in 10 s. The
leaky bucket smooths the traffic by sending out data at a rate of 3 Mbps for the
same 10 s. Without the leaky bucket, the burst of data could have hurt the network
by consuming more bandwidth than was set aside for this host.
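A minimal leaky bucket sketch for the fixed-size packet case (for variable-length packets the per-tick budget would be counted in bytes or bits); the class and its parameters are illustrative:

```python
from collections import deque

class LeakyBucket:
    """Arrivals may be bursty, but at most `rate` packets leave per clock tick;
    packets arriving at a full bucket are dropped."""

    def __init__(self, capacity, rate):
        self.bucket = deque()
        self.capacity = capacity     # how many packets the bucket can hold
        self.rate = rate             # packets released per tick

    def arrive(self, packet):
        if len(self.bucket) < self.capacity:
            self.bucket.append(packet)   # otherwise the bucket overflows

    def tick(self):
        released = [self.bucket.popleft()
                    for _ in range(min(self.rate, len(self.bucket)))]
        return released              # constant-rate output
```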
Token bucket:
The leaky bucket is very restrictive. It does not credit an idle host. For example, if a
host is not sending for a while, its bucket becomes empty. Now if the host has bursty
data, the leaky bucket allows only an average rate; the time when the host was idle
is not taken into account. On the other hand, the token bucket algorithm allows idle
hosts to accumulate credit for the future in the form of tokens. For each tick of the
clock, the system sends n tokens to the bucket. For example, if n is 100 and the host
is idle for 100 ticks, the bucket collects 10,000 tokens. The host can then consume all
these tokens in one tick with 10,000 cells, or take 1000 ticks with 10 cells per tick. In
other words, the host can send bursty data as long as the bucket is not empty.
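A minimal token bucket sketch that reproduces the numbers in this example; the class name and the capacity value are illustrative:

```python
class TokenBucket:
    """An idle host earns tokens (n per tick) that it can later spend to send
    a burst, one token per cell."""

    def __init__(self, rate, capacity):
        self.rate = rate             # n: tokens added per clock tick
        self.capacity = capacity     # maximum tokens the bucket can hold
        self.tokens = 0

    def tick(self):
        self.tokens = min(self.capacity, self.tokens + self.rate)

    def send(self, cells):
        """Try to send `cells` cells; return how many were actually sent."""
        sent = min(cells, self.tokens)
        self.tokens -= sent
        return sent

tb = TokenBucket(rate=100, capacity=20_000)
for _ in range(100):                 # host stays idle for 100 ticks...
    tb.tick()
print(tb.tokens)                     # ...and collects 10,000 tokens
print(tb.send(10_000))               # which it may burst in a single tick
```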
9) Discuss in detail the different categories of congestion control techniques with an
example? (1641047)
Congestion control refers to the techniques used to control or prevent congestion.
Congestion control techniques can be broadly classified into two categories:
1. Open Loop Congestion Control
2. Closed Loop Congestion Control
The policies used in Open Loop Congestion Control are:
1. Retransmission Policy:
This policy governs the retransmission of packets. If the sender feels that a sent
packet is lost or corrupted, the packet needs to be retransmitted. This retransmission
may increase the congestion in the network.
To prevent congestion, retransmission timers must be designed to prevent congestion
while also optimizing efficiency.
2. Window Policy:
The type of window at the sender side may also affect congestion. In the Go-back-n
window, several packets are resent when a timer expires, although some of them may
have been received successfully at the receiver side. This duplication may increase
congestion in the network and make it worse.
Therefore, the Selective Repeat window should be adopted, as it resends only the
specific packet that may have been lost.
3. Discarding Policy:
A good discarding policy adopted by the routers may prevent congestion and at the
same time not harm the integrity of the transmission: the router partially discards
corrupted or less sensitive packets while maintaining the quality of the message.
In case of audio file transmission, routers can discard less sensitive packets to prevent
congestion while maintaining the quality of the audio file.
4. Acknowledgment Policy:
Since acknowledgments are also part of the load in a network, the acknowledgment
policy imposed by the receiver may also affect congestion. Several approaches can be
used to prevent congestion related to acknowledgments:
The receiver should send an acknowledgment for N packets rather than sending an
acknowledgment for every single packet. The receiver should send an acknowledgment
only if it has a packet to send or a timer expires.
5. Admission Policy:
In admission policy, a mechanism should be used to prevent congestion. Switches in a
flow should first check the resource requirement of a network flow before
transmitting it further. If there is congestion in the network, or a chance of future
congestion, the router should deny establishing a virtual circuit connection to prevent
further congestion.
The techniques used in Closed Loop Congestion Control are:
1. Backpressure:
Backpressure is a technique in which a congested node stops receiving packets from an
upstream node. This may cause the upstream node or nodes to become congested, so that
they in turn reject data from the nodes above them. Backpressure is a node-to-node
congestion control technique that propagates in the opposite direction of data flow. The
backpressure technique can be applied only to virtual circuits, where each node has
information about its upstream node.
In the diagram above, the 3rd node is congested and stops receiving packets; as a result
the 2nd node may get congested due to the slowing down of the output data flow.
Similarly, the 1st node may get congested and inform the source to slow down.
2. Choke Packet Technique:
The choke packet technique is applicable to both virtual circuit networks and datagram
subnets. A choke packet is a packet sent by a node to the source to inform it of
congestion. Each router monitors its resources and the utilization of each of its output
lines. Whenever the resource utilization exceeds a threshold value set by the
administrator, the router directly sends a choke packet to the source, giving it
feedback to reduce the traffic. The intermediate nodes through which the packet has
traveled are not warned about congestion.
3. Implicit Signaling:
In implicit signaling, there is no communication between the congested node or nodes
and the source. The source guesses that there is congestion somewhere in the network.
For example, when a sender sends several packets and there is no acknowledgment for
a while, one assumption is that there is congestion.
4. Explicit Signaling:
In explicit signaling, if a node experiences congestion, it can explicitly send a packet
to the source or destination to inform it about the congestion. The difference between
the choke packet and explicit signaling is that in explicit signaling the signal is
included in the packets that carry data, rather than in a separate packet as in the
choke packet technique. Explicit signaling can occur in either the forward or the
backward direction.
1. Forward Signaling: The signal is sent in the direction of the congestion. The
destination is warned about congestion, and the receiver in this case adopts
policies to prevent further congestion.
2. Backward Signaling: The signal is sent in the opposite direction of the
congestion. The source is warned about congestion and needs to slow down.
Techniques that can be used to improve the quality of service are as follows: scheduling, traffic
shaping, admission control and resource reservation.
Scheduling :
Packets from different flows arrive at a switch or router for processing. A good
scheduling technique treats the different flows in a fair and appropriate manner.
Several scheduling techniques are designed to improve the quality of service. Three
of them are discussed here: FIFO queuing, priority queuing, and weighted fair queuing.
1) FIFO Queuing: In first-in, first-out (FIFO) queuing, packets wait in a buffer (queue)
until the node (router or switch) is ready to process them. If the average arrival rate
is higher than the average processing rate, the queue will fill up and new packets will
be discarded. Figure 9 shows a conceptual view of a FIFO queue.
2) Priority Queuing: In priority queuing, packets are first assigned to a priority class. Each
priority class has its own queue. The packets in the highest-priority queue are processed
first. Packets in the lowest-priority queue are processed last. Note that the system does not
stop serving a queue until it is empty.
Figure 10 shows priority queuing with two priority levels (for simplicity).
A priority queue can provide better QoS than the FIFO queue because higher priority traffic,
such as multimedia, can reach the destination with less delay.
3) Weighted Fair Queuing: A better scheduling method is weighted fair queuing. In this
technique, the packets are still assigned to different classes and admitted to different
queues. The queues, however, are weighted based on the priority of the queues; higher
priority means a higher weight. The system processes packets in each queue in a round-robin fashion, with the number of packets selected from each queue based on the corresponding weight.
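One round of weighted fair queuing can be sketched as follows; the deque-based queues and the weights are illustrative:

```python
from collections import deque

def weighted_fair_round(queues, weights):
    """One round-robin pass: from each queue, process as many packets as its
    weight allows (higher priority -> higher weight)."""
    processed = []
    for queue, weight in zip(queues, weights):
        for _ in range(weight):
            if queue:
                processed.append(queue.popleft())
    return processed

high = deque(["h1", "h2", "h3", "h4"])
low = deque(["l1", "l2", "l3"])
print(weighted_fair_round([high, low], [3, 1]))   # ['h1', 'h2', 'h3', 'l1']
```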
Traffic Shaping :
Traffic shaping is a mechanism to control the amount and the rate of the traffic sent
to the network. Two techniques can shape traffic: leaky bucket and token bucket.
1) Leaky Bucket: A technique called leaky bucket can smooth out bursty traffic.
Bursty chunks are stored in the bucket and sent out at an average rate.
A simple leaky bucket implementation is shown in Figure 11. A FIFO queue holds the
packets. If the traffic consists of fixed-size packets, the process removes a fixed
number of packets from the queue at each tick of the clock. If the traffic consists of
variable-length packets, the fixed output rate must be based on the number of bytes
or bits.
Resource Reservation :
A flow of data needs resources such as a buffer, bandwidth, CPU time, and so on.
The quality of service is improved if these resources are reserved beforehand.
Admission Control :
Admission control refers to the mechanism used by a router, or a switch, to accept
or reject a flow based on predefined parameters called flow specifications. Before a
router accepts a flow for processing, it checks the flow specifications to see if its
capacity (in terms of bandwidth, buffer size, CPU speed, etc.) and its previous
commitments to other flows can handle the new flow.
A router normally connects LANs and WANs in the Internet and has a routing table
that is used for making decisions about the route. The routing tables are normally
dynamic and are updated using routing protocols. Routing protocols are used to
continuously update the routing tables that are consulted for forwarding and
routing.
12) Draw the figure providing protocols used at each layer of TCP/IP suites. (1641039)
SYN:
A SYN segment is a control segment and carries no data, but it consumes one
sequence number and does not contain an acknowledgment number. A SYN segment
is for synchronization of sequence numbers.
Example: A client chooses a random number as the first sequence number and sends
this number to the server. This sequence number is called the initial sequence
number (ISN).
ACK:
ACK segment acknowledges the receipt of the second segment with the ACK flag and
acknowledgment number field. An ACK segment, if carrying no data, consumes no
sequence number.
16) Write any two functions of a router. (1641043)
(Routers are used to connect networks. Routers process packets, which are units of data at
the Network layer. A Router receives a packet and examines the destination IP
address information to determine what network the packet needs to reach, and then sends
the packet out of the corresponding interface)
Open-Loop Congestion Control: In open-loop congestion control, policies are applied to prevent
congestion before it happens. In these mechanisms, congestion control is handled by either the
source or the destination.
24) What is the importance of Autonomous System? Provide the classification of Routing
Protocols. Explain each in two statements. (1641038)
Internet can be so large that one routing protocol cannot handle the task of updating the
routing tables of all routers. For this reason, an internet is divided into autonomous
systems. An autonomous system (AS) is a group of networks and routers under the
authority of a single administration. Routing inside an autonomous system is referred to as
intradomain routing. Routing between autonomous systems is referred to as interdomain
routing. Each autonomous system can choose one or more intradomain routing protocols to
handle routing inside the autonomous system.
Multicast Link State Routing (MOSPF)
Multicast link state routing is a direct extension of unicast routing and uses a source-based
tree approach. Although unicast routing is quite involved, the extension to multicast routing
is very simple and straightforward.
Flooding: Flooding is the first strategy that broadcasts packets, but creates loops in the
systems.
Reverse Path Forwarding (RPF). RPF is a modified flooding strategy. To prevent loops, only
one copy is forwarded; the other copies are dropped.
Reverse Path Broadcasting (RPB). RPB guarantees that each network receives one copy of the
multicast packet without formation of loops.
Reverse Path Multicasting (RPM). As you may have noticed, RPB does not multicast the
packet, it broadcasts it. This is not efficient. To increase efficiency, the multicast packet must
reach only those networks that have active members for that particular group. This is called
reverse path multicasting (RPM).
CBT
The Core-Based Tree (CBT) protocol is a group-shared protocol that uses a core as the root
of the tree. The autonomous system is divided into regions, and a core (center router or
rendezvous router) is chosen for each region.
PIM
Protocol Independent Multicast (PIM) is the name given to two independent multicast
routing protocols: Protocol Independent Multicast, Dense Mode (PIM-DM) and Protocol
Independent Multicast, Sparse Mode (PIM-SM).
PIM-DM is used when there is a possibility that each router is involved in multicasting (dense
mode). In this environment, the use of a protocol that broadcasts the packet is justified
because almost all routers are involved in the process.
PIM-SM is used when there is a slight possibility that each router is involved in multicasting
(sparse mode). In this environment, the use of a protocol that broadcasts the packet is not
justified; a protocol such as CBT that uses a group-shared tree is more appropriate.
25) Explain distance vector routing algorithm. (16410039)
Distance vector routing:
In distance vector routing, the least-cost route between any two nodes is the route
with minimum distance. In this protocol, as the name implies, each node maintains a vector
(table) of minimum distances to every node. The table at each node also guides the packets
to the desired node by showing the next stop in the route (next-hop routing).
Initialization:
a) The tables in the figure are stable.
b) Each node knows how to reach any node and the cost of doing so.
c) At the beginning, each node knows only the cost between itself and its immediate
neighbours (those nodes directly connected to it).
d) Assume that each node sends a message to its immediate neighbours and finds the
distance between itself and these neighbours.
e) The distance of any entry that is not a neighbour is marked as infinite (unreachable).
Sharing:
a) The idea is to share information between neighbours.
b) Node A does not know the distance to E, but node C does.
c) If node C shares its routing table with A, node A can also know how to reach node E.
d) On the other hand, node C does not know how to reach node D, but node A does.
e) If node A shares its routing table with C, then node C can also know how to reach
node D.
f) Nodes A and C, as immediate neighbours, can improve their routing tables if they
help each other.
27) How Congestion Control is done in TCP? What are the techniques used, explain in
short. Draw TCP congestion policy summary figure. (1641041)
TCP’s congestion management is window-based; that is, TCP adjusts its window size to
adapt to congestion. The window size can be thought of as the number of packets out there
in the network; more precisely, it represents the number of packets and ACKs either in
transit or enqueued. An alternative approach often used for real-time systems is rate-based
congestion management, which runs into an unfortunate difficulty if the sending rate
momentarily happens to exceed the available rate.
Congestion policy in TCP –
1. Slow Start Phase: starts slowly; the increment is exponential up to a threshold.
2. Congestion Avoidance Phase: after reaching the threshold, the increment is by 1.
3. Congestion Detection Phase: the sender goes back to the slow start phase or the
congestion avoidance phase.
Slow Start Phase: exponential increment – In this phase, after every RTT the congestion
window size increments exponentially.
Congestion Avoidance Phase: additive increment – This phase starts after the threshold
value, also denoted as ssthresh, is reached. The size of cwnd (congestion window) increases
additively: after each RTT, cwnd = cwnd + 1.
Congestion Detection Phase: multiplicative decrement – If congestion occurs, the
congestion window size is decreased. The only way a sender can guess that congestion has
occurred is the need to retransmit a segment. Retransmission is needed to recover a missing
packet which is assumed to have been dropped by a router due to congestion. Retransmission
can occur in one of two cases: when the RTO timer times out or when three duplicate ACKs
are received.
Case 1: Retransmission due to Timeout – In this case congestion possibility is high.
(a) ssthresh is reduced to half of the current window size.
(b) set cwnd = 1
(c) start with slow start phase again.
Case 2: Retransmission due to 3 Acknowledgement Duplicates – In this case
congestion possibility is less.
(a) ssthresh value reduces to half of the current window size.
(b) set cwnd= ssthresh
(c) start with congestion avoidance phase
TCP congestion policy summary figure
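The same policy can be written as a small state-update sketch; this is a simplified model in which the window is measured in segments and updated once per RTT, and all names are illustrative:

```python
def on_rtt(cwnd, ssthresh):
    """Window growth per RTT: exponential below ssthresh (slow start),
    additive at or above it (congestion avoidance)."""
    if cwnd < ssthresh:
        return cwnd * 2, ssthresh        # slow start: exponential increase
    return cwnd + 1, ssthresh            # congestion avoidance: cwnd += 1

def on_timeout(cwnd, ssthresh):
    """Case 1: RTO timeout, strong congestion signal."""
    return 1, max(cwnd // 2, 2)          # ssthresh halved; restart slow start

def on_three_dup_acks(cwnd, ssthresh):
    """Case 2: three duplicate ACKs, weaker congestion signal."""
    new_ssthresh = max(cwnd // 2, 2)
    return new_ssthresh, new_ssthresh    # cwnd = ssthresh; avoidance phase

cwnd, ssthresh = 1, 8
for _ in range(5):                       # 1 -> 2 -> 4 -> 8 -> 9 -> 10
    cwnd, ssthresh = on_rtt(cwnd, ssthresh)
print(cwnd, ssthresh)                    # 10 8
```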
Congestion in a network may occur if the load on the network (the number of packets sent
to the network) is greater than the capacity of the network (the number of packets a
network can handle). Congestion control refers to the mechanisms and techniques used to
control the congestion and keep the load below the capacity.
When too many packets are present in (a part of) the subnet, performance degrades. This
situation is called congestion. As traffic increases too far, the routers are no longer able to
cope and they begin losing packets. At very high traffic, performance collapses completely
and almost no packets are delivered.
General Principles of Congestion Control:
Knowledge of congestion will cause the hosts to take appropriate action to reduce
the congestion.
For a scheme to work correctly, the time scale must be adjusted carefully.
If every time two packets arrive in a row, a router yells STOP and every time a router
is idle for 20 µsec, it yells GO, the system will oscillate wildly and never converge.
All congestion control algorithms can be divided into open loop and closed loop.
The open loop algorithms are further divided into ones that act at the source versus
ones that act at the destination.
The closed loop algorithms are also divided into two subcategories:
explicit feedback and implicit feedback
In explicit feedback algorithms, packets are sent back from the point of congestion to
warn the source.
In implicit algorithms, the source deduces the existence of congestion by making
local observations, such as the time needed for acknowledgements to come back.
The presence of congestion means that the load is (temporarily) greater than the
resources can handle. Hence the solution is to increase the resources or decrease the load.
(That is not always possible. So we have to apply some congestion prevention policy.)
31) Provide the classification of Congestion Control. Explain Closed Loop techniques with
details. (1641046)
Congestion control refers to the techniques and mechanisms that can either prevent
congestion before it happens or remove congestion after it has happened. In
general, we can divide congestion control mechanisms into two broad categories:
Open-loop congestion control (prevention) and closed-loop congestion control (removal).
Backpressure: Node 3 in the figure has more input data than it can handle. It drops some
packets and informs node 2 to slow down. Node 2 may in turn become congested and
informs node 1 to slow down. Node 1 may become congested and, if so, informs the
source to slow down. The pressure on node 3 is thus moved backward to the source to
remove the congestion.
Choke Packet: A choke packet is a packet sent by a node to the source to inform it of
congestion. Note the difference between the backpressure and choke packet
methods. In backpressure the warning is from one node to its upstream node,
although the warning may eventually reach the source station. In the choke packet
method, the warning is from the router that has encountered congestion to the
source station directly. The intermediate nodes through which the packets have
travelled are not warned.
Priority Queuing:
There are four queues of traffic in priority queuing, and you define what type of traffic
goes into these queues. The four queues are based on priorities: High, Medium, Normal
and Low.
This is how priority queuing works: as long as there is traffic in the High queue, the
other queues will be neglected. The next to be processed is traffic in the Medium queue,
and as long as there is traffic in the Medium queue, the traffic in the Normal and Low
queues will be neglected. Also, while serving the traffic in the Medium queue, if the
router receives traffic in the High queue, the High queue will be processed, and the
router will not go back to the Medium queue until all traffic has cleared the High queue.
This can result in resource starvation for the traffic arriving and sitting in the lower
priority queues such as the Normal and Low queues.
Priority Queuing is a strict Priority method, which will always prefer the traffic in
High Priority Queue to other queues, and the order of processing is
High Priority Queue > Medium Priority Queue > Normal Priority Queue > Low Priority Queue
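A minimal sketch of this strict-priority service discipline; the queue names follow the text, everything else is illustrative:

```python
from collections import deque

queues = {"high": deque(), "medium": deque(), "normal": deque(), "low": deque()}
ORDER = ["high", "medium", "normal", "low"]

def next_packet():
    """Always serve the highest-priority non-empty queue; lower queues can
    starve while higher ones keep receiving traffic."""
    for name in ORDER:
        if queues[name]:
            return queues[name].popleft()
    return None

queues["low"].append("l1")
queues["high"].append("h1")
print(next_packet())   # 'h1' is served first even though 'l1' arrived earlier
```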