Routing Algorithm

The document discusses routing algorithms in network layers, detailing the processes of forwarding and routing, and categorizing algorithms into nonadaptive (static) and adaptive (dynamic). It elaborates on distance vector routing and link state routing, explaining their mechanisms, initialization, sharing, updating, and congestion control techniques. Additionally, it addresses congestion control methods, including warning bits, choke packets, load shedding, random early discard, and traffic shaping to manage network performance effectively.

Uploaded by

Mohammad Sami
Copyright
© All Rights Reserved

Routing Algorithms

The main function of the network layer (NL) is routing packets from the source machine to the
destination machine.
There are two processes inside a router:
a) One handles each packet as it arrives, looking up the outgoing line to use for it in
the routing table. This process is forwarding.
b) The other is responsible for filling in and updating the routing tables. That is where
the routing algorithm comes into play. This process is routing.

Types of Routing Algorithm


Routing algorithms can be grouped into two major classes:
1) nonadaptive (static routing)
2) adaptive (dynamic routing)

Nonadaptive algorithms –
 do not base their routing decisions on measurements or estimates of the current traffic
and topology.
 Instead, the choice of the route to use to get from I to J is computed in advance, offline,
and downloaded to the routers when the network is booted. This procedure is sometimes
called static routing.

Adaptive algorithms –
 in contrast, change their routing decisions to reflect changes in the topology, and usually
the traffic as well.

Different Routing Algorithms

• Shortest path algorithm


• Flooding
• Distance vector routing
• Link state routing
• Hierarchical Routing
Distance Vector Routing

 In distance vector routing, the least-cost route between any two nodes is the route with
minimum distance.

 In this protocol, as the name implies, each node maintains a vector (table) of minimum
distances to every node.

Distance vector routing involves three main activities:

Initialization
Sharing
Updating

Initialization
 Each node knows only the distance between itself and its immediate neighbors, those
directly connected to it. So for the moment, we assume that each node can send a
message to its immediate neighbors and find the distance between itself and these
neighbors. The figure below shows the initial table for each node.
 The distance for any entry that is not a neighbor is marked as infinite (unreachable).
Initialization of tables in distance vector routing

Sharing
 The whole idea of distance vector routing is the sharing of information between
neighbors.
 Although node A does not know about node E, node C does. So if node C shares its
routing table with A, node A can also know how to reach node E.
 On the other hand, node C does not know how to reach node D, but node A does. If node
A shares its routing table with node C, node C also knows how to reach node D. In other
words, nodes A and C, as immediate neighbors, can improve their routing tables if they
help each other.
NOTE: In distance vector routing, each node shares its routing table with its immediate
neighbors periodically and whenever there is a change.

Updating
When a node receives a two-column table from a neighbor, it needs to update its routing
table. Updating takes the following steps:
1. The receiving node needs to add the cost between itself and the sending node to each value in
the second column. (x+y)
2. The receiving node needs to compare each row of its old table with the corresponding row of
the modified version of the received table.
a. If the next-node entry is different, the receiving node chooses the row with the
smaller cost. If there is a tie, the old one is kept.
b. If the next-node entry is the same, the receiving node chooses the new row.
For example, suppose node C has previously advertised a route to node X with distance 2.
Suppose that now there is no path between C and X; node C now advertises this route with a
distance of infinity.
 C shares its table with A, and A updates its table.
A's modified table = table received from C + cost of link C–A (2) added to each entry.
 A then compares the modified table with its old table, keeping the minimum value in each row.

Updating in distance vector routing
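The update rule above can be sketched in Python. This is an illustrative sketch, not code from any real router: the table layout (destination mapped to a cost/next-hop pair) and the function name are assumptions made for the example.

```python
# Illustrative sketch of the distance vector update rule.
INF = float("inf")

def update_table(my_table, neighbor, neighbor_table, link_cost):
    """Merge a neighbor's advertised two-column table into our own.

    my_table: dict dest -> (cost, next_hop)
    neighbor_table: dict dest -> cost (the two-column table the neighbor sent)
    link_cost: cost of the link between us and the sending neighbor
    """
    # Step 1: add the cost to the sending neighbor to each advertised value (x + y).
    modified = {dest: cost + link_cost for dest, cost in neighbor_table.items()}
    # Step 2: compare each row of the old table with the modified received table.
    for dest, new_cost in modified.items():
        old_cost, old_next = my_table.get(dest, (INF, None))
        if old_next == neighbor:
            # 2b. Same next-node entry: always take the new row (it may have worsened).
            my_table[dest] = (new_cost, neighbor)
        elif new_cost < old_cost:
            # 2a. Different next-node entry: switch only if strictly cheaper (ties keep the old row).
            my_table[dest] = (new_cost, neighbor)
    return my_table
```

For instance, replaying the scenario in the text: if A's table has X reachable via C at cost 4, and C (two hops away, link cost 2) now advertises X at infinity, the rule replaces A's entry with infinity because the next-node entry is C.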

Final Diagram
When to Share
 The table is sent both periodically and when there is a change in the table.
 Periodic Update A node sends its routing table, normally every 30 s, in a periodic
update. The period depends on the protocol that is using distance vector routing.
 Triggered Update A node sends its two-column routing table to its neighbors anytime
there is a change in its routing table. This is called a triggered update. The change can
result from the following.
1. A node receives a table from a neighbor, resulting in changes in its own table after updating.
2. A node detects some failure in the neighboring links which results in a distance change to
infinity.

Two-node instability

SOLUTIONS FOR INSTABILITY


1. Defining Infinity: redefine infinity to a smaller number, such as 100. For our previous
scenario, the system will be stable in fewer than 20 updates. In fact, most
implementations of the distance vector protocol define the distance between neighboring
nodes to be 1 and define 16 as infinity.
 However, this means that distance vector routing cannot be used in large systems.
The size of the network, in each direction, cannot exceed 15 hops.

2. Split Horizon: In this strategy, instead of flooding its entire table through each interface, each
node sends only part of its table through each interface. If, according to its table, node B
thinks that the optimum route to reach X is via A, it does not need to advertise this piece of
information to A; the information came from A (A already knows). Taking information
from node A, modifying it, and sending it back to node A is what creates the confusion. In our
scenario, node B eliminates the last line of its routing table before sending it to A. In this
case, node A keeps the value of infinity as the distance to X. Later, when node A sends its
routing table to B, node B also corrects its routing table. The system becomes stable after the
first update: both nodes A and B know that X is not reachable.
3. Split Horizon and Poison Reverse: Using the split horizon strategy has one drawback.
Normally, the distance vector protocol uses a timer, and if there is no news about a route, the
node deletes the route from its table. When node B in the previous scenario eliminates the
route to X from its advertisement to A, node A cannot tell whether this is due to the split
horizon strategy (the source of the information was A) or because B has not received any news
about X recently. The split horizon strategy can therefore be combined with the poison reverse
strategy: node B can still advertise the value for X, but if the source of the information is A, it
replaces the distance with infinity as a warning: "Do not use this value; what I know
about this route comes from you."
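Split horizon and poison reverse can both be sketched as a filter applied to a node's table before it is advertised to a given neighbor. As before, this is an illustrative sketch; the table representation is an assumption.

```python
def advertisement_for(table, to_neighbor, poison_reverse=True):
    """Build the table a node sends to one particular neighbor.

    table: dict dest -> (cost, next_hop)
    With plain split horizon, routes learned via that neighbor are omitted;
    with poison reverse, they are advertised with cost infinity instead.
    """
    INF = float("inf")
    adv = {}
    for dest, (cost, next_hop) in table.items():
        if next_hop == to_neighbor:
            if poison_reverse:
                adv[dest] = INF  # "do not use; what I know about this route comes from you"
            # plain split horizon: the entry is simply not advertised
        else:
            adv[dest] = cost
    return adv
```

In the scenario above, node B's advertisement to A would carry X with distance infinity (poison reverse) or omit X entirely (split horizon alone).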

The Count-to-Infinity Problem


Link State Routing
Link state routing is based on the idea that, although no node has complete knowledge of
the global topology, each node has partial knowledge: it knows the state (type,
condition, and cost) of its own links.
The link state routing algorithm has five basic parts:
1. Discover its neighbors and learn their network addresses.

2. Measure the delay or cost to each of its neighbors.

3. Construct a link state packet (LSP).

4. Send the LSPs to every other router, called flooding, in an efficient and reliable way.

5. Compute the shortest path to every other router.

1. Learn about the Neighbors

When a router is booted, its first task is to learn who its neighbors are. It
accomplishes this goal by sending a special HELLO packet on each point-to-point
line.
2. Measure Line Cost
This algorithm requires each router to know the delay to each of its neighbors. This
task is accomplished by sending a special ECHO packet over the line, which the other
side is required to send back immediately.
3. Creation of the Link State Packet (LSP)
Once the information needed for the exchange has been collected, the next step is
for each router to build a packet called a link state packet (LSP), which
contains the following information:
 Identity of the sender
 Sequence number
 Age
 List of neighbors with delays
Fig (a) Subnet: six routers with links A–B (4), A–E (5), B–C (2), B–F (6), C–D (3), C–E (1), D–F (7), E–F (8).

Link state packets for this subnet:

A        B        C        D        E        F
Seq. No  Seq. No  Seq. No  Seq. No  Seq. No  Seq. No
Age      Age      Age      Age      Age      Age
B 4      A 4      B 2      C 3      A 5      B 6
E 5      C 2      D 3      F 7      C 1      D 7
         F 6      E 1               F 8      E 8

4. Distributing the Link State Packet


The fundamental idea is to use flooding to distribute the link state packets. To keep the
flood in check, each packet contains a sequence number that is incremented for each new
packet sent.
Routers keep track of all (source router, sequence number) pairs they see.
 When a new LSP comes in, it is checked against the list of packets already seen.
 If it is new, it is forwarded on all lines except the one it arrived on.
 If it is a duplicate, it is discarded.

 The sequence number is 32 bits.
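The duplicate check described above can be sketched as a small bookkeeping function. The data structure (a per-source map of the highest sequence number accepted so far) is an assumption made for illustration.

```python
def should_forward(seen, source, seq):
    """Decide whether an arriving LSP is new or a duplicate.

    seen: dict source_router -> highest sequence number accepted so far.
    Returns True if the LSP is new (forward on all lines except the arrival
    line), False if it is a duplicate or older than one already seen (discard).
    """
    if seq > seen.get(source, -1):
        seen[source] = seq
        return True
    return False
```

A real implementation would also carry the age field discussed below, so that stale entries eventually expire from `seen`.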

Two problems can arise:


(i) If a router crashes, it will lose track of its sequence number. If it starts again at 0, the
next packet will be rejected as a duplicate.
(ii) The sequence number may be corrupted.
The solution to these problems is to include the age of each packet after the sequence number
and decrement it once per second. When the age hits zero, the information from that router
is discarded.
5. Computing New Routes
Once a router has accumulated a full set of LSPs, it can construct the entire subnet
graph. Dijkstra's algorithm is then run to compute the shortest path to every possible
destination.
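This last step can be sketched by running Dijkstra's algorithm over the graph recovered from the LSPs. The sketch below uses the subnet of Fig (a); the adjacency-dict encoding of the graph is an assumption made for the example.

```python
import heapq

def shortest_paths(graph, source):
    """Dijkstra's algorithm: least cost from source to every other router.

    graph: dict node -> dict of neighbor -> link cost (built from the LSPs).
    Returns dict node -> shortest-path cost.
    """
    dist = {source: 0}
    heap = [(0, source)]          # priority queue of (cost so far, node)
    visited = set()
    while heap:
        d, u = heapq.heappop(heap)
        if u in visited:
            continue
        visited.add(u)
        for v, w in graph[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd      # found a cheaper path to v
                heapq.heappush(heap, (nd, v))
    return dist

# The subnet of Fig (a), reconstructed from the link state packets.
subnet = {
    "A": {"B": 4, "E": 5},
    "B": {"A": 4, "C": 2, "F": 6},
    "C": {"B": 2, "D": 3, "E": 1},
    "D": {"C": 3, "F": 7},
    "E": {"A": 5, "C": 1, "F": 8},
    "F": {"B": 6, "D": 7, "E": 8},
}
```

Running `shortest_paths(subnet, "A")` gives the least cost from A to every other router in the figure.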

CONGESTION CONTROL ALGORITHMS


Too many packets present in (a part of) the network causes packet delay and loss that
degrades performance. This situation is called congestion.
The network and transport layers share the responsibility for handling congestion. Since
congestion occurs within the network, it is the network layer that directly experiences it
and must ultimately determine what to do with the excess packets.
However, the most effective way to control congestion is to reduce the load that the
transport layer is placing on the network. This requires the network and transport layers to
work together. In this chapter we will look at the network aspects of congestion.
When too much traffic is offered, congestion sets in and performance degrades sharply

Above Figure depicts the onset of congestion. When the number of packets hosts send
into the network is well within its carrying capacity, the number delivered is proportional
to the number sent. If twice as many are sent, twice as many are delivered. However, as
the offered load approaches the carrying capacity, bursts of traffic occasionally fill up the
buffers inside routers and some packets are lost. These lost packets consume some of the
capacity, so the number of delivered packets falls below the ideal curve. The network is
now congested. Unless the network is well designed, it may experience a congestion
collapse.
Difference between congestion control and flow control
Congestion control has to do with making sure the network is able to carry the offered traffic. It
is a global issue, involving the behavior of all the hosts and routers.
Flow control, in contrast, relates to the traffic between a particular sender and a particular
receiver. Its job is to make sure that a fast sender cannot continually transmit data faster than the
receiver is able to absorb it.
To see the difference between these two concepts, consider a network made up of 100-Gbps
fiber optic links on which a supercomputer is trying to force feed a large file to a personal
computer that is capable of handling only 1 Gbps. Although there is no congestion (the network
itself is not in trouble), flow control is needed to force the supercomputer to stop frequently to
give the personal computer a chance to breathe.
At the other extreme, consider a network with 1-Mbps lines and 1000 large computers, half of
which are trying to transfer files at 100 kbps to the other half. Here, the problem is not that of
fast senders overpowering slow receivers, but that the total offered traffic exceeds what the
network can handle.

The reason congestion control and flow control are often confused is that the best way to handle
both problems is to get the host to slow down. Thus, a host can get a ‘‘slow down’’ message
either because the receiver cannot handle the load or because the network cannot handle it.

Several techniques can be employed. These include:


1. Warning bit
2. Choke packets
3. Load shedding
4. Random early discard
5. Traffic shaping
The first three deal with congestion detection and recovery; the last two deal with congestion
avoidance.

Warning Bit
1. A special bit in the packet header is set by the router to warn the source when congestion is
detected.
2. The bit is copied and piggy-backed on the ACK and sent to the sender.
3. The sender monitors the number of ACK packets it receives with the warning bit set and
adjusts its transmission rate accordingly.
Choke Packets
1. A more direct way of telling the source to slow down.
2. A choke packet is a control packet generated at a congested node and transmitted to
restrict traffic flow.
3. The source, on receiving the choke packet, must reduce its transmission rate by a certain
percentage.
4. An example of a choke packet is the ICMP Source Quench packet.

Hop-by-Hop Choke Packets
1. Over long distances or at high speeds, choke packets are not very effective.
2. A more efficient method is to send choke packets hop-by-hop.
3. This requires each hop to reduce its transmission even before the choke packet arrives at
the source.

Load Shedding
1. When buffers become full, routers simply discard packets.
2. Which packet is chosen to be the victim depends on the application and on the error
strategy used in the data link layer.
3. For a file transfer, for example, a router cannot discard older packets, since this would cause a gap in the
received data.
4. For real-time voice or video it is probably better to throw away old data and keep new
packets.
5. Get the application to mark packets with discard priority.

Random Early Discard (RED)


1. This is a proactive approach in which the router discards one or more packets before the
buffer becomes completely full.
2. Each time a packet arrives, the RED algorithm computes the average queue length, avg.
3. If avg is lower than some lower threshold, congestion is assumed to be minimal or non-
existent and the packet is queued.
4. If avg is greater than some upper threshold, congestion is assumed to be serious and the
packet is discarded.
5. If avg is between the two thresholds, this might indicate the onset of congestion; the
packet is then discarded with a probability that increases as avg grows.
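The decision rule in steps 2–5 can be sketched as follows. The linear drop-probability ramp between the two thresholds is the usual formulation of RED; the parameter names (`min_th`, `max_th`, `max_p`) are illustrative, not from any particular router implementation.

```python
import random

def red_decision(avg, min_th, max_th, max_p):
    """Return True if the arriving packet should be discarded.

    avg: average queue length; min_th/max_th: lower and upper thresholds;
    max_p: maximum drop probability at the upper threshold.
    """
    if avg < min_th:
        return False                  # congestion minimal or non-existent: queue the packet
    if avg >= max_th:
        return True                   # congestion serious: discard the packet
    # Between the thresholds: onset of congestion. Drop with a probability
    # that grows linearly from 0 to max_p as avg approaches max_th.
    p = max_p * (avg - min_th) / (max_th - min_th)
    return random.random() < p
```

The average queue length `avg` would itself be maintained as a moving average over instantaneous queue lengths, so brief bursts do not trigger drops.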
Traffic Shaping
1. Another method of congestion control is to “shape” the traffic before it enters the
network.
2. Traffic shaping controls the rate at which packets are sent (not just how many). Used in
ATM and Integrated Services networks.
3. At connection set-up time, the sender and carrier negotiate a traffic pattern (shape).

Two traffic shaping algorithms are:


Leaky Bucket
Token Bucket

The Leaky Bucket Algorithm is used to control the rate in a network. It is implemented as a single-
server queue with constant service time. If the bucket (buffer) overflows, then packets are
discarded.

(a) A leaky bucket with water. (b) A leaky bucket with packets.
1. The leaky bucket enforces a constant output rate (average rate) regardless of the burstiness
of the input. Does nothing when input is idle.
2. The host injects one packet per clock tick onto the network. This results in a uniform flow
of packets, smoothing out bursts and reducing congestion.
3. When packets are all the same size (as in ATM cells), one packet per tick is fine. For
variable-length packets, though, it is better to allow a fixed number of bytes per tick. E.g.,
1024 bytes per tick will allow one 1024-byte packet, two 512-byte packets, or four 256-
byte packets per tick.
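The byte-counting variant described in point 3 can be sketched as a simple per-tick simulation. The interface (lists of packet sizes per tick) is an assumption made for illustration.

```python
from collections import deque

def leaky_bucket(arrivals, bucket_size, bytes_per_tick):
    """Simulate a byte-counting leaky bucket.

    arrivals: list of lists; arrivals[t] holds the packet sizes (bytes)
              arriving at tick t.
    bucket_size: buffer capacity in bytes; overflowing packets are discarded.
    bytes_per_tick: constant output rate.
    Returns a list of lists: the packet sizes transmitted at each tick.
    """
    queue = deque()
    sent = []
    for tick_arrivals in arrivals:
        for pkt in tick_arrivals:
            if sum(queue) + pkt <= bucket_size:
                queue.append(pkt)         # room in the bucket (buffer)
            # else: the bucket overflows and the packet is discarded
        budget = bytes_per_tick
        out = []
        # Drain at the constant rate: send whole packets while budget remains.
        while queue and queue[0] <= budget:
            pkt = queue.popleft()
            budget -= pkt
            out.append(pkt)
        sent.append(out)
    return sent
```

With 1024 bytes per tick, a burst of two 512-byte packets and one 256-byte packet is smoothed: the two 512-byte packets go out in the first tick and the 256-byte packet waits for the next.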
Token Bucket Algorithm

1. In contrast to the LB, the Token Bucket Algorithm allows the output rate to vary,
depending on the size of the burst.
2. In the TB algorithm, the bucket holds tokens. To transmit a packet, the host must capture
and destroy one token.
3. Tokens are generated by a clock at the rate of one token every t sec.
4. Idle hosts can capture and save up tokens (up to the max. size of the bucket) in order to
send larger bursts later.

(a) Before. (b) After.
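A minimal sketch of the token bucket, assuming one token per packet and a bucket that starts full (the host has been idle); the tick-based interface is illustrative, not from any real implementation.

```python
def token_bucket(arrivals, rate, capacity):
    """Simulate a token bucket with one token consumed per packet.

    arrivals: number of packets arriving at each tick.
    rate: tokens generated per tick by the clock.
    capacity: maximum number of tokens the bucket can save up.
    Returns the number of packets transmitted at each tick.
    """
    tokens = capacity                 # assume the bucket starts full (idle host)
    sent = []
    for n in arrivals:
        tokens = min(capacity, tokens + rate)  # clock adds tokens, capped at capacity
        tx = min(n, tokens)                    # each transmitted packet destroys a token
        tokens -= tx
        sent.append(tx)
    return sent
```

With a rate of 1 token per tick and a capacity of 4, a burst of 5 packets gets 4 sent immediately from saved-up tokens, unlike the leaky bucket's strictly constant output.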

Leaky Bucket vs. Token Bucket


1. LB discards packets; TB does not. TB discards tokens.
2. With TB, a packet can only be transmitted if there are enough tokens to cover its length in
bytes.
3. LB sends packets at an average rate. TB allows for large bursts to be sent faster by speeding
up the output.
4. TB allows saving up tokens (permissions) to send large bursts. LB does not allow saving.
