Unit-5 QoS


CHAPTER 30

Quality of Service

The Internet was originally designed for best-effort service without guarantee of predictable performance. Best-effort service is often sufficient for traffic that is not sensitive to delay, such as file transfers and e-mail. Such traffic is called elastic because it can stretch to work under delay conditions; it is also called available-bit-rate traffic because applications can speed up or slow down according to the available bit rate.
The real-time traffic generated by some multimedia applications is delay sensitive
and therefore requires guaranteed and predictable performance. Quality of service
(QoS) is an internetworking issue that refers to a set of techniques and mechanisms that
guarantee the performance of the network to deliver predictable service to an applica-
tion program.
The chapter is divided into four sections.
❑ The first section defines data-flow characteristics: reliability, delay, jitter, and
bandwidth. The section then reviews the sensitivity of several applications in rela-
tion to these characteristics. It then classifies applications with respect to the above
characteristics.
❑ The second section concentrates on flow control to improve QoS. One way to do
this is scheduling, using first-in, first-out queuing, priority queuing, or
weighted fair queuing. Another way is traffic shaping, which can be achieved
using the leaky bucket or the token bucket technique. Resource reservation and
admission control can also be used.
❑ The third section discusses Integrated Services (IntServ), in which bandwidth is
explicitly reserved for a given data flow. The section divides the required service
into two categories: quantitative and qualitative. A separate protocol (RSVP) is
used to provide a connection-oriented service for this purpose.
❑ The fourth section discusses Differentiated Services (DiffServ), in which packets
are marked by applications into classes according to priorities. The section defines
the DS field to mark the priority of the class. Finally, the section introduces traffic
conditioners, used to implement DiffServ.

1053
1054 PART VII TOPICS RELATED TO ALL LAYERS

30.1 DATA-FLOW CHARACTERISTICS


If we want to provide quality of service for an Internet application, we first need
to define what we need for each application. Traditionally, four types of characteris-
tics are attributed to a flow: reliability, delay, jitter, and bandwidth. Let us
first define these characteristics and then investigate the requirements of each appli-
cation type.

30.1.1 Definitions
We can give informal definitions of the above four characteristics:

Reliability
Reliability is a characteristic that a flow needs in order to deliver the packets safe and
sound to the destination. Lack of reliability means losing a packet or acknowledg-
ment, which entails retransmission. However, the sensitivity of different application
programs to reliability varies. For example, reliable transmission is more important
for electronic mail, file transfer, and Internet access than for telephony or audio con-
ferencing.

Delay
Source-to-destination delay is another flow characteristic. Again, applications can
tolerate delay in different degrees. In this case, telephony, audio conferencing, video
conferencing, and remote log-in need minimum delay, while delay in file transfer
or e-mail is less important.

Jitter
Jitter is the variation in delay for packets belonging to the same flow. For example, if
four packets depart at times 0, 1, 2, 3 and arrive at 20, 21, 22, 23, all have the same
delay, 20 units of time. On the other hand, if the above four packets arrive at 21, 23, 24,
and 28, they will have different delays. For applications such as audio and video, the
first case is completely acceptable; the second case is not. For these applications, it
does not matter if the packets arrive with a short or long delay as long as the delay is the
same for all packets. These types of applications do not tolerate jitter.

Bandwidth
Different applications need different bandwidths. In video conferencing we need to
send millions of bits per second to refresh a color screen while the total number of bits
in an e-mail may not reach even a million.

30.1.2 Sensitivity of Applications


Now let us see how various applications are sensitive to some flow characteristics.
Table 30.1 gives a summary of application types and their sensitivity.

Table 30.1 Sensitivity of applications to flow characteristics


Application Reliability Delay Jitter Bandwidth
FTP High Low Low Medium
HTTP High Medium Low Medium
Audio-on-demand Low Low High Medium
Video-on-demand Low Low High High
Voice over IP Low High High Low
Video over IP Low High High High

For those applications with a high level of sensitivity to reliability, we need to do
error checking and discard the packet if it is corrupted. For those applications with a high
level of sensitivity to delay, we need to be sure that they are given priority in transmission.
For those applications with a high level of sensitivity to jitter, we need to be sure
that the packets belonging to the same application pass through the network with the same
delay. Finally, for those applications that require high bandwidth, we need to allocate
enough bandwidth to be sure that the packets are not lost.

30.1.3 Flow Classes


Based on the flow characteristics, we can classify flows into groups, with each group
having the required level of each characteristic. The Internet community has not yet
defined such a classification formally. However, we know, for example, that a protocol
like FTP needs a high level of reliability and probably a medium level of bandwidth,
but the level of delay and jitter is not important for this protocol.

Example 30.1
Although the Internet has not defined flow classes formally, the ATM protocol does. As per ATM
specifications, there are five classes of defined service.
a. Constant Bit Rate (CBR). This class is used for emulating circuit switching. CBR appli-
cations are quite sensitive to cell-delay variation. Examples of CBR are telephone traffic,
video conferencing, and television.
b. Variable Bit Rate-Non Real Time (VBR-NRT). Users in this class can send traffic at a
rate that varies with time depending on the availability of user information. An example
is multimedia e-mail.
c. Variable Bit Rate-Real Time (VBR-RT). This class is similar to VBR–NRT but is
designed for applications such as interactive compressed video that are sensitive to cell-
delay variation.
d. Available Bit Rate (ABR). This class of ATM services provides rate-based flow control
and is aimed at data traffic such as file transfer and e-mail.
e. Unspecified Bit Rate (UBR). This class includes all other classes and is widely used
today for TCP/IP.

30.2 FLOW CONTROL TO IMPROVE QOS


Although formal classes of flow are not defined in the Internet, an IP datagram has a
ToS field that can informally define the type of service required for a set of datagrams
sent by an application. If we assign a certain type of application a single level of

required service, we can then define some provisions for those levels of service. This
can be done using several mechanisms.

30.2.1 Scheduling
Treating packets (datagrams) in the Internet based on their required level of service
happens mostly at the routers. It is at a router that a packet may be delayed, suffer
jitter, be lost, or be assigned the required bandwidth. A good scheduling technique
treats the different flows in a fair and appropriate manner. Several scheduling tech-
niques are designed to improve the quality of service. We discuss three of them here:
FIFO queuing, priority queuing, and weighted fair queuing.
FIFO Queuing
In first-in, first-out (FIFO) queuing, packets wait in a buffer (queue) until the node
(router) is ready to process them. If the average arrival rate is higher than the average
processing rate, the queue will fill up and new packets will be discarded. A FIFO
queue is familiar to those who have had to wait for a bus at a bus stop. Figure 30.1
shows a conceptual view of a FIFO queue. The figure also shows the timing relation-
ship between arrival and departure of packets in this queuing. Packets from different
applications (with different sizes) arrive at the queue, are processed, and depart. A
larger packet naturally needs a longer processing time. In the figure, packets 1 and
2 each need three time units of processing, but packet 3, which is smaller, needs two time
units. This means that packets may arrive at regular intervals but depart with different
delays. If the packets belong to the same application, this produces jitter; if they
belong to different applications, it produces jitter within each application.

Figure 30.1 FIFO queue

[Figure: arriving packets join a queue unless it is full (in which case they are discarded), are processed in arrival order, and depart. In the timing diagram, packets 1 and 2 each require three time units of processing while packet 3 requires two, so packets that arrive at regular intervals depart with different delays.]

FIFO queuing is the default scheduling in the Internet. The only thing that is guar-
anteed in this type of queuing is that the packets depart in the order they arrive. Does
FIFO queuing distinguish between packet classes? The answer is definitely no. This
type of queuing is what we will see in the Internet with no service differentiation
between packets from different sources. With FIFO queuing, all packets are treated the
same in a packet-switched network. No matter if a packet belongs to FTP, or Voice over
IP, or an e-mail message, they will be equally subject to loss, delay, and jitter. The
bandwidth allocated for each application depends on how many packets arrive at the
router in a period of time. If we need to provide different services to different classes of
packets, we need to have other scheduling mechanisms.
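The drop-tail behavior described above can be sketched in a few lines of Python (an illustration of the idea, not part of the text; the class name and the capacity value are our own choices):

```python
from collections import deque

class FIFOQueue:
    """Drop-tail FIFO queue: packets arriving at a full queue are discarded."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.queue = deque()
        self.dropped = 0

    def arrive(self, packet):
        if len(self.queue) >= self.capacity:
            self.dropped += 1          # queue full: discard the new packet
        else:
            self.queue.append(packet)

    def depart(self):
        # Packets leave strictly in the order they arrived.
        return self.queue.popleft() if self.queue else None

q = FIFOQueue(capacity=2)
for pkt in ["p1", "p2", "p3"]:
    q.arrive(pkt)                      # p3 is dropped (queue holds only 2)
print(q.depart(), q.depart(), q.dropped)   # p1 p2 1
```

Note that the queue makes no distinction between packet classes, which is exactly the limitation the next schedulers address.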
Priority Queuing
Queuing delay in FIFO queuing often degrades quality of service in the network. A
frame carrying real-time packets may have to wait a long time behind a frame carrying
a large file. We solve this problem using multiple queues and priority queuing.
In priority queuing, packets are first assigned to a priority class. Each priority
class has its own queue. The packets in the highest-priority queue are processed first.
Packets in the lowest-priority queue are processed last. Note that the system does not
stop serving a queue until it is empty. A packet priority is determined from a specific
field in the packet header: the ToS field of an IPv4 header, the priority field of IPv6, a
priority number assigned to a destination address, or a priority number assigned to an
application (destination port number), and so on.
Figure 30.2 shows priority queuing with two priority levels (for simplicity). A pri-
ority queue can provide better QoS than the FIFO queue because higher-priority traffic,
such as multimedia, can reach the destination with less delay. However, there is a
potential drawback. If there is a continuous flow in a high-priority queue, the packets in
the lower-priority queues will never have a chance to be processed. This is a condition
called starvation. Severe starvation may result in dropping of some packets of lower
priority. In the figure, the packets of higher priority are sent out before the packets of
lower priority.
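A two-level priority scheduler of the kind shown in Figure 30.2 can be sketched as follows (illustrative code; the packet names are hypothetical). Because the high-priority queue is always served first, a continuous high-priority flow starves the other queue:

```python
from collections import deque

class PriorityScheduler:
    """Two-level priority queuing: the low-priority queue is served
    only when the high-priority queue is empty (starvation is possible)."""

    def __init__(self):
        self.high = deque()
        self.low = deque()

    def arrive(self, packet, high_priority):
        (self.high if high_priority else self.low).append(packet)

    def depart(self):
        if self.high:                    # always drain high priority first
            return self.high.popleft()
        if self.low:
            return self.low.popleft()
        return None

s = PriorityScheduler()
s.arrive("voip-1", True)
s.arrive("email-1", False)
s.arrive("voip-2", True)
order = [s.depart() for _ in range(3)]
print(order)   # ['voip-1', 'voip-2', 'email-1']
```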
Weighted Fair Queuing
A better scheduling method is weighted fair queuing. In this technique, the packets
are still assigned to different classes and admitted to different queues. The queues,
however, are weighted based on the priority of the queues; higher priority means a higher
weight. The system processes packets in each queue in a round-robin fashion with the
number of packets selected from each queue based on the corresponding weight. For
example, if the weights are 3, 2, and 1, three packets are processed from the first queue,
two from the second queue, and one from the third queue. In this way, we have fair queu-
ing with priority. Figure 30.3 shows the technique with three classes. In weighted fair
queuing, each class may receive a small amount of time in each time period. In other
words, a fraction of the time is devoted to serving each class of packets, but the fraction
depends on the priority of the class. For example, in the figure, if the throughput for the
router is R, the class with the highest priority may have the throughput of R/2, the middle
class may have the throughput of R/3, and the class with the lowest priority may have the
throughput of R/6. However, this situation is true if all three classes have the same packet

Figure 30.2 Priority queuing

[Figure: a classifier directs arriving packets to a high-priority or a low-priority queue (discarding packets when a queue is full); the processor serves the low-priority queue only when the high-priority queue is empty. The timing diagram shows that the departure order differs from the arrival order: high-priority packets leave before low-priority packets.]

size, which may not occur. Packets of different sizes can upset the fair division of
processing time among the classes.
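The weighted round-robin selection described above can be sketched as follows (an illustrative simplification that counts packets rather than bytes, so it inherits the packet-size imbalance just mentioned):

```python
from collections import deque

def weighted_round_robin(queues, weights):
    """Serve the queues cyclically, taking up to weights[i] packets
    from queue i in each cycle, until all queues are empty."""
    out = []
    while any(queues):
        for q, w in zip(queues, weights):
            for _ in range(w):
                if q:
                    out.append(q.popleft())
    return out

q1 = deque(["a1", "a2", "a3", "a4"])   # weight 3
q2 = deque(["b1", "b2"])               # weight 2
q3 = deque(["c1", "c2"])               # weight 1
print(weighted_round_robin([q1, q2, q3], [3, 2, 1]))
# ['a1', 'a2', 'a3', 'b1', 'b2', 'c1', 'a4', 'c2']
```

With equal packet sizes and weights 3, 2, and 1, the three classes receive 1/2, 1/3, and 1/6 of the throughput, matching the R/2, R/3, and R/6 figures in the text.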

30.2.2 Traffic Shaping or Policing


Traffic shaping or traffic policing is the control of the amount and the rate of traffic.
The term shaping is used when the traffic leaves a network; policing is used when
the data enters a network. Two techniques can shape or police the traffic: leaky bucket
and token bucket.
Leaky Bucket
If a bucket has a small hole at the bottom, the water leaks from the bucket at a constant
rate as long as there is water in the bucket. The rate at which the water leaks does not
depend on the rate at which the water is input unless the bucket is empty. If the bucket
is full, the water overflows. The input rate can vary, but the output rate remains
constant. Similarly, in networking, a technique called leaky bucket can smooth out
bursty traffic. Bursty chunks are stored in the bucket and sent out at an average rate.
Figure 30.4 shows a leaky bucket and its effects.

Figure 30.3 Weighted fair queuing

[Figure: a classifier feeds three queues with weights 3, 2, and 1; the turning switch selects three packets from the first queue, then two from the second, then one from the third, and the cycle repeats. Packets arriving at a full queue are discarded.]

Figure 30.4 Leaky bucket

[Figure: a bursty flow (12 Mbps for 2 s, silence, then 2 Mbps for 3 s) enters the leaky bucket and leaves as a fixed-rate 3-Mbps flow over the same 10 s.]

In the figure, we assume that the network has committed a bandwidth of 3 Mbps
for a host. The use of the leaky bucket shapes the input traffic to make it conform to this
commitment. In Figure 30.4 the host sends a burst of data at a rate of 12 Mbps for
2 seconds, for a total of 24 Mb of data. The host is silent for 5 seconds and then sends
data at a rate of 2 Mbps for 3 seconds, for a total of 6 Mb of data. In all, the host has
sent 30 Mb of data in 10 seconds. The leaky bucket smooths the traffic by sending out data
at a rate of 3 Mbps during the same 10 seconds. Without the leaky bucket, the beginning
burst may have hurt the network by consuming more bandwidth than is set aside for
this host. We can also see that the leaky bucket may prevent congestion.
A simple leaky bucket implementation is shown in Figure 30.5. A FIFO queue
holds the packets. If the traffic consists of fixed-size packets (e.g., cells in ATM
networks), the process removes a fixed number of packets from the queue at each tick

of the clock. If the traffic consists of variable-length packets, the fixed output rate must
be based on the number of bytes or bits.

Figure 30.5 Leaky bucket implementation

[Figure: arriving packets join a FIFO queue (or are discarded if it is full); the leaky bucket algorithm removes packets from the queue at a constant rate for departure.]

The following is an algorithm for variable-length packets:


1. Initialize a counter to n at the tick of the clock.
2. If the counter is greater than the size of the packet at the front of the queue, send
the packet and decrement the counter by the packet size. Repeat this step until the
counter value is smaller than the packet size.
3. Reset the counter to n and go to step 1.

A leaky bucket algorithm shapes bursty traffic into fixed-rate traffic by averaging the data rate. It may drop the packets if the bucket is full.
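The algorithm above can be sketched in Python for variable-length packets (illustrative code; it assumes every packet is no larger than n bytes, since a larger packet could never be sent):

```python
from collections import deque

def leaky_bucket(packets, n):
    """Leaky bucket for variable-length packets.

    packets is a sequence of packet sizes (bytes); n is the number of
    bytes allowed to leave per clock tick.  Returns, for each tick,
    the list of packet sizes sent in that tick.
    """
    queue = deque(packets)
    ticks = []
    while queue:
        counter = n                      # step 1/3: (re)set counter each tick
        sent = []
        while queue and queue[0] <= counter:
            size = queue.popleft()       # step 2: send while the counter allows
            counter -= size
            sent.append(size)
        ticks.append(sent)
    return ticks

# A burst of packets drained at 400 bytes per tick.
print(leaky_bucket([400, 300, 300, 200, 100], n=400))
# [[400], [300], [300], [200, 100]]
```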

Token Bucket
The leaky bucket is very restrictive. It does not credit an idle host. For example, if a host
is not sending for a while, its bucket becomes empty. Now if the host has bursty data, the
leaky bucket allows only an average rate. The time when the host was idle is not taken
into account. On the other hand, the token bucket algorithm allows idle hosts to accu-
mulate credit for the future in the form of tokens.
Assume the capacity of the bucket is c tokens and tokens enter the bucket at the rate
of r tokens per second. The system removes one token for every packet (or cell) of data sent.
The maximum number of packets that can enter the network during any time interval of length t
is shown below.

Maximum number of packets = r × t + c

The maximum average rate for the token bucket is shown below.

Maximum average rate = (r × t + c)/t packets per second

This means that the token bucket limits the average packet rate to the network. Fig-
ure 30.6 shows the idea.

Figure 30.6 Token bucket

[Figure: tokens are added to a bucket of capacity c at the rate of r tokens per second and are discarded if the bucket is full; one token is removed and discarded per cell transmitted, and queued packets are processed only while tokens remain.]

Example 30.2
Let’s assume that the bucket capacity is 10,000 tokens and tokens are added at the rate of
1000 tokens per second. If the system is idle for 10 seconds (or more), the bucket collects
10,000 tokens and becomes full. Any additional tokens will be discarded. The maximum average
rate is shown below.

Maximum average rate = (1000t + 10,000)/t

The token bucket can easily be implemented with a counter. The counter is initial-
ized to zero. Each time a token is added, the counter is incremented by 1. Each time a
unit of data is sent, the counter is decremented by 1. When the counter is zero, the host
cannot send data.

The token bucket allows bursty traffic at a regulated maximum rate.
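The counter-based implementation just described can be sketched as follows, using the numbers of Example 30.2 (illustrative code; the class and method names are our own):

```python
class TokenBucket:
    """Token-bucket counter: tokens accumulate at rate r up to capacity c;
    sending one unit of data consumes one token."""

    def __init__(self, rate, capacity):
        self.rate = rate
        self.capacity = capacity
        self.tokens = 0

    def tick(self, seconds=1):
        # Tokens beyond the bucket capacity are discarded.
        self.tokens = min(self.capacity, self.tokens + self.rate * seconds)

    def send(self, units):
        """Send up to `units` of data; return how many were actually sent."""
        sent = min(units, self.tokens)
        self.tokens -= sent
        return sent

# Example 30.2: r = 1000 tokens/s, c = 10,000 tokens.
tb = TokenBucket(rate=1000, capacity=10_000)
tb.tick(seconds=15)          # idle 15 s: the bucket fills; extra tokens discarded
print(tb.send(12_000))       # a burst is limited to 10000 units
```

An idle host thus accumulates credit, which is exactly what the leaky bucket does not allow.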

Combining Token Bucket and Leaky Bucket


The two techniques can be combined to credit an idle host and at the same time regulate
the traffic. The leaky bucket is applied after the token bucket; the rate of the leaky bucket
needs to be higher than the rate of tokens dropped in the bucket.

30.2.3 Resource Reservation


A flow of data needs resources such as a buffer, bandwidth, CPU time, and so on. The
quality of service is improved if these resources are reserved beforehand. Below, we
discuss a QoS model called Integrated Services, which depends heavily on resource res-
ervation to improve the quality of service.

30.2.4 Admission Control


Admission control refers to the mechanism used by a router or a switch to accept or
reject a flow based on predefined parameters called flow specifications. Before a router
accepts a flow for processing, it checks the flow specifications to see if its capacity can
handle the new flow. It takes into account bandwidth, buffer size, CPU speed, etc., as
well as its previous commitments to other flows. Admission control in ATM networks
is known as Connection Admission Control (CAC), which is a major part of the strategy
for controlling congestion.

30.3 INTEGRATED SERVICES (INTSERV)


The traditional Internet provided only best-effort delivery service to all users regardless of
what was needed. Some applications, however, need a minimum amount of bandwidth
to function (such as real-time audio and video). To provide different QoS for different
applications, the IETF (discussed in Chapter 1) developed the Integrated Services (IntServ)
model. In this model, which is a flow-based architecture, resources such as bandwidth are
explicitly reserved for a given data flow. In other words, the model considers the specific
requirements of an application in one particular case, regardless of the application
type (data transfer, voice over IP, or video-on-demand). What is important are the
resources the application needs, not what the application is doing.
The model is based on three schemes:
1. The packets are first classified according to the service they require.
2. The model uses scheduling to forward the packets according to their flow
characteristics.
3. Devices like routers use admission control to determine if the device has the capa-
bility (available resources to handle the flow) before making a commitment. For
example, if an application requires a very high data rate, but a router in the path
cannot provide such a data rate, it denies the admission.
Before we discuss this model, we need to emphasize that the model is flow-based,
which means that all accommodations need to be made before a flow can start. This
implies that we need a connection-oriented service at the network layer. A connection
establishment phase is needed to inform all routers of the requirement and get their
approval (admission control). However, since IP is currently a connectionless protocol,
we need another protocol to be run on top of IP to make it a connection-oriented
protocol before we can use this model. This protocol is called Resource Reservation
Protocol (RSVP) and will be discussed shortly.

Integrated Services is a flow-based QoS model designed for IP. In this model, resources are explicitly reserved for a given data flow.

30.3.1 Flow Specification


We said that IntServ is flow-based. To define a specific flow, a source needs to define a
flow specification, which is made of two parts:

1. Rspec (resource specification). Rspec defines the resource that the flow needs to
reserve (buffer, bandwidth, etc.).
2. Tspec (traffic specification). Tspec defines the traffic characterization of the flow,
which we discussed before.

30.3.2 Admission
After a router receives the flow specification from an application, it decides to admit or
deny the service. The decision is based on the previous commitments of the router and
the current availability of the resource.

30.3.3 Service Classes


Two classes of services have been defined for Integrated Services: guaranteed service
and controlled-load service.
Guaranteed Service Class
This type of service is designed for real-time traffic that needs a guaranteed minimum
end-to-end delay. The end-to-end delay is the sum of the delays in the routers, the prop-
agation delay in the media, and the setup mechanism. Only the first, the sum of the
delays in the routers, can be guaranteed by the router. This type of service guarantees
that the packets will arrive within a certain delivery time and are not discarded if flow
traffic stays within the boundary of Tspec. We can say that guaranteed services are
quantitative services, in which the amount of end-to-end delay and the data rate must
be defined by the application. Normally guaranteed services are required for real-time
applications (voice over IP).
Controlled-Load Service Class
This type of service is designed for applications that can accept some delays but are
sensitive to an overloaded network and to the danger of losing packets. Good examples
of these types of applications are file transfer, e-mail, and Internet access. The controlled-
load service is a qualitative service in that the application requests the possibility of
low-loss or no-loss packets.

30.3.4 Resource Reservation Protocol (RSVP)


We said that the Integrated Services model needs a connection-oriented network layer.
Since IP is a connectionless protocol, a new protocol is designed to run on top of IP to
make it connection-oriented. A connection-oriented protocol needs to have connection
establishment and connection termination phases, as we discussed in Chapter 18. Before
discussing RSVP, we need to mention that it is an independent protocol separate from the
Integrated Services model. It may be used in other models in the future.
Multicast Trees
RSVP is different from other connection-oriented protocols in that it is based on multi-
cast communication. However, RSVP can also be used for unicasting, because unicast-
ing is just a special case of multicasting with only one member in the multicast group.
The reason for this design is to enable RSVP to provide resource reservations for all
kinds of traffic including multimedia, which often uses multicasting.

Receiver-Based Reservation
In RSVP, the receivers, not the sender, make the reservation. This strategy matches the
other multicasting protocols. For example, in multicast routing protocols, the receivers,
not the sender, make a decision to join or leave a multicast group.
RSVP Messages
RSVP has several types of messages. However, for our purposes, we discuss only two
of them: Path and Resv.
Path Messages
Recall that the receivers in a flow make the reservation in RSVP. However, the receiv-
ers do not know the path traveled by packets before the reservation is made. The path is
needed for the reservation. To solve the problem, RSVP uses Path messages. A Path
message travels from the sender and reaches all receivers in the multicast path. On the
way, a Path message stores the necessary information for the receivers. A Path message
is sent in a multicast environment; a new message is created when the path diverges.
Figure 30.7 shows path messages.

Figure 30.7 Path messages

[Figure: a Path message travels from sender S1 through the routers of the multicast tree toward receivers Rc1, Rc2, and Rc3; a new copy of the message is created wherever the path diverges.]

Resv Messages
After a receiver has received a Path message, it sends a Resv message. The Resv message
travels toward the sender (upstream) and makes a resource reservation on the routers that
support RSVP. If a router on the path does not support RSVP, it routes the packet based
on the best-effort delivery methods we discussed before. Figure 30.8 shows the Resv
messages.
Reservation Merging In RSVP, the resources are not reserved for each receiver in a
flow; the reservation is merged. In Figure 30.9, Rc3 requests a 2-Mbps bandwidth while
Rc2 requests a 1-Mbps bandwidth. Router R3, which needs to make a bandwidth reser-
vation, merges the two requests. The reservation is made for 2 Mbps, the larger of the
two, because a 2-Mbps input reservation can handle both requests. The same situation
is true for R2. The reader may ask why Rc2 and Rc3, both belonging to a single flow,
request different amounts of bandwidth. The answer is that, in a multimedia environ-
ment, different receivers may handle different grades of quality. For example, Rc2 may
be able to receive video only at 1 Mbps (lower quality), while Rc3 may want to receive
video at 2 Mbps (higher quality).

Figure 30.8 Resv messages

[Figure: Resv messages travel upstream from receivers Rc1, Rc2, and Rc3 through the routers toward sender S1.]

Figure 30.9 Reservation merging

[Figure: Rc1, Rc2, and Rc3 request 3, 1, and 2 Mbps respectively; each router reserves the largest of the requests arriving from downstream, so 2 Mbps is reserved on the links near Rc2 and Rc3 and 3 Mbps on the links toward S1.]
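The merging rule itself can be stated in one line of illustrative Python: a router forwards upstream the maximum of the requests it receives from downstream, since that single reservation satisfies every receiver.

```python
def merged_reservation(downstream_requests):
    """A router reserves the largest of the downstream Resv requests,
    because that one reservation can handle all of them."""
    return max(downstream_requests)

# R3 merges Rc3's 2-Mbps and Rc2's 1-Mbps requests, as in Figure 30.9.
print(merged_reservation([2, 1]))   # 2
```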

Reservation Styles When there is more than one flow, the router needs to make a reser-
vation to accommodate all of them. RSVP defines three types of reservation styles: wild-
card filter (WF), fixed filter (FF), and shared explicit (SE).
❑ Wildcard Filter (WF) Style. In this style, the router creates a single reservation for all
senders. The reservation is based on the largest request. This type of style is used
when the flows from different senders do not occur at the same time.
❑ Fixed Filter (FF) Style. In this style, the router creates a distinct reservation for each
flow. This means that if there are n flows, n different reservations are made. This
type of style is used when there is a high probability that flows from different send-
ers will occur at the same time.
❑ Shared Explicit (SE) Style. In this style, the router creates a single reservation that can
be shared by a set of flows.
Soft State The reservation information (state) stored in every node for a flow needs to
be refreshed periodically. This is referred to as a soft state, as compared to the hard
state used in other virtual-circuit protocols such as ATM, where the information about
the flow is maintained until it is erased. The default interval for refreshing (soft state
reservation) is currently 30 seconds.

30.3.5 Problems with Integrated Services


There are at least two problems with Integrated Services that may prevent its full imple-
mentation in the Internet: scalability and service-type limitation.

Scalability
The Integrated Services model requires that each router keep information for each flow.
As the Internet is growing every day, this is a serious problem. Keeping information is
especially troublesome for core routers because they are primarily designed to switch
packets at a high rate and not to process information.
Service-Type Limitation
The Integrated Services model provides only two types of services, guaranteed and
controlled-load. Those opposing this model argue that applications may need more than
these two types of services.

30.4 DIFFERENTIATED SERVICES (DIFFSERV)


In this model, also called DiffServ, packets are marked by applications into classes
according to their priorities. Routers and switches, using various queuing strategies, route
the packets. This model was introduced by the IETF (Internet Engineering Task Force) to
handle the shortcomings of Integrated Services. Two fundamental changes were made:
1. The main processing was moved from the core of the network to the edge of the
network. This solves the scalability problem. The routers do not have to store
information about flows. The applications, or hosts, define the type of service they
need each time they send a packet.
2. The per-flow service is changed to per-class service. The router routes the packet
based on the class of service defined in the packet, not the flow. This solves the
service-type limitation problem. We can define different types of classes based on
the needs of applications.

Differentiated Services is a class-based QoS model designed for IP. In this model, packets are marked by applications according to their priority.

30.4.1 DS Field
In DiffServ, each packet contains a field called the DS field. The value of this field is
set at the boundary of the network by the host or the first router designated as the
boundary router. IETF proposes to replace the existing ToS (type of service) field in
IPv4 or the priority class field in IPv6 with the DS field, as shown in Figure 30.10.

Figure 30.10 DS field

[DSCP (6 bits) | CU (2 bits)]

The DS field contains two subfields: DSCP and CU. The DSCP (Differentiated
Services Code Point) is a 6-bit subfield that defines the per-hop behavior (PHB). The
2-bit CU (Currently Unused) subfield is not currently used.

The DiffServ capable node (router) uses the DSCP 6 bits as an index to a table
defining the packet-handling mechanism for the current packet being processed.
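As a sketch of this lookup (not from the book), the DS byte can be split into its subfields in a few lines of Python; the bit layout follows the DS field description above, while the function name and the `PHB_TABLE` contents are illustrative assumptions:

```python
# Illustrative sketch: split the 8-bit DS byte into its DSCP and CU
# subfields and use the DSCP as an index into a handling table.
# The function name and table contents are examples, not from the text.
def parse_ds_field(ds_byte: int):
    """Return (DSCP, CU) from the 8-bit DS field of an IP header."""
    dscp = (ds_byte >> 2) & 0x3F   # upper 6 bits: per-hop behavior code point
    cu = ds_byte & 0x03            # lower 2 bits: currently unused
    return dscp, cu

# Example table mapping DSCP values to per-hop behaviors (see Section 30.4.2).
PHB_TABLE = {
    0b000000: "DE (default, best-effort)",
    0b101110: "EF (expedited forwarding)",
}

dscp, cu = parse_ds_field(0b10111000)      # DS byte carrying the EF code point
print(PHB_TABLE.get(dscp, "unknown PHB"))  # prints "EF (expedited forwarding)"
```

A real router would consult such a table in its forwarding path to pick a queue or drop policy for the packet.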

30.4.2 Per-Hop Behavior


The DiffServ model defines per-hop behaviors (PHBs) for each node that receives a
packet. So far three PHBs are defined: DE PHB, EF PHB, and AF PHB.
DE PHB
The DE PHB (default PHB) is the same as best-effort delivery, which is compatible
with ToS.
EF PHB
The EF PHB (expedited forwarding PHB) provides the following services:
a. Low loss.
b. Low latency.
c. Ensured bandwidth.
This is the same as having a virtual connection between the source and destination.
AF PHB
The AF PHB (assured forwarding PHB) delivers the packet with a high assurance as
long as the class traffic does not exceed the traffic profile of the node. The users of the
network need to be aware that some packets may be discarded.

30.4.3 Traffic Conditioners


To implement DiffServ, the DS node uses traffic conditioners such as meters, markers,
shapers, and droppers, as shown in Figure 30.11.

Figure 30.11 Traffic conditioners

[Diagram: Input → Classifier → Marker → Shaper/Dropper → Output;
the meter measures the classified traffic and feeds its result to the
marker and the shaper/dropper, where nonconforming packets may be dropped]

Meter
The meter checks to see if the incoming flow matches the negotiated traffic profile. The
meter also sends this result to other components. The meter can use several tools such
as a token bucket to check the profile.
Marker
A marker can re-mark a packet that is using best-effort delivery (DSCP: 000000) or
down-mark a packet based on information received from the meter. Down-marking
(lowering the class of the flow) occurs if the flow does not match the profile. A marker
does not up-mark a packet (promote the class).
Shaper
A shaper uses the information received from the meter to reshape the traffic if it is not
compliant with the negotiated profile.
Dropper
A dropper, which works as a shaper with no buffer, discards packets if the flow severely
violates the negotiated profile.
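To make the meter concrete, here is a minimal token-bucket conformance check, sketched under assumptions the text does not state (one token per packet, continuous refill); the class and method names are invented for illustration:

```python
# Sketch of a meter that uses a token bucket to decide whether each
# arriving packet conforms to the negotiated profile. Assumes one token
# per packet and a continuously refilled bucket; names are illustrative.
class TokenBucketMeter:
    def __init__(self, rate: float, capacity: float):
        self.rate = rate            # tokens added per second
        self.capacity = capacity    # maximum tokens the bucket can hold
        self.tokens = capacity      # start with a full bucket
        self.last = 0.0             # time of the previous check

    def conforms(self, now: float) -> bool:
        """True if a packet arriving at time `now` is within the profile."""
        elapsed = now - self.last
        self.tokens = min(self.capacity, self.tokens + elapsed * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # reported to the marker (down-mark) or dropper
```

Note that the meter itself never alters the packet; it only reports the result, which the marker, shaper, or dropper then acts on.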

30.5 END-CHAPTER MATERIALS


30.5.1 Recommended Reading
For more details about subjects discussed in this chapter, we recommend the following
books and RFCs. The items enclosed in brackets refer to the reference list at the end of
the book.
Books
Several books give some coverage of multimedia: [Com 06], [Tan 03], and [GW 04].
RFCs
Several RFCs show different updates on topics discussed in this chapter, including
RFCs 2198, 2250, 2326, 2475, 3246, 3550, and 3551.

30.5.2 Key Terms


Differentiated Services (DiffServ)
first-in, first-out (FIFO) queuing
Integrated Services (IntServ)
leaky bucket
per-hop behavior (PHB)
priority queuing
quality of service (QoS)
Resource Reservation Protocol (RSVP)
token bucket
weighted fair queuing

30.5.3 Summary
To provide quality of service for an Internet application, we need to define the flow
characteristics for the application: reliability, delay, jitter, and bandwidth. Common
applications in the Internet have been marked with different levels of sensitivity to flow
characteristics. A flow class is a set of applications with the same required level of flow
characteristics. Traditionally, five flow classes have been defined by the ATM Forum:
CBR, VBR-NRT, VBR-RT, ABR, and UBR.
One way to improve QoS is to use flow control, which can be achieved using techniques
such as scheduling, traffic shaping, resource reservation, and admission control.
Scheduling uses FIFO queuing, priority queuing, and weighted fair queuing. Traffic
shaping uses the leaky bucket or the token bucket. Resource reservation can be made by
creating a connection-oriented protocol on top of IP to make the necessary allocation
for the intended traffic. Admission control is a mechanism deployed by a router or
switch to accept or reject a packet or a flow based on the packet class or the flow
requirement.
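The leaky-bucket idea mentioned above can be sketched as follows; the tick granularity, the one-packet-per-tick drain rate, and the buffer size are assumptions made only for illustration:

```python
from collections import deque

# Minimal leaky-bucket sketch: bursty arrivals are queued, and exactly one
# fixed-size packet "leaks" out per clock tick while the queue is nonempty.
# Packets arriving to a full buffer are dropped.
def leaky_bucket(arrivals: dict, ticks: int, queue_size: int = 8):
    """arrivals[t] = packets arriving at tick t; returns packets sent per tick."""
    queue = deque()
    sent = []
    for t in range(ticks):
        for _ in range(arrivals.get(t, 0)):
            if len(queue) < queue_size:
                queue.append(t)          # buffer the packet (drop if full)
        if queue:
            queue.popleft()              # one packet leaks out this tick
            sent.append(1)
        else:
            sent.append(0)
    return sent

# A burst of 3 packets at tick 0 leaves the bucket smoothly, one per tick:
print(leaky_bucket({0: 3}, 5))  # [1, 1, 1, 0, 0]
```

The token bucket differs only in that unused capacity accumulates as tokens, so a sender that has been idle is allowed a later burst.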
Integrated Services (IntServ) is a flow-based architecture that tries to use flow
specifications, admission control, and service classes to provide QoS in the Internet.
The approach needs a separate protocol to create a connection-oriented service for this
purpose. The protocol that provides connection-oriented service is called Resource
Reservation Protocol (RSVP), which provides a multicast connection between the
source and many destinations.
Differentiated Services (DiffServ) is an architecture that tries to handle traffic based
on the class of packets, marked by the source. Each packet is marked by the source as
belonging to a specific class; the packet, however, may be delayed or dropped if the
network is busy with packets of a higher class.

30.6 PRACTICE SET


30.6.1 Quizzes
A set of interactive quizzes for this chapter can be found on the book website. It is
strongly recommended that the student take the quizzes to check his/her understanding
of the materials before continuing with the practice set.

30.6.2 Questions
Q30-1. Rank the following applications based on their sensitivity to reliability:

a. HTTP b. SNMP c. SMTP d. VoIP

Q30-2. Rank the following applications based on their sensitivity to delay:

a. HTTP b. SNMP c. SMTP d. VoIP

Q30-3. Rank the following applications based on their sensitivity to jitter:

a. HTTP b. SNMP c. SMTP d. VoIP

Q30-4. Rank the following applications based on their sensitivity to bandwidth:

a. HTTP b. SNMP c. SMTP d. VoIP



Q30-5. Which of the following applications are classified as CBR in ATM?

a. HTTP b. SNMP c. SMTP d. VoIP

Q30-6. Which of the following applications are classified as VBR-NRT in ATM?

a. HTTP b. SNMP c. SMTP d. VoIP

Q30-7. Which of the following applications are classified as VBR-RT in ATM?

a. HTTP b. SNMP c. SMTP d. VoIP

Q30-8. Which of the following applications are classified as UBR in ATM?

a. HTTP b. SNMP c. SMTP d. VoIP

Q30-9. Which of the following technique(s) is (are) used for scheduling?

a. FIFO queuing b. Priority queuing c. Leaky Bucket

Q30-10. Which of the following technique(s) is (are) used for traffic shaping?

a. Token bucket b. Priority queuing c. Leaky Bucket

Q30-11. Define the parts of flow specification in IntServ.


Q30-12. Distinguish between guaranteed services and controlled-load services in
IntServ.
Q30-13. IntServ is normally called a destination-based service. Explain the reason.
Q30-14. DiffServ is normally called a source-based service. Explain the reason.
Q30-15. If a communication is unicast, how can we use RSVP, which is designed for
multicast in IntServ?
Q30-16. How can multicasting be achieved using DiffServ?
Q30-17. Why do we need Path and Resv messages in RSVP?
Q30-18. How many per-hop behaviors have been defined for DiffServ? Name them.
Q30-19. List the components of a traffic conditioner in DiffServ.
Q30-20. Is the flow label in IPv6 more appropriate for IntServ or DiffServ?

30.6.3 Problems
P30-1. Figure 30.12 shows a router using FIFO queuing at the input port.
The arrival and required service times for seven packets are shown below;
ti means that the packet has arrived or departed i ms after a reference time. The
values of required service times are also shown in ms. We assume the trans-
mission time is negligible.

Figure 30.12 Problem P30-1

[Diagram: Arrival → FIFO queue → Processor → Departure]

Packets                 1    2    3    4    5    6    7
Arrival time            t0   t1   t2   t4   t5   t6   t7
Required service time   1    1    3    2    2    3    1
a. Using time lines, show the arrival time, the process duration, and the depar-
ture time for each packet. Also show the contents of the queue at the begin-
ning of each millisecond.
b. For each packet, find the time spent in the router and the departure delay
with respect to the previously departed packet.
c. If all packets belong to the same application program, determine whether
the router creates jitter for the packets.
P30-2. Figure 30.13 shows a router using priority queuing at the input port.

Figure 30.13 Problem P30-2

[Diagram: Arrival → Classifier → high-priority queue or low-priority queue →
Processor → Departure]

The arrival and required service times (transmission time is negligible) for 10
packets are shown below; ti means that the packet has arrived i ms after a ref-
erence time. The values of required service times are also shown in ms. The
packets with higher priorities are packets 1, 2, 3, 4, 7, and 9 (shown in color);
the other packets are packets with lower priorities.

Packets                 1    2    3    4    5    6    7    8    9    10
Arrival time            t0   t1   t2   t3   t4   t5   t6   t7   t8   t9
Required service time   2    2    2    2    1    1    2    1    2    1

a. Using time lines, show the arrival time, the processing duration, and the
departure time for each packet. Also show the contents of the high-priority
queue (Q1) and the low-priority queue (Q2) at each millisecond.
b. For each packet belonging to the high-priority class, find the time spent in
the router and the departure delay with respect to the previously departed
packet. Find if the router creates jitter for this class.
c. For each packet belonging to the low-priority class, find the time spent in
the router and the departure delay with respect to the previously departed
packet. Determine whether the router creates jitter for this class.
P30-3. To regulate its output flow, a router implements a weighted queuing scheme
with three queues at the output port. The packets are classified and stored in
one of these queues before being transmitted. The weights assigned to queues
are w = 3, w = 2, and w = 1 (3/6, 2/6, and 1/6). The contents of each queue at
time t0 are shown in Figure 30.14. Assume packets are all the same size and
that transmission time for each is 1 μs.

Figure 30.14 Problem P30-3

w = 3 queue: 7 6 5 4 3 2 1
w = 2 queue: 15 14 13 12 11 10 9 8
w = 1 queue: 20 19 18 17 16
(Arrival → Classifier → weighted queues → Processor → Departure)

a. Using a time line, show the departure time for each packet.
b. Show the contents of the queues after 5, 10, 15, and 20 μs.
c. Find the departure delay of each packet with respect to the previous packet
in the class w = 3. Has queuing created jitter in this class?
d. Find the departure delay of each packet with respect to the previous packet
in the class w = 2. Has queuing created jitter in this class?
e. Find the departure delay of each packet with respect to the previous packet
in the class w = 1. Has queuing created jitter in this class?
P30-4. In Figure 30.3, assume the weight in each class is 4, 2, and 1. The packets in the
top queue are labeled A, in the middle queue B, and in the bottom queue C.
Show the list of packets transmitted in each of the following situations:
a. Each queue has a large number of packets.
b. The numbers of packets in queues, from top to bottom, are 10, 4, and 0.
c. The numbers of packets in queues, from top to bottom, are 0, 5, and 10.
P30-5. In a leaky bucket used to control liquid flow, how many gallons of liquid are left
in the bucket if the output rate is 5 gal/min, there is an input burst of 100 gal/min
for 12 s, and there is no input for 48 s?

P30-6. Assume fixed-sized packets arrive at a router with a rate of three packets per
second. Show how the router can use the leaky bucket algorithm to send out
only two packets per second. What is the problem with this approach?
P30-7. Assume a router receives packets of size 400 bits every 100 ms, which means
with the data rate of 4 kbps. Show how we can change the output data rate to
less than 1 kbps by using a leaky bucket algorithm.
P30-8. In a switch using the token bucket algorithm, tokens are added to the bucket
at a rate of r = 5 tokens/second. The capacity of the token bucket is c = 10.
The switch has a buffer that can hold only eight packets (for the sake of
example). The packets arrive at the switch at the rate of R packets/second.
Assume that the packets are all the same size and need the same amount of
time for processing. If at time zero the bucket is empty, show the contents of
the bucket and the queue in each of the following cases and interpret the
result.
a. R = 5 b. R = 3 c. R = 7
P30-9. To understand how the token bucket algorithm can give credit to the sender
that does not use its rate allocation for a while but wants to use it later, let’s
repeat the previous problem with r = 3 and c =10, but assume that the sender
uses a variable rate, as shown in Figure 30.15. The sender sends only three
packets per second for the first two seconds, sends no packets for the next two
seconds, and sends seven packets for the next three seconds. The sender is
allowed to send five packets per second, but, since it does not use this right
fully during the first four seconds, it can send more in the next three seconds.
Show the contents of the token bucket and the buffer for each second to prove
this fact.

Figure 30.15 Problem P30-9

[Plot of sending rate versus time: R = 3 for seconds 0 to 2, no packets for
seconds 2 to 4, and R = 7 for seconds 4 to 7]

P30-10. An output interface in a switch is designed using the leaky bucket algorithm to
send 8000 bytes/s (tick). If the following frames are received in sequence,
show the frames that are sent during each second.
❑ Frames 1, 2, 3, 4: 4000 bytes each
❑ Frames 5, 6, 7: 3200 bytes each
❑ Frames 8, 9: 400 bytes each
❑ Frames 10, 11, 12: 2000 bytes each

P30-11. Assume an ISP uses three leaky buckets to regulate data received from three
customers for transmitting to the Internet. The customers send fixed-size
packets (cells). The ISP sends 10 cells per second for each customer, with a
maximum burst size of 20 cells. Each leaky bucket is implemented
as a FIFO queue (of size 20) and a timer that extracts one cell from the queue
and sends it every 1/10 of a second. (See Figure 30.16.)

Figure 30.16 Problem P30-11

[Diagram: each of the three customers feeds its own leaky bucket at the ISP;
each bucket queue holds a maximum of 20 cells and sends a maximum of
10 cells/second toward the Internet]

a. Show the customer rate and the contents of the queue for the first customer,
which sends 5 cells per second for the first 7 seconds and 15 cells per sec-
ond for the next 9 seconds.
b. Do the same for the second customer, which sends 15 cells per second for
the first 4 seconds and 5 cells per second for the next 14 seconds.
c. Do the same for the third customer, which sends no cells for the first two
seconds, 20 cells for the next two seconds, and repeats the pattern four
times.
P30-12. Assume that the ISP in the previous problem decided to use token buckets
(of capacity c = 20 and rate r = 10) instead of leaky buckets to give credit to
the customer that does not send cells for a while but needs to send some
bursts later. Each token bucket is implemented by a very large queue for
each customer (no packet drop), a bucket that holds the token, and the timer
that regulates dropping tokens in the bucket. (See Figure 30.17.)
a. Show the customer rate, the contents of the queue, and the contents of the
bucket for the first customer, which sends 5 cells per second for the first 7
seconds and 15 cells per second for the next 9 seconds.
b. Do the same for the second customer, which sends 15 cells per second for
the first 4 seconds and 5 cells per second for the next 14 seconds.
c. Do the same for the third customer, which sends no cells for the first 2 sec-
onds, 20 cells for the next 2 seconds, and repeats the pattern 4 times.

Figure 30.17 Problem P30-12

[Diagram: each of the three customers feeds its own token bucket at the ISP;
a timer drops tokens into each bucket, and one cell is sent for each token
in the bucket]

30.7 SIMULATION EXPERIMENTS


30.7.1 Applets
We have created some Java applets to show some of the main concepts discussed in this
chapter. It is strongly recommended that the students activate these applets on the book
website and carefully examine the protocols in action.

30.8 PROGRAMMING ASSIGNMENTS


Write the source code for, compile, and test the following programs in a programming
language of your choice:
Prg30-21. A program to simulate a leaky bucket.
Prg30-22. A program to simulate a token bucket.
CHAPTER 31

Cryptography and
Network Security

T he topic of cryptography and network security is very broad and involves some
specific areas of mathematics such as number theory. In this chapter, we try to give
a very simple introduction to this topic to prepare the background for more study. We
have divided this chapter into three sections.
❑ The first section introduces the subject. It first describes security goals such as con-
fidentiality, integrity, and availability. The section shows how confidentiality is
threatened by attacks such as snooping and traffic analysis. The section then shows
how integrity is threatened by attacks such as modification, masquerading, replay-
ing, and repudiation. The section mentions one attack that threatens availability,
denial of service. This section ends with describing the two techniques used in
security: cryptography and steganography. The chapter concentrates on the first.
❑ The second section discusses confidentiality. It first describes symmetric-key
ciphers and explains traditional symmetric-key ciphers such as substitution and
transposition ciphers. It then moves to modern symmetric-key ciphers and explains
modern block and stream ciphers.
❑ The third section discusses other aspects of security: message integrity, message
authentication, digital signature, and entity authentication. These aspects today are part
of the security system that complements confidentiality. The section also describes
the topic of key management including the distribution of keys for both symmetric-
key and asymmetric-key ciphers.
