Unit-5 QoS
Quality of Service
30.1.1 Definitions
We can give informal definitions of these four flow characteristics: reliability, delay, jitter, and bandwidth.
Reliability
Reliability is a characteristic that a flow needs in order to deliver the packets safe and
sound to the destination. Lack of reliability means losing a packet or acknowledg-
ment, which entails retransmission. However, the sensitivity of different application
programs to reliability varies. For example, reliable transmission is more important
for electronic mail, file transfer, and Internet access than for telephony or audio con-
ferencing.
Delay
Source-to-destination delay is another flow characteristic. Again, applications tolerate delay in different degrees. In this case, telephony, audio conferencing, video conferencing, and remote login need minimum delay, while delay in file transfer or e-mail is less important.
Jitter
Jitter is the variation in delay for packets belonging to the same flow. For example, if
four packets depart at times 0, 1, 2, 3 and arrive at 20, 21, 22, 23, all have the same
delay, 20 units of time. On the other hand, if the above four packets arrive at 21, 23, 24,
and 28, they will have different delays. For applications such as audio and video, the
first case is completely acceptable; the second case is not. For these applications, it
does not matter if the packets arrive with a short or long delay as long as the delay is the
same for all packets. These types of applications do not tolerate jitter.
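To make the arithmetic above concrete, the following Python sketch (not part of the original text; the departure and arrival times are taken from the example) computes each packet's delay and the spread between the largest and smallest delay in both cases.

departures = [0, 1, 2, 3]

def delays(arrivals):
    """Return the source-to-destination delay of each packet."""
    return [a - d for d, a in zip(departures, arrivals)]

case1 = delays([20, 21, 22, 23])   # [20, 20, 20, 20]: every packet has the same delay
case2 = delays([21, 23, 24, 28])   # [21, 22, 22, 25]: the delays differ

print("Case 1 delays:", case1, "spread:", max(case1) - min(case1))   # spread 0 -> no jitter
print("Case 2 delays:", case2, "spread:", max(case2) - min(case2))   # spread 4 -> jitter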
Bandwidth
Different applications need different bandwidths. In video conferencing we need to
send millions of bits per second to refresh a color screen while the total number of bits
in an e-mail may not reach even a million.
Example 30.1
Although the Internet has not defined flow classes formally, the ATM protocol does. The ATM specifications define five classes of service:
a. Constant Bit Rate (CBR). This class is used for emulating circuit switching. CBR appli-
cations are quite sensitive to cell-delay variation. Examples of CBR are telephone traffic,
video conferencing, and television.
b. Variable Bit Rate-Non Real Time (VBR-NRT). Users in this class can send traffic at a
rate that varies with time depending on the availability of user information. An example
is multimedia e-mail.
c. Variable Bit Rate-Real Time (VBR-RT). This class is similar to VBR–NRT but is
designed for applications such as interactive compressed video that are sensitive to cell-
delay variation.
d. Available Bit Rate (ABR). This class of ATM services provides rate-based flow control
and is aimed at data traffic such as file transfer and e-mail.
e. Unspecified Bit Rate (UBR). This class is a best-effort service that makes no guarantees; it is widely used today for TCP/IP traffic.
Once we have defined the flow classes based on the required service, we can then define provisions for those levels of service. This can be done using several mechanisms.
30.2.1 Scheduling
Treating packets (datagrams) in the Internet based on their required level of service mostly happens at the routers. It is at a router that a packet may be delayed, suffer from jitter, be lost, or be assigned the required bandwidth. A good scheduling technique treats the different flows in a fair and appropriate manner. Several scheduling techniques are designed to improve the quality of service. We discuss three of them here: FIFO queuing, priority queuing, and weighted fair queuing.
FIFO Queuing
In first-in, first-out (FIFO) queuing, packets wait in a buffer (queue) until the node
(router) is ready to process them. If the average arrival rate is higher than the average
processing rate, the queue will fill up and new packets will be discarded. A FIFO
queue is familiar to those who have had to wait for a bus at a bus stop. Figure 30.1
shows a conceptual view of a FIFO queue. The figure also shows the timing relation-
ship between arrival and departure of packets in this queuing. Packets from different
applications (with different sizes) arrive at the queue, are processed, and depart. A larger packet naturally needs a longer processing time. In the figure, packets 1 and 2 need three time units of processing, but packet 3, which is smaller, needs only two time units. This means that packets arriving at regular intervals may depart at irregular intervals. If the packets belong to the same application, this produces jitter; if the packets belong to different applications, it produces jitter for each application.
[Figure 30.1: FIFO queuing. a. Processing in the router: arriving packets wait in a FIFO queue (and are discarded if the queue is full) until the processor is ready for them. b. Arrival and departure times: packets 1 and 2 each require three time units of processing; packet 3 requires two.]
FIFO queuing is the default scheduling in the Internet. The only thing guaranteed in this type of queuing is that packets depart in the order they arrive. Does FIFO queuing distinguish between packet classes? The answer is definitely no. This is the queuing we see in the Internet when there is no service differentiation between packets from different sources. With FIFO queuing, all packets are treated the same in a packet-switched network: whether a packet belongs to FTP, Voice over IP, or an e-mail message, it is equally subject to loss, delay, and jitter. The bandwidth allocated to each application simply depends on how many of its packets arrive at the router in a period of time. If we need to provide different services to different classes of packets, we need other scheduling mechanisms.
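As an illustration of the behavior just described, here is a minimal Python sketch of FIFO departure-time calculation. The processing times follow Figure 30.1 (three time units for packets 1 and 2, two for packet 3); the arrival times are assumptions made for the example.

def fifo_departures(packets):
    """packets: list of (arrival_time, service_time) tuples; returns the departure times."""
    free_at = 0                        # time at which the processor becomes free
    departures = []
    for arrival, service in packets:
        start = max(arrival, free_at)  # wait for earlier packets to finish
        free_at = start + service
        departures.append(free_at)
    return departures

pkts = [(0, 3), (1, 3), (2, 2)]                     # (arrival, processing time)
out = fifo_departures(pkts)
print(out)                                          # [3, 6, 8]
print([d - a for (a, _), d in zip(pkts, out)])      # delays [3, 5, 6]: unequal -> jitter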
Priority Queuing
Queuing delay in FIFO queuing often degrades quality of service in the network: a frame carrying real-time packets may have to wait a long time behind frames carrying ordinary data such as a file transfer. We solve this problem using multiple queues and priority queuing.
In priority queuing, packets are first assigned to a priority class. Each priority
class has its own queue. The packets in the highest-priority queue are processed first.
Packets in the lowest-priority queue are processed last. Note that the system does not
stop serving a queue until it is empty. A packet priority is determined from a specific
field in the packet header: the ToS field of an IPv4 header, the priority field of IPv6, a
priority number assigned to a destination address, or a priority number assigned to an
application (destination port number), and so on.
Figure 30.2 shows priority queuing with two priority levels (for simplicity). A pri-
ority queue can provide better QoS than the FIFO queue because higher-priority traffic,
such as multimedia, can reach the destination with less delay. However, there is a
potential drawback. If there is a continuous flow in a high-priority queue, the packets in
the lower-priority queues will never have a chance to be processed. This is a condition
called starvation. Severe starvation may result in dropping of some packets of lower
priority. In the figure, the packets of higher priority are sent out before the packets of
lower priority.
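The following Python sketch illustrates the two-level priority discipline described above; the packet names and priorities are hypothetical. The low-priority queue is served only when the high-priority queue is empty, which is exactly what makes starvation possible.

from collections import deque

class PriorityQueuing:
    """Two-level priority queuing: the high-priority queue is always served first."""
    def __init__(self):
        self.queues = {"high": deque(), "low": deque()}

    def enqueue(self, packet, priority):             # priority is "high" or "low"
        self.queues[priority].append(packet)

    def dequeue(self):
        for level in ("high", "low"):                # "low" is reached only if "high" is empty
            if self.queues[level]:
                return self.queues[level].popleft()
        return None                                  # nothing to send

pq = PriorityQueuing()
for pkt, prio in [("p1", "low"), ("p2", "high"), ("p3", "low"), ("p4", "high")]:
    pq.enqueue(pkt, prio)
print([pq.dequeue() for _ in range(4)])              # ['p2', 'p4', 'p1', 'p3']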
Weighted Fair Queuing
A better scheduling method is weighted fair queuing. In this technique, the packets
are still assigned to different classes and admitted to different queues. The queues,
however, are weighted based on the priority of the queues; higher priority means a higher
weight. The system processes packets in each queue in a round-robin fashion with the
number of packets selected from each queue based on the corresponding weight. For
example, if the weights are 3, 2, and 1, three packets are processed from the first queue,
two from the second queue, and one from the third queue. In this way, we have fair queu-
ing with priority. Figure 30.3 shows the technique with three classes. In weighted fair
queuing, each class may receive a small amount of time in each time period. In other
words, a fraction of time is devoted to serve each class of packets, but the fraction
depends on the priority of the class. For example, in the figure, if the throughput for the
router is R, the class with the highest priority may have the throughput of R/2, the middle
class may have the throughput of R/3, and the class with the lowest priority may have the
throughput of R/6. However, this is true only if all three classes have the same packet size, which may not occur. Packets of different sizes may create many imbalances in dividing a decent share of time among the different classes.
[Figure 30.2: Priority queuing with two priority levels. Note that the arrival order differs from the departure order because packets in the high-priority queue are processed first.]
[Figure 30.3: Weighted fair queuing with three classes. A classifier assigns arriving packets to queues of weight 3, 2, and 1 (packets are discarded if a queue is full); the turning switch selects three packets from the first queue, then two packets from the second queue, then one packet from the third queue, and the cycle repeats.]
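A minimal Python sketch of the weighted round-robin service described above, assuming the weights 3, 2, and 1 from the figure and hypothetical packet names; up to 'weight' packets are taken from each queue in every cycle.

from collections import deque

def weighted_fair_schedule(queues, weights):
    """Serve the queues round-robin, taking at most `weight` packets per queue per cycle."""
    queues = [deque(q) for q in queues]
    order = []
    while any(queues):
        for q, w in zip(queues, weights):
            for _ in range(min(w, len(q))):
                order.append(q.popleft())
    return order

q1 = ["A1", "A2", "A3", "A4"]            # weight 3 (highest priority)
q2 = ["B1", "B2", "B3"]                  # weight 2
q3 = ["C1", "C2"]                        # weight 1
print(weighted_fair_schedule([q1, q2, q3], [3, 2, 1]))
# ['A1', 'A2', 'A3', 'B1', 'B2', 'C1', 'A4', 'B3', 'C2']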
Leaky Bucket
[Figure 30.4: A leaky bucket shapes bursty input traffic (a 12-Mbps burst followed by silence and then a 2-Mbps flow) into a fixed-rate 3-Mbps output flow.]
In the figure, we assume that the network has committed a bandwidth of 3 Mbps
for a host. The use of the leaky bucket shapes the input traffic to make it conform to this
commitment. In Figure 30.4 the host sends a burst of data at a rate of 12 Mbps for
2 seconds, for a total of 24 Mb of data. The host is silent for 5 seconds and then sends
data at a rate of 2 Mbps for 3 seconds, for a total of 6 Mb of data. In all, the host has
sent 30 Mb of data in 10 seconds. The leaky bucket smooths the traffic by sending out data
at a rate of 3 Mbps during the same 10 seconds. Without the leaky bucket, the beginning
burst may have hurt the network by consuming more bandwidth than is set aside for
this host. We can also see that the leaky bucket may prevent congestion.
A simple leaky bucket implementation is shown in Figure 30.5. A FIFO queue
holds the packets. If the traffic consists of fixed-size packets (e.g., cells in ATM
networks), the process removes a fixed number of packets from the queue at each tick
of the clock. If the traffic consists of variable-length packets, the fixed output rate must
be based on the number of bytes or bits.
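The following Python sketch is one possible byte-counting implementation of the idea just described (not the only one): each clock tick has a fixed byte budget, and a packet leaves the queue only if it fits in the remaining budget. The packet sizes and the per-tick rate are hypothetical.

from collections import deque

class LeakyBucket:
    """Byte-counting leaky bucket: at most `rate` bytes leave the queue per clock tick."""
    def __init__(self, rate_bytes_per_tick):
        self.rate = rate_bytes_per_tick
        self.queue = deque()                     # FIFO queue of (name, size) packets

    def arrive(self, name, size):
        self.queue.append((name, size))

    def tick(self):
        """Send packets until the next packet would exceed this tick's byte budget."""
        budget, sent = self.rate, []
        while self.queue and self.queue[0][1] <= budget:
            name, size = self.queue.popleft()
            budget -= size
            sent.append(name)
        return sent

lb = LeakyBucket(rate_bytes_per_tick=1000)
for i, size in enumerate([400, 400, 700, 300, 500], start=1):
    lb.arrive(f"pkt{i}", size)
print(lb.tick())    # ['pkt1', 'pkt2']: 800 bytes; pkt3 (700 bytes) would exceed the budget
print(lb.tick())    # ['pkt3', 'pkt4']: exactly 1000 bytes
print(lb.tick())    # ['pkt5']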
Token Bucket
The leaky bucket is very restrictive. It does not credit an idle host. For example, if a host
is not sending for a while, its bucket becomes empty. Now if the host has bursty data, the
leaky bucket allows only an average rate. The time when the host was idle is not taken
into account. On the other hand, the token bucket algorithm allows idle hosts to accu-
mulate credit for the future in the form of tokens.
Assume the capacity of the bucket is c tokens and tokens enter the bucket at the rate of r tokens per second. The system removes one token for every cell of data sent. The maximum number of cells that can enter the network during any time interval of length t is shown below.

Maximum number of cells = c + r × t

The maximum average rate for the token bucket is shown below.

Maximum average rate = (c + r × t) / t cells per second

This means that the token bucket limits the average packet rate to the network. Figure 30.6 shows the idea.
[Figure 30.6: Token bucket. The bucket capacity is c tokens; one token is removed and discarded per cell transmitted. Arriving packets wait in a FIFO queue (and are discarded if the queue is full) until tokens are available.]
Example 30.2
Let’s assume that the bucket capacity is 10,000 tokens and tokens are added at the rate of 1000 tokens per second. If the system is idle for 10 seconds (or more), the bucket collects 10,000 tokens and becomes full. Any additional tokens will be discarded. The maximum average rate over an interval of t seconds is shown below.

Maximum average rate = (10,000 + 1000 × t) / t cells per second
The token bucket can easily be implemented with a counter. The counter is initial-
ized to zero. Each time a token is added, the counter is incremented by 1. Each time a
unit of data is sent, the counter is decremented by 1. When the counter is zero, the host
cannot send data.
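A minimal Python sketch of the counter-based token bucket just described, using the numbers of Example 30.2; the cell counts are illustrative assumptions.

class TokenBucket:
    """Counter-based token bucket: tokens accumulate while the host is idle, up to the capacity."""
    def __init__(self, rate, capacity):
        self.rate, self.capacity = rate, capacity
        self.tokens = 0                     # the counter is initialized to zero

    def add_tokens(self, seconds=1):
        """Add rate * seconds tokens, never exceeding the bucket capacity."""
        self.tokens = min(self.capacity, self.tokens + self.rate * seconds)

    def send(self, cells):
        """Send as many of the waiting cells as there are tokens; return the number sent."""
        sent = min(cells, self.tokens)
        self.tokens -= sent
        return sent

# Example 30.2: capacity 10,000 tokens, 1000 tokens added per second.
tb = TokenBucket(rate=1000, capacity=10_000)
tb.add_tokens(seconds=10)        # idle for 10 s (or more): the bucket becomes full
print(tb.send(12_000))           # at most 10,000 cells can be sent in one burst
# During any interval of length t, at most c + r * t cells can enter the network.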
30.3.1 Flow Specification
When a source makes a reservation, it needs to define a flow specification. A flow specification has two parts:
1. Rspec (resource specification). Rspec defines the resources that the flow needs to reserve (buffer, bandwidth, etc.).
2. Tspec (traffic specification). Tspec defines the traffic characterization of the flow, which we discussed before.
30.3.2 Admission
After a router receives the flow specification from an application, it decides to admit or
deny the service. The decision is based on the previous commitments of the router and
the current availability of the resource.
Receiver-Based Reservation
In RSVP, the receivers, not the sender, make the reservation. This strategy matches the
other multicasting protocols. For example, in multicast routing protocols, the receivers,
not the sender, make a decision to join or leave a multicast group.
RSVP Messages
RSVP has several types of messages. However, for our purposes, we discuss only two
of them: Path and Resv.
Path Messages
Recall that the receivers in a flow make the reservation in RSVP. However, the receiv-
ers do not know the path traveled by packets before the reservation is made. The path is
needed for the reservation. To solve the problem, RSVP uses Path messages. A Path
message travels from the sender and reaches all receivers in the multicast path. On the
way, a Path message stores the necessary information for the receivers. A Path message
is sent in a multicast environment; a new message is created when the path diverges.
Figure 30.7 shows path messages.
[Figure 30.7: Path messages travel from the sender toward the receivers along the multicast tree; a new message is created where the path diverges.]
Resv Messages
After a receiver has received a Path message, it sends a Resv message. The Resv message
travels toward the sender (upstream) and makes a resource reservation on the routers that
support RSVP. If a router on the path does not support RSVP, it routes the packet based
on the best-effort delivery methods we discussed before. Figure 30.8 shows the Resv
messages.
Reservation Merging In RSVP, the resources are not reserved for each receiver in a
flow; the reservation is merged. In Figure 30.9, Rc3 requests a 2-Mbps bandwidth while
Rc2 requests a 1-Mbps bandwidth. Router R3, which needs to make a bandwidth reser-
vation, merges the two requests. The reservation is made for 2 Mbps, the larger of the
two, because a 2-Mbps input reservation can handle both requests. The same situation
is true for R2. The reader may ask why Rc2 and Rc3, both belonging to a single flow,
request different amounts of bandwidth. The answer is that, in a multimedia environ-
ment, different receivers may handle different grades of quality. For example, Rc2 may
be able to receive video only at 1 Mbps (lower quality), while Rc3 may want to receive
video at 2 Mbps (higher quality).
[Figure 30.8: Resv messages travel upstream from the receivers toward the sender S1, making reservations at the RSVP-capable routers.]
[Figure 30.9: Reservation merging: each router reserves the larger of the bandwidths requested by its downstream receivers.]
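A tiny Python sketch of the merging rule: each router simply reserves the largest of the requests it receives from downstream. The 2-Mbps and 1-Mbps values follow the Figure 30.9 discussion above.

def merge_reservations(downstream_requests_mbps):
    """A router reserves the largest of the Resv requests arriving from downstream."""
    return max(downstream_requests_mbps)

print(merge_reservations([2, 1]))    # R3 merges Rc3's 2 Mbps and Rc2's 1 Mbps into 2 Mbps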
Reservation Styles When there is more than one flow, the router needs to make a reser-
vation to accommodate all of them. RSVP defines three types of reservation styles: wild-
card filter (WF), fixed filter (FF), and shared explicit (SE).
❑ Wild Card Filter Style. In this style, the router creates a single reservation for all
senders. The reservation is based on the largest request. This type of style is used
when the flows from different senders do not occur at the same time.
❑ Fixed Filter Style. In this style, the router creates a distinct reservation for each
flow. This means that if there are n flows, n different reservations are made. This
type of style is used when there is a high probability that flows from different send-
ers will occur at the same time.
❑ Shared Explicit Style. In this style, the router creates a single reservation that can
be shared by a set of flows.
Soft State The reservation information (state) stored in every node for a flow needs to
be refreshed periodically. This is referred to as a soft state, as compared to the hard
state used in other virtual-circuit protocols such as ATM, where the information about
the flow is maintained until it is erased. The default interval for refreshing (soft state
reservation) is currently 30 seconds.
Scalability
The Integrated Services model requires that each router keep information for each flow.
As the Internet is growing every day, this is a serious problem. Keeping information is
especially troublesome for core routers because they are primarily designed to switch
packets at a high rate and not to process information.
Service-Type Limitation
The Integrated Services model provides only two types of services, guaranteed and controlled-load. Those opposing this model argue that applications may need more than these two types of services.
30.4.1 DS Field
In DiffServ, each packet contains a field called the DS field. The value of this field is
set at the boundary of the network by the host or the first router designated as the
boundary router. IETF proposes to replace the existing ToS (type of service) field in
IPv4 or the priority class field in IPv6 with the DS field, as shown in Figure 30.10.
[Figure 30.10: The DS field, made of a 6-bit DSCP subfield followed by a 2-bit CU subfield.]
The DS field contains two subfields: DSCP and CU. The DSCP (Differentiated
Services Code Point) is a 6-bit subfield that defines the per-hop behavior (PHB). The
2-bit CU (Currently Unused) subfield is not currently used.
The DiffServ capable node (router) uses the DSCP 6 bits as an index to a table
defining the packet-handling mechanism for the current packet being processed.
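The following Python sketch shows how a DiffServ-capable node might split the DS byte into its DSCP and CU subfields and use the DSCP as a table index. The per-hop-behavior table here is a hypothetical example; DSCP 46 is the standard Expedited Forwarding code point.

def parse_ds_field(ds_byte):
    """Split the 8-bit DS field into the 6-bit DSCP and the 2-bit CU subfields."""
    dscp = (ds_byte >> 2) & 0x3F       # upper six bits: the per-hop behavior code point
    cu = ds_byte & 0x03                # lower two bits: currently unused
    return dscp, cu

# Hypothetical packet-handling table keyed by DSCP value.
phb_table = {0b000000: "best effort", 0b101110: "expedited forwarding"}

dscp, cu = parse_ds_field(0b10111000)
print(dscp, cu, phb_table.get(dscp, "unknown PHB"))   # 46 0 expedited forwarding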
To implement DiffServ, a DiffServ node uses traffic conditioners such as meters, markers, shapers, and droppers, as shown in Figure 30.11.
[Figure 30.11: Traffic conditioners: arriving packets pass through a classifier, a meter, and a marker; a shaper/dropper then forwards, delays, or drops them.]
Meter
The meter checks to see if the incoming flow matches the negotiated traffic profile. The
meter also sends this result to other components. The meter can use several tools such
as a token bucket to check the profile.
Marker
A marker can re-mark a packet that is using best-effort delivery (DSCP: 000000) or
down-mark a packet based on information received from the meter. Down-marking
(lowering the class of the flow) occurs if the flow does not match the profile. A marker
does not up-mark a packet (promote the class).
Shaper
A shaper uses the information received from the meter to reshape the traffic if it is not
compliant with the negotiated profile.
Dropper
A dropper, which works as a shaper with no buffer, discards packets if the flow severely
violates the negotiated profile.
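To tie the conditioner components together, here is a hedged Python sketch in which a token-bucket meter classifies packets as in or out of profile and a marker down-marks out-of-profile packets to best effort. The profile parameters and the down-marking policy are assumptions made for illustration.

class TokenBucketMeter:
    """A meter that uses a token bucket to decide whether each packet is in profile."""
    def __init__(self, rate_pkts_per_sec, burst):
        self.rate, self.burst = rate_pkts_per_sec, burst
        self.tokens, self.last = float(burst), 0.0

    def in_profile(self, arrival_time):
        # Refill tokens for the time elapsed since the last packet, up to the burst size.
        self.tokens = min(self.burst, self.tokens + self.rate * (arrival_time - self.last))
        self.last = arrival_time
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

def mark(dscp, in_profile):
    """Marker: keep the DSCP of in-profile packets, down-mark the rest (assumed policy)."""
    return dscp if in_profile else 0b000000        # 000000 = best effort

meter = TokenBucketMeter(rate_pkts_per_sec=2, burst=3)
for t in [0.0, 0.1, 0.2, 0.3, 2.0]:
    ok = meter.in_profile(t)
    print(f"t={t}: {'in' if ok else 'out of'} profile, DSCP={mark(0b101110, ok):06b}")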
30.5.3 Summary
To provide quality of service for an Internet application, we need to define the flow
characteristics for the application: reliability, delay, jitter, and bandwidth. Common
applications in the Internet have been marked with different levels of sensitivity to flow
characteristics. A flow class is a set of applications with the same required level of flow
characteristics. Traditionally, five flow classes have been defined by the ATM Forum:
CBR, VBR-NRT, VBR-RT, ABR, and UBR.
One way to improve QoS is to use flow control, which can be achieved using techniques such as scheduling, traffic shaping, resource reservation, and admission control. Scheduling uses FIFO queuing, priority queuing, and weighted fair queuing. Traffic shaping uses the leaky bucket or the token bucket. Resource reservation can be made by creating a connection-oriented protocol on top of the IP protocol to make the necessary allocation for the intended traffic. Admission control is a mechanism deployed by a router or switch to accept or reject a packet or a flow based on the packet class or the flow requirement.
Integrated Services (IntServ) is a flow-based architecture that uses flow specifications, admission control, and service classes to provide QoS in the Internet. The approach needs a separate protocol to create a connection-oriented service for this purpose. The protocol that provides this connection-oriented service is the Resource Reservation Protocol (RSVP), which provides a multicast connection between the source and many destinations.
Differentiated Services (DiffServ) is an architecture that handles traffic based on the class of packets, marked by the source. Each packet is marked by the source as belonging to a specific class; the packet, however, may be delayed or dropped if the network is busy with packets of a higher class.
30.6.2 Questions
Q30-1. Rank the following applications based on their sensitivity to reliability:
Q30-10. Which of the following technique(s) is (are) used for traffic shaping?
30.6.3 Problems
P30-1. Figure 30.12 shows a router using FIFO queuing at the input port.
The arrival and required service times for seven packets are shown below;
ti means that the packet has arrived or departed i ms after a reference time. The
values of required service times are also shown in ms. We assume the trans-
mission time is negligible.
Packets 1 2 3 4 5 6 7
Arrival time t0 t1 t2 t4 t5 t6 t7
Required service time 1 1 3 2 2 3 1
a. Using time lines, show the arrival time, the process duration, and the depar-
ture time for each packet. Also show the contents of the queue at the begin-
ning of each millisecond.
b. For each packet, find the time spent in the router and the departure delay
with respect to the previously departed packet.
c. If all packets belong to the same application program, determine whether
the router creates jitter for the packets.
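For readers who want to check their timelines, the following Python sketch computes the processing interval and the time spent in the router for each packet under FIFO queuing, using the arrival and service times from the table above (transmission time is ignored, as stated in the problem).

arrivals = [0, 1, 2, 4, 5, 6, 7]          # t0, t1, t2, t4, t5, t6, t7 (in ms)
services = [1, 1, 3, 2, 2, 3, 1]          # required service times (ms)

free_at = 0
for i, (a, s) in enumerate(zip(arrivals, services), start=1):
    start = max(a, free_at)               # wait for earlier packets to be processed
    free_at = start + s
    print(f"packet {i}: arrives t{a}, processed t{start} to t{free_at}, "
          f"time in router = {free_at - a} ms")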
P30-2. Figure 30.13 shows a router using priority queuing at the input port.
[Figure 30.13: A router with a high-priority queue and a low-priority queue at its input port.]
The arrival and required service times (transmission time is negligible) for 10
packets are shown below; ti means that the packet has arrived i ms after a ref-
erence time. The values of required service times are also shown in ms. The
packets with higher priorities are packets 1, 2, 3, 4, 7, and 9 (shown in color);
the other packets are packets with lower priorities.
Packets 1 2 3 4 5 6 7 8 9 10
Arrival time t0 t1 t2 t3 t4 t5 t6 t7 t8 t9
Required service time 2 2 2 2 1 1 2 1 2 1
a. Using time lines, show the arrival time, the processing duration, and the
departure time for each packet. Also show the contents of the high-priority
queue (Q1) and the low-priority queue (Q2) at each millisecond.
b. For each packet belonging to the high-priority class, find the time spent in
the router and the departure delay with respect to the previously departed
packet. Find if the router creates jitter for this class.
c. For each packet belonging to the low-priority class, find the time spent in
the router and the departure delay with respect to the previously departed
packet. Determine whether the router creates jitter for this class.
P30-3. To regulate its output flow, a router implements a weighted queueing scheme
with three queues at the output port. The packets are classified and stored in
one of these queues before being transmitted. The weights assigned to queues
are w = 3, w = 2, and w = 1 (3/6, 2/6, and 1/6). The contents of each queue at
time t0 are shown in Figure 30.14. Assume packets are all the same size and
that transmission time for each is 1 μs.
[Figure 30.14: Contents of the queues at time t0: the w = 3 queue holds packets 1 through 7, the w = 2 queue holds packets 8 through 15, and the w = 1 queue holds packets 16 through 20.]
a. Using a time line, show the departure time for each packet.
b. Show the contents of the queues after 5, 10, 15, and 20 μs.
c. Find the departure delay of each packet with respect to the previous packet
in the class w = 3. Has queuing created jitter in this class?
d. Find the departure delay of each packet with respect to the previous packet
in the class w = 2. Has queuing created jitter in this class?
e. Find the departure delay of each packet with respect to the previous packet
in the class w = 1. Has queuing created jitter in this class?
P30-4. In Figure 30.3, assume the weight in each class is 4, 2, and 1. The packets in the
top queue are labeled A, in the middle queue B, and in the bottom queue C.
Show the list of packets transmitted in each of the following situations:
a. Each queue has a large number of packets.
b. The numbers of packets in queues, from top to bottom, are 10, 4, and 0.
c. The numbers of packets in queues, from top to bottom, are 0, 5, and 10.
P30-5. In a leaky bucket used to control liquid flow, how many gallons of liquid are left
in the bucket if the output rate is 5 gal/min, there is an input burst of 100 gal/min
for 12 s, and there is no input for 48 s?
P30-6. Assume fixed-sized packets arrive at a router with a rate of three packets per
second. Show how the router can use the leaky bucket algorithm to send out
only two packets per second. What is the problem with this approach?
P30-7. Assume a router receives packets of size 400 bits every 100 ms, which corresponds to a data rate of 4 kbps. Show how we can change the output data rate to less than 1 kbps by using a leaky bucket algorithm.
P30-8. In a switch using the token bucket algorithm, tokens are added to the bucket
at a rate of r = 5 tokens/second. The capacity of the token bucket is c = 10.
The switch has a buffer that can hold only eight packets (for the sake of
example). The packets arrive at the switch at the rate of R packets/second.
Assume that the packets are all the same size and need the same amount of
time for processing. If at time zero the bucket is empty, show the contents of
the bucket and the queue in each of the following cases and interpret the
result.
a. R = 5 b. R = 3 c. R = 7
P30-9. To understand how the token bucket algorithm can give credit to the sender
that does not use its rate allocation for a while but wants to use it later, let’s
repeat the previous problem with r = 3 and c =10, but assume that the sender
uses a variable rate, as shown in Figure 30.15. The sender sends only three
packets per second for the first two seconds, sends no packets for the next two
seconds, and sends seven packets for the next three seconds. The sender is
allowed to send five packets per second, but, since it does not use this right
fully during the first four seconds, it can send more in the next three seconds.
Show the contents of the token bucket and the buffer for each second to prove
this fact.
[Figure 30.15: The sender’s rate over time: 3 packets per second during seconds 0 to 2, no packets during seconds 2 to 4, and 7 packets per second during seconds 4 to 7.]
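The following Python sketch runs a second-by-second token-bucket simulation that can be used to build the table this problem asks for. The buffer is treated as unlimited for simplicity (an assumption), and the rate, capacity, and arrival pattern are taken from the problem statement and Figure 30.15.

def simulate_token_bucket(arrivals_per_sec, rate, capacity):
    """Return a per-second log of (cells sent, tokens left, cells still buffered)."""
    tokens, buffered, log = 0, 0, []
    for arriving in arrivals_per_sec:
        tokens = min(capacity, tokens + rate)   # add this second's tokens
        waiting = buffered + arriving
        sent = min(waiting, tokens)             # one token is consumed per cell sent
        tokens -= sent
        buffered = waiting - sent
        log.append((sent, tokens, buffered))
    return log

pattern = [3, 3, 0, 0, 7, 7, 7]                 # cells/second, as in Figure 30.15
for sec, (sent, tokens, buffered) in enumerate(
        simulate_token_bucket(pattern, rate=3, capacity=10), start=1):
    print(f"second {sec}: sent={sent}, tokens left={tokens}, buffered={buffered}")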
P30-10. An output interface in a switch is designed using the leaky bucket algorithm to
send 8000 bytes/s (tick). If the following frames are received in sequence,
show the frames that are sent during each second.
❑ Frames 1, 2, 3, 4: 4000 bytes each
❑ Frames 5, 6, 7: 3200 bytes each
❑ Frames 8, 9: 400 bytes each
❑ Frames 10, 11, 12: 2000 bytes each
P30-11. Assume an ISP uses three leaky buckets to regulate data received from three customers for transmitting to the Internet. The customers send fixed-size packets (cells). The ISP sends at most 10 cells per second for each customer, with a maximum burst size of 20 cells. Each leaky bucket is implemented as a FIFO queue (of size 20) and a timer that extracts one cell from the queue and sends it every 1/10 of a second. (See Figure 30.16.)
[Figure 30.16: Each customer’s traffic passes through a leaky bucket at the ISP: a queue of at most 20 cells drained at a maximum of 10 cells per second.]
a. Show the customer rate and the contents of the queue for the first customer,
which sends 5 cells per second for the first 7 seconds and 15 cells per sec-
ond for the next 9 seconds.
b. Do the same for the second customer, which sends 15 cells per second for
the first 4 seconds and 5 cells per second for the next 14 seconds.
c. Do the same for the third customer, which sends no cells for the first two
seconds, 20 cells for the next two seconds, and repeats the pattern four
times.
P30-12. Assume that the ISP in the previous problem decided to use token buckets
(of capacity c = 20 and rate r = 10) instead of leaky buckets to give credit to
the customer that does not send cells for a while but needs to send some
bursts later. Each token bucket is implemented by a very large queue for
each customer (no packet drop), a bucket that holds the token, and the timer
that regulates dropping tokens in the bucket. (See Figure 30.17.)
a. Show the customer rate, the contents of the queue, and the contents of the
bucket for the first customer, which sends 5 cells per second for the first 7
seconds and 15 cells per second for the next 9 seconds.
b. Do the same for the second customer, which sends 15 cells per second for
the first 4 seconds and 5 cells per second for the next 14 seconds.
c. Do the same for the third customer, which sends no cells for the first 2 sec-
onds, 20 cells for the next 2 seconds, and repeats the pattern 4 times.
[Figure 30.17: Each customer’s traffic passes through a token bucket at the ISP: a large queue, a bucket that holds the tokens, and a timer that adds tokens to the bucket; one cell is sent per token.]
Cryptography and
Network Security
The topic of cryptography and network security is very broad and involves some
specific areas of mathematics such as number theory. In this chapter, we try to give
a very simple introduction to this topic to prepare the background for more study. We
have divided this chapter into three sections.
❑ The first section introduces the subject. It first describes security goals such as con-
fidentiality, integrity, and availability. The section shows how confidentiality is
threatened by attacks such as snooping and traffic analysis. The section then shows
how integrity is threatened by attacks such as modification, masquerading, replay-
ing, and repudiation. The section mentions one attack that threatens availability,
denial of service. This section ends with describing the two techniques used in
security: cryptography and steganography. The chapter concentrates on the first.
❑ The second section discusses confidentiality. It first describes symmetric-key
ciphers and explains traditional symmetric-key ciphers such as substitution and
transposition ciphers. It then moves to modern symmetric-key ciphers and explains
modern block and stream ciphers. The section then shows that denial of service is an attack on availability.
❑ The third section discusses other aspects of security: message integrity, message
authentication, digital signatures, and entity authentication. These aspects today are part
of the security system that complements confidentiality. The section also describes
the topic of key management including the distribution of keys for both symmetric-
key and asymmetric-key ciphers.