
Buffer sizing theory for bursty TCP flows

Damon Wischik∗
Computer Science, UCL
Email: D.Wischik@cs.ucl.ac.uk

Abstract— In a router serving many TCP flows, queues will build up from time to time. The manner in which queues build up depends on the buffer space available and on the burstiness of the TCP traffic. Conversely, the traffic generated by a TCP flow depends on the congestion it sees at queues along its route. In order to decide how big buffers should be, we need to understand the interaction between these effects.
This paper reviews the buffer-sizing theory in [1] and extends it to cope with bursty TCP traffic. This enables us to explain an observation about TCP pacing made in [2].

Fig. 1. cwnd [pkt] as a function of time [s]. When there are no drops cwnd increases steadily; when a drop is detected then cwnd is reduced. This results in a characteristic 'sawtooth' graph. Here RTT = 1s and the packet drop probability is p = 5%.
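The sawtooth of Figure 1 can be reproduced with a small per-RTT simulation of the window rules described later in Section II: over one round trip the cwnd acknowledgements each add 1/cwnd (so roughly +1 per RTT), and a detected drop halves cwnd, at most once per RTT. This is only an illustrative sketch, not the simulator used for the figures.

```python
import random

def sawtooth(rtt=1.0, p=0.05, cwnd0=4.0, duration=50.0, seed=1):
    """Per-RTT sketch of the cwnd rules: +1 per round trip when all
    acknowledgements arrive, halve (at most once per RTT) on a drop."""
    random.seed(seed)
    t, cwnd, trace = 0.0, cwnd0, []
    while t < duration:
        # roughly cwnd packets are sent per RTT; each is dropped w.p. p
        dropped = any(random.random() < p for _ in range(int(round(cwnd))))
        if dropped:
            cwnd = max(cwnd / 2.0, 1.0)   # multiplicative decrease
        else:
            cwnd += 1.0                   # cwnd acks, each adding 1/cwnd
        trace.append((t, cwnd))
        t += rtt
    return trace
```

With RTT = 1s and p = 5%, as in the figure, the trace rises steadily and halves at random times, giving the characteristic sawtooth.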
I. INTRODUCTION

The general purpose of buffers in Internet routers is to accommodate bursts in traffic.

'Bursty' is a word with no agreed meaning. Some authors stress the scale-free nature of traffic fluctuations: μs-long fluctuations on top of ms-long fluctuations on top of second-long fluctuations, etc. [3]. Some authors propose to describe traffic variability simply in terms of the variance of the amount of traffic generated in a given time interval [4]. Some authors consider the effective bandwidth to be a useful descriptor of traffic variability [5].

In this paper, I will describe a different notion of burstiness, one tied specifically to the mechanics of the TCP protocol. I will explain how burstiness affects the build-up of queues, and how this in turn feeds back to affect the behaviour of TCP sources. This theory results in a simple model which lets us calculate the impact of burstiness and buffer size on TCP throughput, link utilization, and drop probability.

Section II gives a brief overview of those parts of TCP that we will be concerned with, and explains the notion of burstiness. Section III describes the impact of bursty traffic on queues, expanding on the results for smooth traffic in [1]. Sections IV and V give an integrated mathematical model for network performance in the presence of bursty TCP traffic.

A naive guess would be that greater burstiness leads to larger queues, which leads to more packet loss, which leads to reduced throughput. This guess suggests a mechanism for improving throughput: modify TCP so that it produces smoother traffic flows. Simulation studies [2], [6] show that this is not always the outcome: smooth traffic flows can paradoxically cause burstiness in the network. In Section VI we use our mathematical model to explain this observation.

II. TCP

This paper will be concerned with long-lived TCP flows. The typical behaviour of such a flow is as follows. Packets are sent from a source to a receiver. The receiver sends back acknowledgements. Let RTT be the time it takes to send a packet from the source to the receiver, and to get back an acknowledgement. (RTT may vary, because of fluctuations in queue size, but we will not model this.) The source maintains a window parameter cwnd; this is the target number of packets that have been sent but not yet acknowledged. When an acknowledgement is received, cwnd increases by 1/cwnd, and this may permit new packets to be sent. When a dropped packet is detected, cwnd decreases by cwnd/2, but no more than once every RTT. This results in cwnd evolving in a sawtooth, as illustrated in Figure 1. If cwnd stays constant then the source will transmit cwnd packets every RTT, i.e. it will send at rate x = cwnd/RTT pkt/s.

(There is much more to TCP than this, including the mechanism by which TCP detects dropped packets, timeouts, and the behaviour of short-lived flows. There are some brief remarks about these aspects of TCP in the conclusion.)

This description of TCP permits some rather different behaviours. Figure 2 shows two simulation runs in which the round trip time is the same, the initial cwnd is the same, and there are no packet drops; the only difference is the spacing of the first three acknowledgements. In the first run, they arrive at times 0.2, 0.5 and 0.7; in the second they arrive at times 0.2, 0.22 and 0.24. The impact of clumped acknowledgements is to induce clumped packet transmissions. It can also be seen that TCP itself induces some clumpiness in packet transmissions, even when the acknowledgements start out paced.

A. Traffic/burstiness model

We will work with two different models for TCP, a smooth traffic model and a bursty traffic model.

Smooth TCP traffic: In the first model, the traffic generated by a single TCP flow is a point process. Each point represents a single data packet (and all packets are taken to have the same size).

∗ Research supported by a Royal Society university research fellowship, and DARPA Buffer Sizing Grant no. W911NF-05-1-0254.
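The ack-clocking effect of Figure 2 can be sketched with a toy model: each arriving acknowledgement immediately releases one new packet, and that packet's own acknowledgement returns one round trip later. This assumes a constant window and no drops, so it shows only how the initial ack spacing is inherited, not the extra clumpiness that window growth adds.

```python
import heapq

def send_times(initial_acks, rtt=1.0, horizon=5.0):
    """Toy ack-clocking model: each ack releases one packet at the
    instant it arrives; that packet's ack returns one RTT later."""
    acks = list(initial_acks)
    heapq.heapify(acks)
    sends = []
    while acks:
        a = heapq.heappop(acks)          # next ack to arrive
        if a >= horizon:
            break
        sends.append(a)                  # packet released by this ack
        heapq.heappush(acks, a + rtt)    # its own ack, one RTT later
    return sends
```

With initial acks at 0.2, 0.5, 0.7 the spacing persists (packets at 0.2, 0.5, 0.7, 1.2, 1.5, 1.7, ...); with initial acks at 0.2, 0.22, 0.24 the clump persists (0.2, 0.22, 0.24, 1.2, 1.22, 1.24, ...), just as in the two runs of Figure 2.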
The points are assumed to be separated, e.g. there exists some δ > 0 such that no two points are any closer than δ in time. This would be guaranteed if, for example, the TCP source is throttled by a slow access link. The precise statistics of the point process do not matter in what follows.

Bursty TCP traffic: In the second model, the traffic generated by a single TCP flow is a point process. Each point represents a clump of data packets. For a TCP flow with average window size w, the number of packets in a clump is assumed to be uniformly distributed between 2w/3 and 4w/3; this reflects the shape of the TCP sawtooth. The clumps are assumed to be separated by one round trip time. As above, packets are all the same size.

These two cases are extremes. Figure 2 suggests that the true behaviour of TCP is likely to be somewhere between these two extremes. But it's easiest to work with the extremes, so we will stick with them in the analysis that follows.

Fig. 2. Number of packets sent as a function of time [s], for a smooth run and a bursty run. If the initial packets are spaced out then subsequent packets will inherit this spacing and the flow will be smooth; if the initial packets are clumped together then the entire flow will be bursty.

B. Throughput formula

When a TCP flow experiences drop probability p, then (according to e.g. [7]) its throughput is roughly

x = 0.87/(RTT √p) pkt/s.  (1)

When packet drops are independent, p is simply the per-packet drop probability. However in the bursty TCP model it's likely that if one packet in a clump is dropped then all subsequent packets in that clump are dropped also. In this case it's easiest to define p operationally: take two successive clumps with drops in them, and let 1/p − 1 be the expected number of not-dropped packets between the drops. (For clumps of size 1, this reduces to the former case of independent packet drops.) A crude estimate is p = q/w, where q is the per-clump drop probability and w is the average clump size; this should be a decent approximation when q is small.

III. QUEUE MODEL

Consider a single queue, with constant service rate, fed by many TCP flows. Raina and Wischik [1] have described the queueing behaviour that results, for smooth TCP traffic. In this section we will review the conclusions of that paper, and also see how the behaviour changes when the flows are bursty. The aim of this section is to produce an expression for p, the loss probability term in (1).

In fact, [1] gives two different queueing models, one for small buffers and one for large buffers. The models are justified heuristically, in the limit as the number of flows N increases. The small-buffer model is suitable when buffer size does not depend on N; the large-buffer model is suitable when buffer size increases with N. To be concrete, the small-buffer model seems good for buffers up to several hundred packets, and the large-buffer model seems good when the maximum possible queueing delay is a non-negligible fraction of the round trip time.

A. Smooth traffic, small buffers

Let there be N smooth TCP flows with average window size w and common round trip time RTT (so that the average transmit rate is x = w/RTT), sharing a queue with service rate NC and buffer size B. Then the packet drop probability p is approximately the same as that for a queue with constant service rate NC and buffer size B, fed by a Poisson flow of rate Nx (i.e. a classic M/D/1/B queue). Indeed, the queue length distributions in the real system and the approximate Poisson system should be roughly equal.

The justification for approximating the traffic by Poisson is that the typical duration of a busy period is O(1/N), and over an interval of duration 1/N the aggregate of N independent simple point processes is approximately Poisson.

B. Bursty traffic, small buffers

Let there be N bursty TCP flows with average window size w and common round trip time RTT (so that the average transmit rate is x = w/RTT). Packets arrive in clumps, and the clump size is uniform between 2w/3 and 4w/3, so the average clump size is w. Clumps arrive as a point process, say with mean rate λ. The average transmit rate is then x = λw pkt/s, so λ = x/w.

Over an interval of duration 1/N, the aggregate point process is approximately Poisson, as above, and so the aggregate packet arrival process can be modelled as a batch Poisson arrival process, where the batch size is uniform between 2w/3 and 4w/3. For large N, the duration of the interval 1/N is so short that there can be at most one clump from any single flow, and so we will assume that batch sizes are independent. Hence the packet drop probability is the same as for a queue with constant service rate NC, fed by packets which arrive in clumps according to a Poisson process of rate Nλ.

For numerical purposes one should set up a Markov chain and compute the packet drop probability p from it.
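As a rough check on such a computation, the clump drop probability q can also be estimated by Monte Carlo: a queue with constant service rate, fed by Poisson clump arrivals with uniform clump sizes. Treating a clump that does not entirely fit as dropped whole is a simplifying assumption of this sketch (the real system admits the packets that do fit); parameter names and values are illustrative.

```python
import random

def clump_drop_prob(lam, C, B, w, T=200000, seed=1):
    """Monte Carlo sketch: constant service rate C (pkt/s), buffer B (pkt),
    Poisson clump arrivals of rate lam (clumps/s), clump size uniform on
    [2w/3, 4w/3]. A clump counts as dropped if it does not fit entirely."""
    random.seed(seed)
    q_len = 0.0                          # queue length in packets
    dropped = arrived = 0
    for _ in range(T):
        dt = random.expovariate(lam)     # time until the next clump
        q_len = max(q_len - C * dt, 0.0) # constant-rate service in between
        size = random.uniform(2 * w / 3, 4 * w / 3)
        arrived += 1
        if q_len + size > B:
            dropped += 1                 # whole clump rejected (simplification)
        else:
            q_len += size
    return dropped / arrived
```

In an underloaded queue (λw well below C) the estimate is near zero; in an overloaded queue it approaches the heavy-traffic loss fraction, consistent with the large-buffer formulae below.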
For a simple algebraic argument, here is a further simplification. First some notation. Write L_C(B, λ, D) for the drop probability in a queue with constant service rate C, buffer size B, and a Poisson arrival process of rate λ, where packet sizes are independent and have distribution D. (This is the first simplification. In the real system, if a clump of packets arrives and some of the packets can fit in the buffer then those packets are admitted; in this L_C(B, λ, D) system the entire clump is dropped.) We want to calculate q = L_C(B, λ, U_w), where U_w denotes the uniform distribution between 2w/3 and 4w/3. As a further simplification, approximate this by L_C(B, λ, F_w), where F_w indicates that all batches have fixed size w. But this is exactly equal to L_{C/w}(B/w, λ, F_1) (just rescale packet size), which is exactly equal to L_C(B/w, λw, F_1) (just rescale time). In summary, while the packet drop probability for smooth traffic is L_C(B, x, F_1), the clump drop probability for bursty traffic is q ≈ L_C(B/w, x, F_1). An even cruder approximation is

q ≈ (1 − ρ)ρ^(B/w) / (1 − ρ^(B/w+1)), where ρ = x/C;  (2)

this comes from taking the service to be Markov rather than constant-rate.

What we have calculated here is the clump drop probability, i.e. the probability that there is not enough space in the buffer to accommodate all the packets in an incoming clump. The p in (1) is then p ≈ q/w.

C. Smooth traffic, large buffers

Let there be N smooth TCP flows with average window size w and common round trip time RTT (so that the average transmit rate is x = w/RTT), sharing a queue with service rate NC. According to [1], if x < C then the queue length is O(1) as N → ∞, and the packet drop probability is approximately zero. If x > C then the packet drop probability is (x − C)/x, or 1 − 1/ρ when expressed in terms of ρ = x/C, from Little's Law. The amount of free space in the queue can be estimated by considering a queue with constant arrival rate NC and with exponential service times of rate Nx; the queue size distribution in this D/M/1 queue is the same as the distribution of free buffer space in the real queue. Furthermore, the typical time between two drops is O(1/N).

D. Bursty traffic, large buffers

Let there be N bursty TCP flows, with parameters as in Section III-B. If x < C then, as with smooth traffic, the queue size is O(1) and the drop probability is approximately zero. If x > C then the packet drop probability is still (x − C)/x = 1 − 1/ρ, from Little's Law. To find the probability q that an arriving clump of packets experiences one or more packet drops, here is an approximation:

For smooth traffic, the amount of free buffer space has the same distribution as the queue size in a D/M/1 queue. Approximate it by the queue size of an M/M/1 queue with the same mean arrival and service rates; this gives

P(free space = r) = (1 − 1/ρ)/ρ^r.

(Observe that the probability that there is no free space, i.e. that an incoming packet is dropped, is 1 − 1/ρ, so this approximation is consistent with the smooth-TCP case.) Now consider an incoming clump of w packets. This clump experiences a drop if the amount of free space is less than w, which has probability 1 − 1/ρ^w. We therefore propose the approximation q ≈ 1 − 1/ρ^w. The p in (1) is then p ≈ q/w.

IV. FIXED POINT CALCULATION

When deciding on buffer size, the first questions to ask are: what throughput do I expect to get? what is the link utilization? and what sort of quality of service? We now find out how these quantities depend on TCP burstiness.

Consider N long-lived TCP flows sharing a single bottleneck link with service rate NC, with drop probability p, and where the common round trip time is RTT. By (1), the offered traffic intensity is

ρ = Nx/(NC) = 0.87/(C RTT √p).  (3)

From Section III, we have two different formulae for what q = pw might be:

q ≈ (1 − ρ)ρ^(B/w) / (1 − ρ^(B/w+1))  for small buffers,
q ≈ (1 − 1/ρ^w)^+                     for large buffers,  (4)

where ρ = x/C is the offered traffic intensity, B is the buffer size and w is the average clump size: w = 1 for smooth TCP traffic and w = xRTT for bursty TCP traffic. (One can come up with more accurate values for p and q from a Markov chain calculation, but these simple formulae are more readily interpretable.)

We have two equations with two unknowns, ρ and p. To find ρ and p, solve the two equations simultaneously. This is referred to as a fixed point solution. Figure 3 illustrates. There are two confounding effects: burstier traffic leads to higher per-packet drop probability, which indicates lower throughput; but at the same time burstier traffic means that packet drops come in bursts, so TCP doesn't respond to all of the packet drops, which indicates higher throughput. For the parameters in Figure 3 these two effects cancel out. For smaller buffers or for larger burst sizes, the net effect is to reduce throughput. For very large buffers, the difference between bursty and smooth is negligible.

For more complicated network topologies, there will be more equations: one for each route of TCP flows, and one for each queue. They should all be solved simultaneously, as in [8], [9].

V. STABILITY CALCULATION

When deciding on buffer size, a second question to ask is: do I expect stable good performance, or will there be jitter and instability? Fluid models have emerged as a powerful tool for answering this question. A crude first-cut answer is as follows.

Consider as above a single bottleneck shared by N long-lived flows, with mean rate x, and where the service rate is NC. Let ρ = x/C be the traffic intensity.
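Before continuing, here is a numerical sketch of the fixed point calculation of Section IV: a bisection on ρ, combining (3) with the small-buffer branch of (4). The parameter choices (C RTT = 6 pkt, B = 100 pkt, as in Figure 3) are illustrative, and the function names are my own.

```python
import math

def q_small_buffer(rho, B, w):
    """Small-buffer branch of (4): M/M/1-type loss with exponent K = B/w."""
    K = B / w
    if abs(rho - 1.0) < 1e-12:
        return 1.0 / (K + 1.0)   # limit of the formula as rho -> 1
    return (1.0 - rho) * rho**K / (1.0 - rho**(K + 1.0))

def fixed_point(C_rtt, B, w, lo=1e-6, hi=3.0, iters=100):
    """Solve (3) and (4) simultaneously by bisection on rho.
    C_rtt is the bandwidth-delay product C*RTT in packets."""
    def excess(rho):
        # p = q/w, guarded against underflow at tiny rho
        p = max(q_small_buffer(rho, B, w) / w, 1e-12)
        return rho - 0.87 / (C_rtt * math.sqrt(p))   # zero at the fixed point
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if excess(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return lo   # fixed-point traffic intensity rho*
```

For smooth traffic (w = 1) with C RTT = 6 pkt and B = 100 pkt this gives ρ∗ slightly above 1, and shrinking the buffer moves the fixed point down, in line with the discussion of Figure 3.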
Let p(ρ) be the drop probability resulting from traffic intensity ρ, from one of the calculations in Section III. Let ρ∗ be the fixed-point traffic intensity, calculated as in Section IV. Define the instability index to be

s(ρ∗) = ρ∗ p′(ρ∗) / p(ρ∗).

The larger this is, the more prone the system is to instability (see (19) and (20) in [10]). Figure 4 plots the instability index as a function of ρ∗. With small buffers, smooth traffic is much more prone to instability than bursty traffic; with large buffers this effect is still there but it's barely noticeable.

Fig. 3. A plot of drop probability p against traffic intensity ρ. The dashed line shows TCP throughput (3), with C RTT = 6pkt. The solid lines show (4), one line for smooth traffic (w = 1) and one for bursty traffic (w = xRTT). Buffer size is either 100pkt (left) or very large (right). Where the dashed line intersects the relevant solid line, there is the simultaneous solution of (3) & (4).

Fig. 4. Instability index s(ρ) as a function of traffic intensity ρ, for buffer size either 100pkt (left) or very large (right). The higher the instability index, the more prone the system is to synchronization. For a given small buffer size, smooth TCP has a higher instability index than bursty TCP. For large buffer size there is negligible difference.

Reference [1] gives much more detail about the nature and consequences of instability. The typical way in which instability shows itself is in oscillations in queue size, which leads to synchronization of TCP flows (i.e. many of them cutting their windows at the same time), which can lead to unfairness [6]. Additionally it leads to jitter, and oscillations in traffic rate. In [1] it is explained how to calculate the amplitude of all these oscillations.

VI. TCP PACING

TCP pacing is a mechanism which enforces smooth traffic. It does this by spacing out packets: if the window is cwnd and the round trip time is RTT then it attempts to send one packet every RTT/cwnd. (See [2] for references.)

According to the results in the previous two sections, this will have several different effects. First, the traffic is made smoother, so the packet drop probability is lower, which indicates increased throughput. Second, packets are spaced out, so packet drops are made independent, so TCP responds to more of them, which indicates decreased throughput. (The balance of these two effects will depend on the specific parameters.) Third, the instability index is made higher, which indicates instability and synchronization. Paradoxically, by making the traffic smoother at the source, we have induced "bursty" behaviour in the network! This theoretical prediction validates simulation results in [2], [6].

The problem of synchronization can be solved by making the buffer smaller. This makes p′(·) smaller, so the instability index is lower.

It can be seen that TCP burstiness has a range of different effects, sometimes conflicting, and that the relationship between burstiness and buffer size merits careful analysis.

A topic for further analysis is the impact of short-lived flows, timeouts, etc. In slow start, TCP tends to emit packets in clumps, which may make the aggregate traffic burstier. On the other hand, many short flows (of one or two packets) may make the aggregate traffic smoother, so that occasional bursts do not do too much damage.

REFERENCES

[1] Gaurav Raina and Damon Wischik, "Buffer sizes for large multiplexers: TCP queueing theory and instability analysis," in EuroNGI, 2005.
[2] Amit Aggarwal, Stefan Savage, and Tom Anderson, "Understanding the performance of TCP pacing," in IEEE Infocom, 2000.
[3] W. E. Leland, M. S. Taqqu, W. Willinger, and D. V. Wilson, "On the self-similar nature of Ethernet traffic (extended version)," IEEE/ACM Transactions on Networking, vol. 2, pp. 1–15, 1994.
[4] Ronald G. Addie, Moshe Zukerman, and Timothy D. Neame, "Application of the central limit theorem to communication networks," Tech. Rep. SC-MC-9819, University of Southern Queensland, 1998.
[5] Frank Kelly, "Notes on effective bandwidths," in Stochastic Networks: Theory and Applications, F. P. Kelly, S. Zachary, and I. Ziedins, Eds., Royal Statistical Society Lecture Note Series, chapter 8, pp. 141–168. Oxford University Press, Oxford, 1996.
[6] David X. Wei, Pei Cao, and Steven H. Low, "TCP pacing revisited," Unpublished, 2006.
[7] J. Padhye, V. Firoiu, D. Towsley, and J. Kurose, "Modeling TCP throughput: a simple model and its empirical validation," in Proceedings of ACM SIGCOMM, 1998.
[8] Tian Bu and Don Towsley, "Fixed point approximations for TCP behavior in an AQM network," in ACM SIGMETRICS, 2001.
[9] R. J. Gibbens, S. K. Sargood, C. Van Eijl, F. P. Kelly, H. Azmoodeh, R. N. Macfadyen, and N. W. Macfadyen, "Fixed-point models for the end-to-end performance analysis of IP networks," in 13th ITC specialist seminar, 2000.
[10] Frank Kelly, "Fairness and stability of end-to-end congestion control," European Journal of Control, 2003.
