Benchmarking and modelling
Performance Professionals
The Computer Measurement Group, commonly called CMG, is a not-for-profit, worldwide organization of data processing professionals committed to the measurement and management of computer systems. CMG members are primarily concerned with performance evaluation of existing systems to maximize performance (e.g. response time, throughput) and with capacity management, where planned enhancements to existing systems or the design of new systems are evaluated to find the necessary resources required to provide adequate performance at a reasonable cost.
This paper was originally published in the Proceedings of the Computer Measurement Group’s 1996 International Conference.
Copyright 1996 by The Computer Measurement Group, Inc. All Rights Reserved. Published by The Computer Measurement Group, Inc. (CMG), a non-profit Illinois membership corporation. Permission to reprint in whole or in any part may be granted for educational and scientific purposes upon written application to the Editor, CMG Headquarters, 151 Fries Mill Road, Suite 104, Turnersville, NJ 08012.
BY DOWNLOADING THIS PUBLICATION, YOU ACKNOWLEDGE THAT YOU HAVE READ, UNDERSTOOD AND AGREE TO BE BOUND BY THE
FOLLOWING TERMS AND CONDITIONS:
License: CMG hereby grants you a nonexclusive, nontransferable right to download this publication from the CMG Web site for personal use on a single
computer owned, leased or otherwise controlled by you. In the event that the computer becomes dysfunctional, such that you are unable to access the
publication, you may transfer the publication to another single computer, provided that it is removed from the computer from which it is transferred and its use
on the replacement computer otherwise complies with the terms of this Copyright Notice and License.
Copyright: No part of this publication or electronic file may be reproduced or transmitted in any form to anyone else, including transmittal by e-mail, by file
transfer protocol (FTP), or by being made part of a network-accessible system, without the prior written permission of CMG. You may not merge, adapt,
translate, modify, rent, lease, sell, sublicense, assign or otherwise transfer the publication, or remove any proprietary notice or label appearing on the
publication.
Disclaimer; Limitation of Liability: The ideas and concepts set forth in this publication are solely those of the respective authors, and not of CMG, and CMG
does not endorse, approve, guarantee or otherwise certify any such ideas or concepts in any application or usage. CMG assumes no responsibility or liability
in connection with the use or misuse of the publication or electronic file. CMG makes no warranty or representation that the electronic file will be free from
errors, viruses, worms or other elements or codes that manifest contaminating or destructive properties, and it expressly disclaims liability arising from such
errors, elements or codes.
General: CMG reserves the right to terminate this Agreement immediately upon discovery of violation of any of its terms.
Learn the basics and latest aspects of IT Service Management at CMG's Annual Conference - www.cmg.org/conference
Buy the Latest Conference Proceedings and Find Latest Computer Performance Management 'How To' for All Platforms at www.cmg.org
Dr. Gilbert E. Houtekamer
Consul Risk Management B.V.
Delft, The Netherlands
email: Houtekamer@consul.nl
Join over 14,000 peers - subscribe to free CMG publication, MeasureIT(tm), at www.cmg.org/subscribe
The advent of modern disk controller architectures has revived the debate on how to model this complex
equipment. At the CMG95 conference a panel session was held on the topic of predicting performance
for modern I/O controllers, and on what could be expected from those predictions. One of the topics
of the discussion was whether or not new controllers rendered modeling useless because of their
complexity.
When analyzing complex equipment like a computer system or a disk I/O subsystem, it is apparent
that the performance that one observes is the result of a very complex interaction between the hardware
design and workload. It is tempting to state that the system is so complex that modeling or simulation
will not work, and that a benchmark is the only way to get a performance prediction. In this paper we
will show the contrary to be true, explaining how analytical modeling can be used successfully, even
with limited information about the subsystem's internal operation.
analytical models are so popular. Simulation models share some of the benefits, although they are generally more expensive to run.

1.5 STRENGTHS AND WEAKNESSES

The most desirable aspect of a benchmark or synthetic job is that it can be applied without any knowledge of the internal operation of the subsystem tested. It can be applied even when the vendor of the subsystem is not ready to explain its internals. However, knowledge of system internals, when available, can lead to better benchmark designs.

The least desirable aspect of analytical and simulation models is that one must have parameters for many of the internal variables. For processors it may be sufficient to use a single number; for the current multi-level cached controllers many numbers are required. These numbers are difficult to obtain. Once the numbers are available, any workload can be analyzed. Another problem with analytical models is that they are not very accurate at the performance edge.

The least desirable aspect of synthetic jobs is that it is very difficult to design a synthetic job that represents a real workload. The results of the experiment are only as representative as the workload description. Just as more parameters are needed for analytical models, synthetic jobs need to exercise more aspects of a subsystem to get a good perspective. It is not unusual for a synthetic job to consist of 20 sets of experiments, all with different results. Another problem is that it requires (stand-alone) access to a subsystem.

In this paper we will discuss synthetic jobs and analytical models as the two primary techniques that can be used to compare I/O subsystems. Each technique has its strengths and weaknesses.

2 WHAT ARE WE TRYING TO ACCOMPLISH?

Performance analysis techniques are used for many different reasons: to tune, to predict future performance, to aid in the selection of new equipment, and to verify performance claims during acceptance testing, among others.

2.1 VERIFYING PERFORMANCE CLAIMS

When verifying performance claims, the primary issue is to agree on some workload and service level that will be verified. This could be "a Tuesday night batch window", a specific benchmark, or a specific set of synthetic job runs. The latter two have the advantage that the workload is repeatable, which is a requirement for any claim. Synthetic jobs are thus good for acceptance testing. The flip side is that it must be understood that good synthetic job performance is not necessarily related to good application performance.

2.2 SELECTING NEW EQUIPMENT

When selecting new equipment, the problem is that you are not likely to have all possible subsystems available for testing and selection. Even when the vendors are willing to provide this, it is expensive to run the tests. Therefore, selection of new equipment is usually based on modeling results, or on synthetic job results from other installations or service providers. With a modeling study you will be able to take your actual workload data (from RMF/SMF), but you will have to trust the accuracy of the model. With a synthetic job, you have actual measurement data from the subsystem, but you will have to trust that the synthetic job results apply to your workload. In both cases, bleeding-edge customers may have trouble getting up-to-date versions of the model or synthetic job.

2.3 PREDICTING FUTURE PERFORMANCE

When studying future performance, the considerations are largely the same as for the selection of new equipment. With analytical modeling, the tool can predict performance at higher activity levels; with synthetic jobs you can inspect the performance from the graphs of the measurements that are included in the test set.

2.4 TUNING

When tuning, you probably want to use a modeling tool, or maybe a measurement tool, since you are primarily interested in finding better ways to run your workload on the equipment that you already own. Synthetic jobs are less useful, since they are not designed to quickly evaluate ways to distribute the load over your subsystems.

The summary is that synthetic jobs are better for creating reproducible test runs that exercise certain aspects of a subsystem, and that analytical models are better for representing your workload and tuning your system. In the next section we will discuss some of the problems related to the selection of parameters for analytical models.

3 PARAMETER SELECTION

When reviewing the literature on analytical modeling, it is remarkable that most modelers stay away from characterizing the workload. Instead, they use some general workload properties, like the infamous 4 Kbyte block. There are various reasons for this. One is that modeling is a more attractive subject to scientists than measuring, but probably more important is that workload characterization as a general topic is so difficult. Workload characterization is still very much an art, as explained by Ferrari [FERR78], Lazowska [LAZO84], and Artis in his thesis [ARTI92]. Lazowska quotes a number of studies in his book that are surprisingly simple and use only a few parameters, but yet give accurate modeling results. Likewise, Artis uses a number of examples in his thesis where he achieves very accurate results with a well-chosen characterization of key variables. Only people that master the art can do the characterization well!

When modeling (mainframe) disks, the first idea is to look at the basic operation of a disk drive, and describe the seek patterns, latency, data transfer and overhead components. For cached controllers, cache hits and misses must be separately studied. For multi-level cached controllers, multiple caches and multiple

• The physical device performance: data rate, seek time, and latency. These are the traditional parameters that I/O models work with. The parameters are generally published by the hardware vendors, although the numbers for most SCSI disks are of limited value because the performance of the SCSI bus also depends on its internal microprocessor and buffer management.

• The controller overhead for all kinds of actions that

• The read and write cache hit probabilities, and the
The shape of these curves is remarkably different: the 3880-3 service time increases gradually with the load, the 3990-2 with more data paths has a flatter curve, and modern subsystems have essentially a service time that is independent of the load until "they hit the wall". When they do, performance suffers seriously. One such case was reported by [NIEL95] at CMG95. The entire set of detailed parameters is required to model the precise shape of the curve.

With modern subsystems, however, the response time curve is flat over the entire normal operating range. In this operating range the performance is largely determined by cache hit probabilities and by control unit features such as the data transfer rate used for cache hits and misses. These parameters are more easily obtained. For example, based on a benchmark or a trace it is not too difficult to find the average connect time for short (4 Kbyte) transfers. Based on this connect time the overhead for simple transfers can be estimated.

In the next sections we will describe the modeling concept and explain how parameters can be obtained from measurement data. The next section will explain how model parameters can be obtained by reasoning "backwards" from the measurement data.

4 MODELING CONCEPT

Limited measurement data can reveal a lot about a configuration and its workload, in particular when changes that are not dramatic are modelled. Even only RMF connect and disconnect data can be used to calibrate models in a meaningful way. Let us assume some basic 3990 model that predicts the RPS miss delays and other internal controller contention delays as a function of storage path utilization. Many such models have been published, e.g. [BRAN81] [BERE90] [HOUT89]. The primary input to the models is the controller (storage path) utilization. This utilization is normally thought of as a model result, but for a non-cached controller it can also be derived from the measurement data:

Storage path busy% = (RMF connect time) * iorate

For a cached control unit like the 3990 the staging time must also be considered, since it causes load on the same storage paths:

Storage path busy% = (RMF connect time + (de)staging time) * iorate

The (de)staging probability is unknown at this point, but it can be estimated once the cache hit probability is known. The hit probability can be derived from the disconnect time, which can be approximated as:

Disconnect = (miss probability) * (device access + contention delays)

The typical device access time can be estimated from manufacturers' documentation, the contention delays can be computed from the storage path busy, and the disconnect time is available from the measurement data. The connect time and I/O rate are also known from the measurement data. This looks like circular reasoning, and it is: the storage path busy can be computed when the cache hit probability is known, and the cache hit probability (actually the miss probability) can be computed when the storage path busy is known. Fortunately, the two equations together can be used in an iterative scheme to estimate the cache hit probability and controller contention. Circular reasoning is thus not a problem in this case.

Note that we did not use a control unit overhead estimate in this computation. The only modeling assumption is that the contention delay can be estimated as a function of storage path busy.

This very simple model can already be used to answer such questions as:
• what happens when the I/O rate increases?
• what happens when the cache hit probability changes?
• what happens when a faster device is employed?

It is important to realize that the model can accurately predict changes even when the calibration did not give a correct estimate for a certain parameter. For example, when the algorithm predicts a cache hit of 75% while the actual number is 85%, a device type change can still be modelled fairly accurately, since the model for the new situation will "make the same mistakes as the calibration did".

The same type of model can also be used to look at connect time in more detail. Like the disconnect time, the connect time will change with the workload and cache hit probability for most controllers, although the effect may not be as pronounced. Specifically, some controllers transfer cache hits at (parallel or ESCON) channel speed, and cache misses at device speed. Other controllers transfer all information at channel speed. This is easily factored into the connect time component, such that connect time changes can be modelled. Connect time consists of:
• fixed (minimum) controller overhead, which may be different for cache hits and misses;
• workload (channel program) dependent overhead, for such functions as searches without RPS and search key;
• data transfer at device or channel speed.

With crude estimates on all of these, surprisingly accurate predictions are possible. Figure 2 shows the results of a study where measurement data from an initial configuration (Amdahl 6100) was used to predict the performance for a new configuration (IBM 3990). After the control unit replacement, the modelled value turned out to match very well with the measurements (6.5 vs 6.6 ms). The estimates for connect overhead and cache miss service time in this section are based on data from artificial workloads. The numbers are from the DASD Magic product [CONS96].

Figure 3 shows an example of a cache upgrade, modelled based on RMF and CRR (Cache RMF Reporter) data for an upgrade from a 64 to a 128 Mbyte cache. Apart from the cache size, the controllers modelled are quite similar: an HDS 7690 and an IBM 3990-6. Note that the predictions are again very close. In this case the modelled value based on 341 I/Os per second was not measured; during the measurement interval the I/O rate was 379 I/Os per second. Re-evaluating the model for the new I/O rate gave about the same response time as measured (10.9 vs 11.1 ms).

[Figure: "Modelling Example - Cache Size and CU Type Modification". Stacked bars of Pending, Connect, Disconnect and IOSQ on a service-time axis (0-16 ms) for several configurations: measured 7690/7693, 64 Mbyte cache, 341 I/O/s (13.9); modelled 3990-6/3390-3 (6.5); measured 3990-6 (6.6); 3990-6/3390-2, 1024 Mbyte cache, 4 * 17 Mbyte ESCON (7.2); measured 3990-6, detail measurement data not available (7.4). The chart does not represent a generic comparison of the controllers; it represents one workload only.]

statistics will provide cache hit ratios for reads and writes, the read/write ratio and staging probabilities, allowing the model to consider differences in reads and writes, and also allowing for more accurate estimates of device
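As a sketch, the iterative scheme of section 4 can be written down in a few lines. The contention-delay function (an assumed M/M/1-style wait), the number of storage paths, and the per-miss staging cost below are illustrative assumptions, not values from the paper:

```python
def calibrate(connect_ms, disconnect_ms, iorate, device_access_ms,
              stage_ms=12.0, paths=4, tol=1e-9):
    """Iterate the two coupled equations
         path busy  = (connect + miss * staging) * iorate / paths
         disconnect = miss * (device access + contention(busy))
    until the miss probability converges (sketch only)."""
    miss = 0.5                                   # initial guess
    busy = 0.0
    for _ in range(1000):
        # storage path utilization per path, from connect plus staging load
        busy = min((connect_ms + miss * stage_ms) * iorate / 1000.0 / paths, 0.99)
        # assumed contention model: delay grows with path utilization
        contention_ms = busy / (1.0 - busy) * device_access_ms
        # invert Disconnect = miss * (device access + contention)
        new_miss = min(1.0, max(0.0, disconnect_ms / (device_access_ms + contention_ms)))
        if abs(new_miss - miss) < tol:
            break
        miss = new_miss
    return miss, busy

# e.g. 3 ms connect, 4 ms disconnect, 200 I/O/s, 12 ms device access
miss, busy = calibrate(3.0, 4.0, 200, 12.0)
print(f"miss probability ~{miss:.2f}, path busy ~{busy:.2f}")
```

Because each equation is evaluated with the other's latest estimate, the circular dependency noted in the text resolves after a handful of iterations.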
con [BOWM93][FRIE95b]. McNutt has been quoted out of context quite a bit, but his basic ideas turned out to work very well in the studies that the author has performed. His observation was that the cache hit probability as a function of the "single reference residency time" [MCNU87] shows a more consistent and

residency time = 4 Gbyte / (5 * 0.032) = 25000 seconds ≈ 7 hours

Clearly, a residency time of 7 hours is markedly different from the 60 seconds assumed by Friedman. With a more realistic set of numbers the holding times are still very significant:
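The residency arithmetic above can be restated in a couple of lines. Note that the interpretation of the factors 5 and 0.032 as an I/O rate of 5 per second times 0.032 Mbyte staged per I/O is an assumption; the units are not given in the recovered text:

```python
def single_reference_residency_s(cache_bytes, fill_bytes_per_s):
    """Single-reference residency time: how long newly staged data
    stays in cache before being displaced, i.e. cache size / fill rate."""
    return cache_bytes / fill_bytes_per_s

# 4 Gbyte cache filled at 5 * 0.032 Mbyte/s (assumed units)
secs = single_reference_residency_s(4e9, 5 * 0.032e6)
print(secs, secs / 3600.0)   # 25000.0 seconds, about 7 hours
```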
connect = overhead + data transfer

For example, on a 17 Mbyte/s channel with 4 Kbyte blocks, the data transfer takes about 0.25 ms. The remainder of the connect time is protocol and controller overhead. Hits can be separated out from traces based on the observed disconnect time, and in synthetic job runs by the design of the run.

Better estimates are available when multiple blocksizes are used, allowing you to estimate both overhead and effective data rate (when it is less than the 17 Mbyte/s channel data rate). Figure 5 illustrates this concept; the intersection is the overhead, and the slope is determined by the data rate.

[Figure 5. Connect time as a function of blocksize (hypothetical data): connect time (ms, 0-3.5) versus number of bytes transferred (0-35,000). Measurement points for multiple blocksizes can be used to estimate both connect time overhead and effective data rate.]

The same approach can be used for cache misses, i.e. transfers with a non-zero disconnect time. For connect time, this should show a similar result, although the data rate observed may now be the device data rate instead of the channel data rate. For the disconnect time it will yield a band of measurement results, since the miss time includes a random device access component. When applied to traditional 3990s, the miss analysis tends to show the specified device performance characteristics: the intersection is roughly at half a rotation plus some seek time. When applied to RAID controllers, additional overhead usually shows up. For example, when studying the data on the initial Iceberg systems [FRIE95a], the data shows a 35 ms disconnect time, amounting to a significant overhead of some other sort (note that the current SCSI-based systems are a lot faster).

The connect time for cache misses can also be studied, to determine the effective data rate for cache misses. Some control units will transfer data directly at native device speed for cache misses, while other control units will use buffering to optimize the channel resource. Again, it should be emphasized that this type of analysis can be applied to trace or synthetic job data. The only requirement is that measurement data is available on a number of similar I/Os. This can be achieved in a synthetic job by keeping all I/Os the same, or in a trace analysis by selecting only I/Os that meet certain criteria.

Multi-stage cached controllers are somewhat more difficult to calibrate, since there is no simple hit/miss notion. The same techniques are applied, though; the disconnect time as measured in the previous sections now represents the service time of the second stage, e.g., the drawer in a RAMAC controller. A breakdown of drawer performance requires an estimate of the cache hit probability in the drawer. While it is difficult to measure drawer hits, the drawer hit probabilities can be estimated using the approach discussed in the section "CACHE MODELING" before. A description of the details is beyond the scope of this paper. Basically, once there is an estimate of the hit probability at the drawer, the same methodology can be applied.

7 CALIBRATION SAMPLES

The calibration process as described above can be applied based on I/O trace data with channel programs, or a summary of these. An analysis of simple (single block) channel programs yields insight into the raw data rate and controller overhead, for reads and writes, hits and misses. Complex channel programs provide numeric insight into other overheads. Reads and writes can be analyzed separately in this case also. Since the purpose of the analysis is not to fully load a controller, there is no need to create a special workload, nor to trace all devices in a controller. In fact, the measurement is best performed on the live workload that you care about most, to ensure that all relevant workload information is captured.

Missing from such a measurement may be information on high-speed sequential access, so you may want to generate such a workload during the measurement.

Figure 6 provides some sample parameters obtained from a real GTF trace. This sample shows what kind of information can be obtained from a regression analysis, where the connect time is determined as a function of the byte count. The sample shows that the data is not quite what we had hoped: not all numbers make sense. Let us review them.
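The regression itself is simple to sketch. The measurement points below are hypothetical, chosen to mimic a 17 Mbyte/s channel with about 0.7 ms of fixed overhead:

```python
def fit_connect(samples):
    """Least-squares fit of connect_ms = overhead_ms + bytes / rate,
    as in the Figure 5 analysis. Returns (overhead in ms, effective
    data rate in Mbyte/s) from (bytes, connect ms) measurement points."""
    n = len(samples)
    mx = sum(b for b, _ in samples) / n
    my = sum(t for _, t in samples) / n
    slope = (sum((b - mx) * (t - my) for b, t in samples)
             / sum((b - mx) ** 2 for b, _ in samples))   # ms per byte
    overhead = my - slope * mx                            # intercept: fixed overhead, ms
    rate = 1.0 / slope / 1000.0                           # bytes/ms converted to Mbyte/s
    return overhead, rate

# hypothetical points: 0.7 ms overhead plus transfer at 17 Mbyte/s (17000 bytes/ms)
points = [(b, 0.7 + b / 17000.0) for b in (4096, 8192, 16384, 32768)]
overhead, rate = fit_connect(points)
print(round(overhead, 2), round(rate, 1))   # → 0.7 17.0
```

The intercept recovers the overhead and the reciprocal of the slope recovers the effective data rate, exactly the intersection-and-slope reading described for Figure 5.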
[Figure 6 (partially recovered). Sample regression parameters obtained from a GTF trace:]
(corrections: with 18 Mbyte/s: overhead = 0.75 ms)
Type: Single block write hit, 166083 measurement points
Avg transfer size: 5542 (coefficient of variation 0.540)
Avg connect time: 1.197 (overhead 0.904 ms, data rate 18.895 Mbyte/s)
(corrections: with 18 Mbyte/s: overhead = 0.88 ms)
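The "corrections" lines in Figure 6 re-derive the fixed overhead under the assumption that the transfer actually runs at the 18 Mbyte/s channel limit rather than the (impossible) fitted rate. For the write-hit sample this is a one-line computation:

```python
def overhead_at_rate(avg_connect_ms, avg_bytes, rate_mbyte_s):
    """Subtract the pure data-transfer time at an assumed channel rate
    from the average connect time, leaving the fixed overhead (ms)."""
    transfer_ms = avg_bytes / (rate_mbyte_s * 1000.0)   # bytes per millisecond
    return avg_connect_ms - transfer_ms

# write-hit sample from Figure 6: 1.197 ms connect, 5542-byte average transfer
print(round(overhead_at_rate(1.197, 5542, 18.0), 2))   # → 0.89, close to the ~0.88 ms correction
```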
The first two cases are read and write hits, where the regression suggests 20 and 19 Mbyte/s respectively. The maximum channel data rate is 18 Mbyte/s on real channels, so these estimates must be too optimistic. It does suggest strongly, however, that the control unit studied here transfers simple cache hits at full ESCON speed. It also shows that the write overhead is somewhat higher than for reads. For modeling purposes, we should re-compute the overhead based on the 18 Mbyte/s numbers, resulting in 0.75 ms for a read hit and 0.88 ms for a write hit.

For the multi-block write hit the results are more difficult to interpret: the regression suggests a negative overhead. There are two ways to "correct" this number: assume that the write overhead is 0.88 ms, which gives an 11.5 Mbyte/s data rate, or assume that we are still transferring at 18 Mbyte/s, which gives a 1.9 ms overhead. Inspection of the trace data shows that the lower data rate occurs when many small records are written. The model should reflect this.

The read miss case, finally, suggests a 6.3 Mbyte/s data rate, which is somewhat higher than the native device speed for this control unit. This suggests that some buffering occurs in the control unit, but that data transfer is essentially handled at device speed. In fact, when the device speed of 4.2 Mbyte/s is used, the overhead is 0.78 ms, very close to the read hit case. The read miss case also shows an 11.4 ms disconnect time. The 11.4 ms is the device delay, including contention.

Based on this very simple analysis with a standard GTF trace from a production workload we have been able to determine the key overhead parameters for this control unit. With parameters like this, the modeling as outlined in this paper can be applied. It may be possible to automate much of this process, but we have not yet managed to do so.

When analyzing data from synthetic jobs, similar methods can be used and similar results will be obtained. Also in this case it has been the author's experience that the numbers cannot be used "as-is"; they must be corrected for the properties of the specific synthetic jobs used.

8 QUESTIONS ANSWERED

We have shown that analytical models can be calibrated without the need for test time, special I/O generators or any special system setup. You can calibrate the model with your own workload. The methodology is thus very powerful: without the need for any experiments, the controller parameters are accurately captured with a very simple procedure: create a trace and analyze it. Alternatively, the results from synthetic jobs can be used to obtain the same parameters.

Models that are built as described in this paper can be used for I/O performance issues and have, in the author's experience, proven to be very accurate. The success is not so much related to very accurate parameters, but more to the self-calibration properties.

There are of course limitations, some of which are shared with synthetic jobs:
• The modeling approach does not entirely treat the controller as a black box, which means that the accuracy does depend on a proper representation of the major internal contention points. Synthetic jobs suffer from a similar problem, but there the issue is that tests must be designed specifically to exercise all potential bottlenecks of a controller.
• Both synthetic jobs and analytical models are not likely to be very accurate in predicting the saturation point for your own workload. Small errors in parameters can yield large prediction errors when the controller becomes saturated. This is also what happens in real life, as reported in [NIEL95] about Iceberg.

The bottom line is that with an analytical model you will never be sure that all controller contention points are accurately represented in the model, and with synthetic jobs you will never be sure that the synthetic job has all the tests that are required to find the relevant contention points. Synthetic jobs give exact results for a workload that is not yours, and modeling gives approximate results for your workload. It boils down to a matter of personal preference.

9 ACKNOWLEDGEMENTS

I would like to thank the CMG reviewers for their constructive comments on this paper.

10 REFERENCES

[ARTI92] H.P. Artis, Workload Characterization Algorithms for MVS Computer Systems, University of Pretoria, 1992.
[ARTI94] H.P. Artis, DASD Subsystems: Evaluating the Performance Envelope, CMG Transactions, Winter 1994.
[BERE90] DASD/Cache Performance Analysis and Modeling, CMG90, 1990.
[BOWM93] W. Bowman, A pragmatic approach to cache residency time for the 1990s, CMG 1993.
[BRAN81] A. Brandwajn, Models of DASD Subsystems: Basic Model of Reconnection, Performance Evaluation 1,3, November 1981, pp. 263-281.
[CONS96] DASD Magic User's Guide, Consul Risk Management, 1996.
[FERR78] D. Ferrari, Computer Systems Performance Evaluation, Prentice-Hall, 1978.
[FRIE95a] M. Friedman, The performance and tuning of a Storagetek Iceberg disk subsystem, CMG Transactions, Winter 1995.
[FRIE95b] M. Friedman, Evaluation of an approximation technique for disk cache sizing, CMG95, 1995.
[HOUT89] G.E. Houtekamer, Measuring and Modelling Disk I/O Subsystems, Delft University Press, ISBN 90-6275-567-4.
[HOUT90] G.E. Houtekamer and H.P. Artis, 3390: A Close Look at a New DASD Generation, Proceedings CMG90, 1990.
[KOBA78] H. Kobayashi, Modeling and Analysis: An Introduction to System Performance Evaluation Methodology, Addison-Wesley, 1978, ISBN 0-201-14457-3.
[LAZO84] E.D. Lazowska et al., Quantitative System Performance: Computer System Analysis Using Queueing Network Models, Prentice-Hall, 1984, ISBN 0-13-746975-6.
[MCNU87] B. McNutt and J.W. Murray, A multiple workload approach to cache planning, Proceedings CMG87, 1987.
[MCNU91] B. McNutt, A simple statistical model of cache reference locality and its application to cache planning, measurement and control, Proceedings CMG91, 1991.
[MCNU94] B. McNutt, A survey of MVS cache locality by data pool: the multiple workload approach revisited, Proceedings CMG94, 1994.
[NIEL95] C.G. Nielsen and G.A. Philips, The Tip of the Iceberg, Proceedings CMG95, 1995.