
The Association of System Performance Professionals

The Computer Measurement Group, commonly called CMG, is a not-for-profit, worldwide organization of data processing professionals committed to the measurement and management of computer systems. CMG members are primarily concerned with performance evaluation of existing systems to maximize performance (e.g., response time, throughput) and with capacity management, where planned enhancements to existing systems or the design of new systems are evaluated to find the necessary resources required to provide adequate performance at a reasonable cost.

This paper was originally published in the Proceedings of the Computer Measurement Group’s 1978 International Conference.

For more information on CMG please visit http://www.cmg.org

Copyright Notice and License

Copyright 1978 by The Computer Measurement Group, Inc. All Rights Reserved. Published by The Computer Measurement Group, Inc. (CMG), a non-profit
Illinois membership corporation. Permission to reprint in whole or in any part may be granted for educational and scientific purposes upon written application to
the Editor, CMG Headquarters, 151 Fries Mill Road, Suite 104, Turnersville, NJ 08012.

BY DOWNLOADING THIS PUBLICATION, YOU ACKNOWLEDGE THAT YOU HAVE READ, UNDERSTOOD AND AGREE TO BE BOUND BY THE
FOLLOWING TERMS AND CONDITIONS:

License: CMG hereby grants you a nonexclusive, nontransferable right to download this publication from the CMG Web site for personal use on a single
computer owned, leased or otherwise controlled by you. In the event that the computer becomes dysfunctional, such that you are unable to access the
publication, you may transfer the publication to another single computer, provided that it is removed from the computer from which it is transferred and its use
on the replacement computer otherwise complies with the terms of this Copyright Notice and License.

Concurrent use on two or more computers or on a network is not allowed.

Copyright: No part of this publication or electronic file may be reproduced or transmitted in any form to anyone else, including transmittal by e-mail, by file
transfer protocol (FTP), or by being made part of a network-accessible system, without the prior written permission of CMG. You may not merge, adapt,
translate, modify, rent, lease, sell, sublicense, assign or otherwise transfer the publication, or remove any proprietary notice or label appearing on the
publication.

Disclaimer; Limitation of Liability: The ideas and concepts set forth in this publication are solely those of the respective authors, and not of CMG, and CMG
does not endorse, approve, guarantee or otherwise certify any such ideas or concepts in any application or usage. CMG assumes no responsibility or liability
in connection with the use or misuse of the publication or electronic file. CMG makes no warranty or representation that the electronic file will be free from
errors, viruses, worms or other elements or codes that manifest contaminating or destructive properties, and it expressly disclaims liability arising from such
errors, elements or codes.

General: CMG reserves the right to terminate this Agreement immediately upon discovery of violation of any of its terms.
CAPACITY MANAGEMENT
A DEFINITION AND IMPLEMENTATION GUIDE
Martha R. Gilmore
TESDATA Systems Corporation

ABSTRACT
The traditional method of evaluating a computer configuration for effective utilization (primarily a lot of guesses made by people with very good intuitions and very little data) must be augmented with a more systematic approach. Prerequisite to an analysis of capacity is an in-depth knowledge of the system over time. Of particular interest is a good historical trace of 1) system changes, both hardware and software, 2) system problems and down time, 3) workload (e.g., jobs per hour and transactions per second), 4) resource utilization (e.g., percent CPU busy, channel busy, CPU and channel overlap, memory utilization), and 5) performance (e.g., response time and turnaround time). Once a good baseline reporting scheme is functioning properly, control levels can be set for performance, resource utilization and system availability. Exceeded control limits should be addressed with projects designed to analyze and alleviate the causes of such problems. The well tuned system should be modeled; at this point the analyst has the ability to predict the effect of a change in workload on resource utilization and performance. In addition to making modeling possible as a forecasting tool, the implementation of an adequate baseline reporting system provides management with the opportunity to accurately project future workload, resource utilization and performance. System capacity (that workload which the system can handle and still meet performance objectives) can now be predicted.

1. INTRODUCTION

Capacity management is a very complex function involving virtually every computing center department. Its purpose is to provide enough computing capacity to satisfactorily handle an installation's workload at a minimum cost. An illustration of the complexity and universal involvement of computing center staff is Ken Kolence's [1] very apt diagram (Figure 1) of the information flow in a capacity management process. Notice how the following personnel become involved:

• The DP manager compares current system performance to plans and has overall control.

• The capacity planner has the responsibility for seeing that a plan is formulated, implemented and that the information flow is continuous. He must analyze incoming reports of baseline data and output from models and make recommendations.

• Equipment planning and configuration design personnel evaluate possible solutions to bottlenecks which involve upgrades or reconfiguring the hardware.

• Performance data collection and tuning personnel supply well defined data points for baseline reporting, as well as specialized performance data collected as part of tuning projects.

• The personnel involved in change control and problem resolution keep track of problems which interfere with normal operation and coordinate changes to the system.

• The scheduling and control group schedules jobs to meet deadlines and optimize some system resources.

• Operations personnel are responsible for maintaining a flow of work through the system and recording operational problems.

• The billing function aids in the prediction of income and the effects of pricing changes or total budget on workload.

• Users must participate in the process of estimating changes in their workloads.

• Data reduction and report generation is done by a programming staff according to the plan of the capacity planner.

An organized approach to capacity management can prevent wasted manpower and hardware resources. The first step in the process of "getting organized" is to define the goals and activities in comprehensible chunks, and the second is to set out a sequence of steps to take to accomplish the whole task. The whole is broken into three large topics: Cost recovery, Resource optimization, and Forecasting. The next section describes the goals and processes involved in each of these three areas. Cost recovery is described very briefly; cost accounting is a separate topic in its own right. The third section will describe an implementation sequence; since most installations interested in performance management already have a cost recovery plan and accounting system functioning, the implementation guide assumes this has been done.

2. A FUNCTIONAL DESCRIPTION OF CAPACITY MANAGEMENT

Capacity is defined as the workload that a well tuned system can process without exceeding the limits of user performance objectives. Capacity planning thus implies a knowledge of workload, its resource requirements, the limits of current equipment, and the performance the user expects. Also implied is the ability to predict the future workload and its impact on resource utilization and performance. Pricing schemes are inextricably linked to workload and performance. Capacity planning is concerned with optimizing resource utilization, forecasting and cost recovery. The common base which ties all of these functions together is an understanding of current system performance, workload, and resource utilization. It would be impossible
to overemphasize the importance of baseline reporting of exactly what level of service is being provided, how much and where system resources are being used, and in conjunction with what workload.

2.1 COST RECOVERY

The goals of a cost recovery effort are to provide users with information about their own resource consumption and provide the center with information it needs to recover costs in an equitable manner. In addition, a pricing scheme may be used to affect the utilization of various resources.

The cost recovery function interfaces with the capacity planning effort at three key points: 1) the effect of pricing schemes on workload, 2) the sharing of ongoing data with the performance evaluation function, and 3) the correlation of utilization of system components with their cost.

The degree to which changing prices of a computing service affects the demand for that service varies considerably. Cotton [2] discusses the relationship between prices and workload in terms of the price-elasticity of components of the workload. He observes that established applications are price-inelastic, that is, the demand for them is unaffected by price; new services or applications, however, are very price-elastic. With the drastic reduction in cost of computing power in recent years has come a dramatic increase in demand for new and diversified uses. Recovery of costs depends on a stable subset of the workload such as differentiated or specialized services. Those subsets of the workload which are sensitive to price can be manipulated to effect control over allocation of resources or restriction of their use. The problem of predicting the effect of pricing policy on workload reduces to a problem of understanding which components are price-elastic and which are not. Visibility of price-elasticity would come from ongoing tracking of workload by application type and budget. The relationship between pricing and workload could be modeled if the tracking is sufficient. An attempt to incorporate price factors into a queueing model is described by Giammo [3].

Many installations have billing systems which charge users for system resources used based on a system accounting log. A complete functional description of the billing function is not included in this discussion. A wealth of ongoing performance management data is available in the system accounting log.

Borovits and Ein-Dor [4] have proposed a method of displaying utilization figures and cost which conveys at a glance an intuition for system balance. The method is simple: plot percentage of utilization on the vertical axis, and on the horizontal axis make divisions which form the base of bars, the width of which represents percent of cost. Each separate bar represents a different hardware item (or cost center). Figure 2 shows some examples from their paper. From these graphs it is easy to grasp that an underutilized expensive item is more significant than an equally utilized inexpensive item.
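As an illustration (not part of the original paper), the cost/utilization display can be sketched in a few lines with any plotting package; the hardware items, cost shares and utilizations below are invented for the example.

    import matplotlib.pyplot as plt

    # Hypothetical hardware items: (name, percent of total cost, percent utilization)
    items = [("CPU", 40, 85), ("Memory", 25, 60), ("Channels", 15, 45), ("Disk", 20, 30)]

    left = 0.0
    fig, ax = plt.subplots()
    for name, cost_pct, util_pct in items:
        # Bar width = the item's share of cost, bar height = its utilization
        ax.bar(left, util_pct, width=cost_pct, align="edge", edgecolor="black", label=name)
        left += cost_pct
    ax.set_xlabel("PERCENT OF COST")
    ax.set_ylabel("PERCENT OF UTILIZATION")
    ax.set_xlim(0, 100)
    ax.set_ylim(0, 100)
    ax.legend()
    plt.show()

A wide, short bar (an expensive item running at low utilization) stands out immediately, which is the point of the display.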
2.2 RESOURCE OPTIMIZATION

The goals of a resource optimization effort are to improve system reliability and availability, to identify system bottlenecks, and to tune the system.

The process of system optimization is an iterative one [5]; all stages of the process should continually be refined. Figure 3 gives an overview of the steps and decisions involved.

• Understand the current system (this is an ongoing, never ending process)

  a) Understand the current software and hardware configuration. Know the SYSGEN parameters and their influence on performance.

  b) Understand factors outside of the CPE organization which influence performance. Particular attention should be paid to pricing policies, manual and automatic scheduling algorithms, operator actions, scheduled down time, unscheduled down time (both hardware and software caused), and hardware errors.

  c) Know the current workload, both online interactive systems and batch work. Understand how it varies throughout the day, week, month, year.

  d) Know current resource utilization for the CPU, memory and I/O equipment (channels, controllers and devices). Understand how all the resource utilizations vary with workload.

• Set (or re-evaluate) performance goals

  In order to set performance goals it is necessary to define the terms in which to specify objectives [6]. Specific measures of performance must be chosen along with the tools required to provide the measurement. It is important to note any discrepancies between the measures and the service level the user sees. The workload must be examined for different categories of work which require different priorities and different performance objectives. Two basic types of objectives must be defined and set: user-oriented and system-oriented. User-oriented performance objectives reflect the way the end user would rate his service and include response time for each interactive system and turnaround time for various categories of batch work. System-oriented performance objectives reflect the workload which must be supported by the system and include batch throughput, interactive systems transaction rates, and the number of concurrent interactive users supported.

• Measure current performance against performance objectives

  Using the same baseline reporting required in order to "know the system", check the specific performance objectives against the same measures for the time frame of interest.
• Formulate an improvement hypothesis

  It is possible that an appropriate solution to the particular problem will be intuitively obvious. More likely, a significant effort will be required to analyze the problem. If the problem is primarily user-oriented, focus on workload management, favoring access to needed resources for one type of work over other types of work. If the problem is system-oriented, focus on resource management, identifying the critical resources in the system and increasing their effective utilization.

• Analyze cost effectiveness of proposed modification

  Consider the cost of the modification aimed at the specific improvement hypothesis. Be sure that even if a maximum expected improvement is seen from the change, a) the dollar investment in additional hardware, software or manpower (e.g. another operator) is recoverable, and b) the manpower investment required to investigate the proposed change further is not prohibitive. If cost bears a reasonable relationship to the problem, continue; otherwise go back and re-evaluate objectives.

• Test specific hypothesis

  There are situations when it makes good sense to skip this step; it is possible that it costs more to test a certain improvement definitively than it would to simply do it. Visibility of the effect of the change would then have to come after the fact from the baseline reporting, and it should be noted as such. However, in most cases this step in the process is crucial and should not be bypassed. Hypothesis testing is in itself an iterative process. Start by designing tests and choosing a tool; the two go hand in hand and can't be separated. Possible tools include benchmarking, simulation, software monitoring, hardware monitoring, and processing of accounting or other historical files. Run the test and collect the data. Analyze the data and evaluate the hypothesis; if the testing is inadequate, do more; if the hypothesis is proven wrong, go back and formulate another; if the hypothesis is proven correct, continue.

• Apply modification to the system

  Interface with the change management and control function to install the change so that a record is kept of the nature and date of the change.

• Repeat whole process

  Go back and re-evaluate performance objectives. Pay particular attention to the effect of the change on the production system. Unpredicted side effects of the change should show up in the baseline reporting and be dealt with as any other deviation.

2.3 FORECASTING

The goals of forecasting are to predict future workloads, income, resource consumption, and performance; to prevent future bottlenecks with recommendations for upgrades; and to predict the effect of a change in pricing schemes.

2.3.1 PROJECTIONS. The most basic approach to forecasting [7] is to make projections from baseline data showing trends. The goal is to predict the capacity of the current system and thus be able to plan for reconfiguration of the equipment. This requires being able to predict future resource utilizations and resulting performance.

The normal starting point for projections is a change in workload. The workload does not usually change in a smooth or obvious pattern. Acquiring a large new user cannot be predicted from trend data; the introduction of a new service (e.g. interactive computing) or a new application will add a workload which must be estimated. Application areas will normally forecast their workloads in terms like number of students registered, number of library terminals, number of paychecks, and so on. The translation of these types of volume predictions to computer loading parameters like transactions/sec., or to resource consumption parameters like CPU time and channel busy time, is difficult at best; it is nearly impossible if there is no tracking of computer load and resource utilization by application.

It is conceivable that gross estimates of the resources required by a new application could be made by estimating path length. That is, one could estimate the number of instructions and the number of input/output operations to be performed and then calculate the resulting CPU and channel times required. This, however, is not a popular suggestion.
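A quick sketch of the arithmetic involved may make the idea concrete; the path length, I/O counts, processor speed and transaction rate below are assumptions chosen for illustration, not figures from the paper.

    # Gross path-length estimate for a hypothetical new transaction type
    instructions_per_txn = 200_000   # estimated instruction path length per transaction
    io_ops_per_txn = 8               # estimated I/O operations (EXCPs) per transaction
    cpu_rate = 1.0e6                 # instructions per second the CPU executes (assumed)
    io_service_time = 0.025          # seconds of channel/device time per I/O (assumed)
    peak_txn_rate = 2.0              # expected transactions per second at peak

    cpu_sec_per_txn = instructions_per_txn / cpu_rate   # 0.20 s of CPU per transaction
    io_sec_per_txn = io_ops_per_txn * io_service_time   # 0.20 s of channel time per transaction

    print("Added CPU utilization at peak:     %.0f%%" % (100 * peak_txn_rate * cpu_sec_per_txn))
    print("Added channel utilization at peak: %.0f%%" % (100 * peak_txn_rate * io_sec_per_txn))

Both figures come out to about 40% here, which is the kind of rough, order-of-magnitude answer the method can give; the difficulty noted above is in trusting the instruction and I/O counts in the first place.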
More plausible ways of making the transition from a new application or increased volume to the computer resources required are, directly, by comparison to the resource requirements of a similar existing application, or by estimating the workload increase in units whose resource requirements are known. These units might be the number of a particular type of transactions per second. Hopefully there exists a set of common workload categories which would provide a basis for most applications. The first of these methods requires tracking of resource requirements by application type. The second requires the tracking of workload by application and the tracking of resource requirements by workload category.

A third alternative for predicting workload changes is to start with changes in users' budgets. At a service organization the amount of service provided is limited by the money users have to spend. To predict resource utilization changes from changes in budget requires tracking of workload and resource utilization by budget number. This approach to prediction of resource utilization places a heavy burden on understanding the relationship between pricing schemes and workload.

If the necessary tracking of workload and resource utilization has been done in
conjunction with tracking the performance of various categories of workload, then projections of either workload or resource utilization should make graphical projections of performance fairly easy. Once this level of familiarity with the system is reached, a move to simple single server queueing models could prove fruitful.
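A minimal sketch of that step, using the standard single-server (M/M/1) result that mean response time equals the service time divided by (1 - utilization); the service time, current arrival rate and growth trend below are assumptions for illustration, not measurements from the paper.

    # Project utilization and response time as the workload grows (single-server sketch)
    service_time = 0.15      # seconds of service per job, from baseline data (assumed)
    arrival_rate = 4.0       # jobs per second today (assumed)
    monthly_growth = 0.05    # 5% workload growth per month, from trend data (assumed)
    objective = 1.0          # response time objective in seconds (assumed)

    for month in range(13):
        rate = arrival_rate * (1 + monthly_growth) ** month
        util = rate * service_time                  # offered utilization of the server
        if util >= 1.0:
            print("month %2d: utilization %3.0f%% -- saturated" % (month, 100 * util))
            break
        resp = service_time / (1.0 - util)          # M/M/1 mean response time
        note = "  <-- exceeds objective" if resp > objective else ""
        print("month %2d: utilization %3.0f%%  response %5.2f s%s" % (month, 100 * util, resp, note))

The month in which the objective is first exceeded is, in the paper's terms, the point at which the projected workload passes the capacity of the current configuration.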
2.3.2 MODELING. Many different modeling methods are available for forecasting purposes. The oldest and most familiar is benchmarking. Others are simulation, stochastic models and analytical models. Figure 4 shows the overall flow involved in any modeling process. It is important to recognize at the outset that making good use of models implies a continuing evaluation of the model's effectiveness, improvements to it, and validation of the improvements. The model can only be as accurate as the baseline reporting system allows; every phase of modeling (development, validation and tracking) requires a constant input from and comparison to the real world data.

Benchmarks are essentially models of the workload and may be designed to reflect varying workloads as well as the current one. The benchmark may be input to the real system or may be used to drive simulation models of all or various parts of the system, e.g. the scheduling algorithm or an online system. An effort to develop benchmarks must be coordinated with the performance evaluation, forecasting, pricing, and change control functions.

The most promising trend in computer forecasting is the maturing of queueing network theory models. These types of models have several advantages over benchmarking and simulation. They can be driven with average values rather than complex workload models, and they require very little computer resources to run. As this branch of computer science matures, high level languages are being developed to make model creation very simple [8, 9]. Several studies have been done using queueing network models which indicate their power [9, 10]. It is not unusual for predictions to be made by a single server model which are within five percent of actual values [9].

3. IMPLEMENTATION

For convenience, the implementation plan is divided into five phases. The sequence does not have to be rigidly adhered to; the reasoning is that a total capacity planning effort is too large to implement without the benefits of definite, reasonably spaced milestones throughout the overall process. If many elements of capacity planning are already functioning, it would be reasonable to tackle problems of several phases in parallel. Tuning efforts which are currently in progress should not be disrupted; the hope is that this implementation will augment current efforts.

Phase 1 - Establish baseline knowledge of current system
Phase 2 - Establish user service objectives
Phase 3 - Tune system
Phase 4 - Establish forecasting techniques
Phase 5 - Evaluate overall process and iterate

3.1 PHASE 1 - ESTABLISH BASELINE KNOWLEDGE OF CURRENT SYSTEM

The main thrust of this phase is to establish a well organized baseline data gathering and reporting system. What data is gathered and how it is displayed (in particular how the data is grouped, i.e. its dimensions) is dependent on its use. What specific measures or parameters are collected will also depend upon the available tools. The number and nature of the different ways of looking at the same data, and the number of different time frames in which to contemplate the same data, will determine the most efficient processing method and file structure(s) for the baseline data. The goals of developing a baseline reporting and gathering system are discussed in four steps [11]:

• Decide what type of data is required

• Choose specific parameters to measure and the tools to collect them

• Design and implement a historical data storage, retrieval, and display system for the baseline data

• Validate the data gathered and establish a desired level of confidence.

3.1.1 DECIDE WHAT TYPES OF DATA ARE REQUIRED. The processes of resource optimization, forecasting, and cost recovery require a detailed tracking of the following categories of data:

• System changes

  Any attempt to correlate workload and performance must take into account the system configuration. Hardware changes such as added memory, a repaired disk controller, a disk drive offline, or an upgrade to a different disk system must be traced. The time and date of software changes such as an interim performance modification, a changed IPS parameter, an added software system, or increased memory allocation to a subsystem should be recorded.

• Unavailable capacity

  A good trace is needed of scheduled down time, unscheduled down times and their cause, occurrences of various problems and the time and date they were fixed, and time lost to operational problems.

• Workload

  The goal is to understand and document the workload. Deciding how to categorize workload is complicated since we want to know how to predict future workloads, how workload affects resource utilization, how workload affects performance (e.g. response times), how workload affects priority demands and incomes, how the workload varies in time, and how to manage (schedule) workload for best performance. In order to predict future workloads, the current workload must be distinguished according to application area or type. The categories must also reflect different expected resource utilization, different objectives
that must be met, and different priorities (e.g. price class). These broad goals apply to online system transactions as well as batch jobs. That is, categories of transactions need to be defined, the transaction rates for each determined, and the number of concurrent online system users recorded. Characterizing the workload is also necessary in order to develop and validate models of the workload, e.g. benchmarks and script files.

• Resource utilization

  The goal is to know the amount of resources used by each category of work; resources include CPU time, memory, channels and devices. Since workload is categorized in several ways, resource utilization will be also. Utilization by application area is of special interest; projections of workload by application make it possible to project resource utilization and thus future hardware bottlenecks. Also of particular interest is tracking resource consumption by workload categories that reflect expected resource utilization. Exceptional resource consumption could indicate incorrect classification by the user or ineffective divisions; it could mean a need to redefine some categories of work. The data collected about resource utilization would eventually be needed to validate models of system performance.

• Performance

  The goal is to have baseline data against which to compare performance objectives. User performance measures should include turnaround times and response times for each category of work. It would also be of interest to report the percent of times deadlines are not met for scheduled jobs.

3.1.2 CHOOSE SPECIFIC PARAMETERS AND TOOLS. The goal is to collect the data required for all of the purposes listed above. An installation serious about forecasting must be prepared to invest in the tools necessary to fill in the reporting needed for capacity visibility. There are many baseline measurement tools available; most are dependent upon the operating system. Two exceptions which come to mind are the account log (to a large degree) and hardware monitors. Some tools collect data, some analyze data and display information, and others do both. A few tools are vendor supplied, such as account log processors, and a plethora are available for a price. When a change of operating systems is being planned, it is important to become thoroughly familiar with the measurement tools available for the new environment.

3.1.3 DATA STORAGE, RETRIEVAL AND DISPLAY. The goal is to report billing, scheduling, capacity and cost/utilization information in a timely fashion. Well formatted, comprehensible management summary reports should be periodically produced. In addition, detailed or specialized reports should be readily available upon request for any reporting period. Whereas many data collection and display packages are available and may be chosen for use, it is inevitable that account log data will continue to provide a large portion of the data foundation. Many data items collected for SMF (e.g. CPU time used by a job) will be analyzed with respect to many different variables (e.g. time, date, scheduling parameters, current system workload parameters, current utilization parameters and turnaround for the job). The multifaceted nature of the data is clear when considering capacity; capacity is defined in terms of the interrelations of workload, resource utilizations and performance. The mass of data and the complexity of the analysis create a problem of major proportions: how to process and display it in a flexible, inexpensive manner [12].
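A minimal sketch of the kind of data reduction this implies, assuming accounting-log records have already been extracted to (hour, workload category, CPU seconds, EXCP count) tuples; the field layout and category names are invented for the example, not an actual SMF record format.

    from collections import defaultdict

    # Hypothetical reduced accounting-log records: (hour, workload_category, cpu_seconds, excps)
    records = [
        (9, "batch-production", 1620.0, 40000),
        (9, "interactive", 540.0, 90000),
        (10, "batch-production", 1980.0, 52000),
        (10, "interactive", 720.0, 110000),
    ]

    # Roll the detail up into one baseline row per (hour, category)
    baseline = defaultdict(lambda: [0.0, 0])
    for hour, category, cpu, excps in records:
        baseline[(hour, category)][0] += cpu
        baseline[(hour, category)][1] += excps

    for (hour, category), (cpu, excps) in sorted(baseline.items()):
        # 3600 CPU-seconds are available per hour on a single processor
        print("%02d:00  %-16s  CPU busy %4.1f%%  EXCPs %6d" % (hour, category, 100 * cpu / 3600, excps))

Rows like these, kept by hour, category and reporting period, are one possible form of the "file structure for the baseline data" referred to above; summary reports and projections are then produced from this file rather than from the raw log.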
3.1.4 VALIDATE COLLECTED DATA AND ESTABLISH CONFIDENCE. The goal is to have confidence in the meaning of measured and calculated data. One way to establish such confidence is by redundant measures made by different tools, including manually collected or calculated data. Another is by benchmarking or using other modeling techniques. Discrepancies need to be understood and resolved. Each reported piece of information should have a well defined, constant meaning.

3.2 PHASE 2 - SET PERFORMANCE OBJECTIVES

Performance standards are needed such that both the user community knows when to complain (and what about) and the center knows when something is wrong. There is a clear necessity for setting performance standards even in the absence of sanctions (such as price breaks for users when objectives are not met). The aim is to predict and prevent performance failures before they occur.

User-oriented performance objectives would be defined in the same units as baseline measures. If it is desirable to educate users to expect different turnaround for different types of jobs and different response times for different types of transactions or commands, then different objectives must be set for each type of work. Sometimes this means setting limits on the minimum as well as the maximum response time.

Setting system-oriented performance goals means defining the workload that the system is expected to handle. Recall that the capacity of a system is the workload it can handle when well tuned without exceeding user service objectives. If the expected workload, i.e. the goal, exceeds the capacity of the system, it is time for a reconfiguration.
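A sketch of the comparison this phase sets up, with hypothetical objectives and measured values (the work categories and numbers are not from the paper):

    # User-oriented objectives: category -> (limit, unit)
    objectives = {
        "TSO trivial response": (3.0, "s"),
        "Batch class A turnaround": (30.0, "min"),
        "Batch class B turnaround": (120.0, "min"),
    }

    # Measured values for the reporting period, taken from the baseline data (assumed)
    measured = {
        "TSO trivial response": 2.4,
        "Batch class A turnaround": 41.0,
        "Batch class B turnaround": 95.0,
    }

    for category, (limit, unit) in objectives.items():
        value = measured[category]
        status = "OK" if value <= limit else "EXCEEDED -- candidate for a tuning or capacity project"
        print("%-26s %6.1f %-3s (objective %6.1f) %s" % (category, value, unit, limit, status))

In the abstract's terms, the objectives act as control levels; an exceeded limit is what triggers the tuning and forecasting work of the later phases.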
3.3 PHASE 3 - TUNE SYSTEM

Once the ongoing baseline data collection and display system is functioning and performance objectives are defined, a massive effort can be aimed at system tuning. This is not to imply that the system has disintegrated while the major effort was directed to Phases 1 and 2; presumably the historical approach to tuning, namely fire fighting, has continued to operate. The goal at this stage is to acquire the necessary tools and expertise (if not already on site) to insure a well tuned system. It is a time to fill in gaps in previous tuning efforts and see the system as a whole; this is important since individual fine tuning projects
are often very narrow in scope. It is also a time to set precedents for the management of future tuning projects.

Fine tuning of an operating system and its subsystems is primarily a systems programming effort; it requires the most detailed possible knowledge of the system. Detailed tools are required in addition to the baseline reporting already in effect, e.g. GTF trace analysis, software monitor reports of internal queues and locks, module activity analysis, and detailed hardware monitor traces of components of the system. Tuning also involves the engineering staff in evaluation of equipment configuration, in particular communication equipment. Smith [13] describes a massive multi-area tuning effort in an environment where response times were important. Terminals, terminal controllers, communication controllers, communication channels, memory, the operating system, the message control program, application programs, I/O channels and controllers, and disk data sets were all studied. Particularly good results were obtained by reducing I/O time and eliminating I/O activity.

The possibility of optimizing application programs as well as the operating system should not be ignored. In a traditional production environment, application program tuning can be the most significant performance improver.

At this stage it is essential that a framework be developed for computer measurement and evaluation (CPE) projects [14]. Each project should have a beginning and an end. The actual or potential problem should be articulated along with any assumptions or hypotheses. The expected benefits and the criteria to be used to determine whether or not the project was successfully completed need to be spelled out. A method of determining whether or not the proposed benefits outweigh the cost of the project must be developed and used; in essence, tuning projects need to be managed.

3.4 PHASE 4 - ESTABLISH FORECASTING TECHNIQUES

During this phase the value of various forecasting methods should be examined. Projections from baseline data were made possible by the completion of the first phase. Likewise the data required to run analytical models is already being collected, as is the data required to characterize the batch and on-line systems workloads. J. D. Noe has described some criteria for selecting modeling methods [15]. Using his criteria with performance prediction as one's goal leads one toward queueing network models; the same criteria applied to the goal of predicting the behavior of various scheduling algorithms biases one in the direction of simulation (e.g. GPSS or SIMSCRIPT) models. The overriding criterion is that the model allows the selection of characteristics which are important to the problem at hand.

Some comments about benchmarks (models of the workload) are in order. Very sophisticated benchmarks are required to drive simulation models; it is desirable to be able to vary the nature of the workload in such cases. It may be decided at this stage to develop such a benchmark. Note that benchmarks are used for many other purposes and some will probably already exist for such purposes as tests of new function, regression, load or stress, tuning, and new pricing schemes [16].

3.5 PHASE 5 - EVALUATE OVERALL PROCESS AND ITERATE

This phase is a review of the whole capacity planning effort. The current baseline reporting process should be examined to insure that necessary information is being collected. Perhaps some priorities have changed since the original decisions were made about what to measure; perhaps more detail or less is needed. Unnecessary reports should be culled from the system. The analysis and hypothesis formulation stages of the tuning effort should be re-evaluated. Be sure that CPE projects are interfacing with change control management. Re-evaluate the performance objectives. The tracking phase of modeling has been entered; insure that the ongoing validation of models with real world data is taking place. If difficult problems arise in any area, consider reorganizing part or all of the personnel involved in the effort. By this stage a solid measurement base should exist to support rules of thumb and good intuition.

NOTES

[1] For an excellent discussion of capacity management see Ken Kolence, "Software Physics", Proc. of SHARE 48 (1977), p. 86.

[2] Cotton, Ira W., "Some Fundamentals of Price Theory for Computer Services", Performance Evaluation Review, Vol. 5C, 1 (March 1976), p. 1.

[3] Giammo, Thomas, "Deficiencies in Computer Pricing Structure Theory", Performance Evaluation Review, Vol. 5C, 1 (March 1976), p. 13.

[4] Borovits, Israel and Ein-Dor, Phillip, "Cost/Utilization: A Measure of System Performance", Comm. of the ACM, Vol. 20, 3 (March 1977), pp. 185-191.

[5] The ideas in this section are borrowed heavily from the work of Thomas E. Bell, "Framework and Initial Phases for Computer Tuning", Computer Measurement and Evaluation: Selected Papers from the SHARE Project, Vol. II (1974), pp. 703-737.

[6] The concept of setting performance goals is taken from OS/VS2 MVS Performance Notebook, GC28-0886-0, IBM (July 1977).

[7] Figure 4 and many of the thoughts about forecasting are taken from Dr. L. Bronner in "Capacity Planning", IBM Technical Bulletin, GC22-9001-00 (Jan. 1977).

[8] Sauer, C. H., M. Reiser, and E. A. MacNair, "RESQ - A Package for Solution of Generalized Queuing Networks", AFIPS Conf. Proc., AFIPS Press, Vol. 46 (1977), pp. 977-986.

[9] Buzen, J. P., "Modeling Computer System Performance", CMG-VII Conf. Proc. (1976), p. 230.
[10] For an example of a practical effort in modeling see Lipsky, L. and Church, J. D., "Applications of a Queueing Network Model for a Computer System", Computing Surveys 9, 3 (Sept. 1977), pp. 205-221.

[11] An extremely complete documentation of one installation's implementation of the first three steps is described by Dennis M. Reddington, "Workload History, Characterization, and Forecast Description", CMG-VII Conf. Proc. (1976), p. 167.

[12] For a discussion about planning for a capacity management database see Gilbert A. Van Schaar, "Capacity Management", Computer Measurement and Evaluation: Selected Papers from the SHARE Project, Vol. IV (1976), p. 419.

[13] Smith, James R. Jr., "Modulation of a Teleprocessing System", Computer Measurement and Evaluation: Selected Papers from the SHARE Project, Vol. IV (1976), p. 483.

[14] Gierach, Stephen A., "Estimating the Value of CPE Projects", CMG-VII Conf. Proc. (1976), p. 18.

[15] Noe, J. D., "Criteria for System Modeling Methods", BBUG/CMG Conf. Proc. (1975), p. 188.

[16] For expansion of these testing concepts, see "A Guide to Testing in a Complex System Environment", GH20-1628-0, IBM (Nov. 1974), pp. 1-5.
FIGURE 1. CAPACITY MANAGEMENT OVERVIEW
(Kolence's information-flow diagram: data collection and analysis, comparison of actuals to planned, and management review and planning, linking workload forecasts and characterization, billing and EDP cost accounting, capacity and configuration plans, performance control, and rate setting.)
FIGURE 2. ILLUSTRATIVE COST/UTILIZATION HISTOGRAMS
(Four panels plot percent of utilization against percent of cost: well balanced and well utilized; well balanced and poorly utilized; relatively well utilized and somewhat unbalanced; poorly utilized and unbalanced.)
FIGURE 3. PROCESS OF OPTIMIZING RESOURCE UTILIZATION
(Flowchart: understand the system; set or re-evaluate performance objectives; measure against performance objectives; if a problem exists, formulate an improvement hypothesis; analyze cost effectiveness of the modification; test the specific hypothesis; implement the modifications; test the effectiveness of the modifications; iterate.)
FIGURE 4. PERFORMANCE PREDICTION CYCLE
(Actual system data - jobs/hr., message rates, utilizations, response times, data base organization, transaction scenarios, application and supervisor path lengths - feeds measurement tools (hardware and software monitors) and detail analysis tools; measured results, workload characterization (transaction types and rates, CPU time per transaction, EXCP rates) and service characterization drive the model through development, validation and tracking phases (queueing and empirical); model results (response time, turnaround, utilizations) together with the system configuration (processor, storage, number of terminals, network, operating system) indicate the required modifications.)
