Chapter 3
A GOAL-BASED FRAMEWORK FOR SOFTWARE MEASUREMENT
Software Measurement
Classifying Software Measures
Classifying Software Measures (cont.)
Internal                   External
Size, Effort, Cost         Usability
Code Complexity            Integrity
Functionality              Efficiency
Modularity                 Testability
Redundancy                 Reusability
Syntactic Correctness      Portability
Reuse                      Interoperability
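The internal/external split above can be recorded directly for use in measurement tooling. The sketch below (in Python) is only an illustration of how such a classification might be kept; the dictionary layout and the helper function are assumptions, not something defined in this chapter.

    # Classification of the measures listed above into internal and external
    # product attributes. The dictionary layout is an illustrative assumption.
    MEASURE_CLASS = {
        "internal": ["size", "effort", "cost", "code complexity", "functionality",
                     "modularity", "redundancy", "syntactic correctness", "reuse"],
        "external": ["usability", "integrity", "efficiency", "testability",
                     "reusability", "portability", "interoperability"],
    }

    def classify(measure: str) -> str:
        """Return 'internal', 'external', or 'unknown' for a measure name."""
        for category, measures in MEASURE_CLASS.items():
            if measure.lower() in measures:
                return category
        return "unknown"

    print(classify("Usability"))   # -> external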
Importance Of Internal Attributes
Products
• Products are not restricted to the items that management is
committed to deliver to the customer. Any artifact or
document produced during the software life cycle can be
measured and assessed.
• For example, developers often build prototypes for examination only, so that they can understand requirements or evaluate possible designs; these prototypes may be measured in some way.
• External product attributes depend on both product behavior and environment, so each attribute measure should take these characteristics into account.
• Internal product attributes are sometimes easy to
measure.
• We can determine the size of a product by measuring the number of pages it fills or the number of words it contains (a small counting sketch follows this list).
• Other internal product attributes are more difficult to measure, because opinions differ as to what they mean and how to measure them; code complexity is one example.
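As a concrete illustration of the easy-to-measure size attributes mentioned above, the sketch below (in Python) counts the lines and words a text artifact contains. The file name and the particular counts chosen are illustrative assumptions, not measures prescribed by this chapter.

    # Crude internal size measures for a text artifact: lines and words.
    # The artifact name used in the example is hypothetical.
    from pathlib import Path

    def size_measures(path: str) -> dict:
        """Count lines, non-blank lines, and words in a text document."""
        text = Path(path).read_text(encoding="utf-8")
        lines = text.splitlines()
        return {
            "lines": len(lines),                               # physical lines
            "non_blank_lines": sum(1 for l in lines if l.strip()),
            "words": len(text.split()),                        # whitespace-separated tokens
        }

    # Example usage (hypothetical artifact):
    # print(size_measures("requirements_spec.txt"))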
Resources
Goal-Question-Metric
Perspective − Examine the (cost, effectiveness,
correctness, defects, changes, product measures, etc.)
from the viewpoint of the developer, manager, customer,
etc. Example: Examine the defects from the viewpoint
of the customer.
Environment − The environment consists of the following: process factors, people factors, problem factors, methods, tools, constraints, etc. Example: the customers of this software have no knowledge about the tools.
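To make the template concrete, here is a minimal sketch (in Python) of how a GQM goal, together with its perspective, environment, questions, and metrics, might be recorded. The field layout and the example goal, questions, and metric names are illustrative assumptions, not content taken from this chapter.

    # A minimal GQM record: a goal stated with purpose, issue, object,
    # perspective, and environment, refined into questions, each answered
    # by metrics. Field layout and example values are illustrative assumptions.
    from dataclasses import dataclass, field

    @dataclass
    class GQMGoal:
        purpose: str                 # e.g. "evaluate"
        issue: str                   # what is examined (defects, cost, ...)
        object_of_study: str         # the product or process being studied
        perspective: str             # whose viewpoint (customer, manager, ...)
        environment: str             # process/people/problem factors, tools, constraints
        questions: dict = field(default_factory=dict)   # question -> list of metrics

    # Hypothetical example: examine the defects from the viewpoint of the customer.
    goal = GQMGoal(
        purpose="evaluate",
        issue="defects",
        object_of_study="released product",
        perspective="customer",
        environment="customers have no knowledge about the tools",
        questions={
            "How many defects reach the customer?": ["post-release defect count"],
            "How severe are they?": ["defects per severity class"],
        },
    )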
Examples of AT&T Goals, Questions and Metrics
e.g. deriving metrics from goals and questions
Measurement and Process Improvement
• Normally, measurement is useful for:
Understanding the process and products
Establishing a baseline
Assessing and predicting the outcome
• The type of measurement and the measurement program differ according to the maturity level of the process, as defined by the SEI. The SEI suggests that there are five levels of process maturity, namely: ad hoc, repeatable, defined, managed, and optimizing.
• The SEI distinguishes one level from another in terms of the key process activities going on at each level.
• Level 1: Ad hoc
• At this level, the inputs are ill-defined, while the outputs are
expected. The transition from input to output is undefined and
uncontrolled. For this level of process maturity, baseline
measurements are needed to provide a starting point for
measuring.
• Level 2: Repeatable
• At this level, the inputs and outputs of the process, constraints, and
resources are identifiable. A repeatable process can be described
by the following diagram.
• The input measures can be the size and volatility of the
requirements. The output may be measured in terms of system
size, the resources in terms of staff effort, and the constraints in
terms of cost and schedule.
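As a small illustration of the measures named for a repeatable process, the sketch below (in Python) records inputs (requirements size and volatility), output (system size), resources (staff effort), and constraints (cost and schedule). The field names and example values are assumptions made for illustration only.

    # Measures that become available at the Repeatable level. Field names
    # and values are illustrative assumptions, not data from the chapter.
    from dataclasses import dataclass

    @dataclass
    class RepeatableProcessMeasures:
        requirements_count: int      # input: size of the requirements
        requirements_changes: int    # input: volatility of the requirements
        system_size_kloc: float      # output: system size (thousands of LOC)
        staff_effort_pm: float       # resource: staff effort in person-months
        cost: float                  # constraint: cost
        schedule_months: float       # constraint: schedule

    project = RepeatableProcessMeasures(
        requirements_count=120,
        requirements_changes=15,
        system_size_kloc=38.5,
        staff_effort_pm=60.0,
        cost=450_000.0,
        schedule_months=12.0,
    )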
• Level 3: Defined
• At this level, intermediate activities are defined, and
their inputs and outputs are known and understood. A
simple example of the defined process is described in
the following figure.
• The input to and the output from the intermediate
activities can be examined, measured, and assessed.
• Level 4: Managed
• At this level, feedback from early project activities can be used to set priorities for current activities and for later project activities. We can measure the effectiveness of the process activities. The measurement reflects the characteristics of the overall process and of the interaction among and across major activities.
• Level 5: Optimizing
• At this level, the measures from activities are used to
improve the process by removing and adding process
activities and changing the process structure dynamically in
response to measurement feedback. Thus, the process change can affect the organization and the project as well as the process. The measures act as sensors and monitors, and we can change the process significantly in response to warning signs.
To sum up
• The goal and question analysis will be the same, but the metrics will vary with maturity.
• The more mature the process, the richer the measurements will be.
• The GQM paradigm, in concert with the process maturity,
has been used as the basis for several tools that assist
managers in designing measurement programs.
• GQM helps to understand the need for measuring the
attribute, and process maturity suggests whether we are
capable of measuring it in a meaningful way.
• Together they provide a context for measurement.
Software Measurement Validation
• Even when you know which entity and attribute you want
to assess, there are many measures from which to choose.
• Sometimes managers are confused by measurement, which is not surprising.
• One of the roots of this confusion is the lack of
software measurement validation.
• The validation approach depends on distinguishing measurement from prediction:
Measures or measurement systems are used to assess an existing entity by numerically characterizing one or more of its attributes.
Prediction systems are used to predict some attribute of a future entity, involving a mathematical model with associated prediction procedures.
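To illustrate the distinction, the sketch below (in Python) shows a prediction system: a mathematical model plus a procedure that turns a measured or estimated input into a prediction. The effort model effort = a * size**b and its coefficient values are assumptions chosen for illustration, not a model given in this chapter.

    # A prediction system = model + prediction procedures. The model form
    # (effort = a * size**b) and its coefficients are illustrative assumptions.

    def predict_effort_pm(size_kloc: float, a: float = 2.5, b: float = 1.05) -> float:
        """Predict staff effort (person-months) for a future system from an
        estimate of its size in KLOC."""
        return a * (size_kloc ** b)

    # Measurement characterizes an existing entity (e.g. size counted from code);
    # prediction estimates an attribute of a future entity from such inputs.
    estimated_size_kloc = 40.0
    print(f"Predicted effort: {predict_effort_pm(estimated_size_kloc):.1f} person-months")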
Software Measurement Validation (cont.)
• Validating a prediction system in a given environment is
the process of establishing the accuracy of the prediction
system by empirical means; that is, by comparing model
performance with known data in the given environment.
• It involves experimentation and hypothesis testing.
• To validate the prediction system formally, you must first decide how stochastic it is, and then compare the performance of the prediction system with known data points.
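A minimal sketch (in Python) of the empirical comparison described above: the prediction system's outputs are compared with known data points, and an accuracy statistic is computed. The use of the mean magnitude of relative error (MMRE) is one common choice, assumed here for illustration, and the data values are hypothetical.

    # Validating a prediction system empirically: compare predictions with
    # known data points. MMRE is one common accuracy statistic, assumed here
    # for illustration; the project data are hypothetical.

    def mmre(actual: list, predicted: list) -> float:
        """Mean magnitude of relative error over a set of known data points."""
        errors = [abs(a - p) / a for a, p in zip(actual, predicted)]
        return sum(errors) / len(errors)

    # Hypothetical completed projects: actual effort vs. predicted effort.
    actual_effort    = [50.0, 120.0, 33.0, 80.0]
    predicted_effort = [45.0, 140.0, 30.0, 95.0]

    print(f"MMRE = {mmre(actual_effort, predicted_effort):.2f}")
    # A small MMRE (e.g. below 0.25) is often taken as acceptable, but the
    # threshold is itself a judgment made for the given environment.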
Performing Software Measurement Validation
Advantages of the GQM Framework