
SEA Side

Software Engineering Annotations

Annotation 11:
Software Metrics

A one-hour presentation to inform you of new techniques and practices in software development.
Professor Sara Stoecklin
Director of Software Engineering – Panama City
Florida State University – Computer Science
sstoecklin@mail.pc.fsu.edu
stoeckli@cs.fsu.edu
850-522-2091
850-522-2023 Ex 182
1
Express in Numbers

Measurement provides a mechanism for objective evaluation.
2
Software Crisis
• According to American Programmer, 31.1% of computer software projects are canceled before they are completed.
• 52.7% overrun their initial cost estimates by 189%.
• 94% of project start-ups are restarts of previously failed projects.
Solution?
A systematic approach to software development and measurement.

3
Software Metrics
• Software metrics refers to a broad range of quantitative measurements for computer software that enable us to
– improve the software process continuously
– assist in quality control and productivity
– assess the quality of technical products
– assist in tactical decision-making
4
Measure, Metrics, Indicators
• Measure.
– provides a quantitative indication of the
extent, amount, dimension, capacity, or size
of some attributes of a product or process.
• Metrics.
– relates the individual measures in some way.
• Indicator.
– a combination of metrics that provides insight into the software process, project, or product itself.

5
What Should Be Measured?

[Diagram: measurement applies to both the process and the product – process metrics, project metrics, and product metrics.]

What do we use as a basis?
• size?
• function?
6
Metrics of Process Improvement
• Focus on a manageable, repeatable process
• Use of statistical SQA on the process
• Defect removal efficiency
7
Statistical Software Process Improvement

• All errors and defects are categorized by origin
• The cost to correct each error and defect is recorded
• The number of errors and defects in each category is counted and ranked in descending order
• The overall cost in each category is computed
• Resultant data are analyzed and the “culprit” category is uncovered
• Plans are developed to eliminate the errors
8
Causes and Origin of Defects

Specification 25%, Logic 20%, User Interface 12%, Data Handling 11%, Error Checking 11%, Hardware Interface 8%, Standards 7%, Software Interface 6%
9
Metrics of Project Management
• Budget
• Schedule/Resource Management
• Risk Management
• Project goals met or
exceeded
• Customer satisfaction

10
Metrics of the Software Product
• Focus on deliverable quality
• Analysis products
• Design product complexity – algorithmic, architectural, data flow
• Code products
• Production system
11
How Is Quality Measured?
• Analysis Metrics
– Function-based Metrics: Function Points
(Albrecht), Feature Points (C. Jones)
– Bang Metric (DeMarco): Functional Primitives,
Data Elements, Objects, Relationships, States,
Transitions, External Manual Primitives, Input Data
Elements, Output Data Elements, Persistent Data
Elements, Data Tokens, Relationship Connections.

12
Source Lines of Code (SLOC)
• Measures the number of physical lines of active code

• In general, the higher the SLOC in a module, the less understandable and maintainable the module is (a minimal counting sketch follows after this slide)
13
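A minimal counting sketch, assuming Python-style “#” line comments; real tools handle block comments and multiple languages:

# sloc.py - minimal SLOC counting sketch (illustrative only).

def count_sloc(path: str) -> int:
    """Count physical lines of active code: non-blank, non-comment lines."""
    sloc = 0
    with open(path, encoding="utf-8") as f:
        for line in f:
            stripped = line.strip()
            if stripped and not stripped.startswith("#"):
                sloc += 1
    return sloc

if __name__ == "__main__":
    import sys
    for filename in sys.argv[1:]:
        print(filename, count_sloc(filename))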
Function Oriented Metric -
Function Points
• Function Points are a measure of “how big” the program is, independently of its actual physical size
• It is a weighted count of several features of the program
• Critics claim FPs make no sense with respect to the representational theory of measurement
• Nevertheless, there are firms and institutions that take them very seriously

14
Analyzing the Information Domain

measurement parameter             weighting factor (simple / avg. / complex)
number of user inputs             × 3 / 4 / 6
number of user outputs            × 4 / 5 / 7
number of user inquiries          × 3 / 4 / 6
number of files                   × 7 / 10 / 15
number of external interfaces     × 5 / 7 / 10
count total → Unadjusted Function Points

Unadjusted Function Points: assuming all inputs have the same weight, all outputs have the same weight, …, before any complexity multiplier is applied (a worked example follows this slide).

Complete formula for the Unadjusted Function Points:

UFP = Σ Inputs · Wi + Σ Outputs · Wo + Σ Inquiries · Win + Σ InternalFiles · Wif + Σ ExternalInterfaces · Wei
15
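A worked example (the counts are hypothetical, not from the slide): with 10 user inputs, 7 outputs, 5 inquiries, 4 files, and 2 external interfaces, all rated average, UFP = 10·4 + 7·5 + 5·4 + 4·10 + 2·7 = 40 + 35 + 20 + 40 + 14 = 149.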
Taking Complexity into Account

Factors are rated on a scale of 0 (not important) to 5 (very important):

data communications          on-line update
distributed functions        complex processing
heavily used configuration   installation ease
transaction rate             operational ease
on-line data entry           multiple sites
end user efficiency          facilitate change

Formula:
CM = Complexity Multiplier = 0.65 + 0.01 · Σ Fi   (Fi = rating of each complexity factor)
FP = UFP × CM
(A small computational sketch follows this slide.)
16
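A minimal Python sketch of the full computation, combining the average weights from the previous slide with the complexity multiplier above; the counts and ratings are hypothetical:

# function_points.py - illustrative sketch of Albrecht-style Function Point counting.
# Weights are the "average" column from the slide; CM = 0.65 + 0.01 * sum(Fi).

AVG_WEIGHTS = {
    "inputs": 4,
    "outputs": 5,
    "inquiries": 4,
    "files": 10,
    "external_interfaces": 7,
}

def unadjusted_fp(counts: dict) -> int:
    """UFP = sum of each information-domain count times its weight."""
    return sum(counts[k] * w for k, w in AVG_WEIGHTS.items())

def adjusted_fp(ufp: int, complexity_ratings: list) -> float:
    """Apply the complexity multiplier CM = 0.65 + 0.01 * sum(Fi), Fi in 0..5."""
    cm = 0.65 + 0.01 * sum(complexity_ratings)
    return ufp * cm

if __name__ == "__main__":
    counts = {"inputs": 10, "outputs": 7, "inquiries": 5,
              "files": 4, "external_interfaces": 2}   # hypothetical counts
    ratings = [3] * 12                                # the 12 factors listed, all rated 3
    ufp = unadjusted_fp(counts)
    print("UFP =", ufp)                               # 149
    print("FP  =", adjusted_fp(ufp, ratings))         # 149 * 1.01 = 150.49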
Typical Function-Oriented
Metrics
• errors per FP
• defects per FP
• $ per FP
• pages of documentation per FP
• FP per person-month

17
LOC vs. FP
• The relationship between lines of code and function points depends on the programming language used to implement the software and on the quality of the design.

• Empirical studies show an approximate relationship between LOC and FP.
18
LOC/FP (average)
Assembly language              320
C                              128
COBOL, FORTRAN                 106
C++                             64
Visual Basic                    32
Smalltalk                       22
SQL                             12
Graphical languages (icons)      4
19
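A usage note (the numbers are illustrative, not from the slide): using this table to “backfire”, a 12,800-line C program corresponds to roughly 12,800 / 128 = 100 FP, and the same 100 FP of functionality written in Smalltalk would be expected to take only about 100 × 22 = 2,200 LOC.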
How Is Quality Measured?
• Design Metrics
– Structural Complexity: fan-in, fan-out, morphology
– System Complexity:
– Data Complexity:
– Component Metrics: Size, Modularity, Localization,
Encapsulation, Information Hiding, Inheritance,
Abstraction, Complexity, Coupling, Cohesion,
Polymorphism
• Implementation Metrics
Size, Complexity, Efficiency, etc.
20
Comment Percentage (CP)
• Number of commented lines of code divided by the number of non-blank lines of code

• Usually, 20% indicates adequate commenting for C or Fortran code

• The higher the CP value, the more maintainable the module is
21
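A minimal counting sketch (illustrative; it only recognizes a single-line comment prefix such as “//” for C/C++ or “#” for scripts, and ignores block comments):

# comment_percentage.py - minimal CP sketch: commented lines / non-blank lines.

def comment_percentage(path: str, comment_prefix: str = "//") -> float:
    commented = 0
    non_blank = 0
    with open(path, encoding="utf-8") as f:
        for line in f:
            stripped = line.strip()
            if not stripped:
                continue
            non_blank += 1
            if stripped.startswith(comment_prefix):
                commented += 1
    return 100.0 * commented / non_blank if non_blank else 0.0

if __name__ == "__main__":
    import sys
    print(f"CP = {comment_percentage(sys.argv[1]):.1f}%")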
Size Oriented Metric - Fan In and
Fan Out
• The Fan In of a module is the amount of information
that “enters” the module
• The Fan Out of a module is the amount of
information that “exits” a module
• We assume all pieces of information have the same size
• Fan In and Fan Out can be computed for functions,
modules, objects, and also non-code components
• Goal - Low Fan Out for ease of maintenance.

22
Size Oriented Metric - Halstead Software Science
Primitive measures:
• number of distinct operators
• number of distinct operands
• total number of operator occurrences
• total number of operand occurrences
Used to derive:
• maintenance effort of software
• testing time required for software
(The standard derived formulas are listed after this slide.)
23
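The slide does not spell out the derivations; the standard Halstead Software Science formulas (standard material, not taken from the slide) are, writing n1, n2 for the distinct operators and operands and N1, N2 for their total occurrences:

n = n1 + n2                  (program vocabulary)
N = N1 + N2                  (program length)
V = N · log2(n)              (volume)
D = (n1 / 2) · (N2 / n2)     (difficulty)
E = D · V                    (effort)

Effort E is the quantity usually used to estimate development and maintenance effort; Halstead's time estimate is T = E / 18 seconds.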
Flow Graph

if (a) {
    X();
} else {
    Y();
}

[Flow graph: the predicate node a branches to X and Y, which then rejoin.]

• V(G) = E - N + 2
• where E = number of edges
• and N = number of nodes
24
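A worked example (not on the slide): the if/else fragment above yields a flow graph with N = 4 nodes (the predicate a, X, Y, and the join node) and E = 4 edges, so V(G) = 4 - 4 + 2 = 2: one predicate, two independent paths.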
McCabe's Metric

• The smaller the V(G), the simpler the module.
• Modules with V(G) larger than 10 become hard to manage.
• A high cyclomatic complexity indicates that the code may be of low quality and difficult to test and maintain.

25
Chidamber and Kemerer Metrics
• Weighted methods per class (WMC)
• Depth of inheritance tree (DIT)
• Number of children (NOC)
• Coupling between object classes (CBO)
• Response for class (RFC)
• Lack of cohesion metric (LCOM)

26
Weighted methods per class (WMC)

WMC = Σ (i = 1..n) ci

• ci is the complexity of each method Mi of the class
– Often, only public methods are considered
• Complexity may be the McCabe complexity of the method
• Smaller values are better
• Perhaps the average complexity per method is a better metric?

The number of methods and the complexity of the methods involved is a direct predictor of how much time and effort is required to develop and maintain the class.
27
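A rough sketch of the idea in Python (an illustration, not the authors' tooling): it approximates each method's McCabe complexity by counting branching constructs and sums them for one class.

# wmc_sketch.py - rough WMC estimate for a Python class: sum of an approximate
# McCabe complexity over its methods. Illustrative only; real tools handle many
# more constructs.

import ast

BRANCH_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler,
                ast.With, ast.BoolOp, ast.IfExp)

def method_complexity(func: ast.FunctionDef) -> int:
    """Approximate McCabe complexity: 1 + number of branching constructs."""
    return 1 + sum(isinstance(node, BRANCH_NODES) for node in ast.walk(func))

def wmc(source: str, class_name: str) -> int:
    """Sum the per-method complexities (ci) for one class in the source text."""
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, ast.ClassDef) and node.name == class_name:
            return sum(method_complexity(m) for m in node.body
                       if isinstance(m, ast.FunctionDef))
    raise ValueError(f"class {class_name!r} not found")

if __name__ == "__main__":
    code = """
class Stack:
    def push(self, x): self.items.append(x)
    def pop(self):
        if not self.items:
            raise IndexError("empty")
        return self.items.pop()
"""
    print(wmc(code, "Stack"))   # push -> 1, pop -> 2, WMC = 3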
Depth of inheritance tree (DIT)
• For the system under examination, consider the
hierarchy of classes
• DIT is the length of the maximum path from the
node to the root of the tree
• Relates to the scope of the properties
– How many ancestor classes can potentially affect a class

• Smaller values are better

28
Number of children (NOC)
• For any class in the inheritance tree, NOC is the
number of immediate children of the class
– The number of direct subclasses
• How would you interpret this number?

• A moderate value indicates scope for reuse; high values may indicate an inappropriate abstraction in the design

29
Coupling between object classes (CBO)

• For a class C, the CBO metric is the number of other classes to which the class is coupled
• A class X is coupled to class C if
– X operates on (affects) C, or
– C operates on X
• Excessive coupling indicates weakness of class encapsulation and may inhibit reuse
• High coupling also indicates that more faults may be introduced due to inter-class activities
30
Response for class (RFC)

RFC = Σ (i = 1..n) Mci

• Mci = number of methods called in response to a message that invokes method Mi
– Fully nested set of calls
• Smaller numbers are better
– Larger numbers indicate increased complexity and debugging difficulties

If a large number of methods can be invoked in response to a message, the testing and debugging of the class becomes more complicated.
31
Lack of cohesion metric (LCOM)

• Number of methods in a class that reference a specific instance variable
• A measure of the “tightness” of the code
• If a method references many instance variables, then it is more complex and less cohesive
• The larger the number of similar methods in a class, the more cohesive the class is
• Cohesiveness of methods within a class is desirable, since it promotes encapsulation
32
Testing Metrics
• Metrics that predict the likely number of
tests required during various testing phases
• Metrics that focus on test coverage for a
given component

33
Views on SE Measurement

34
Views on SE Measurement

35
Views on SE Measurement

36
12 Steps to Useful Software
Metrics
Step 1 - Identify Metrics Customers
Step 2 - Target Goals
Step 3 - Ask Questions
Step 4 - Select Metrics
Step 5 - Standardize Definitions
Step 6 - Choose a Model
Step 7 - Establish Counting Criteria
Step 8 - Decide On Decision Criteria
Step 9 - Define Reporting Mechanisms
Step 10 - Determine Additional Qualifiers
Step 11 - Collect Data
Step 12 - Consider Human Factors

37
Step 1 - Identify Metrics
Customers

Who needs the information?

Who’s going to use the metrics?

If the metric does not have a customer -- do not use it.
38
Step 2 - Target Goals

Organizational goals
– Be the low cost provider
– Meet projected revenue targets
Project goals
– Deliver the product by June 1st
– Finish the project within budget
Task goals (entry & exit criteria)
– Effectively inspect software module ABC
– Obtain 100% statement coverage during testing
39
Step 3 - Ask Questions

Goal: Maintain a high level of customer satisfaction
• What is our current level of customer
satisfaction?
• What attributes of our products and services are
most important to our customers?
• How do we compare with our competition?
40
Step 4 - Select Metrics
Select metrics that provide information
to help answer the questions
• Be practical, realistic, pragmatic
• Consider current engineering environment
• Start with the possible

Metrics don’t solve problems -- people solve problems.
Metrics provide information so people can make better decisions.
41
Selecting Metrics
Goal: Ensure all known defects are corrected before shipment
42
Metrics Objective Statement Template

To {understand | evaluate | control | predict} the {attribute} of the {entity} in order to {goal(s)}.

Example – Metric: % defects corrected

To evaluate the % defects found & corrected during testing in order to ensure all known defects are corrected before shipment.
43
Step 5 - Standardize Definitions

[Illustration: Developer vs. User]
44
Step 6 - Choose a Measurement Model
Model for code inspection metrics
• Primitive Measurements:
– Lines of Code Inspected = loc
– Hours Spent Preparing = prep_hrs
– Hours Spent Inspecting = in_hrs
– Discovered Defects = defects

• Other Measurements:
– Preparation Rate = loc / prep_hrs
– Inspection Rate = loc / in_hrs
– Defect Detection Rate = defects / (prep_hrs + in_hrs)
45
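A minimal sketch of this measurement model; the field names mirror the slide's primitives, and the sample values are hypothetical:

# inspection_metrics.py - sketch of the code-inspection measurement model above.

from dataclasses import dataclass

@dataclass
class InspectionData:
    loc: int          # Lines of Code Inspected
    prep_hrs: float   # Hours Spent Preparing
    in_hrs: float     # Hours Spent Inspecting
    defects: int      # Discovered Defects

    @property
    def preparation_rate(self) -> float:
        return self.loc / self.prep_hrs

    @property
    def inspection_rate(self) -> float:
        return self.loc / self.in_hrs

    @property
    def defect_detection_rate(self) -> float:
        return self.defects / (self.prep_hrs + self.in_hrs)

if __name__ == "__main__":
    d = InspectionData(loc=400, prep_hrs=2.0, in_hrs=1.5, defects=7)  # hypothetical
    print(d.preparation_rate, d.inspection_rate, d.defect_detection_rate)
    # 200.0 loc/hr preparation, 266.7 loc/hr inspection, 2.0 defects/hr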
Step 7 - Establish Counting
Criteria
Lines of Code
• Variations in counting
• No industry accepted standard
• SEI guideline - check sheets for criteria
• Advice: use a tool

46
Counting Criteria - Effort
What is a Software Project?
• When does it start / stop?
• What activities does it include?
• Who works on it?

47
Step 8 - Decide On Decision Criteria
Establish Baselines
• Current value
– Problem report backlog
– Defect prone modules
• Statistical analysis (mean & distribution)
– Defect density
– Fix response time
– Cycle time
– Variance from budget (e.g., cost, schedule)
48
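A small sketch of the “statistical analysis (mean & distribution)” idea, using hypothetical defect-density data to set a decision threshold:

# baselines.py - establish a statistical baseline (mean & spread) for a metric
# such as defect density, as a basis for decision criteria. Sample data are hypothetical.

import statistics

defect_density = [0.8, 1.2, 0.5, 3.0, 0.9, 1.4, 0.7]   # defects per KLOC, by module

mean = statistics.mean(defect_density)
stdev = statistics.stdev(defect_density)
threshold = mean + 2 * stdev    # e.g. flag modules more than 2 sigma above baseline

print(f"baseline mean = {mean:.2f}, stdev = {stdev:.2f}, alert threshold = {threshold:.2f}")
for i, d in enumerate(defect_density, start=1):
    if d > threshold:
        print(f"module {i} is defect-prone ({d} defects/KLOC)")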
Step 9 - Define Reporting Mechanisms

Problem reports by month:

          Open   Fixed   Resolved
Jan-97     23     13        3
Feb-97     27     24       11
Mar-97     18     26       15
Apr-97     12     18       27

[Sample report formats: bar chart, line chart, and scatter plot of the same data.]
49
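A minimal reporting sketch (matplotlib is an assumption; any charting tool would do) that turns the table above into a grouped bar chart:

# reporting.py - one possible reporting mechanism: a grouped bar chart of the
# problem-report data above.

import matplotlib.pyplot as plt

months   = ["Jan-97", "Feb-97", "Mar-97", "Apr-97"]
open_    = [23, 27, 18, 12]
fixed    = [13, 24, 26, 18]
resolved = [3, 11, 15, 27]

x = range(len(months))
width = 0.25
plt.bar([i - width for i in x], open_,    width, label="Open")
plt.bar(list(x),                fixed,    width, label="Fixed")
plt.bar([i + width for i in x], resolved, width, label="Resolved")
plt.xticks(list(x), months)
plt.ylabel("Problem reports")
plt.legend()
plt.savefig("problem_reports.png")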
Step 10 - Determine Additional
Qualifiers
A good metric is a generic metric
Additional qualifiers:
• Provide demographic information
• Allow detailed analysis at multiple levels
• Define additional data requirements

50
Additional Qualifier Example
Metric: software defect arrival rate
• Release / product / product line
• Module / program / subsystem
• Reporting customer / customer group
• Root cause
• Phase found / phase introduced
• Severity

51
Step 11 – Collect Data
What data to collect?
• Metric primitives
• Additional qualifiers

Who should collect the data?
• The data owner
– Direct access to source of data
– Responsible for generating data
– Owners more likely to detect anomalies
– Eliminates double data entry

52
Examples of Data Ownership

Owner                                   Examples of Data Owned
Management                              Schedule; Budget
Engineers                               Time spent per task; Inspection data including defects found; Root cause of defects
Testers                                 Test cases planned / executed / passed; Problems; Test coverage
Configuration management specialists    Lines of code; Modules changed
Users                                   Problems; Operation hours
53
Step 12 – Consider Human Factors

The People Side of the Metrics Equation


• How measures affect people
• How people affect measures

“Don’t underestimate the intelligence of your engineers. For any one metric you can come up with, they will find at least two ways to beat it.” [unknown]
54
Don’t
• Measure individuals
• Use metrics as a “stick”
• Use only one metric (cost, quality, or schedule alone)
• Ignore the data
55
Do
• Select metrics based on goals [Basili-88]
• Provide feedback to the data providers
• Obtain “buy-in”
• Focus on processes, products & services

[Diagram (Goal-Question-Metric): goals lead to questions, questions lead to metrics; data flows from the data providers into the metrics, and feedback flows back to the providers and to processes, products & services.]
56
References
• Chidamber, S. R. & Kemerer, C. F., “A Metrics Suite for Object Oriented Design”, IEEE Transactions on Software Engineering, Vol. 20, No. 6, June 1994.
• Hitz, M. & Montazeri, B., “Chidamber and Kemerer's Metrics Suite: A Measurement Theory Perspective”, IEEE Transactions on Software Engineering, Vol. 22, No. 4, April 1996.
• Lacovara, R. C. & Stark, G. E., “A Short Guide to Complexity Analysis, Interpretation and Application”, May 17, 1994. http://members.aol.com/GEShome/complexity/Comp.html
• Tang, M., Kao, M. & Chen, M., “An Empirical Study on Object-Oriented Metrics”, IEEE Transactions on Software Engineering, 0-7695-0403-5, 1999.
• Tegarden, D., Sheetz, S. & Monarchi, D., “Effectiveness of Traditional Software Metrics for Object-Oriented Systems”, Proceedings: 25th Hawaii International Conference on System Sciences, January 1992, pp. 359-368.
• “Principal Components of Orthogonal Object-Oriented Metrics”, http://satc.gsfc.nasa.gov/support/OSMASAS_SEP01/Principal_Components_of_Orthogonal_Object_Oriented_Metrics.htm
• Behforooz, A. & Hudson, F., Software Engineering Fundamentals, Oxford Press, 1996. Chapter 18: Software Quality and Quality Assurance.
• Pressman, R., Software Engineering: A Practitioner's Approach, McGraw-Hill, 1997.
• IEEE Standard for a Software Quality Metrics Methodology (IEEE Std 1061).
• Henderson-Sellers, B., Object-Oriented Metrics, Prentice-Hall, 1996.
57
