SQE Assignment # 3 (70133241)
NAME:-
HAFIZ M TOUSEEF ZAFAR
CLASS:-
BSSE (7TH)
REG NO & SAP ID:-
BSSE02193217/70133241
SUBMITTED TO:-
MA’AM KIRAN SHAHZADI
Question No. 1
Explain the following metrics:
Answer
Metrics
A software metric is a measure of a measurable or countable characteristic of software. Software metrics are valuable for many purposes, including measuring software performance, planning work items, and measuring productivity.
1) Product Metrics
Product metrics are measures of the software product at any stage of its development, from requirements to established systems. Product metrics relate to attributes of the software itself.
1.1. Size Metrics
Size metrics express the size of the software, for example as lines of code or function points, so that other measures (such as defect counts) can be normalized per unit of size. Metrics of this kind are used in many commercial software systems.
a. Lines of Code
A line of code (LOC) is any line of program text that is not a comment or a blank line, including header lines, regardless of the number of statements or fragments of statements on the line. LOC therefore includes all lines containing variable declarations and executable and non-executable statements. Because Lines of Code (LOC) measures only the volume of code, it should only be used to compare or estimate projects that use the same language and follow the same coding standards (a minimal counting sketch appears after the lists below).
Features
➢ Variations such as “source lines of code” (SLOC) are used to describe the size of a codebase.
➢ LOC is frequently cited in arguments about programmer productivity.
➢ It is used in assessing a project’s performance or efficiency.
Advantages
➢ It is the most widely used metric in cost estimation.
➢ Its alternatives suffer from problems of their own.
➢ It makes effort estimation straightforward.
Disadvantages
➢ It is very difficult to estimate the LOC of the final program from the problem specification.
➢ It correlates poorly with the quality and efficiency of code.
➢ It does not account for complexity.
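As a rough illustration, here is a minimal Python sketch of a physical-LOC counter. Treating only blank lines and lines beginning with # as non-code is a simplifying assumption (it ignores multi-line strings and docstrings), and example.py is a hypothetical file name.

def count_loc(path: str) -> int:
    # Count physical lines of code: non-blank lines that are not
    # pure comment lines (simplified; docstrings are not handled).
    loc = 0
    with open(path, encoding="utf-8") as f:
        for line in f:
            stripped = line.strip()
            if stripped and not stripped.startswith("#"):
                loc += 1
    return loc

print(count_loc("example.py"))  # hypothetical file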
b. Function Points
Function points measure the size of an application system based on the functional view of the system. The size is determined by counting the number of inputs, outputs, inquiries, internal logical files, and external interface files in the system and adjusting that total for the functional complexity of the system.
Objectives of FPA:
➢ FPA measures the functionality that the user requests and receives.
➢ FPA measures software development and maintenance independently of the technology used for implementation.
➢ It should be simple enough to minimize the overhead of the measurement
process.
➢ It should be a consistent measure among various projects and organizations.
Types of FP Attributes
FPA counts five types of components, each weighted by its complexity:

Measurement parameter            Low    Average    High
External Inputs (EI)              3        4         6
External Outputs (EO)             4        5         7
External Inquiries (EQ)           3        4         6
Internal Logical Files (ILF)      7       10        15
External Interface Files (EIF)    5        7        10

A worked example follows.
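As an illustration, a sketch computing the unadjusted function point (UFP) count from hypothetical component counts, using the average weights from the table above.

def unadjusted_fp(counts: dict) -> int:
    # Average-complexity weights for the five FPA component types.
    weights = {"EI": 4, "EO": 5, "EQ": 4, "ILF": 10, "EIF": 7}
    return sum(counts[k] * weights[k] for k in weights)

# Hypothetical system: 20 inputs, 15 outputs, 10 inquiries,
# 5 internal files, 2 external interfaces.
print(unadjusted_fp({"EI": 20, "EO": 15, "EQ": 10, "ILF": 5, "EIF": 2}))  # -> 259

The UFP is then adjusted for the functional complexity of the system to obtain the final function point count.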
c. Bang
The bang metric can be used to develop an indication of the size of the
software to be implemented as a consequence of the analysis model.
Developed by DeMarco, the bang metric is “an implementation independent
indication of system size.” To compute the bang metric, the software engineer
must first evaluate a set of primitives, that is, elements of the analysis model that are not further subdivided at the analysis level. Primitives are determined by evaluating the analysis model and developing counts for the following forms (a simplified tallying sketch follows the list):
States (ST). The number of user-observable states in the state transition diagram.
Modified manual function primitives (FuPM). Functions that lie outside the system boundary but must be modified to accommodate the new system.
Input data elements (DEI). Those data elements that are input to the system.
Output data elements (DEO). Those data elements that are output from the system.
Retained data elements (DER). Those data elements that are retained (stored) by the system.
Data tokens (TCi). The data tokens (data items that are not subdivided within a functional primitive) that exist at the boundary of the ith functional primitive (evaluated for each primitive).
Relationship connections (REi). The relationships that connect the ith object in the data model to other objects.
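Purely as a simplified illustration (DeMarco's full procedure weights each functional primitive by its token count using published correction tables, which are not reproduced here), one might tally weighted primitive counts like this; all counts and weights below are illustrative placeholders.

def bang_estimate(counts: dict, weights: dict) -> float:
    # Weighted sum of analysis-model primitive counts (simplified).
    return sum(counts[k] * weights.get(k, 1.0) for k in counts)

# Hypothetical primitive counts and placeholder weights:
counts = {"FuP": 12, "DEI": 30, "DEO": 25, "DER": 10, "ST": 6, "RE": 8}
weights = {"FuP": 1.0, "DEI": 0.5, "DEO": 0.5, "DER": 0.7, "ST": 1.2, "RE": 1.0}
print(bang_estimate(counts, weights))  # -> 61.7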
1.2. Complexity Metrics
Complexity metrics characterize how intricate a module's control flow is and, consequently, how hard it is to understand and test.
a. Cyclomatic Complexity
McCabe's cyclomatic complexity measures the number of linearly independent paths through a program's control-flow graph:
V(G) = E - N + 2 * P
where E is the number of edges, N is the number of nodes, and P is the number of connected components of the graph.
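For example, a flow graph with 9 edges, 7 nodes, and one connected component gives V(G) = 9 - 7 + 2 = 4, i.e., four linearly independent paths. A one-line sketch:

def cyclomatic_complexity(edges: int, nodes: int, components: int = 1) -> int:
    # McCabe: V(G) = E - N + 2P
    return edges - nodes + 2 * components

print(cyclomatic_complexity(9, 7))  # -> 4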
b. Knots (k)
➢ Measures - The complexity and unstructuredness of a module's control flow.
➢ Calculation - Count of the intersections among the control flow paths through a function.
➢ Use - Points out areas of higher unstructuredness and the resulting challenges in understanding and testing; high knot values may suggest the need to reorder or restructure, rather than just repartition, a module (a counting sketch follows the list).
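As an illustration, a minimal sketch of knot counting, assuming each jump (e.g., a goto) is represented as a pair of line numbers; two jumps knot when their spans overlap without one nesting inside the other.

def count_knots(jumps: list) -> int:
    # Each jump is a (source_line, target_line) pair; normalize so a < b.
    spans = [tuple(sorted(j)) for j in jumps]
    knots = 0
    for i in range(len(spans)):
        for j in range(i + 1, len(spans)):
            (a, b), (c, d) = spans[i], spans[j]
            # Two spans knot when they interleave without nesting.
            if a < c < b < d or c < a < d < b:
                knots += 1
    return knots

# Hypothetical jumps in a module:
print(count_knots([(1, 10), (5, 15), (20, 25)]))  # -> 1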
c. Information Flow
The other set of metrics we would like to consider are known as Information Flow Metrics. The basis of information flow metrics is the following idea: a system consists of components, and it is the work that these components do and how they are fitted together that determines the complexity of the system. The following working definitions are used in information flow:
Component: Any element identified by decomposing a (software) system into its constituent parts.
Coupling: The term used to describe the degree of linkage between one component and others in the same system.
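One widely used information flow metric, due to Henry and Kafura, combines a component's size with its coupling: complexity = length * (fan-in * fan-out)^2. A minimal sketch with hypothetical values:

def henry_kafura(length: int, fan_in: int, fan_out: int) -> int:
    # Information flow complexity = length * (fan_in * fan_out)^2
    return length * (fan_in * fan_out) ** 2

# Hypothetical component: 100 LOC, fan-in of 3, fan-out of 2.
print(henry_kafura(100, 3, 2))  # -> 3600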
1.3. Halstead's Software Science Metrics
Halstead's software science metrics express program size, level, and difficulty in terms of operator and operand counts (the underlying token counts are defined under the Software Science Model in the Process Metrics section).
a. Program Vocabulary (n)
n = n1 + n2
where
n = vocabulary of a program
n1 = number of unique operators
n2 = number of unique operands
b. Program Volume (V)
The unit of measurement of volume is the standard unit for size, "bits." It is the actual size of a program if a uniform binary encoding for the vocabulary is used.
V = N * log2(n)
c. Program Level (L)
L = V* / V
where V* is the potential (minimal) volume of the program.
d. Program Difficulty (D)
The difficulty level or error-proneness (D) of the program is proportional to the number of unique operators in the program.
D = (n1 / 2) * (N2 / n2)
E = V / L = D * V
1.4. Quality Metrics
➢ Provide indicators to improve the quality of the product.
➢ There are many quality attributes, such as maintainability, usability, integrity, and correctness (see McCall's Quality Model).
a. Defect Metrics
A variety of metrics on the number and nature of defects found by manual testers, including:
o Defects by priority
o Defects by severity
o Defect slippage ratio - The percentage of defects that manual testers did not manage to identify before the software was shipped (a small computation sketch follows).
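As a quick illustration, a sketch of the slippage computation; it assumes the ratio is taken over all defects found before and after shipping (conventions vary), and the counts are made up.

def defect_slippage_ratio(found_after_ship: int, found_before_ship: int) -> float:
    # Percentage of all defects that escaped testing.
    total = found_before_ship + found_after_ship
    return 100.0 * found_after_ship / total

print(defect_slippage_ratio(5, 95))  # -> 5.0 (percent)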
b. Reliability Metrics
Reliability metrics are used to quantitatively express the reliability of the software product. The choice of metric depends upon the type of system to which it applies and the requirements of the application domain.
c. Maintainability Metrics
Although much cannot be done to alter the quality of the product during this phase, the following measures can be carried out to eliminate defects as soon as possible with excellent fix quality (a small example follows the list):
• Fix backlog and backlog management index
• Fix response time and fix responsiveness
• Percent delinquent fixes
• Fix quality
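As one example, the backlog management index (BMI) is commonly computed as the ratio of problems closed to problem arrivals in a period; a minimal sketch with made-up counts.

def backlog_management_index(closed: int, arrived: int) -> float:
    # BMI > 100 means the backlog is shrinking; BMI < 100 means it is growing.
    return 100.0 * closed / arrived

print(backlog_management_index(110, 100))  # -> 110.0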
2) Process Metrics
These are measures of various characteristics of the software development process, for example the efficiency of fault detection. They are used to measure the characteristics of the methods, techniques, and tools that are used for developing software. Typical concerns include:
➢ Reliability
➢ Usability
➢ Scalability
➢ Security
➢ Innovation
➢ Time-to-market
a. Rayleigh Model
The Rayleigh model is a parametric model in the sense that it is based
on a specific statistical distribution. When the parameters of the statistical
distribution are estimated based on the data from a software project, projections
about the defect rate of the project can be made based on the model.
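A minimal sketch of a Rayleigh defect-arrival curve, assuming K is the total expected defect count and t_m the time (e.g., in months) at which defect discovery peaks; both parameter values below are made up.

import math

def rayleigh_defect_rate(t: float, K: float, t_m: float) -> float:
    # Expected defect arrival rate at time t for a Rayleigh curve that
    # peaks at t_m and integrates to roughly K total defects.
    return K * (t / t_m**2) * math.exp(-t**2 / (2 * t_m**2))

# Hypothetical project: 500 total defects, discovery peaking in month 4.
for month in range(1, 9):
    print(month, round(rayleigh_defect_rate(month, K=500, t_m=4), 1))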
b. Software Science Model – Halstead
According to Halstead, "A computer program is an implementation of an algorithm considered to be a collection of tokens which can be classified as either operators or operands."
Token Count
In these metrics, a computer program is considered to be a collection of tokens, which may be classified as either operators or operands. All software science metrics can be defined in terms of these basic symbols, which are called tokens.
The basic measures are
n1 = count of unique operators.
n2 = count of unique operands.
N1 = count of total occurrences of operators.
N2 = count of total occurrences of operands.
In terms of the total tokens used, the size of the program can be expressed as
N = N1 + N2.
a. Program Vocabulary (n)
The size of the vocabulary of a program, which consists of the number of unique tokens used to build a program, is defined as:
n=n1+n2
where
n=vocabulary of a program
n1=number of unique operators
n2=number of unique operands
b. Program Volume (V)
The unit of measurement of volume is the standard unit for size "bits." It is
the actual size of a program if a uniform binary encoding for the vocabulary
is used.
V=N*log2n
c. Program Level (L)
The value of L ranges between zero and one, with L = 1 representing a program written at the highest possible level (i.e., with minimum size).
L = V* / V
where V* is the potential (minimal) volume of the program.
d. Program Difficulty
The difficulty level or error-proneness (D) of the program is proportional to the number of unique operators in the program.
D = (n1 / 2) * (N2 / n2)
The effort (E) required to implement or understand the program is then:
E = V / L = D * V
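Putting the formulas together, a sketch computing the Halstead measures from the four basic counts. The counts below are hypothetical, and since V* is rarely known, the sketch uses L = 1/D, which follows from E = V/L = D*V.

import math

def halstead(n1: int, n2: int, N1: int, N2: int) -> dict:
    n = n1 + n2               # program vocabulary
    N = N1 + N2               # program length
    V = N * math.log2(n)      # volume in bits
    D = (n1 / 2) * (N2 / n2)  # difficulty (error-proneness)
    L = 1 / D                 # program level (approximation via L = 1/D)
    E = D * V                 # effort
    return {"n": n, "N": N, "V": V, "D": D, "L": L, "E": E}

# Hypothetical counts for a small function:
print(halstead(n1=10, n2=7, N1=25, N2=18))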
Cost Estimation Models
a. COCOMO
COCOMO (Constructive Cost Model) is a regression model based on LOC, i.e., the number of lines of code. It is a procedural cost estimation model for software projects and is often used to reliably predict the various parameters associated with a project, such as size, effort, cost, time, and quality. It was proposed by Barry Boehm in 1981 and is based on a study of 63 projects, which makes it one of the best-documented models. The key parameters which define the quality of any software product, and which are also an outcome of COCOMO, are primarily Effort and Schedule (a sketch of the basic equations follows the mode definitions below):
➢ Effort: Amount of labor that will be required to complete a task. It is
measured in person-months units.
➢ Schedule: Simply the amount of time required for the completion of the job, which is, of course, proportional to the effort put in. It is measured in units of time such as weeks or months.
Different models of Cocomo have been proposed to predict the cost estimation at
different levels, based on the amount of accuracy and correctness required. All of
these models can be applied to a variety of projects, whose characteristics
determine the value of the constant to be used in subsequent calculations. These
characteristics pertaining to different system types are mentioned below. Boehm’s
definition of organic, semidetached, and embedded systems:
1. Organic – A software project is said to be of organic type if the team size required is adequately small, the problem is well understood and has been solved in the past, and the team members have nominal experience with the problem.
2. Semidetached – The project is of intermediate size and complexity: the team has a mix of experienced and inexperienced members, and the problem is less familiar than in the organic case.
3. Embedded – The project must operate within tight hardware, software, and operational constraints; the problem is complex, and an experienced team is required.
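A minimal sketch of the basic COCOMO equations, using Boehm's published organic-mode coefficients (a = 2.4, b = 1.05, c = 2.5, d = 0.38); the 32-KLOC project size is made up.

def basic_cocomo(kloc: float, a: float = 2.4, b: float = 1.05,
                 c: float = 2.5, d: float = 0.38) -> tuple:
    effort = a * kloc ** b       # effort in person-months
    schedule = c * effort ** d   # schedule in months
    return effort, schedule

effort, months = basic_cocomo(32)  # hypothetical 32-KLOC organic project
print(f"effort = {effort:.1f} person-months, schedule = {months:.1f} months")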
b. SOFTCOST
A model may be static or dynamic. In a static model, a single variable is taken as a key element for calculating cost and time. In a dynamic model, all variables are interdependent, and there is no single basic variable.
c. SPQR Model
d. COPMO
e. ESTIMACS
Software Reliability Models
Software reliability models have appeared as people try to understand how and why software fails and attempt to quantify software reliability. Over 200 models have been established since the early 1970s, but how to quantify software reliability remains largely unsolved.
Most software reliability models contain the following parts: assumptions, factors, and a mathematical function that relates reliability to the factors. The mathematical function is usually higher-order exponential or logarithmic.
Software Reliability Modeling Techniques
Both kinds of modeling techniques, prediction models and estimation models, are based on observing and accumulating failure data and analyzing it with statistical inference. They differ as follows:
Data reference: Prediction models use historical data; estimation models use data from the current software development effort.
When used in the development cycle: Prediction models are usually made before the development or test phases and can be used as early as the concept phase; estimation models are usually made later in the life cycle (after some data have been collected) and are not typically used in the concept or development phases.