
ASSIGNMENT NO. 3
NAME:-
HAFIZ M TOUSEEF ZAFAR
CLASS:-
BSSE (7TH)
REG NO & SAP ID:-
BSSE02193217/70133241
SUBMITTED TO:-
MA’AM KIRAN SHAHZADI
Question No. 1
Explain the following metrics:

Answer
Metrics
A software metric is a quantitative measure of software characteristics that are measurable or countable. Software metrics are valuable for many reasons, including measuring software performance, planning work items, and measuring productivity.

1) Product Metrics
Product metrics are software product measures at any stage of their development,
from requirements to established systems. Product metrics are related to software
features only.
1.1. Size Metrics
Size metrics express the size of the software in units such as lines of code or function points; other measures, such as defects, are then taken relative to that size (i.e., quality per unit of code). These metrics are used in many commercial software systems.

a. Lines of Code
A line of code (LOC) is any line of program text that is not a comment or a blank line (header lines included), regardless of the number of statements or fragments of statements on the line. LOC therefore includes all lines containing variable declarations as well as executable and non-executable statements. Because LOC counts only the volume of code, it should be used to compare or estimate only projects written in the same language and coded to the same coding standards.
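For illustration, a minimal LOC counter might look like the sketch below; it assumes a Python-style "#" comment convention and a placeholder file name, so adapt both to the language being measured.

# Minimal LOC counter: counts lines that are neither blank nor full-line comments.
# Assumes a Python-style '#' comment convention; "example.py" is a placeholder name.
def count_loc(path, comment_prefix="#"):
    loc = 0
    with open(path, encoding="utf-8") as source:
        for line in source:
            stripped = line.strip()
            if stripped and not stripped.startswith(comment_prefix):
                loc += 1
    return loc

if __name__ == "__main__":
    print(count_loc("example.py"))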
Features
➢ Variations such as "source lines of code" (SLOC) are used to describe the size of a codebase.
➢ LOC figures are frequently cited in arguments about productivity and project size.
➢ They are used in assessing a project's performance or efficiency.

Advantages
➢ The most widely used metric in cost estimation.
➢ Alternative size measures have problems of their own.
➢ Effort estimation based on LOC is straightforward.

Disadvantages
➢ Very difficult to estimate the LOC of the final program from the problem
specification.
➢ It correlates poorly with quality and efficiency of code.
➢ It doesn’t consider complexity.

b. Function Points
Function points measure the size of an application system based on the
functional view of the system. The size is determined by counting the number of
inputs, outputs, queries, internal files and external files in the system and adjusting
that total for the functional complexity of the system.
Objectives of FPA:
➢ The objective of FPA is to measure the functionality that the user requests
and receives.
➢ The objective of FPA is to measure software development and maintenance
independently of the technology used for implementation.
➢ It should be simple enough to minimize the overhead of the measurement
process.
➢ It should be a consistent measure among various projects and organizations.

Types of FP Attributes

Measurement parameters and examples:
1. Number of External Inputs (EI): input screens and tables
2. Number of External Outputs (EO): output screens and reports
3. Number of External Inquiries (EQ): prompts and interrupts
4. Number of Internal Logical Files (ILF): databases and directories
5. Number of External Interface Files (EIF): shared databases and shared routines
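As a rough sketch, the unadjusted function point count can be computed by weighting the five counts; the weights below are the standard average-complexity weights, and the value adjustment factor is derived from the ratings of the 14 general system characteristics. The sample counts and ratings are purely illustrative.

# Unadjusted FP = weighted sum of the five counts (average-complexity weights shown).
AVERAGE_WEIGHTS = {"EI": 4, "EO": 5, "EQ": 4, "ILF": 10, "EIF": 7}

def function_points(counts, gsc_ratings):
    # counts: parameter name -> number counted in the system
    # gsc_ratings: 14 ratings (0..5) for the general system characteristics
    ufp = sum(AVERAGE_WEIGHTS[name] * counts.get(name, 0) for name in AVERAGE_WEIGHTS)
    vaf = 0.65 + 0.01 * sum(gsc_ratings)   # value adjustment factor
    return ufp * vaf

# Illustrative counts and ratings (not from a real system):
print(function_points({"EI": 10, "EO": 7, "EQ": 5, "ILF": 4, "EIF": 2}, [3] * 14))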

c. Bang
The bang metric can be used to develop an indication of the size of the
software to be implemented as a consequence of the analysis model.
Developed by DeMarco, the bang metric is “an implementation independent
indication of system size.” To compute the bang metric, the software engineer
must first evaluate a set of primitives—elements of the analysis model that are not
further subdivided at the analysis level. Primitives are determined by
evaluating the analysis model and developing counts for the following forms:

Functional primitives (FuP): the number of transformations (bubbles) that appear at the lowest level of a data flow diagram.
Data elements (DE): the number of attributes of a data object; data elements are not composite data and appear within the data dictionary.
Objects (OB): the number of data objects.
Relationships (RE): the number of connections between data objects.
States (ST): the number of user-observable states in the state transition diagram.
Transitions (TR): the number of state transitions in the state transition diagram.

In addition to these six primitives, additional counts are determined for:

Modified manual function primitives (FuPM): functions that lie outside the system boundary but must be modified to accommodate the new system.
Input data elements (DEI): those data elements that are input to the system.
Output data elements (DEO): those data elements that are output from the system.
Retained data elements (DER): those data elements that are retained (stored) by the system.
Data tokens (TCi): the data tokens (data items that are not subdivided within a functional primitive) that exist at the boundary of the ith functional primitive (evaluated for each primitive).
Relationship connections (REi): the relationships that connect the ith object in the data model to other objects.

1.2. Complexity Metrics


Complexity metrics are used to predict critical information about the reliability and maintainability of software systems, based on factors such as control flow and the connections between components.
a. Cyclomatic complexity
Cyclomatic complexity of a code section is a quantitative measure of the number of linearly independent paths through it. It is a software metric used to indicate the complexity of a program and is computed from the program's control flow graph. The nodes of the graph represent the smallest groups of commands in the program, and a directed edge connects two nodes if the second command can immediately follow the first.
b. Extensions to v(G)
McCabe proposed the cyclomatic number, V(G), from graph theory as an indicator of software complexity. The cyclomatic number is equal to the number of linearly independent paths through a program in its graph representation. For a program control graph G, the cyclomatic number V(G) is given as (a small numeric illustration follows the list of properties below):

V(G) = E - N + 2P

where
E = the number of edges in graph G
N = the number of nodes in graph G
P = the number of connected components of graph G

Following are the properties of cyclomatic complexity:

1. V(G) is the maximum number of linearly independent paths in the graph.
2. V(G) >= 1.
3. G has only one path if V(G) = 1.
4. Complexity should be minimized, ideally to 10 or below.
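The sketch below simply applies the formula to given edge, node, and component counts; the sample graph (a single if-else in one procedure) is hypothetical.

def cyclomatic_complexity(edges, nodes, components=1):
    # V(G) = E - N + 2P for a control flow graph with E edges, N nodes, P components
    return edges - nodes + 2 * components

# Hypothetical if-else control flow graph: 4 nodes, 4 edges, 1 connected component
print(cyclomatic_complexity(edges=4, nodes=4, components=1))  # -> 2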

c. Knots (k)
➢ Measures: the complexity and unstructuredness of a module's control flow.
➢ Calculation: the count of intersections among the control flow paths through a function.
➢ Use: points out areas of higher unstructuredness and the resulting difficulty of understanding and testing; higher knot values may suggest the need to reorder or restructure, rather than just repartition, a module.

d. Information Flow
The other set of metrics we would like to consider are known as information flow metrics. The basis of information flow metrics is the following concept: even the simplest system consists of components, and it is the work that these components do and how they are fitted together that determine the complexity of the system. The following working definitions are used in information flow:

Component: any element identified by decomposing a (software) system into its constituent parts.

Cohesion: the degree to which a component performs a single function.

Coupling: the degree of linkage between one component and the others in the same system.

A procedure contributes complexity due to the following two factors:

1. The complexity of the procedure code itself.
2. The complexity due to the procedure's connections to its environment.

The effect of the first factor is captured by the LOC (lines of code) measure. For the quantification of the second factor, Henry and Kafura defined two terms, FAN-IN and FAN-OUT.

FAN-IN: the FAN-IN of a procedure is the number of local flows into that procedure plus the number of data structures from which the procedure retrieves information.
FAN-OUT: the FAN-OUT of a procedure is the number of local flows from that procedure plus the number of data structures that the procedure updates.

Procedure Complexity = Length * (FAN-IN * FAN-OUT)^2
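A minimal sketch of the Henry and Kafura measure, computing procedure complexity from length (here LOC), fan-in, and fan-out; the procedure names and counts are hypothetical.

def henry_kafura_complexity(length, fan_in, fan_out):
    # Procedure Complexity = Length * (fan-in * fan-out)^2
    return length * (fan_in * fan_out) ** 2

# Hypothetical procedures: name -> (LOC, fan-in, fan-out)
procedures = {"read_config": (25, 1, 3), "dispatch": (60, 4, 5)}
for name, (length, fan_in, fan_out) in procedures.items():
    print(name, henry_kafura_complexity(length, fan_in, fan_out))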

1.3. Halstead Metrics


According to Halstead, "A computer program is an implementation of an algorithm considered to be a collection of tokens which can be classified as either operators or operands."
Token Count
In these metrics, a computer program is considered to be a collection of tokens, which may be classified as either operators or operands. All software science metrics can be defined in terms of these basic symbols, which are called tokens.
The basic measures are:
n1 = count of unique operators
n2 = count of unique operands
N1 = count of total occurrences of operators
N2 = count of total occurrences of operands
In terms of the total tokens used, the size (length) of the program can be expressed as N = N1 + N2.
a. Program Vocabulary (n)
The size of the vocabulary of a program, which consists of the number of
unique tokens used to build a program, is defined as:

n = n1 + n2
where
n = vocabulary of the program
n1 = number of unique operators
n2 = number of unique operands
b. Program Volume (V)
The unit of measurement of volume is the standard unit for size "bits." It is
the actual size of a program if a uniform binary encoding for the vocabulary is used.

V = N * log2(n)

c. Program Level (L)


The value of L ranges between zero and one, with L = 1 representing a program written at the highest possible level (i.e., with minimum size).

L = V* / V

where V* is the potential (minimum) volume of the program.

d. Program Difficulty
The difficulty level or error-proneness (D) of the program is proportional to the number of unique operators and to how often operands are repeated in the program.

D = (n1 / 2) * (N2 / n2)

e. Programming Effort (E)


The unit of measurement of E is elementary mental discriminations.

E = V / L = D * V
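The following sketch derives the Halstead measures from the four basic counts. The counts used in the example are illustrative, and V* (the potential volume needed for L) is passed in explicitly since it depends on the algorithm's minimal form; when it is not supplied, the usual approximation L = 1/D is used.

import math

def halstead_metrics(n1, n2, N1, N2, v_star=None):
    # n1, n2: unique operators/operands; N1, N2: total operator/operand occurrences
    n = n1 + n2                      # program vocabulary
    N = N1 + N2                      # program length
    V = N * math.log2(n)             # program volume (bits)
    D = (n1 / 2) * (N2 / n2)         # difficulty
    L = (v_star / V) if v_star else 1 / D   # level; 1/D is the usual approximation
    E = D * V                        # effort (elementary mental discriminations)
    return {"n": n, "N": N, "V": V, "D": D, "L": L, "E": E}

# Illustrative counts, not taken from a real program:
print(halstead_metrics(n1=10, n2=15, N1=40, N2=60))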
1.4. Quality Metrics
➢ Provide indicators to improve the quality of the product.
➢ There are many quality attributes such as maintainability, usability, integrity
and correctness etc. (see McCall’s Quality Model)

a. Defect Metrics
A variety of metrics on the number and nature of defects found by manual testers, including:
o Defects by priority
o Defects by severity
o Defect slippage ratio: the percentage of defects that manual testers did not manage to identify before the software was shipped.

b. Reliability Metrics
Reliability metrics are used to quantitatively express the reliability of the software product. The choice of which metric to use depends upon the type of system to which it applies and the requirements of the application domain.
c. Maintainability Metrics
Although much cannot be done to alter the quality of the product during the maintenance phase, the following metrics track how quickly, and how well, defects are fixed:
• Fix backlog and backlog management index
• Fix response time and fix responsiveness
• Percent delinquent fixes
• Fix quality

2) Process Metrics
These are measures of various characteristics of the software development process, for example the efficiency of fault detection. They are used to measure the characteristics of the methods, techniques, and tools used for developing software.

2.1. General Considerations

➢ Reliability
➢ Usability
➢ Scalability
➢ Security
➢ Innovation
➢ Time-to-market

2.2. Empirical Models


Cost estimation simply means a technique used to arrive at cost estimates. A cost estimate is the financial spend required for the effort to develop and test software. Cost estimation models are mathematical algorithms or parametric equations, typically calibrated empirically on data from past projects, that are used to estimate the cost of a product or a project. Various such techniques, also known as cost estimation models, are available; several of them (e.g., COCOMO) are discussed under the composite models below.
2.3. Statistical Models
Statistical modeling is a process of applying statistical models and
assumptions to generate sample data and make real-world predictions. It helps
data scientists visualize the relationships between random variables and
strategically interpret datasets.
There are three main types of statistical models, including:
Parametric: Probability distributions with a finite number of parameters
Non-parametric: The number and nature of parameters aren’t fixed but flexible
Semi-parametric: Have both parametric and non-parametric components
There are various other situations where a statistical model would be an
appropriate choice:
➢ When data volume isn’t too big
➢ While isolating the effects of a small number of variables
➢ Errors and uncertainties in prediction are reasonable
➢ Independent variables have fewer and pre-specified interactions
➢ When you require high interpretability

2.4. Theory-Based Models


A theoretical model is a framework that researchers create to structure
a study process and plan how to approach a specific research inquiry. It can allow
you to define the purpose of your research and develop an informed perspective.

a. Rayleigh Model
The Rayleigh model is a parametric model in the sense that it is based
on a specific statistical distribution. When the parameters of the statistical
distribution are estimated based on the data from a software project, projections
about the defect rate of the project can be made based on the model.
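As a hedged sketch, the Rayleigh curve below models the defect (or effort) arrival rate over time, where K is the total expected number of defects and t_peak the time at which the rate peaks; both would normally be estimated from project data, and the values shown are purely illustrative.

import math

def rayleigh_defect_rate(t, total_defects, t_peak):
    # Defect arrival rate at time t: f(t) = K * (t / t_peak^2) * exp(-t^2 / (2 * t_peak^2))
    return total_defects * (t / t_peak ** 2) * math.exp(-t ** 2 / (2 * t_peak ** 2))

def rayleigh_cumulative(t, total_defects, t_peak):
    # Cumulative defects found by time t: F(t) = K * (1 - exp(-t^2 / (2 * t_peak^2)))
    return total_defects * (1 - math.exp(-t ** 2 / (2 * t_peak ** 2)))

# Illustrative parameters: 500 total expected defects, rate peaking at month 6
for month in range(0, 13, 3):
    print(month, round(rayleigh_cumulative(month, 500, 6), 1))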
b. Software Science Model – Halstead
According to Halstead, "A computer program is an implementation of an algorithm considered to be a collection of tokens which can be classified as either operators or operands." The token counts (n1, n2, N1, N2) and the derived measures (program vocabulary n, volume V, level L, difficulty D, and effort E) are the same as those defined for the Halstead metrics in Section 1.3.

2.5. Composite Models


Composite cost models combine more than one estimation approach, for example analytic equations, statistical fitting of historical project data, and expert judgment, in a single model. The models listed below are well-known composite estimation models.

a. COCOMO
COCOMO (Constructive Cost Model) is a regression model based on LOC, i.e., the number of lines of code. It is a procedural cost estimation model for software projects and is often used to predict the various parameters associated with a project, such as size, effort, cost, time, and quality. It was proposed by Barry Boehm in 1981 and is based on a study of 63 projects, which makes it one of the best-documented models. The key parameters that define the quality of any software product, and which are also outcomes of COCOMO, are primarily effort and schedule:
➢ Effort: the amount of labor required to complete a task, measured in person-months.
➢ Schedule: the amount of time required to complete the job, which is proportional to the effort put in; it is measured in units of time such as weeks or months.
Different COCOMO models have been proposed to predict the cost estimate at different levels of accuracy and detail; a sketch of the Basic COCOMO equations follows the list of model variants below. All of these models can be applied to a variety of projects, whose characteristics determine the values of the constants used in the calculations. Boehm defined three classes of system, organic, semi-detached, and embedded:
1. Organic: A software project is said to be of organic type if the team size required is fairly small, the problem is well understood and has been solved in the past, and the team members have nominal experience with the problem.

2. Semi-detached: A software project is said to be of semi-detached type if vital characteristics such as team size, experience, and knowledge of the programming environment lie between those of organic and embedded projects. Projects classified as semi-detached are comparatively less familiar and more difficult to develop than organic ones and require more experience, better guidance, and creativity; compilers are a typical example.

3. Embedded: A software project requiring the highest level of complexity, creativity, and experience falls under this category. Such software requires a larger team than the other two classes, and the developers need to be sufficiently experienced and creative to develop such complex systems.

i. Basic COCOMO Model
ii. Intermediate COCOMO Model
iii. Detailed COCOMO Model
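As a sketch of the Basic COCOMO equations, effort E = a * (KLOC)^b person-months and development time D = c * E^d months, using Boehm's published coefficients for the three project classes; the 32 KLOC project size below is an assumed example.

# Basic COCOMO coefficients (a, b, c, d) per Boehm (1981)
COEFFICIENTS = {
    "organic":       (2.4, 1.05, 2.5, 0.38),
    "semi-detached": (3.0, 1.12, 2.5, 0.35),
    "embedded":      (3.6, 1.20, 2.5, 0.32),
}

def basic_cocomo(kloc, mode="organic"):
    a, b, c, d = COEFFICIENTS[mode]
    effort = a * kloc ** b          # person-months
    schedule = c * effort ** d      # months
    return effort, schedule

# Assumed example: a 32 KLOC organic project
effort, schedule = basic_cocomo(32, "organic")
print(round(effort, 1), "person-months,", round(schedule, 1), "months")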

b. SOFTCOST
A cost model may be static or dynamic. In a static model, a single variable is taken as the key element for calculating cost and time. In a dynamic model, all variables are interdependent, and there is no single basic variable.
c. SPQR Model

d. COPMO

e. ESTIMACS

2.6. Reliability Models

A software reliability model specifies the form of a random process that describes the behavior of software failures with respect to time.

Software reliability models have appeared as people try to understand the characteristics of how and why software fails, and attempt to quantify software reliability.

Over 200 models have been established since the early 1970s, but how to quantify software reliability remains largely unsolved.

There is no single model that can be used in all situations. No model is complete or even representative.

Most software reliability models contain the following parts:

• Assumptions
• Factors
• A mathematical function that relates reliability to the factors; the function is generally higher-order exponential or logarithmic.
Software Reliability Modeling Techniques

Reliability modeling techniques fall into two kinds, prediction models and estimation models; both are based on observing and accumulating failure data and analyzing it with statistical inference. The following differentiates software reliability prediction models from software reliability estimation models.

Data reference:
Prediction models use historical information; estimation models use data from the current software development effort.

When used in the development cycle:
Prediction models are usually made before the development or test phases and can be used as early as the concept phase; estimation models are usually made later in the life cycle (after some data have been collected) and are not typically used in the concept or development phases.

Time frame:
Prediction models predict reliability at some future time; estimation models estimate reliability at either the present time or some future time.
Reliability Growth Models
A reliability growth model is a mathematical model of software reliability that predicts how reliability should improve over time as errors are discovered and repaired. These models help the manager decide how much effort should be devoted to testing. The objective of the project manager is to test and debug the system until the required level of reliability is reached. Well-known examples include the Jelinski-Moranda model and the Goel-Okumoto model; a sketch of the latter follows.
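As a hedged sketch of one such model, the Goel-Okumoto NHPP assumes the expected cumulative number of failures observed by time t is mu(t) = a * (1 - exp(-b*t)), where a is the eventual total number of failures and b the per-fault detection rate. The parameter values below are illustrative only; in practice they are fitted to observed failure data.

import math

def goel_okumoto_mean_failures(t, a, b):
    # Expected cumulative failures observed by time t: mu(t) = a * (1 - exp(-b * t))
    return a * (1 - math.exp(-b * t))

def goel_okumoto_failure_intensity(t, a, b):
    # Failure intensity (failures per unit time) at time t: lambda(t) = a * b * exp(-b * t)
    return a * b * math.exp(-b * t)

# Illustrative parameters: 120 total expected failures, detection rate 0.05 per test-hour
for hours in (0, 20, 40, 80, 160):
    print(hours, round(goel_okumoto_mean_failures(hours, a=120, b=0.05), 1))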
