Block 2
5.0 Introduction 5
5.1 Objectives 5
5.2 Different Types of Project Metrics 5
5.3 Software Project Estimation 9
5.3.1 Estimating the Size
5.3.2 Estimating Effort
5.3.3 Estimating Schedule
5.3.4 Estimating Cost
5.4 Models for Estimation 13
5.4.1 COCOMO Model
5.4.2 Putnam's Model
5.4.3 Statistical Model
5.4.4 Function Points
5.5 Automated Tools for Estimation 15
5.6 Summary 17
5.7 Solutions/Answers 17
5.8 Further Readings 17
5.0 INTRODUCTION
5.1 OBJECTIVES
Need for Project Metrics : Historically, the process of software development has been witnessing inaccurate estimations of schedule and cost, overshooting of delivery targets, and productivity of software engineers that is not commensurate with the growth in demand. Software development projects are quite complex, and there was no scientific method of measuring the software process. Thus, effective measurement of the process was virtually absent. The following phrase aptly describes the need for measurement:
If you cannot measure it, then you cannot improve it.
This is why measurement is very important to software projects. Without the process
of measurement, software engineering cannot be called engineering in the true sense.
Definition of metrics : Metrics deal with the measurement of the software process and the software product. Metrics quantify the characteristics of a process or a product. Metrics are often used to estimate project cost and project schedule.
Metrics can be broadly divided into two categories, namely, product metrics and process metrics.
Another way of classifying metrics is into primitive metrics and derived metrics.
Primitive metrics are directly observable quantities like lines of code (LOC), number
of man-hours etc.
Derived metrics are derived from one or more of primitive metrics like lines of code
per man-hour, errors per thousand lines of code.
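As a small illustration (the module size and effort figures below are assumed, not taken from the unit), a derived metric such as productivity can be computed from two primitive metrics:

#include <stdio.h>

int main(void)
{
    /* Primitive metrics (illustrative values only) */
    double lines_of_code = 4000.0;   /* size of the module in LOC      */
    double effort        = 8.0;      /* effort spent, in person-months */

    /* Derived metric: productivity in LOC per person-month */
    double productivity = lines_of_code / effort;

    printf("Productivity = %.1f LOC per person-month\n", productivity);
    return 0;
}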
Now, let us briefly discuss different types of product metrics and process metrics.
Product metrics
Lines of Code (LOC) : The LOC metric is possibly the most extensively used measure of program size. The reason is that LOC can be precisely defined. LOC may include executable source code and non-executable program code, such as comments.
It is evident that the productivity of the developer engaged in Module 1 is more than
the productivity of the developer engaged in Module 2. It is important to note here
how derived metrics are very handy to project managers to measure various aspects of
the projects.
Although LOC provides a direct measure of program size, this metric is not universally accepted by project managers. Looking at the data in the table below, it can easily be observed that LOC is not an absolute measure of program size and largely depends on the computer language and tools used for the development activity.
The LOC of the same module varies with the programming language used. Hence, LOC alone cannot be an indicator of program size. The data given in the table is only assumed and does not correspond to any real module(s).
There are other attributes of software which are not directly reflected in Lines of Code (LOC): the complexity of the program is not taken into account, and LOC penalises well-designed shorter programs. Another disadvantage of LOC is that the project manager is expected to estimate LOC before analysis and design are complete.
Function point : Function point metrics, instead of LOC, measure the functionality of the program. Function point analysis was first developed by Allan J. Albrecht in the 1970s. It was one of the initiatives taken to overcome the problems associated with LOC. Function points are counted from the following five types of components:
• External inputs : A process by which data crosses the boundary of the system.
Data may be used to update one or more logical files. It may be noted that data
here means either business or control information.
• External outputs : A process by which data crosses the boundary of the system
to outside of the system. It can be a user report or a system log report.
• External user inquiries : A count of the processes in which both input and output result in data retrieval from the system. These are basically system inquiry processes.
• Internal logical files : A group of logically related data files that resides entirely within the boundary of the application software and is maintained through external input as described above.
• External interface files : A group of logically related data files that are used by the system for reference purposes only. These data files remain completely outside the application boundary and are maintained by external applications.
For transactions like external inputs, external outputs and user inquiries, the ranking of high, low or medium is based on the number of files updated (for external inputs) or the number of files referenced (for external outputs and external inquiries). The complexity also depends on the number of data elements.
Based on their complexity, external inquiries, external inputs and external outputs are assigned numerical values (ratings).
Similarly, internal logical files and external interface files are assigned numerical values depending on the element types and the number of data elements.
Organisations may develop their own strategy to assign values to the various function point components. Once the number of function points has been identified and their significance has been arrived at, the total function point count can be calculated as follows.
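A minimal sketch of the calculation, assuming the commonly quoted Albrecht-style average weights and the fourteen general system characteristics; these particular numbers are not given in the unit and are used here only for illustration:

#include <stdio.h>

int main(void)
{
    /* Counts of each component, all taken at average complexity (illustrative) */
    int ei = 10, eo = 7, eq = 5, ilf = 4, eif = 2;

    /* Commonly quoted average-complexity weights (assumed, not from the unit) */
    int unadjusted_fp = ei * 4 + eo * 5 + eq * 4 + ilf * 10 + eif * 7;

    /* Sum of the 14 general system characteristics, each rated 0..5 */
    int gsc_total = 38;                            /* illustrative value      */
    double vaf = 0.65 + 0.01 * gsc_total;          /* value adjustment factor */

    double fp = unadjusted_fp * vaf;
    printf("Unadjusted FP = %d, adjusted FP = %.1f\n", unadjusted_fp, fp);
    return 0;
}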
The following are some of the advantages of function points:
• Function points can be used to estimate the size of a software application accurately, irrespective of technology, language and development methodology.
• Users understand the basis on which the size of the software is calculated, as function points are derived directly from user-required functionalities.
• Function points can be used to track and monitor projects.
• Function points can be calculated at various stages of software development
process and can be compared.
Other types of metrics used for various purposes are quality metrics which
include the following:
• Reliability metrics : These metrics measure mean time to failure. This can be
done by collecting data over a period of time.
Software project estimation is the process of estimating the various resources required for the completion of a project. Effective software project estimation is an important activity in any software development project. Underestimating a software project and understaffing it often leads to low-quality deliverables, and the project misses the target deadline, leading to customer dissatisfaction and loss of credibility for the company. On the other hand, overstaffing a project without proper control will increase the cost of the project and reduce the competitiveness of the company. Software project estimation mainly involves the following activities:
• Estimating the size of the project. There are many procedures available for estimating the size of a project, based on quantitative approaches like estimating the lines of code or estimating the functionality requirements of the project (function points).
• Estimating total cost of the project depending on the above and other resources.
[Figure 5.1: Software project estimation — user requirements, constraints and organisational policies and standards are inputs to the software project estimation process, which produces estimates of schedule, effort and cost.]
Estimating the size of the software to be developed is the very first step in making an effective estimation of the project. The customer's requirements and the system specification form a baseline for estimating the size of the software. At a later stage of the project, the system design document can provide additional details for estimating the overall size of the software.
• One way to estimate project size is through past data from an earlier developed system. This is called estimation by analogy.
• The other way of estimation is through product features/functionality. The system is divided into several subsystems depending on functionality, and the size of each subsystem is calculated.
Once the size of the software is estimated, the next step is to estimate the effort based on the size. The estimation of effort can be made from the organisational specifics of the software development life cycle. The development of any application software system is more than just coding of the system. Depending on the deliverable requirements, the estimation of effort for the project will vary. Effort is estimated in number of man-months.
• The best way to estimate effort is based on the organisation's own historical data of the development process. Organisations follow a similar development life cycle for developing various applications.
[Figure 5.2: Cost estimation process — hardware cost, travel expenses, training cost, communication cost and other cost factors are inputs to the cost estimation process, which produces the project cost.]
[Figure 5.3: Project estimation process — from user requirements through to the estimation of cost.]
Now, once the estimation is complete, we may be interested to know how close the estimates are to reality. The answer to this is: “we do not know until the project is complete”. There is always some uncertainty associated with all estimation techniques. The accuracy of project estimation will depend on the following:
The following are some of the reasons which make the task of cost estimation
difficult:
The following are some of the reasons for poor and inaccurate estimation:
If we elongate the project, we can reduce the overall cost. However, long project durations are usually not liked by customers and management. There is always a shortest possible duration for a project, but it comes at a cost.
5.4 MODELS FOR ESTIMATION
Most estimation models express effort as a function of a set of variables:
E = f(vi)
where E is the effort and vi denotes the variables (such as estimated size and cost drivers) on which the estimate depends.
5.4.1 COCOMO Model
COCOMO stands for Constructive Cost Model. It was introduced by Barry Boehm. It is perhaps the best known and most thoroughly documented of all software cost estimation models. It provides the following three levels of models:
• Detailed COCOMO : This model computes development effort and cost, incorporating all characteristics of the intermediate level together with an assessment of the cost implications of each step of development (analysis, design, testing, etc.).
This model may be applied to three classes of software projects as given below:
• Organic : A small-sized project; a simple software project where the development team has good experience of the application.
In the COCOMO model, the development effort equation assumes the following form:
E = a * (S^b) * m
where
a and b are constants that are determined for each model,
E = effort,
S = size of the source code in LOC, and
m = a multiplier that is determined from a set of 15 cost driver attributes.
The following are a few examples of the above cost drivers:
Barry Boehm suggested that a detailed model would provide a cost estimate to an accuracy of ±20% of the actual value.
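As a hedged illustration of how such an equation is applied (the constants below are the widely published basic-COCOMO organic-mode values, the multiplier is simply taken as 1.0, and the size is invented; none of these numbers come from this unit):

#include <stdio.h>
#include <math.h>

int main(void)
{
    /* Basic COCOMO, organic mode: E = a * (size in KLOC)^b person-months */
    double a = 2.4, b = 1.05;   /* published basic-model constants (assumed here) */
    double kloc = 32.0;         /* estimated size: 32,000 LOC (illustrative)      */
    double m = 1.0;             /* cost-driver multiplier, taken as neutral       */

    double effort = a * pow(kloc, b) * m;   /* effort in person-months */
    printf("Estimated effort = %.1f person-months\n", effort);
    return 0;
}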
5.4.2 Putnam's Model
The Putnam model is based on the Rayleigh-Norden curve of manpower distribution:
P = (K * t / T^2) * exp(-t^2 / (2 * T^2))
where P is the manpower at time t, K is the total project effort and T is the development time.
The Rayleigh-Norden curve is used to derive an equation that relates lines of code
delivered to other parameters like development time and effort at any time during the
project.
S = Ck * K^(1/3) * T^(4/3)
where S is the number of lines of code delivered, Ck is a technology constant, K is the total effort and T is the development time.
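A minimal sketch of using the software equation to back out the total effort for a given size, technology constant and schedule (all three input values are assumed purely for illustration):

#include <stdio.h>
#include <math.h>

int main(void)
{
    /* Putnam software equation: S = Ck * K^(1/3) * T^(4/3)
       Rearranged for effort:    K = (S / (Ck * T^(4/3)))^3   */
    double S  = 50000.0;   /* size in LOC (illustrative)          */
    double Ck = 5000.0;    /* technology constant (assumed value) */
    double T  = 1.5;       /* development time (assumed, years)   */

    double K = pow(S / (Ck * pow(T, 4.0 / 3.0)), 3.0);
    printf("Estimated total effort K = %.1f (under these assumed units)\n", K);
    return 0;
}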
5.4.3 Statistical Model
From the data of a number of completed software projects, C.E. Walston and C.P. Felix developed a simple empirical model of software development effort with respect to the number of lines of code. In this model, LOC is assumed to be directly related to the development effort, as given below:
E = a * L^b
a and b are parameters obtained from regression analysis of data. The final equation is
of the following form:
E = 5.2 * L^0.91
P = L / E
where P = Productivity Index.
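A small illustration of applying the Walston-Felix equations (the size value is assumed; the units, effort in person-months for size in thousands of lines of code, are the commonly quoted ones rather than something stated in this unit):

#include <stdio.h>
#include <math.h>

int main(void)
{
    /* Walston-Felix: E = 5.2 * L^0.91, with L commonly taken in KLOC
       and E in person-months (an assumption, as noted above).        */
    double L = 30.0;                   /* 30 KLOC, illustrative value  */
    double E = 5.2 * pow(L, 0.91);     /* estimated effort             */
    double P = L / E;                  /* productivity index           */

    printf("Effort E = %.1f person-months\n", E);
    printf("Productivity P = %.2f KLOC per person-month\n", P);
    return 0;
}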
5.4.4 Function Points
It may be noted that COCOMO, Putnam and statistical models are based on LOC. A
number of estimation models are based on function points instead of LOC. However,
there is very little published information on estimation models based on function
points.
5.5 AUTOMATED TOOLS FOR ESTIMATION
After looking at the above models for software project estimation, we have reason to think of software that implements these models. This is exactly what automated estimation tools do. These estimation tools, which estimate cost and effort, allow project managers to perform “what if” analysis. Some estimation tools may support only size estimation, or only the conversion of size to effort and of schedule to cost.
There are more than dozens of estimation tools available. But, all of them have the
following common characteristics:
[Figure: Automated estimation tools — requirements are fed to size estimation tools, whose output goes to a project estimation tool that produces cost, schedule, effort and reports.]
No estimation tool is the solution to all estimation problems. One must understand
that the tools are just to help the estimation process.
Most models require an estimate of software product size. However, software size is
difficult to predict early in the development lifecycle. Many models use LOC for
sizing, which is not measurable during requirements analysis or project planning.
Although function points and object points can be used earlier in the lifecycle, these measures are extremely subjective.
Size estimates can also be very inaccurate. Methods of estimation and data collection must be consistent to ensure an accurate prediction of product size. Unless the size metrics used in the model are the same as those used in practice, the model will not yield accurate results (Fenton, 1997).
[Table: Automated estimation tools — Tool | Tool vendor site | Functionality/Remark; vendor sites listed include http://www.costxpert.com/]
5.6 SUMMARY
Estimation is an integral part of the software development process and should not be
taken lightly. A well planned and well estimated project is likely to be completed in
time. Incomplete and inaccurate documentation may pose serious hurdles to the
success of a software project during development and implementation. Software cost
estimation is an important part of the software development process. Metrics are
important tools to measure software product and process. Metrics are to be selected
carefully so that they provide a measure for the intended process/product. Models are
used to represent the relationship between effort and a primary cost factor such as
software product size. Cost drivers are used to adjust the preliminary estimate
provided by the primary cost factor. Models have been developed to predict software
cost based on empirical data available, but many suffer from some common problems.
The structure of most models is based on empirical results rather than theory. Models
are often complex and rely heavily on size estimation. Despite problems, models are
still important to the software development process. A model can be used most
effectively to supplement and corroborate other methods of estimation.
5.7 SOLUTIONS/ANSWERS
1. True
2. Time, Schedule
5.8 FURTHER READINGS
Reference websites
http://www.rspa.com
http://www.ieee.org
http://www.ncst.ernet.in
UNIT 6 RISK MANAGEMENT AND PROJECT SCHEDULING
Structure Page Nos.
6.0 Introduction 18
6.1 Objectives 18
6.2 Identification of Software Risks 18
6.3 Monitoring of Risks 20
6.4 Management of Risks 20
6.4.1 Risk Management
6.4.2 Risk Avoidance
6.4.3 Risk Detection
6.5 Risk Control 22
6.6 Risk Recovery 23
6.7 Formulating a Task Set for the Project 24
6.8 Choosing the Tasks of Software Engineering 24
6.9 Scheduling Methods 25
6.10 The Software Project Plan 27
6.11 Summary 28
6.12 Solutions/Answers 28
6.13 Further Readings 30
6.0 INTRODUCTION
As human beings, we would like life to be free from dangers, difficulties and risks of any type. In case a risk arises, we would take proper measures to recover as soon as possible. Similarly, in software engineering, risk management plays an important role in the successful deployment of the software product. Risk management involves monitoring of risks and taking the necessary actions when a risk arises, by applying risk recovery methods.
6.1 OBJECTIVES
• Updates in the hardware resources: The team should be aware of the latest updates in the hardware resources, such as the latest CPUs (Intel P4, Motorola series, etc.), peripherals, etc. If the developer builds a product and a newer hardware product is later released in the market, the software product should still support at least the minimum features; otherwise, it is considered a risk and may lead to the failure of the project.
• Extra support: The software should be able to support a set of a few extra
features in the vicinity of the product to be developed.
• External Risks: The software should have backup in CD, tapes, etc., fully
encrypted with full licence facilities. The software can be stored at various
important locations to avoid any external calamities like floods, earthquakes,
etc. Encryption is maintained so that no person external to the team can tap the source code.
6.3 MONITORING OF RISKS
Various risks are identified, and a risk monitor table with attributes like risk name, module name, team members involved, lines of code, code affected by the risk, hardware resources, etc., is maintained. As the project continues over the following two to three weeks, the risk table is updated, and it is seen whether there is a ripple effect in the table due to the continuation of old risks. Risk monitors can change the ordering of risks to make the table easy for computation. Table 6.1 depicts a risk table monitor. It depicts the risks that are being monitored.
Table 6.1: Risk table monitor
The above risk table monitor has a risk in module compute(), with risks in lines 5, 8 and 20 in week 1. In week 2, risks are present in lines 5 and 25, so the risks are reduced in week 2, and the priority is set to 3. Similarly, in the second row, the risk is due to more memory and peripherals, affecting modules f1() and f5() in week 1. After some modifications in week 2, module f2() is affected and the priority is set to 1.
[Figure 6.1: Risk manager flow — Start Risk Manager; if no risk is anticipated, the risk manager halts; detected risks pass through risk detection (risk analysis, risk category, risk prioritisation) and then risk control (risk pending, risk resolution, risk not solvable), after which the risk manager ends.]
From Figure 6.1, it is clear that the first phase is to avoid risk by anticipating and using tools from previous project history. In case there is no risk, the risk manager halts.
In case there is risk, detection is done using various risk analysis techniques and
further prioritising risks. In the next phase, risk is controlled by pending risks,
resolving risks and in the worst case (if risk is not solved) lowering the priority.
Lastly, risk recovery is done fully, partially or an alternate solution is found.
Risk Anticipation: Various risk anticipation rules are listed according to standards
from previous projects’ experience, and also as mentioned by the project manager.
Risk tools: Risk tools are used to test whether the software is risk free. The tools have a built-in database of available risk areas and can be updated depending upon the type of project.
The risk detection algorithm detects a risk, and the detection activity can be described in terms of the following:
Risk Analysis: In this phase, the risk is analyzed in terms of various hardware and software parameters, such as probability of occurrence (pr), weight factor (wf) (hardware resources, lines of code, persons) and risk exposure (pr * wf).
The maximum value of risk exposure indicates that the problem has to be solved as soon as possible and be given high priority. A risk analysis table is maintained as shown above.
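A minimal sketch of how risk exposure can be computed and used for prioritisation (the risk names and numbers below are assumed for illustration, not taken from the unit):

#include <stdio.h>

struct risk {
    const char *name;
    double pr;   /* probability of occurrence */
    double wf;   /* weight factor             */
};

int main(void)
{
    struct risk risks[] = {
        {"memory shortage",         0.4, 8.0},
        {"module compute() faults", 0.7, 5.0},
        {"staff attrition",         0.2, 9.0}
    };
    int n = (int)(sizeof(risks) / sizeof(risks[0]));

    /* Risk exposure = pr * wf; the largest exposure gets the highest priority */
    for (int i = 0; i < n; i++) {
        double exposure = risks[i].pr * risks[i].wf;
        printf("%-26s exposure = %.2f\n", risks[i].name, exposure);
    }
    return 0;
}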
Risk Category: Risk identification can be from various factors like persons involved in
the team, management issues, customer specification and feedback, environment,
commercial, technology, etc. Once proper category is identified, priority is given
depending upon the urgency of the product.
Risk Prioritisation: Depending upon the entries of the risk analysis table, the
maximum risk exposure is given high priority and has to be solved first.
Once the prioritisation is done, the next step is to control various risks as follows:
• Risk Pending: According to the priority, low-priority risks are pushed to the end of the queue, keeping in view the available resources (hardware, manpower, software); if they remain pending for too long, their priority is raised.
• Risk Resolution: The risk manager makes a firm decision on how to resolve the risk.
• Risk elimination: This action leads to serious error in software.
• Risk transfer: If the risk is transferred to some part of the module, then risk
analysis table entries get modified. Thereby, again risk manager will control high
priority risk.
• Risk not solvable: If a risk needs too much time and too many resources, it is dealt with at the business level of the organisation; the customer is notified and the team proposes an alternate solution. After consultation, there may be a slight variation in the customer specification.
Full : The risk analysis table is scanned and if the risk is fully solved, then
corresponding entry is deleted from the table.
Partial : The risk analysis table is scanned and due to partially solved risks, the
entries in the table are updated and thereby priorities are also updated.
6.7 FORMULATING A TASK SET FOR THE PROJECT
The objective of this section is to get an insight into project scheduling by defining
various task sets dependent on the project and choosing proper tasks for software
engineering.
Various static and dynamic scheduling methods are also discussed for proper
implementation of the project.
• Technical staff expertise: All staff members should have sufficient technical expertise for timely implementation of the project. Meetings have to be conducted, and weekly status reports are to be generated.
• Technology update : Latest tools and existing tested modules have to be used for
fast and efficient implementation of the project.
• Full or partial implementation of the project : In case the project is very large, to meet the market requirements the organisation may have to satisfy the customer with at least a few modules. The remaining modules can be delivered at a later stage.
• Time allocation : The project has to be divided into various phases and time for
each phase has to be given in terms of person-months, module-months, etc.
• Module binding : Modules have to be bound to various technical staff for the design, implementation and testing phases. The necessary inter-dependencies have to be shown in a flow chart.
• Milestones : The outcome for each phase has to be mentioned in terms of quality,
specifications implemented, limitations of the module and latest updates that can
be implemented (according to the market strategy).
Once the task set has been defined, the next step is to choose the tasks for the software project. Depending upon the software process model (linear sequential, iterative, evolutionary, etc.), the corresponding tasks are selected. From the above task set, let us consider how to choose tasks for project development (as an example) as follows:
• Scope : Overall scope of the project.
• Scheduling and planning : Scheduling of various modules and their milestones, preparation of weekly reports, etc.
Scheduling Techniques
The following are various types of scheduling techniques used in software engineering:
• Work Breakdown Structure : The project is scheduled in various phases
following a bottom-up or top-down approach. A tree-like structure is followed
without any loops. At each phase or step, milestone and deliverables are
mentioned with respect to requirements. The work breakdown structure shows the
overall breakup flow of the project and does not indicate any parallel flow.
Figure 6.2 depicts an example of a work breakdown structure.
[Figure 6.2: Work breakdown structure — the root node 'Software Project' is broken down into requirement and analysis, design, coding, testing and maintenance.]
The project is split into requirement and analysis, design, coding, testing and maintenance phases. Further, requirement and analysis is divided into R1, R2, ..., Rn; design is divided into D1, D2, ..., Dn; coding is divided into C1, C2, ..., Cn; testing is divided into T1, T2, ..., Tn; and maintenance is divided into M1, M2, ..., Mn. If the project is complex, then further subdivision is done. Upon the completion of each stage, integration is done.
• Flow Graph : Various modules are represented as nodes with edges connecting
nodes. Dependency between nodes is shown by flow of data between nodes.
Nodes indicate milestones and deliverables with the corresponding module
implemented. Cycles are not allowed in the graph. Start and end nodes indicate
the source and terminating nodes of the flow. Figure 6.3 depicts a flow graph.
[Figure 6.3: Flow graph — Start leads to module M1; M1 flows to M2 and M3, which both flow into M4, after which the project ends.]
M1 is the starting module and the data flows to M2 and M3. The combined data
from M2 and M3 flow to M4 and finally the project terminates. In certain
projects, time schedule is also associated with each module. The arrows indicate
the flow of information between modules.
• Gantt Chart or Time Line Charts : A Gantt chart can be developed for the entire project, or a separate chart can be developed for each function. A tabular form is maintained where rows indicate the tasks with milestones and columns indicate duration (weeks/months). The horizontal bars that span across columns indicate the duration of the task. Figure 6.4 depicts a Gantt chart. The circles indicate the milestones.
2. Time taken to complete a project or module in the minimum time (all resources available), tmin.
An average of tnormal, tmin, tmax and thistory is taken, depending upon the project. Sometimes, various weights are applied, such as 4*tnormal, 5*tmin, 0.9*tmax and 2*thistory, to estimate the time for a project or module. The fixing of these parameters is done by the project manager.
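A minimal sketch of such a weighted estimate, assuming that the weighted values are averaged by dividing by the sum of the weights (the time values are illustrative; the unit itself does not spell out the averaging rule):

#include <stdio.h>

int main(void)
{
    /* Illustrative time estimates for a module, in weeks */
    double t_normal  = 6.0;   /* under normal conditions      */
    double t_min     = 4.0;   /* with all resources available */
    double t_max     = 9.0;   /* worst case                   */
    double t_history = 7.0;   /* from a similar past project  */

    /* Weights as mentioned in the text; the averaging rule is assumed */
    double w_normal = 4.0, w_min = 5.0, w_max = 0.9, w_history = 2.0;

    double weighted_sum = w_normal * t_normal + w_min * t_min +
                          w_max * t_max + w_history * t_history;
    double weight_total = w_normal + w_min + w_max + w_history;

    printf("Estimated time = %.1f weeks\n", weighted_sum / weight_total);
    return 0;
}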
1. Within the organisation: How is the project to be implemented? What are the various constraints (time, cost, staff)? What is the market strategy?
2. With respect to the customer: Weekly or timely meetings with the customer with
presentations on status reports. Customer feedback is also taken and further
modifications and developments are done. Project milestones and deliverables are
also presented to the customer.
• Select a project
o Identifying project’s aims and objectives
o Understanding requirements and specification
o Methods of analysis, design and implementation
o Testing techniques
o Documentation
• Budget allocation
o Exceeding limits within control
• Project Estimates
o Cost
o Time
o Size of code
o Duration
• Resource Allocation
o Hardware
o Software
o Previous relevant project information
o Digital Library
• Risk Management
o Risk Avoidance
o Risk Detection
o Risk Control
o Risk Recovery
• Scheduling techniques
o Work Breakdown Structure
o Activity Graph
o Critical path method
o Gantt Chart
o Program Evaluation Review Technique
• People
o Staff Recruitment
o Team management
o Customer interaction
• Quality control and standard
Not all of the above methods/techniques are covered in this unit. The student is advised to study the references for the necessary information.
6.11 SUMMARY
This unit describes various risk management and risk monitoring techniques. In case major risks are identified, they are resolved, and finally risk recovery is done. The risk manager takes care of all the phases of risk management. Various task sets are defined
for a project from the customer point of view, the developer’s point of view, the
market strategy view, future trends, etc. For the implementation of a project, a proper
task set is chosen and various attributes are defined. For successful implementation of
a project, proper scheduling (with various techniques) and proper planning are done.
6.12 SOLUTIONS/ANSWERS
Check Your Progress 1
1) Any problem that occurs during customer specification, design, coding, implementation or testing can be termed a risk. If such problems are ignored, they propagate further down, which is termed the ripple effect. Risk management deals with avoidance and detection of risks at every phase of the software development cycle.
2) Two risks involved with team members are as follows:
• Improper training of the technical staff.
• Lack of proper communication between the developers.
5) Risks can be prioritised based on their dependencies on other modules and external factors. If a module has many dependencies, then its priority is given a higher value compared to independent modules. If a module often causes security failures in the system, its priority can be set to a higher value.
2) Various phases of risk management are risk avoidance, risk detection, risk
analysis, risk monitoring, risk control and risk recovery.
3) Attributes mentioned in the risk analysis table are risk name, probability of
occurrence of risk, weight factor and risk exposure.
4) Risk resolution means taking final steps to free the module or system from risk.
Risk resolution involves risk elimination, risk transfer and disclosure of risk to the
customer.
5) Sometimes it is difficult to recover from a risk, and it is better to add extra features or an alternate solution, keeping the customer specification in view with slight modifications, in order to match future trends in the hardware and software markets.
1) The two factors to formulate a task set for a software project are as follows:
• Customer satisfaction
• Full or partial implementation of the project
3) A Gantt chart or time line chart indicates the time schedule and milestones for each task and its relevant subtasks.
4) The software project plan indicates the scope of the project, milestones and deliverables, project estimates, resource allocation, risk management, scheduling techniques, and quality control and standards.
6.13 FURTHER READINGS
Reference websites
http://www.rspa.com
http://www.ieee.org
http://www.ncst.ernet.in
UNIT 7 SOFTWARE TESTING
Structure Page Nos.
7.0 Introduction 53
7.1 Objectives 54
7.2 Basic Terms used in Testing 54
7.2.1 Input Domain
7.2.2 Black Box and White Box testing Strategies
7.2.3 Cyclomatic Complexity
7.3 Testing Activities 64
7.4 Debugging 65
7.5 Testing Tools 67
7.6 Summary 68
7.7 Solutions/Answers 69
7.8 Further Readings 69
7.0 INTRODUCTION
Testing means executing a program in order to understand its behaviour, that is,
whether or not the program exhibits a failure, its response time or throughput for
certain data sets, its mean time to failure, or the speed and accuracy with which users
complete their designated tasks. In other words, it is a process of operating a system or
component under specified conditions, observing or recording the results, and making
an evaluation of some aspect of the system or component. Testing can also be
described as part of the process of Validation and Verification.
Validation is the process of evaluating a system or component during or at the end of
the development process to determine if it satisfies the requirements of the system, or,
in other words, are we building the correct system?
Verification is the process of evaluating a system or component at the end of a phase
to determine if it satisfies the conditions imposed at the start of that phase, or, in other
words, are we building the system correctly?
Software testing gives an important set of methods that can be used to evaluate and assure that a program or system meets its functional and non-functional requirements.
To be more specific, software testing means executing a program or its components in order to assure:
The correctness of software with respect to requirements or intent;
The performance of software under various conditions;
The robustness of software, that is, its ability to handle erroneous inputs and
unanticipated conditions;
The usability of software under various conditions;
The reliability, availability, survivability or other dependability measures of
software; or
Installability and other facets of a software release.
The purpose of testing is to show that the program has errors. The aim of most testing
methods is to systematically and actively locate faults in the program and repair them.
Debugging is the next stage of testing. Debugging is the activity of:
Determining the exact nature and location of the suspected error within
the program and
Fixing the error. Usually, debugging begins with some indication of the
existence of an error.
The purpose of debugging is to locate errors and fix them.
7.1 OBJECTIVES
Inputs passed in as parameters. Variables that are inputs to the function under test can be:
(i) Structured data such as linked lists, files or trees, as well as atomic data such as integers and floating point numbers;
(ii) A reference or a value parameter as in the C function declaration
int P(int *power, int base) {
...}
Inputs entered by the user via the program interface;
Inputs that are read in from files;
Inputs that are constants and precomputed values; Constants declared in an
enclosing scope of function under test, for example,
#define PI 3.14159
double circumference(double radius)
{
return 2*PI*radius;
}
In general, the inputs to a program or a function are stored in program variables. A
program variable may be:
A variable declared in a program as in the C declarations
For example: int base; char s[];
Resulting from a read statement or similar interaction with the environment,
For example: scanf("%d\n", &x);
7.2.2 Black Box and White Box Test Case Selection Strategies
Black box testing: a method in which test cases are derived from the functional specification of the system; and
White box testing: a method in which test cases are derived from the internal design specifications or actual code (sometimes referred to as glass-box testing).
Black box test case selection can be done without any reference to the program design
or the program code. Test case selection is only concerned with the functionality and
features of the system but not with its internal operations.
The real advantage of black box test case selection is that it can be done before
the design or coding of a program. Black box test cases can also help to get the
design and coding correct with respect to the specification. Black box testing
methods are good at testing for missing functions or program behavior that
deviates from the specification. Black box testing is ideal for evaluating
products that you intend to use in your systems.
The main disadvantage of black box testing is that black box test cases cannot
detect additional functions or features that have been added to the code. This is
especially important for systems that need to be safe (additional code may
interfere with the safety of the system) or secure (additional code may be used
to break security).
White box test cases are selected using the specification, design and code of the
program or functions under test. This means that the testing team needs access to the
internal designs or code for the program.
The chief advantage of white box testing is that it tests the internal details of the
code and tries to check all the paths that a program can execute to determine if a
problem occurs. White box testing can check additional functions or code that
has been implemented, but not specified.
The main disadvantage of white box testing is that you must wait until the design and coding of the programs or functions under test have been completed in order to select test cases.
Methods for Black box testing strategies
A number of test case selection methods exist within the broad classification of black
box and white box testing.
For Black box testing strategies, the following are the methods:
Boundary-value Analysis;
Equivalence Partitioning.
We will also study State Based Testing, which can be classified as an opaque box selection strategy that lies somewhere between black box and white box selection strategies.
Boundary-value-analysis
The basic concept used in Boundary-value-analysis is that if the specific test cases are
designed to check the boundaries of the input domain then the probability of detecting
an error will increase. If we want to test a program written as a function F with two
input variables x and y., then these input variables are defined with some boundaries
like a1 ≤ x ≤ a2 and b1 ≤ y ≤ b2. It means that inputs x and y are bounded by two
intervals [a1, a2] and [b1, b2].
The following set of guidelines is for the selection of test cases according to the
principles of boundary value analysis. The guidelines do not constitute a firm set of
rules for every case. You will need to develop some judgement in applying these
guidelines.
1. If an input condition specifies a range of values, then construct valid test cases
for the ends of the range, and invalid input test cases for input points just
beyond the ends of the range.
2. If an input condition specifies a number of values, construct test cases for the
minimum and maximum values; and one beneath and beyond these values.
3. If an output condition specifies a range of values, then construct valid test cases
for the ends of the output range, and invalid input test cases for situations just
beyond the ends of the output range.
4. If an output condition specifies a number of values, construct test cases for the
minimum and maximum values; and one beneath and beyond these values.
5. If the input or output of a program is an ordered set (e.g., a sequential file, linear
list, table), focus attention on the first and last elements of the set.
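As a hedged illustration of guidelines 1 and 2 (the function name and the ranges 1 <= x <= 10 and 1 <= y <= 5 are assumed for the example, not taken from the unit), the boundary-value test inputs can simply be enumerated:

#include <stdio.h>

/* Boundary-value test inputs for a function F(x, y) with
   1 <= x <= 10 and 1 <= y <= 5 (assumed ranges).          */
struct test_case { int x; int y; };

int main(void)
{
    struct test_case cases[] = {
        {1, 3}, {10, 3},   /* valid: x at its lower and upper boundary */
        {0, 3}, {11, 3},   /* invalid: x just beyond each boundary     */
        {5, 1}, {5, 5},    /* valid: y at its lower and upper boundary */
        {5, 0}, {5, 6}     /* invalid: y just beyond each boundary     */
    };
    int n = (int)(sizeof(cases) / sizeof(cases[0]));

    for (int i = 0; i < n; i++)
        printf("Run F(%d, %d)\n", cases[i].x, cases[i].y);
    return 0;
}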
Equivalence Partitioning
Equivalence Partitioning is a method for selecting test cases based on a partitioning of
the input domain. The aim of equivalence partitioning is to divide the input domain of
the program or module into classes (sets) of test cases that have a similar effect on the
program. The classes are called Equivalence classes.
Equivalence Classes
An Equivalence Class is a set of inputs that the program treats identically when the
program is tested. In other words, a test input taken from an equivalence class is
representative of all of the test inputs taken from that class. Equivalence classes are
determined from the specification of a program or module. Each equivalence class is
used to represent certain conditions (or predicates) on the input domain. For
equivalence partitioning it is usual to also consider valid and invalid inputs. The terms
input condition, valid and invalid inputs, are not used consistently. But, the following
definition spells out how we will use them in this subject. An input condition on the
input domain is a predicate over the values of the input domain. A Valid input to a
program or module is an element of the input domain that is expected to return a non-
error value. An Invalid input is an input that is expected to return an error value.
Equivalence partitioning is then a systematic method for identifying interesting input
conditions to be tested. An input condition can be applied to a set of values of a
specific input variable, or a set of input variables
as well.
A Method for Choosing Equivalence Classes
The aim is to minimize the number of test cases required to cover all of the identified
equivalence classes. The following are two distinct steps in achieving this goal:
Step 1: Identify the equivalence classes
If an input condition specifies a range of values, then identify one valid equivalence
class and two invalid equivalence classes.
For example, if an input condition specifies a range of values from 1 to 99, then three equivalence classes can be identified:
One valid equivalence class: 1 ≤ X ≤ 99
Two invalid equivalence classes: X < 1 and X > 99
Step 2: Choose test cases
The next step is to generate test cases using the equivalence classes identified in the
previous step. The guideline is to choose test cases on the boundaries of partitions and
test cases close to the midpoint of the partition. In general, the idea is to select at least
one element from each equivalence class.
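Continuing the 1 to 99 example above, a minimal sketch of test-case selection from the three classes (the predicate being tested is assumed for illustration):

#include <stdio.h>

/* Assumed predicate under test: accepts values in the range 1..99 */
static int in_range(int x)
{
    return x >= 1 && x <= 99;
}

int main(void)
{
    int valid_cases[]   = {1, 50, 99};    /* valid class: boundaries and midpoint */
    int invalid_cases[] = {0, -5, 100};   /* invalid classes: X < 1 and X > 99    */

    for (int i = 0; i < 3; i++)
        printf("valid input %4d -> %d\n", valid_cases[i], in_range(valid_cases[i]));
    for (int i = 0; i < 3; i++)
        printf("invalid input %4d -> %d\n", invalid_cases[i], in_range(invalid_cases[i]));
    return 0;
}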
Example 2: Selecting Test Cases for the Triangle Program
In this example, we will select a set of test cases for the following triangle program
based on its specification. Consider the following informal specification for the
Triangle Classification Program. The program reads three integer values from the
standard input. The three values are interpreted as representing the lengths of the sides
of a triangle. The program then prints a message to the standard output that states
whether the triangle, if it can be formed, is scalene, isosceles, equilateral, or right-
angled. The specification of the triangle classification program lists a number of
inputs for the program as well as the form of output. Further, we require that each of
the inputs “must be” a positive integer. Now, we can determine valid and invalid
equivalence classes for the input conditions. Here, we have a range of values. If the
three integers we have called x, y and z are all greater than zero, then, they are valid
and we have the equivalence class.
ECvalid = {(x, y, z) | x > 0 and y > 0 and z > 0}
For the invalid classes, we need to consider the case where each of the three variables in turn can be negative, and so we have the following equivalence classes:
ECInvalid1 = {(x, y, z) | x < 0 and y > 0 and z > 0}
ECInvalid2 = {(x, y, z) | x > 0 and y < 0 and z > 0}
ECInvalid3 = {(x, y, z) | x > 0 and y > 0 and z < 0}
Note that we can combine the valid equivalence classes, but we are not allowed to combine the invalid equivalence classes. The output domain consists of the text strings 'isosceles', 'scalene', 'equilateral' and 'right-angled'. Now, different values in the input domain map to different elements of the output domain to get the equivalence classes in Table 7.2. According to the equivalence partitioning method, we only need to choose one element from each of the classes above in order to test the triangle program.
Statement Coverage or Node Coverage: Every statement of the program should be exercised at least once.
Branch Coverage or Decision Coverage: Every possible alternative in a
branch or decision of the program should be exercised at least once. For if
statements, this means that the branch must be made to take on the values true
or false.
Decision/Condition Coverage: Each condition in a branch is made to evaluate
to both true and false and each branch is made to evaluate to both true and false.
Multiple condition coverage: All possible combinations of condition outcomes
within each branch should be exercised at least once.
Path coverage: Every execution path of the program should be exercised at
least once.
In this section, we will use the control flow graph to choose white box test cases
according to the criteria above. To motivate the selection of test cases, consider the
simple program given in Program 7.1.
Example 3:
#include <stdio.h>

int main(void)
{
    int x1, x2, x3;
    scanf("%d %d %d", &x1, &x2, &x3);
    if ((x1 > 1) && (x2 == 0))      /* branch B1: conditions C1 && C2 */
        x3 = x3 / x1;
    if ((x1 == 2) || (x3 > 1))      /* branch B2: conditions C3 || C4 */
        x3 = x3 + 1;
    while (x1 >= 2)                 /* branch B3: condition C5 */
        x1 = x1 - 2;
    printf("%d %d %d", x1, x2, x3);
    return 0;
}
To make the first branch true, we have the test input (2, 0, 3), which will make all of the branches true. We need a test input that will now make each one false. Again looking at all of the conditions, the test input (1, 1, 1) will make all of the branches false.
For any of the criteria involving condition coverage, we need to look at each of the five conditions in the program: C1 = (x1 > 1), C2 = (x2 == 0), C3 = (x1 == 2), C4 = (x3 > 1) and C5 = (x1 >= 2). The test input (1, 0, 3) will make C1 false, C2 true, C3 false, C4 true and C5 false.
Examples of sets of test inputs and the criteria that they meet are given in Table 7.3.
The set of test cases meeting the multiple condition criteria is given in Table 7.4. In
the table, we let the branches B1 = C1&&C2, B2 = C3||C4 and B3 = C5.
[Figure 7.1: Control flow graph of Program 7.1 — labelled nodes for the declarations and the scanf statement, the decision (x1 > 1) && (x2 == 0) with a True edge through x3 = x3/x1, the decision (x1 == 2) || (x3 > 1) with a True edge through x3 = x3 + 1, the loop condition x1 >= 2 with a True edge through x1 = x1 - 2, and an End node after the output.]
Table 7.3: Test cases for the various coverage criteria for the program 7.1
Table 7.4: Multiple condition coverage for the program in Figure 7.1

Test cases   C1 (x1 > 1)   C2 (x2 == 0)   B1   C3 (x1 == 2)   C4 (x3 > 1)   B2   C5/B3 (x1 >= 2)
(1,0,3)      F             T              F    F              T             T    F
(2,1,1)      T             F              F    T              F             T    T
(2,0,4)      T             T              T    T              T             T    T
(1,1,1)      F             F              F    F              F             F    F
(2,0,4)      T             T              T    T              T             T    T
(2,1,1)      T             F              F    T              F             T    T
(1,0,2)      F             T              F    F              T             T    F
(1,1,1)      F             F              F    F              F             F    F
Program 7.2: A program
In the above program, two control constructs are used, namely, while-loop and
if-then-else. A complete CFG for the program of Program 7.2 is given in Figure 4.6.
[Figure 4.6: Control flow graph for Program 7.2]
1. The results of the program were affected by the code change and the test suite
detects it. We assumed that the test suite is perfect, which means that it must
detect the change. If this happens, the mutant is called a killed mutant.
2. The results of the program are not changed and the test suite does not detect the
mutation. The mutant is called an equivalent mutant.
If we take the ratio of killed mutants to all the mutants that were created, we get a
number that is smaller than 1. This number gives an indication of the sensitivity of
program to the changes in code. In real life, we may not have a perfect program and
we may not have a perfect test suite. Hence, we can have one more scenario:
3. The results of the program are different, but the test suite does not detect it
because it does not have the right test case.
Consider the following Program 7.3:
Now, let's mutate the program. We can start with the following simple changes:
The mutation score is then computed as:
Mutation score = (# killed mutants / (# total mutants - # equivalent mutants)) × 100
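As a small worked example (the counts are assumed for illustration): if 50 mutants are generated, 5 of them turn out to be equivalent and the test suite kills 36 of the remaining mutants, the mutation score is 36 / (50 - 5) × 100 = 80%.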
7.4 DEBUGGING
Debugging occurs as a consequence of successful testing. Debugging refers to the
process of identifying the cause for defective behavior of a system and addressing that
problem. In less complex terms - fixing a bug. When a test case uncovers an error,
debugging is the process that results in the removal of the error. The debugging
process begins with the execution of a test case. The debugging process attempts to
match symptoms with causes, thereby leading to error correction. The following are two alternative outcomes of debugging:
1. The cause will be found and necessary action such as correction or removal will
be taken.
2. The cause will not be found.
Characteristics of bugs
1. The symptom and the cause may be geographically remote. That is, the
symptom may appear in one part of a program, while the cause may
actually be located at a site that is far removed. Highly coupled program
structures exacerbate this situation.
2. The symptom may disappear (temporarily) when another error is corrected.
3. The symptom may actually be caused by non errors (e.g., round-off
inaccuracies).
4. The symptom may be caused by human error that is not easily traced.
5. The symptom may be a result of timing problems, rather than processing
problems.
6. It may be difficult to accurately reproduce input conditions (e.g., a real-time
application in which input ordering is indeterminate).
7. The symptom may be intermittent. This is particularly common in embedded
systems that couple hardware and software inextricably.
8. The symptom may be due to causes that are distributed across a number of
tasks running on different processors.
Life Cycle of a Debugging Task
The following are various steps involved in debugging:
a) Defect Identification/Confirmation
A problem is identified in a system and a defect report created
Defect assigned to a software engineer
The engineer analyzes the defect report, performing the following actions:
What is the expected/desired behaviour of the system?
What is the actual behaviour?
Is this really a defect in the system?
Can the defect be reproduced? (While confirming a defect is often straightforward, there will be defects that exhibit intermittent, hard-to-reproduce behaviour.)
b) Defect Analysis
Assuming that the software engineer concludes that the defect is genuine, the focus
shifts to understanding the root cause of the problem. This is often the most
challenging step in any debugging task, particularly when the software engineer is
debugging complex software.
Many engineers debug by starting a debugging tool, generally a debugger and try to
understand the root cause of the problem by following the execution of the program
step-by-step. This approach may eventually yield success. However, in many
situations, it takes too much time, and in some cases is not feasible, due to the
complex nature of the program(s).
c) Defect Resolution
Once the root cause of a problem is identified, the defect can then be resolved by
making an appropriate change to the system, which fixes the root cause.
Debugging Approaches
Three categories for debugging approaches are:
Brute force
Backtracking
Cause elimination.
Brute force is probably the most popular despite being the least successful. We apply
brute force debugging methods when all else fails. Using a “let the computer find the
error” technique, memory dumps are taken, run-time traces are invoked, and the
program is loaded with WRITE statements. Backtracking is a common debugging
method that can be used successfully in small programs. Beginning at the site where a
symptom has been uncovered, the source code is traced backwards till the error is
found. In Cause elimination, a list of possible causes of an error is identified, and tests are conducted until each one is eliminated.
7.5 TESTING TOOLS
The following are different categories of tools that can be used for testing:
Data Acquisition: Tools that acquire data to be used during testing.
Static Measurement: Tools that analyse source code without executing test
cases.
Dynamic Measurement: Tools that analyse source code during execution.
Simulation: Tools that simulate functions of hardware or other externals.
Test Management: Tools that assist in planning, development and control of
testing.
Cross-Functional tools: Tools that cross the bounds of preceding categories.
The following are some of the examples of commercial software testing tools:
Rational Test Real Time Unit Testing
Kind of Tool
Rational Test RealTime's Unit Testing feature automates C, C++ software
component testing.
Organisation
IBM Rational Software
Software Description
Rational Test RealTime Unit Testing performs black-box/functional testing, i.e.,
verifies that all units behave according to their specifications without regard to how
that functionality is implemented. The Unit Testing feature has the flexibility to
naturally fit any development process by matching and automating developers' and
testers' work patterns, allowing them to focus on value-added tasks. Rational Test
RealTime is integrated with native development environments (Unix and Windows)
as well as with a large variety of cross-development environments.
Platforms
Rational Test RealTime is available for most development and target systems
including Windows and Unix.
AQtest
Kind of Tool
Automated support for functional, unit, and regression testing
Organisation
AutomatedQA Corp.
Software Description
AQtest automates and manages functional tests, unit tests and regression tests, for
applications written with VC++, VB, Delphi, C++Builder, Java or VS.NET. It also
supports white-box testing, down to private properties or methods. External tests can
be recorded or written in three scripting languages (VBScript, JScript, DelphiScript).
Using AQtest as an OLE server, unit-test drivers can also run it directly from
application code. AQtest automatically integrates AQtime when it is on the machine.
Entirely COM-based, AQtest is easily extended through plug-ins using the complete
IDL libraries supplied. Plug-ins currently support Win32 API calls, direct ADO
access, direct BDE access, etc.
Platforms
Windows 95, 98, NT, or 2000.
csUnit
Kind of Tool
“Complete Solution Unit Testing” for Microsoft .NET (freeware)
Organisation
csUnit.org
Software Description
csUnit is a unit testing framework for the Microsoft .NET Framework. It
targets test driven development using .NET languages such as C#, Visual
Basic .NET, and managed C++.
Platforms
Microsoft Windows
Sahi
http://sahi.sourceforge.net/
Software Description
Sahi is an automation and testing tool for web applications, with the facility to record
and playback scripts. Developed in Java and JavaScript, it uses simple JavaScript to
execute events on the browser. Features include in-browser controls, text based
scripts, Ant support for playback of suites of tests, and multi-threaded playback. It
supports HTTP and HTTPS. Sahi runs as a proxy server and the browser needs to use
the Sahi server as its proxy. Sahi then injects JavaScript so that it can access elements
in the webpage. This makes the tool independent of the website/web application.
Platforms
OS independent. Needs at least JDK1.4
7.6 SUMMARY
The importance of software testing and its impact on software is explained in this unit.
Software testing is a fundamental component of software development life cycle and
represents a review of specification, design and coding. The objective of testing is to
have the highest likelihood of finding most of the errors within a minimum amount of
time and minimal effort. A large number of test case design methods have been
developed that offer a systematic approach to testing to the developer.
Knowing the specified functions that the product has been designed to perform, tests
can be performed that show that each function is fully operational. A strategy for
software testing may be to move upwards along the spiral. Unit testing happens at the
vortex of the spiral and concentrates on each unit of the software as implemented by
the source code. Testing happens upwards along the spiral to integration testing,
where the focus is on design and production of the software architecture. Finally, we
perform system testing, where software and other system elements are tested together.
Debugging is not testing, but always happens as a consequence of testing. The debugging
process will have one of two outcomes:
1) The cause will be found, then corrected or removed, or
2) The cause will not be found. Regardless of the approach that is used, debugging
has one main aim: to determine and correct errors. In general, three kinds of
debugging approaches have been put forward: Brute force, Backtracking and
Cause elimination.
7.7 SOLUTIONS / ANSWERS
Check Your Progress 1
1) Cyclomatic Complexity is a software metric that provides a quantitative
measure of the logical complexity of a program. When it is used in the context
of the basis path testing method, the value computed for Cyclomatic complexity
defines the number of independent paths in the basis set of a program. It also
provides an upper bound for the number of tests that must be conducted to
ensure that all statements have been executed at least once.
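For reference, the standard way of computing Cyclomatic complexity from a control flow graph (a well-known formula, not derived in this unit) is V(G) = E - N + 2, where E is the number of edges and N is the number of nodes of the graph; equivalently, V(G) = P + 1, where P is the number of predicate (decision) nodes.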
1) The basic levels of testing are: unit testing, integration testing, system
testing and acceptance testing.
For unit testing, structural testing approach is best suited because the focus of testing
is on testing the code. In fact, structural testing is not very suitable for large programs.
It is used mostly at the unit testing level. The next level of testing is integration testing
and the goal is to test interfaces between modules. With integration testing, we move
slowly away from structural testing and towards functional testing. This testing
activity can be considered for testing the design. The next levels are system and
acceptance testing by which the entire software system is tested. These testing levels
focus on the external behavior of the system. The internal logic of the program is not
emphasized. Hence, mostly functional testing is performed at these levels.
2) The various steps involved in debugging are:
Defect Identification/Confirmation
Defect Analysis
Defect Resolution
7.8 FURTHER READINGS
3) An Integrated Approach to Software Engineering, Pankaj Jalote; Narosa Publishing House.
Reference websites
http://www.rspa.com
http://www.ieee.org
http://standards.ieee.org
http://www.ibm.com
http://www.opensourcetesting.org
UNIT 8 SOFTWARE CHANGE MANAGEMENT
Structure Page Nos.
8.0 Introduction 45
8.1 Objectives 45
8.2 Baselines 45
8.3 Version Control 48
8.4 Change Control 51
8.5 Auditing and Reporting 54
8.6 Summary 56
8.7 Solutions/Answers 56
8.8 Further Readings 56
8.0 INTRODUCTION
Software change management is an umbrella activity that aims at maintaining the
integrity of software products and items. Change is a fact of life but uncontrolled
change may lead to havoc and may affect the integrity of the base product. Software
development has become an increasingly complex and dynamic activity. Software
change management is a challenging task faced by modern project managers, especially in an environment where software development is spread across a wide geographic area, with a number of software developers working in a distributed setting. Enforcement of regulatory requirements and standards demands robust change management. The aim of change management is to facilitate justifiable changes in the
software product.
8.1 OBJECTIVES
8.2 BASELINES
Typical software items that may become baselines include:
System specification
Source code
Object code
Drawings
Software design
Design data
Database schema and file structure
Test plans and test cases
Product-specific documents
Project plan
Standards and procedures
Process descriptions
Figure: evolution of baselines, where approved changes to Baseline 1 produce Baseline 2,
and further approved changes to Baseline 2 produce Baseline 3.
The domain of the software change management process defines how changes are
controlled and managed.
The need for a formal process of change management is acutely felt in the current
scenario, where software is developed in a very complex distributed environment,
with many versions of the software existing at the same time and many developers
involved in the development process using different technologies. The ultimate
bottom line is to maintain the integrity of the software product while incorporating
changes.
Process of changes: As we have discussed, the baseline forms the reference for any
change. Whenever a change is identified, the baseline available in the project database
is copied by the change agent (the software developer) to his private area. Once the
modification is underway, the baseline is locked against any further modification,
which could otherwise lead to inconsistency. The records of all changes are tracked
and recorded in a status accounting file. After the changes are completed and have
gone through the change control procedure, the modified item becomes an approved
item for updating the original baseline in the project database.
Figure: changes recorded in the configuration status accounting file.
All the changes made during the process of modification are recorded in the
configuration status accounting file. It records all changes made to the previous
baseline B to reach the new baseline B’. The status accounting file is used for
configuration authentication, which assures that the new baseline B’ has all the
required planned and approved changes incorporated. This is also known as auditing.
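As a rough sketch only (the file layout, function names and fields below are assumptions,
not a prescribed format), the configuration status accounting file can be modelled as an
append-only log that the audit step later checks against the planned and approved changes:

import csv
from datetime import datetime, timezone

# Hypothetical status accounting file: one row per change applied while moving
# from baseline B to baseline B'. Field names are illustrative only.
FIELDS = ["item", "change_no", "developer", "description", "approved", "timestamp"]

def record_change(path, item, change_no, developer, description, approved):
    """Append one change record to the status accounting file."""
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if f.tell() == 0:                      # new file: write the header first
            writer.writeheader()
        writer.writerow({
            "item": item, "change_no": change_no, "developer": developer,
            "description": description, "approved": approved,
            "timestamp": datetime.now(timezone.utc).isoformat(),
        })

def audit(path, planned_change_nos):
    """Configuration authentication: verify that every planned, approved change
    appears in the status accounting file (a much simplified audit)."""
    with open(path, newline="") as f:
        recorded = {row["change_no"] for row in csv.DictReader(f)
                    if row["approved"] == "yes"}
    return set(planned_change_nos) <= recorded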
Check Your Progress 1
1) ______________ serves as a reference for any change.
8.3 VERSION CONTROL
A configuration item is typically identified by attributes such as:
Project identifier
Configuration item (or simply item, e.g., SRS, program, data model)
Change number or version number
The identification of the configuration item must be able to provide the relationship
between items whenever such a relationship exists. The identification scheme should
uniquely identify the configuration item throughout the development life cycle, so that
all changes are traceable to the previous configuration. An evolutionary graph
graphically reflects the history of all such changes. The aim of these controls is to
facilitate a return to any previous state of the configuration item in case of an
unresolved issue in the current unapproved version.
Figure: an evolutionary graph of versions, showing versions 1.3 and 1.4 along one line
of development and versions 2.0 and 2.1 along a parallel branch.
Software engineers use this version control mechanism to track the source code,
documentation and other configuration items. In practice, many tools are available to
store and number these configuration items automatically. As software is developed
and deployed, it is common for multiple versions of the same software to be deployed
or maintained for various reasons. Many of these versions are used by developers
working privately to update the software.
It is also sometimes desirable to develop two parallel versions of the same product,
where one version is used to fix a bug in the earlier version and the other is used to
develop new functionality and features. Traditionally, software developers maintained
multiple versions of the same software and named them uniquely with a number.
However, this numbering system has certain disadvantages; for example, it gives no
indication that nearly identical versions of the same software may exist.
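The numbering scheme described above can be sketched as follows (illustrative only;
real version control tools assign and track revision numbers automatically and record
far more information about each version):

# Naive helpers in the spirit of the 1.0 -> 1.1 -> ... numbering scheme.
def next_revision(version: str) -> str:
    """Next revision on the same line of development, e.g. 1.1 -> 1.2."""
    major, minor = version.split(".")
    return f"{major}.{int(minor) + 1}"

def new_branch(version: str) -> str:
    """Start a parallel line of development, e.g. 1.4 -> 2.0."""
    major, _ = version.split(".")
    return f"{int(major) + 1}.0"

print(next_revision("1.0"))   # 1.1
print(new_branch("1.4"))      # 2.0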
The project database maintains copies of all the different versions of the software and
other items. It is quite possible that, without each other’s knowledge, two developers
may copy the same version of an item to their private areas and start working on it.
Updating the central project database after completing the changes would then lead to
overwriting of each other’s work. Most version control systems provide a solution to
this kind of problem by locking the version against further modification.
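A minimal sketch of this locking idea follows (the class and method names are
assumptions, not the interface of any of the commercial tools mentioned below):

# Toy project database with check-out locking; a real tool does far more.
class ProjectDatabase:
    def __init__(self):
        self.versions = {}   # (item, version) -> content
        self.locks = {}      # (item, version) -> developer holding the lock

    def register(self, item, version, content):
        """Register the initial (baseline) version of an item."""
        self.versions[(item, version)] = content

    def check_out(self, item, version, developer):
        """Copy a version to a developer's private area and lock it."""
        key = (item, version)
        if key in self.locks:
            raise RuntimeError(f"{item} {version} is locked by {self.locks[key]}")
        self.locks[key] = developer
        return self.versions[key]            # copy handed to the private area

    def check_in(self, item, version, developer, new_version, content):
        """Release the lock and store the modified item as a new version."""
        key = (item, version)
        if self.locks.get(key) != developer:
            raise RuntimeError("check-in refused: not checked out by this developer")
        del self.locks[key]
        self.versions[(item, new_version)] = content

If a second developer tries to check out the same version before the first one has
checked in, the call is refused; this is essentially the locking answer given later for
Check Your Progress 2.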
Commercial tools are available for version control which perform one or more of the
tasks discussed above, such as storing and numbering configuration items and locking
versions during modification. There are many such tools, like Rational ClearCase and
Microsoft Visual SourceSafe, that help with version control.
Let us consider the following simple HTML file in a web-based application
(welcome.htm):
<html>
<head>
<title> A simple HTML Page</title>
</head>
<body>
<h1> Welcome to HTML Concepts</h1>
</body>
</html>
Once the code is tested and finalized, the first step is to register the program in the
project database. The revision is numbered and the file is marked read-only to prevent
any further undesirable changes. This forms the building block of source control. Each
time the file is modified, a new version is created and a new revision number is given.
The first version of the file is numbered as version 1.0. Any further modification is
possible only in the developer’s private area, by copying the file from the project
database. The process of copying the configuration object (the baseline version) is
called check-out.
Figure: check-out of the baseline version from the project database to the developer’s
private area, and check-in of the modified version back to the project database.
The version (revision) control process starts with registering the initial versions of the
file. This essentially enforces a check on changes, ensuring that the file cannot be
changed unless it is checked out from the project database.
Suppose the developer checks out welcome.htm and adds the following lines (a link
to the webmaster) before the closing </body> tag:
<hr>
<a href="mailto:webmaster@xyz.com"> webmaster</a>
<hr>
Then the developer checks in the revised version of the file to the project database
with a new version (revision) number, version 1.1, i.e., the first revision, along with
the details of the modification done.
Suppose a further modification is required for text-based browsers, since graphics will
not be supported by a text-based browser. Then version 1.1 will be selected from the
project database. This shows the necessity of storing all versions of the file in the
project database.
Check Your Progress 2
3) How do version control systems ensure that two software developers do not
attempt the same change at the same time?
………………………………………………………………………………………
……………………………………………………………………………………...
8.4 CHANGE CONTROL
The adoption and evolution of changes have to be carried out in a disciplined manner.
In a large software environment, where changes are made by a number of software
developers, uncontrolled and un-coordinated changes may lead to havoc, grossly
diverting the product from the basic features and requirements of the system. For this
reason, a formal change control process is developed.
A change request marks the beginning of any change control process. The change
request is evaluated for merits and demerits, and the potential side effects are
assessed. The overall impact on the system is assessed by a technical group consisting
of the developer and the project manager. A change control report is generated by the
technical team, listing the extent of the changes and the potential side effects. A
designated team called the change control authority makes the final decision, based on
the change control report, whether to accept or reject the change request.
A change order, called an engineering change order, is generated after the approval of
the change request by the change control authority. The engineering change order
forms the starting point for effecting a change in the component. If the change
requested is not approved by the change control authority, then the decision is
conveyed to the user or the originator of the change request.
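Purely as an illustration (the states and names below are assumptions, not a prescribed
notation), the flow from change request to engineering change order can be sketched as
a small state machine:

from enum import Enum, auto

# Hypothetical states of a change request in the change control process.
class ChangeState(Enum):
    REQUESTED = auto()   # change request raised by a user or developer
    EVALUATED = auto()   # change control report produced by the technical team
    APPROVED = auto()    # accepted by the change control authority
    REJECTED = auto()    # rejected; decision conveyed to the originator
    ORDERED = auto()     # engineering change order issued to the developers

def process_change_request(report_acceptable: bool) -> ChangeState:
    """Walk a change request through evaluation, decision and ordering."""
    state = ChangeState.REQUESTED
    state = ChangeState.EVALUATED          # technical team assesses the impact
    if report_acceptable:                  # decision of the change control authority
        state = ChangeState.APPROVED
        state = ChangeState.ORDERED        # engineering change order generated
    else:
        state = ChangeState.REJECTED
    return state

print(process_change_request(True))    # ChangeState.ORDERED
print(process_change_request(False))   # ChangeState.REJECTED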
Once the change order is received by the developers, the configuration items that
require changes are identified. The baseline versions of these configuration items are
copied from the project database, as discussed earlier.
The changes are then incorporated in the copied version of the item. The changes are
subject to review (called an audit) by a designated team before testing and other
quality assurance activities are carried out. Once the changes are approved, a new
version is generated for distribution.
The change control mechanisms are applied to the items which have become
baselines. For other items, which are yet to attain the status of a baseline, informal
change control may be applied: the developer may make whatever changes he feels
are appropriate to satisfy the technical requirements, as long as they do not have an
impact on the overall system.
The role of the change control authority is vital for any item which has become a
baseline item. All changes to a baseline item must follow a formal change control
process.
As discussed, the change request, the change report and the engineering change order
(change order) are generated as part of the change control activity within the software
change management process. These documents are often represented in printed or
electronic forms. The typical content of these documents is given below:
1.2 Requester and contact details: The name and contact details of the person
requesting the change.
2.2 Justification for the change: Detailed justification for the request.
2.3 Priority: The priority of the change, depending on its critical effect on system
functionalities.
Software Change Report Format
1.0 Change report identification
1.2 Requester: The name and contact details of the person requesting the change.
1.3 Evaluator: The name of the person or team who evaluated the change request.
2.3.2 Technical risks: The risks associated with making the change are described.
4.0 Recommendation
4.2 Internal priority: How important this change is in the light of the business
operation, and the priority assigned by the evaluator.
2.2.1 Technical work and tools required: A description of the work and tools
required to accomplish the change.
2.3 Technical risks: The risks associated with making the change are described in
this section.
A description of the testing and review approach required to ensure that the change
has been made without any undesirable side effects.
Description of the test plans and new tests that are required.
The version control mechanism helps the software tester to track the previous
versions of the product, so that testing can concentrate on the changes made since the
last approved version. It helps the developer and the tester to work simultaneously on
multiple versions of the same product and still avoid any conflict or overlap of
activity.
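Purely as an illustration using the hypothetical welcome.htm versions above, the lines
that changed between two stored versions can be listed so that testing can concentrate
on them:

import difflib

# Two stored versions of the same configuration item (illustrative content).
v1_0 = ["<html>", "<head>", "<title> A simple HTML Page</title>", "</head>",
        "<body>", "<h1> Welcome to HTML Concepts</h1>", "</body>", "</html>"]
v1_1 = v1_0[:6] + ["<hr>",
                   '<a href="mailto:webmaster@xyz.com"> webmaster</a>',
                   "<hr>"] + v1_0[6:]

# unified_diff reports only the lines that differ between version 1.0 and 1.1,
# which is exactly what the tester needs to concentrate on.
for line in difflib.unified_diff(v1_0, v1_1, fromfile="welcome.htm 1.0",
                                 tofile="welcome.htm 1.1", lineterm=""):
    print(line)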
The software change management process is used by managers to keep control over
changes to the product, thereby tracking and monitoring every change. The existence
of a formal process reassures the management and provides a professional approach to
controlling software changes. It also gives the customer confidence in the quality of
the product.
8.5 AUDITING AND REPORTING
Auditing and reporting help the change management process to ensure that the
changes have been properly implemented and that they have no undesired impact on
other components. A formal technical review and a software configuration audit help
in ensuring that the changes have been implemented properly during the change
process. A configuration audit typically asks questions such as:
Have the changes identified and reported in the change order been incorporated?
Has the procedure for identifying, recording and reporting changes been followed?
Reporting: Status reporting is also called status accounting. It records all the changes
that lead to each new version of the item. Status reporting is the bookkeeping of each
release. The process involves tracking the changes in each version that lead to the
latest (new) version.
8.6 SUMMARY
8.7 SOLUTIONS/ANSWERS
Check Your Progress 1
1) Baseline.
2) The domain of the software change management process defines how to control
and manage changes. The ultimate aim is to maintain the integrity of the software
product while incorporating changes.
Check Your Progress 2
1) Microsoft Visual SourceSafe
2) Yes
3) The version control system locks the configuration item once it is copied from the
project database for modification by a developer.