SE&PM Module 4 and 5
MODULE 4
The reason for these project shortcomings is often the management of projects. The National Audit Office in the UK, for example, identified 'lack of skills and proven approach to project management and risk management' among the factors causing project failure.
The dictionary definitions put a clear emphasis on a project being a planned activity. This emphasis on planning assumes that we can determine how to carry out a task before we start it.
There is a hazy boundary between the non-routine project and the routine job. The
first time you do a routine task it will be like a project. On the other hand, a project to
develop a system similar to previous ones that you have developed will have a large
element of the routine.
Characteristics that distinguish software projects from other kinds of project include the following:
Invisibility When a physical artefact such as a bridge is constructed the progress can
actually be seen. With software, progress is not immediately visible. Software project
management can be seen as the process of making the invisible visible.
Complexity Per dollar, pound or euro spent, software products contain more complexity
than other engineered artefacts.
Conformity The ‘traditional’ engineer usually works with physical systems and
materials like cement and steel. These physical systems have complexity, but are
governed by consistent physical laws. Software developers have to conform to the
requirements of human clients. It is not just that individuals can be inconsistent.
Organizations, because of lapses in collective memory, in internal communication or
in effective decision making, can exhibit remarkable ‘organizational stupidity’.
Flexibility That software is easy to change is seen as a strength. However, where the software system interfaces with a physical or organizational system, it is expected that the software will change to accommodate the other components rather than vice versa. Thus software systems are particularly subject to change.
In-house projects are where the users and the developers of new software work for
the same organization. However, increasingly organizations contract out ICT
development to outside developers. Here, the client organization will often appoint a
‘project manager’ to supervise the contract who will delegate many technically
oriented decisions to the contractors.
1.5.1 Some of the common features of contract management and technical project management are as follows:
1. Stakeholders are involved in both.
2. Teams from both the clients and suppliers are involved in accomplishing the project.
3. They generally evolve out of needs and requirements from both the clients and suppliers.
4. They are interdependent on each other.
5. Standard protocols are maintained by both the clients and suppliers.
Thus, the project manager will not worry about estimating the effort needed to write
individual software components as long as the overall project is within budget and on
time. On the supplier side, there will need to be project managers who deal with the
more technical issues. This book leans towards the concerns of these ‘technical’
project managers.
A software project is not only concerned with the actual writing of software. In fact,
where a software application is bought ‘off the shelf’, there may be no software
writing as such, but this is still fundamentally a software project because so many of
the other activities associated with software will still be present.
Usually there are three successive processes that bring a new system into being – see
Figure 1.2.
1. The feasibility study assesses whether a project is worth starting – that it has a
valid business case. Information is gathered about the requirements of the proposed
application. Requirements elicitation can, at least initially, be complex and difficult.
The stakeholders may know the aims they wish to
pursue, but not be sure about the means of achievement. The developmental and
operational costs, and the value of the benefits of the new system, will also have to be
estimated. With a large system, the feasibility study could be a project in its own right with
its own plan. The study could be part of a strategic planning exercise examining a range of
potential software developments. Sometimes an organization assesses a programme of
development made up of a number of projects.
Planning If the feasibility study indicates that the prospective project appears viable, the
project planning can start. For larger projects, we would not do all our detailed planning at
the beginning.
We create an outline plan for the whole project and a detailed one for the first stage. Because
we will have more detailed and accurate project information after the earlier stages of the
project have been completed, planning of the later stages is left to nearer their start.
Project execution The project can now be executed. The execution of a project often
contains design and implementation sub-phases.
Students new to project planning often find that the boundary between design and
planning can be hazy. Design is making decisions about the form of the products to be
created. This could relate to the external appearance of the software, that is, the user
interface, or the internal architecture. The plan details the activities to be carried out to
create these products.
Planning and design can be confused because at the most detailed level, planning decisions are influenced by design decisions. Thus a software product with five major components is likely to need five corresponding sets of activities in the plan to create them.
Figure 1.3 shows the typical sequence of software development activities recommended in
the international standard ISO 12207. Some activities are concerned with the system while
others relate to software. The development of software will be only one part of a project.
Software could be developed, for example, for a project which also requires the installation
of an ICT infrastructure, the design of user jobs and user training.
Architecture design The components of the new system that fulfil each requirement have to be
identified. Existing components may be able to satisfy some requirements. In other cases, a
new component will have to be made. These components are not only software: they could
be new hardware or work processes. Although software developers are primarily
concerned with software components, it is very rare that these can be developed in
isolation. They will, for example, have to take account of existing legacy systems with which
they will interoperate.
The design of the system architecture is thus an input to the software requirements. A second
architecture design process then takes place that maps the software requirements to
software components.
Detailed design Each software component is made up of a number of software units that
can be separately coded and tested. The detailed design of these units is carried out
separately.
Code and test refers to writing code for each software unit. Initial testing to debug
individual software units would be carried out at this stage.
Integration The components are tested together to see if they meet the overall
requirements. Integration could involve combining different software components,
or combining and testing the software element of the system in conjunction with the
hardware platforms and user interactions.
Installation This is the process of making the new system operational. It would
include activities such as setting up standing data (for example, the details for
employees in a payroll system), setting system parameters, installing the software
onto the hardware platforms and user training.
Acceptance support This is the resolving of problems with the newly installed
system, including the correction of any errors, and implementing agreed extensions
and improvements. Software maintenance can be seen as a series of minor software
projects. In many environments, most software development is in fact maintenance.
A plan for an activity must be based on some idea of a method of work. For
example, if you were asked to test some software, you may know nothing about
the software to be tested, but you could assume that you would need to:
● devise and write test cases that will check that each requirement has been
satisfied;
● create test scripts and expected results for each test case;
● compare the actual results and the expected results and identify discrepancies (see the sketch below).
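As an illustration of the last step, the comparison of actual and expected results can be automated. The following minimal Python sketch uses invented test-case names and values purely for illustration; it is not tied to any particular testing tool.

# A minimal sketch of comparing actual test results against expected results.
# The test cases and values are hypothetical.
test_results = {
    # test case id: (expected result, actual result)
    "TC-01 valid login": ("welcome page", "welcome page"),
    "TC-02 invalid password": ("error message", "welcome page"),
    "TC-03 order total, 3 items": (59.97, 59.97),
}

def find_discrepancies(results):
    """Return the test cases whose actual result differs from the expected one."""
    return {tc: pair for tc, pair in results.items() if pair[0] != pair[1]}

for tc, (expected, actual) in find_discrepancies(test_results).items():
    print(f"{tc}: expected {expected!r}, got {actual!r}")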
While a method relates to a type of activity in general, a plan takes that method
(and perhaps others) and converts it to real activities, identifying for each
activity:
The output from one method might be the input to another. Methods and techniques are often grouped into methodologies such as object-oriented design.
Projects may differ because of the different technical products to be created. Thus we
need to identify the characteristics of a project which could affect the way in which it
should be planned and managed. Other factors are discussed below.
In workplaces there are systems that staff have to use if they want to do something,
such as recording a sale. However, use of a system is increasingly voluntary, as in the
case of computer games. Here it is difficult to elicit precise requirements from
potential users as we could with a business system. What the game will do will thus
depend much on the informed ingenuity of the developers, along with techniques
such as market surveys, focus groups and prototype evaluation.
A traditional distinction has been between information systems which enable staff to
carry out office processes and embedded systems which control machines. A stock
control system would be an information system. An embedded, or process control,
system might control the air conditioning equipment in a building. Some systems may
have elements of both where, for example, the stock control system also controls an
automated warehouse.
All types of software projects can be classified into software product development projects and software services projects. These two broad classes of software projects can be further classified into subclasses as shown in Figure 1.4 below.
Example: BANCS from TCS and FINACLE from Infosys in the banking domain.
Outsourced projects
While developing a large project, sometimes, it makes good commercial sense for a
company to outsource some parts of its work to other companies. There can be several
reasons behind such a decision.
For example, a company may consider outsourcing as a good option, if it feels that it
does not have sufficient expertise to develop some specific parts of the product or if
it determines that some parts can be developed cost-effectively by another company.
Since an outsourced project is a small part of some larger project, it is usually small in size and needs to be completed within a few months.
Indian software companies excel in executing outsourced software projects and have
earned a fine reputation in this field all over the world. Of late, the Indian companies
have slowly begun to focus on product development as well.
The type of development work being handled by a company can have an impact on its
profitability. For example, a company that has developed a generic software product
usually gets an uninterrupted stream of revenue over several years. However,
outsourced projects fetch only one time revenue to any company.
A project might be to create a product, the details of which have been specified by the client.
The client has the responsibility for justifying the product.
On the other hand, the project requirement might be to meet certain objectives which could
be met in a number of ways. An organization might have a problem and ask a specialist to
recommend a solution.
This is useful where the technical work is being done by an external group and the
user needs are unclear at the outset. The external group can produce a preliminary
design at a fixed fee. If the design is acceptable the developers can then quote a price
for the second, implementation, stage based on an agreed requirement.
3. Free Software Projects – Free software is software that can be freely used, modified, and redistributed with only one constraint: any redistributed version of the software must be distributed with the original terms of free use, modification, and distribution. In other words, the user has the liberty to copy, run, download, distribute and modify the software. Thus, free software grants these liberties without charge, and users can adapt the programs to their needs.
4. Software Hosted on CodePlex – CodePlex is Microsoft's open-source project hosting website. CodePlex is a site for managing open-source software projects; most of those projects are permissively licensed, are commonly written in C#, and can serve as building blocks for one's own open-source project, for example advanced GUI control libraries. The great thing about permissively licensed building blocks is that one does not have to worry about the project being pulled into the GPL if one decides to close the source. Because CodePlex is based on Team Foundation Server, it also provides enterprise bug tracking and build management for open-source projects, which is far better than the services provided by SourceForge.
A project charter explains the project in clear, concise wording for high-level management. A project charter summarizes the entirety of a project to help teams rapidly comprehend its goals, tasks, timelines, and stakeholders.
The document provides key information about a project and provides approval to start the project. Therefore, it serves as a formal announcement that a new approved project is about to commence.
The project charter also contains the appointment of the project manager, the person who is overall responsible for the project.
The project charter is a final official document that is prepared in accordance with the mission and vision of the company, along with the deadlines and the milestones.
The project charter clearly defines the project, its attributes, the end results, and the project authorities who will be handling the project.
The project charter, along with the project plan, provides strategic plans for the implementation of the project. It is also the green signal for the project manager to commence the project.
In a nutshell, the elements of the project charter are as follows:
Reasons for the project
Objectives and constraints of the project
The main stakeholders
Risks identified
Benefits of the project
General overview of the budget
Benefits of project charter
Some of the benefits of a project charter are as follows:
It improves the customer relationship
It improves project management methods
It expands and enhances regional and headquarters communications
It supports gaining project funding
It recognizes senior management roles and authorities
It allows development aimed at achieving industry best practices.
1.10 Stakeholders
These are people who have a stake or interest in the project. Their early identification
is important as you need to set up adequate communication channels with them.
Stakeholders can be categorized as:
● Internal to the project team This means that they will be under the direct
managerial control of the project leader.
● External to the project team but within the same organization For example, the
project leader might need the assistance of the users to carry out systems
testing. Here the commitment of the people involved has to be negotiated.
● External to both the project team and the organization External stakeholders
may be customers (or users) who will benefit from the system that the
project implements. They may be contractors who will carry out work for
the project. The relationship here is usually based on a contract.
Different types of stakeholder may have different objectives and one of the jobs of the
project leader is to recognize these different interests and to be able to reconcile them. For
example, end-users may be concerned with the ease of use of the new application, while
their managers may be more focused on staff savings.
The project leader therefore needs to be a good communicator and negotiator. Boehm and
Ross proposed a ‘Theory W’ of software project management where the manager
concentrates on creating situations where all parties benefit from a project and therefore
have an interest in its success. (The ‘W’ stands for ‘win–win’.)
Among all these stakeholders are those who actually own the project. They control the
financing of the project. They also set the objectives of the project.
The objectives should define what the project team must achieve for project success.
Although different stakeholders have different motivations, the project objectives identify
the shared intentions for the project.
Objectives focus on the desired outcomes of the project rather than the tasks within it – they
are the ‘post-conditions’ of the project.
Informally the objectives could be written as a set of statements following the opening words 'the project will be a success if. . .'. Thus one statement in a set of objectives might be 'customers can order our products online' rather than 'to build an e-commerce website'. There
is often more than one way to meet an objective and the more possible routes to success
the better.
This authority is often a project steering committee (or project board or project
management board) with overall responsibility for setting, monitoring and modifying
objectives. The project manager runs the project on a day-to-day basis, but regularly
reports to the steering committee.
An effective objective for an individual must be something that is within the control of that
individual. An objective might be that the software application produced must pay for itself
by reducing staff costs. As an overall business objective this might be reasonable.
We can say that in order to achieve the objective we must achieve certain goals or sub-
objectives first. These are steps on the way to achieving an objective, just as goals scored
in a football match are steps towards the objective of winning the match. Informally this
can be expressed as a set of statements following the words ‘To reach objective. . ., the
following must be in place. . .’.
Specific Effective objectives are concrete and well defined. Vague aspirations such as ‘to
improve customer relations’ are unsatisfactory. Objectives should be defined so that it is
obvious to all whether the project has been successful.
Measurable Ideally there should be measures of effectiveness which tell us how successful
the project has been. For example, ‘to reduce customer complaints’ would be more
satisfactory as an objective than ‘to improve customer relations’. The measure can, in some
cases, be the answer to a simple yes/no question, e.g. 'Did we install the new software by 1
June?’
Achievable It must be within the power of the individual or group to achieve the objective.
Relevant The objective must be relevant to the true purpose of the project.
Measures of effectiveness
Measures of effectiveness provide practical methods of checking that an objective has been
met. ‘Mean time between failures’ (mtbf) might be used to measure reliability. This is a
performance measurement and, as such, can only be taken once the system is operational.
Project managers want to get some idea of the performance of the completed system as it
is being constructed. They will therefore
seek predictive measures. For example, a large number of errors found during code
inspections might indicate potential problems with reliability later.
Most projects need to have a justification or business case: the effort and expense of
pushing the project through must be seen to be worthwhile in terms of the benefits that
will eventually be felt.
A cost–benefit analysis will often be part of the project’s feasibility study. This will itemize
and quantify the project’s costs and benefits. The benefits will be affected by the
completion date: the sooner the project is completed, the sooner the benefits can be
experienced.
The quantification of benefits will often require the formulation of a business model which
explains how the new application can generate the claimed benefits.
For example:
● The development costs are not allowed to rise to a level which threatens to exceed the value of the benefits;
● The features of the system are not reduced to a level where the expected benefits cannot be realized;
● The delivery date is not delayed so that there is an unacceptable loss of benefits.
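The business-case arithmetic behind these conditions can be shown with a small sketch. The figures below are invented, and a real cost-benefit analysis would normally also discount future cash flows; this is only a sketch of the principle that costs must stay below the value of the benefits.

# Hypothetical business-case check: the project is justified only while
# estimated costs stay below the estimated value of the benefits.
development_cost = 450_000     # estimated development cost
annual_benefit = 120_000       # estimated yearly value of benefits
benefit_years = 5              # period over which benefits are counted

net_benefit = annual_benefit * benefit_years - development_cost
print(f"net benefit: {net_benefit}")                    # 150000
print("business case holds" if net_benefit > 0 else "business case fails")

A delayed delivery date shortens the period over which benefits accrue: with one year of benefits lost, the net benefit above drops to 30 000, showing how a delay can erode an otherwise sound business case.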
The project plan should be designed to ensure project success by preserving the
business case for the project. However, every non-trivial project will have problems,
and at what stage do we say that a project is actually a failure? Because different
stakeholders have different interests, some stakeholders in a project might see it as a
success while others do not.
The project objectives are the targets that the project team is expected to achieve. In the case of software projects, they can usually be summarized as delivering: the agreed functionality, to the required level of quality, on time, within budget.
A project could meet these targets but the application, once delivered could fail to meet the
business case. A computer game could be delivered on time and within budget, but might
then not sell. A commercial website used for online sales could be created successfully, but
customers might not use it to buy products, because they could buy the goods more cheaply
elsewhere.
We have seen that in business terms it can generally be said that a project is a success if the
value of benefits exceeds the costs. We have also seen that while project managers have
considerable control over development costs, the value of the benefits of the project
deliverables is dependent on external factors such as the number of customers.
Project objectives still have some bearing on eventual business success. A delay in
completion reduces the amount of time during which benefits can be generated and
diminishes the value of the project.
A project can be a success on delivery but then be a business failure. On the other hand, a project could be late and over budget, but its deliverables could still, over time, generate benefits that outweigh the initial expenditure.
The possible gap between project and business concerns can be reduced by having a
broader view of projects that includes business issues. For example, the project
management of an e-commerce website implementation could plan activities such as
market surveys, competitor analysis, focus groups, prototyping, and evaluation by typical
potential users – all designed to reduce business risks.
Because the focus of project management is, not unnaturally, on the immediate project, it may not be seen that the project is actually one of a sequence. Later projects benefit from technical learning on the earlier ones: the learning increases costs on the earlier projects, but later projects benefit as the learnt technologies can be deployed more quickly, cheaply and accurately. This expertise is often accompanied by additional software assets, for example reusable code. Where software development is outsourced, there may be immediate savings, but these longer-term benefits of increased expertise will be lost.
Astute managers may assess which areas of technical expertise it would be beneficial to
develop.
Customer relationships can also be built up over a number of projects. If a client has
trust in a supplier who has done satisfactory work in the past, they are more likely to
use that company again, particularly if the new requirement builds on functionality
already delivered. It is much more expensive to acquire new clients than it is to retain
existing ones.
We have explored some of the special characteristics of software. We now look at the
‘management’ aspect of software project management. It has been suggested that
management involves the following activities:
Much of the project manager's time is spent on only three of the eight identified activities, viz., project planning, monitoring, and control. The time period during which these activities are carried out is shown in Figure 1.4.
The figure shows that project management is carried out over three well-defined stages or processes, irrespective of the methodology used. In the project initiation stage, an initial plan is made.
Finally, the project is closed. In the project closing stage, all activities are logically
completed and all contracts are formally closed.
Once the project execution starts, monitoring and control activities are taken up
to ensure that the project execution proceeds as planned. The monitoring activity
involves monitoring the progress of the project. Control activities are initiated to
minimize any significant variation in the plan.
Several best practices have been proposed for software project planning activities. One of these is Step Wise planning, which is based on the popular PRINCE2 (PRojects IN Controlled Environments) method.
● Effort How much effort would be necessary for completing the project?
The effectiveness of all activities such as scheduling and staffing, which are
planned at a later stage, depends on the accuracy with which the above three
project parameters have been estimated.
● Miscellaneous Plans This includes making several other plans such as quality
assurance plan, configuration management plan, etc.
Project monitoring and control activities are undertaken after the initiation
of development activities. The aim of project monitoring and control activities is
to ensure that the software development proceeds as planned.
While carrying out project monitoring and control activities, a project manager may sometimes find it necessary to change the plan to cope with specific situations, and to make the plan more accurate as more project data becomes available.
At the start of a project, the project manager does not have complete knowledge
about the details of the project. As the project progresses through different
development phases, the manager’s information base gradually improves.
By taking these developments into account, the project manager can plan
subsequent activities more accurately with increasing levels of confidence.
Figure 1.4 shows this aspect as iterations between monitoring and control, and
the plan revision activities.
In Figure 1.5 the ‘real world’ is shown as being rather formless. Especially in the
case of large undertakings, there will be a lot going on about which management
should be aware.
This will involve the local managers in data collection. Bare details, such as
‘location X has processed 2000 documents’, will not be very useful to higher
management: data processing will be needed to transform this raw data into
useful information. This might be in such forms as ‘percentage of records
processed’, ‘average documents processed per day per person’ and ‘estimated
completion date’.
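As a hypothetical illustration, the raw count reported by a location can be turned into these kinds of management information with a few lines of code; the figures below are invented.

# Turning raw progress data into management information: percentage of
# records processed, average per day, and an estimated completion date.
from datetime import date, timedelta

records_total = 10_000   # records the location must transfer (invented)
records_done = 2_000     # 'location X has processed 2000 documents'
days_elapsed = 10        # working days since the transfer started

percent_complete = 100 * records_done / records_total
rate_per_day = records_done / days_elapsed
days_remaining = (records_total - records_done) / rate_per_day
print(f"{percent_complete:.0f}% of records processed")          # 20%
print(f"average {rate_per_day:.0f} documents per day")          # 200
print(f"estimated completion: {date.today() + timedelta(days_remaining)}")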
In effect they are comparing actual performance with one aspect of the overall
project objectives. They might find that one or two branches will fail to complete
the transfer of details in time. They would then need to consider what to do (this
is represented in Figure 1.5 by the box Making decisions/ plans).
One possibility would be to move staff temporarily from one branch to another.
If this is done, there is always the danger that while the completion date for the
one branch is pulled back to before the overall target date, the date for the branch
from which staff are being moved is pushed forward beyond that date.
The project manager would need to calculate carefully what the impact would be
in moving staff from particular branches. This is modelling the consequences of
a potential solution. Several different proposals could be modelled in this way
before one was chosen for implementation.
Having implemented the decision, the situation needs to be kept under review. For instance, the next time that progress is reported, a branch to which staff have been transferred could still be behind in transferring details. This might be because the manual records are incomplete and another department, for whom the project has a low priority, has to be involved in providing the missing information. In this case, transferring extra staff to do data inputting will not have accelerated data transfer.
It can be seen that a project plan is dynamic and will need constant adjustment
during the execution of the project.
A good plan provides a foundation for a good project, but is nothing without
intelligent execution. The original plan will not be set in stone but will be
modified to take account of changing circumstances.
Software development life cycle (SDLC) denotes the stages through which a software is developed. In Figure 1.7 an SDLC is shown in terms of the set of activities that are undertaken during a software development project, their grouping into different phases and their sequencing.
During the software development cycle, starting from its conception, the developers carry out several processes (or development methodologies) till the software is fully developed and deployed at the client site.
In contrast, the project management life cycle typically starts well before the software development activities start and continues for the entire duration of the SDLC shown in Figure 1.7.
During the software development life cycle, the developers carry out several types of development processes.
The activities carried out by the developers during the software development life cycle, as well as those of the management life cycle, are grouped into a number of phases.
The sets of phases and their sequencing in the software development life cycle and the project management life cycle are shown in Figure 1.8 below.
The different phases in the software development life cycle are requirements analysis, design, development, testing and delivery.
The different phases of the project management life cycle are shown in Figure 1.8. In
the following, we discuss the main activities that are carried out in each phase.
Project Initiation
As shown in Figure 1.8, the software project management life cycle starts with project
initiation.
The project initiation phase usually starts with project concept development. During
concept development the different characteristics of the software to be developed are
thoroughly understood. The different aspects of the project that are investigated and
understood include: the scope of the project, project constraints, the cost that would
be incurred and the benefits that would accrue.
For example, an organization might feel a need for a software to automate some of
its activities, possibly for more efficient operation. Based on the feasibility study, the
business case is developed.
Once the top management agrees to the business case, the project manager is
appointed, the project charter is written, and finally the project team is formed. This
sets the ground for the manager to start the project planning phase.
During the project initiation phase it is crucial for the champions of the project to
develop a thorough understanding of the important characteristics of the project.
In his W5HH principle, Barry Boehm summarized the questions that need to be asked
and answered in order to have an understanding of these project characteristics.
W5HH Principle: Boehm suggested that during project initiation, the project
champions should have comprehensive answers to a set of key questions pertaining
to the project. The answers to these questions would lead to the definition of key
project characteristics.
Project bidding:
Once an organization's top management is convinced by the business case, the project
charter is developed. For some categories of projects, it may be necessary to have a
formal bidding process to select a suitable vendor based on some cost-performance
criteria. If the project involves automating some activities of an organization, the
organization may either decide to develop it in-house or may get various software
vendors to bid for the project.
The different types of bidding techniques, and their implications and applicability, are discussed below.
The RFQ issuing organization can select a vendor based on the price quoted as well
as the competency of the vendor. In government organizations, the term request for
tender (RFT) is usually used in place of RFQ. RFT is similar to RFQ; however, in RFT
the bidder needs to deposit a tender fee in order to participate in the bidding process.
Request for proposal (RFP) Many times it so happens that an organization has a reasonable understanding of the problem to be solved; however, it does not have a good grasp of the solution aspects. That is, the organization may not have sufficient knowledge of the possible solutions.
In this case, the organization may solicit solution proposals from vendors. The
vendors may submit a few alternative solutions and the approximate costs for each
solution. In order to develop a better understanding, the requesting organization may
ask the vendors to explain or demonstrate their solutions.
Based on the RFP process, the requesting organization can form a clear idea of the
project solutions required, based on which it can form a statement of work (SOW) for
requesting RFQ from the vendors.
Request for Information (RFI) An organization soliciting bids may publish an RFI.
Based on the vendor response to the RFI, the organization can assess the
competencies of the vendors and shortlist the vendors who can bid for the work.
However, it must be noted that vendor selection is seldom done based on RFI, but the
RFI response from the vendors may be used in conjunction with RFP and RFQ
responses for vendor selection.
Project planning
An important outcome of the project initiation phase is the project charter. During the
project planning phase, the project manager carries out several processes and creates the
following documents:
Project plan: This document identifies the project tasks and a schedule for the project tasks that assigns project resources and time frames to the tasks.
Resource plan: It lists the resources, manpower and equipment that would be required to execute the project.
Financial plan: It documents the plan for manpower, equipment and other costs.
Quality plan: Plans for quality targets and control are included in this document.
Risk plan: This document lists the identification of the potential risks, their prioritization and a plan for the actions that would be taken to contain the different risks.
Project execution
In this phase the tasks are executed as per the project plan developed during the planning phase. A series of management processes are undertaken during execution. Monitoring and control processes are executed to ensure that the tasks are executed as per plan, and corrective actions are initiated whenever any deviations from the plan are noticed.
The project plan may have to be revised periodically to accommodate any changes to
the project plan that may arise on account of change requests, risks and various
events that occur during the project execution.
Quality of the deliverables is ensured through execution of proper processes. Once all
the deliverables are produced and accepted by the customer, the project execution
phase completes and the project closure phase starts.
Project closure
Project closure involves completing the release of all the required deliverables to the
customer along with the necessary documentation.
Subsequently, all the project resources are released and supply agreements with the
vendors are terminated and all the pending payments are completed.
Over the last two decades, the basic approach taken by the software industry to develop
software has undergone a radical change. Hardly any software is being developed from
scratch any more. Software development projects are increasingly being based on either
tailoring some existing product or reusing certain pre-built libraries.
In either case, two important goals of recent life cycle models are maximization of code reuse and compression of project durations. Other goals include facilitating and accommodating client feedback and customer participation in project development work, and incremental delivery of the product with evolving functionalities.
Change requests from customers are encouraged, rather than circumvented. Clients on the
other hand, are demanding further reductions in product delivery times and costs. These
recent developments have changed project management practices in many significant
ways.
Planning Incremental Delivery: A few decades ago, projects were much simpler and therefore more predictable than present-day projects. In those days, projects were planned in sufficient detail much before the actual project execution started. After the project initiation, monitoring and control activities were carried out to ensure that the project execution proceeded as per plan.
Now, projects are required to be completed over a much shorter duration, and rapid
application development and deployment are considered key strategies. The traditional
long-term planning has given way to adaptive short-term planning.
Instead of making a long-term project completion plan, the project manager now plans all
incremental deliveries with evolving functionalities. This type of project management is
often called extreme project management.
Quality Management: Of late, customer awareness about product quality has increased significantly. Tasks associated with quality management have become an important responsibility of the project manager. The key responsibilities of a project manager now include assessment of project progress and tracking the quality of all intermediate artifacts.
Change Management: Earlier, when the requirements were signed off by the customer, any changes to the requirements were rarely entertained. Customer suggestions are now actively solicited and incorporated throughout the development process.
To facilitate customer feedback, incremental delivery models are popularly being used.
Product development is being carried out through a series of product versions
implementing increasingly greater functionalities.
Also customer feedback is solicited on each version for incorporation. This has made it
necessary for an organization to keep track of the various versions and revisions through
which the product develops.
A basic premise of these modern development methodologies is that at the start of a project
the customers are often unable to fully visualize their exact needs and are only able to
determine their actual requirements after they start using the software.
From this view point, modern software development practices advocate delivery of
software in increments as and when the increments are completed by the development
team, and actively soliciting change requests from the customer as they use the increments
of the software delivered to them.
A few customer representatives are included in the development team to foster close everyday interactions with the customers. Contrast this with the practice followed in older
development methodologies, where the requirements had to be identified upfront and
these were then 'signed off' by the customer and 'frozen' before the development could
start.
Change requests from the customer after the start of the project were discouraged.
Consequently, at present in most projects, the requirements change frequently during the
development cycle. It has, therefore, become necessary to properly manage the
requirements, so that as and when there is any change in requirements, the latest and up-
to-date requirements become available to all.
Release Management: Starting with an initial release, releases are made each time the code changes. There are
several reasons as to why the code needs to change. These reasons include functionality
enhancements, bug fixes and improved execution speed. Further, modern development
processes such as the agile development processes advocate frequent and regular releases
of the software to be made to the customer during the software development.
Starting with the release of the basic or core functionalities of the software, more complete
functionalities are made available to the customers every couple of weeks. In this context,
effective release management has become important.
Risk Management: Every project is susceptible to a host of risks that could usually be attributed to factors
such as technology, personnel and customer. Unless proper risk management is practised,
the progress of the project may get adversely affected. Risk management involves
identification of risks, assessment of the impacts of various risks, prioritization of the risks
and preparation of risk-containment plans.
Scope Management: Once a project gets underway, many requirement change requests
usually arise. Some of these can be attributed to the customers and the others to the
development team.
Modern development practices encourage the customer to come up with change requests.
While all essential changes must be carried out, the superfluous and ornamental changes
must be scrupulously avoided.
While accepting change requests, it must be remembered that the three critical project
parameters: scope, schedule and project cost are interdependent and are very intricately
related. If the scope is allowed to change extensively, while strictly maintaining the
schedule and cost, then the quality of the work would be the major casualty.
For every scope change request, the project managers examine whether the change
request is really necessary and whether the budget and time schedule would permit it.
Often, the scope change requests are superfluous.
For example, an over-enthusiastic project team member may suggest adding features that are not really required by the customer. Such developer-originated embellishments are termed gold plating.
The customer may also initiate scope change requests that are more ornamental or at best
nonessential. These serve only to jeopardize the success of the project, while not adding
any perceptible value to the delivered software. Such avoidable scope change requests
originated by the customer are termed as scope creep. To ensure the success of the project,
the project manager needs to guard against both gold plating and scope creep.
----------------------******************------------------------****************-------------------
Module 5
Software quality
5.1 Introduction
While quality is generally agreed to be 'a good thing', in practice what is meant by the 'quality' of a system can be vague. We need to define precisely what qualities we require of a system.
Rather than concentrating on the quality of the final system, a potential customer for
software might check that the suppliers were using the best
methods.
Quality will be of concern at all stages of project planning and execution, but will be of particular interest at the following points in the Step Wise framework (Figure 5.1).
Step 2: Identify project infrastructure: Within this step, an activity identifies installation standards and procedures. Some of these will almost certainly be about quality.
Step 3: Analyze project characteristics: In activity 5.2 ('Analyze other project characteristics – including quality-based ones') the application to be implemented is examined to see if it has any special quality requirements.
If, for example, it is safety critical then a range of activities could be added, such as n-
version development where a number of teams develop versions of the same software
which are then run in parallel with the outputs being cross-checked for discrepancies.
Step 4: Identify the products and activities of the project: It is at this point that the entry, exit and process requirements are identified for each activity.
Step 8: Review and publicize plan: At this stage the overall quality aspects of the project
plan are reviewed.
We would expect quality to be a concern of all producers of goods and services. However,
the special characteristics of software create special demands.
● Increasing criticality of software: The final customer or user is naturally anxious about the general quality of software, especially its reliability. This is increasingly so as organizations rely more on their computer systems and software is used in more safety-critical applications, for example to control aircraft.
● The intangibility of software: This can make it difficult to know whether a project task was completed satisfactorily. Task outcomes can be made tangible by demanding that the developers produce 'deliverables' that can be examined for quality.
Definitions:
Software quality refers to how well a software application conforms to a set of functional and
non-functional requirements, as well as how well it satisfies the needs or expectations of its
users. It encompasses various attributes such as reliability, efficiency, maintainability,
usability, security, and scalability.
Some qualities of a software product reflect the external view of software held by users, as
in the case of usability. These external qualities have to be mapped to internal factors of
which the developers would be aware. It could be argued, for example, that well-
structured code is likely to have fewer errors and thus improve reliability.
Defining quality is not enough. If we are to judge whether a system meets our requirements
we need to be able to measure its qualities.
A good measure must relate the number of units to the maximum possible. The maximum number of faults in a program, for example, is related to the size of the program, so a measure of faults per thousand lines of code is more helpful than the total number of faults in a program.
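As a quick arithmetic sketch (with invented figures), faults per thousand lines of code (KLOC) is computed as follows:

# Defect density relates the fault count to program size, so that programs
# of different sizes can be compared. The figures are invented.
faults_found = 45
lines_of_code = 30_000
faults_per_kloc = faults_found / (lines_of_code / 1000)
print(f"{faults_per_kloc:.1f} faults per KLOC")   # 1.5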
Trying to find measures for a particular quality helps to clarify and communicate what that quality really is. What is being asked is, in effect, 'how do we know when we have been successful?'
The measures may be direct, where we can measure the quality directly, or indirect, where
the thing being measured is not the quality itself but an indicator that the quality is present.
For example, the number of enquiries by users received by a help desk about how one
operates a particular software application might be an indirect measurement of its
usability.
When project managers identify quality measurements, they effectively set targets for project team members. So care has to be taken that an improvement in the measured quality is always meaningful.
For example, the number of errors found in program inspections could be counted, on the
grounds that the more thorough the inspection process, the more errors will be discovered.
This count could be improved by allowing more errors to go through to the inspection
stage rather than eradicating them earlier - which is not quite the point.
Test: the practical test of the extent to which the attribute quality exists
Minimally acceptable: the worst value which might be acceptable if other characteristics
compensated for it, and below which the product would have to be rejected out of hand
Target range: the range of values within which it is planned the quality measurement
value should lie.
● Availability: the percentage of a particular time interval that a system is usable
● Mean time between failures: the total service time divided by the number of failures (see the sketch below)
● Failure on demand: the probability that a system will not be available at the time required, or the probability that a transaction will fail
● Support activity: the number of fault reports that are generated and processed
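The sketch below computes availability and mean time between failures from a hypothetical service log, directly following the definitions above; the figures are invented.

# Availability and mtbf computed from a hypothetical 30-day service log.
total_interval_hours = 720    # the particular time interval (30 days)
downtime_hours = 36           # time the system was not usable
number_of_failures = 4

service_hours = total_interval_hours - downtime_hours
availability = 100 * service_hours / total_interval_hours   # a percentage
mtbf = service_hours / number_of_failures                   # service time / failures
print(f"availability: {availability:.1f}%")   # 95.0%
print(f"mtbf: {mtbf:.0f} hours")              # 171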
Associated with reliability is maintainability, which is how quickly a fault, once detected,
can be corrected. A key component of this is changeability, which is the ease with which
the software can be modified.
Before an amendment can be made, the fault has to be diagnosed. Maintainability can therefore be seen as changeability plus a new quality, analysability, which is the ease with which causes of failure can be identified.
1. The user will be concerned with the elapsed time between a fault being detected
and it being corrected, while the software development managers will be concerned
about the effort involved.
2. The need to be able to quantitatively measure the quality of a software is often felt. For example, one may want to set quantitative quality requirements for a software, or to verify whether a software meets the quality requirements set for it.
Unfortunately, it is hard to measure the quality of a software directly. It can, however, be expressed in terms of several attributes of the software that can be directly measured.
The quality models give a characterization (often hierarchical) of software quality in terms
of a set of characteristics of the software. The bottom level of the hierarchy can be directly
measured, thereby, enabling a quantitative assessment of the quality of the software.
There are several well-established quality models, including McCall's, Dromey's and
Boehm's. Since there was no standardization among the large number of quality models
that became available, the ISO 9126 model of quality was developed.
Garvin reasoned that sometimes users have subjective judgment of the quality of a
program (perceived quality) that must be taken into account to judge its quality.
McCall's model:
McCall defined the quality of a software in terms of three broad parameters: its operational
characteristics, how easy it is to fix defects and how easy it is to port it to different
platforms.
These three high-level quality attributes are defined based on the following eleven
attributes of the software:
1. Correctness: The extent to which a software product satisfies its specifications.
2. Reliability: The probability of the software product working satisfactorily over a
given duration.
Dromey's model
Dromey proposed that software product quality depends on four major high-level
properties of the software: Correctness, internal characteristics, contextual
characteristics and certain descriptive properties. Each of these high-level properties
of a software product, in turn, depends on several lower-level quality attributes of the
software. Dromey's hierarchical quality model is shown in Figure 5.2.
Over the years, various lists of software quality characteristics have been put forward, such
as those of James McCall and of Barry Boehm.
The term 'maintainability' has been used, for example, to refer to the ease with which an
error can be located and corrected in a piece of software, and also in a wider sense to
include the ease of making any changes. For some, 'robustness' has meant the software's
tolerance of incorrect input, while for others it has meant the ability to change program
code without introducing errors.
The ISO 9126 standard was first introduced in 1991 to tackle the question of the definition
of software quality. The original 13-page document was designed as a foundation upon
which further, more detailed, standards could be built. The ISO 9126 standards documents
are now very lengthy. Partly this is because people with differing motivations might be
interested in software quality,
namely:
Currently, in the UK, the main ISO 9126 standard is known as BS ISO/IEC 9126-1:2001. This is supplemented by some 'technical reports' (TRs), published in 2003, which are provisional standards. At the time of writing, a new standard in this area, ISO 25000, is being developed.
• Acquirers who are obtaining software from external suppliers
• Developers who are building a software product
• Independent evaluators who are assessing the quality of a software product, not for
themselves but for a community of users - for example, those who might use a particular
type of software tool as part of their professional practice.
ISO 9126 has separate documents to cater for these three sets of needs. Despite the size of
this set of documentation, it relates only to the definition of software quality attributes. A
separate standard,
ISO 14598, describes the procedures that should be carried out when assessing the degree
to which a software product conforms to the selected ISO 9126 quality characteristics. This
might seem unnecessary, but it is argued that ISO 14598 could be used to carry out an
assessment using a different set of quality characteristics from those in ISO 9126 if
circumstances required it.
The difference between internal and external quality attributes has already been noted.
ISO 9126 also introduces another type of quality - quality in use - for which the following elements have been identified:
• Effectiveness: the ability to achieve user goals with accuracy and completeness
• Productivity: avoiding the excessive use of resources, such as staff effort, in achieving
user goals.
• Safety: within reasonable levels of risk of harm to people and other entities such as business, software, property and the environment
• Satisfaction: smiling users
'Users' in this context includes not just those who operate the system containing the
software, but also those who maintain and enhance the software. The idea of quality in use
underlines how the required quality of the software is an attribute not just of the software
but also of the context of use.
For instance:
In the IOE scenario, suppose the maintenance job reporting procedure varies considerably,
depending on the type of equipment being serviced, because different inputs are needed
to calculate the cost to IOE. Say that 95% of jobs currently involve maintaining
photocopiers and 5% concern maintenance of printers.
If the software is written for this application, then despite good testing, some errors might
still get into the operational system. As these are reported and corrected, the software
would become more 'mature' as faults become rarer.
If there were a rapid switch so that more printer maintenance jobs were being processed,
there could be an increase in reported faults as coding bugs in previously less heavily used
parts of the software code for printer maintenance were flushed out by the larger number
of printer maintenance transactions. Thus, changes to software use involve changes to
quality requirements.
ISO 9126 suggests sub-characteristics for each of the primary characteristics. They are
useful as they clarify what is meant by each of the main characteristics.
Typically these could be auditing requirements. Since the original 1999 draft, a sub-characteristic called 'compliance' has been added to all six ISO external characteristics. In each case, this refers to any specific standards that might apply to the particular quality attribute.
'Interoperability' is a good illustration of the efforts of ISO 9126 to clarify terminology. 'Interoperability' refers to the ability of the software to interact with other systems.
The framers of ISO 9126 have chosen this word rather than 'compatibility' because the
latter causes confusion with the characteristic referred to by ISO 9126 as 'replaceability'
(see below).
'Maturity' refers to the frequency of failure due to faults in a software product, the
implication being that the more the software has been used, the more faults will have been
uncovered and removed.
Note that 'recoverability' has been clearly distinguished from 'security' which describes
the control of access to a system.
Note how 'learnability' is distinguished from 'operability'. A software tool could be easy to learn but time-consuming to use because, say, it uses a large number of nested menus. This might be fine for a package used intermittently, but not where the system is used for many hours each day.
'Analysability' is the ease with which the cause of a failure can be determined.
'Changeability' is the quality that others call 'flexibility': the latter name is a better one as
'changeability' has a different connotation in plain English - it might imply that the
suppliers of the software are always changing it!
'Stability', on the other hand, does not refer to software never changing: it means that there
is a low risk of a modification to the software having unexpected effects.
'Portability compliance' relates to those standards that have a bearing on portability. The
use of a standard programming language common to many software/hardware
environments would be an example of this. 'Replaceability' refers to the factors that give
'upwards compatibility' between old software components and the new ones.
'Downwards' compatibility is not implied by the definition.
A new version of a word processing package might read the documents produced by
previous versions and thus be able to replace them, but previous versions might not be
able to read all documents created by the new version.
'Coexistence' refers to the ability of the software to share resources with other software
components; unlike 'interoperability', no direct data passing is necessarily involved.
ISO 9126 provides guidelines for the use of the quality characteristics. Variation in the
importance of different quality characteristics depending on the type of product is
stressed.
Once the requirements for the software product have been established, the following steps
are suggested:
1. Judge the importance of each quality characteristic for the application. Thus reliability will be of particular concern with safety-critical systems, while efficiency will be important for some real-time systems.
2. Select the external quality measurements within the ISO 9126 framework relevant to the qualities prioritized above. Thus for reliability, mean time between failures would be an important measurement, while for efficiency, and more specifically 'time behaviour', response time would be an obvious measurement.
3. Map measurements onto ratings that reflect user satisfaction. For response time, for example, the mappings might be as in Table 5.1.
4. Identify the relevant internal measurements and the intermediate products in which they appear. This would only be important where software was being developed, rather than existing software being evaluated.
For example, the efficiency characteristics of time behaviour and resource utilization
could be enhanced by exploiting the particular characteristics of the operating system and
hardware environments within which the software will perform. This, however, would
probably be at the expense of portability.
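As an illustration of step 3, the sketch below maps a measured response time onto a 0-5 user satisfaction rating. The threshold values are hypothetical stand-ins for the mappings of Table 5.1, which is not reproduced here.

```python
# A minimal sketch of mapping a measured response time (in seconds) onto a
# 0-5 user satisfaction rating. The thresholds are invented for illustration.
def satisfaction_rating(response_time_s: float) -> int:
    thresholds = [(1.0, 5), (2.0, 4), (4.0, 3), (8.0, 2), (16.0, 1)]
    for limit, rating in thresholds:
        if response_time_s <= limit:
            return rating
    return 0  # anything slower than 16 seconds is wholly unsatisfactory

print(satisfaction_rating(1.5))  # -> 4
```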
It was noted above that quality assessment could be carried out for a number of different
reasons: to assist software development, acquisition or independent assessment.
During the development of a software product, the assessment would be driven by the
need to focus the minds of the developers on key quality requirements. The aim would be
to identify possible weaknesses early on and there would be no need for an overall quality
rating.
One approach recognizes some mandatory quality rating levels which a product must
reach or be rejected, regardless of how good it is otherwise. Other characteristics might be
desirable but not essential.
For these a user satisfaction rating could be allocated in the range, say, 0-5. This could be
based on having an objective measurement of some function and then relating different
measurement values to different levels of user satisfaction - see Table 5.2 above.
Along with the rating for satisfaction, a rating in the range 1-5, say, could be assigned to reflect how important each quality characteristic is. The score for each quality could be given due weight by multiplying it by its importance weighting. These weighted scores can then be summed to obtain an overall score for the product. The scores for various products are then put in order of preference.
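A small sketch of this weighted scoring scheme is given below. The satisfaction ratings and importance weightings are invented for illustration; in practice they would come from the evaluation exercise described above.

```python
# Weighted scoring of candidate products: each satisfaction rating (0-5) is
# multiplied by the importance weighting (1-5) of its quality characteristic,
# and the weighted scores are summed to give an overall score per product.
importance = {"reliability": 5, "usability": 3, "efficiency": 2}

products = {
    "Product A": {"reliability": 4, "usability": 2, "efficiency": 3},
    "Product B": {"reliability": 3, "usability": 5, "efficiency": 4},
}

def overall_score(ratings):
    return sum(ratings[q] * importance[q] for q in importance)

# List the products in order of preference (highest overall score first)
for name, ratings in sorted(products.items(),
                            key=lambda item: -overall_score(item[1])):
    print(name, overall_score(ratings))
```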
Finally, a quality assessment can be made on behalf of a user community as a whole. For example, a professional body might assess software tools that support the working practices of its members. Unlike selection by an individual user or purchaser, this is an assessment on behalf of a whole group of users. It is clear that the result of such an exercise would vary considerably depending on the weightings given to each software characteristic, and different users could have different requirements. Caution would be needed here.
The users assess the quality of a software product based on its external attributes, whereas
during development, the developers assess the product's quality based on various internal
attributes. We can also say that during development, the developers can ensure the quality
of a software product based on a measurement of the relevant internal attributes.
The internal attributes may measure either some aspects of the product (called product metrics) or of the development process (called process metrics).
Let us understand the basic differences between product and process metrics.
Product metrics help measure the characteristics of a product being developed. A few
examples of product metrics and the specific product characteristics that they measure are
the following: the LOC and function point metrics are used to measure size, the PM (person-
month) metric is used to measure the effort required to develop a product, and the time
required to develop the product is measured in months.
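The following sketch ties these product metrics together with illustrative figures; the numbers are invented, and a derived productivity figure is included for completeness.

```python
# Relating the product metrics mentioned above: size in LOC, effort in
# person-months (PM), and development time in months. Values are illustrative.
size_loc = 12_000        # measured size of the product
effort_pm = 24.0         # total effort booked to the project
duration_months = 8.0    # elapsed development time

productivity = size_loc / effort_pm         # LOC per person-month
avg_team_size = effort_pm / duration_months

print(f"Productivity: {productivity:.0f} LOC/PM, "
      f"average team size: {avg_team_size:.1f} people")
```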
Errors can enter the process at any stage. They can be caused either by defects in a process,
as when software developers make mistakes in the logic of their software, or by
information not passing clearly and accurately between development stages.
Errors not removed at early stages become more expensive to correct at later stages.
Each development step that passes before the error is found increases the amount of
rework needed. An error in the specification found in testing will mean rework at all the
stages between specification and testing. Each successive step of development is also more
detailed and less able to absorb change.
Note that Extreme Programming advocates suggest that the extra effort needed to amend
software at later stages can be exaggerated and is, in any case, often justified as adding
value to the software.
• Entry requirements, which have to be in place before an activity can start. An example would be that a comprehensive set of test data and expected results be prepared and approved before program testing can commence.
These requirements may be laid out in installation standards, or a Software Quality Plan
may be drawn up for the specific project if it is a major one.
Note : The concept of the Internet of Everything originated at Cisco, which defines IoE as 'the intelligent connection of people, process, data and things'. By contrast, in the Internet of Things, communications are between machines only.
BS EN ISO 9001:2000
At IOE, a decision might be made to use an outside contractor to produce the annual
maintenance contracts subsystem. A natural concern would be the standard of the
contractor's deliverables.
Quality control would involve the rigorous testing of all the software produced by the
contractor, insisting on rework where defects are found. This would be very time-
consuming. An alternative approach would focus on quality assurance.
In this case IOE would check that the contractors themselves were carrying out effective
quality control. A key element of this would be ensuring that the contractor had the right
quality management system in place. Various national and international standards bodies,
including the British Standards Institution (BSI), have engaged in the creation of
standards for quality management systems.
The British Standard is now called BS EN ISO 9001:2000, which is identical to the
international standard, ISO 9001:2000.
Standards such as the ISO 9000 series try to ensure that a monitoring and control system
to check quality is in place. They are concerned with the certification of the development
process, not of the end-product as in the case of crash helmets and electrical appliances
with their familiar CE marks. Standards in the ISO 9000 series relate to quality systems in
general and not just those in software development.
ISO 9000 describes the fundamental features of a Quality Management System (QMS)
and its terminology.
ISO 9001 describes how a QMS can be applied to the creation of products and the provision
of services.
There has been some controversy over the value of these standards. Stephen Halliday, writing in The Observer, had misgivings that these standards are taken by many customers to imply that the final product is of a certified standard although, as Halliday says, 'It has nothing to do with the quality of the product going out of the gate. You set down your own specifications and just have to maintain them, however low they may be.'
Obtaining certification can be expensive and time-consuming which can put smaller, but
still well-run, businesses at a disadvantage. Finally, there has been a concern that a
preoccupation with certification might distract attention from the real problems of
producing quality products.
Putting aside these reservations, let us examine how the standard works. First, we identify
those things to be the subject of quality requirements. We then put a system in place which
checks that the requirements are being fulfilled and corrective action taken when
necessary.
These principles are applied through cycles which involve the following activities:
Documentation of objectives -procedures (in the form of a quality manual), plans, and
records relating to the actual operation of processes. The documentation must be subject
to a change control system that ensures that it is current. Essentially one needs to be able
to demonstrate to an outsider that the QMS exists and is actually adhered to.
Management responsibility - the organization needs to show that the QMS and the
processes that produce goods and services conforming to the quality objectives are
actively and properly managed.
● Planning
● Determination and review of customer requirements
● Effective communications between the customer and supplier
● Design and development being subject to planning, control and review
● Requirements and other information used in design being adequately and clearly recorded
● Design outcomes being verified, validated and documented in a way that provides sufficient information for those who have to use the designs
● Changes to the designs being properly controlled
● Adequate measures to specify and evaluate the quality of purchased components
A historical perspective
Before the 1950s, the primary means of realizing quality products was by undertaking
extensive testing of the finished products. The emphasis of the quality paradigms later
shifted from product assurance (extensive testing of the finished product) to process
assurance (ensuring that a good quality process is used for product development).
In this context, it needs to be emphasized that a basic assumption made by all modern quality paradigms is that if an organization's processes are good and are followed rigorously, then the products developed using them would certainly be of good quality.
Therefore, all modern quality assurance techniques focus on providing sufficient guidance
for recognizing, defining, analysing, and improving the process.
A good documented process enables setting up of a good quality system. However, to reach
the next quality level, it is necessary to improve the process whenever any shortcomings
in it are noticed. It is also necessary to incorporate into the development process any new
tools or techniques that may become available. This forms the essential idea behind Total
Quality Management (TQM).
In a nutshell, TQM advocates that the process followed by an organization must
continuously be improved through process measurements. Continuous process
improvement is achieved through process redesign. A term related to TQM is Business
Process Reengineering (BPR). BPR aims at reengineering the way business is carried out
in an organization.
Note : Capability Maturity Model (CMM) was developed by the Software Engineering
Institute (SEI) at Carnegie Mellon University in 1987. It is not a software process model.
It is a framework that is used to analyze the approach and techniques followed by any
organization to develop software products. It also provides guidelines to enhance
further the maturity of the process used to develop those software products.
The United States Department of Defence (US DoD) is one of the largest buyers of
software products in the world. It has often faced difficulties dealing with the quality of
performance of vendors, to whom it assigned contracts. The department had to live with
recurring problems of delivery of low quality products, late delivery, and cost escalations.
DoD worked with the Software Engineering Institute (SEI) of the Carnegie Mellon
University to develop CMM. Originally, the objective of CMM was to assist DoD in
developing an effective software acquisition method by predicting the likely contractor
performance through an evaluation of their development practices.
Definition:
CMM is a reference model for appraising a software development organization into one of
five process maturity levels. The maturity level of an organization is a ranking of the quality
of the development process used by the organization. This information can be used to predict
the most likely outcome of a project that the organization undertakes.
It should be remembered that SEI CMM can be used in two different ways, viz., capability
evaluation and process assessment.
Capability evaluation and software process assessment differ in motivation, objective, and the final use of the result. Capability evaluation essentially concerns assessing the software process capability of an organization. Capability evaluation is administered by the contract awarding organization, typically to help select a suitable vendor. On the other hand, process assessment is used by an organization with the objective of improving its own process capability. Thus, the result of the latter type of assessment is purely for internal use by a company.
In process assessment, the quality level is assessed by a team of assessors coming into an
organization and interviewing the key staff about their practices, using a standard
questionnaire to capture information. It needs to be remembered that in this case, a key
objective is not just to assess, but to recommend specific actions to bring the organization
up to a higher process maturity level.
The different levels of SEI CMM have been designed so that it is easy for an organization to
slowly ramp up.
Level 1: Initial At this level, each developer feels free to follow any process that he or she may like. Due to the chaotic development process practised, when a developer leaves the organization, the new incumbent usually faces great difficulty in understanding the process that was followed for the portion of the work that has been completed.
Consequently, time pressure builds up towards the product delivery time. To cope with the time pressure, many short cuts are tried out, leading to low quality products.
Though project failures and project completion delays are commonplace in these level 1 organizations, it is possible that some projects get successfully completed. An analysis of any successful completion of a project would, however, reveal the heroic efforts put in by some members of the project team.
Level 2: Repeatable Organizations at this level usually practise some basic project
management practices such as planning and tracking cost and schedule. Further, these
organizations make use of configuration management tools to keep the deliverable items
under configuration control.
Level 3: Defined At this level, the processes for both management and development
activities are defined and documented. There is a common organization-wide
understanding of activities, roles, and responsibilities. At this level, the organization builds
up the capabilities of its employees through periodic training programs. Also, systematic
reviews are practised to achieve phase containment of errors.
Level 4: Managed At this level, quantitative quality goals are set for the products, and process and product metrics are collected so that both the product and the development process can be measured and controlled.
Level 5: Optimizing Organizations operating at this level not only collect process and
product metrics, but analyze them to identify scopes for improving and optimizing the
various development and management activities. In other words, these organizations
strive for continuous process improvement.
As an example of a process optimization that may be made, consider that from an analysis
of the process measurement results, it is observed that the code reviews are not very
effective and a large number of errors are detected only during the unit testing. In this case,
the review process would be fine-tuned to make it more effective.
In a level 5 organization, the lessons learned from specific projects are incorporated into the process. Continuous process improvement is achieved both by careful analysis of the process measurement results and by assimilation of innovative ideas and technologies.
Except for level 1, each maturity level is characterized by several Key Process Areas (KPAs). The KPAs of a level indicate the areas that an organization at the lower maturity level needs to focus on to reach this level. The KPAs for the different process maturity levels are shown in Table 5.4. Note that level 1 has no KPA associated with it, since by default all organizations are at level 1.
KPAs provide a way for an organization to gradually improve its quality over several stages. In other words, at each stage of process maturity, KPAs identify the key areas on which an organization needs to focus to take it to the next level of maturity. Each stage has been carefully designed such that one stage enhances the capability already built up.
For example, trying to implement a defined process (level 3) before a repeatable process
(level 2) would be counterproductive as it becomes difficult to follow the defined process
due to schedule and budget pressures. In other words, trying to focus on some higher level
KPAs without achieving the lower level KPAs would be counterproductive.
CMMI is the successor of the Capability Maturity Model (CMM). In 2002, CMMI Version 1.1 was released. Version 1.2 followed in 2006. The genesis of CMMI is the following. After CMM was first released, it was adopted and used in many domains other than software development, such as human resource management (HRM).
CMMs were developed for disciplines such as systems engineering (SE-CMM), people management (PCMM), software acquisition (SA-CMM), and others. Although many organizations found these models to be useful, they faced difficulties arising from overlap and inconsistencies between the models, as well as problems in integrating them.
CMMI was developed to integrate these models into a single framework. As a result, the terminology used is generic, and even the word 'software' does not appear in the CMMI definition documents. However, CMMI has much in common with CMM, and also describes the five distinct levels of process maturity of CMM.
ISO/IEC 15504 is a standard for process assessment that shares many concepts with CMMI.
The two standards should be compatible. Like CMMI the standard is designed to provide
guidance on the assessment of software development processes. To do this there must be
some benchmark or process reference model which represents the ideal development life
cycle against which the actual processes can be compared.
When assessors are judging the degree to which a process attribute is being fulfilled, they allocate one of the following scores: N (not achieved), P (partially achieved), L (largely achieved) or F (fully achieved).
The CMMI standard has now grown to over 500 pages. Without getting bogged down in
detail, this section explores how the general approach might usefully be employed. To do
this we will take a scenario from industry.
UVW is a company that builds machine tool equipment containing sophisticated control
software. This equipment also produces log files of fault and other performance data in
electronic format. UVW produces software that can read these log files and produce
analysis reports and execute queries.
Both the control and analysis software are produced and maintained by the Software Engineering department. Within this department there are separate teams who deal with the software for different types of equipment.
Lisa is a Software Team Leader in the Software Engineering department, with a team of six systems designers reporting to her.
The group is responsible for new control systems and the maintenance of existing systems for a particular product line. The dividing line between new development and maintenance is sometimes blurred, as a new control system often makes use of existing software components which are modified to create the new software.
A separate Systems Testing Group tests software for new control systems, but not fault corrections and adaptive maintenance of released systems.
A project for a new control system is controlled by a Project Engineer with overall
responsibility for managing both the hardware and software sides of the project.
The Project Engineer is not primarily a software specialist and would make heavy demands on the Software Team Leader, such as Lisa, in an advisory capacity. Lisa may, as a Software Team Leader, work for a number of different Project Engineers on different projects.
A new control system starts with the Project Engineer writing a software requirement document which is reviewed by a Software Team Leader, who will then agree to the document, usually after some amendment. A copy of the requirements
document will pass to the Systems Testing Group so that they can create system test cases
and a systems test environment.
Lisa, if she were the designated Software Team Leader, would then write an Architecture
Design document mapping the requirements to actual software components. These would
be allocated to Work Packages carried out by individual members of Lisa's team.
UVW teams get the software quickly written and uploaded onto the newly developed
hardware platform for initial debugging. The hardware and software engineers will then
invariably have to alter the requirement and consequently the software as they find
inconsistencies, faults and missing functions.
The Systems Testing Group should be notified of these changes, but this can be patchy.
Once the system seems to be satisfactory to the developers, it is released to the Systems
Testing Group for final testing before shipping to customers.
Lisa's work problems mainly relate to late deliveries of software by her group because:
(i) The Head of Software Engineering and the Project Leaders may not liaise properly, leading to the over-commitment of resources to both new systems and maintenance jobs at the same time
(ii) The initial testing of the prototype often leads to major new requirements being
identified.
(iii) There is no proper control over change requests - the volume of these can sometimes
increase the demand for software development well beyond that originally planned
(iv) Completion of system testing can be delayed because of the number of bug fixes required.
We can see that there is plenty of scope for improvements.
One problem is knowing where to start. Approaches like that of CMMI can help us identify the order in which improvement steps should take place.
Given a software requirement, formal plans enable staff workloads to be distributed more
carefully. The monitoring of plans would also allow managers to identify emerging
problems with particular projects.
Effective change control procedures would make managers more aware of how changes in
the system's functionality can force project deadlines to be breached. These process
developments would help an organization move from Level 1 to Level 2.
Figure 5.5 illustrates how a project control system could be envisaged at this level of maturity.
The steps of defining procedures for each development task and ensuring that they are
actually carried out help to bring an organization up to Level 3.
When more formalized processes exist, the behaviour of component processes can be
monitored.
For example, the numbers of change reports generated and system defects detected at the
system testing phase. Apart from information about the products passing between
processes, we can also collect effort information about each process itself. This enables
effective remedial action to be taken speedily when problems are found. The development
processes are now properly managed, bringing the organization up to Level 4.
Finally, at Level 5 of process management, the information collected is used to improve the
process model itself. It might, for example, become apparent that the changes to software
requirements are a major source of defects. Steps could therefore be taken to improve this
process.
For example, the hardware component of the system could be simulated using software
tools. This could help the hardware engineers to produce more realistic designs and reduce
changes. It might even be possible to build control software and test it against a simulated
hardware system. This could enable earlier and cheaper resolution of technical problems.
PSP is based on the work of Watts Humphrey. Unlike CMMI, which is intended for companies, PSP is suitable for individual use. It is important to note that SEI CMM does not tell software developers how to analyze, design, code, test or document software products, but assumes that engineers use effective personal practices.
PSP recognizes that the process for individual use is different from that necessary for a
team. The quality and productivity of an engineer is to a great extent dependent on his
process.
PSP is a framework that helps engineers to measure and improve the way they work. It
helps in developing personal skills and methods by estimating, planning and tracking
performance against plans, and provides a defined process which can be tuned by
individuals.
Time measurement
PSP advocates that developers should track the way they spend their time. This is because boring activities seem to take longer than they actually do, while interesting activities seem to take less time. Therefore, the actual time spent on a task should be measured with the help of a stop-clock to get an objective picture of the time spent. For example, a developer may stop the clock when attending a telephone call, taking a coffee break, etc. An engineer should measure the time he spends on various development activities such as designing, writing code, testing, etc.
PSP planning
Individuals must plan their project. Unless an individual properly plans his activities,
disproportionately high effort may be spent on trivial activities and important activities
may be compromised, leading to poor quality results.
The developers must estimate the maximum, minimum and average LOC required for the product. They should use their productivity in minutes/LOC to calculate the maximum, minimum and average development time. They must record the plan data in a project plan summary. The PSP is schematically shown in Figure 5.7.
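A minimal sketch of this planning arithmetic is shown below, assuming illustrative LOC estimates and a productivity figure that would, in real PSP use, come from the developer's own historical log data.

```python
# From minimum, average and maximum LOC estimates and a personal productivity
# figure (minutes per LOC), derive the corresponding development-time
# estimates for the project plan summary. All values are illustrative.
loc_estimates = {"minimum": 300, "average": 450, "maximum": 700}
minutes_per_loc = 2.5  # taken from the developer's own historical data

for label, loc in loc_estimates.items():
    hours = loc * minutes_per_loc / 60
    print(f"{label}: {loc} LOC -> {hours:.1f} hours")
```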
As has been shown in Figure 5.7, an individual developer must plan the personal activities and make the basic plans before starting the development work. While carrying out the activities of different phases of software development, the individual developer must record the log data. On completion, he must compare the log data with the initial plan to achieve better planning in future projects, to improve his process, etc. The four maturity levels of PSP have been shown schematically in Figure 5.8. The activities that the developer must perform for achieving a higher level of maturity have also been annotated on the diagram. PSP2 introduces defect management via the use of checklists for code and design reviews. The checklists are developed by analysing the defect data gathered from earlier projects.
Six Sigma
Motorola, USA, initially developed the six sigma method in the early 1980s. Since
then, thousands of companies around the world have discovered the benefits of
adopting six sigma methodologies.
Six sigma is applicable to any activity that is concerned with cost, timeliness, and quality of results. Therefore, it is applicable to virtually every industry.
Six sigma seeks to improve the quality of process outputs by identifying and removing the causes of defects and minimizing variability in processes. It uses many quality management methods, including statistical methods, and requires the presence of six sigma experts within the organization (black belts, green belts, etc.).
A six sigma defect is defined as any system behaviour that is not as per customer
specifications. Total number of six sigma defect opportunities is then the total
number of chances for committing an error. Sigma of a process can easily be
calculated using a six sigma calculator.
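A sketch of this defect-opportunity arithmetic is given below, assuming illustrative counts. The conventional 1.5-sigma shift applied by most six sigma calculators is included.

```python
# Compute defects per million opportunities (DPMO) and the corresponding
# sigma level from defect, unit and opportunity counts. Values are invented.
from statistics import NormalDist

defects = 35
units = 10_000
opportunities_per_unit = 4

dpmo = defects / (units * opportunities_per_unit) * 1_000_000
# Sigma level: the normal quantile of the yield, plus the customary 1.5 shift
sigma_level = NormalDist().inv_cdf(1 - dpmo / 1_000_000) + 1.5

print(f"DPMO = {dpmo:.0f}, sigma level = {sigma_level:.2f}")
```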
The six sigma DMAIC process (define, measure, analyze, improve, control) is an improvement system for existing processes falling below specification and looking for incremental improvement. The six sigma DMADV process (define, measure, analyze, design, verify) is an improvement system used to develop new processes or products at six sigma quality levels. Both six sigma processes are executed by six sigma green belts and six sigma black belts, and are overseen by six sigma master black belts. Many frameworks exist for implementing the six sigma methodology; six sigma consultants all over the world have developed their own proprietary frameworks.
Note : Green Belts typically focus on process improvement, while Black Belts also focus
on process design and innovation.
Green Belts typically report to a Black Belt or other senior leaders, while Black Belts
typically report directly to a Six Sigma executive or sponsor.
Procedural structure: At first, programmers were left to get on with writing programs as
best they could. Over the years there has been the growth of methodologies where every
process in the software development cycle has carefully laid down steps.
Focus has shifted from relying solely on checking the products of intermediate stages and
towards building an application as a number of smaller, relatively independent
components developed quickly and tested at an early stage. This can reduce some of the
problems, noted earlier, of attempting to predict the external quality of the software from
early design documents. It does not preclude careful checking of the design of components.
We are now going to look at some specific techniques. The push towards more visibility has been dominated by the increasing use of walk-throughs, inspections and reviews. The movement towards a more procedural structure inevitably leads to discussion of techniques such as structured programming and clean-room software development, which are examined after inspections below.
Inspections
● Inspections can be carried out by colleagues at all levels except the very top.
● The inspection is led by a moderator who has had specific training in the
technique.
● The other participants have defined roles. For example, one person will act as a
recorder and note all defects found, and another will act as reader and take the
other participants through the document under inspection.
● Checklists are used to assist the fault-finding process.
● Statistics are maintained so that the effectiveness of the inspection process can
be monitored.
Note : A Fagan inspection is a process of trying to find defects in documents (such as
source code or formal specifications) during various phases of the software development
process. It is named after Michael Fagan, who is credited with the invention of formal
software inspections.
In the late 1960s, software was seen to be getting more complex while the capacity of the
human mind to hold detail remained limited. It was also realized that it was impossible to
test any substantial piece of software completely given the huge number of possible input
combinations. Testing, at best, could prove the presence of errors, not their absence. Thus
Dijkstra and others suggested that the only way to reassure ourselves about the
correctness of software was by examining the code.
The way to deal with complex systems, it was contended, was to break them down into
components of a size the human mind could comprehend. For a large system there would
be a hierarchy of components and subcomponents. For this decomposition to work
properly, each component would have to be self-contained, with only one entry and exit
point.
The ideas of structured programming have been further developed into the ideas of clean-room software development by people such as the late Harlan Mills of IBM. Clean-room development uses three separate teams:
● a specification team, which obtains the user requirements and also a usage profile estimating the volume of use for each feature in the system;
● a development team, which develops the code but which does no machine testing of the program code produced;
● a certification team, which carries out the machine testing.
Usage profiles reflect the need to assess quality in use, as discussed earlier in relation to ISO 9126. They will be discussed further in the section on testing below.
The development team does no debugging; instead, all software has to be verified by them
using mathematical techniques. The argument is that software which is constructed by
throwing up a crude program, which then has test data thrown at it and a series of hit-and-
miss amendments made to it until it works, is bound to be unreliable.
The certification team carry out the testing, which is continued until a statistical model
shows that the failure intensity has been reduced to an acceptable level.
Formal methods
Preconditions define the allowable states, before processing, of the data items upon which a procedure is to work.
The postconditions define the state of those data items after processing. The mathematical
notation should ensure that such a specification is precise and unambiguous.
It should also be possible to prove mathematically (in much the same way that at school
you learnt to prove Pythagoras' theorem) that a particular algorithm will work on the data
defined by the preconditions in such a way as to produce the postconditions.
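As a down-to-earth illustration (not a formal proof), the sketch below expresses pre- and postconditions as runtime assertions for an integer square root routine; a formal method would state the same conditions mathematically and prove the algorithm against them rather than check them at run time.

```python
# Pre- and postconditions expressed as executable checks. The precondition
# constrains the state of the data before processing; the postcondition
# characterizes the result the algorithm must produce.
def isqrt(n: int) -> int:
    assert n >= 0, "precondition: n must be non-negative"
    r = 0
    while (r + 1) * (r + 1) <= n:
        r += 1
    assert r * r <= n < (r + 1) * (r + 1), "postcondition violated"
    return r

print(isqrt(17))  # -> 4
```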
Despite many years of claims for the effectiveness of using a formal notation to define software specifications, it is rarely used in mainstream software development. This is despite it being quite widely taught in universities.
Much interest has been shown in Japanese software quality practices. The aim of the
'Japanese' approach is to examine and modify the activities in the development process in
order to reduce the number of errors that they have in their end-products.
Testing and Fagan inspections can assist the removal of errors - but the same types of error could occur repeatedly in successive products created by a faulty process. By uncovering the source of errors, this repetition can be eliminated.
Staff are involved in the identification of sources of errors through the formation of quality
circles. These can be set up in all departments of an organization, including those
producing software where they are known as Software Quality Circles (SWQC).
A quality circle is a group of four to ten volunteers working in the same area who meet for,
say, an hour a week to identify, analyze and solve their work-related problems. One of their
number is the group leader and there could be an outsider, a facilitator, who can advise on
procedural matters. In order to make the quality circle work effectively, training needs to
be given.
Together the quality group select a pressing problem that affects their work. They identify
what they think are the causes of the problem and decide on a course of action to remove
these causes. Often, because of resource or possible organizational constraints, they will
have to present their ideas to management to obtain approval before implementing the
process improvement.
Associated with quality circles is the compilation of most probable error lists.
For example, at IOE, Amanda might find that the annual maintenance contracts project is
being delayed because of errors in the requirements specifications. The project team could
be assembled and spend some time producing a list of the most common types of error that
occur in requirements specifications. This is then used to identify measures which can
reduce the occurrence of each type of error. They might suggest, for instance, that test cases be produced at the same time as the requirements specification, and that these test cases be used to check the specifications for completeness and testability.
Another way by which an organization can improve its performance is by reflecting on the
performance of a project at its immediate end when the experience is still fresh. This
reflection may identify lessons to be applied to future projects. Project managers are
required to write a Lessons Learnt report at the end of the project. This should be
distinguished from a Post Implementation Review (PIR).
A PIR takes place after a significant period of operation of the new system, and focuses on
the effectiveness of the new system, rather than the original project process. The PIR is
often produced by someone who was not involved in the original project, in order to ensure
neutrality. An outcome of the PIR will often be changes to enhance the effectiveness of the
installed system.
The Lessons Learnt report, on the other hand, is written by the project manager as soon as
possible after the completion of the project. This urgency is because the project team is
often dispersed to new work soon after the finish of the project. One problem that is
frequently voiced is that there is often very little follow-up on the recommendations of
such reports, as there is often no body within the organization with the responsibility and
authority to do so.
5.12 Testing
The final judgement of the quality of a software application is whether it actually works
correctly when executed. This section looks at aspects of the planning and management of
testing. A major headache with testing is estimating how much testing remains at any
point. This estimate of the work still to be done depends on an unknown, the number of
bugs left in the code. We will briefly discuss how we can deal with this problem.
The V-process model was introduced as an extension to the waterfall process model.
Figure 5.9 gives a diagrammatic representation of this model. This stresses the necessity
for validation activities that match the activities creating the products of the project.
The V-process model can be seen as expanding the activity box 'testing' in the waterfall
model. Each step has a matching validation process which can, where defects are found,
cause a loop back to the corresponding development stage and a reworking of the
following steps. Ideally this feeding back should occur only where a discrepancy has been
found between what was specified by a particular activity and what was actually
implemented in the next lower activity on the descent of the V loop.
For example, the system designer might have written that a calculation be carried out in
a certain way. A developer building code to meet this design might have misunderstood
what was required. At system testing stage, the original designer would be responsible for
checking that the software is doing what was specified and this would discover the coder's
misreading of that document.
Using the V-process model as a framework, planning decisions can be made at the outset as to the types and amounts of testing to be done. An obvious example of this would be that if the software were acquired 'off-the-shelf', the program design and code stages would not be relevant and so program testing would not be needed.
The objectives of both verification and validation techniques are very similar. Both these
techniques have been designed to help remove errors in software. In spite of the apparent
similarity between their objectives, the underlying principles of these two bug detection
techniques and their applicability are very different.
The main differences between these two techniques are the following: verification is the process of determining whether the output of one phase of development conforms to that of its previous phase, whereas validation is the process of determining whether a fully developed system conforms to its requirements specification. In other words, verification checks that we are building the product right, while validation checks that we are building the right product.
Example :
Note : The Phase Containment Effectiveness metric measures how many defects are detected in the phase in which they are introduced, as opposed to escaping into later phases. A defect injection model can be used to implement the phase containment effectiveness metric, and such implementations have been carried out on real software development projects.
All the boxes shown on the right hand side of the V-process model of Figure 5.9 correspond to verification activities, except the system testing block, which corresponds to a validation activity.
There are essentially two main approaches to systematically design test cases: black-box
approach and white-box (or glass-box) approach.
In the black-box approach, test cases are designed using only the functional specification of the software. That is, test cases are designed solely on the basis of an analysis of the input/output behaviour (that is, the functional behaviour), without requiring any knowledge of the internal structure of the program.
For this reason, black-box testing is also known as functional testing and as requirements-driven testing. Design of white-box test cases, on the other hand, requires analysis of the source code. Consequently, white-box testing is also called structural testing or structure-driven testing.
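The sketch below illustrates the black-box approach for a hypothetical specification: test cases are derived purely from the stated input/output behaviour (equivalence classes and their boundaries), with no reference to the internal structure of the code.

```python
# Specification (assumed for illustration): classify(mark) returns 'pass'
# for marks of 40 and above, otherwise 'fail'.
def classify(mark: int) -> str:
    return "pass" if mark >= 40 else "fail"

# Black-box test cases: one representative per equivalence class plus the
# boundary values, chosen from the specification alone.
black_box_cases = [
    (0, "fail"), (39, "fail"),    # 'fail' class and its upper boundary
    (40, "pass"), (100, "pass"),  # 'pass' class and its lower boundary
]

for mark, expected in black_box_cases:
    assert classify(mark) == expected, f"failure for input {mark}"
print("all black-box cases passed")
```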
Levels of testing
A software product is normally tested at three different stages or levels. These three testing stages are:
• Unit testing
• Integration testing
• System testing
During unit testing, the individual components (or units) of a program are tested.
For every module, unit testing is carried out as soon as the coding for it is complete. Since
every module is tested separately, there is a good scope for parallel activities during unit
testing.
The objective of integration testing is to check whether the modules have any errors
pertaining to interfacing with each other.
Unit testing is referred to as testing in the small, whereas integration and system testing
are referred to as testing in the large. After testing all the units individually, the units are
integrated over a number of steps and tested after each step of integration (integration
testing). Finally, the fully integrated system is tested (system testing).
Testing activities
Test Planning Since many activities are carried out during testing, careful planning is
needed. The specific test case design strategies that would be deployed are also planned.
Test planning consists of determining the relevant test strategies and planning for any test
bed that may be required. A suitable test bed is an especially important concern while
testing embedded applications. A test bed usually includes setting up the hardware or
simulator.
Test Case Execution and Result Checking Each test case is run and the results are
compared with the expected results. A mismatch between the actual result and expected
results indicates a failure. The test cases for which the system fails are noted down for test
reporting.
Test Reporting When the test cases are run, the tester may raise issues, that is, report
discrepancies between the expected and the actual findings. A means of formally recording
these issues and their history is needed. A review body adjudicates these issues.
● The issue is dismissed on the grounds that there has been a misunderstanding of a requirement by the tester.
● The issue is identified as a fault which the developers need to correct. Where development is being done by contractors, they would be expected to cover the cost of the correction.
In a commercial project, execution of the entire test suite can take several weeks to
complete. Therefore, in order to optimize the turnaround time, the test failure
information is usually informally intimated to the development team as and when
failures are noticed.
Debugging :For each failure observed during testing, debugging is carried out to identify
the statements that are in error. There are several debugging strategies, but essentially in
each the failure symptoms are analysed to locate the errors.
Defect Retesting : Once a defect has been dealt with by the development team, the corrected code is retested by the testing team to check whether the defect has successfully been addressed. Defect retesting is also popularly called resolution testing. The resolution tests are a subset of the complete test suite (see Figure 5.10).
FIGURE 5.10 Types of test cases in the original test suite after a change
Regression Testing While resolution testing checks whether the defect has been fixed, regression testing checks whether the unmodified functionalities still continue to work correctly. Whenever a defect is corrected and the change is incorporated in the program code, there is a danger that the change introduced to correct an error could actually introduce errors in functionalities that were previously working correctly. After a bug-fixing session, therefore, both the resolution and the regression test cases need to be run. This is where the additional effort required to create automated test scripts can pay off. As shown in Figure 5.10, some test cases may no longer be valid after the change; these have been shown as invalid test cases. The rest are redundant test cases, which check those parts of the program code that are not at all affected by the change.
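The sketch below illustrates the partition of Figure 5.10 under stated assumptions: a hypothetical coverage mapping links each test case to the code units it exercises, something a real tool would derive from instrumentation data.

```python
# Partition a test suite after a change: invalid tests (expected results no
# longer hold), resolution tests (exercise the changed code) and redundant
# tests (touch only unaffected code).
def partition(test_suite, coverage, changed_units, obsolete):
    resolution, invalid, redundant = [], [], []
    for test in test_suite:
        if test in obsolete:
            invalid.append(test)         # must be removed or rewritten
        elif coverage[test] & changed_units:
            resolution.append(test)      # rerun to confirm the fix
        else:
            redundant.append(test)       # unaffected by this change
    return resolution, invalid, redundant

coverage = {"T1": {"billing"}, "T2": {"billing", "reports"}, "T3": {"reports"}}
print(partition(["T1", "T2", "T3"], coverage,
                changed_units={"billing"}, obsolete={"T2"}))
# -> (['T1'], ['T2'], ['T3'])
```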
Test Closure Once the system successfully passes all the tests, documents related to lessons
learned, test results, logs, etc., are archived for use as a reference in future projects.
Of all the above-mentioned testing activities, debugging is usually the most time-consuming activity.
Who performs testing?
A question to be settled at the planning stage is who would carry out testing. Many
organizations have separate system testing groups to provide an independent
assessment of the correctness of software before release.
In other organizations, staff are allocated to a purely testing role but work alongside the developers rather than in a separate group. While an independent testing group can provide a final quality check, it has been argued that developers may take less care over their work if they know that this safety net exists.
Test automation
Testing is usually the most time consuming and laborious of all software development
activities. This is especially true for large and complex software products that are being
developed currently.
At present, testing cost often exceeds all other development life-cycle costs. With the
growing size of programs and the increased importance being given to product quality, test
automation is drawing considerable attention from both industry circles and academia.
Test automation is a generic term for automating one or some activities of the test
process.
Other than reducing human effort and time in this otherwise time and effort-intensive
work, test automation also significantly improves the thoroughness of testing. This is
because more testing can be carried out using a large number of test cases within a short
period of time without any significant cost overhead. The effectiveness of testing, to a large
extent, depends on the exact test case design strategy used.
Considering the large overheads that sophisticated testing techniques incur, in many
industrial projects, often testing is carried out using randomly selected test values. With
automation, more sophisticated test case design techniques can be deployed. Without the
use of proper tools, testing large and complex software products can especially be
extremely time consuming and laborious.
A further advantage of using testing tools is that automated test results are much more
reliable and eliminate human errors during testing. Regression testing after every change
or error correction requires running several old test cases. In this situation, test
automation simplifies repeated running of the test cases. Testing tools hold out the
promise of substantial cost and time reduction even in the testing and maintenance phases.
Repeated running of the same set of test cases over and over after every change is
monotonous, boring, and error-prone. Automated testing tools can be of considerable use
in repeatedly running the same set of test cases. Testing tools can entirely or at least
substantially eliminate the drudgery of running same test cases and also significantly
reduce testing costs.
A large number of tools are at present available, both in the public domain and from commercial sources. It is possible to classify these tools into the following types with regard to the specific methodology on which they are based.
Capture and Playback In this type of tool, the test cases are executed manually only once.
During the manual execution, the sequence and values of various inputs as well as the
outputs produced are recorded. On any subsequent occasion, the test can be automatically
replayed and the results are checked against the recorded output.
An important advantage of the capture playback tools is that once test data are captured
and the results verified, the tests can be rerun several times over easily and cheaply. Thus,
these tools are very useful for regression testing. However, capture and playback tools
have a few disadvantages as well. Test maintenance can be costly when the unit under test
changes, since some of the captured tests may become invalid. It would require
considerable effort to determine and remove the invalid test cases or modify the test input
and output data. Also new test cases would have to be added for the altered code.
Automated Test Script Test scripts are used to drive an automated test tool. The scripts
provide input to the unit under test and record the output. The testers employ a variety of
languages to express test scripts.
An important advantage of test script-based tools is that once the test script is debugged
and verified, it can be rerun a large number of times easily and cheaply. However,
debugging the test script to ensure its accuracy requires significant effort. Also, every
subsequent change to the unit under test entails effort to identify impacted test scripts,
modify, rerun and reconfirm them.
Random Input Testing In this type of tool, test values are randomly generated for the inputs of the unit under test. An advantage of random input testing tools is that the approach is relatively easy and can be the most cost-effective way of finding some types of defects. However, random input testing is a very limited form of testing: it finds only the defects that crash the unit under test, not the majority of defects that do not crash the system but simply produce incorrect results.
Reliability of a software product usually keeps on improving with time during the testing
and operational phases as defects are identified and repaired. In this context, the growth
of reliability over the testing and operational phases can be modelled using a mathematical
expression called Reliability Growth Model (RGM). Thus, RGM models show how the
reliability of a software product improves as failures are reported and bugs are corrected.
A large number of RGMs have been proposed by researchers based on various failure and bug repair patterns. A few popular reliability growth models are the Jelinski-Moranda model, the Littlewood-Verrall model, and the Goel-Okumoto model.
● The reliability improvement due to fixing a single bug depends on where the bug is located in the code.
● The perceived reliability of a software product is observer-dependent.
● The reliability of a product keeps changing as errors are detected and fixed.
A fundamental issue that sets the reliability study of software apart from hardware reliability study is the difference between their failure patterns. Hardware components fail mostly due to wear and tear, whereas software components fail due to the presence of bugs.
As an example of hardware, consider an electronic circuit. In this circuit, a failure may occur because a logic gate is stuck at 1 or 0, or a resistor short-circuits. To fix a hardware fault, one has to either replace or repair the failed part. In contrast, a software product would continue to fail until the error is tracked down and either the design or the code is changed to fix the bug.
For this reason, when a hardware part is repaired its reliability would be maintained at the
level that existed before the failure occurred; whereas when a software failure is repaired,
the reliability may either increase or decrease (reliability may decrease if a bug fix
introduces new errors). To put this fact in a different perspective, hardware reliability
study is concerned with stability (e.g. the inter-failure times remain constant). On the other
hand, the aim of software reliability study would be reliability growth (i.e. increase in inter-
failure times).
A comparison of the changes in failure rate over the product life time for a typical hardware
product as well as a software product is sketched in Figure 5.11. Observe that the plot of
change of reliability with time for a hardware component [Figure 5.11(a)] appears like a
'bath tub'. As shown in Figure 5.11(a), for a hardware system the failure rate is initially
high, but decreases as the faulty components are identified and are either repaired or
replaced.
Figure 5.11: Reliability growth with time for hardware and software products
Reliability Metrics
The reliability requirements for different categories of software products may be different.
For this reason, it is necessary that the level of reliability required for a software product
should be specified in the SRS (software requirements specification) document. In order
to be able to do this, we need some metrics to quantitatively express the reliability of a
software product.
A good reliability measure should be observer-independent, so that different people can
agree on the degree of reliability a system has. However, in practice, it is very difficult to
formulate a metric using which precise reliability measurement would be possible.
Mean Time to Failure (MTTF)
MTTF is the time between two successive failures, averaged over a large number of failures. To measure MTTF, we can record the failure data for n failures. Let the failures occur at the time instants t1, t2, ..., tn. Then MTTF can be calculated as the average of the inter-failure times (t2 - t1), (t3 - t2), ..., (tn - t(n-1)), i.e. their sum divided by (n - 1).
Mean Time to Repair (MTTR)
Once a failure occurs, some time is required to fix the error. MTTR measures the average time it takes to track down the errors causing the failure and to fix them.
Mean Time between Failure (MTBF)
The MTTF and MTTR metrics can be combined to get the MTBF metric:
MTBF = MTTF + MTTR. Thus, MTBF of 300 hours indicates that once a failure occurs, the
next failure is expected after 300 hours. In this case, the time measurements are real time
and not the execution time as in MTTF.
Probability of Failure on Demand (POFOD)
Unlike the other metrics discussed, this metric does not explicitly involve time measurements. POFOD measures the likelihood of the system failing when a service request is made. For example, a POFOD of 0.001 would mean that 1 out of every 1000 service requests would result in a failure. We have already mentioned that the reliability of a software product should be determined through specific service invocations, rather than making the software run continuously. Thus, the POFOD metric is very appropriate for software programs that are not required to run continuously.
Availability
Availability of a system is a measure of how likely the system would be available for use over a given period of time. This metric not only considers the number of failures occurring during a time interval, but also takes into account the repair time (down time) of the system when a failure occurs. This is important for systems such as telecommunication systems, operating systems and embedded controllers, which are supposed to be never down and where repair and restart times are significant and loss of service during that time cannot be overlooked.
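The sketch below computes these metrics from illustrative figures, taking availability as the proportion of time the system is in service, i.e. MTTF / (MTTF + MTTR).

```python
# Combining the reliability metrics defined above. All figures are invented.
mttf_hours = 280.0   # mean time to failure
mttr_hours = 20.0    # mean time to repair

mtbf_hours = mttf_hours + mttr_hours     # MTBF = MTTF + MTTR
availability = mttf_hours / mtbf_hours   # fraction of time in service

failures = 3
requests = 10_000
pofod = failures / requests              # probability of failure on demand

print(f"MTBF = {mtbf_hours} h, availability = {availability:.3f}, "
      f"POFOD = {pofod}")
```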
Failures which are transient and whose consequences are not serious are in practice of
little concern in the operational use of a software product. These types of failures can at
best be minor irritants.
More severe types of failures may render the system totally unusable. In order to estimate
the reliability of a software product more accurately, it is necessary to classify various
types of failures.
In the following, we give a simple classification of software failures into different types.
Transient. Transient failures occur only for certain input values while invoking a function
of the system.
The simplest reliability growth model is a step function model where it is assumed that the
reliability increases by a constant increment each time an error is detected and repaired.
Therefore, perfect error fixing is implicit in this model. Another implicit assumption in this
model is that all errors contribute equally to reliability growth (reflected in equal step
size). Both assumptions are unrealistic, since different errors contribute differently to reliability growth and error fixes may not be perfect. Typical reliability growth predicted using this model is shown in Figure 5.12 below.
Littlewood-Verrall Model
This model allows for negative reliability growth to reflect the fact that when a repair is
carried out, it may introduce additional errors. It also models the fact that as errors are
repaired, the average improvement to the product reliability per repair decreases. It treats
an error's contribution to reliability improvement to be an independent random variable
having Gamma distribution. This distribution models the fact that error corrections with
large contributions to reliability growth are removed first. This represents diminishing
return as the test continues.
Goel-Okumoto Model
In this model, it is assumed that the execution times between failures are exponentially distributed. The cumulative number of failures at any time t can therefore be expressed as mu(t), the expected number of failures observed up to time t. The failure process is assumed to follow a Non-Homogeneous Poisson Process (NHPP); that is, the expected number of error occurrences in any interval from t to t + delta-t is proportional to the expected number of undetected errors remaining at time t. Once a failure has been detected, it is assumed that the error correction is perfect and immediate. The number of failures over time is shown in Figure 5.13. The expected number of failures at time t is given by mu(t) = N(1 - e^(-bt)), where N is the expected total number of defects in the code and b is the rate at which the failure rate decreases.
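A sketch of the reconstructed expression is given below; the parameter values for N and b are invented, whereas in practice they would be fitted to observed failure data.

```python
# Goel-Okumoto model: mu(t) = N * (1 - exp(-b * t)) gives the expected
# cumulative number of failures observed by time t.
import math

N = 100.0   # expected total number of defects in the code
b = 0.02    # rate at which the failure rate decreases

def expected_failures(t: float) -> float:
    return N * (1 - math.exp(-b * t))

for t in (10, 50, 100, 200):
    print(f"t = {t}: {expected_failures(t):.1f} expected failures")
```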
Some organizations produce quality plans for each project. These show how the standard
quality procedures and standards laid down in an organization's quality manual will
actually be applied to the project. If an approach to planning such as Step Wise has been
followed, quality-related activities and requirements will have been identified by the main
planning process with no need for a separate quality plan. However, where software is
being produced for an external client, the client's quality assurance staff might require that
a quality plan be produced to ensure the quality of the delivered products. A quality plan can be seen as a checklist confirming that all quality issues have been dealt with by the planning process. Thus, most of its content will be references to other documents.
A quality plan might have entries for:
● Purpose - scope of plan
● List of references to other documents
This contents list is based on a draft IEEE standard for software quality assurance plans.
----------------**************---------------------