
Software Testing

DEFECT
MANAGEMENT
CONTENTS

I. Introduction, Defect Classification, Defect Management Process

II. Defect Life Cycle, Defect Template

III. Estimate Expected Impact of a Defect, Techniques for Finding Defects, Reporting a Defect

Anuradha Bhatia

I. Introduction
i. Software defects are expensive.
ii. The cost of finding and correcting defects represents one of the most
expensive software development activities.
iii. While defects may be inevitable, we can minimize their number and impact
on our projects.
iv. To do this, development teams need to implement a defect management
process that focuses on preventing defects, catching defects as early in the
process as possible, and minimizing the impact of defects.
v. A little investment in this process can yield significant returns.

1. Defect Classification
(Question: Explain the defect classification. – 8 Marks)

i. A Software Defect / Bug is a condition in a software product which does not
meet a software requirement (as stated in the requirement specifications) or
end-user expectations (which may not be specified but are reasonable).

ii. In other words, a defect is an error in coding or logic that causes a program to
malfunction or to produce incorrect/unexpected results.

iii. A program that contains a large number of bugs is said to be buggy.
iv. Reports detailing bugs in software are known as bug reports.
v. Applications for tracking bugs are known as bug tracking tools.
vi. The process of finding the cause of bugs is known as debugging.
vii. The process of intentionally injecting bugs in a software program, to estimate
test coverage by monitoring the detection of those bugs, is known
as bebugging.

There are various ways in which defects can be classified.

Severity Wise:
i. Major: A defect, which will cause an observable product failure or departure
from requirements.
ii. Minor: A defect that will not cause a failure in execution of the product.
iii. Fatal: A defect that will cause the system to crash or close abruptly, or
affect other applications.

Type of Errors Wise:
i. Comments: Inadequate/ incorrect/ misleading or missing comments in the
source code


ii. Computational Error: Improper computation of the formulae / improper
business validations in code
iii. Data error: Incorrect data population / update in database
iv. Database Error: Error in the database schema/Design
v. Missing Design: Design features/approach missed/not documented in the
design document and hence does not correspond to requirements
vi. Inadequate or Suboptimal Design: Design features/approach needs
additional inputs for it to be complete, or the design features described do
not provide the best (optimal) approach towards the required solution
vii. Incorrect Design: Wrong or inaccurate design
viii. Ambiguous Design: Design feature/approach is not clear to the reviewer. Also
includes ambiguous use of words or unclear design features.
ix. Boundary Conditions Neglected: Boundary conditions not
addressed/incorrect
x. Interface Error: Interfacing error internal or external to the application,
incorrect handling of passing parameters, incorrect alignment,
incorrect/misplaced fields/objects, unfriendly window/screen positions
xi. Logic Error: Missing or Inadequate or irrelevant or ambiguous functionality in
source code
xii. Message Error: Inadequate/ incorrect/ misleading or missing error messages
in source code
xiii. Navigation Error: Navigation not coded correctly in source code
xiv. Performance Error: An error related to performance/optimality of the code
xv. Missing Requirements: Implicit/Explicit requirements are missed/not
documented during requirement phase
xvi. Inadequate Requirements: Requirement needs additional inputs for it to be
complete
xvii. Incorrect Requirements: Wrong or inaccurate requirements
xviii. Ambiguous Requirements: Requirement is not clear to the reviewer. Also
includes ambiguous use of words – e.g. Like, such as, may be, could be, might
etc.
xix. Sequencing / Timing Error: Error due to incorrect/missing consideration to
timeouts and improper/missing sequencing in source code.
xx. Standards: Standards not followed like improper exception handling, use of E
& D Formats and project related design/requirements/coding standards
xxi. System Error: Hardware and Operating System related error, Memory leak
xxii. Test Plan / Cases Error: Inadequate/ incorrect/ ambiguous or duplicate or
missing - Test Plan/ Test Cases & Test Scripts, Incorrect/Incomplete test setup
xxiii. Typographical Error: Spelling / Grammar mistake in documents/source code


xxiv. Variable Declaration Error: Improper declaration / usage of variables,
type mismatch error in source code

Status Wise:
i. Open
ii. Closed
iii. Deferred
iv. Cancelled

2. Defect Management Process
(Question: Explain the defect management process in software testing
with a neat diagram. – 4 Marks)

Figure 1: Defect Management Process

i. Defect Prevention-- Implementation of techniques, methodology and
standard processes to reduce the risk of defects.

ii. Deliverable Baseline-- Establishment of milestones where deliverables will be
considered complete and ready for further development work. When a
deliverable is baselined, any further changes are controlled. Errors in a
deliverable are not considered defects until after the deliverable is baselined.

iii. Defect Discovery-- Identification and reporting of defects for development
team acknowledgment. A defect is only termed discovered when it has been
documented and acknowledged as a valid defect by the development team
member(s) responsible for the component(s) in error.

iv. Defect Resolution-- Work by the development team to prioritize, schedule and
fix a defect, and document the resolution. This also includes notification back
to the tester to ensure that the resolution is verified.


II. Defect Life Cycle and Defect Template

1. Defect Life Cycle
(Question: Explain the defect life cycle or bug life cycle in software
testing. – 8 Marks)
i. Defect Life Cycle (Bug Life cycle) is the journey of a defect from its
identification to its closure.
ii. The Life Cycle varies from organization to organization and is governed by the
software testing process the organization or project follows and/or the Defect
tracking tool being used.
Nevertheless, the life cycle in general resembles the following:

Figure 2: Bug Life Cycle

Status        Alternative Status
NEW           -
ASSIGNED      OPEN
DEFERRED      -
DROPPED       REJECTED
COMPLETED     FIXED, RESOLVED, TEST
REASSIGNED    REOPENED
CLOSED        VERIFIED
Table 1: Defect Status
Defect Status Explanation
i. NEW: Tester finds a defect and posts it with the status NEW. This defect is yet
to be studied/approved. The fate of a NEW defect is one of ASSIGNED,
DROPPED, or DEFERRED.
ii. ASSIGNED / OPEN: Test / Development / Project lead studies the NEW defect
and if it is found to be valid it is assigned to a member of the Development
Team. The assigned Developer’s responsibility is now to fix the defect and have
it COMPLETED. Sometimes, ASSIGNED and OPEN can be different statuses. In
that case, a defect can be open yet unassigned.
iii. DEFERRED: If it is decided that a valid NEW or ASSIGNED defect will be fixed
in an upcoming release instead of the current release, it is DEFERRED. The
defect is ASSIGNED again when the time comes.
iv. DROPPED / REJECTED: Test / Development/ Project lead studies the NEW
defect and if it is found to be invalid, it is DROPPED / REJECTED. Note that the
specific reason for this action needs to be given.
v. COMPLETED / FIXED / RESOLVED / TEST: Developer ‘fixes’ the defect that is
ASSIGNED to him or her. Now, the ‘fixed’ defect needs to be verified by the
Test Team and the Development Team ‘assigns’ the defect back to the Test
Team. A COMPLETED defect is either CLOSED, if fine, or REASSIGNED, if still
not fine.
vi. If a Developer cannot fix a defect, some organizations may offer the following
statuses:
 Won’t Fix / Can’t Fix: The Developer will not or cannot fix the defect due
to some reason.
 Can’t Reproduce: The Developer is unable to reproduce the defect.
 Need More Information: The Developer needs more information on the
defect from the Tester.
vii. REASSIGNED / REOPENED: If the Tester finds that the ‘fixed’ defect is in fact
not fixed or only partially fixed, it is reassigned to the Developer who ‘fixed’ it.
A REASSIGNED defect needs to be COMPLETED again.
viii. CLOSED / VERIFIED: If the Tester / Test Lead finds that the defect is indeed
fixed and is no longer of any concern, it is CLOSED / VERIFIED. This is the happy
ending.
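The life cycle above can be sketched as a small state machine. The status names follow the table, but the exact transition rules below are an illustrative assumption, since the text notes that the life cycle varies from organization to organization.

```python
# Sketch of the defect life cycle as a state machine.
# Status names follow the text; the transition table itself
# is an illustrative assumption, not a fixed standard.

ALLOWED_TRANSITIONS = {
    "NEW": {"ASSIGNED", "DROPPED", "DEFERRED"},
    "ASSIGNED": {"COMPLETED", "DEFERRED"},
    "DEFERRED": {"ASSIGNED"},
    "DROPPED": set(),                      # terminal: invalid defect
    "COMPLETED": {"CLOSED", "REASSIGNED"},
    "REASSIGNED": {"COMPLETED"},
    "CLOSED": set(),                       # terminal: verified fix
}

class Defect:
    def __init__(self, summary):
        self.summary = summary
        self.status = "NEW"                # every defect starts as NEW

    def move_to(self, new_status):
        if new_status not in ALLOWED_TRANSITIONS[self.status]:
            raise ValueError(f"illegal transition {self.status} -> {new_status}")
        self.status = new_status

d = Defect("Login button unresponsive")
d.move_to("ASSIGNED")
d.move_to("COMPLETED")
d.move_to("REASSIGNED")   # the fix did not hold; back to the developer
d.move_to("COMPLETED")
d.move_to("CLOSED")
print(d.status)  # CLOSED
```

Encoding the transitions as data makes it easy to see, for example, that a NEW defect cannot jump straight to CLOSED.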


2. Defect Template
(Question: Create the bug template for a login form. – 4Marks)
i. Reporting a bug/defect properly is as important as finding a defect.
ii. If the defect found is not logged/reported correctly and clearly in a bug
tracking tool (like Bugzilla, ClearQuest, etc.), it will not be addressed
properly by the developers. It is therefore important to fill in as much
information as possible in the defect template, so that the actual issue with
the software is easy to understand.
1. Sample defect template
Abstract :
Platform :
Testcase Name :
Release :
Build Level :
Client Machine IP/Hostname :
Client OS :
Server Machine IP/Hostname :
Server OS :
Defect Type :
Priority :
Severity :
Developer Contacted :
Test Contact Person :
Attachments :
Any Workaround :
Steps to Reproduce
1.
2.
3.
Expected Result:

Actual Result:


2. Defect report template

i. Usually a defect reporting tool is used, and the elements of a report can vary.
ii. A defect report can consist of the following elements.

ID                      Unique identifier given to the defect. (Usually automated)

Project                 Project name.

Product                 Product name.

Release Version         Release version of the product. (e.g. 1.2.3)

Module                  Specific module of the product where the defect was detected.

Detected Build Version  Build version of the product where the defect was detected. (e.g. 1.2.3.5)

Summary                 Summary of the defect. Keep this clear and concise.

Description             Detailed description of the defect. Describe as much as possible,
                        but without repeating anything or using complex words. Keep it
                        simple but comprehensive.

Steps to Replicate      Step-by-step description of the way to reproduce the defect.
                        Number the steps.

Actual Result           The actual result you received when you followed the steps.

Expected Results        The expected results.

Attachments             Attach any additional information like screenshots and logs.

Remarks                 Any additional comments on the defect.

Defect Severity         Severity of the defect.

Defect Priority         Priority of the defect.

Reported By             The name of the person who reported the defect.

Assigned To             The name of the person assigned to analyze/fix the defect.

Status                  The status of the defect. (See Defect Life Cycle)

Fixed Build Version     Build version of the product where the defect was fixed. (e.g. 1.2.3.9)

Table 2: Defect Report Template

3. Defect tracking tools

Following are some of the commonly used defect tracking tools:

i. Bugzilla - Open Source Bug Tracking.
ii. Testlink - Open Source Test Management.
iii. ClearQuest - Defect tracking tool by IBM Rational.
iv. HP Quality Center - Test Management tool by HP.
III. Estimate Expected Impact of a Defect, Techniques for Finding Defects, Reporting a Defect

1. Estimate Expected Impact of a Defect
i. There is a strong relationship between the number of test cases and the
number of function points.
ii. There is a strong relationship between the number of defects and the
number of test cases and number of function points.
iii. The number of acceptance test cases can be estimated by multiplying the
number of function points by 1.2.
iv. Acceptance test cases should be independent of technology and
implementation techniques.
v. If a software project was 100 function points the estimated number of test
cases would be 120.
vi. Estimating the number of potential defects is more involved.


a) Estimating Defects
i. Intuitively, the number of maximum potential defects is equal to the
number of acceptance test cases, which is 1.2 x Function Points.
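The rules of thumb above translate directly into a couple of one-line helpers; the 1.2 multiplier is the one stated in the text, and the results are rounded since test cases and defects are whole numbers.

```python
def estimate_acceptance_test_cases(function_points):
    """Rule of thumb from the text: test cases ~ function points x 1.2."""
    return round(function_points * 1.2)

def estimate_max_potential_defects(function_points):
    """Upper bound used in the text: one potential defect per test case."""
    return estimate_acceptance_test_cases(function_points)

print(estimate_acceptance_test_cases(100))   # 120
print(estimate_max_potential_defects(2500))  # 3000
```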

b) Preventing, Discovering and Removing Defects

i. To reduce the number of defects delivered with a software project, an
organization can engage in a variety of activities.
ii. While defect prevention is much more effective and efficient in reducing
the number of defects, most organizations conduct defect discovery and
removal.
iii. Discovering and removing defects is an expensive and inefficient process.
iv. It is much more efficient for an organization to conduct activities that
prevent defects.

c) Defect Removal Efficiency

i. If an organization has no defect prevention methods in place, then it is
totally reliant on defect removal efficiency.

Figure 3: Defect Removal Efficiency

1. Requirements Reviews: up to 15% removal of potential defects.
2. Design Reviews: up to 30% removal of potential defects.
3. Code Reviews: up to 20% removal of potential defects.
4. Formal Testing: up to 25% removal of potential defects.
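One way to combine the stage percentages above is to assume each stage removes its stated share of the defects still present when it runs. The text does not state this interpretation explicitly, so treat the sketch below as an assumption.

```python
# Combined removal across sequential stages, assuming each stage removes
# its stated share of the defects still present (an interpretation the
# text does not spell out).
stage_removal = {
    "Requirements Reviews": 0.15,
    "Design Reviews": 0.30,
    "Code Reviews": 0.20,
    "Formal Testing": 0.25,
}

remaining_fraction = 1.0
for stage, rate in stage_removal.items():
    remaining_fraction *= (1 - rate)   # defects surviving this stage

# Roughly 64% of potential defects removed overall under this assumption.
print(f"combined removal efficiency: {1 - remaining_fraction:.1%}")
```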


d) Defect Discovery and Removal

Size in Function Points   Max Defects   Defects Remaining
                                        Perfect   Medium   Poor
100                       120           12        66       102
200                       240           24        132      204
500                       600           60        330      510
1,000                     1,200         120       660      1,020
2,500                     3,000         300       1,650    2,550
5,000                     6,000         600       3,300    5,100
10,000                    12,000        1,200     6,600    10,200
20,000                    24,000        2,400     13,200   20,400

Table 3: Defect Discovery and Removal

i. An organization with a project of 2,500 function points that was about
medium at defect discovery and removal would have 1,650 defects
remaining after all defect removal and discovery activities.
ii. The calculation is 2,500 x 1.2 = 3,000 potential defects.
iii. The organization would be able to remove about 45% of the defects or
1,350 defects.
iv. The total potential defects (3,000) less the removed defects (1,350) equals
the remaining defects of 1,650.
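The worked example above follows directly from the earlier 1.2-per-function-point rule, so it can be checked with a few lines:

```python
def remaining_defects(function_points, removal_rate):
    """Potential defects (FP x 1.2) minus those removed at the given rate."""
    potential = round(function_points * 1.2)   # 2,500 FP -> 3,000 potential
    removed = round(potential * removal_rate)  # 45% removal -> 1,350 removed
    return potential - removed

# "Medium" organization from the text: ~45% removal efficiency.
print(remaining_defects(2500, 0.45))  # 1650
```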

e) Defect Prevention

If an organization concentrates on defect prevention (instead of defect
detection), then the number of defects inserted or created is much less. The
amount of time and effort required to discover and remove these defects is
also much less.

i. Roles and Responsibilities Clearly Defined: up to 15% reduction in number
of defects created.
ii. Formalized Procedures: up to 25% reduction in number of defects created.
iii. Repeatable Processes: up to 35% reduction in number of defects created.
iv. Controls and Measures in place: up to 30% reduction in number of defects
created.


2. Techniques to find defects

(Question: Explain any two techniques to find defects, with their strengths
and weaknesses. – 8 Marks)

a) Quick Attacks:

i. Strengths
 The quick-attacks technique allows you to perform a cursory analysis
of a system in a very compressed timeframe.
 Even without a specification, you know a little bit about the software,
so the time spent is also time invested in developing expertise.
 The skill is relatively easy to learn, and once you've attained some
mastery your quick-attack session will probably produce a few bugs.
 Finally, quick attacks are quick.
 They can help you to make a rapid assessment. You may not know the
requirements, but if your attacks yielded a lot of bugs, the
programmers probably aren't thinking about exceptional conditions,
and it's also likely that they made mistakes in the main functionality.
 If your attacks don't yield any defects, you may have some confidence
in the general, happy-path functionality.
ii. Weaknesses
 Quick attacks are often criticized for finding "bugs that don't matter"—
especially for internal applications.
 While easy mastery of this skill is a strength, it creates the risk that
quick attacks are seen as "all there is" to testing - that anyone who takes
a two-day course can do the work.

b) Equivalence and Boundary Conditions

i. Strengths
 Boundaries and equivalence classes give us a technique to reduce an
infinite test set into something manageable.
 They also provide a mechanism for us to show that the requirements
are "covered".
ii. Weaknesses
 The equivalence "classes" chosen are correct only in the mind of the
person who chose them.


 We have no idea whether other, "hidden" classes exist - for example,
if a numeric value that represents time is compared to another time as
a set of characters, or a "string," it will work just fine for most
numbers.
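A minimal sketch of the technique, using a made-up "age" field that must accept values from 18 to 65 (the field and its range are illustrative assumptions, not from the text). The infinite input set collapses into three equivalence classes, and the boundary values sit at the edges of each class:

```python
# Hypothetical example: an "age" field that must accept 18..65.
# Equivalence classes: below range, in range, above range.
# Boundary values: just outside, on, and just inside each edge.

def is_valid_age(age):
    return 18 <= age <= 65

boundary_values = [17, 18, 19, 64, 65, 66]
expected        = [False, True, True, True, True, False]

for value, exp in zip(boundary_values, expected):
    assert is_valid_age(value) == exp, f"unexpected result for {value}"
print("all boundary checks passed")
```

Six test values stand in for every possible age, which is the whole point of the technique.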

c) Common Failure Modes


i. Strengths
 The heart of this method is to figure out what failures are common for
the platform, the project, or the team; then try that test again on this
build.
 If your team is new, or you haven't previously tracked bugs, you can
still write down defects that "feel" recurring as they occur—and start
checking for them.
ii. Weaknesses
 In addition to losing its potency over time, this technique also entirely
fails to find "black swans"—defects that exist outside the team's recent
experience.
 The more your team stretches itself (using a new database, new
programming language, new team members, etc.), the riskier the
project will be—and, at the same time, the less valuable this technique
will be.

d) State-Transition Diagrams

Figure 4: State Transition Map


i. Strengths
 Mapping out the application provides a list of immediate, powerful test
ideas.
 The model can be improved by collaborating with the whole team to find
"hidden" states - transitions that might be known only by the original
programmer or specification author.
 Once you have the map, you can have other people draw their own
diagrams, and then compare theirs to yours.
 The differences in those maps can indicate gaps in the requirements,
defects in the software, or at least different expectations among team
members.
ii. Weaknesses
 The map you draw doesn't actually reflect how the software will
operate; in other words, "the map is not the territory."
 Drawing a diagram won't find these differences, and it might even give
the team the illusion of certainty.
 Like just about every other technique on this list, a state-transition
diagram can be helpful, but it's not sufficient by itself to test an entire
application.
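Once a state map exists, one test idea per arrow falls out mechanically. The login states below are a made-up example, not taken from the figure in the text:

```python
# Sketch: derive transition-coverage test ideas from a state map.
# The states and events here are an invented login example.

STATE_MAP = {
    "LoggedOut": {"login_ok": "LoggedIn", "login_fail": "LoggedOut"},
    "LoggedIn":  {"logout": "LoggedOut", "timeout": "LoggedOut"},
}

# One test idea per arrow in the diagram:
test_ideas = [
    (src, event, dst)
    for src, events in STATE_MAP.items()
    for event, dst in events.items()
]

for src, event, dst in test_ideas:
    print(f"Test: from {src}, trigger '{event}', expect {dst}")
```

Comparing such generated lists from two people's diagrams is one way to surface the gaps and differing expectations the text mentions.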

e) Use Cases and Soap Opera Tests

Use cases and scenarios focus on software in its role to enable a human being
to do something.

i. Strengths
 Use cases and scenarios tend to resonate with business customers, and
if done as part of the requirement process, they sort of magically
generate test cases from the requirements.

 They make sense and can provide a straightforward set of confirmatory
tests. Soap opera tests offer more power, and they can combine many
test types into one execution.


ii. Weaknesses
 Soap opera tests have the opposite problem; they're so complex that
if something goes wrong, it may take a fair bit of troubleshooting to
find exactly where the error came from!

f) Code-Based Coverage Models

Imagine that you have a black-box recorder that writes down every single line of
code as it executes.

i. Strengths
 Programmers love code coverage. It allows them to attach a number—
an actual, hard, real number, such as 75%—to the performance of their
unit tests, and they can challenge themselves to improve the score.

 Meanwhile, looking at the code that isn't covered also can yield
opportunities for improvement and bugs!

ii. Weaknesses
 Customer-level coverage tools are expensive; programmer-level tools
tend to assume the team is doing automated unit testing and has a
continuous-integration server and a fair bit of discipline.
 After installing the tool, most people tend to focus on statement
coverage—the least powerful of the measures.
 Even decision coverage doesn't deal with situations where the decision
contains defects, or when there are other, hidden equivalence classes;
say, in the third-party library that isn't measured in the same way as
your compiled source code is.
 Having code-coverage numbers can be helpful, but using them as a
form of process control can actually encourage wrong behaviours. In
my experience, it's often best to leave these measures to the
programmers, to measure optionally for personal improvement (and
to find dead spots), not as a proxy for actual quality.

g) Regression and High-Volume Test Techniques

 People spend a lot of money on regression testing, taking the old test
ideas described above and rerunning them over and over.


 This is generally done with either expensive users or very expensive
programmers spending a lot of time writing and later maintaining
those automated tests.
i. Strengths
 For the right kind of problem, say an IT shop processing files through a
database, this kind of technique can be extremely powerful.
 Likewise, if the software deliverable is a report written in SQL, you can
hand the problem to other people in plain English, have them write
their own SQL statements, and compare the results.
 Unlike state-transition diagrams, this method shines at finding the
hidden state in devices. For a pacemaker or a missile-launch device,
finding those issues can be pretty important.
ii. Weaknesses
 Building a record/playback/capture rig for a GUI can be extremely
expensive, and it might be difficult to tell whether the application
hasn't broken, but has changed in a minor way.
 For the most part, these techniques seem to have found a niche in
IT/database work, at large companies like Microsoft and AT&T, which
can have programming testers doing this work in addition to traditional
testing, or finding large errors such as crashes without having to
understand the details of the business logic.
 While some software projects seem ready-made for this approach,
others...aren't.
 You could waste a fair bit of money and time trying to figure out where
your project falls.

3. Reporting defects effectively

(Question: Explain how defects can be effectively reported. – 4 Marks)

It is essential that you report defects effectively, so that time and effort are not
unnecessarily wasted in trying to understand and reproduce the defect. Here are
some guidelines:

i. Be specific:
 Specify the exact action: Do not say something like 'Select Button B'. Do
you mean 'Click Button B', 'Press ALT+B', or 'Focus on Button B and press
ENTER'?


 In case of multiple paths, mention the exact path you followed: Do not say
something like “If you do ‘A and X’ or ‘B and Y’ or ‘C and Z’, you get D.”
Understanding all the paths at once will be difficult. Instead, say “Do ‘A and
X’ and you get D.” You can, of course, mention elsewhere in the report that
“D can also be got if you do ‘B and Y’ or ‘C and Z’.”
 Do not use vague pronouns: Do not say something like "In Application A,
open X, Y, and Z, and then close it." What does 'it' stand for: 'Z', 'Y',
'X', or 'Application A'?
ii. Be detailed:
 Provide more information (not less). In other words, do not be lazy.
 Developers may or may not use all the information you provide but they
sure do not want to beg you for any information you have missed.
iii. Be objective:
 Do not make subjective statements like “This is a lousy application” or “You
fixed it real bad.”
 Stick to the facts and avoid the emotions.
iv. Reproduce the defect:
 Do not be impatient and file a defect report as soon as you uncover a
defect. Replicate it at least once more to be sure.
v. Review the report:
 Do not hit ‘Submit’ as soon as you write the report.
 Review it at least once.
 Remove any typing errors.
