Testing Material

Differences between bugs, defects and errors.

Error: A discrepancy between a computed, observed, or measured value or condition and the true,
specified, or theoretically correct value or condition. See: anomaly, bug, defect, exception, and fault

Bug: A fault in a program which causes the program to perform in an unintended or unanticipated
manner. See: anomaly, defect, error, exception, fault.

Defect: Mismatch between the requirements and the implementation.

Bug life cycle


What is Bug/Defect?

Simple Wikipedia definition of Bug is: “A computer bug is an error, flaw, mistake, failure, or fault in a
computer program that prevents it from working correctly or produces an incorrect result. Bugs
arise from mistakes and errors, made by people, in either a program’s source code or its design.”

Other definitions can be:


An unwanted and unintended property of a program or piece of hardware, especially one that
causes it to malfunction.

or
A fault in a program which causes the program to perform in an unintended or unanticipated manner.

Lastly, the general definition of a bug is: “failure to conform to specifications”. If you want to detect and resolve defects in the early development stages, defect tracking should start simultaneously with the software development phases.

We will discuss more on writing an effective bug report in another article. Let’s concentrate here on the bug/defect life cycle.

Life cycle of Bug:

1) Log new defect


When a tester logs any new bug, the mandatory fields are:
Build version, Submit On, Product, Module, Severity, Synopsis and Description to Reproduce.

To the above list you can add some optional fields if you are using a manual bug submission template. These optional fields are: Customer name, Browser, Operating system, and File Attachments or screenshots.

The following fields remain either specified or blank:

If you have the authority to set the Status, Priority and ‘Assigned to’ fields, then you can specify them. Otherwise the test manager will set the status and priority and assign the bug to the respective module owner.
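As a rough illustration of how such a bug record might be modeled in a tracking tool, here is a minimal Python sketch. The field names follow the mandatory and optional fields listed above, and the status values anticipate the status descriptions given later in this document; the class and enum names are hypothetical, not the schema of any particular bug tracker.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List, Optional


class Status(Enum):
    """Statuses described in the 'Bug status description' section below."""
    NEW = "New"
    DEFERRED = "Deferred"
    ASSIGNED = "Assigned"
    FIXED = "Fixed"
    COULD_NOT_REPRODUCE = "Could not reproduce"
    NEED_MORE_INFO = "Need more information"
    REOPENED = "Reopen"
    CLOSED = "Closed"
    REJECTED = "Rejected"


@dataclass
class BugReport:
    # Mandatory fields from the list above
    build_version: str
    submitted_on: str
    product: str
    module: str
    severity: str
    synopsis: str
    description_to_reproduce: str
    # Optional fields (manual bug submission template)
    customer_name: Optional[str] = None
    browser: Optional[str] = None
    operating_system: Optional[str] = None
    attachments: List[str] = field(default_factory=list)
    # Fields usually left to the test manager / lead
    status: Status = Status.NEW
    priority: Optional[str] = None
    assigned_to: Optional[str] = None
```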
The full life-cycle figure may look complicated, but once you consider the significant steps in the bug life cycle you will get a quick idea of a bug's life.

Once logged successfully, the bug is reviewed by the development or test manager. The test manager can set the bug status to Open, assign the bug to a developer, or defer the bug until the next release.

When the bug is assigned to a developer, he or she can start working on it. The developer can set the bug status to Won't fix, Couldn't reproduce, Need more information or Fixed.

If the status set by the developer is either 'Need more info' or 'Fixed', QA responds with a specific action. If the bug is fixed, QA verifies the fix and can set the bug status to Verified/Closed or Reopen.

Bug status description:


These are various stages of bug life cycle. The status caption may vary depending on the bug tracking
system you are using.

1) New: When QA files a new bug.

2) Deferred: If the bug is not related to the current build, cannot be fixed in this release, or is not important enough to fix immediately, the project manager can set the bug status to Deferred.

3) Assigned: The 'Assigned to' field is set by the project lead or manager, who assigns the bug to a developer.

4) Resolved/Fixed: When the developer makes the necessary code changes and verifies them, he or she can set the bug status to 'Fixed' and the bug is passed to the testing team.

5) Could not reproduce: If the developer is not able to reproduce the bug by following the steps given in the bug report, he or she can mark the bug as 'CNR'. QA then needs to check whether the bug is still reproducible and can reassign it to the developer with detailed reproduction steps.

6) Need more information: If the developer is not clear about the reproduction steps provided by QA, he or she can mark the bug as 'Need more information'. In this case QA needs to add detailed reproduction steps and assign the bug back to the developer for a fix.

7) Reopen: If QA is not satisfied with the fix, or the bug is still reproducible even after the fix, QA can mark it as 'Reopen' so that the developer can take appropriate action.

8) Closed: If the fix is verified by the QA team and the problem is solved, QA can mark the bug as 'Closed'.

9) Rejected/Invalid: Sometimes the developer or team lead can mark the bug as Rejected or Invalid if the system is working according to the specifications and the bug is due only to some misinterpretation.

Software Testing:
Software Testing is the process of executing a program or system with the intent of finding errors.
Or, it involves any activity aimed at evaluating an attribute or capability of a program or system and
determining that it meets its required results. Software is not unlike other physical processes where
inputs are received and outputs are produced. Where software differs is in the manner in which it
fails. Most physical systems fail in a fixed (and reasonably small) set of ways. By contrast, software
can fail in many bizarre ways. Detecting all of the different failure modes for software is generally
infeasible.

Severity of Bugs:
It is extremely important to understand the type & importance of every bug detected
during the testing & its subsequent effect on the users of the subject software application
being tested.

Such information helps the developers and the management decide the urgency or priority of fixing the bug during the product-testing phase.
The following severity levels are assigned during the testing phase:

Critical – is the most dangerous level, which does not permit continuing the testing effort beyond a particular point. A critical situation can arise when an error message pops up or the system crashes, leading to a forced full or partial closure of the application. The criticality of the situation can be judged by the fact that no workaround of any type is feasible. A bug can also fall into the "Critical" category when some menu option is absent, or special security permissions are needed to gain access to the function being tested.

High – is a level of major defect under which the product fails to behave according to the desired expectations, or which can lead to malfunctioning of other functions, thereby causing failure to meet the customer requirements. Bugs under this category can be tackled through some sort of workaround. Examples of bugs of this type are a mistake in a calculation formula, or an incorrect field format in the database causing record updates to fail. There can be many similar instances.

Medium – defects falling under this category of medium or average severity do not affect the performance of the application, but they are certainly not acceptable due to non-conformance to standards or company-wide conventions. Medium-level bugs are comparatively easier to tackle, since simple workarounds can achieve the desired performance objectives. An example of a bug of this type is a mismatch between a visible link and its corresponding text link.

Low – defects falling under the low-severity or minor-defect category are the ones which do not affect the functionality of the product. Low-severity failures generally do not occur during normal usage of the application and have very little effect on the business. Such bugs are generally related to the look and feel of the user interface and are mainly cosmetic in nature.

Life Cycle of Bug:


In the software development process, every bug has its own life cycle through which it passes before getting closed. As a matter of standardization, a specific life cycle is defined for bugs. During its life cycle, the bug attains various states, which are described below.

Various States of a Bug during its Life Cycle are:

1. New Bug: When a bug is posted for the first time, its state is called "NEW". This
implies that the bug is not approved yet.

2. Open Bug: Once the software tester posts a bug, the team leader approves it after satisfying himself that it is genuine, and changes its state to "OPEN".

3. Assigned Bug: Once the lead changes the state to "OPEN", the bug is assigned to
the concerned developer team. The state of the bug is changed now to "ASSIGNED".

4. Test Bug: Once the developer fixes the bug, he transfers the bug to the testing team
for next round of testing. After fixing the bug & prior to releasing it back to the testing
team, the state of the bug is changed to "TEST". In other words, the state "Test Bug"
implies that the bug has been fixed and is released to the testing team.

5. Deferred Bug: When the bug is expected to be fixed in a later release, its state is changed to deferred. Several factors can lead to this state: the priority of the bug may be low, there may be a lack of time before the release, or the bug may not have a major effect on the software.

6. Rejected Bug: If the developer feels that the bug is not a genuine one, he rejects it. This leads to a change of the bug's state to "REJECTED".

7. Duplicate Bug: If a particular bug is reported more than once, or two bugs point towards the same issue, then the status of one of the bugs is changed to "DUPLICATE".
8. Verified Bug: Once the developer fixes the bug and its status is changed to "TEST",
the software tester confirms the absence of the bug. If the bug is not detected in the
software, the tester approves that the bug is duly fixed and changes its status to
"VERIFIED".

9. Reopened Bug: If the bug is detected again even after it is claimed to be fixed by the developer, the tester changes its status to "REOPENED". The cycle repeats until the bug is ultimately fixed and closed.

10. Closed Bug: Once the bug is fixed & the tester confirms its absence, he changes its
status to "CLOSED". This is the final state which implies that the bug is fixed, tested and
approved.

As prevention is better than cure, prevention of defects in software is a much more effective and efficient way of reducing the number of defects. Some organizations focus on discovering defects and subsequently removing them. Since discovering and removing defects is an expensive and inefficient process, it is better and more economical for an organization to focus its attention on activities that prevent defects.

Typical Lifecycles of Some of the Popular Bugs are:

 Valid Bug: New -> Assigned -> Fixed but not patched -> Ready for Re-testing ->
Closed & Fix has been Verified

 Invalid Bug: New -> Not a Bug -> Closed since it is Not a Bug

 Duplicate Bug: New -> Duplicate Bug -> Closed since it is a Duplicate Bug

 Reopened Bug: New -> Assigned -> Fixed but not patched -> Ready for Re-testing -> Reopened -> Fixed but not patched -> Ready for Re-testing -> Closed & Fix has been Verified

Analysis of Bugs:
Bugs detected and logged during the testing phase provide a valuable opportunity to improve the product as well as the testing processes. The aim of every testing team remains to achieve zero customer bugs. The majority of customer bugs start pouring in during the first 6 months to 1 year of product usage.

Immediately after the completion of the product testing, the testing teams should carry
out detailed analysis of the entire set of Invalid Bugs / Duplicate Bugs /
Could_Not_Be_Reproduced Bugs and come up with adequate measures to reduce their
count in future testing efforts.

However, once customer bugs start pouring in, the testing team immediately starts analyzing each one of them to find out how and why these bugs were missed during the testing effort, and takes appropriate measures immediately.

Priority vs. Severity:

1. Priority is associated with the schedule to resolve a bug, e.g. out of many issues to be tackled, which one should be addressed first by order of its importance or urgency. Severity is associated with benchmark quality or adherence to standards; it reflects the harshness of a quality expectation.

2. Priority is largely related to the business or marketing aspect; it is a pointer towards the importance of the bug. Severity is related to the technical aspect of the product; it reflects how bad the bug is for the system.

3. Priority refers to how soon the bug should be fixed. Severity refers to the seriousness of the bug's effect on the functionality of the product; a higher effect on the functionality leads to a higher severity being assigned to the bug.

4. The priority to fix a bug is decided in consultation with the client. The Quality Assurance engineer decides the severity level, as per the risk assessment of the customer.

5. Product fixes driven by priority are based on project priorities, whereas those driven by severity are based on bug severity.

1) Generally speaking, a "High Severity" bug would also carry a "High Priority" tag along
with it. However this is not a hard & fast rule. There can be many exceptions to this rule
depending on the nature of the application and its schedule of release.

2) High Priority & Low Severity: A spelling mistake in the name of the company on
the home page of the company’s web site is certainly a High Priority issue. But it can be
awarded a Low Severity just because it is not going to affect the functionality of the Web
site / application.

3) High Severity & Low Priority: A system crash encountered in an obscure scenario, one the client is unlikely to hit, will have HIGH severity. In spite of its major effect on the functionality of the product, it may be awarded a Low Priority by the project manager, since many other important bugs are likely to gain more priority over it simply because they are more visible to the client.

Software Testing Classification


The development process involves various types of testing. Each test type addresses a
specific testing requirement. The most fundamental types of testing involved in the
development process are:

 Unit Testing
 System Testing
 Integration Testing
 Functional Testing
 Performance Testing
 Beta Testing
 Acceptance Testing

Industry experts, based upon the requirements, have categorized many types of software testing. The following list presents a brief introduction to such types.

Unit Testing:
A unit is the smallest compilable component of the software; a unit typically is the work of one programmer. The unit is tested in isolation with the help of stubs or drivers. Unit testing is functional and reliability testing in an engineering environment: producing tests for the behaviour of components of a product to ensure their correct behaviour prior to system integration. Unit testing is typically done by the programmers and not by the testers.
Integration Testing:
Testing of the application after combining / integrating its various parts to find out whether all parts function together correctly. The parts can be code modules, individual applications, client and server applications on a network, etc. It begins after two or more programs or application components have been successfully unit tested. This type of testing is especially relevant to client/server and distributed systems.

Monkey Testing:
Is a type of unit testing which runs with no specific test in mind. Here the "monkey" is the producer of any input data (which can be either file data or input from an input device).

Incremental Integration Testing:


Involves continuous testing of an application while new functionality is simultaneously
added. It requires that various aspects of an application's functionality be independent
enough to work separately before all parts of the program are completed, or that test
drivers be developed as needed. This testing is done by programmers or by testers.

Functional Testing:
Validating that an application or Web site conforms to its specifications and correctly performs all its required functions. This entails a series of tests which perform a feature-by-feature validation of behaviour, using a wide range of normal and erroneous input data. It can involve testing of the product's user interface, APIs, database management, security, installation, networking, etc. Functional testing can be performed on an automated or manual basis using black box or white box methodologies. This is usually done by the testers.

Performance Testing:
Performance testing can be applied to understand your application or web site's scalability, or to benchmark the performance of third-party products such as servers and middleware for potential purchase in a given environment. This sort of testing is particularly useful for identifying performance bottlenecks in high-use applications. Performance testing generally involves an automated test suite, as this allows easy simulation of a variety of normal, peak, and exceptional load conditions. It validates that both the online response times and batch run times meet the defined performance requirements.

System Testing:
Falls within the scope of black box testing, and as such, should require no knowledge of the inner design of the code or logic. It is conducted on a complete, integrated system to evaluate the system's compliance with its specified requirements. System testing is a more limiting type of testing; it seeks to detect defects both within the inter-assemblages and also within the system as a whole.

Alpha Testing:
Is simulated or actual operational testing by potential users / customers or an
independent test team at the developers' site. Alpha testing is often employed for off-
the-shelf software as a form of internal acceptance testing, before the software goes to
beta testing. It is usually done when the development of the software product is nearing
completion; minor design changes may still be made as a result of such testing.

Beta Testing:
Comes after alpha testing. Versions of the software, known as beta versions, are released
to a limited audience outside of the programming team. The software is released to
groups of people so that further testing can ensure the product has few faults or bugs.
Sometimes, beta versions are made available to the open public to increase the feedback
field to a maximal number of future users. Thus beta testing is done by end-users or
others, & not by the programmers or testers.

Acceptance Testing:
Is the best industry practice and it is the final testing, based on specifications provided by the end-user or customer, or based on use by end-users/customers over some limited period of time. In theory, when all the acceptance tests pass, the project can be said to be done.

Black & White Box Testing Techniques:


What is Black Box Testing:
The term "Black Box" refers to the software, which is treated as a Black Box. By treating
it as a Black Box, we mean that the system or source code is not checked at all. It is done
from the customer’s viewpoint. The test engineer engaged in Black Box testing only
knows the set of inputs & expected outputs & is unaware of how those inputs are
transformed into outputs by the software.

What is White Box Testing:


White box testing is a way of testing the external functionality of the code by examining
and testing the program code that realizes the external functionality. It is a methodology
to design the test cases that uses the control structure of the application to design test
cases. White box testing is used to test the program code, code structure and the
internal design flow

A broad comparison between the two prime testing techniques, i.e. Black Box Testing and White Box Testing, is as under:

Black Box Testing (or Functional Testing) vs. White Box Testing (or Glass Box / Structural Testing):

1. Black box: this method focuses on the functional requirements of the software, i.e., it enables the software engineer to derive sets of input conditions that will fully exercise all functional requirements for a program. White box: this method focuses on procedural details, i.e., the internal logic of a program.

2. Black box: it is not an alternative to the white box technique; rather it is a complementary approach that is likely to uncover a different class of errors. White box: it concentrates mainly on internal logic.

3. Black box testing is applied during the later stages of testing, whereas white box testing is performed early in the testing process.

4. Black box: it attempts to find errors in the following categories:
a) Incorrect or missing functions
b) Interface errors
c) Errors in data structures or external database access
d) Performance errors
e) Initialization and termination errors.
White box: it attempts to find errors in the following cases:
a) Internal logic of your program
b) Status of the program.

5. Black box: it disregards the control structure of the procedural design (i.e., we do not consider what the control structure of our program is). White box: it uses the control structure of the procedural design to derive test cases.

6. Black box testing broadens our focus to the information domain and might be called "testing in the large", i.e., testing bigger, monolithic programs. White box testing, as described by Hetzel, is "testing in the small", i.e., testing small program components (e.g., modules or small groups of modules).

7. Using black box testing techniques, we derive a set of test cases that satisfy the following criteria:
a) Test cases that reduce, by a count that is greater than one, the number of additional test cases that must be designed to achieve reasonable testing.
b) Test cases that tell us something about the presence or absence of classes of errors, rather than an error associated only with the specific tests at hand.
Using white box testing, the software engineer can derive test cases that:
a) Guarantee that all independent paths within a module have been exercised at least once.
b) Exercise all logical decisions on their true and false sides.
c) Execute all loops at their boundaries and within their operational bounds.
d) Exercise internal data structures to ensure their validity.

8. Black box: it includes the tests that are conducted at the software interface. White box: a close examination of procedural detail is done.

9. Black box tests are used to uncover errors. White box: logical paths through the software are tested by providing test cases that exercise specific sets of conditions or loops.

10. Black box: the aim is to demonstrate that software functions are operational, i.e., input is properly accepted and output is correctly produced, and that the integrity of external information (e.g. a database) is maintained. White box: however, only a limited set of logical paths is examined.

White Box Testing:


White box testing is a way of testing the external functionality of the code by
examining and testing the program code that realizes the external functionality. It is a
methodology to design the test cases that uses the control structure of the application
to design test cases. White box testing is used to test the program code, code structure
and the internal design flow.
A number of defects get amplified because of incorrect translation of requirements and design into program code. Let us look at the different techniques of white box testing.

Primarily, white box testing comprises two sub-streams of testing:

1) Static White box Testing

2) Dynamic White box Testing

Code Inspection

A review technique carried out at the end of the coding phase for a module. A
specification (and design documentation) for the module is distributed to the inspection
team in advance. M. E. Fagan recommends an inspection team of about four people.
The module programmer explains the module code to the rest of the team. A
moderator records detected faults in the code and ensures there is no discussion of
corrections. The code designer and code tester complete the team. Any faults are
corrected outside the inspection, and reinspection may take place subject to the quality
targets adopted.

Code Walkthroughs
A source code walkthrough often is called a technical code walkthrough or a peer code
review. The typical scenario finds a developer inviting his technical lead, a database
administrator, and one or more peers to a meeting to review a set of source modules
prior to production implementation. Often the modified code is indicated after the fact
on a hardcopy listing with annotations or a highlighting pen, or within the code itself
with comments.

A code walkthrough is an effective tool in the areas of quality assurance and


education. The developer is exposed to alternate methods and processes as the
technical lead and database administrator suggest and discuss improvements to the
code. The technical lead is assured of an acceptable level of quality and the database
administrator is assured of an acceptable level of database performance. The result is
better performance of the developer, his programs, and the entire application.

Statement Coverage
Statement coverage identifies which statements in a method or class have been
executed. It is a simple metric to calculate, and a number of open source products exist
that measure this level of coverage.

Ultimately, the benefit of statement coverage is its ability to identify which blocks of
code have not been executed. The problem with statement coverage, however, is that
it does not identify bugs that arise from the control flow constructs in your source code,
such as compound conditions or consecutive switch labels. This means that you easily
can get 100 percent coverage and still have glaring, uncaught bugs.
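To make this concrete, here is a small, hypothetical Python example (not from the original text): both tests together execute every statement of the function, so statement coverage reports 100 percent, yet the defect in the compound condition is never exposed.

```python
def is_adult(age, has_id):
    # Bug: the condition should be "age >= 18 and has_id";
    # using "or" lets minors with an ID through.
    if age >= 18 or has_id:
        return True
    return False


def test_is_adult():
    # These two cases execute every statement (both the True and the
    # False return), so statement coverage reports 100 percent...
    assert is_adult(30, True) is True
    assert is_adult(10, False) is False
    # ...but the failing input is_adult(10, True) is never exercised,
    # so the bug in the compound condition goes uncaught.
```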

Branch Coverage
A branch is the outcome of a decision, so branch coverage simply measures which
decision outcomes have been tested.

This sounds great because it takes a more in-depth view of the source code than simple
statement coverage, but branch coverage can also leave you wanting more.
Determining the number of branches in a method is easy. Boolean decisions obviously
have two outcomes, true and false, whereas switches have one outcome for each case
—and don't forget the default case! The total number of decision outcomes in a method
is therefore equal to the number of branches that need to be covered plus the entry
branch in the method (after all, even methods with straight line code have one branch).

Path Coverage
A path represents the flow of execution from the start of a method to its exit. A method
with N decisions has 2^N possible paths, and if the method contains a loop, it may
have an infinite number of paths. Fortunately, you can use a metric called cyclomatic
complexity to reduce the number of paths you need to test.

The cyclomatic complexity of a method is calculated as one plus the number of


unique decisions in the method. Cyclomatic complexity helps you define the number of
linearly independent paths, called the basis set, through a method. The definition of
linear independence is beyond the scope of this article, but in summary, the basis set is
the smallest set of paths that can be combined to create every other possible path
through a method.
Like branch coverage, testing the basis set of paths ensures that you test every
decision outcome, but unlike branch coverage, basis path coverage ensures that you
test all decision outcomes independently of one another. In other words, each new
basis path "flips" exactly one previously executed decision, leaving all other executed
branches unchanged. This is the crucial factor that makes basis path coverage more
robust than branch coverage and allows you to see how changing that one decision
affects the method's behavior.
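A small illustrative sketch (hypothetical function and values, not taken from the source) of a basis set: the method has two decisions, so its cyclomatic complexity is 3, and three tests cover the basis paths, each one flipping a single decision relative to the baseline.

```python
def discount_for(order_total, is_member):
    """Two independent decisions => cyclomatic complexity = 2 + 1 = 3."""
    discount = 0.0
    if order_total > 100:   # decision 1
        discount += 0.10
    if is_member:           # decision 2
        discount += 0.05
    return discount


# Basis set: 3 linearly independent paths (not all 2^2 = 4 combinations).
def test_baseline_no_decisions_taken():
    assert discount_for(50, False) == 0.0


def test_flip_only_decision_1():
    assert discount_for(150, False) == 0.10


def test_flip_only_decision_2():
    assert discount_for(50, True) == 0.05
```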

Cyclomatic complexity

Cyclomatic complexity is a software metric (measurement). It was developed by Thomas McCabe and is used to measure the complexity of a program. It directly measures the number of linearly independent paths through a program's source code.

The concept, although not the method, is somewhat similar to that of general text
complexity measured by the Flesch-Kincaid Readability Test.

Cyclomatic complexity is computed using a graph that describes the control flow of the
program. The nodes of the graph correspond to the commands of a program. A directed
edge connects two nodes if the second command might be executed immediately after
the first command.
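As a rough sketch of that idea, the following hypothetical helper approximates cyclomatic complexity as one plus the number of decision points it finds in the source. It is an approximation for illustration only, not a replacement for a real metrics tool.

```python
import ast

# Node types treated as decision points (an approximation: boolean
# short-circuit operators are not counted here).
DECISION_NODES = (ast.If, ast.For, ast.While, ast.IfExp, ast.ExceptHandler)


def cyclomatic_complexity(source: str) -> int:
    """Approximate McCabe complexity: 1 + number of decision points."""
    tree = ast.parse(source)
    decisions = sum(isinstance(node, DECISION_NODES)
                    for node in ast.walk(tree))
    return decisions + 1


print(cyclomatic_complexity("""
def grade(score):
    if score >= 90:
        return "A"
    elif score >= 75:
        return "B"
    return "C"
"""))  # prints 3: two decisions (if / elif) plus one
```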

Mutation testing
Mutation testing (or mutation analysis) is a method of software testing which involves modifying a program's source code or byte code in small ways. In short, any tests which still pass after the code has been mutated are considered defective, because they failed to detect the change. These so-called mutations are
based on well-defined mutation operators that either mimic typical programming errors
(such as using the wrong operator or variable name) or force the creation of valuable
tests (such as driving each expression to zero). The purpose is to help the tester
develop effective tests or locate weaknesses in the test data used for the program or in
sections of the code that are seldom or never accessed during execution.

Tests can be created to verify the correctness of the implementation of a given


software system. But the creation of tests still poses the question whether the tests are
correct and sufficiently cover the requirements that have originated the
implementation. (This technological problem is itself an instance of a deeper
philosophical problem named "Quis custodiet ipsos custodes?" ["Who will guard the
guards?"].) In this context, mutation testing was pioneered in the 1970s to locate and
expose weaknesses in test suites. The theory was that if a mutation was introduced
without the behavior (generally output) of the program being affected, this indicated
either that the code that had been mutated was never executed (redundant code) or
that the testing suite was unable to locate the injected fault. In order for this to
function at any scale, a large number of mutations had to be introduced into a large
program, leading to the compilation and execution of an extremely large number of
copies of the program. This problem of the expense of mutation testing has reduced its
practical use as a method of software testing.
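The following hand-rolled Python sketch (hypothetical code, with the mutant written out by hand rather than generated by a tool) shows the basic idea: a typical mutation operator swaps >= for >, and only a test suite that includes the boundary case "kills" the mutant.

```python
def is_eligible(age):
    return age >= 18


# Mutant produced by a typical mutation operator: ">=" replaced with ">".
def is_eligible_mutant(age):
    return age > 18


def test_is_eligible(impl=is_eligible):
    assert impl(20) is True
    assert impl(10) is False
    # Without this boundary case the mutant would survive, exposing a
    # weakness in the test data; with it, the mutant is killed.
    assert impl(18) is True


# Running the same tests against the mutant makes the last assert fail:
#   test_is_eligible(is_eligible_mutant)  ->  AssertionError (mutant killed)
```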

Code Based Fault injection


In software testing, fault injection is a technique for improving the coverage of a test by
introducing faults to test code paths, in particular error handling code paths, that might
otherwise rarely be followed. It is often used with stress testing and is widely considered to be an important part of developing robust software. Robustness testing (also known as syntax testing, fuzzing or fuzz testing) is a type of fault injection commonly used to test for vulnerabilities in communication interfaces such as protocols, command line parameters, or APIs.
The propagation of a fault through to an observable failure follows a well defined cycle.
When executed, a fault may cause an error, which is an invalid state within a system
boundary. An error may cause further errors within the system boundary, therefore
each new error acts as a fault, or it may propagate to the system boundary and be
observable. When error states are observed at the system boundary they are termed
failures. This mechanism is termed the fault-error-failure cycle and is a key
mechanism in dependability.
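As a minimal sketch of code-based fault injection in Python (hypothetical function; the injected fault is simulated with unittest.mock), the test below forces the file-system dependency to fail so that the rarely exercised error-handling path is actually executed.

```python
from unittest import mock


def load_config(path="settings.cfg"):
    """Returns configuration text, or a safe default if the read fails."""
    try:
        with open(path) as handle:
            return handle.read()
    except OSError:
        # Error-handling path that is rarely followed in normal testing.
        return ""


def test_load_config_survives_injected_io_fault():
    # Inject the fault: make the dependency raise instead of waiting for
    # a real disk failure to occur.
    with mock.patch("builtins.open", side_effect=OSError("disk failure")):
        assert load_config() == ""
```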

Black Box Testing:


The term 'Black Box' refers to the software, which is treated as a black box. By treating
it as a black box, we mean that the system or source code is not checked at all. It is
done from customer's viewpoint. The test engineer engaged in black box testing only
knows the set of inputs and expected outputs and is unaware of how those inputs are
transformed into outputs by the software.

Types of Black Box Testing Techniques: Following techniques are used for
performing black box testing

1) Boundary Value Analysis (BVA)


2) Equivalence Class Testing

3) Decision Table based testing

4) Cause-Effect Graphing Technique

1) Boundary Value Analysis (BVA):


This testing technique builds on the observation that the density of defects is higher towards the boundaries. This is due to the following reasons:
a) Programmers are often unsure whether to use the <= operator or the < operator when making comparisons.

b) Different terminating conditions of For-loops, While loops and Repeat loops may
cause defects to move
around the boundary conditions.

c) The requirements themselves may not be clearly understood, especially around the boundaries, so even a correctly coded program may not behave in the expected way.
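A small hypothetical example of boundary value analysis in Python: for a rule that accepts ages 18 to 60 inclusive, the test cases sit on and immediately around each boundary, exactly where < versus <= mistakes tend to hide.

```python
def accept_age(age):
    """Hypothetical rule: applicants aged 18 to 60 (inclusive) are accepted."""
    return 18 <= age <= 60


def test_lower_boundary():
    assert accept_age(17) is False  # just below the boundary
    assert accept_age(18) is True   # on the boundary
    assert accept_age(19) is True   # just above the boundary


def test_upper_boundary():
    assert accept_age(59) is True
    assert accept_age(60) is True
    assert accept_age(61) is False
```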

2) Equivalence Class Testing:


The use of equivalence classes as the basis for functional testing is appropriate in
situations like

a) When exhaustive testing is desired.

b) When there is a strong need to avoid redundancy.


The above situations are not handled well by the BVA technique, where we can see massive redundancy in the tables of test cases. In this technique, the input and output domains are divided into a finite number of equivalence classes.
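A minimal sketch (hypothetical pricing rules) of equivalence class testing: the input domain is partitioned into three classes and a single representative value is tested from each, instead of many redundant values from the same class.

```python
def shipping_fee(weight_kg):
    """Hypothetical pricing: up to 1 kg -> 5, up to 10 kg -> 12, heavier -> 30."""
    if weight_kg <= 1:
        return 5
    if weight_kg <= 10:
        return 12
    return 30


# Three equivalence classes: (0, 1], (1, 10], (10, infinity).
def test_one_representative_per_class():
    assert shipping_fee(0.5) == 5   # class 1
    assert shipping_fee(5) == 12    # class 2 (testing 2, 3, 4 ... is redundant)
    assert shipping_fee(25) == 30   # class 3
```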
3) Decision Table Based Testing:
Decision tables are a precise and compact way to model complicated logic. Out of all
the functional testing methods, the ones based on decision tables are the most rigorous
due to the reason that the decision tables enforce logical rigour.

Decision tables are ideal for describing situations in which a number of combinations of
actions are taken under varying sets of conditions.
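The sketch below (a hypothetical loan-approval rule set) shows how a decision table maps straight onto test cases: each row is one combination of conditions plus the expected action, and every row becomes exactly one test.

```python
# Each row of the decision table: (conditions) -> expected action.
DECISION_TABLE = [
    # (good_credit, has_collateral)
    ((True,  True),  "approve"),
    ((True,  False), "approve_with_higher_rate"),
    ((False, True),  "manual_review"),
    ((False, False), "reject"),
]


def decide(good_credit, has_collateral):
    """Hypothetical implementation under test."""
    if good_credit:
        return "approve" if has_collateral else "approve_with_higher_rate"
    return "manual_review" if has_collateral else "reject"


def test_every_rule_in_the_table():
    # The table enforces logical rigour: no combination of conditions is skipped.
    for (good_credit, has_collateral), expected in DECISION_TABLE:
        assert decide(good_credit, has_collateral) == expected
```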

4) Cause-Effect Graphing Technique:


This is basically a hardware testing technique adapted to software testing. It considers
only the desired external behaviour of a system. This is a testing technique that aids in
selecting test cases that logically relate Causes (inputs) to Effects (outputs) to produce
test cases.

A “Cause” represents a distinct input condition that brings about an internal change in
the system. An “Effect” represents an output condition, a system transformation or a
state resulting from a combination of causes.

Functional vs. Non-Functional Testing:

1. Functional testing tests the developed application against the business requirements. It is done using the functional specifications provided by the client, or the design specifications (such as use cases) provided by the design team. Non-functional testing tests the application against the client's quality and performance requirements; it is done based on the requirements and test scenarios defined by the client.

2. Functional testing covers:
· Unit Testing
· Smoke testing / Sanity testing
· Integration Testing (Top Down, Bottom Up Testing)
· Interface & Usability Testing
· System Testing
· Regression Testing
· Pre User Acceptance Testing (Alpha & Beta)
· User Acceptance Testing
· White Box & Black Box Testing
· Globalization & Localization Testing

Non-functional testing covers:
· Load and Performance Testing
· Ergonomics Testing
· Stress & Volume Testing
· Compatibility & Migration Testing
· Data Conversion Testing
· Security / Penetration Testing
· Operational Readiness Testing
· Installation Testing
· Security Testing (Application Security, Network Security, System Security)

Non Functional Testing:


Aim of Non-Functional Testing:
Such tests are aimed at verifying the non-functional factors related to customer expectations. The following testing techniques are employed to validate the various non-functional factors.

1) Usability Test or User interface Testing:


Usability testing verifies the user-friendliness of the application and is aimed at checking the following factors.

a) Adequacy of look and feel factors like back ground color, font size, spelling mistakes
etc..

b) Adequacy of alignment of various controls.

c) Ease of Navigation.

d) Meaningfulness of the Help document.

2) Performance Testing:
To verify the speed with which the application completes a transaction. The following performance testing techniques are employed here.

a) Load Testing or Scalability Testing: To verify whether the application supports the customer-expected load across the desired number of configured systems.

b) Stress Testing: Is aimed at estimating the peak load the application can handle. For such load testing and stress testing, automation tools like LoadRunner are deployed.

c) Data volume testing: To verify the maximum storage capacity in the application
database.
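Real load and stress tests are normally driven by dedicated tools such as the ones mentioned above, but a minimal sketch of the underlying idea looks like this (the transaction is simulated with a sleep, and the user counts and timings are made-up values):

```python
import time
from concurrent.futures import ThreadPoolExecutor
from statistics import mean


def place_order():
    """Stand-in for the transaction under test (normally an HTTP call)."""
    time.sleep(0.01)  # simulated processing time


def timed_call(_):
    start = time.perf_counter()
    place_order()
    return time.perf_counter() - start


def run_load_test(concurrent_users=50, requests_per_user=10):
    total_requests = concurrent_users * requests_per_user
    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        timings = list(pool.map(timed_call, range(total_requests)))
    print(f"average response: {mean(timings) * 1000:.1f} ms, "
          f"worst response: {max(timings) * 1000:.1f} ms")


if __name__ == "__main__":
    run_load_test()
```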

3) Security Testing:
To verify the privacy of user operations. During security testing, the major focus is laid on the following two factors.

a) Authorization: To verify whether the application permits valid users while preventing invalid users.

b) Access Control: To verify whether the application is providing the right services to the valid users.

4) Recovery Testing or Reliability Testing:


To verify whether the application is able to return to its normal state after the occurrence of some abnormal behavior, with the help of the available recovery procedures. It involves estimating the recovery time as well.

5) Compatibility Testing or Portability Testing:


To verify whether the application supports the customer-expected operating systems, network environments, browsers, etc. In compatibility testing the following two techniques are deployed.

a) Forward compatibility Testing: Is aimed at verifying whether the application supports future versions of operating systems.

b) Backward compatibility Testing: Is aimed at verifying whether the application supports older / previous versions of the operating systems.

6) Configuration Testing:
To verify whether the application supports hardware devices based on different technologies, e.g. the application may be checked against printers based on various technologies.

7) End to End Testing:


To verify how well the new software coexists with already existing software sharing common resources. This approach involves executing all transactions, right from the login session to the logout session.

8) Installation Testing:
This test is aimed to verify the following factors.

a) Availability of the License.

b) Whether all setup programs are working properly or not.

c) Availability of the required memory space.

9) Sanitation Testing:
This test is aimed at finding the presence of extra features in the application which are not specified in the client requirements.

10) Comparative Testing or Parallel Testing:


This test is aimed at understanding the strengths and weaknesses of the application vis-à-vis similar products from competitors in the market.

Functional Testing:
1)Unit Testing:
In software engineering, unit testing is a test (often automated) that validates that
individual units of source code are working properly. A unit is the smallest testable part
of an application. In procedural programming a unit may be an individual program,
function, procedure, etc., while in object-oriented programming, the smallest unit is a
method, which may belong to a base / super class, abstract class or derived / child class.

Ideally, each test case is independent from the others; test doubles such as stubs, mocks or fake objects, as well as test harnesses, can be used to assist in testing a module in isolation.
Unit testing is typically done by software developers to ensure that the code they have
written meets software requirements and behaves as the developer intended.

The goal of unit testing is to isolate each part of the program and show that the
individual parts are correct. A unit test provides a strict, written contract that the piece of
code must satisfy. As a result, it affords several benefits.

Following three steps of unit-testing effectively address the goal of finding faults in
software modules

a) Examination of the code:


The code is examined thoroughly through static testing methods like Reviews,
Walkthroughs and Inspections etc.

b) Proving the correctness of the code:


 After completion of the coding and review exercise, we would like to confirm the correctness of the code. A program is said to be correct if it implements the
functions and data properly as indicated in the design and if it interfaces properly
with all other components. One way to investigate program correctness is to view
the code as a statement of logical flow. Using mathematical logic, if we can
formulate the program as a set of assertions and theorems, we can show that the
truth of the theorems implies the correctness of the code.
 With this approach we become more strict, rigorous and precise in our
specification. This would require great amount of effort in setting up and carrying
out the proof.

c) Testing of Program components or Units or Modules:

 In the absence of simpler methods and automated tools, "Proving code


correctness" will be an elusive goal for software engineers. Proving views
programs in terms of classes of data and conditions and the proof may not involve
execution of the code. On the contrary, testing is a series of experiments to
observe the behavior of the program for various input conditions. While proof tells
us how a program will work in a hypothetical environment described by the
design and requirements, testing gives us information about how a program
works in its actual operating environment.
 To test a component, input data and conditions are chosen to demonstrate an observable behavior of the code. A test case is a particular choice of input data to be used in testing a program. Test cases are generated using either black-box or white-box approaches.
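A minimal unit test sketch using Python's built-in unittest module (the function under test is hypothetical): the unit is exercised in isolation with normal, boundary and invalid inputs.

```python
import unittest


def apply_discount(price, percent):
    """Unit under test: returns the price after a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)


class ApplyDiscountTest(unittest.TestCase):
    def test_typical_discount(self):
        self.assertEqual(apply_discount(200.0, 10), 180.0)

    def test_no_discount(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)

    def test_invalid_percent_is_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)


if __name__ == "__main__":
    unittest.main()
```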

2) Sanity Testing:
A sanity test or sanity check is a basic test to quickly evaluate the validity of a claim or
calculation. In mathematics, for example, when dividing by three or nine, verifying that
the sum of the digits of the result is a multiple of 3 or 9 (casting out nines) respectively is
a sanity test.

In computer science it is a very brief run-through of the functionality of a computer


program, system, calculation, or other analysis, to assure that the system or
methodology works as expected, often prior to a more exhaustive round of testing

In software development, the sanity test (a form of software testing which offers "quick, broad, and shallow testing") determines whether it is reasonable to proceed with further testing.

Software sanity tests are commonly conflated with smoke tests. A smoke test determines
whether it is possible to continue testing, as opposed to whether it is reasonable. A
software smoke test determines whether the program launches and whether its interfaces are accessible and responsive (for example, the responsiveness of a web page or an input button). If the smoke test fails, it is impossible to conduct a sanity test.
In contrast, the ideal sanity test exercises the smallest subset of application functions
needed to determine whether the application logic is generally functional and correct (for
example, an interest rate calculation for a financial application). If the sanity test fails, it
is not reasonable to attempt more rigorous testing. Both sanity tests and smoke tests are
ways to avoid wasting time and effort by quickly determining whether an application is
too flawed to merit any rigorous testing. Many companies run sanity tests on a weekly
build as part of their development process.

The Hello World program is often used as a sanity test for a development environment. If Hello World fails to compile, the basic environment (or the compile process the user is attempting) has a configuration problem. If it works, the environment is considered sane enough to proceed with more exhaustive testing.
3) Smoke Testing:
Smoke testing is a term used in plumbing, woodwind repair, electronics, and computer
software development. It refers to the first test made after repairs or first assembly to
provide some assurance that the system under test will not catastrophically fail. After a
smoke test proves that the pipes will not leak, the keys seal properly, the circuit will not
burn, or the software will not crash outright, the assembly is ready for more stressful
testing.

In software testing area, smoke testing is a preliminary to further testing, which should
reveal simple failures severe enough to reject a prospective software release. In this
case, the smoke is metaphorical.

Smoke testing is done by developers before the build is released or by testers before
accepting a build for further testing.

In software engineering, a smoke test generally consists of a collection of tests that can
be applied to a newly created or repaired computer program. Sometimes the tests are
performed by the automated system that builds the final software. In this sense a smoke
test is the process of validating code changes before the changes are checked into the
larger product's official source code collection. After code reviews, smoke testing is the most cost-effective method for identifying and fixing defects in software; some even believe that it is the most effective of all.

In software testing, a smoke test is a collection of written tests that are performed on a
system prior to being accepted for further testing. This is also known as a build
verification test. This is a "shallow and wide" approach to the application. The tester
"touches" all areas of the application without getting too deep, looking for answers to
basic questions like, "Can I launch the test item at all?", "Does it open to a window?", "Do
the buttons on the window do things?". There is no need to get down to field validation or
business flows. If you get a "No" answer to basic questions like these, then the
application is so badly broken, there's effectively nothing there to allow further testing.
These written tests can either be performed manually or using an automated tool. When
automated tools are used, the tests are often initiated by the same process that
generates the build itself.
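A minimal sketch of such a build verification suite in Python (the App class is a hypothetical stand-in for the product under test): each check is shallow and wide, asking only whether the application launches and its main areas respond.

```python
import unittest


class App:
    """Hypothetical stand-in for the application under test."""
    def start(self):
        self.running = True
        return True

    def open_main_window(self):
        return {"title": "Main", "buttons": ["New", "Open", "Exit"]}


class SmokeTest(unittest.TestCase):
    """Shallow-and-wide checks; no field validation or business flows."""

    def test_application_launches(self):
        self.assertTrue(App().start())

    def test_main_window_opens_and_has_buttons(self):
        window = App().open_main_window()
        self.assertEqual(window["title"], "Main")
        self.assertTrue(window["buttons"])


if __name__ == "__main__":
    unittest.main()
```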

4) Integration Testing:
Integration testing (sometimes called Integration and Testing, abbreviated as I&T) is the
phase of software testing in which individual software modules are combined and tested
as a group. It follows unit testing and precedes system testing.

Integration testing takes as its input modules that have been unit tested, groups them in
larger aggregates, applies tests defined in an integration test plan to those aggregates,
and delivers as its output the integrated system ready for system testing.

Purpose:
The purpose of integration testing is to verify functional, performance and reliability
requirements placed on major design items. These "design items", i.e. assemblages (or
groups of units), are exercised through their interfaces using black box testing, success
and error cases being simulated via appropriate parameter and data inputs. Simulated
usage of shared data areas and inter-process communication is tested and individual
subsystems are exercised through their input interface. Test cases are constructed to
test that all components within assemblages interact correctly, for example across
procedure calls or process activations, and this is done after testing individual modules,
i.e. unit testing.

The overall idea is a "building block" approach, in which verified assemblages are added
to a verified base which is then used to support the integration testing of further
assemblages.
a) Top Down Integration Testing



Top-down integration testing is an incremental integration testing technique which begins by testing the top-level module and progressively adds in lower-level modules one by one. Lower-level modules are normally simulated by stubs which mimic the functionality of the lower-level modules. As you add lower-level code, you replace the stubs with the actual components. Top-down integration can be performed and tested in a breadth-first or depth-first manner.
Advantages :

Drivers do not have to be written when top-down testing is used.


It provides an early working version of the program, so design defects can be found and corrected early.

Disadvantages

Stubs have to be written with utmost care, as they simulate the setting of output parameters. It is difficult to have other people or third parties perform this testing; mostly the developers will have to spend time on it.

b)Bottom Up Integration Testing


In bottom-up integration testing, the modules at the lowest level are developed first, and the other modules which go towards the 'main' program are integrated and tested one at a time. Bottom-up integration also uses test drivers to drive and pass appropriate data to the lower-level modules. As and when the code for the other modules gets ready, these drivers are replaced with the actual modules. In this approach, the lower-level modules are tested extensively, thus making sure that the most heavily used modules are tested properly.

Advantages

 The behavior of the interaction points is crystal clear, as components are added in a controlled manner and tested repeatedly.
 Appropriate for applications where a bottom-up design methodology is used.

Disadvantages

 Writing and maintaining test drivers or harnesses is more difficult than writing stubs.
 This approach is not suitable for software developed using a top-down approach.
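As a sketch of the scaffolding both approaches rely on (hypothetical checkout and tax modules), the top-down side replaces a missing lower-level module with a stub, while the bottom-up side uses a throwaway driver to exercise a finished lower-level module before its real caller exists.

```python
# --- Top-down: the high-level module is real, the lower level is a stub ---

def tax_service_stub(amount):
    """Stub standing in for the not-yet-integrated tax module."""
    return 0.25 * amount  # canned, simplified answer


def checkout_total(amount, tax_service=tax_service_stub):
    """Top-level module under test; the stub is swapped for the real
    tax module once that module has been integrated."""
    return amount + tax_service(amount)


def test_checkout_with_stub():
    assert checkout_total(100) == 125.0


# --- Bottom-up: the low-level module is real, a driver exercises it ---

def real_tax_service(amount):
    """Low-level module that was finished first."""
    return 0.25 * amount


def driver():
    """Throwaway driver that feeds data to the low-level module until the
    real caller (checkout_total) is ready."""
    for amount, expected in [(100, 25.0), (0, 0.0)]:
        assert real_tax_service(amount) == expected


if __name__ == "__main__":
    test_checkout_with_stub()
    driver()
    print("integration scaffolding checks passed")
```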

5) Usability Testing:

Usability testing is a black-box testing technique. The aim is to observe people using the
product to discover errors and areas of improvement. Usability testing generally involves
measuring how well test subjects respond in four areas: efficiency, accuracy, recall, and
emotional response. The results of the first test can be treated as a baseline or control
measurement; all subsequent tests can then be compared to the baseline to indicate
improvement.

 Performance -- How much time, and how many steps, are required for people to
complete basic tasks? (For example, find something to buy, create a new account,
and order the item.)
 Accuracy -- How many mistakes did people make? (And were they fatal or
recoverable with the right information?)
 Recall -- How much does the person remember afterwards or after periods of non-
use?
 Emotional response -- How does the person feel about the tasks completed? Is the
person confident, stressed? Would the user recommend this system to a friend?

6) System Testing:
System testing of software or hardware is testing conducted on a complete, integrated
system to evaluate the system's compliance with its specified requirements. System
testing falls within the scope of black box testing, and as such, should require no
knowledge of the inner design of the code or logic.

As a rule, system testing takes, as its input, all of the "integrated" software components
that have successfully passed integration testing and also the software system itself
integrated with any applicable hardware system(s). The purpose of integration testing is
to detect any inconsistencies between the software units that are integrated together
(called assemblages) or between any of the assemblages and the hardware. System
testing is a more limiting type of testing; it seeks to detect defects both within the "inter-
assemblages" and also within the system as a whole.

Testing the whole system:


System testing is performed on the entire system in the context of a Functional
Requirement Specification(s) (FRS) and / or a System Requirement Specification (SRS).
System testing is an investigatory testing phase, where the focus is to have almost a destructive attitude and test not only the design, but also the behavior
and even the believed expectations of the customer. It is also intended to test up to and
some suggest beyond the bounds defined in the software / hardware requirements
specification(s) - although how this is meaningfully possible is undefined.

Types of system testing:


The following examples are different types of System testing:

1) GUI software testing

2) Usability testing

3) Performance testing

4) Compatibility testing

5) Load testing

6) Volume testing

7) Stress testing

8) Security testing

9) Scalability testing

10) Sanity testing

11) Smoke testing

12) Exploratory testing

13) Ad hoc testing

14) Regression testing


15) Reliability testing

16) Recovery testing

17) Installation testing

18) Maintenance testing

7) Regression Testing:
Regression testing is any type of software testing which seeks to uncover regression
bugs. Regression bugs occur whenever software functionality that previously worked as
desired, stops working or no longer works in the same way that was previously planned.
Typically regression bugs occur as an unintended consequence of program changes.

After modifying software, either for a change in functionality or to fix defects, a


regression test re-runs previously passing tests on the modified software to ensure that
the modifications haven't unintentionally caused a regression of previous functionality.
These regression tests are often automated.

More specific forms of regression testing are known as sanity testing, when quickly
checking for erratic behavior, and smoke testing when testing for basic functionality.
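A minimal sketch of an automated regression check (the invoice formatter and its expected values are hypothetical): outputs captured while the feature was known to be good are replayed after every change, so any unintended change in behaviour shows up as a failure.

```python
import unittest


def format_invoice_number(order_id, year):
    """Existing behaviour that must not change across releases."""
    return f"INV-{year}-{order_id:06d}"


class InvoiceNumberRegressionTest(unittest.TestCase):
    # Expected values recorded while the feature was known to be working.
    KNOWN_GOOD = {
        (42, 2023): "INV-2023-000042",
        (7, 2024): "INV-2024-000007",
    }

    def test_previously_working_cases_still_pass(self):
        for (order_id, year), expected in self.KNOWN_GOOD.items():
            self.assertEqual(format_invoice_number(order_id, year), expected)


if __name__ == "__main__":
    unittest.main()
```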

8) Pre Acceptance Testing:

Alpha Test: The first test of newly developed hardware or software in a laboratory
setting. When the first round of bugs has been fixed, the product goes into beta test with
actual users. For custom software, the customer may be invited into the vendor's
facilities for an alpha test to ensure the client's vision has been interpreted properly by
the developer.

Beta Test: A test of new or revised hardware or software that is performed by users at
their facilities under normal operating conditions. Beta testing follows alpha testing.
Vendors of packaged software often offer their customers the opportunity of beta testing
new releases or versions, and the beta testing of elaborate products such as operating
systems can take months

9)Acceptance Testing:

Acceptance testing is black-box testing performed on a software system prior to its delivery.

In some engineering sub-disciplines, it is known as Functional Testing, Black-Box testing,


Release Acceptance, QA testing, Application Testing, Confidence Testing, Final Testing,
Validation Testing, Usability Testing, or Factory Acceptance Testing.

Acceptance testing generally involves running a suite of tests on the completed system.
Each individual test, known as a case, exercises a particular operating condition of the
user's environment or feature of the system, and will result in a pass or fail Boolean
outcome.

Process:
The acceptance test suite is run against the supplied input data or using an acceptance
test script to direct the testers. Then the results obtained are compared with the
expected results. If there is a correct match for every case, the test suite is said to pass.
If not, the system may either be rejected or accepted on conditions previously agreed
between the sponsor and the manufacturer.
The objective is to provide confidence that the delivered system meets the business
requirements of both sponsors and users. The acceptance phase may also act as the final
quality gateway, where any quality defects not previously detected may be uncovered

10. Globalization & Localization Testing:
Globalization testing verifies that the application functions correctly with different languages, character sets and regional settings (such as date, time, number and currency formats), while localization testing verifies that the application has been properly adapted and translated for a specific target locale.

Difference between Static or Dynamic testing?

Static Testing:
The Verification activities fall into the category of Static Testing. During static testing,
you have a checklist to check whether the work you are doing is going as per the set
standards of the organization. These standards can be for Coding, Integrating and
Deployment. Reviews, Inspections and Walkthroughs are static testing methodologies.

Dynamic Testing:
Dynamic Testing involves working with the software, giving input values and checking if
the output is as expected. These are the Validation activities. Unit Tests, Integration
Tests, System Tests and Acceptance Tests are few of the Dynamic Testing
methodologies.

Difference between verification and validation

Verification:

Verification ensures the product is designed to deliver all functionality to the customer; it
typically involves reviews and meetings to evaluate documents, plans, code,
requirements and specifications; this can be done with checklists, issues lists,
walkthroughs and inspection meetings.

Validation:

Validation ensures that functionality, as defined in requirements, is the intended


behavior of the product; validation typically involves actual testing and takes place after
verification activities are completed.

Test case:

A test case in software engineering is a set of conditions or variables under which a


tester will determine whether an application or software system is working correctly or
not. The mechanism for determining whether a software program or system has passed
or failed such a test is known as a test oracle. In some settings, an oracle could be a
requirement or use case, while in others it could be a heuristic. It may take many test
cases to determine that a software program or system is functioning correctly. Test
cases are often referred to as test scripts, particularly when written. Written test cases
are usually collected into test suites.
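As a rough illustration (hypothetical login function and data), a test case can be reduced to an identifier, the input, and the expected result, with the expected result acting as the oracle that turns each run into a pass or fail:

```python
def login(username, password):
    """Hypothetical system under test."""
    return username == "admin" and password == "s3cret"


# Each test case: id, input, and expected result (the oracle).
TEST_CASES = [
    {"id": "TC-01", "input": ("admin", "s3cret"), "expected": True},
    {"id": "TC-02", "input": ("admin", "wrong"),  "expected": False},
    {"id": "TC-03", "input": ("", ""),            "expected": False},
]


def run_suite(cases):
    for case in cases:
        actual = login(*case["input"])
        verdict = "PASS" if actual == case["expected"] else "FAIL"
        print(f"{case['id']}: {verdict}")


if __name__ == "__main__":
    run_suite(TEST_CASES)
```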

Test Suite:
In software development, a test suite, less commonly known as a validation suite, is a
collection of test cases that are intended to be used to test a software program to show
that it has some specified set of behaviours. A test suite often contains detailed
instructions or goals for each collection of test cases and information on the system
configuration to be used during testing

Test Plan:

a management planning document that shows:

How the testing will be done (including SUT configurations).

Who will do it

What will be tested

How long it will take (although this may vary, depending upon resource
availability).

What the test coverage will be, i.e. what quality level is required
Test Design Specification:

detailing test conditions and the expected results as well as test pass criteria.

Test Case Specification:

specifying the test data for use in running the test conditions identified in the Test
Design Specification

Test Procedure Specification:

detailing how to run each test, including any set-up preconditions and the steps that
need to be followed

Test Item Transmittal Report:

reporting on when tested software components have progressed from one stage of
testing to the next

Test Log:

recording which test cases were run, who ran them, in what order, and whether each test passed or failed

Test Incident Report:

detailing, for any test that failed, the actual versus expected result, and other
information intended to throw light on why a test has failed. This document is
deliberately named as an incident report, and not a fault report. The reason is that a
discrepancy between expected and actual results can occur for a number of reasons
other than a fault in the system. These include the expected results being wrong, the
test being run wrongly, or inconsistency in the requirements meaning that more than one
interpretation could be made. The report consists of all details of the incident such as
actual and expected results, when it failed, and any supporting evidence that will help in
its resolution. The report will also include, if possible, an assessment of the impact of an
incident upon testing.

Test Summary Report:


A management report providing any important information uncovered by the tests
accomplished, and including assessments of the quality of the testing effort, the quality
of the software system under test, and statistics derived from Incident Reports. The
report also records what testing was done and how long it took, in order to improve any
future test planning. This final document is used to indicate whether the software system
under test is fit for purpose according to whether or not it has met acceptance criteria
defined by project stakeholders.

Test Strategy:
It is a company-level document developed by the quality assurance manager or quality analyst category of people. It defines the testing approach needed to reach the standards. During test strategy document preparation, QA people concentrate on the factors below:
1. scope and Objective
2. Budget control
3. Testing approach
4. Test deliverables
5. roles and responsibilities
6. communication and status reporting
7. automation tools (if needed)
8. testing measurements
9. risks and mitigations
10. change configuration management
11. training plan

Test Script:
A test script in software testing is a set of instructions that will be performed on the
system under test to test that the system functions as expected.

Test Entry & Exit Criteria:


Entry Criteria:
1. All code of the application is unit tested.
2. Test plan and test cases are reviewed and approved.
3. QA/testers have gained significant knowledge of the application.
4. The test environment / testware is prepared.
5. The application build has been received.

Exit Criteria:
Deadlines (release deadlines, testing deadlines, etc.)
Test cases completed with certain percentage passed
Test budget depleted
Coverage of code/functionality/requirements reaches a
specified point
Bug rate falls below a certain level
Beta or alpha testing period ends
