Week 14 Software Testing
Software Testing
Topics
Development testing
Test-driven development
Release testing
User testing
Program testing
Testing is intended to show that a program does what it is
intended to do and to discover program defects before it is
put into use.
When you test software, you execute a program
using artificial data.
You check the results of the test run for errors, anomalies
or information about the program’s non-functional
attributes.
Testing can reveal the presence of errors, NOT their
absence.
Testing is part of a more general verification and
validation process, which also includes static validation
techniques.
Program testing goals
To demonstrate to the developer and the
customer that the software meets its
requirements.
For custom software, this means that there should be
at least one test for every requirement in the
requirements document. For generic software
products, it means that there should be tests for all of
the system features, plus combinations of these
features, that will be incorporated in the product
release.
To discover situations in which the behavior of
the software is incorrect, undesirable or does not
conform to its specification.
Defect testing is concerned with rooting out
undesirable system behavior such as system crashes,
unwanted interactions with other systems, incorrect
computations and data corruption.
Validation and defect testing
The first goal leads to validation testing
You expect the system to perform correctly using a
given set of test cases that reflect the system’s
expected use.
The second goal leads to defect testing
The test cases are designed to expose defects. The test
cases in defect testing can be deliberately obscure and
need not reflect how the system is normally used.
Testing process goals
Validation testing
To demonstrate to the developer and the system
customer that the software meets its requirements
A successful test shows that the system operates as
intended.
Defect testing
To discover faults or defects in the software where its
behaviour is incorrect or not in conformance with its
specification
A successful test is a test that makes the system
perform incorrectly and so exposes a defect in
the system.
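To make the contrast concrete, here is a small sketch (the discount function is a hypothetical example, not part of the slides): a validation test exercises expected use, while defect tests deliberately probe unusual or invalid inputs.

```python
# Hypothetical example contrasting validation testing (expected use)
# with defect testing (inputs chosen to expose faults).

def discount(price, percent):
    """Return price reduced by percent; percent must be in 0..100."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return price * (1 - percent / 100)

# Validation test: reflects the system's normal, expected use.
assert discount(200.0, 25) == 150.0

# Defect tests: deliberately obscure inputs that need not reflect
# normal use, chosen to expose incorrect behaviour.
try:
    discount(200.0, 150)          # out-of-range percentage
except ValueError:
    pass                          # correct behaviour: input rejected
assert discount(0.0, 50) == 0.0   # boundary value: zero price
```

A "successful" run of the first assertion demonstrates correct operation; a "successful" defect test is one that would make a faulty implementation misbehave.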
An input-output model of program testing
Figure: input test data Ie is processed by the system to produce output test results Oe; a subset of these outputs reveals the presence of defects.
Verification vs validation
Verification:
"Are we building the product right?"
The software should conform to its
specification.
Validation:
"Are we building the right product?"
The software should do what the user really
requires.
V & V confidence
Aim of V & V is to establish confidence that the
system is ‘fit for purpose’.
Depends on system’s purpose, user expectations
and marketing environment
Software purpose
• The level of confidence depends on how critical the software
is to an organisation.
User expectations
• Users may have low expectations of certain kinds of software.
Marketing environment
• Getting a product to market early may be more important
than finding defects in the program.
Inspections and testing
Figure: software inspections analyse static system representations; program testing executes a system prototype.
Weather station object interface
WeatherStation
identifier
reportWeather()
reportStatus()
powerSave(instruments)
remoteControl(commands)
reconfigure(commands)
restart(instruments)
shutdown(instruments)
Weather station testing
Need to define test cases for reportWeather,
calibrate, test, startup and shutdown.
Using a state model, identify sequences of
state transitions to be tested and the event
sequences to cause these transitions
For example:
Shutdown -> Running -> Shutdown
Configuring -> Running -> Testing -> Transmitting -> Running
Running -> Collecting -> Running -> Summarizing -> Transmitting -> Running
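The sequences above can be turned into executable test cases. The sketch below assumes a hypothetical WeatherStation class with an explicit transition table (event names are assumed for illustration; they are not defined in the slides): each test drives the station through a sequence of events and checks the resulting state at every step.

```python
# Sketch of state-transition testing for a hypothetical WeatherStation.
# The states come from the sequences above; event names are assumed.

class WeatherStation:
    # Allowed transitions: (current state, event) -> next state
    TRANSITIONS = {
        ("Shutdown", "restart"): "Running",
        ("Running", "shutdown"): "Shutdown",
        ("Running", "collect"): "Collecting",
        ("Collecting", "done"): "Running",
        ("Running", "summarise"): "Summarizing",
        ("Summarizing", "transmit"): "Transmitting",
        ("Transmitting", "done"): "Running",
    }

    def __init__(self):
        self.state = "Shutdown"

    def handle(self, event):
        key = (self.state, event)
        if key not in self.TRANSITIONS:
            raise RuntimeError(f"illegal event {event!r} in state {self.state}")
        self.state = self.TRANSITIONS[key]

# Test case for the sequence Shutdown -> Running -> Shutdown:
ws = WeatherStation()
for event, expected in [("restart", "Running"), ("shutdown", "Shutdown")]:
    ws.handle(event)
    assert ws.state == expected
```

Each transition sequence to be tested becomes one such event/expected-state table, so coverage of the state model can be checked mechanically.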
Automated testing
Figure: sequence of weather report generation in the weather information system. SatComms sends request(report) to WeatherStation; WeatherStation's reportWeather() leads Commslink to get(summary) from WeatherData, which runs summarise(); the report is then returned via send(report) and reply(report), with acknowledgements at each step.
Testing policies
Exhaustive system testing is impossible so testing policies
which define the required system test coverage may be
developed.
Examples of testing policies:
All system functions that are accessed through menus should
be tested.
Combinations of functions (e.g. text formatting) that
are accessed through the same menu must be tested.
Where user input is provided, all functions must be tested
with both correct and incorrect input.
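The last policy can be implemented as a table-driven test. The sketch below uses a hypothetical parse_age function (an assumption for illustration): each table row pairs an input with either an expected result or an expected rejection.

```python
# Table-driven test implementing the policy "all functions must be
# tested with both correct and incorrect input".
# parse_age is a hypothetical example function.

def parse_age(text):
    """Parse a user-supplied age string; reject non-numeric or out-of-range."""
    value = int(text)            # raises ValueError on non-numeric input
    if not 0 <= value <= 150:
        raise ValueError("age out of range")
    return value

cases = [
    ("42", 42, None),            # correct input
    ("0", 0, None),              # boundary value, still correct
    ("-5", None, ValueError),    # incorrect: negative
    ("abc", None, ValueError),   # incorrect: non-numeric
]

for text, expected, error in cases:
    if error is None:
        assert parse_age(text) == expected
    else:
        try:
            parse_age(text)
            raise AssertionError(f"{text!r} should have been rejected")
        except error:
            pass
```

Keeping correct and incorrect inputs in one table makes it easy to audit that the policy is actually satisfied for each function.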
Test-driven development
Test-driven development (TDD) is an approach
to program development in which you interleave
testing and code development.
Tests are written before code and ‘passing’ the
tests is the critical driver of development.
You develop code incrementally, along with a test
for that increment. You don’t move on to the next
increment until the code that you have developed
passes its test.
TDD was introduced as part of agile methods
such as Extreme Programming. However, it can
also be used in plan-driven development
processes.
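One TDD increment can be sketched as follows (the Stack class is a hypothetical example): the test for the next small increment is written first, it fails because the code does not exist yet, and then just enough code is written to make it pass.

```python
# Sketch of a single TDD increment for a hypothetical Stack class.
# Step 1: write the test for the next small increment first.

def test_push_then_pop_returns_last_item():
    s = Stack()
    s.push(1)
    s.push(2)
    assert s.pop() == 2

# Step 2: running the test at this point fails (Stack is not defined yet).
# Step 3: write just enough code to make the test pass, then refactor.

class Stack:
    def __init__(self):
        self._items = []

    def push(self, item):
        self._items.append(item)

    def pop(self):
        return self._items.pop()

# Step 4: run the test again; once it passes, move to the next increment.
test_push_then_pop_returns_last_item()
```

Only after the test passes do you identify the next piece of functionality and repeat the cycle.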
Test-driven development
Figure (TDD process activities): Identify new functionality -> Write test -> Run test; on fail, implement the functionality and refactor, then run the test again; on pass, identify the next new functionality.