Certified Tester Foundation Level Syllabus: Version 2018
Copyright Notice
Copyright © International Software Testing Qualifications Board (hereinafter called ISTQB®)
This document may be copied in its entirety, or extracts made, if the source is acknowledged.
ISTQB is a registered trademark of the International Software Testing Qualifications Board.
Copyright © 2018 the authors for the update 2018 Klaus Olsen (chair), Tauhida Parveen (vice chair), Rex
Black (project manager), Debra Friedenberg, Matthias Hamburg, Judy McKay, Meile Posthuma, Hans
Schaefer, Radoslaw Smilgin, Mike Smith, Steve Toms, Stephanie Ulrich, Marie Walsh, and Eshraka
Zakaria.
Copyright © 2011 the authors for the update 2011 Thomas Müller (chair), Debra Friedenberg, and the
ISTQB WG Foundation Level.
Copyright © 2010 the authors for the update 2010 Thomas Müller (chair), Armin Beer, Martin Klonk, and
Rahul Verma.
Copyright © 2007 the authors for the update 2007 Thomas Müller (chair), Dorothy Graham, Debra
Friedenberg and Erik van Veenendaal.
Copyright © 2005, the authors Thomas Müller (chair), Rex Black, Sigrid Eldh, Dorothy Graham, Klaus
Olsen, Maaret Pyhäjärvi, Geoff Thompson, and Erik van Veenendaal.
The authors hereby transfer the copyright to the International Software Testing Qualifications Board
(ISTQB). The authors (as current copyright holders) and ISTQB (as the future copyright holder) have
agreed to the following conditions of use:
Any individual or training company may use this syllabus as the basis for a training course if the authors
and the ISTQB are acknowledged as the source and copyright owners of the syllabus and provided that
any advertisement of such a training course may mention the syllabus only after submission for official
accreditation of the training materials to an ISTQB recognized Member Board.
Any individual or group of individuals may use this syllabus as the basis for articles, books, or other
derivative writings if the authors and the ISTQB are acknowledged as the source and copyright owners of
the syllabus.
Any ISTQB-recognized Member Board may translate this syllabus and license the syllabus (or its
translation) to other parties.
Revision History
ISTQB 2018, 9 December 2017: Alpha review 2.5 release – technical edit of the 2.0 release, no new content added
ISTQB 2018, 22 November 2017: Alpha review 2.0 release – Certified Tester Foundation Level Syllabus Major Update 2018; see Appendix C – Release Notes for details
ISTQB 2018, 12 June 2017: Alpha review release – Certified Tester Foundation Level Syllabus Major Update 2018; see Appendix C – Release Notes
ASQF V2.2, July 2003: ASQF Syllabus Foundation Level Version 2.2, “Lehrplan Grundlagen des Software-testens” (Foundation Level syllabus, “Foundations of Software Testing”)
Table of Contents
Copyright Notice
Revision History
Table of Contents
Acknowledgements
0 Introduction
0.1 Purpose of this Syllabus
0.2 The Certified Tester Foundation Level in Software Testing
0.3 Examinable Learning Objectives and Cognitive Levels of Knowledge
0.4 The Foundation Level Certificate Exam
0.5 Accreditation
0.6 Level of Detail
0.7 How this Syllabus is Organized
1 Fundamentals of Testing
1.1 What is Testing?
1.1.1 Typical Objectives of Testing
1.1.2 Testing and Debugging
1.2 Why is Testing Necessary?
1.2.1 Testing’s Contributions to Success
1.2.2 Quality Assurance and Testing
1.2.3 Errors, Defects, and Failures
1.2.4 Defects, Root Causes and Effects
1.3 Seven Testing Principles
1.4 Test Process
1.4.1 Test Process in Context
1.4.2 Test Activities and Tasks
1.4.3 Test Work Products
1.4.4 Traceability between the Test Basis and Test Work Products
1.5 The Psychology of Testing
1.5.1 Human Psychology and Testing
1.5.2 Tester’s and Developer’s Mindsets
2 Testing Throughout the Software Development Lifecycle
2.1 Software Development Lifecycle Models
2.1.1 Software Development and Software Testing
2.1.2 Software Development Lifecycle Models in Context
2.2 Test Levels
2.2.1 Component Testing
2.2.2 Integration Testing
2.2.3 System Testing
2.2.4 Acceptance Testing
2.3 Test Types
2.3.1 Functional Testing
2.3.2 Non-functional Testing
2.3.3 White-box Testing
2.3.4 Change-related Testing
2.3.5 Test Types and Test Levels
2.4 Maintenance Testing
2.4.1 Triggers for Maintenance
2.4.2 Impact Analysis for Maintenance
3 Static Testing
Level 1: Remember (K1)
Level 2: Understand (K2)
Level 3: Apply (K3)
10 Appendix C – Release Notes
11 Index
Acknowledgements
This document was formally released by the General Assembly of the ISTQB (4 June 2018).
It was produced by a team from the International Software Testing Qualifications Board: Klaus Olsen
(chair), Tauhida Parveen (vice chair), Rex Black (project manager), Debra Friedenberg, Judy McKay,
Meile Posthuma, Hans Schaefer, Radoslaw Smilgin, Mike Smith, Steve Toms, Stephanie Ulrich, Marie
Walsh, and Eshraka Zakaria.
The team thanks Rex Black and Dorothy Graham for their technical editing, and the review team, the
cross-review team, and the Member Boards for their suggestions and input.
The following persons participated in the reviewing, commenting and balloting of this syllabus: Tom
Adams, Tobias Ahlgren, Xu Aiguo, Chris Van Bael, Katalin Balla, Graham Bath, Gualtiero Bazzana, Arne
Becher, Veronica Belcher, Lars Hilmar Bjørstrup, Ralf Bongard, Armin Born, Robert Bornelind, Mette
Bruhn-Pedersen, Geza Bujdoso, Earl Burba, Filipe Carlos, Young Jae Choi, Greg Collina, Alessandro
Collino, Cui Zhe, Taz Daughtrey, Matthias Daigl, Wim Decoutere, Frans Dijkman, Klaudia Dussa-Zieger,
Yonit Elbaz, Ofer Feldman, Mark Fewster, Florian Fieber, David Frei, Debra Friedenberg, Conrad
Fujimoto, Pooja Gautam, Thorsten Geiselhart, Chen Geng, Christian Alexander Graf, Dorothy Graham,
Michel Grandjean, Richard Green, Attila Gyuri, Jon Hagar, Kobi Halperin, Matthias Hamburg, Zsolt
Hargitai, Satoshi Hasegawa, Berit Hatten, Wang Hongwei, Tamás Horváth, Leanne Howard, Chinthaka
Indikadahena, J. Jayapradeep, Kari Kakkonen, Gábor Kapros, Beata Karpinska, Karl Kemminger,
Kwanho Kim, Seonjoon Kim, Cecilia Kjellman, Johan Klintin, Corne Kruger, Gerard Kruijff, Peter Kunit,
Hyeyong Kwon, Bruno Legeard, Thomas Letzkus, Alon Linetzki, Balder Lingegård, Tilo Linz, Hongbiao
Liu, Claire Lohr, Ine Lutterman, Marek Majernik, Rik Marselis, Romanos Matthaios, Judy McKay, Fergus
McLachlan, Dénes Medzihradszky, Stefan Merkel, Armin Metzger, Don Mills, Gary Mogyorodi, Ninna
Morin, Ingvar Nordström, Adam Novak, Avi Ofer, Magnus C Ohlsson, Joel Oliviera, Monika Stocklein
Olsen, Kenji Onishi, Francisca Cano Ortiz, Gitte Ottosen, Tuula Pääkkönen, Ana Paiva, Tal Pe'er, Helmut
Pichler, Michaël Pilaeten, Horst Pohlmann, Andrew Pollner, Meile Posthuma, Vitalijs Puiso, Salvatore
Reale, Stuart Reid, Ralf Reissing, Shark Ren, Miroslav Renda, Randy Rice, Adam Roman, Jan Sabak,
Hans Schaefer, Ina Schieferdecker, Franz Schiller, Jianxiong Shen, Klaus Skafte, Mike Smith, Cristina
Sobrero, Marco Sogliani, Murian Song, Emilio Soresi, Helder Sousa, Michael Sowers, Michael Stahl,
Lucjan Stapp, Li Suyuan, Toby Thompson, Steve Toms, Sagi Traybel, Sabine Uhde, Stephanie Ulrich,
Philippos Vakalakis, Erik van Veenendaal, Marianne Vesterdal, Ernst von Düring, Salinda
Wickramasinghe, Marie Walsh, Søren Wassard, Hans Weiberg, Paul Weymouth, Hyungjin Yoon, John
Young, Surong Yuan, Ester Zabar, and Karolina Zmitrowicz.
International Software Testing Qualifications Board Working Group Foundation Level (Edition 2018):
Klaus
Olsen (chair), Tauhida Parveen (vice chair), Rex Black (project manager), Dani Almog, Debra
Friedenberg,
Rashed Karim, Johan Klintin, Vipul Kocher, Corne Kruger, Sunny Kwon, Judy McKay, Thomas Müller,
Igal Levi, Ebbe Munk, Kenji Onishi, Meile Posthuma, Eric Riou du Cosquer, Hans Schaefer, Radoslaw
Smilgin, Mike Smith, Steve Toms, Stephanie Ulrich, Marie Walsh, Eshraka Zakaria, and Stevan
Zivanovic. The core team thanks the review team and all Member Boards for their suggestions.
International Software Testing Qualifications Board Working Group Foundation Level (Edition 2011):
Thomas Müller (chair), Debra Friedenberg. The core team thanks the review team (Dan Almog, Armin
Beer, Rex Black, Julie Gardiner, Judy McKay, Tuula Pääkkönen, Eric Riou du Cosquer, Hans Schaefer,
Stephanie Ulrich, Erik van Veenendaal), and all Member Boards for their suggestions.
International Software Testing Qualifications Board Working Group Foundation Level (Edition 2010):
Thomas Müller (chair), Rahul Verma, Martin Klonk and Armin Beer. The core team thanks the review
team (Rex Black, Mette Bruhn-Pederson, Debra Friedenberg, Klaus Olsen, Judy McKay, Tuula
Pääkkönen, Meile Posthuma, Hans Schaefer, Stephanie Ulrich, Pete Williams, Erik van Veenendaal),
and all Member Boards for their suggestions.
International Software Testing Qualifications Board Working Group Foundation Level (Edition 2007):
Thomas Müller (chair), Dorothy Graham, Debra Friedenberg, and Erik van Veenendaal. The core team
thanks the review team (Hans Schaefer, Stephanie Ulrich, Meile Posthuma, Anders Pettersson, and
Wonil Kwon) and all the Member Boards for their suggestions.
International Software Testing Qualifications Board Working Group Foundation Level (Edition 2005):
Thomas Müller (chair), Rex Black, Sigrid Eldh, Dorothy Graham, Klaus Olsen, Maaret Pyhäjärvi, Geoff
Thompson and Erik van Veenendaal. The core team thanks the review team and all Member Boards for
their suggestions.
0 Introduction
Learning objectives support the business outcomes and are used to create the Certified Tester
Foundation Level exams.
In general, all contents of this syllabus are examinable at a K1 level, except for the Introduction and
Appendices. That is, the candidate may be asked to recognize, remember, or recall a keyword or concept
mentioned in any of the six chapters. The knowledge levels of the specific learning objectives are shown
at the beginning of each chapter, and classified as follows:
• K1: remember
• K2: understand
• K3: apply
Further details and examples of learning objectives are given in Appendix B.
The definitions of all terms listed as keywords just below chapter headings shall be remembered (K1),
even if not explicitly mentioned in the learning objectives.
Exams may be taken as part of an accredited training course or taken independently (e.g., at an exam
center or in a public exam). Completion of an accredited training course is not a pre-requisite for the
exam.
0.5 Accreditation
An ISTQB Member Board may accredit training providers whose course material follows this syllabus.
Training providers should obtain accreditation guidelines from the Member Board or body that performs
the accreditation. An accredited course is recognized as conforming to this syllabus, and is allowed to
have an ISTQB exam as part of the course.
The level of detail in this syllabus allows internationally consistent courses and exams. In order to achieve
this goal, the syllabus consists of:
• General instructional objectives describing the intention of the Foundation Level
• A list of terms that students must be able to recall
• Learning objectives for each knowledge area, describing the cognitive learning outcome to be
achieved
• A description of the key concepts, including references to sources such as accepted literature or
standards
The syllabus content is not a description of the entire knowledge area of software testing; it reflects the
level of detail to be covered in Foundation Level training courses. It focuses on test concepts and
techniques that can apply to all software projects, including Agile projects. This syllabus does not contain
any specific learning objectives related to any particular software development lifecycle or method, but it
does discuss how these concepts apply in Agile projects, other types of iterative and incremental
lifecycles, and in sequential lifecycles.
A common misperception of testing is that it only consists of running tests, i.e., executing the software
and checking the results. As described in section 1.4, software testing is a process which includes many
different activities; test execution (including checking of results) is only one of these activities. The test
process also includes activities such as test planning, analyzing, designing, and implementing tests,
reporting test progress and results, and evaluating the quality of a test object.
Some testing does involve the execution of the component or system being tested; such testing is called
dynamic testing. Other testing does not involve the execution of the component or system being tested;
such testing is called static testing. So, testing also includes reviewing work products such as
requirements, user stories, and source code.
Another common misperception of testing is that it focuses entirely on verification of requirements, user
stories, or other specifications. While testing does involve checking whether the system meets specified
requirements, it also involves validation, which is checking whether the system will meet user and other
stakeholder needs in its operational environment(s).
Test activities are organized and carried out differently in different lifecycles (see section 2.1).
Not all unexpected test results are failures. False positives may occur due to errors in the way tests were
executed, or due to defects in the test data, the test environment, or other testware, or for other reasons.
The inverse situation can also occur, where similar errors or defects lead to false negatives. False
negatives are tests that do not detect defects that they should have detected; false positives are reported
as defects, but aren’t actually defects.
4. Defects cluster together
Predicted defect clusters, and the actual observed defect clusters in test or operation, are an important
input into a risk analysis used to focus the test effort (as mentioned in principle 2).
5. Beware of the pesticide paradox
If the same tests are repeated over and over again, eventually these tests no longer find any new defects.
To detect new defects, existing tests and test data may need changing, and new tests may need to be
written. (Tests are no longer effective at finding defects, just as pesticides are no longer effective at killing
insects after a while.) In some cases, such as automated regression testing, the pesticide paradox has a
beneficial outcome, which is the relatively low number of regression defects.
6. Testing is context dependent
Testing is done differently in different contexts. For example, safety-critical industrial control software is
tested differently from an e-commerce mobile app. As another example, testing in an Agile project is done
differently than testing in a sequential lifecycle project (see section 2.1).
7. Absence-of-errors is a fallacy
Some organizations expect that testers can run all possible tests and find all possible defects, but
principles 2 and 1, respectively, tell us that this is impossible. Further, it is a fallacy (i.e., a mistaken belief)
to expect that just finding and fixing a large number of defects will ensure the success of a system. For
example, thoroughly testing all specified requirements and fixing all defects found could still produce a
system that is difficult to use, that does not fulfill the users’ needs and expectations, or that is inferior
compared to other competing systems.
See Myers 2011, Kaner 2002, and Weinberg 2008 for examples of these and other testing principles.
o Complexity
o Contractual and regulatory requirements
Test planning
Test planning involves activities that define the objectives of testing and the approach for meeting test
objectives within constraints imposed by the context (e.g., specifying suitable test techniques and tasks,
and formulating a test schedule for meeting a deadline). Test plans may be revisited based on feedback
from monitoring and control activities. Test planning is further explained in section 5.2.
Test monitoring and control
Test monitoring involves the on-going comparison of actual progress against the test plan using any test
monitoring metrics defined in the test plan. Test control involves taking actions necessary to meet the
objectives of the test plan (which may be updated over time). Test monitoring and control are supported
by the evaluation of exit criteria, which are referred to as the definition of done in some lifecycles (see
ISTQB-AT Foundation Level Agile Tester Extension Syllabus). For example, the evaluation of exit criteria
for test execution as part of a given test level may include:
• Checking test results and logs against specified coverage criteria
• Assessing the level of component or system quality based on test results and logs
• Determining if more tests are needed (e.g., if tests originally intended to achieve a certain level of
product risk coverage failed to do so, requiring additional tests to be written and executed)
Test progress against the plan is communicated to stakeholders in test progress reports, including
deviations from the plan and information to support any decision to stop testing.
Test monitoring and control are further explained in section 5.3.
Test analysis
During test analysis, the test basis is analyzed to identify testable features and define associated test
conditions. In other words, test analysis determines “what to test” in terms of measurable coverage
criteria.
Test analysis includes the following major activities:
• Analyzing the test basis appropriate to the test level being considered, for example:
o Requirement specifications, such as business requirements, functional requirements,
system requirements, user stories, epics, use cases, or similar work products that specify
desired functional and non-functional component or system behavior
o Design and implementation information, such as system or software architecture
diagrams or documents, design specifications, call flows, modelling diagrams (e.g., UML
or entity-relationship diagrams), interface specifications, or similar work products that
specify component or system structure
o The implementation of the component or system itself, including code, database
metadata and queries, and interfaces
o Risk analysis reports, which may consider functional, non-functional, and structural
aspects of the component or system
• Evaluating the test basis and test items to identify defects of various types
• Identifying features and sets of features to be tested
• Defining and prioritizing test conditions for each feature based on analysis of the test basis, and
considering functional, non-functional, and structural characteristics, other business and technical
factors, and levels of risks
• Capturing bi-directional traceability between each element of the test basis and the associated
test conditions (see sections 1.4.3 and 1.4.4)
The application of black-box, white-box, and experience-based test techniques can be useful in the
process of test analysis (see chapter 4) to reduce the likelihood of omitting important test conditions and
to define more precise and accurate test conditions.
In some cases, test analysis produces test conditions which are to be used as test objectives in test
charters. Test charters are typical work products in some types of experience-based testing (see section
4.4.2). When these test objectives are traceable to the test basis, coverage achieved during such
experience-based testing can be measured.
The identification of defects during test analysis is an important potential benefit, especially where no
other review process is being used and/or the test process is closely connected with the review process.
Such test analysis activities not only verify whether the requirements are consistent, properly expressed,
and complete, but also validate whether the requirements properly capture customer, user, and other
stakeholder needs. For example, techniques such as behavior driven development (BDD) and
acceptance test driven development (ATDD), which involve generating test conditions and test cases
from user stories and acceptance criteria prior to coding, also verify, validate, and detect defects in the
user stories and acceptance criteria (see ISTQB Foundation Level Agile Tester Extension syllabus).
Test design
During test design, the test conditions are elaborated into high-level test cases, sets of high-level test
cases, and other testware. So, test analysis answers the question “what to test?” while test design
answers the question “how to test?”
Test design includes the following major activities:
• Designing and prioritizing test cases and sets of test cases
• Identifying necessary test data to support test conditions and test cases
• Designing the test environment and identifying any required infrastructure and tools
• Capturing bi-directional traceability between the test basis, test conditions, test cases, and test
procedures (see section 1.4.4)
The elaboration of test conditions into test cases and sets of test cases during test design often involves
using test techniques (see chapter 4).
As with test analysis, test design may also result in the identification of similar types of defects in the test
basis. Also as with test analysis, the identification of defects during test design is an important potential
benefit.
• Verifying and updating bi-directional traceability between the test basis, test conditions, test
cases, test procedures, and test results.
Test completion
Test completion activities collect data from completed test activities to consolidate experience, testware,
and any other relevant information. Test completion activities occur at project milestones such as when a
software system is released, a test project is completed (or cancelled), an Agile project iteration is
finished (e.g., as part of a retrospective meeting), a test level is completed, or a maintenance release has
been completed.
Test completion includes the following major activities:
• Checking whether all defect reports are closed, entering change requests or product backlog
items for any defects that remain unresolved at the end of test execution
• Creating a test summary report to be communicated to stakeholders
• Finalizing and archiving the test environment, the test data, the test infrastructure, and other
testware for later reuse
• Handing over the testware to the maintenance teams, other project teams, and/or other
stakeholders who could benefit from its use
• Analyzing lessons learned from the completed test activities to determine changes needed for
future iterations, releases, and projects
• Using the information gathered to improve test process maturity
Test monitoring and control work products should also address project management concerns, such as
task completion, resource allocation and usage, and effort.
Test monitoring and control, and the work products created during these activities, are further explained
in section 5.3 of this syllabus.
Test analysis work products
Test analysis work products include defined and prioritized test conditions, each of which is ideally
bidirectionally traceable to the specific element(s) of the test basis it covers. For exploratory testing, test
analysis may involve the creation of test charters. Test analysis may also result in the discovery and
reporting of defects in the test basis.
Test design work products
Test design results in test cases and sets of test cases to exercise the test conditions defined in test
analysis. It is often a good practice to design high-level test cases, without concrete values for input data
and expected results. Such high-level test cases are reusable across multiple test cycles with different
concrete data, while still adequately documenting the scope of the test case. Ideally, each test case is
bidirectionally traceable to the test condition(s) it covers.
Test design also results in the design and/or identification of the necessary test data, the design of the
test environment, and the identification of infrastructure and tools, though the extent to which these
results are documented varies significantly.
Test conditions defined in test analysis may be further refined in test design.
Test implementation work products
Test implementation work products include:
• Test procedures and the sequencing of those test procedures
• Test suites
• A test execution schedule
Ideally, once test implementation is complete, achievement of coverage criteria established in the test
plan can be demonstrated via bi-directional traceability between test procedures and specific elements of
the test basis, through the test cases and test conditions.
In some cases, test implementation involves creating work products using or used by tools, such as
service virtualization and automated test scripts.
Test implementation also may result in the creation and verification of test data and the test environment.
The completeness of the documentation of the data and/or environment verification results may vary
significantly.
The test data serve to assign concrete values to the inputs and expected results of test cases. Such
concrete values, together with explicit directions about the use of the concrete values, turn high-level test
cases into executable low-level test cases. The same high-level test case may use different test data
when executed on different releases of the test object. The concrete expected results which are
associated with concrete test data are identified by using a test oracle.
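As an illustration (not part of the syllabus), the following sketch shows how concrete test data turn a high-level test case into executable low-level test cases, using a parameterized pytest test. The pricing function, the data values, and the expected results (acting as a simple test oracle) are all hypothetical.

```python
import pytest

# Hypothetical component under test: a simple pricing rule.
def calculate_discount(order_total: float, is_member: bool) -> float:
    """Return the discount for an order (illustrative implementation only)."""
    rate = 0.10 if is_member else 0.0
    if order_total >= 100.0:
        rate += 0.05
    return round(order_total * rate, 2)

# High-level test case: "the correct discount is applied for membership and order size".
# Each row of concrete test data (inputs plus the expected result from the test oracle,
# here the stated business rule) turns it into an executable low-level test case.
@pytest.mark.parametrize(
    "order_total, is_member, expected_discount",
    [
        (50.0, False, 0.00),    # small order, non-member: no discount
        (50.0, True, 5.00),     # small order, member: 10%
        (150.0, False, 7.50),   # large order, non-member: 5%
        (150.0, True, 22.50),   # large order, member: 15%
    ],
)
def test_discount_calculation(order_total, is_member, expected_discount):
    assert calculate_discount(order_total, is_member) == expected_discount
```

Re-running the same high-level test case against a later release would only require updating the rows of concrete data, not the test logic itself.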
In exploratory testing, some test design and implementation work products may be created during test
execution, though the extent to which exploratory tests (and their traceability to specific elements of the
test basis) are documented may vary significantly.
Test conditions defined in test analysis may be further refined in test implementation.
Test execution work products
Test execution work products include:
• Documentation of the status of individual test cases or test procedures (e.g., ready to run, pass,
fail, blocked, deliberately skipped, etc.)
• Defect reports (see section 5.6)
• Documentation about which test item(s), test object(s), test tools, and testware were involved in
the testing
Ideally, once test execution is complete, the status of each element of the test basis can be determined
and reported via bi-directional traceability with the associated test procedure(s). For example, we can
say which requirements have passed all planned tests, which requirements have failed tests and/or have
defects associated with them, and which requirements have planned tests still waiting to be run. This
enables verification that the coverage criteria have been met, and enables the reporting of test results in
terms that are understandable to stakeholders.
Test completion work products
Test completion work products include test summary reports, action items for improvement of subsequent
projects or iterations (e.g., following a project Agile retrospective), change requests or product backlog
items, and finalized testware.
1.4.4 Traceability between the Test Basis and Test Work Products
As mentioned in section 1.4.3, test work products and the names of those work products vary
significantly. Regardless of these variations, in order to implement effective test monitoring and control, it
is important to establish and maintain traceability throughout the test process between each element of
the test basis and the various test work products associated with that element, as described above. In
addition to the evaluation of test coverage, good traceability supports:
• Analyzing the impact of changes
• Making testing auditable
• Meeting IT governance criteria
• Improving the understandability of test progress reports and test summary reports to include the
status of elements of the test basis (e.g., requirements that passed their tests, requirements that
failed their tests, and requirements that have pending tests)
• Relating the technical aspects of testing to stakeholders in terms that they can understand
• Providing information to assess product quality, process capability, and project progress against
business goals
Some test management tools provide test work product models that match part or all of the test work
products outlined in this section. Some organizations build their own management systems to organize
the work products and provide the information traceability they require.
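To make this tangible, here is a minimal sketch (not from the syllabus) of how bi-directional traceability between test basis elements and test cases might be recorded and used to report coverage and status; all requirement and test case identifiers, and the statuses, are hypothetical.

```python
from collections import defaultdict

# Forward traceability: requirement (test basis element) -> test cases covering it
requirement_to_tests = {
    "REQ-001": ["TC-01", "TC-02"],
    "REQ-002": ["TC-03"],
    "REQ-003": [],  # coverage gap: no test defined yet
}

# Latest execution status of each test case
test_status = {"TC-01": "passed", "TC-02": "failed", "TC-03": "not run"}

# Backward traceability: test case -> requirements it covers (derived automatically)
test_to_requirements = defaultdict(list)
for req, tests in requirement_to_tests.items():
    for tc in tests:
        test_to_requirements[tc].append(req)

# Report the status of each element of the test basis, as a test progress report might
for req, tests in requirement_to_tests.items():
    if not tests:
        summary = "no tests planned"
    elif any(test_status.get(tc) == "failed" for tc in tests):
        summary = "has failing tests"
    elif all(test_status.get(tc) == "passed" for tc in tests):
        summary = "all planned tests passed"
    else:
        summary = "tests still pending"
    print(f"{req}: {summary} ({', '.join(tests) or 'none'})")
```

A real test management tool would maintain these links automatically, but the reporting idea is the same: each element of the test basis can be labeled as passed, failed, pending, or not yet covered.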
finding defects prior to release, and so forth. These are different sets of objectives which require different
mindsets. Bringing these mindsets together helps to achieve a higher level of product quality.
A mindset reflects an individual’s assumptions and preferred methods for decision making and
problem-solving. A tester’s mindset should include curiosity, professional pessimism, a critical eye,
attention to detail, and a motivation for good and positive communications and relationships. A tester’s
mindset tends to grow and mature as the tester gains experience.
A developer’s mindset may include some of the elements of a tester’s mindset, but successful developers
are often more interested in designing and building solutions than in contemplating what might be wrong
with those solutions. In addition, confirmation bias makes it difficult to find mistakes in their own work.
With the right mindset, developers are able to test their own code. Different software development
lifecycle models often have different ways of organizing the testers and test activities. Having some of the
test activities done by independent testers increases defect detection effectiveness, which is particularly
important for large, complex, or safety-critical systems. Independent testers bring a perspective which is
different than that of the work product authors (i.e., business analysts, product owners, designers, and
programmers), since they have different cognitive biases from the authors.
2 Testing Throughout the Software Development Lifecycle (100 minutes)
Keywords
acceptance testing, alpha testing, beta testing, commercial off-the-shelf (COTS), component integration
testing, component testing, confirmation testing, contractual acceptance testing, functional testing, impact
analysis, integration testing, maintenance testing, non-functional testing, operational acceptance testing,
regression testing, regulatory acceptance testing, sequential development model, system integration
testing, system testing, test basis, test case, test environment, test level, test object, test objective, test
type, user acceptance testing, white-box testing
Examples include:
• Rational Unified Process: Each iteration tends to be relatively long (e.g., two to three months),
and the feature increments are correspondingly large, such as two or three groups of related
features
• Scrum: Each iteration tends to be relatively short (e.g., hours, days, or a few weeks), and the
feature increments are correspondingly small, such as a few enhancements and/or two or three
new features
• Kanban: Implemented with or without fixed-length iterations, which can deliver either a single
enhancement or feature upon completion, or can group features together to release at once
• Spiral (or prototyping): Involves creating experimental increments, some of which may be heavily
re-worked or even abandoned in subsequent development work
Components or systems developed using these methods often involve overlapping and iterating test
levels throughout development. Ideally, each feature is tested at several test levels as it moves towards
delivery. In some cases, teams use continuous delivery or continuous deployment, both of which involve
significant automation of multiple test levels as part of their delivery pipelines. Many development efforts
using these methods also include the concept of self-organizing teams, which can change the way testing
work is organized as well as the relationship between testers and developers.
These methods form a growing system, which may be released to end-users on a feature-by-feature
basis, on an iteration-by-iteration basis, or in a more traditional major-release fashion. Regardless of
whether the software increments are released to end-users, regression testing is increasingly important
as the system grows.
In contrast to sequential models, iterative and incremental models may deliver usable software in weeks
or even days, but may only deliver the complete set of requirements product over a period of months or
even years.
For more information on software testing in the context of Agile development, see ISTQB-AT Foundation
Level Agile Tester Extension Syllabus, Black 2017, Crispin 2008, and Gregory 2015.
In addition, software development lifecycle models themselves may be combined. For example, a V-model
may be used for the development and testing of the backend systems and their integrations, while an
Agile development model may be used to develop and test the front-end user interface (UI) and
functionality. Prototyping may be used early in a project, with an incremental development model adopted
once the experimental phase is complete.
Internet of Things (IoT) systems, which consist of many different objects, such as devices, products, and
services, typically apply separate software development lifecycle models for each object. This presents a
particular challenge for the development of Internet of Things system versions. Additionally the software
development lifecycle of such objects places stronger emphasis on the later phases of the software
development lifecycle after they have been introduced to operational use (e.g., operate, update, and
decommission phases).
However, in Agile development especially, writing automated component test cases may precede writing
application code.
For example, consider test driven development (TDD). Test driven development is highly iterative and is
based on cycles of developing automated test cases, then building and integrating small pieces of code,
then executing the component tests, correcting any issues, and re-factoring the code. This process
continues until the component has been completely built and all component tests are passing. Test driven
development is an example of a test-first approach. While test driven development originated in eXtreme
Programming (XP), it has spread to other forms of Agile and also to sequential lifecycles (see ISTQB-AT
Foundation Level Agile Tester Extension Syllabus).
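As a brief illustration of the cycle described above (not part of the syllabus), the sketch below shows a single TDD-style pass in pytest; the leap-year checker and its tests are hypothetical examples.

```python
# A minimal sketch of one test driven development (TDD) cycle, using pytest.

# Step 1 (red): the automated component tests are written first, before the code exists.
def test_years_divisible_by_4_are_leap_years():
    assert is_leap_year(2024) is True

def test_century_years_are_not_leap_years_unless_divisible_by_400():
    assert is_leap_year(1900) is False
    assert is_leap_year(2000) is True

# Step 2 (green): just enough code is written and integrated to make the tests pass.
def is_leap_year(year: int) -> bool:
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# Step 3 (refactor): the code is cleaned up while the tests are re-executed to confirm
# the behavior is unchanged; the cycle then repeats for the next small piece of code.
```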
• Sequence diagrams
• Interface and communication protocol specifications
• Use cases
• Architecture at component or system level
• Workflows
• External interface definitions
Test objects
Typical test objects for integration testing include:
• Subsystems
• Databases
• Infrastructure
• Interfaces
• APIs
• Microservices
Typical defects and failures
Examples of typical defects and failures for component integration testing include:
• Incorrect data, missing data, or incorrect data encoding
• Incorrect sequencing or timing of interface calls
• Interface mismatch
• Failures in communication between components
• Unhandled or improperly handled communication failures between components
• Incorrect assumptions about the meaning, units, or boundaries of the data being passed between
components
Examples of typical defects and failures for system integration testing include:
• Inconsistent message structures between systems
• Incorrect data, missing data, or incorrect data encoding
• Interface mismatch
• Failures in communication between systems
• Unhandled or improperly handled communication failures between systems
• Incorrect assumptions about the meaning, units, or boundaries of the data being passed between
systems
• Failure to comply with mandatory security regulations
The test environment should ideally correspond to the final target or production environment.
Test basis
Examples of work products that can be used as a test basis for system testing include:
• System and software requirement specifications (functional and non-functional)
• Risk analysis reports
• Use cases
• Epics and user stories
• Models of system behavior
• State diagrams
• System and user manuals
Test objects
Typical test objects for system testing include:
• Applications
• Hardware/software systems
• Operating systems
• System under test (SUT)
• System configuration and configuration data
Typical defects and failures
Examples of typical defects and failures for system testing include:
• Incorrect calculations
• Incorrect or unexpected system functional or non-functional behavior
• Incorrect control and/or data flows within the system
• Failure to properly and completely carry out end-to-end functional tasks
• Failure of the system to work properly in the production environment(s)
• Failure of the system to work as described in system and user manuals
Specific approaches and responsibilities
System testing should focus on the overall, end-to-end behavior of the system as a whole, both functional
and non-functional. System testing should use the most appropriate techniques (see chapter 4) for the
aspect(s) of the system to be tested. For example, a decision table may be created to verify whether
functional behavior is as described in business rules.
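For instance, a decision table for a hypothetical business rule could be exercised directly by a parameterized system-level test, as in the sketch below; the rule, the thresholds, and the function standing in for the system behavior are all assumptions made for illustration.

```python
import pytest

def approve_credit_line(credit_score: int, annual_income: int) -> bool:
    """Illustrative stand-in for the system behavior under test."""
    return credit_score >= 650 and annual_income >= 30_000

# Each row corresponds to one column (rule) of the decision table:
# conditions -> expected action.
@pytest.mark.parametrize(
    "credit_score, annual_income, expected_approval",
    [
        (700, 40_000, True),   # rule 1: both conditions met -> approve
        (700, 20_000, False),  # rule 2: income too low -> reject
        (600, 40_000, False),  # rule 3: score too low -> reject
        (600, 20_000, False),  # rule 4: neither condition met -> reject
    ],
)
def test_credit_line_decision_table(credit_score, annual_income, expected_approval):
    assert approve_credit_line(credit_score, annual_income) is expected_approval
```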
Independent testers typically carry out system testing. Defects in specifications (e.g., missing user stories,
incorrectly stated business requirements, etc.) can lead to a lack of understanding of, or disagreements
about, expected system behavior. Such situations can cause false positives and false negatives, which
waste time and reduce defect detection effectiveness, respectively. Early involvement of testers in user
story refinement or static testing activities, such as reviews, helps to reduce the incidence of such
situations.
The main objective of operational acceptance testing is building confidence that the operators or system
administrators can keep the system working properly for the users in the operational environment, even
under exceptional or difficult conditions.
Contractual and regulatory acceptance testing
Contractual acceptance testing is performed against a contract’s acceptance criteria for producing
custom-developed software. Acceptance criteria should be defined when the parties agree to the
contract. Contractual acceptance testing is often performed by users or by independent testers.
Regulatory acceptance testing is performed against any regulations that must be adhered to, such as
government, legal, or safety regulations. Regulatory acceptance testing is often performed by users or by
independent testers, sometimes with the results being witnessed or audited by regulatory agencies.
The main objective of contractual and regulatory acceptance testing is building confidence that
contractual or regulatory compliance has been achieved.
Alpha and beta testing
Alpha and beta testing are typically used by developers of commercial off-the-shelf (COTS) software who
want to get feedback from potential or existing users, customers, and/or operators before the software
product is put on the market. Alpha testing is performed at the developing organization’s site, not by the
development team, but by potential or existing customers, and/or operators or an independent test team.
Beta testing is performed by potential or existing customers, and/or operators at their own locations. Beta
testing may come after alpha testing, or may occur without any preceding alpha testing having occurred.
One objective of alpha and beta testing is building confidence among potential or existing customers,
and/or operators that they can use the system under normal, everyday conditions, and in the operational
environment(s) to achieve their objectives with minimum difficulty, cost, and risk. Another objective may
be the detection of defects related to the conditions and environment(s) in which the system will be used,
especially when those conditions and environment(s) are difficult to replicate by the development team.
Test basis
Examples of work products that can be used as a test basis for any form of acceptance testing include:
• Business processes
• User or business requirements
• Regulations, legal contracts and standards
• Use cases
• System requirements
• System or user documentation
• Installation procedures
• Risk analysis reports
In addition, as a test basis for deriving test cases for operational acceptance testing, one or more of the
following work products can be used:
• Backup and restore procedures
• Disaster recovery procedures
• Non-functional requirements
• Operations documentation
• Deployment and installation instructions
• Performance targets
• Database packages
• Security standards or regulations
Typical test objects
Typical test objects for any form of acceptance testing include:
• System under test
• System configuration and configuration data
• Business processes for a fully integrated system
• Recovery systems and hot sites (for business continuity and disaster recovery testing)
• Operational and maintenance processes
• Forms
• Reports
• Existing and converted production data
Typical defects and failures
Examples of typical defects for any form of acceptance testing include:
• System workflows do not meet business or user requirements
• Business rules are not implemented correctly
• System does not satisfy contractual or regulatory requirements
• Non-functional failures such as security vulnerabilities, inadequate performance efficiency under
high loads, or improper operation on a supported platform
Specific approaches and responsibilities
Acceptance testing is often the responsibility of the customers, business users, product owners, or
operators of a system, and other stakeholders may be involved as well.
Acceptance testing is often thought of as the last test level in a sequential development lifecycle, but it
may also occur at other times, for example:
• Acceptance testing of a COTS software product may occur when it is installed or integrated
• Acceptance testing of a new functional enhancement may occur before system testing
In iterative development, project teams can employ various forms of acceptance testing during and at the
end of each iteration, such as those focused on verifying a new feature against its acceptance criteria and
those focused on validating that a new feature satisfies the users’ needs. In addition, alpha tests and beta
tests may occur, either at the end of each iteration, after the completion of each iteration, or after a series
of iterations. User acceptance tests, operational acceptance tests, regulatory acceptance tests, and
contractual acceptance tests also may occur, either at the close of each iteration, after the completion of
each iteration, or after a series of iterations.
Contrary to common misperceptions, non-functional testing can and often should be performed at all test
levels, and done as early as possible. The late discovery of non-functional defects can be extremely
dangerous to the success of a project.
Black-box techniques (see section 4.2) may be used to derive test conditions and test cases for
non-functional testing. For example, boundary value analysis can be used to define the stress conditions
for performance tests.
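As a small illustration (not from the syllabus), boundary value analysis around an assumed non-functional requirement such as “the system shall support up to 500 concurrent users” could be used to pick the load levels for performance and stress tests; the constant and helper functions below are hypothetical.

```python
MAX_CONCURRENT_USERS = 500  # assumed non-functional requirement

def boundary_load_levels(maximum: int) -> list[int]:
    """Two-value boundary analysis around the supported maximum:
    just inside the valid partition and just outside it."""
    return [maximum, maximum + 1]

def stress_test_load_levels(maximum: int) -> list[int]:
    """Typical load points derived from the boundary: nominal load, just below,
    at, and just above the supported maximum, and well beyond it (stress)."""
    return [maximum // 2, maximum - 1, maximum, maximum + 1, maximum * 2]

if __name__ == "__main__":
    print("Boundary values:", boundary_load_levels(MAX_CONCURRENT_USERS))
    print("Stress levels:", stress_test_load_levels(MAX_CONCURRENT_USERS))
```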
The thoroughness of non-functional testing can be measured through non-functional coverage.
Non-functional coverage is the extent to which some type of non-functional element has been exercised
by tests, and is expressed as a percentage of the type(s) of element being covered. For example, using
traceability between tests and supported devices for a mobile application, the percentage of devices
which are addressed by compatibility testing can be calculated, potentially identifying coverage gaps.
Non-functional test design and execution may involve special skills or knowledge, such as knowledge of
the inherent weaknesses of a design or technology (e.g., security vulnerabilities associated with particular
programming languages) or the particular user base (e.g., the personas of users of healthcare facility
management systems).
Refer to ISTQB-ATA Advanced Level Test Analyst Syllabus, ISTQB-ATTA Advanced Level Technical
Test Analyst Syllabus, ISTQB-SEC Advanced Level Security Tester Syllabus, and other ISTQB specialist
modules for more details regarding the testing of non-functional quality characteristics.
may also be tested with new tests if, for instance, the defect was missing functionality. At the very
least, the steps to reproduce the failure(s) caused by the defect must be re-executed on the new
software version. The purpose of a confirmation test is to confirm whether the original defect has
been successfully fixed.
• Regression testing: It is possible that a change made in one part of the code, whether a fix or
another type of change, may accidentally affect the behavior of other parts of the code, whether
within the same component, in other components of the same system, or even in other systems.
Changes may include changes to the environment, such as a new version of an operating system
or database management system. Such unintended side-effects are called regressions.
Regression testing involves running tests to detect such unintended side-effects.
Confirmation testing and regression testing are performed at all test levels.
Especially in iterative and incremental development lifecycles (e.g., Agile), new features, changes to
existing features, and code refactoring result in frequent changes to the code, which also requires
change-related testing. Due to the evolving nature of the system, confirmation and regression testing are
very important. This is particularly relevant for Internet of Things systems where individual objects (e.g.,
devices) are frequently updated or replaced.
Regression test suites are run many times and generally evolve slowly, so regression testing is a strong
candidate for automation. Automation of these tests should start early in the project (see chapter 6).
• For system integration testing, reliability tests are designed to evaluate system robustness if the
credit score microservice fails to respond.
• For acceptance testing, usability tests are designed to evaluate the accessibility of the banker’s
credit processing interface for people with disabilities.
The following are examples of white-box tests:
• For component testing, tests are designed to achieve complete statement and decision coverage
(see section 4.3) for all components that perform financial calculations (a brief illustrative sketch
follows this list)
• For component integration testing, tests are designed to exercise how each screen in the browser
interface passes data to the next screen and to the business logic.
• For system testing, tests are designed to cover sequences of web pages that can occur during a
credit line application.
• For system integration testing, tests are designed to exercise all possible inquiry types sent to the
credit score microservice.
• For acceptance testing, tests are designed to cover all supported financial data file structures and
value ranges for bank-to-bank transfers.
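The sketch below (an illustration, not part of the syllabus) shows what such component-level white-box tests might look like for a small, hypothetical financial calculation: the three tests together execute every statement and both outcomes of each decision.

```python
def monthly_interest(balance: float, annual_rate: float) -> float:
    """Hypothetical financial calculation used only to illustrate coverage."""
    if balance <= 0:                 # decision 1
        return 0.0
    interest = balance * annual_rate / 12
    if interest < 0.01:              # decision 2: round very small amounts up
        interest = 0.01
    return round(interest, 2)

# These tests achieve 100% statement coverage and 100% decision coverage:
def test_zero_or_negative_balance_earns_no_interest():
    assert monthly_interest(0.0, 0.05) == 0.0     # decision 1 True

def test_small_positive_balance_earns_minimum_interest():
    assert monthly_interest(1.0, 0.05) == 0.01    # decision 1 False, decision 2 True

def test_regular_balance_earns_prorated_interest():
    assert monthly_interest(1200.0, 0.05) == 5.0  # decision 1 False, decision 2 False
```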
Finally, the following are examples for change-related tests:
• For component testing, automated regression tests are built for each component and included
within the continuous integration framework.
• For component integration testing, tests are designed to confirm fixes to interface-related defects
as the fixes are checked into the code repository.
• For system testing, all tests for a given workflow are re-executed if any screen on that workflow
changes.
• For system integration testing, tests of the application interacting with the credit scoring
microservice are re-executed daily as part of continuous deployment of that microservice.
• For acceptance testing, all previously-failed tests are re-executed after a defect found in
acceptance testing is fixed.
While this section provides examples of every test type across every level, it is not necessary, for all
software, to have every test type represented across every level. However, it is important to run
applicable test types at each level, especially the earliest level where the test type occurs.
When any changes are made as part of maintenance, maintenance testing should be performed, both to
evaluate the success with which the changes were made and to check for possible side-effects (e.g.,
regressions) in parts of the system that remain unchanged (which is usually most of the system).
Maintenance testing focuses on testing the changes to the system, as well as testing unchanged parts
that might have been affected by the changes. Maintenance can involve planned releases and unplanned
releases (hot fixes).
A maintenance release may require maintenance testing at multiple test levels, using various test types,
based on its scope. The scope of maintenance testing depends on:
• The degree of risk of the change, for example, the degree to which the changed area of software
communicates with other components or systems
• The size of the existing system
• The size of the change
Impact analysis may be done before a change is made, to help decide if the change should be made,
based on the potential consequences in other areas of the system.
Impact analysis can be difficult if:
• Specifications (e.g., business requirements, user stories, architecture) are out of date or missing
• Test cases are not documented or are out of date
• Bi-directional traceability between tests and the test basis has not been maintained
• Tool support is weak or non-existent
• The people involved do not have domain and/or system knowledge
• Insufficient attention has been paid to the software's maintainability during development
The results of a work product review vary, depending on the review type and formality, as described in
section 3.2.3.
• Reviewers should be technical peers of the author, and technical experts in the same or other
disciplines
• Individual preparation before the review meeting is required
• Review meeting is optional, ideally led by a trained facilitator (typically not the author)
• Scribe is mandatory, ideally not the author
• Use of checklists is optional
• Potential defect logs and review reports are typically produced
Inspection
• Main purposes: detecting potential defects, evaluating quality and building confidence in the work
product, preventing future similar defects through author learning and root cause analysis
• Possible further purposes: motivating and enabling authors to improve future work products and
the software development process, achieving consensus
• Follows a defined process with formal documented outputs, based on rules and checklists
• Uses clearly defined roles, such as those specified in section 3.2.2 which are mandatory, and
may include a dedicated reader (who reads the work product aloud during the review meeting)
• Individual preparation before the review meeting is required
• Reviewers are either peers of the author or experts in other disciplines that are relevant to the
work product
• Specified entry and exit criteria are used
• Scribe is mandatory
• Review meeting is led by a trained facilitator (not the author)
• Author cannot act as the review leader, reader, or scribe
• Potential defect logs and review report are produced
• Metrics are collected and used to improve the entire software development process, including the
inspection process
A single work product may be the subject of more than one type of review. If more than one type of
review is used, the order may vary. For example, an informal review may be carried out before a technical
review, to ensure the work product is ready for a technical review.
The types of reviews described above can be done as peer reviews, i.e., reviews done by colleagues at
approximately the same organizational level.
The types of defects found in a review vary, depending especially on the work product being reviewed.
See section 3.1.3 for examples of defects that can be found by reviews in different work products, and
see Gilb 1993 for information on formal inspections.
The effectiveness of the techniques may differ depending on the type of review used. Examples of
different individual review techniques for various review types are listed below.
Ad hoc
In an ad hoc review, reviewers are provided with little or no guidance on how this task should be
performed. Reviewers often read the work product sequentially, identifying and documenting issues as
they encounter them. Ad hoc reviewing is a commonly used technique needing little preparation. This
technique is highly dependent on reviewer skills and may lead to many duplicate issues being reported by
different reviewers.
Checklist-based
A checklist-based review is a systematic technique, whereby the reviewers detect issues based on
checklists that are distributed at review initiation (e.g., by the facilitator). A review checklist consists of a
set of questions based on potential defects, which may be derived from experience. Checklists should be
specific to the type of work product under review and should be maintained regularly to cover issue types
missed in previous reviews. The main advantage of the checklist-based technique is a systematic
coverage of typical defect types. Care should be taken not to simply follow the checklist in individual
reviewing, but also to look for defects outside the checklist.
Scenarios and dry runs
In a scenario-based review, reviewers are provided with structured guidelines on how to read through the
work product. A scenario-based approach supports reviewers in performing “dry runs” on the work
product based on expected usage of the work product (if the work product is documented in a suitable
format such as use cases). These scenarios provide reviewers with better guidelines on how to identify
specific defect types than simple checklist entries. As with checklist-based reviews, in order not to miss
other defect types (e.g., missing features), reviewers should not be constrained to the documented
scenarios.
Role-based
A role-based review is a technique in which the reviewers evaluate the work product from the perspective
of individual stakeholder roles. Typical roles include specific end user types (experienced, inexperienced,
senior, child, etc.), and specific roles in the organization (user administrator, system administrator,
performance tester, etc.).
Perspective-based
In perspective-based reading, similar to a role-based review, reviewers take on different stakeholder
viewpoints in individual reviewing. Typical stakeholder viewpoints include end user, marketing, designer,
tester, or operations. Using different stakeholder viewpoints leads to more depth in individual reviewing
with less duplication of issues across reviewers.
In addition, perspective-based reading also requires the reviewers to attempt to use the work product
under review to generate the product they would derive from it. For example, a tester would attempt to
generate draft acceptance tests if performing a perspective-based reading on a requirements
specification to see if all the necessary information was included. Further, in perspective-based reading,
checklists are expected to be used.
Empirical studies have shown perspective-based reading to be the most effective general technique for
reviewing requirements and technical work products. A key success factor is including and weighing
different stakeholder viewpoints appropriately, based on risks. See Shull 2000 for details on
perspective-based reading, and Sauer 2000 for the effectiveness of different review types.
See Gilb 1993, Wiegers 2002, and van Veenendaal 2004 for more on successful reviews.
The choice of which test techniques to use depends on a number of factors, including:
• Regulatory standards
• Customer or contractual requirements
• Risk levels
• Risk types
• Test objectives
• Available documentation
• Tester knowledge and skills
• Available tools
• Time and budget
• Software development lifecycle model
• Expected use of the software
• Previous experience with using the test techniques on the component or system to be tested
• The types of defects expected in the component or system
Some techniques are more applicable to certain situations and test levels; others are applicable to all test
levels. When creating test cases, testers generally use a combination of test techniques to achieve the
best results from the test effort.
The use of test techniques in the test analysis, test design, and test implementation activities can range
from very informal (little to no documentation) to very formal. The appropriate level of formality depends
on the context of testing, including the maturity of test and development processes, time constraints,
safety or regulatory requirements, the knowledge and skills of the people involved, and the software
development lifecycle model being followed.
4.1.2 Categories of Test Techniques and Their Characteristics
In this syllabus, test techniques are classified as black-box, white-box, or experience-based.
Black-box test techniques (also called behavioral or behavior-based techniques) are based on an
analysis of the appropriate test basis (e.g., formal requirements documents, specifications, use cases,
user stories, or business processes). These techniques are applicable to both functional and
non-functional testing. Black-box test techniques concentrate on the inputs and outputs of the test object
without reference to its internal structure.
White-box test techniques (also called structural or structure-based techniques) are based on an analysis
of the architecture, detailed design, internal structure, or the code of the test object. Unlike black-box test
techniques, white-box test techniques concentrate on the structure and processing within the test object.
Experience-based test techniques leverage the experience of developers, testers and users to design,
implement, and execute tests. These techniques are often combined with black-box and white-box test
techniques.
Common characteristics of black-box test techniques include the following:
• Test conditions, test cases, and test data are derived from a test basis that may include software
requirements, specifications, use cases, and user stories
• Test cases may be used to detect gaps between the requirements and the implementation of the
requirements, as well as deviations from the requirements
• Coverage is measured based on the items tested in the test basis and the technique applied to
the test basis
Common characteristics of white-box test techniques include the following:
• Test conditions, test cases, and test data are derived from a test basis that may include code,
software architecture, detailed design, or any other source of information regarding the structure
of the software
• Coverage is measured based on the items tested within a selected structure (e.g., the code or
interfaces)
• Specifications are often used as an additional source of information to determine the expected
outcome of test cases
Common characteristics of experience-based test techniques include the following:
• Test conditions, test cases, and test data are derived from a test basis that may include
knowledge and experience of testers, developers, users and other stakeholders. This knowledge
and experience includes expected use of the software, its environment, likely defects, and the
distribution of those defects.
The international standard (ISO/IEC/IEEE 29119-4) contains descriptions of test techniques and their
corresponding coverage measures (see Craig 2002 and Copeland 2004 for more on techniques).
To achieve 100% coverage with this technique, test cases must cover all identified partitions (including
invalid partitions) by using a minimum of one value from each partition. Coverage is measured as the
number of equivalence partitions tested by at least one value, divided by the total number of identified
equivalence partitions, normally expressed as a percentage. Equivalence partitioning is applicable at all
test levels.
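For example, the following Python sketch (using a hypothetical is_valid_age function and an illustrative set of partitions, not taken from this syllabus) selects one representative value from each identified partition, including the invalid ones, and computes the resulting equivalence partition coverage:

# Hypothetical partitions for an "age" input field (valid range: 18-65).
# One representative value is chosen from each partition, including invalid ones.
partitions = {
    "invalid_below": 10,   # below the valid range
    "valid": 30,           # within the valid range
    "invalid_above": 99,   # above the valid range
}

expected = {"invalid_below": False, "valid": True, "invalid_above": False}

def is_valid_age(age):
    # Hypothetical test object: accepts ages from 18 to 65 inclusive.
    return 18 <= age <= 65

tested = 0
for name, value in partitions.items():
    assert is_valid_age(value) == expected[name]
    tested += 1

# Coverage = partitions tested by at least one value / total identified partitions
coverage = tested / len(partitions) * 100
print(f"Equivalence partition coverage: {coverage:.0f}%")  # 100%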
Conditions and actions are often shown as Boolean values (true/false) or discrete values (e.g., red,
green, blue), but can also be numbers or ranges of numbers. These different types of conditions and
actions might be found together in the same table.
The common notation in decision tables is as follows:
For conditions:
• Y means the condition is true (may also be shown as T or 1)
• N means the condition is false (may also be shown as F or 0)
• — means the value of the condition doesn’t matter (may also be shown as N/A)
For actions:
• X means the action should occur (may also be shown as Y or T or 1)
• Blank means the action should not occur (may also be shown as – or N or F or 0)
A full decision table has enough columns to cover every combination of conditions. The table can be
collapsed by deleting columns containing impossible combinations of conditions, columns containing
possible but infeasible combinations of conditions, and columns that test combinations of conditions that
do not affect the outcome. For more information on how to collapse decision tables, see ISTQB-ATA
Advanced Level Test Analyst Syllabus.
The common minimum coverage standard for decision table testing is to have at least one test case per
decision rule in the table. This typically involves covering all combinations of conditions. Coverage is
measured as the number of decision rules tested by at least one test case, divided by the total number of
decision rules, normally expressed as a percentage.
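As an illustration, the sketch below encodes a small, hypothetical collapsed decision table (a loan pre-check invented for this example) using the Y/N/—/X notation described above, derives one test case per decision rule, and reports the resulting coverage:

# Hypothetical collapsed decision table for a loan pre-check.
# Conditions use Y/N/-, actions use X/blank; each rule becomes at least one test case.
rules = [
    {"employed": "N", "credit_ok": "-", "approve": ""},   # R1: not employed, credit irrelevant
    {"employed": "Y", "credit_ok": "N", "approve": ""},   # R2: employed, credit not ok
    {"employed": "Y", "credit_ok": "Y", "approve": "X"},  # R3: employed, credit ok
]

def pre_check(employed, credit_ok):
    # Hypothetical test object implementing the table above.
    return employed and credit_ok

tested_rules = 0
for rule in rules:
    employed = rule["employed"] == "Y"
    # "-" means the value does not matter; any value may be chosen for the test.
    credit_ok = rule["credit_ok"] == "Y"
    expected = rule["approve"] == "X"
    assert pre_check(employed, credit_ok) == expected
    tested_rules += 1

# Minimum coverage standard: at least one test case per decision rule.
print(f"Decision rule coverage: {tested_rules / len(rules) * 100:.0f}%")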
The strength of decision table testing is that it helps to identify all the important combinations of
conditions, some of which might otherwise be overlooked. It also helps in finding any gaps in the
requirements. It may be applied to all situations in which the behavior of the software depends on a
combination of conditions, at any test level.
4.2.4 State Transition Testing
Components or systems may respond differently to an event depending on current conditions or previous
history (e.g., the events that have occurred since the system was initialized). The previous history can be
summarized using the concept of states. A state transition diagram shows the possible software states,
as well as how the software enters, exits, and transitions between states. A transition is initiated by an
event (e.g., user input of a value into a field). The event results in a transition. If the same event can result
in two or more different transitions from the same state, that event may be qualified by a guard condition.
The state change may result in the software taking an action (e.g., outputting a calculation or error
message).
A state transition table shows all valid transitions and potentially invalid transitions between states, as well
as the events, guard conditions, and resulting actions for valid transitions. State transition diagrams
normally show only the valid transitions and exclude the invalid transitions.
Tests can be designed to cover a typical sequence of states, to exercise all states, to exercise every
transition, to exercise specific sequences of transitions, or to test invalid transitions.
State transition testing is used for menu-based applications and is widely used within the embedded
software industry. The technique is also suitable for modeling a business scenario having specific states
or for testing screen navigation. The concept of a state is abstract – it may represent a few lines of code
or an entire business process.
Coverage is commonly measured as the number of identified states or transitions tested, divided by the
total number of identified states or transitions in the test object, normally expressed as a percentage. For
more information on coverage criteria for state transition testing, see ISTQB-ATA Advanced Level Test
Analyst Syllabus.
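For example, the following Python sketch models a hypothetical login dialog as a transition table, exercises every valid transition once (all-transitions coverage), and adds one test for an invalid transition; the states, events, and next_state function are illustrative only:

# Hypothetical state model of a simple login dialog, expressed as a
# transition table: (current state, event) -> next state.
transitions = {
    ("LOGGED_OUT", "enter_valid_pin"):   "LOGGED_IN",
    ("LOGGED_OUT", "enter_invalid_pin"): "LOGGED_OUT",
    ("LOGGED_IN",  "log_out"):           "LOGGED_OUT",
}

def next_state(state, event):
    # Hypothetical implementation under test; invalid transitions are rejected.
    if state == "LOGGED_OUT" and event == "enter_valid_pin":
        return "LOGGED_IN"
    if state == "LOGGED_OUT" and event == "enter_invalid_pin":
        return "LOGGED_OUT"
    if state == "LOGGED_IN" and event == "log_out":
        return "LOGGED_OUT"
    raise ValueError(f"invalid transition: {event} in {state}")

# One test per valid transition (all-transitions coverage).
covered = 0
for (state, event), expected in transitions.items():
    assert next_state(state, event) == expected
    covered += 1

print(f"Transition coverage: {covered / len(transitions) * 100:.0f}%")

# A negative test for an invalid transition (not counted in the measure above).
try:
    next_state("LOGGED_IN", "enter_valid_pin")
    assert False, "expected the invalid transition to be rejected"
except ValueError:
    pass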
For decision testing, coverage is measured as the number of decision outcomes executed by the tests
divided by the total number of decision outcomes in the test object, normally expressed as a percentage.
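For example, in the hypothetical sketch below a single decision has two outcomes, and two test cases are sufficient to execute both outcomes, giving 100% decision coverage:

# Hypothetical test object containing a single decision (two outcomes).
def shipping(order_total):
    if order_total >= 100:        # decision with a True and a False outcome
        return "free shipping"
    else:
        return "standard shipping"

# Two tests, one per decision outcome: 2 of 2 outcomes executed = 100% decision coverage.
assert shipping(150) == "free shipping"      # True outcome
assert shipping(20) == "standard shipping"   # False outcome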
In session-based exploratory testing, the tester uses a test charter containing test objectives to guide the
testing. The tester may use test session sheets to document the steps followed and the discoveries made.
Exploratory testing is most useful when there are few or inadequate specifications or significant time
pressure on testing. Exploratory testing is also useful to complement other more formal testing
techniques.
Exploratory testing is strongly associated with reactive test strategies (see section 5.2.2). Exploratory
testing can incorporate the use of other black-box, white-box, and experience-based techniques.
FL-5.2.6 (K2) Explain the difference between two estimation techniques: the metrics-based technique
and the expert-based technique
5.3 Test Monitoring and Control
FL-5.3.1 (K1) Recall metrics used for testing
FL-5.3.2 (K2) Summarize the purposes, contents, and audiences for test reports
5.4 Configuration Management
FL-5.4.1 (K2) Summarize how configuration management supports testing
5.5 Risks and Testing
FL-5.5.1 (K1) Define risk level by using likelihood and impact
FL-5.5.2 (K2) Distinguish between project and product risks
FL-5.5.3 (K2) Describe, by using examples, how product risk analysis may influence the thoroughness
and scope of testing
5.6 Defect Management
FL-5.6.1 (K3) Write a defect report, covering defects found during testing
In organizations using Agile methods, these testers may be considered part of a larger independent test
team as well. In addition, in such organizations, product owners may perform acceptance testing to
validate user stories at the end of each iteration.
Potential benefits of test independence include:
• Independent testers are likely to recognize different kinds of failures compared to developers
because of their different backgrounds, technical perspectives, and biases
• An independent tester can verify, challenge, or disprove assumptions made by stakeholders
during specification and implementation of the system
Potential drawbacks of test independence include:
• Isolation from the development team, leading to a lack of collaboration, delays in providing
feedback to the development team, or an adversarial relationship with the development team
• Developers may lose a sense of responsibility for quality
• Independent testers may be seen as a bottleneck or blamed for delays in release
• Independent testers may lack some important information (e.g., about the test object)
Many organizations are able to successfully achieve the benefits of test independence while avoiding the
drawbacks.
5.1.2 Tasks of a Test Manager and Tester
In this syllabus, two test roles are covered, test managers and testers. The activities and tasks performed
by these two roles depend on the project and product context, the skills of the people in the roles, and the
organization.
The test manager is tasked with overall responsibility for the test process and successful leadership of the
test activities. The test management role might be performed by a professional test manager, or by a
project manager, a development manager, or a quality assurance manager. In larger projects or
organizations, several test teams may report to a test manager, test coach, or test coordinator, each team
being headed by a test leader or lead tester.
Typical test manager tasks may include:
• Develop or review a test policy and test strategy for the organization
• Plan the test activities by considering the context, and understanding the test objectives and risks.
This may include selecting test approaches, estimating test time, effort and cost, acquiring
resources, defining test levels and test cycles, and planning defect management
• Write and update the test plan(s)
• Coordinate the test plan(s) with project managers, product owners, and others
• Share testing perspectives with other project activities, such as integration planning
• Initiate the analysis, design, implementation, and execution of tests, monitor test progress and
results, and check the status of exit criteria (or definition of done)
• Prepare and deliver test progress reports and test summary reports based on the information
gathered
• Adapt planning based on test results and progress (sometimes documented in test progress
reports, and/or in test summary reports for other testing already completed on the project) and
take any actions necessary for test control
• Support setting up the defect management system and adequate configuration management of
testware
• Introduce suitable metrics for measuring test progress and evaluating the quality of the testing
and the product
• Support the selection and implementation of tools to support the test process, including
recommending the budget for tool selection (and possibly purchase and/or support), allocating
time and effort for pilot projects, and providing continuing support in the use of the tool(s)
• Decide about the implementation of test environment(s)
• Promote and advocate the testers, the test team, and the test profession within the organization
• Develop the skills and careers of testers (e.g., through training plans, performance evaluations,
coaching, etc.)
The way in which the test manager role is carried out varies depending on the software development
lifecycle. For example, in Agile development, some of the tasks mentioned above are handled by the
Agile team, especially those tasks concerned with the day-to-day testing done within the team, often by a
tester working within the team. Some of the tasks that span multiple teams or the entire organization, or
that have to do with personnel management, may be done by test managers outside of the development
team, who are sometimes called test coaches. See Black 2009 for more on managing the test process.
Typical tester tasks may include:
• Review and contribute to test plans
• Analyze, review, and assess requirements, user stories and acceptance criteria, specifications,
and models for testability (i.e., the test basis)
• Identify and document test conditions, and capture traceability between test cases, test
conditions, and the test basis
• Design, set up, and verify test environment(s), often coordinating with system administration and
network management
• Design and implement test cases and test procedures
• Prepare and acquire test data
• Create the detailed test execution schedule
• Execute tests, evaluate the results, and document deviations from expected results
• Use appropriate tools to facilitate the test process
• Automate tests as needed (may be supported by a developer or a test automation expert)
• Evaluate non-functional characteristics such as performance efficiency, reliability, usability,
security, compatibility, and portability
• Review tests developed by others
People who work on test analysis, test design, specific test types, or test automation may be specialists in
these roles. Depending on the risks related to the product and the project, and the software development
lifecycle model selected, different people may take over the role of tester at different test levels. For
example, at the component testing level and the component integration testing level, the role of a tester is
often done by developers. At the acceptance test level, the role of a tester is often done by business
analysts, subject matter experts, and users. At the system test level and the system integration test level,
the role of a tester is often done by an independent test team. At the operational acceptance test level,
the role of a tester is often done by operations and/or systems administration staff.
• Methodical: This type of test strategy relies on making systematic use of some predefined set of
tests or test conditions, such as a taxonomy of common or likely types of failures, a list of
important quality characteristics, or company-wide look-and-feel standards for mobile apps or
web pages.
• Process-compliant (or standard-compliant): This type of test strategy involves analyzing,
designing, and implementing tests based on external rules and standards, such as those
specified by industry-specific standards, by process documentation, by the rigorous identification
and use of the test basis, or by any process or standard imposed on or by the organization.
• Directed (or consultative): This type of test strategy is driven primarily by the advice, guidance, or
instructions of stakeholders, business domain experts, or technology experts, who may be
outside the test team or outside the organization itself.
• Regression-averse: This type of test strategy is motivated by a desire to avoid regression of
existing capabilities. This test strategy includes reuse of existing testware (especially test cases
and test data), extensive automation of regression tests, and standard test suites.
• Reactive: In this type of test strategy, testing is reactive to the component or system being
tested, and the events occurring during test execution, rather than being pre-planned (as the
preceding strategies are). Tests are designed and implemented, and may immediately be
executed in response to knowledge gained from prior test results. Exploratory testing is a
common technique employed in reactive strategies.
An appropriate test strategy is often created by combining several of these types of test strategies. For
example, risk-based testing (an analytical strategy) can be combined with exploratory testing (a reactive
strategy); they complement each other and may achieve more effective testing when used together.
While the test strategy provides a generalized description of the test process, the test approach tailors the
test strategy for a particular project or release. The test approach is the starting point for selecting the test
techniques, test levels, and test types, and for defining the entry criteria and exit criteria (or definition of
ready and definition of done, respectively). The tailoring of the strategy is based on decisions made in
relation to the complexity and goals of the project, the type of product being developed, and product risk
analysis. The selected approach depends on the context and may consider factors such as risks, safety,
available resources and skills, technology, the nature of the system (e.g., custom-built versus COTS),
test objectives, and regulations.
5.2.3 Entry Criteria and Exit Criteria (Definition of Ready and Definition of Done)
In order to exercise effective control over the quality of the software, and of the testing, it is advisable to
have criteria which define when a given test activity should start and when the activity is complete. Entry
criteria (more typically called definition of ready in Agile development) define the preconditions for
undertaking a given test activity. If entry criteria are not met, it is likely that the activity will prove more
difficult, more time-consuming, more costly, and more risky. Exit criteria (more typically called definition of
done in Agile development) define what conditions must be achieved in order to declare a test level or a
set of tests completed. Entry and exit criteria should be defined for each test level and test type, and will
differ based on the test objectives.
Typical entry criteria include:
• Availability of testable requirements, user stories, and/or models (e.g., when following a
model-based testing strategy)
• Availability of test items that have met the exit criteria for any prior test levels
• Availability of test environment
• Availability of necessary test tools
• Availability of test data and other necessary resources
Typical exit criteria include:
• Planned tests have been executed
• A defined level of coverage (e.g., of requirements, user stories, acceptance criteria, risks, code)
has been achieved
• The number of unresolved defects is within an agreed limit
• The number of estimated remaining defects is sufficiently low
• The evaluated levels of reliability, performance efficiency, usability, security, and other relevant
quality characteristics are sufficient
Even without exit criteria being satisfied, it is also common for test activities to be curtailed due to the
budget being expended, the scheduled time being completed, and/or pressure to bring the product to
market. It can be acceptable to end testing under such circumstances, if the project stakeholders and
business owners have reviewed and accepted the risk to go live without further testing.
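As a simple illustration (the criteria and thresholds below are invented examples, not values prescribed by this syllabus), exit criteria can be expressed as explicit, checkable conditions:

# Hypothetical exit-criteria check; the thresholds are example values only.
def exit_criteria_met(planned, executed, coverage_pct, open_defects):
    return (
        executed >= planned          # planned tests have been executed
        and coverage_pct >= 90.0     # agreed coverage level has been achieved
        and open_defects <= 5        # unresolved defects within the agreed limit
    )

print(exit_criteria_met(planned=200, executed=200, coverage_pct=93.5, open_defects=3))  # True
print(exit_criteria_met(planned=200, executed=180, coverage_pct=95.0, open_defects=1))  # False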
Factors influencing the test effort may include characteristics of the product, characteristics of the
development process, characteristics of the people, and the test results, as shown below.
Product characteristics
• The risks associated with the product
• The quality of the test basis
• The size of the product
• The complexity of the product domain
• The requirements for quality characteristics (e.g., security, reliability)
• The required level of detail for test documentation
• Requirements for legal and regulatory compliance
Development process characteristics
• The stability and maturity of the organization
• The development model in use
• The test approach
• The tools used
• The test process
• Time pressure
People characteristics
• The skills and experience of the people involved, especially with similar projects and products
(e.g., domain knowledge)
• Team cohesion and leadership
Test results
• The number and severity of defects found
• The amount of rework required
In Agile development, planning poker is an example of the expert-based approach, as team members are
estimating the effort to deliver a feature based on their experience (ISTQB-AT Foundation Level Agile
Tester Extension Syllabus).
Within sequential projects, defect removal models are examples of the metrics-based approach, where
volumes of defects and time to remove them are captured and reported, which then provides a basis for
estimating future projects of a similar nature; whereas the Wideband Delphi estimation technique is an
example of the expert-based approach, in which groups of experts provide estimates based on their
experience (ISTQB-ATM Advanced Level Test Manager Syllabus).
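For example, a metrics-based estimate might project defect-related effort for a new release from figures observed on a similar past project; the numbers in the following sketch are purely illustrative:

# Hypothetical metrics-based estimate: use the defect density and the average
# repair-plus-confirmation-test time observed on a past, similar project to
# project defect-related effort for a new release.
historical_defect_density = 0.8   # defects found per 1000 lines of code (example value)
avg_hours_per_defect = 4.0        # repair + confirmation testing (example value)

new_release_size_kloc = 50        # size of the release to be estimated

estimated_defects = historical_defect_density * new_release_size_kloc
estimated_effort_hours = estimated_defects * avg_hours_per_defect
print(f"Estimated defects: {estimated_defects:.0f}, effort: {estimated_effort_hours:.0f} hours")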
During test monitoring and control, the test manager regularly issues test progress reports for
stakeholders. In addition to content common to test progress reports and test summary reports, typical
test progress reports may also include:
• The status of the test activities and progress against the test plan
• Factors impeding progress
• Testing planned for the next reporting period
• The quality of the test object
When exit criteria are reached, the test manager issues the test summary report. This report provides a
summary of the testing performed, based on the latest test progress report and any other relevant
information.
Typical test progress reports and test summary reports may include:
• Summary of testing performed
• Information on what occurred during a test period
• Deviations from plan, including deviations in schedule, duration, or effort of test activities
• Status of testing and product quality with respect to the exit criteria or definition of done
• Factors that have blocked or continue to block progress
• Metrics of defects, test cases, test coverage, activity progress, and resource consumption (e.g.,
as described in section 5.3.1)
• Residual risks (see section 5.5)
• Reusable test work products produced
The contents of a test report will vary depending on the project, the organizational requirements, and the
software development lifecycle. For example, a complex project with many stakeholders or a regulated
project may require more detailed and rigorous reporting than a quick software update. As another
example, in Agile development, test progress reporting may be incorporated into task boards, defect
summaries, and burndown charts, which may be discussed during a daily stand-up meeting (see
ISTQB-AT Foundation Level Agile Tester Extension Syllabus).
In addition to tailoring test reports based on the context of the project, test reports should be tailored
based on the report’s audience. The type and amount of information that should be included for a
technical audience or a test team may be different from what would be included in an executive summary
report. In the first case, detailed information on defect types and trends may be important. In the latter
case, a high-level report (e.g., a status summary of defects by priority, budget, schedule, and test
conditions passed/failed/not tested) may be more appropriate.
ISO standard (ISO/IEC/IEEE 29119-3) refers to two types of test reports, test progress reports and test
completion reports (called test summary reports in this syllabus), and contains structures and examples
for each type.
For testing, configuration management may involve ensuring the following:
• All test items are uniquely identified, version controlled, tracked for changes, and related to each
other
• All items of testware are uniquely identified, version controlled, tracked for changes, related to
each other and related to versions of the test item(s) so that traceability can be maintained
throughout the test process
• All identified documents and software items are referenced unambiguously in test documentation
During test planning, configuration management procedures and infrastructure (tools) should be identified
and implemented.
o Skills, training, and staff may not be sufficient
o Personnel issues may cause conflict and problems
o Users, business staff, or subject matter experts may not be available due to conflicting business priorities
• Political issues:
o Testers may not communicate their needs and/or the test results adequately
o Developers and/or testers may fail to follow up on information found in testing and reviews (e.g., not improving development and testing practices)
o There may be an improper attitude toward, or expectations of, testing (e.g., not appreciating the value of finding defects during testing)
• Technical issues:
o Requirements may not be defined well enough
o The requirements may not be met, given existing constraints
o The test environment may not be ready on time
o Data conversion, migration planning, and their tool support may be late
o Weaknesses in the development process may impact the consistency or quality of project work products such as design, code, configuration, test data, and test cases
o Poor defect management and similar problems may result in accumulated defects and other technical debt
• Supplier issues:
o A third party may fail to deliver a necessary product or service, or go bankrupt
In a risk-based approach, the results of product risk analysis are used to:
• Determine the particular levels and types of testing to be performed (e.g., security testing,
accessibility testing)
• Determine the extent of testing to be carried out
• Prioritize testing in an attempt to find the critical defects as early as possible
• Determine whether any activities in addition to testing could be employed to reduce risk (e.g.,
providing training to inexperienced designers)
Risk-based testing draws on the collective knowledge and insight of the project stakeholders to carry out
product risk analysis. To ensure that the likelihood of a product failure is minimized, risk management
activities provide a disciplined approach to:
• Analyze (and re-evaluate on a regular basis) what can go wrong (risks)
• Determine which risks are important to deal with
• Implement actions to mitigate those risks
• Make contingency plans to deal with the risks should they become actual events
In addition, testing may identify new risks, help to determine what risks should be mitigated, and lower
uncertainty about risks.
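For example, the sketch below assigns each hypothetical product risk a level as the product of likelihood and impact (on illustrative 1 to 5 scales) and orders the risks so that the riskiest areas can be tested earlier and more thoroughly:

# Hypothetical product risk register; risk level = likelihood x impact,
# used to prioritize the thoroughness and order of testing.
risks = [
    {"risk": "incorrect interest calculation", "likelihood": 4, "impact": 5},
    {"risk": "slow report generation",         "likelihood": 3, "impact": 2},
    {"risk": "typo in help text",              "likelihood": 2, "impact": 1},
]

for r in risks:
    r["level"] = r["likelihood"] * r["impact"]

# Highest risk level first: test these areas earlier and in more depth.
for r in sorted(risks, key=lambda r: r["level"], reverse=True):
    print(f"{r['level']:>2}  {r['risk']}")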
Typical defect reports have the following objectives:
• Provide test managers a means of tracking the quality of the work product and the impact on the
testing (e.g., if a lot of defects are reported, the testers will have spent a lot of time reporting them
instead of running tests, and there will be more confirmation testing needed)
• Provide ideas for development and test process improvement
A defect report filed during dynamic testing typically includes:
• An identifier
• A title and a short summary of the defect being reported
• Date of the defect report, issuing organization, and author
• Identification of the test item (configuration item being tested) and environment
• The development lifecycle phase(s) in which the defect was observed
• A description of the defect to enable reproduction and resolution, including logs, database dumps,
screenshots, or recordings (if found during test execution)
• Expected and actual results
• Scope or degree of impact (severity) of the defect on the interests of stakeholder(s)
• Urgency/priority to fix
• State of the defect report (e.g., open, deferred, duplicate, waiting to be fixed, awaiting
confirmation testing, re-opened, closed)
• Conclusions, recommendations and approvals
• Global issues, such as other areas that may be affected by a change resulting from the defect
• Change history, such as the sequence of actions taken by project team members with respect to
the defect to isolate, repair, and confirm it as fixed
• References, including the test case that revealed the problem
Some of these details may be automatically included and/or managed when using defect management
tools, e.g., automatic assignment of an identifier, assignment and update of the defect report state during
the workflow, etc. Defects found during static testing, particularly reviews, will normally be documented in
a different way, e.g., in review meeting notes.
An example of the contents of a defect report can be found in ISO standard (ISO/IEC/IEEE 29119-3)
(which refers to defect reports as incident reports).
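As an illustration, the fields listed above can be captured in a simple structure; the field names and values in the following Python sketch are examples only and do not correspond to any particular defect management tool or standard template:

# Hypothetical structure mirroring (a subset of) the defect report contents listed above.
from dataclasses import dataclass, field
from typing import List

@dataclass
class DefectReport:
    identifier: str
    title: str
    date: str
    author: str
    test_item: str
    environment: str
    phase_observed: str
    description: str
    expected_result: str
    actual_result: str
    severity: str
    priority: str
    state: str = "open"
    references: List[str] = field(default_factory=list)

report = DefectReport(
    identifier="DR-0042",
    title="Total price not updated after removing an item",
    date="2018-06-01",
    author="tester@example.com",
    test_item="webshop 2.3.1",
    environment="staging",
    phase_observed="system testing",
    description="Remove the last item from the cart; the total still shows the old amount.",
    expected_result="Total is 0.00",
    actual_result="Total is 19.99",
    severity="major",
    priority="high",
    references=["TC-ORDER-017"],
)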
Tools can be classified based on several criteria such as purpose, pricing, licensing model (e.g.,
commercial or open source), and technology used. Tools are classified in this syllabus according to the
test activities that they support.
Some tools clearly support only or mainly one activity; others may support more than one activity, but are
classified under the activity with which they are most closely associated. Tools from a single provider,
especially those that have been designed to work together, may be provided as an integrated suite.
Some types of test tools can be intrusive, which means that they may affect the actual outcome of the
test. For example, the actual response times for an application may be different due to the extra
instructions that are executed by a performance testing tool, or the amount of code coverage achieved
may be distorted due to the use of a coverage tool. The consequence of using intrusive tools is called the
probe effect.
Some tools offer support that is typically more appropriate for developers (e.g., tools that are used during
component and integration testing). Such tools are marked with “(D)” in the sections below.
Tool support for management of testing and testware
Management tools may apply to any test activities over the entire software development lifecycle.
Examples of tools that support management of testing and testware include:
• Test management tools and application lifecycle management tools (ALM)
• Requirements management tools (e.g., traceability to test objects)
• Defect management tools
• Configuration management tools
• Continuous integration tools (D)
Tool support for static testing
Static testing tools are associated with the activities and benefits described in chapter 3. Examples of
such tools include:
• Tools that support reviews
• Static analysis tools (D)
Tool support for test design and implementation
Test design tools aid in the creation of maintainable work products in test design and implementation,
including test cases, test procedures and test data. Examples of such tools include:
• Test design tools
• Model-Based testing tools
• Test data preparation tools
• Acceptance test driven development (ATDD) and behavior driven development (BDD) tools
• Test driven development (TDD) tools (D)
In some cases, tools that support test design and implementation may also support test execution and
logging, or provide their outputs directly to other tools that support test execution and logging.
• Easier access to information about testing (e.g., statistics and graphs about test progress, defect
rates and performance)
Potential risks of using tools to support testing include:
• Expectations for the tool may be unrealistic (including functionality and ease of use)
• The time, cost and effort for the initial introduction of a tool may be under-estimated (including
training and external expertise)
• The time and effort needed to achieve significant and continuing benefits from the tool may be
under-estimated (including the need for changes in the test process and continuous improvement
in the way the tool is used)
• The effort required to maintain the test assets generated by the tool may be under-estimated
• The tool may be relied on too much (seen as a replacement for test design or execution, or the
use of automated testing where manual testing would be better)
• Version control of test assets may be neglected
• Relationships and interoperability issues between critical tools may be neglected, such as
requirements management tools, configuration management tools, defect management tools and
tools from multiple vendors
• The tool vendor may go out of business, retire the tool, or sell the tool to a different vendor
• The vendor may provide a poor response for support, upgrades, and defect fixes
• An open source project may be suspended
• A new platform or technology may not be supported by the tool
• There may be no clear ownership of the tool (e.g., for mentoring, updates, etc.)
6.1.3 Special Considerations for Test Execution and Test Management Tools
In order to have a smooth and successful implementation, there are a number of things that ought to be
considered when selecting and integrating test execution and test management tools into an
organization.
Test execution tools
Test execution tools execute test objects using automated test scripts. This type of tool often requires
significant effort in order to achieve significant benefits.
Capturing tests by recording the actions of a manual tester seems attractive, but this approach does not
scale to large numbers of test scripts. A captured script is a linear representation with specific data and
actions as part of each script. This type of script may be unstable when unexpected events occur. The
latest generation of these tools, which takes advantage of “smart” image capturing technology, has
increased the usefulness of this class of tools, although the generated scripts still require ongoing
maintenance as the system’s user interface evolves over time.
A data-driven testing approach separates out the test inputs and expected results, usually into a
spreadsheet, and uses a more generic test script that can read the input data and execute the same test
script with different data. Testers who are not familiar with the scripting language can then create new
test data for these predefined scripts.
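For example, the following sketch shows a minimal data-driven approach in Python: a generic script reads input values and expected results from tabular data and runs the same steps for each row (the login function and the test data are hypothetical):

import csv
import io

# Hypothetical data-driven test data; in practice this is usually maintained
# in a spreadsheet by testers who do not need to know the scripting language.
TEST_DATA = """username,password,expected
alice,correct-horse,ok
alice,wrong,rejected
,,rejected
"""

def login(username, password):
    # Hypothetical test object.
    return "ok" if (username == "alice" and password == "correct-horse") else "rejected"

def run_data_driven_tests(data):
    # Generic script: the same steps are executed once per data row.
    for row in csv.DictReader(io.StringIO(data)):
        actual = login(row["username"], row["password"])
        assert actual == row["expected"], f"failed for row {row}"

run_data_driven_tests(TEST_DATA)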
In a keyword-driven testing approach, a generic script processes keywords describing the actions to be
taken (also called action words), which then calls keyword scripts to process the associated test data.
Testers (even if they are not familiar with the scripting language) can then define tests using the
keywords and associated data, which can be tailored to the application being tested. Further details and
examples of data-driven and keyword-driven testing approaches are given in ISTQB-TAE Advanced
Level Test Automation Engineer Syllabus, Fewster 1999 and Buwalda 2001.
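As an illustration, the sketch below shows a minimal keyword-driven approach: a generic script maps keywords (action words) to keyword implementations, and a test is expressed purely as keywords plus associated data; all names and keywords are hypothetical:

# Hypothetical keyword implementations; testers combine them without scripting.
def open_app(state, name):
    state["app"] = name

def enter_text(state, field, value):
    state.setdefault("fields", {})[field] = value

def check_text(state, field, expected):
    assert state["fields"].get(field) == expected, f"{field} != {expected}"

# The generic script maps each keyword (action word) to its implementation.
KEYWORDS = {"OpenApp": open_app, "EnterText": enter_text, "CheckText": check_text}

# A test expressed purely as keywords and associated data.
test_steps = [
    ("OpenApp",   ["webshop"]),
    ("EnterText", ["search", "blue shoes"]),
    ("CheckText", ["search", "blue shoes"]),
]

def run_keyword_test(steps):
    state = {}
    for keyword, args in steps:
        KEYWORDS[keyword](state, *args)

run_keyword_test(test_steps)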
The above approaches require someone to have expertise in the scripting language (testers, developers
or specialists in test automation). Regardless of the scripting technique used, the expected results for
each test need to be compared to actual results from the test, either dynamically (while the test is
running) or stored for later (post-execution) comparison.
Model-Based testing (MBT) tools enable a functional specification to be captured in the form of a model,
such as an activity diagram. This task is generally performed by a system designer. The MBT tool
interprets the model in order to create test case specifications which can then be saved in a test
management tool and/or executed by a test execution tool (see ISTQB-MBT Foundation Level
Model-Based Testing Syllabus).
Test management tools
Test management tools often need to interface with other tools or spreadsheets for various reasons,
including:
• To produce useful information in a format that fits the needs of the organization
• To maintain consistent traceability to requirements in a requirements management tool
• To link with test object version information in the configuration management tool
This is particularly important to consider when using an integrated tool (e.g., Application Lifecycle
Management), which includes a test management module (and possibly a defect management system),
as well as other modules (e.g., project schedule and budget information) that are used by different groups
within an organization.
• Identification of internal requirements for coaching and mentoring in the use of the tool
• Evaluation of training needs, considering the testing (and test automation) skills of those who will
be working directly with the tool(s)
• Consideration of pros and cons of various licensing models (e.g., commercial or open source)
• Estimation of a cost-benefit ratio based on a concrete business case (if required)
As a final step, a proof-of-concept evaluation should be done to establish whether the tool performs
effectively with the software under test and within the current infrastructure or, if necessary, to identify
changes needed to that infrastructure to use the tool effectively.
6.2.2 Pilot Projects for Introducing a Tool into an Organization
After completing the tool selection and a successful proof-of-concept, introducing the selected tool into an
organization generally starts with a pilot project, which has the following objectives:
• Gaining in-depth knowledge about the tool, understanding both its strengths and weaknesses
• Evaluating how the tool fits with existing processes and practices, and determining what would
need to change
• Deciding on standard ways of using, managing, storing, and maintaining the tool and the test
assets (e.g., deciding on naming conventions for files and tests, selecting coding standards,
creating libraries and defining the modularity of test suites)
• Assessing whether the benefits will be achieved at reasonable cost
• Understanding the metrics that you wish the tool to collect and report, and configuring the tool to
ensure these metrics can be captured and reported
7 References
Standards
ISO/IEC/IEEE 29119-1 (2013) Software and systems engineering - Software testing - Part 1: Concepts
and definitions
ISO/IEC/IEEE 29119-2 (2013) Software and systems engineering - Software testing - Part 2: Test
processes
ISO/IEC/IEEE 29119-3 (2013) Software and systems engineering - Software testing - Part 3: Test
documentation
ISO/IEC/IEEE 29119-4 (2015) Software and systems engineering - Software testing - Part 4: Test
techniques
ISO/IEC 25010 (2011) Systems and software engineering – Systems and software Quality Requirements
and Evaluation (SQuaRE) – System and software quality models
ISO/IEC 20246 (2017) Software and systems engineering – Work product reviews
UML 2.5, Unified Modeling Language Reference Manual, http://www.omg.org/spec/UML/2.5.1/, 2017
ISTQB documents
ISTQB Glossary
ISTQB Foundation Level Overview 2018
ISTQB-MBT Foundation Level Model-Based Tester Extension Syllabus
ISTQB-AT Foundation Level Agile Tester Extension Syllabus
ISTQB-ATA Advanced Level Test Analyst Syllabus
ISTQB-ATM Advanced Level Test Manager Syllabus
ISTQB-SEC Advanced Level Security Tester Syllabus
ISTQB-TAE Advanced Level Test Automation Engineer Syllabus
ISTQB-ETM Expert Level Test Management Syllabus
ISTQB-EITP Expert Level Improving the Test Process Syllabus
Books and Articles
Black, R. (2009) Managing the Testing Process (3e), John Wiley & Sons: New York NY
Buwalda, H., Janssen, D. and Pinkster, I. (2001) Integrated Test Design and Automation, Addison
Wesley: Harlow UK
Copeland, L. (2004) A Practitioner's Guide to Software Test Design, Artech House: Norwood MA
Craig, R. and Jaskiel, S. (2002) Systematic Software Testing, Artech House: Norwood MA
Fewster, M. and Graham, D. (1999) Software Test Automation, Addison Wesley: Harlow UK
Gilb, T. and Graham, D. (1993) Software Inspection, Addison Wesley: Reading MA
Graham, D. and Fewster, M. (2012) Experiences of Test Automation, Pearson Education: Boston MA
Gregory, J. and Crispin, L. (2015) More Agile Testing, Pearson Education: Boston MA
Jorgensen, P. (2014) Software Testing, A Craftsman’s Approach (4e), CRC Press: Boca Raton FL
Kaner, C., Bach, J. and Pettichord, B. (2002) Lessons Learned in Software Testing, John Wiley & Sons:
New York NY
Kaner, C., Padmanabhan, S. and Hoffman, D. (2013) The Domain Testing Workbook, Context-Driven
Press: New York NY
Kramer, A., Legeard, B. (2016) Model-Based Testing Essentials: Guide to the ISTQB Certified
ModelBased Tester: Foundation Level, John Wiley & Sons: New York NY
Myers, G. (2011) The Art of Software Testing, (3e), John Wiley & Sons: New York NY
Sauer, C. (2000) “The Effectiveness of Software Development Technical Reviews: A Behaviorally
Motivated Program of Research,” IEEE Transactions on Software Engineering, Volume 26, Issue 1, pp 1-
Shull, F., Rus, I. and Basili, V. (2000) “How Perspective-Based Reading can Improve Requirements
Inspections,” IEEE Computer, Volume 33, Issue 7, pp 73-79
van Veenendaal, E. (ed.) (2004) The Testing Practitioner (Chapters 8 - 10), UTN Publishers: The
Netherlands
Wiegers, K. (2002) Peer Reviews in Software, Pearson Education: Boston MA
Weinberg, G. (2008) Perfect Software and Other Illusions about Testing, Dorset House: New York NY
Can explain the reason why test analysis and design should occur as early as possible:
• To find defects when they are cheaper to remove
• To find the most important defects first
Can explain the similarities and differences between integration and system testing:
• Similarities: the test objects for both integration testing and system testing include more than one
component, and both integration testing and system testing can include non-functional test types
• Differences: integration testing concentrates on interfaces and interactions, and system testing
concentrates on whole-system aspects, such as end-to-end processing