ISTQB CTFL Syllabus v4.0.1
Copyright Notice
Copyright Notice © International Software Testing Qualifications Board (hereinafter called ISTQB®).
ISTQB® is a registered trademark of the International Software Testing Qualifications Board.
Copyright © 2024 the authors of the Foundation Level v4.0.1 syllabus: Renzo Cerquozzi, Wim Decoutere,
Jean-François Riverin, Arnika Hryszko, Martin Klonk, Meile Posthuma, Eric Riou du Cosquer (chair),
Adam Roman, Lucjan Stapp, Stephanie Ulrich (vice chair), Eshraka Zakaria.
Copyright © 2023 the authors of the Foundation Level v4.0 syllabus: Renzo Cerquozzi, Wim Decoutere,
Klaudia Dussa-Zieger, Jean-François Riverin, Arnika Hryszko, Martin Klonk, Michaël Pilaeten, Meile
Posthuma, Stuart Reid, Eric Riou du Cosquer (chair), Adam Roman, Lucjan Stapp, Stephanie Ulrich (vice
chair), Eshraka Zakaria.
Copyright © 2019 the authors for the update 2019 Klaus Olsen (chair), Meile Posthuma and Stephanie
Ulrich.
Copyright © 2018 the authors for the update 2018 Klaus Olsen (chair), Tauhida Parveen (vice chair), Rex
Black (project manager), Debra Friedenberg, Matthias Hamburg, Judy McKay, Meile Posthuma, Hans
Schaefer, Radoslaw Smilgin, Mike Smith, Steve Toms, Stephanie Ulrich, Marie Walsh, and Eshraka
Zakaria.
Copyright © 2011 the authors for the update 2011 Thomas Müller (chair), Debra Friedenberg, and the
ISTQB WG Foundation Level.
Copyright © 2010 the authors for the update 2010 Thomas Müller (chair), Armin Beer, Martin Klonk, and
Rahul Verma.
Copyright © 2007 the authors for the update 2007 Thomas Müller (chair), Dorothy Graham, Debra
Friedenberg and Erik van Veenendaal.
Copyright © 2005 the authors Thomas Müller (chair), Rex Black, Sigrid Eldh, Dorothy Graham, Klaus
Olsen, Maaret Pyhäjärvi, Geoff Thompson, and Erik van Veenendaal.
All rights reserved. The authors hereby transfer the copyright to the ISTQB®. The authors (as current
copyright holders) and ISTQB® (as the future copyright holder) have agreed to the following conditions of
use:
• Extracts, for non-commercial use, from this document may be copied if the source is acknowledged.
Any Accredited Training Provider may use this syllabus as the basis for a training course if the
authors and the ISTQB® are acknowledged as the source and copyright owners of the syllabus and
provided that any advertisement of such a training course may mention the syllabus only after official
accreditation of the training materials has been received from an ISTQB®-recognized Member Board.
• Any individual or group of individuals may use this syllabus as the basis for articles and books, if the
authors and the ISTQB® are acknowledged as the source and copyright owners of the syllabus.
• Any other use of this syllabus is prohibited without first obtaining the approval in writing of the
ISTQB®.
• Any ISTQB®-recognized Member Board may translate this syllabus provided they reproduce the
abovementioned Copyright Notice in the translated version of the syllabus.
Acknowledgements
This document was formally released by the Product Owner / working group chair Eric Riou du Cosquer
on 15.09.2024.
It was produced by a team from the ISTQB joint Foundation Level & Agile Working Groups: Renzo
Cerquozzi (vice chair), Wim Decoutere, Jean-François Riverin, Arnika Hryszko, Martin Klonk, Meile
Posthuma, Eric Riou du Cosquer (chair), Adam Roman, Lucjan Stapp, Stephanie Ulrich (vice chair),
Eshraka Zakaria.
Version 4.0 of this document was formally released by the General Assembly of the ISTQB® on 21 April
2023.
It was produced by a team from the ISTQB joint Foundation Level & Agile Working Groups: Laura Albert,
Renzo Cerquozzi (vice chair), Wim Decoutere, Klaudia Dussa-Zieger, Chintaka Indikadahena, Arnika
Hryszko, Martin Klonk, Kenji Onishi, Michaël Pilaeten (co-chair), Meile Posthuma, Gandhinee Rajkomar,
Stuart Reid, Eric Riou du Cosquer (co-chair), Jean-François Riverin, Adam Roman, Lucjan Stapp,
Stephanie Ulrich (vice chair), Eshraka Zakaria.
The team thanks Stuart Reid, Patricia McQuaid and Leanne Howard for their technical review and the
review team and the Member Boards for their suggestions and input.
The following persons participated in the reviewing, commenting and balloting of this syllabus: Adam
Roman, Adam Scierski, Ágota Horváth, Ainsley Rood, Ale Rebon Portillo, Alessandro Collino, Alexander
Alexandrov, Amanda Logue, Ana Ochoa, André Baumann, André Verschelling, Andreas Spillner, Anna
Miazek, Armin Born, Arnd Pehl, Arne Becher, Attila Gyúri, Attila Kovács, Beata Karpinska, Benjamin
Timmermans, Blair Mo, Carsten Weise, Chinthaka Indikadahena, Chris Van Bael, Ciaran O'Leary, Claude
Zhang, Cristina Sobrero, Dandan Zheng, Dani Almog, Daniel Säther, Daniel van der Zwan, Danilo Magli,
Darvay Tamás Béla, Dawn Haynes, Dena Pauletti, Dénes Medzihradszky, Doris Dötzer, Dot Graham,
Edward Weller, Erhardt Wunderlich, Eric Riou Du Cosquer, Florian Fieber, Fran O'Hara, François
Vaillancourt, Frans Dijkman, Gabriele Haller, Gary Mogyorodi, Georg Sehl, Géza Bujdosó, Giancarlo
Tomasig, Giorgio Pisani, Gustavo Márquez Sosa, Helmut Pichler, Hongbao Zhai, Horst Pohlmann,
Ignacio Trejos, Ilia Kulakov, Ine Lutterman, Ingvar Nordström, Iosif Itkin, Jamie Mitchell, Jan Giesen,
Jean-Francois Riverin, Joanna Kazun, Joanne Tremblay, Joëlle Genois, Johan Klintin, John Kurowski,
Jörn Münzel, Judy McKay, Jürgen Beniermann, Karol Frühauf, Katalin Balla, Kevin Kooh, Klaudia Dussa-
Zieger, Klaus Erlenbach, Klaus Olsen, Krisztián Miskó, Laura Albert, Liang Ren, Lijuan Wang, Lloyd
Roden, Lucjan Stapp, Mahmoud Khalaili, Marek Majernik, Maria Clara Choucair, Mark Rutz, Markus
Niehammer, Martin Klonk, Márton Siska, Matthew Gregg, Matthias Hamburg, Mattijs Kemmink, Maud
Schlich, May Abu-Sbeit, Meile Posthuma, Mette Bruhn-Pedersen, Michal Tal, Michel Boies, Mike Smith,
Miroslav Renda, Mohsen Ekssir, Monika Stocklein Olsen, Murian Song, Nicola De Rosa, Nikita Kalyani,
Nishan Portoyan, Nitzan Goldenberg, Ole Chr. Hansen, Patricia McQuaid, Patricia Osorio, Paul
Weymouth, Pawel Kwasik, Peter Zimmerer, Petr Neugebauer, Piet de Roo, Radoslaw Smilgin, Ralf
Bongard, Ralf Reissing, Randall Rice, Rik Marselis, Rogier Ammerlaan, Sabine Gschwandtner, Sabine
Uhde, Salinda Wickramasinghe, Salvatore Reale, Sammy Kolluru, Samuel Ouko, Stephanie Ulrich, Stuart
Reid, Surabhi Bellani, Szilard Szell, Tamás Gergely, Tamás Horváth, Tatiana Sergeeva, Tauhida
Parveen, Thaer Mustafa, Thomas Eisbrenner, Thomas Harms, Thomas Heller, Tobias Letzkus, Tomas
Rosenqvist, Werner Lieblang, Yaron Tsubery, Zhenlei Zuo and Zsolt Hargitai.
ISTQB Working Group Foundation Level (Edition 2018): Klaus Olsen (chair), Tauhida Parveen (vice
chair), Rex Black (project manager), Eshraka Zakaria, Debra Friedenberg, Ebbe Munk, Hans Schaefer,
Judy McKay, Marie Walsh, Meile Posthuma, Mike Smith, Radoslaw Smilgin, Stephanie Ulrich, Steve
Toms, Corne Kruger, Dani Almog, Eric Riou du Cosquer, Igal Levi, Johan Klintin, Kenji Onishi, Rashed
Karim, Stevan Zivanovic, Sunny Kwon, Thomas Müller, Vipul Kocher, Yaron Tsubery and all Member
Boards for their suggestions.
ISTQB Working Group Foundation Level (Edition 2011): Thomas Müller (chair), Debra Friedenberg. The
core team thanks the review team (Dan Almog, Armin Beer, Rex Black, Julie Gardiner, Judy McKay,
Tuula Pääkkönen, Eric Riou du Cosquer, Hans Schaefer, Stephanie Ulrich, Erik van Veenendaal), and all
Member Boards for the suggestions for the current version of the syllabus.
ISTQB Working Group Foundation Level (Edition 2010): Thomas Müller (chair), Rahul Verma, Martin
Klonk and Armin Beer. The core team thanks the review team (Rex Black, Mette Bruhn-Pederson, Debra
Friedenberg, Klaus Olsen, Judy McKay, Tuula Pääkkönen, Meile Posthuma, Hans Schaefer, Stephanie
Ulrich, Pete Williams, Erik van Veenendaal), and all Member Boards for their suggestions.
ISTQB Working Group Foundation Level (Edition 2007): Thomas Müller (chair), Dorothy Graham, Debra
Friedenberg, and Erik van Veenendaal. The core team thanks the review team (Hans Schaefer,
Stephanie Ulrich, Meile Posthuma, Anders Pettersson, and Wonil Kwon) and all the Member Boards for
their suggestions.
ISTQB Working Group Foundation Level (Edition 2005): Thomas Müller (chair), Rex Black, Sigrid Eldh,
Dorothy Graham, Klaus Olsen, Maaret Pyhäjärvi, Geoff Thompson and Erik van Veenendaal. The core
team thanks the review team and all Member Boards for their suggestions.
0. Introduction
0.7. Accreditation
An ISTQB® Member Board may accredit training providers whose course material follows this syllabus.
Training providers should obtain accreditation guidelines from the Member Board or body that performs
the accreditation. An accredited course is recognized as conforming to this syllabus and is allowed to
have an ISTQB® exam as part of the course. The accreditation guidelines for this syllabus follow the
general Accreditation Guidelines published by the Processes Management and Compliance Working
Group.
techniques that can be applied to all software projects independent of the software development lifecycle
(SDLC) employed.
Test objectives can vary, depending upon the context, which includes the work product being tested, the
test level, risks, the software development lifecycle (SDLC) being followed, and factors related to the
business context, e.g., corporate structure, competitive considerations, or time to market.
Testing may also be required to meet contractual or legal requirements, or to comply with regulatory
standards.
2. Exhaustive testing is impossible. Testing everything is not feasible except in trivial cases (Manna
1978). Rather than attempting to test exhaustively, test techniques (see chapter 4), test case prioritization
(see section 5.1.5), and risk-based testing (see section 5.2), should be used to focus test efforts.
3. Early testing saves time and money. Defects that are removed early in the process will not cause
subsequent defects in derived work products. The cost of quality will be reduced since fewer failures will
occur later in the SDLC (Boehm 1981). To find defects early, both static testing (see chapter 3) and
dynamic testing (see chapter 4) should be started as early as possible.
4. Defects cluster together. A small number of system components usually contain most of the defects
discovered or are responsible for most of the operational failures (Enders 1975). This phenomenon is an
illustration of the Pareto principle. Predicted defect clusters, and actual defect clusters observed during
testing or in operation, are an important input for risk-based testing (see section 5.2).
5. Tests wear out. If the same tests are repeated many times, they become increasingly ineffective in
detecting new defects (Beizer 1990). To overcome this effect, existing tests and test data may need to be
modified, and new tests may need to be written. However, in some cases, repeating the same tests can
have a beneficial outcome, e.g., in automated regression testing (see section 2.2.3).
6. Testing is context dependent. There is no single universally applicable approach to testing. Testing is
done differently in different contexts (Kaner 2011).
7. Absence-of-defects fallacy. It is a fallacy (i.e., a misconception) to expect that software verification
will ensure the success of a system. Thoroughly testing all the specified requirements and fixing all the
defects found could still produce a system that does not fulfill the users’ needs and expectations, that
does not help in achieving the customer’s business goals, and that is inferior compared to other
competing systems. In addition to verification, validation should also be carried out (Boehm 1981).
Test monitoring and test control. Test monitoring involves the ongoing checking of all test activities and
the comparison of actual progress against the plan. Test control involves taking the actions necessary to
meet the test objectives. Test monitoring and test control are further explained in section 5.3.
Test analysis includes analyzing the test basis to identify testable features. Associated test conditions
are defined and prioritized, taking the related risks and risk levels into account (see section 5.2). The test
basis and the test object are also evaluated to identify defects they may contain and to assess their
testability. Test analysis is often supported by the use of test techniques (see chapter 4). Test analysis
answers the question “what to test?” in terms of measurable coverage criteria.
Test design includes elaborating the test conditions into test cases and other testware (e.g., test
charters). This activity often involves the identification of coverage items, which serve as a guide to
specify test case inputs. Test techniques (see chapter 4) can be used to support this activity. Test design
also includes defining the test data requirements, designing the test environment and identifying the
necessary infrastructure and tools. Test design answers the question “how to test?”.
Test implementation includes creating or acquiring the testware necessary for test execution (e.g., test
data). Test cases can be organized into test procedures, which are often assembled into test suites.
Manual and automated test scripts are created. Test procedures are prioritized and arranged within a test
execution schedule for efficient test execution (see section 5.1.5). The test environment is built and
verified to be set up correctly.
Test execution includes running the tests in accordance with the test execution schedule (test runs).
Test execution may be manual or automated. Test execution can take many forms, including continuous
testing or pair testing sessions. Actual test results are compared with the expected results. The test
results are logged. Anomalies are analyzed to identify their likely causes. This analysis allows us to report
the anomalies based on the failures observed (see section 5.5).
Test completion usually occurs at project milestones (e.g., release, end of iteration, test level
completion). For any unresolved defects, change requests or product backlog items are created. Any
testware that may be useful in the future is identified and archived or handed over to the appropriate
teams. The test environment is shut down to an agreed state. The test activities are analyzed to identify
lessons learned and improvements for future iterations, releases, or projects (see section 2.1.6). A test
completion report is created and communicated to the stakeholders.
1.4.3. Testware
Testware is created as output work products from the test activities described in section 1.4.1. There is a
significant variation in how different organizations produce, shape, name, organize and manage their
work products. Proper configuration management (see section 5.4) ensures consistency and integrity of
work products. The following list of work products is not exhaustive:
• Test planning work products include: test plan, test schedule, risk register, entry criteria and
exit criteria (see section 5.1). Risk register is a list of risks together with risk likelihood, risk impact
and information about risk mitigation (see section 5.2). Test schedule, risk register, entry criteria
and exit criteria are often a part of the test plan.
• Test monitoring and test control work products include: test progress reports (see section
5.3.2), documentation of control directives (see section 5.3) and information about risks (see
section 5.2).
• Test analysis work products include: (prioritized) test conditions (e.g., acceptance criteria, see
section 4.5.2), and defect reports regarding defects in the test basis (if not fixed directly).
• Test design work products include: (prioritized) test cases, test charters, coverage items, test
data requirements and test environment requirements.
• Test implementation work products include: test procedures, manual and automated test
scripts, test suites, test data, test execution schedule, and test environment items. Examples of
test environment items include: stubs, drivers, simulators, and service virtualizations.
• Test execution work products include: test logs, and defect reports (see section 5.5).
• Test completion work products include: test completion report (see section 5.3.2), action items
for improvement of subsequent projects or iterations, documented lessons learned, and change
requests (e.g., as product backlog items).
• Traceability of test cases to requirements can verify that the requirements are covered by test
cases.
• Traceability of test results to risks can be used to evaluate the level of residual risk in a test
object.
In addition to evaluating coverage, good traceability makes it possible to determine the impact of
changes, facilitates audits, and helps meet IT governance criteria. Good traceability also makes test
progress reports and test completion reports more easily understandable by including the status of test
basis elements. This can also assist in communicating the technical aspects of testing to stakeholders in
an understandable manner. Traceability provides information to assess product quality, process
capability, and project progress against business goals.
• Good communication skills, active listening, being a team player (to interact effectively with all
stakeholders, to convey information to others, to be understood, and to report and discuss
defects)
• Analytical thinking, critical thinking, creativity (to increase effectiveness of testing)
• Technical knowledge (to increase efficiency of testing, e.g., by using appropriate test tools)
• Domain knowledge (to be able to understand and to communicate with end users/business
representatives)
Testers are often the bearers of bad news. It is a common human trait to blame the bearer of bad news.
This makes communication skills crucial for testers. Communicating test results may be perceived as
criticism of the product and of its author. Confirmation bias can make it difficult to accept information that
disagrees with currently held beliefs. Some people may perceive testing as a destructive activity, even
though it contributes greatly to project success and product quality. To try to improve this view, information
about defects and failures should be communicated in a constructive way.
The main benefit of independence of testing is that independent testers are likely to recognize different
kinds of failures and defects compared to developers because of their different backgrounds, technical
perspectives, and biases. Moreover, an independent tester can verify, challenge, or disprove
assumptions made by stakeholders during specification and implementation of the system.
However, there are also some drawbacks. Independent testers may be isolated from the development
team, which may lead to a lack of collaboration, communication problems, or an adversarial relationship
with the development team. Developers may lose a sense of responsibility for quality. Independent
testers may be seen as a bottleneck or be blamed for delays in release.
• Test analysis and design for a given test level begins during the corresponding development
phase of the SDLC, so that testing can adhere to the principle of early testing (see section 1.3)
• Testers are involved in reviewing work products as soon as drafts of these work products are
available, so that this earlier testing and defect detection can support shift left (see section 2.1.5).
• Automated processes such as CI/CD are promoted, which facilitates establishing stable test
environments
• The visibility of non-functional quality characteristics increases (e.g., performance efficiency,
reliability)
• Automation through a delivery pipeline reduces the need for repetitive manual testing
• The risk of regression is minimized due to the scale and range of automated regression tests
DevOps is not without its risks and challenges, which include:
• The DevOps delivery pipeline must be defined and established
• CI / CD tools must be introduced and maintained
• Test automation requires additional resources and may be difficult to establish and maintain
Although DevOps comes with a high level of automated testing, manual testing – especially from the
user's perspective – will still be needed.
• Component testing (also known as unit testing) focuses on testing components in isolation. It
often requires specific support, such as test harnesses or unit test frameworks. Component
testing is normally performed by developers in their development environments (an illustrative
sketch follows this list).
• Component integration testing (also known as unit integration testing) focuses on testing the
interfaces and interactions between components. Component integration testing is heavily
dependent on the integration strategy, such as bottom-up, top-down or big-bang.
• System testing focuses on the overall behavior and capabilities of an entire system or product,
often including functional testing of end-to-end tasks and the non-functional testing of quality
characteristics. For some non-functional quality characteristics, it is preferable to test them on a
complete system in a representative test environment (e.g., usability). Using simulations of sub-
systems is also possible. System testing may be performed by an independent test team, and is
related to specifications for the system.
• System integration testing focuses on testing the interfaces of the system under test and other
systems and external services. System integration testing requires suitable test environments
preferably similar to the operational environment.
• Acceptance testing focuses on validation and on demonstrating readiness for deployment,
which means that the system fulfills the user’s business needs. Ideally, acceptance testing should
be performed by the intended users. The main forms of acceptance testing are: user acceptance
testing (UAT), operational acceptance testing, contractual acceptance testing and regulatory
acceptance testing, alpha testing and beta testing.
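For illustration only (the component under test, calculate_discount, and its business rules are assumptions, not part of this syllabus), a component test using a unit test framework could look as follows:

import unittest

def calculate_discount(order_total: float) -> float:
    # Hypothetical component under test: returns a discount rate for an order total.
    if order_total < 0:
        raise ValueError("order total must not be negative")
    if order_total >= 500:
        return 0.10
    if order_total >= 100:
        return 0.05
    return 0.0

class CalculateDiscountTest(unittest.TestCase):
    # Component tests exercise the component in isolation, using the framework as a test harness.
    def test_no_discount_below_100(self):
        self.assertEqual(calculate_discount(99.99), 0.0)

    def test_five_percent_discount_from_100(self):
        self.assertEqual(calculate_discount(100), 0.05)

    def test_ten_percent_discount_from_500(self):
        self.assertEqual(calculate_discount(500), 0.10)

    def test_negative_total_is_rejected(self):
        with self.assertRaises(ValueError):
            calculate_discount(-1)

if __name__ == "__main__":
    unittest.main()

Such tests are typically written and run by developers in their own development environment, as noted above.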
Test levels are distinguished by the following non-exhaustive list of attributes, to avoid overlapping of test
activities:
• Test object
• Test objectives
• Test basis
• Defects and failures
• Approach and responsibilities
• Performance efficiency
• Compatibility
• Usability (also known as interaction capability)
• Reliability
• Security
• Maintainability
• Portability (also known as flexibility)
• Safety
It is sometimes appropriate for non-functional testing to start early in the SDLC (e.g., as part of reviews or
component testing). Many non-functional tests are derived from functional tests as they use the same
functional tests, but check that while performing the function, a non-functional constraint is satisfied (e.g.,
checking that a function performs within a specified time, or a function can be ported to a new platform).
The late discovery of non-functional defects can pose a serious threat to the success of a project. Non-
functional testing sometimes needs a very specific test environment, such as a usability lab for usability
testing.
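As an illustration only (the function under test and the 200 ms budget are assumptions, not requirements of this syllabus), a functional test can be extended with a non-functional timing check as follows:

import time
import unittest

def search_catalog(term: str) -> list[str]:
    # Hypothetical function under test.
    catalog = ["red shirt", "blue shirt", "red shoes"]
    return [item for item in catalog if term in item]

class SearchCatalogTest(unittest.TestCase):
    def test_search_is_correct_and_fast_enough(self):
        start = time.perf_counter()
        result = search_catalog("red")
        elapsed = time.perf_counter() - start
        # Functional expectation: the matching items are returned.
        self.assertEqual(result, ["red shirt", "red shoes"])
        # Non-functional constraint (assumed budget): the call completes within 200 ms.
        self.assertLess(elapsed, 0.2)

if __name__ == "__main__":
    unittest.main()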
Black-box testing (see section 4.2) is specification-based and derives tests from documentation not
related to the internal structure of the test object. The main objective of black-box testing is checking the
system's behavior against its specifications.
White-box testing (see section 4.3) is structure-based and derives tests from the system's
implementation or internal structure (e.g., code, architecture, work flows, and data flows). The main
objective of white-box testing is to cover the underlying structure by the tests to an acceptable level.
All four of the above-mentioned test types can be applied to all test levels, although the focus will be
different at each level. Different test techniques can be used to derive test conditions and test cases for
all the mentioned test types.
connected systems. Regression testing may not be restricted to the test object itself but can also be
related to the environment. It is advisable first to perform an impact analysis to recognize the extent of the
regression testing. Impact analysis shows which parts of the software could be affected.
Regression test suites are run many times and generally the number of regression test cases will
increase with each iteration or release, so regression testing is a strong candidate for automation. Test
automation should start early in the project. Where CI is used, such as in DevOps (see section 2.1.4), it is
good practice to also include automated regression tests. Depending on the situation, this may include
regression tests on different test levels.
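For illustration only (the module names and the module-to-test mapping are assumptions), impact analysis can be used to select the regression tests to run, e.g., as part of a CI pipeline:

# Hypothetical sketch of impact-analysis-driven regression test selection.
TEST_MAP = {
    "payment": ["test_payment_authorization", "test_refund"],
    "catalog": ["test_search", "test_product_details"],
    "checkout": ["test_checkout_flow", "test_payment_authorization"],
}

def select_regression_tests(changed_modules: list[str]) -> set[str]:
    # Return the regression tests covering the modules affected by a change.
    selected: set[str] = set()
    for module in changed_modules:
        selected.update(TEST_MAP.get(module, []))
    return selected

if __name__ == "__main__":
    # A change touching "checkout" also re-runs the shared payment authorization test.
    print(sorted(select_regression_tests(["checkout"])))
    # ['test_checkout_flow', 'test_payment_authorization']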
Confirmation testing and/or regression testing for the test object are needed on all test levels if defects
are fixed and/or changes are made on these test levels.
Even though reviews can be costly to implement, the overall project costs are usually much lower than
when no reviews are performed because less time and effort needs to be spent on fixing defects later in
the project.
Certain code defects can be detected using static analysis more efficiently than in dynamic testing,
usually resulting in both fewer code defects and a lower overall development effort.
• Fixing and reporting. For every defect, a defect report should be created so that corrective
actions can be followed up. Once the exit criteria are reached, the work product can be accepted.
The review results are reported.
• Technical Review. A technical review is performed by technically qualified reviewers and led by
a moderator. The objectives of a technical review are to gain consensus and make decisions
regarding a technical problem, but also to detect anomalies, evaluate quality and build confidence
in the work product, generate new ideas, and to motivate and enable authors to improve.
• Inspection. As inspections are the most formal type of review, they follow the complete generic
process (see section 3.2.2). The main objective is to find the maximum number of anomalies.
Other objectives are to evaluate quality, build confidence in the work product, and to motivate and
enable authors to improve. Metrics are collected and used to improve the SDLC, including the
inspection process. In inspections, the author cannot act as the review leader or scribe.
For simple test items, EP can be easy, but in practice, understanding how the test object will treat
different values is often complicated. Therefore, partitioning should be done with care.
A partition containing valid values is called a valid partition. A partition containing invalid values is called
an invalid partition. The definitions of valid and invalid values may vary among teams and organizations.
For example, valid values may be interpreted as those that should be processed by the test object or as
those for which the specification defines their processing. Invalid values may be interpreted as those that
should be ignored or rejected by the test object or as those for which no processing is defined in the test
object specification.
In EP, the coverage items are the equivalence partitions. To achieve 100% coverage with this test
technique, test cases must exercise all identified partitions (including invalid partitions) by covering each
partition at least once. Coverage is measured as the number of partitions exercised by at least one test
case, divided by the total number of identified partitions, and is expressed as a percentage.
Many test items include multiple sets of partitions (e.g., test items with more than one input parameter),
which means that a test case will cover partitions from different sets of partitions. The simplest coverage
criterion in the case of multiple sets of partitions is called Each Choice coverage (Ammann 2016). Each
Choice coverage requires test cases to exercise each partition from each set of partitions at least once.
Each Choice coverage does not take into account combinations of partitions.
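As a sketch of the coverage measurement described above (the partitions and test cases are assumptions), equivalence partition coverage can be computed as follows:

# Hypothetical example: 4 identified partitions, 3 of them exercised -> 75% coverage.
identified_partitions = {
    "age: negative (invalid)",
    "age: 0-17 (invalid for purchase)",
    "age: 18-64 (valid)",
    "age: 65+ (valid, senior rate)",
}

# Each executed test case is mapped to the partition(s) it exercises.
executed_tests = {
    "TC1": {"age: 18-64 (valid)"},
    "TC2": {"age: 0-17 (invalid for purchase)"},
    "TC3": {"age: 65+ (valid, senior rate)"},
}

exercised = set().union(*executed_tests.values())
coverage = len(exercised & identified_partitions) / len(identified_partitions) * 100
print(f"EP coverage: {coverage:.0f}%")  # EP coverage: 75%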
In all states coverage, the coverage items are the states. To achieve 100% all states coverage, test
cases must ensure that all the states are exercised. Coverage is measured as the number of exercised
states divided by the total number of states and is expressed as a percentage.
In valid transitions coverage (also called 0-switch coverage), the coverage items are single valid
transitions. To achieve 100% valid transitions coverage, test cases must exercise all the valid transitions.
Coverage is measured as the number of exercised valid transitions divided by the total number of valid
transitions and is expressed as a percentage.
In all transitions coverage, the coverage items are all the transitions shown in a state table. To achieve
100% all transitions coverage, test cases must exercise all the valid transitions and attempt to execute
invalid transitions. Testing only one invalid transition in a single test case helps to avoid defect masking,
i.e., a situation in which one defect prevents the detection of another. Coverage is measured as the
number of valid and invalid transitions exercised or attempted to be covered by executed test cases,
divided by the total number of valid and invalid transitions, and is expressed as a percentage.
All states coverage is weaker than valid transitions coverage, because it can typically be achieved without
exercising all the transitions. Valid transitions coverage is the most widely used coverage criterion.
Achieving full valid transitions coverage guarantees full all states coverage. Achieving full all transitions
coverage guarantees both full all states coverage and full valid transitions coverage and should be a
minimum requirement for mission and safety-critical software.
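As a sketch of the coverage measurements described above (the state model of a simple door and the executed transitions are assumptions), all states coverage and valid transitions coverage can be computed as follows; the example also shows how 100% all states coverage can be achieved without full valid transitions coverage:

# Hypothetical state model: a door with three states and four valid transitions.
states = {"closed", "open", "locked"}
valid_transitions = {
    ("closed", "open"),
    ("open", "closed"),
    ("closed", "locked"),
    ("locked", "closed"),
}

# Valid transitions exercised by the executed test cases.
exercised_transitions = {
    ("closed", "open"),
    ("open", "closed"),
    ("closed", "locked"),
}

exercised_states = {state for transition in exercised_transitions for state in transition}

all_states_coverage = len(exercised_states & states) / len(states) * 100
valid_transitions_coverage = (
    len(exercised_transitions & valid_transitions) / len(valid_transitions) * 100
)
print(f"All states coverage: {all_states_coverage:.0f}%")                # 100%
print(f"Valid transitions coverage: {valid_transitions_coverage:.0f}%")  # 75%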
Checklist items are often phrased in the form of a question. It should be possible to check each item
separately and directly. These items may refer to requirements, graphical interface properties, quality
characteristics or other forms of test conditions. Checklists can be created to support various test types,
including functional and non-functional testing (e.g., 10 heuristics for usability testing (Nielsen 1994)).
Some checklist entries may gradually become less effective over time because the developers will learn
to avoid making the same errors. New entries may also need to be added to reflect newly found high
severity defects. Therefore, checklists should be regularly updated based on defect analysis. However,
care should be taken to avoid letting the checklist become too long (Gawande 2009).
In the absence of detailed test cases, checklist-based testing can provide guidelines and some degree of
consistency for the testing. If the checklists are high-level, some variability in the actual testing is likely to
occur, resulting in potentially greater coverage but less repeatability.
testability of user stories, break down user stories into tasks (particularly testing tasks), estimate test effort
for all testing tasks, and identify and refine functional and non-functional aspects of the test object.
Wideband Delphi. In this iterative, expert-based technique, experts make experience-based estimations.
Each expert, in isolation, estimates the effort. The results are collected and, if any expert's estimate falls
outside the agreed boundaries, the experts discuss their current estimates. Each expert is then asked to
make a new estimation based on that feedback, again in
isolation. This process is repeated until a consensus is reached. Planning Poker is a variant of Wideband
Delphi, commonly used in Agile software development. In Planning Poker, estimates are usually made
using cards with numbers that represent the effort size.
Three-point estimation. In this expert-based technique, three estimations are made by the experts: the
most optimistic estimation (a), the most likely estimation (m) and the most pessimistic estimation (b). The
final estimate (E) is their weighted arithmetic mean. In the most popular version of this technique, the
estimate is calculated as E = (a + 4*m + b) / 6. The advantage of this technique is that it allows the
experts to calculate the measurement error: SD = (b – a) / 6. For example, if the estimates (in person-
hours) are: a=6, m=9 and b=18, then the final estimation is 10±2 person-hours (i.e., between 8 and 12
person-hours), because E = (6 + 4*9 + 18) / 6 = 10 and SD = (18 – 6) / 6 = 2.
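The calculation in the example above can be reproduced directly (a minimal sketch of the formulas E = (a + 4*m + b) / 6 and SD = (b – a) / 6):

def three_point_estimate(a: float, m: float, b: float) -> tuple[float, float]:
    # Returns (E, SD) for the optimistic (a), most likely (m) and pessimistic (b) estimates.
    e = (a + 4 * m + b) / 6
    sd = (b - a) / 6
    return e, sd

e, sd = three_point_estimate(a=6, m=9, b=18)
print(f"Estimate: {e:.0f} ± {sd:.0f} person-hours")  # Estimate: 10 ± 2 person-hours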
See (Kan 2003, Koomen 2006, Westfall 2009) for these and many other test estimation techniques.
tests. The higher the layer, the lower the test granularity, the lower the test isolation (i.e., the degree of
dependency on other elements of the system) and the higher the test execution time. Tests in the bottom
layer are small, isolated, fast, and check a small piece of functionality, so usually a lot of them are needed
to achieve a reasonable coverage. The top layer represents complex, high-level, end-to-end tests. These
high-level tests are generally slower than the tests from the lower layers, and they typically check a large
piece of functionality, so usually just a few of them are needed to achieve a reasonable level of coverage.
The number and naming of the layers may differ. For example, the original test pyramid model (Cohn
2009) defines three layers: “unit tests”, “service tests” and “UI tests”. Another popular model defines unit
(component) tests, integration (component integration) tests, and end-to-end tests. Other test levels (see
section 2.2.1) can also be used.
categorization of identified risks, determining their risk likelihood, risk impact and risk level, prioritizing,
and proposing ways to handle them. Categorization helps in assigning mitigation actions, because usually
risks falling into the same category can be mitigated using a similar approach.
Risk assessment can use a quantitative or qualitative approach, or a mix of them. In the quantitative
approach the risk level is calculated as the multiplication of risk likelihood and risk impact. In the
qualitative approach the risk level can be determined using a risk matrix.
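For illustration only (the 1-to-5 scales and the example risks are assumptions), the quantitative approach can be sketched as follows:

# Hypothetical quantitative risk assessment: risk level = risk likelihood * risk impact.
product_risks = {
    "Incorrect interest calculation": {"likelihood": 2, "impact": 5},
    "Slow response under peak load": {"likelihood": 4, "impact": 3},
    "Minor layout issue on one page": {"likelihood": 3, "impact": 1},
}

for name, risk in product_risks.items():
    risk_level = risk["likelihood"] * risk["impact"]
    print(f"{name}: risk level = {risk_level}")
# Incorrect interest calculation: risk level = 10
# Slow response under peak load: risk level = 12
# Minor layout issue on one page: risk level = 3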
Product risk analysis may influence the thoroughness and test scope. Its results are used to:
• Determine the test scope to be carried out
• Determine the particular test levels and propose test types to be performed
• Determine the test techniques to be employed and the coverage to be achieved
• Estimate the test effort required for each task
• Prioritize testing in an attempt to find the critical defects as early as possible
• Determine whether any activities in addition to testing could be employed to reduce risk
• More time for testers to design new, deeper and more effective tests
Potential risks of using test automation include:
• Unrealistic expectations about the benefits of a tool (including functionality and ease of use).
• Inaccurate estimations of the time, costs and effort required to introduce a tool, maintain test scripts,
and change the existing manual test process.
• Using a test tool when manual testing is more appropriate.
• Relying on a tool too much, e.g., ignoring the need for human critical thinking.
• Dependency on the tool vendor, which may go out of business, retire the tool, sell the tool to a
different vendor, or provide poor support (e.g., responses to queries, upgrades, and defect fixes).
• Using open-source software which may be abandoned, meaning that no further updates are
available, or whose internal components may require frequent updates as further development.
• Using an automation tool that is not compatible with the development platform.
• Choosing an unsuitable tool that does not comply with regulatory requirements and/or safety
standards.
7. References
Standards
ISO/IEC/IEEE 29119-1 (2022) Software and systems engineering – Software testing – Part 1: General
Concepts
ISO/IEC/IEEE 29119-2 (2021) Software and systems engineering – Software testing – Part 2: Test
processes
ISO/IEC/IEEE 29119-3 (2021) Software and systems engineering – Software testing – Part 3: Test
documentation
ISO/IEC/IEEE 29119-4 (2021) Software and systems engineering – Software testing – Part 4: Test
techniques
ISO/IEC 25010 (2023) Systems and software engineering – Systems and software Quality
Requirements and Evaluation (SQuaRE) – Product quality models
ISO/IEC 20246 (2017) Software and systems engineering – Work product reviews
ISO/IEC/IEEE 14764 (2022) Software engineering – Software life cycle processes – Maintenance
ISO 31000 (2018) Risk management – Principles and guidelines
Books
Adzic, G. (2009) Bridging the Communication Gap: Specification by Example and Agile Acceptance
Testing, Neuri Limited
Ammann, P. and Offutt, J. (2016) Introduction to Software Testing (2e), Cambridge University Press
Andrews, M. and Whittaker, J. (2006) How to Break Web Software: Functional and Security Testing of
Web Applications and Web Services, Addison-Wesley Professional
Beck, K. (2003) Test Driven Development: By Example, Addison-Wesley
Beizer, B. (1990) Software Testing Techniques (2e), Van Nostrand Reinhold: Boston MA
Boehm, B. (1981) Software Engineering Economics, Prentice Hall, Englewood Cliffs, NJ
Buxton, J.N. and Randell B., eds (1970), Software Engineering Techniques. Report on a conference
sponsored by the NATO Science Committee, Rome, Italy, 27–31 October 1969, p. 16
Chelimsky, D. et al. (2010) The Rspec Book: Behaviour Driven Development with Rspec, Cucumber, and
Friends, The Pragmatic Bookshelf: Raleigh, NC
Cohn, M. (2009) Succeeding with Agile: Software Development Using Scrum, Addison-Wesley
Copeland, L. (2004) A Practitioner’s Guide to Software Test Design, Artech House: Norwood MA
Craig, R. and Jaskiel, S. (2002) Systematic Software Testing, Artech House: Norwood MA
Crispin, L. and Gregory, J. (2008) Agile Testing: A Practical Guide for Testers and Agile Teams, Pearson
Education: Boston MA
Forgács, I., and Kovács, A. (2019) Practical Test Design: Selection of traditional and automated test
design techniques, BCS, The Chartered Institute for IT
Gawande A. (2009) The Checklist Manifesto: How to Get Things Right, New York, NY: Metropolitan
Books
Gärtner, M. (2011), ATDD by Example: A Practical Guide to Acceptance Test-Driven Development,
Pearson Education: Boston MA
Gilb, T., Graham, D. (1993) Software Inspection, Addison Wesley
Hendrickson, E. (2013) Explore It!: Reduce Risk and Increase Confidence with Exploratory Testing, The
Pragmatic Programmers
Hetzel, B. (1988) The Complete Guide to Software Testing, 2nd ed., John Wiley and Sons
Jeffries, R., Anderson, A., Hendrickson, C. (2000) Extreme Programming Installed, Addison-Wesley
Professional
Jorgensen, P. (2014) Software Testing, A Craftsman’s Approach (4e), CRC Press: Boca Raton FL
Kan, S. (2003) Metrics and Models in Software Quality Engineering, 2nd ed., Addison-Wesley
Kaner, C., Falk, J., and Nguyen, H.Q. (1999) Testing Computer Software, 2nd ed., Wiley
Kaner, C., Bach, J., and Pettichord, B. (2011) Lessons Learned in Software Testing: A Context-Driven
Approach, 1st ed., Wiley
Kim, G., Humble, J., Debois, P. and Willis, J. (2016) The DevOps Handbook, Portland, OR
Koomen, T., van der Aalst, L., Broekman, B. and Vroon, M. (2006) TMap Next for result-driven testing,
UTN Publishers, The Netherlands
Myers, G. (2011) The Art of Software Testing, (3e), John Wiley & Sons: New York NY
O’Regan, G. (2019) Concise Guide to Software Testing, Springer Nature Switzerland
Pressman, R.S. (2019) Software Engineering. A Practitioner’s Approach, 9th ed., McGraw Hill
Roman, A. (2018) Thinking-Driven Testing. The Most Reasonable Approach to Quality Control, Springer
Nature Switzerland
Van Veenendaal, E. (ed.) (2012) Practical Risk-Based Testing, The PRISMA Approach, UTN Publishers:
The Netherlands
Watson, A.H., Wallace, D.R. and McCabe, T.J. (1996) Structured Testing: A Testing Methodology Using
the Cyclomatic Complexity Metric, U.S. Dept. of Commerce, Technology Administration, NIST
Westfall, L. (2009) The Certified Software Quality Engineer Handbook, ASQ Quality Press
Whittaker, J. (2002) How to Break Software: A Practical Guide to Testing, Pearson
Whittaker, J. (2009) Exploratory Software Testing: Tips, Tricks, Tours, and Techniques to Guide Test
Design, Addison Wesley
Whittaker, J. and Thompson, H. (2003) How to Break Software Security, Addison Wesley
Wiegers, K. (2001) Peer Reviews in Software: A Practical Guide, Addison-Wesley Professional
Articles and Web Pages
Enders, A. (1975) “An Analysis of Errors and Their Causes in System Programs,” IEEE Transactions on
Software Engineering 1(2), pp. 140-149
Manna, Z., Waldinger, R. (1978) “The logic of computer programming,” IEEE Transactions on Software
Engineering 4(3), pp. 199-229
Marick, B. (2003) Exploration through Example, http://www.exampler.com/old-blog/2003/08/21.1.html#agile-testing-project-1
Nielsen, J. (1994) “Enhancing the explanatory power of usability heuristics,” Proceedings of the SIGCHI
Conference on Human Factors in Computing Systems: Celebrating Interdependence, ACM Press, pp.
152–158
Salman, I. (2016) “Cognitive biases in software quality and testing,” Proceedings of the 38th International
Conference on Software Engineering Companion (ICSE '16), ACM, pp. 823-826.
Wake, B. (2003) “INVEST in Good Stories, and SMART Tasks,” https://xp123.com/articles/invest-in-good-stories-and-smart-tasks/
Level 2: Understand (K2) – the candidate can select the reasons or explanations for statements related
to the topic, and can summarize, compare, classify and give examples for the testing concept.
Action verbs: classify, compare, contrast, differentiate, distinguish, exemplify, explain, give examples,
interpret, summarize.
Examples:
• “Classify the different options for writing acceptance criteria.”
• “Compare the different roles in testing” (look for similarities, differences or both).
• “Distinguish between project risks and product risks” (allows concepts to be differentiated).
• “Exemplify the purpose and content of a test plan.”
• “Explain the impact of context on the test process.”
• “Summarize the activities of the review process.”
Level 3: Apply (K3) – the candidate can carry out a procedure when confronted with a familiar task, or
select the correct procedure and apply it to a given context.
Action verbs: apply, implement, prepare, use.
Examples:
• “Apply test case prioritization” (should refer to a procedure, technique, process, algorithm etc.).
• “Prepare a defect report.”
• “Use boundary value analysis to derive test cases.”
References for the cognitive levels of learning objectives:
Anderson, L. W. and Krathwohl, D. R. (eds) (2001) A Taxonomy for Learning, Teaching, and Assessing:
A Revision of Bloom's Taxonomy of Educational Objectives, Allyn & Bacon
Business Outcomes: Foundation Level
This traceability matrix maps the business outcomes FL-BO1 to FL-BO14 to the chapters, sections and
learning objectives of this syllabus. The mapped rows include:
• Chapter 1 Fundamentals of Testing
• 1.5 Essential Skills and Good Practices in Testing
• 2.1.1 Explain the impact of the chosen software development lifecycle on testing (K2)
• 2.1.2 Recall good testing practices that apply to all software development lifecycles (K1)
• 2.1.6 Explain how retrospectives can be used as a mechanism for process improvement (K2)
• 2.3 Maintenance Testing
• 3.1.1 Recognize types of work products that can be examined by static testing (K1)
• 4.1.1 Distinguish black-box test techniques, white-box test techniques and experience-based test techniques (K2)
• 4.2 Black-box Test Techniques
• 4.5.1 Explain how to write user stories in collaboration with developers and business representatives (K2)
• 4.5.2 Classify the different options for writing acceptance criteria (K2)
• 5.1.2 Recognize how a tester adds value to iteration and release planning (K1)
• 5.1.7 Summarize the testing quadrants and their relationships with test levels and test types (K2)
• 5.2.1 Identify risk level by using risk likelihood and risk impact (K1)
• 5.2.3 Explain how product risk analysis may influence thoroughness and test scope (K2)
• 5.2.4 Explain what measures can be taken in response to analyzed product risks (K2)
• 5.3.2 Summarize the purposes, content, and audiences for test reports (K2)
ISTQB® Foundation Level Syllabus v4.0.1 is an errata release of the Foundation Level Syllabus v4.0. It
contains the following changes.
Changes in the wording of Learning Objectives, to align them with the glossary terms:
• FL-1.4.1: Summarize the different test activities and tasks -> Explain the different test activities and
related tasks
• FL-2.1.5: Explain the shift-left approach -> Explain shift left
• FL-3.1.1: Recognize types of products that can be examined by the different static test techniques
-> Recognize types of work products that can be examined by static testing
• FL-3.1.3 Compare and contrast static and dynamic testing -> Compare and contrast static testing
and dynamic testing
• FL-4.1.1: Distinguish black-box, white-box and experience-based test techniques -> Distinguish
black-box test techniques, white-box test techniques and experience-based test techniques
• FL-5.2.3: Explain how product risk analysis may influence thoroughness and scope of testing ->
Explain how product risk analysis may influence thoroughness and test scope
Text changes to align the text with the glossary terms (artifacts, documentation -> work products, level of risk
-> risk level, goals, objectives of testing, test project objectives -> test objectives, test monitoring and control
-> test monitoring and test control, test documentation -> testware, iterative and incremental development
models -> iterative development models and incremental development models, test environment elements
-> test environment items, software quality characteristics -> quality characteristics, test progress and
completion reports -> test progress reports and test completion reports, test independence -> independence
of testing, stage -> phase, component and component integration testing -> component testing and
component integration testing, performance -> performance efficiency, contractual and regulatory
acceptance testing -> contractual acceptance testing and regulatory acceptance testing, white box -> white-
box, entry/exit criteria -> entry criteria or exit criteria, organizational test policy -> test policy, shift-left, shift-
left approach, shift-left strategy -> shift left, types of tests -> test types, control of the testing -> test control,
stage of testing -> test activity, reporting on test progress -> test progress reporting, reporting on testing for
a completed project -> test completion reporting, false positive -> false-positive result, step -> test step,
scope of testing -> test scope, Test design and implementation tools -> Test design and test implementation
tools, static and dynamic testing -> static testing and dynamic testing)
Update of ISO 25010. A new version of the ISO 25010 standard was published in 2023. It renames
“usability” to “interaction capability” and “portability” to “flexibility”, and adds a new characteristic,
“safety”. We keep the original characteristic names, but add the new names for usability and portability in
section 2.2.2.
Three keywords were added (test process and traceability in Chapter 1, test strategy in Chapter 5).
• In section 6.2, “defect rate” was replaced with “failure rate”, and “that are too complicated for humans
to derive” was replaced by “that are too complicated for humans to determine”.
Moreover, several typos were fixed and some terms were unified across the whole syllabus (e.g., conduct
-> perform).
11. Index