ISTQB CTFL Syllabus v4.0
Copyright Notice
© International Software Testing Qualifications Board (hereinafter called ISTQB®).
ISTQB® is a registered trademark of the International Software Testing Qualifications Board.
Copyright © 2023 the authors of the Foundation Level v4.0 syllabus: Renzo Cerquozzi, Wim Decoutere,
Klaudia Dussa-Zieger, Jean-François Riverin, Arnika Hryszko, Martin Klonk, Michaël Pilaeten, Meile
Posthuma, Stuart Reid, Eric Riou du Cosquer (chair), Adam Roman, Lucjan Stapp, Stephanie Ulrich (vice
chair), Eshraka Zakaria.
Copyright © 2019 the authors for the update 2019 Klaus Olsen (chair), Meile Posthuma and Stephanie
Ulrich.
Copyright © 2018 the authors for the update 2018 Klaus Olsen (chair), Tauhida Parveen (vice chair), Rex
Black (project manager), Debra Friedenberg, Matthias Hamburg, Judy McKay, Meile Posthuma, Hans
Schaefer, Radoslaw Smilgin, Mike Smith, Steve Toms, Stephanie Ulrich, Marie Walsh, and Eshraka
Zakaria.
Copyright © 2011 the authors for the update 2011 Thomas Müller (chair), Debra Friedenberg, and the
ISTQB WG Foundation Level.
Copyright © 2010 the authors for the update 2010 Thomas Müller (chair), Armin Beer, Martin Klonk, and
Rahul Verma.
Copyright © 2007 the authors for the update 2007 Thomas Müller (chair), Dorothy Graham, Debra
Friedenberg and Erik van Veenendaal.
Copyright © 2005 the authors Thomas Müller (chair), Rex Black, Sigrid Eldh, Dorothy Graham, Klaus
Olsen, Maaret Pyhäjärvi, Geoff Thompson, and Erik van Veenendaal.
All rights reserved. The authors hereby transfer the copyright to the ISTQB®. The authors (as current
copyright holders) and ISTQB® (as the future copyright holder) have agreed to the following conditions of
use:
• Extracts, for non-commercial use, from this document may be copied if the source is acknowledged.
Any Accredited Training Provider may use this syllabus as the basis for a training course if the
authors and the ISTQB® are acknowledged as the source and copyright owners of the syllabus and
provided that any advertisement of such a training course may mention the syllabus only after official
accreditation of the training materials has been received from an ISTQB®-recognized Member Board.
• Any individual or group of individuals may use this syllabus as the basis for articles and books, if the
authors and the ISTQB® are acknowledged as the source and copyright owners of the syllabus.
• Any other use of this syllabus is prohibited without first obtaining the approval in writing of the
ISTQB®.
• Any ISTQB®-recognized Member Board may translate this syllabus provided they reproduce the
abovementioned Copyright Notice in the translated version of the syllabus.
Acknowledgements
This document was formally released by the General Assembly of the ISTQB® on 21 April 2023.
It was produced by a team from the ISTQB joint Foundation Level & Agile Working Groups: Laura Albert,
Renzo Cerquozzi (vice chair), Wim Decoutere, Klaudia Dussa-Zieger, Chintaka Indikadahena, Arnika
Hryszko, Martin Klonk, Kenji Onishi, Michaël Pilaeten (co-chair), Meile Posthuma, Gandhinee Rajkomar,
Stuart Reid, Eric Riou du Cosquer (co-chair), Jean-François Riverin, Adam Roman, Lucjan Stapp,
Stephanie Ulrich (vice chair), Eshraka Zakaria.
The team thanks Stuart Reid, Patricia McQuaid and Leanne Howard for their technical review and the
review team and the Member Boards for their suggestions and input.
The following persons participated in the reviewing, commenting and balloting of this syllabus: Adam
Roman, Adam Scierski, Ágota Horváth, Ainsley Rood, Ale Rebon Portillo, Alessandro Collino, Alexander
Alexandrov, Amanda Logue, Ana Ochoa, André Baumann, André Verschelling, Andreas Spillner, Anna
Miazek, Arnd Pehl, Arne Becher, Attila Gyúri, Attila Kovács, Beata Karpinska, Benjamin Timmermans,
Blair Mo, Carsten Weise, Chinthaka Indikadahena, Chris Van Bael, Ciaran O'Leary, Claude Zhang,
Cristina Sobrero, Dandan Zheng, Dani Almog, Daniel Säther, Daniel van der Zwan, Danilo Magli, Darvay
Tamás Béla, Dawn Haynes, Dena Pauletti, Dénes Medzihradszky, Doris Dötzer, Dot Graham, Edward
Weller, Erhardt Wunderlich, Eric Riou Du Cosquer, Florian Fieber, Fran O'Hara, François Vaillancourt,
Frans Dijkman, Gabriele Haller, Gary Mogyorodi, Georg Sehl, Géza Bujdosó, Giancarlo Tomasig, Giorgio
Pisani, Gustavo Márquez Sosa, Helmut Pichler, Hongbao Zhai, Horst Pohlmann, Ignacio Trejos, Ilia
Kulakov, Ine Lutterman, Ingvar Nordström, Iosif Itkin, Jamie Mitchell, Jan Giesen, Jean-Francois Riverin,
Joanna Kazun, Joanne Tremblay, Joëlle Genois, Johan Klintin, John Kurowski, Jörn Münzel, Judy
McKay, Jürgen Beniermann, Karol Frühauf, Katalin Balla, Kevin Kooh, Klaudia Dussa-Zieger, Klaus
Erlenbach, Klaus Olsen, Krisztián Miskó, Laura Albert, Liang Ren, Lijuan Wang, Lloyd Roden, Lucjan
Stapp, Mahmoud Khalaili, Marek Majernik, Maria Clara Choucair, Mark Rutz, Markus Niehammer, Martin
Klonk, Márton Siska, Matthew Gregg, Matthias Hamburg, Mattijs Kemmink, Maud Schlich, May Abu-
Sbeit, Meile Posthuma, Mette Bruhn-Pedersen, Michal Tal, Michel Boies, Mike Smith, Miroslav Renda,
Mohsen Ekssir, Monika Stocklein Olsen, Murian Song, Nicola De Rosa, Nikita Kalyani, Nishan Portoyan,
Nitzan Goldenberg, Ole Chr. Hansen, Patricia McQuaid, Patricia Osorio, Paul Weymouth, Pawel Kwasik,
Peter Zimmerer, Petr Neugebauer, Piet de Roo, Radoslaw Smilgin, Ralf Bongard, Ralf Reissing, Randall
Rice, Rik Marselis, Rogier Ammerlaan, Sabine Gschwandtner, Sabine Uhde, Salinda Wickramasinghe,
Salvatore Reale, Sammy Kolluru, Samuel Ouko, Stephanie Ulrich, Stuart Reid, Surabhi Bellani, Szilard
Szell, Tamás Gergely, Tamás Horváth, Tatiana Sergeeva, Tauhida Parveen, Thaer Mustafa, Thomas
Eisbrenner, Thomas Harms, Thomas Heller, Tomas Rosenqvist, Werner Lieblang, Yaron Tsubery,
Zhenlei Zuo and Zsolt Hargitai.
ISTQB Working Group Foundation Level (Edition 2018): Klaus Olsen (chair), Tauhida Parveen (vice
chair), Rex Black (project manager), Eshraka Zakaria, Debra Friedenberg, Ebbe Munk, Hans Schaefer,
Judy McKay, Marie Walsh, Meile Posthuma, Mike Smith, Radoslaw Smilgin, Stephanie Ulrich, Steve
Toms, Corne Kruger, Dani Almog, Eric Riou du Cosquer, Igal Levi, Johan Klintin, Kenji Onishi, Rashed
Karim, Stevan Zivanovic, Sunny Kwon, Thomas Müller, Vipul Kocher, Yaron Tsubery and all Member
Boards for their suggestions.
ISTQB Working Group Foundation Level (Edition 2011): Thomas Müller (chair), Debra Friedenberg. The
core team thanks the review team (Dan Almog, Armin Beer, Rex Black, Julie Gardiner, Judy McKay,
Tuula Pääkkönen, Eric Riou du Cosquer, Hans Schaefer, Stephanie Ulrich, Erik van Veenendaal), and all
Member Boards for their suggestions for the current version of the syllabus.
ISTQB Working Group Foundation Level (Edition 2010): Thomas Müller (chair), Rahul Verma, Martin
Klonk and Armin Beer. The core team thanks the review team (Rex Black, Mette Bruhn-Pederson, Debra
Friedenberg, Klaus Olsen, Judy McKay, Tuula Pääkkönen, Meile Posthuma, Hans Schaefer, Stephanie
Ulrich, Pete Williams, Erik van Veenendaal), and all Member Boards for their suggestions.
ISTQB Working Group Foundation Level (Edition 2007): Thomas Müller (chair), Dorothy Graham, Debra
Friedenberg, and Erik van Veenendaal. The core team thanks the review team (Hans Schaefer,
Stephanie Ulrich, Meile Posthuma, Anders Pettersson, and Wonil Kwon) and all the Member Boards for
their suggestions.
ISTQB Working Group Foundation Level (Edition 2005): Thomas Müller (chair), Rex Black, Sigrid Eldh,
Dorothy Graham, Klaus Olsen, Maaret Pyhäjärvi, Geoff Thompson and Erik van Veenendaal. The core
team thanks the review team and all Member Boards for their suggestions.
0. Introduction
0.7. Accreditation
An ISTQB® Member Board may accredit training providers whose course material follows this syllabus.
Training providers should obtain accreditation guidelines from the Member Board or body that performs
the accreditation. An accredited course is recognized as conforming to this syllabus, and is allowed to
have an ISTQB® exam as part of the course. The accreditation guidelines for this syllabus follow the
general Accreditation Guidelines published by the Processes Management and Compliance Working
Group.
The syllabus content is not a description of the entire knowledge area of software testing; it reflects the
level of detail to be covered in Foundation Level training courses. It focuses on test concepts and
techniques that can be applied to all software projects independent of the SDLC employed.
3. Early testing saves time and money. Defects that are removed early in the process will not cause
subsequent defects in derived work products. The cost of quality will be reduced since fewer failures will
occur later in the SDLC (Boehm 1981). To find defects early, both static testing (see chapter 3) and
dynamic testing (see chapter 4) should be started as early as possible.
4. Defects cluster together. A small number of system components usually contain most of the defects
discovered or are responsible for most of the operational failures (Enders 1975). This phenomenon is an
illustration of the Pareto principle. Predicted defect clusters, and actual defect clusters observed during
testing or in operation, are an important input for risk-based testing (see section 5.2).
5. Tests wear out. If the same tests are repeated many times, they become increasingly ineffective in
detecting new defects (Beizer 1990). To overcome this effect, existing tests and test data may need to be
modified, and new tests may need to be written. However, in some cases, repeating the same tests can
have a beneficial outcome, e.g., in automated regression testing (see section 2.2.3).
6. Testing is context dependent. There is no single universally applicable approach to testing. Testing is
done differently in different contexts (Kaner 2011).
7. Absence-of-defects fallacy. It is a fallacy (i.e., a misconception) to expect that software verification
will ensure the success of a system. Thoroughly testing all the specified requirements and fixing all the
defects found could still produce a system that does not fulfill the users’ needs and expectations, that
does not help in achieving the customer’s business goals, and that is inferior compared to other
competing systems. In addition to verification, validation should also be carried out (Boehm 1981).
Test monitoring and control. Test monitoring involves the ongoing checking of all test activities and the
comparison of actual progress against the plan. Test control involves taking the actions necessary to
meet the objectives of testing. Test monitoring and control are further explained in section 5.3.
Test analysis includes analyzing the test basis to identify testable features and to define and prioritize
associated test conditions, together with the related risks and risk levels (see section 5.2). The test basis
and the test objects are also evaluated to identify defects they may contain and to assess their testability.
Test analysis is often supported by the use of test techniques (see chapter 4). Test analysis answers the
question “what to test?” in terms of measurable coverage criteria.
Test design includes elaborating the test conditions into test cases and other testware (e.g., test
charters). This activity often involves the identification of coverage items, which serve as a guide to
specify test case inputs. Test techniques (see chapter 4) can be used to support this activity. Test design
also includes defining the test data requirements, designing the test environment and identifying any
other required infrastructure and tools. Test design answers the question “how to test?”.
Test implementation includes creating or acquiring the testware necessary for test execution (e.g., test
data). Test cases can be organized into test procedures and are often assembled into test suites. Manual
and automated test scripts are created. Test procedures are prioritized and arranged within a test
execution schedule for efficient test execution (see section 5.1.5). The test environment is built and
verified to be set up correctly.
Test execution includes running the tests in accordance with the test execution schedule (test runs).
Test execution may be manual or automated. Test execution can take many forms, including continuous
testing or pair testing sessions. Actual test results are compared with the expected results. The test
results are logged. Anomalies are analyzed to identify their likely causes. This analysis allows us to report
the anomalies based on the failures observed (see section 5.5).
Test completion activities usually occur at project milestones (e.g., release, end of iteration, test level
completion). For any unresolved defects, change requests or product backlog items are created. Any testware
that may be useful in the future is identified and archived or handed over to the appropriate teams. The
test environment is shut down to an agreed state. The test activities are analyzed to identify lessons
learned and improvements for future iterations, releases, or projects (see section 2.1.6). A test completion
report is created and communicated to the stakeholders.
1.4.3. Testware
Testware is created as output work products from the test activities described in section 1.4.1. There is a
significant variation in how different organizations produce, shape, name, organize and manage their
work products. Proper configuration management (see section 5.4) ensures consistency and integrity of
work products. The following list of work products is not exhaustive:
• Test planning work products include: test plan, test schedule, risk register, and entry and exit
criteria (see section 5.1). The risk register is a list of risks together with risk likelihood, risk impact and
information about risk mitigation (see section 5.2). The test schedule, risk register and entry and exit
criteria are often part of the test plan.
• Test monitoring and control work products include: test progress reports (see section 5.3.2),
documentation of control directives (see section 5.3) and risk information (see section 5.2).
• Test analysis work products include: (prioritized) test conditions (e.g., acceptance criteria, see
section 4.5.2), and defect reports regarding defects in the test basis (if not fixed directly).
• Test design work products include: (prioritized) test cases, test charters, coverage items, test
data requirements and test environment requirements.
• Test implementation work products include: test procedures, automated test scripts, test
suites, test data, test execution schedule, and test environment elements. Examples of test
environment elements include: stubs, drivers, simulators, and service virtualizations.
• Test execution work products include: test logs, and defect reports (see section 5.5).
• Test completion work products include: test completion report (see section 5.3.2), action items
for improvement of subsequent projects or iterations, documented lessons learned, and change
requests (e.g., as product backlog items).
• Traceability of test cases to requirements can verify that the requirements are covered by test
cases.
• Traceability of test results to risks can be used to evaluate the level of residual risk in a test
object.
In addition to evaluating coverage, good traceability makes it possible to determine the impact of
changes, facilitates test audits, and helps meet IT governance criteria. Good traceability also makes test
progress and completion reports more easily understandable by including the status of test basis
elements. This can also assist in communicating the technical aspects of testing to stakeholders in an
understandable manner. Traceability provides information to assess product quality, process capability,
and project progress against business goals.
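As an illustration of how traceability data can be used, the following Python sketch (not part of the syllabus; the requirement and test case identifiers, and the mapping between them, are hypothetical) traces test cases to the requirements they cover and derives requirement coverage and the list of uncovered requirements.

```python
# Minimal sketch: bidirectional traceability between requirements and test cases,
# used here to compute requirement coverage. All identifiers are hypothetical.
from collections import defaultdict

# Each test case is traced to the requirements it covers.
test_to_requirements = {
    "TC-01": ["REQ-1", "REQ-2"],
    "TC-02": ["REQ-2"],
    "TC-03": ["REQ-4"],
}
all_requirements = ["REQ-1", "REQ-2", "REQ-3", "REQ-4"]

# Invert the mapping: which test cases cover each requirement?
requirement_to_tests = defaultdict(list)
for test_case, requirements in test_to_requirements.items():
    for requirement in requirements:
        requirement_to_tests[requirement].append(test_case)

covered = [r for r in all_requirements if requirement_to_tests[r]]
uncovered = [r for r in all_requirements if not requirement_to_tests[r]]

print(f"Requirement coverage: {len(covered)}/{len(all_requirements)} "
      f"({100 * len(covered) / len(all_requirements):.0f}%)")
print("Uncovered requirements:", uncovered)  # ['REQ-3']
```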
• Good communication skills, active listening, being a team player (to interact effectively with all
stakeholders, to convey information to others, to be understood, and to report and discuss
defects)
• Analytical thinking, critical thinking, creativity (to increase effectiveness of testing)
• Technical knowledge (to increase efficiency of testing, e.g., by using appropriate test tools)
• Domain knowledge (to be able to understand and to communicate with end users/business
representatives)
Testers are often the bearers of bad news. It is a common human trait to blame the bearer of bad news.
This makes communication skills crucial for testers. Communicating test results may be perceived as
criticism of the product and of its author. Confirmation bias can make it difficult to accept information that
disagrees with currently held beliefs. Some people may perceive testing as a destructive activity, even
though it contributes greatly to project success and product quality. To try to improve this view, information
about defects and failures should be communicated in a constructive way.
The main benefit of independence of testing is that independent testers are likely to recognize different
kinds of failures and defects compared to developers because of their different backgrounds, technical
perspectives, and biases. Moreover, an independent tester can verify, challenge, or disprove
assumptions made by stakeholders during specification and implementation of the system.
However, there are also some drawbacks. Independent testers may be isolated from the development
team, which may lead to a lack of collaboration, communication problems, or an adversarial relationship
with the development team. Developers may lose a sense of responsibility for quality. Independent
testers may be seen as a bottleneck or be blamed for delays in release.
• Testers are involved in reviewing work products as soon as drafts of this documentation are
available, so that this earlier testing and defect detection can support the shift-left strategy (see
section 2.1.5)
In retrospective meetings, the participants (not only testers, but also, e.g., developers, architects, product
owners, business analysts) discuss:
• What was successful, and should be retained?
• What was not successful and could be improved?
• How to incorporate the improvements and retain the successes in the future?
The results should be recorded and are normally part of the test completion report (see section 5.3.2).
Retrospectives are critical for the successful implementation of continuous improvement and it is
important that any recommended improvements are followed up.
Typical benefits for testing include:
• Increased test effectiveness / efficiency (e.g., by implementing suggestions for process
improvement)
• Increased quality of testware (e.g., by jointly reviewing the test processes)
• Team bonding and learning (e.g., as a result of the opportunity to raise issues and propose
improvement points)
• Improved quality of the test basis (e.g., as deficiencies in the extent and quality of the
requirements could be addressed and solved)
• Better cooperation between development and testing (e.g., as collaboration is reviewed and
optimized regularly)
• Component integration testing (also known as unit integration testing) focuses on testing the
interfaces and interactions between components. Component integration testing is heavily
dependent on the integration strategy, using approaches like bottom-up, top-down, or big-bang.
• System testing focuses on the overall behavior and capabilities of an entire system or product,
often including functional testing of end-to-end tasks and the non-functional testing of quality
characteristics. For some non-functional quality characteristics, it is preferable to test them on a
complete system in a representative test environment (e.g., usability). Using simulations of sub-
systems is also possible. System testing may be performed by an independent test team, and is
related to specifications for the system.
• System integration testing focuses on testing the interfaces of the system under test and other
systems and external services. System integration testing requires suitable test environments,
preferably similar to the operational environment.
• Acceptance testing focuses on validation and on demonstrating readiness for deployment,
which means that the system fulfills the user’s business needs. Ideally, acceptance testing should
be performed by the intended users. The main forms of acceptance testing are: user acceptance
testing (UAT), operational acceptance testing, contractual and regulatory acceptance testing,
alpha testing and beta testing.
Test levels are distinguished by the following non-exhaustive list of attributes, to avoid overlapping of test
activities:
• Test object
• Test objectives
• Test basis
• Defects and failures
• Approach and responsibilities
• Reliability
• Security
• Maintainability
• Portability
It is sometimes appropriate for non-functional testing to start early in the life cycle (e.g., as part of reviews
and component testing or system testing). Many non-functional tests are derived from functional tests as
they use the same functional tests, but check that while performing the function, a non-functional
constraint is satisfied (e.g., checking that a function performs within a specified time, or a function can be
ported to a new platform). The late discovery of non-functional defects can pose a serious threat to the
success of a project. Non-functional testing sometimes needs a very specific test environment, such as a
usability lab for usability testing.
Black-box testing (see section 4.2) is specification-based and derives tests from documentation external
to the test object. The main objective of black-box testing is checking the system's behavior against its
specifications.
White-box testing (see section 4.3) is structure-based and derives tests from the system's
implementation or internal structure (e.g., code, architecture, work flows, and data flows). The main
objective of white-box testing is to cover the underlying structure by the tests to an acceptable level.
All four of the above-mentioned test types can be applied to all test levels, although the focus will be
different at each level. Different test techniques can be used to derive test conditions and test cases for
all the mentioned test types.
section 2.1.4), it is good practice to also include automated regression tests. Depending on the situation,
this may include regression tests on different levels.
Confirmation testing and/or regression testing for the test object are needed on all test levels if defects
are fixed and/or changes are made on these test levels.
Code defects can be detected using static analysis more efficiently than in dynamic testing, usually
resulting in both fewer code defects and a lower overall development effort.
Frequent stakeholder feedback throughout the SDLC can prevent misunderstandings about requirements
and ensure that changes to requirements are understood and implemented earlier. This helps the
development team to improve their understanding of what they are building. It allows them to focus on
those features that deliver the most value to the stakeholders and that have the most positive impact on
identified risks.
For simple test objects EP can be easy, but in practice, understanding how the test object will treat
different values is often complicated. Therefore, partitioning should be done with care.
A partition containing valid values is called a valid partition. A partition containing invalid values is called
an invalid partition. The definitions of valid and invalid values may vary among teams and organizations.
For example, valid values may be interpreted as those that should be processed by the test object or as
those for which the specification defines their processing. Invalid values may be interpreted as those that
should be ignored or rejected by the test object or as those for which no processing is defined in the test
object specification.
In EP, the coverage items are the equivalence partitions. To achieve 100% coverage with this technique,
test cases must exercise all identified partitions (including invalid partitions) by covering each partition at
least once. Coverage is measured as the number of partitions exercised by at least one test case, divided
by the total number of identified partitions, and is expressed as a percentage.
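The following Python sketch illustrates this coverage measurement; it is not part of the syllabus, and the partitions chosen for the hypothetical "age" input, as well as the test values, are assumptions made for the example.

```python
# Illustrative sketch: equivalence partitions for an "age" input of a
# hypothetical registration form. Partition boundaries are assumptions.
partitions = {
    "invalid_negative": range(-100, 0),    # invalid partition
    "valid_minor": range(0, 18),           # valid partition
    "valid_adult": range(18, 120),         # valid partition
    "invalid_too_large": range(120, 200),  # invalid partition
}

def partition_of(value):
    """Return the name of the partition a test input falls into."""
    for name, values in partitions.items():
        if value in values:
            return name
    return None

test_inputs = [-5, 30, 150]  # three test cases, one input value each

exercised = {partition_of(v) for v in test_inputs} - {None}
coverage = 100 * len(exercised) / len(partitions)
print(f"EP coverage: {coverage:.0f}%")                   # 75% (3 of 4 partitions)
print("Not yet covered:", set(partitions) - exercised)   # {'valid_minor'}
```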
Many test objects include multiple sets of partitions (e.g., test objects with more than one input
parameter), which means that a test case will cover partitions from different sets of partitions. The
simplest coverage criterion in the case of multiple sets of partitions is called Each Choice coverage
(Ammann 2016). Each Choice coverage requires test cases to exercise each partition from each set of
partitions at least once. Each Choice coverage does not take into account combinations of partitions.
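A minimal sketch of Each Choice coverage follows, assuming a hypothetical test object with two input parameters; the partition names and test cases are illustrative only and not taken from the syllabus.

```python
# Illustrative sketch: Each Choice coverage over two sets of partitions
# (two input parameters of a hypothetical discount function).
partition_sets = {
    "customer_type": {"new", "regular", "vip"},
    "order_size": {"small", "large"},
}

# Each test case picks one partition from each set of partitions.
test_cases = [
    {"customer_type": "new", "order_size": "small"},
    {"customer_type": "regular", "order_size": "large"},
    {"customer_type": "vip", "order_size": "small"},
]

# Each Choice coverage: every partition of every set exercised at least once.
for parameter, parts in partition_sets.items():
    exercised = {tc[parameter] for tc in test_cases}
    missing = parts - exercised
    print(parameter, "covered" if not missing else f"missing {missing}")

# The three test cases above achieve Each Choice coverage with fewer test
# cases than the 3 * 2 = 6 needed to cover all combinations of partitions.
```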
There exist many coverage criteria for state transition testing. This syllabus discusses three of them.
In all states coverage, the coverage items are the states. To achieve 100% all states coverage, test
cases must ensure that all the states are visited. Coverage is measured as the number of visited states
divided by the total number of states, and is expressed as a percentage.
In valid transitions coverage (also called 0-switch coverage), the coverage items are single valid
transitions. To achieve 100% valid transitions coverage, test cases must exercise all the valid transitions.
Coverage is measured as the number of exercised valid transitions divided by the total number of valid
transitions, and is expressed as a percentage.
In all transitions coverage, the coverage items are all the transitions shown in a state table. To achieve
100% all transitions coverage, test cases must exercise all the valid transitions and attempt to execute
invalid transitions. Testing only one invalid transition in a single test case helps to avoid fault masking,
i.e., a situation in which one defect prevents the detection of another. Coverage is measured as the
number of valid and invalid transitions exercised or attempted to be covered by executed test cases,
divided by the total number of valid and invalid transitions, and is expressed as a percentage.
All states coverage is weaker than valid transitions coverage, because it can typically be achieved without
exercising all the transitions. Valid transitions coverage is the most widely used coverage criterion.
Achieving full valid transitions coverage guarantees full all states coverage. Achieving full all transitions
coverage guarantees both full all states coverage and full valid transitions coverage and should be a
minimum requirement for mission and safety-critical software.
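The following sketch, based on a hypothetical three-state document workflow (the states, events, and executed test run are assumptions, not taken from the syllabus), shows how all states coverage and valid transitions coverage can be measured, and why full all states coverage does not imply full valid transitions coverage.

```python
# Illustrative sketch: measuring all states coverage and valid transitions
# coverage for a hypothetical document workflow.
valid_transitions = {
    ("draft", "submit"): "in_review",
    ("in_review", "approve"): "published",
    ("in_review", "reject"): "draft",
}
states = {"draft", "in_review", "published"}
initial_state = "draft"

# One executed test case, given as the sequence of (state, event) pairs it exercises.
executed_tests = [
    [("draft", "submit"), ("in_review", "approve")],
]

exercised = {step for test in executed_tests for step in test}
visited_states = {initial_state} | {valid_transitions[t] for t in exercised}

state_coverage = 100 * len(visited_states & states) / len(states)
transition_coverage = 100 * len(exercised & set(valid_transitions)) / len(valid_transitions)
print(f"All states coverage: {state_coverage:.0f}%")              # 100%
print(f"Valid transitions coverage: {transition_coverage:.0f}%")  # 67% ("reject" not exercised)
```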
Checklist items are often phrased in the form of a question. It should be possible to check each item
separately and directly. These items may refer to requirements, graphical interface properties, quality
characteristics or other forms of test conditions. Checklists can be created to support various test types,
including functional and non-functional testing (e.g., 10 heuristics for usability testing (Nielsen 1994)).
Some checklist entries may gradually become less effective over time because the developers will learn
to avoid making the same errors. New entries may also need to be added to reflect newly found high
severity defects. Therefore, checklists should be regularly updated based on defect analysis. However,
care should be taken to avoid letting the checklist become too long (Gawande 2009).
In the absence of detailed test cases, checklist-based testing can provide guidelines and some degree of
consistency for the testing. If the checklists are high-level, some variability in the actual testing is likely to
occur, resulting in potentially greater coverage but less repeatability.
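As a minimal illustration (the checklist items below are assumptions for a hypothetical login page, not taken from the syllabus), a checklist can be represented as a simple list of questions whose outcomes are recorded during the test session.

```python
# Illustrative sketch: a checklist for checklist-based testing of a
# hypothetical login page. Items are phrased as separately checkable questions.
checklist = [
    "Are all mandatory fields marked clearly?",
    "Does the page report a helpful error for a wrong password?",
    "Is the password masked while typing?",
]

# The tester records an outcome per item during the session.
results = {item: None for item in checklist}   # None = not yet checked
results[checklist[0]] = "pass"
results[checklist[1]] = "fail"                 # would lead to a defect report

open_items = [item for item, outcome in results.items() if outcome is None]
print("Items still to check:", open_items)
```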
software development. In Planning Poker, estimates are usually made using cards with numbers that
represent the effort size.
Three-point estimation. In this expert-based technique, three estimations are made by the experts: the
most optimistic estimation (a), the most likely estimation (m) and the most pessimistic estimation (b). The
final estimate (E) is their weighted arithmetic mean. In the most popular version of this technique, the
estimate is calculated as E = (a + 4*m + b) / 6. The advantage of this technique is that it allows the
experts to calculate the measurement error: SD = (b – a) / 6. For example, if the estimates (in person-
hours) are: a=6, m=9 and b=18, then the final estimation is 10±2 person-hours (i.e., between 8 and 12
person-hours), because E = (6 + 4*9 + 18) / 6 = 10 and SD = (18 – 6) / 6 = 2.
See (Kan 2003, Koomen 2006, Westfall 2009) for these and many other test estimation techniques.
number and naming of the layers may differ. For example, the original test pyramid model (Cohn 2009)
defines three layers: “unit tests”, “service tests” and “UI tests”. Another popular model defines unit
(component) tests, integration (component integration) tests, and end-to-end tests. Other test levels (see
section 2.2.1) can also be used.
• Risk likelihood – the probability of the risk occurrence (greater than zero and less than one)
• Risk impact (harm) – the consequences of this occurrence
These two factors express the risk level, which is a measure of the risk. The higher the risk level, the
more important its treatment is.
Risk assessment can use a quantitative or qualitative approach, or a mix of them. In the quantitative
approach, the risk level is calculated as the product of risk likelihood and risk impact. In the
qualitative approach, the risk level can be determined using a risk matrix.
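The following sketch illustrates both approaches; the scales, the risk matrix, and the example values are assumptions made for illustration, not prescribed by the syllabus.

```python
# Illustrative sketch of the two risk assessment approaches described above.

# Quantitative: risk level = risk likelihood * risk impact.
likelihood = 0.3           # probability, greater than zero and less than one
impact = 40_000            # consequence of occurrence, e.g., cost in a chosen unit
quantitative_level = likelihood * impact
print("Quantitative risk level:", quantitative_level)   # 12000.0

# Qualitative: risk level read from a risk matrix of ordinal ratings.
risk_matrix = {
    ("low", "low"): "low",    ("low", "high"): "medium",
    ("high", "low"): "medium", ("high", "high"): "high",
}
qualitative_level = risk_matrix[("high", "low")]         # (likelihood, impact)
print("Qualitative risk level:", qualitative_level)      # medium
```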
Product risk analysis may influence the thoroughness and scope of testing. Its results are used to:
• Determine the scope of testing to be carried out
• Determine the particular test levels and propose test types to be performed
• Determine the test techniques to be employed and the coverage to be achieved
• Estimate the test effort required for each task
• Prioritize testing in an attempt to find the critical defects as early as possible
• Determine whether any activities in addition to testing could be employed to reduce risk
• Re-evaluating whether a test item meets entry criteria or exit criteria due to rework
• Adjusting the test schedule to address a delay in the delivery of the test environment
• Adding new resources when and where needed
Test completion collects data from completed test activities to consolidate experience, testware, and any
other relevant information. Test completion activities occur at project milestones such as when a test level
is completed, an agile iteration is finished, a test project is completed (or cancelled), a software system is
released, or a maintenance release is completed.
7. References
Standards
ISO/IEC/IEEE 29119-1 (2022) Software and systems engineering – Software testing – Part 1: General
Concepts
ISO/IEC/IEEE 29119-2 (2021) Software and systems engineering – Software testing – Part 2: Test
processes
ISO/IEC/IEEE 29119-3 (2021) Software and systems engineering – Software testing – Part 3: Test
documentation
ISO/IEC/IEEE 29119-4 (2021) Software and systems engineering – Software testing – Part 4: Test
techniques
ISO/IEC 25010 (2011) Systems and software engineering – Systems and software Quality Requirements
and Evaluation (SQuaRE) – System and software quality models
ISO/IEC 20246 (2017) Software and systems engineering – Work product reviews
ISO/IEC/IEEE 14764 (2022) Software engineering – Software life cycle processes – Maintenance
ISO 31000 (2018) Risk management – Principles and guidelines
Books
Adzic, G. (2009) Bridging the Communication Gap: Specification by Example and Agile Acceptance
Testing, Neuri Limited
Ammann, P. and Offutt, J. (2016) Introduction to Software Testing (2e), Cambridge University Press
Andrews, M. and Whittaker, J. (2006) How to Break Web Software: Functional and Security Testing of
Web Applications and Web Services, Addison-Wesley Professional
Beck, K. (2003) Test Driven Development: By Example, Addison-Wesley
Beizer, B. (1990) Software Testing Techniques (2e), Van Nostrand Reinhold: Boston MA
Boehm, B. (1981) Software Engineering Economics, Prentice Hall, Englewood Cliffs, NJ
Buxton, J.N. and Randell B., eds (1970), Software Engineering Techniques. Report on a conference
sponsored by the NATO Science Committee, Rome, Italy, 27–31 October 1969, p. 16
Chelimsky, D. et al. (2010) The Rspec Book: Behaviour Driven Development with Rspec, Cucumber, and
Friends, The Pragmatic Bookshelf: Raleigh, NC
Cohn, M. (2009) Succeeding with Agile: Software Development Using Scrum, Addison-Wesley
Copeland, L. (2004) A Practitioner’s Guide to Software Test Design, Artech House: Norwood MA
Craig, R. and Jaskiel, S. (2002) Systematic Software Testing, Artech House: Norwood MA
Crispin, L. and Gregory, J. (2008) Agile Testing: A Practical Guide for Testers and Agile Teams, Pearson
Education: Boston MA
Forgács, I., and Kovács, A. (2019) Practical Test Design: Selection of traditional and automated test
design techniques, BCS, The Chartered Institute for IT
Gawande A. (2009) The Checklist Manifesto: How to Get Things Right, New York, NY: Metropolitan
Books
Gärtner, M. (2011), ATDD by Example: A Practical Guide to Acceptance Test-Driven Development,
Pearson Education: Boston MA
Gilb, T., Graham, D. (1993) Software Inspection, Addison Wesley
Hendrickson, E. (2013) Explore It!: Reduce Risk and Increase Confidence with Exploratory Testing, The
Pragmatic Programmers
Hetzel, B. (1988) The Complete Guide to Software Testing, 2nd ed., John Wiley and Sons
Jeffries, R., Anderson, A., Hendrickson, C. (2000) Extreme Programming Installed, Addison-Wesley
Professional
Jorgensen, P. (2014) Software Testing, A Craftsman’s Approach (4e), CRC Press: Boca Raton FL
Kan, S. (2003) Metrics and Models in Software Quality Engineering, 2nd ed., Addison-Wesley
Kaner, C., Falk, J., and Nguyen, H.Q. (1999) Testing Computer Software, 2nd ed., Wiley
Kaner, C., Bach, J., and Pettichord, B. (2011) Lessons Learned in Software Testing: A Context-Driven
Approach, 1st ed., Wiley
Kim, G., Humble, J., Debois, P. and Willis, J. (2016) The DevOps Handbook, Portland, OR
Koomen, T., van der Aalst, L., Broekman, B. and Vroon, M. (2006) TMap Next for result-driven testing,
UTN Publishers, The Netherlands
Myers, G. (2011) The Art of Software Testing, (3e), John Wiley & Sons: New York NY
O’Regan, G. (2019) Concise Guide to Software Testing, Springer Nature Switzerland
Pressman, R.S. (2019) Software Engineering. A Practitioner’s Approach, 9th ed., McGraw Hill
Roman, A. (2018) Thinking-Driven Testing. The Most Reasonable Approach to Quality Control, Springer
Nature Switzerland
Van Veenendaal, E. (ed.) (2012) Practical Risk-Based Testing, The PRISMA Approach, UTN Publishers:
The Netherlands
Watson, A.H., Wallace, D.R. and McCabe, T.J. (1996) Structured Testing: A Testing Methodology Using
the Cyclomatic Complexity Metric, U.S. Dept. of Commerce, Technology Administration, NIST
Westfall, L. (2009) The Certified Software Quality Engineer Handbook, ASQ Quality Press
Whittaker, J. (2002) How to Break Software: A Practical Guide to Testing, Pearson
Whittaker, J. (2009) Exploratory Software Testing: Tips, Tricks, Tours, and Techniques to Guide Test
Design, Addison Wesley
Whittaker, J. and Thompson, H. (2003) How to Break Software Security, Addison Wesley
Wiegers, K. (2001) Peer Reviews in Software: A Practical Guide, Addison-Wesley Professional
Articles and Web Pages
Enders, A. (1975) “An Analysis of Errors and Their Causes in System Programs,” IEEE Transactions on
Software Engineering 1(2), pp. 140-149
Manna, Z., Waldinger, R. (1978) “The logic of computer programming,” IEEE Transactions on Software
Engineering 4(3), pp. 199-229
Marick, B. (2003) Exploration through Example, http://www.exampler.com/old-blog/2003/08/21.1.html#agile-testing-project-1
Nielsen, J. (1994) “Enhancing the explanatory power of usability heuristics,” Proceedings of the SIGCHI
Conference on Human Factors in Computing Systems: Celebrating Interdependence, ACM Press, pp.
152–158
Salman, I. (2016) “Cognitive biases in software quality and testing,” Proceedings of the 38th International
Conference on Software Engineering Companion (ICSE '16), ACM, pp. 823-826.
Wake, B. (2003) “INVEST in Good Stories, and SMART Tasks,” https://xp123.com/articles/invest-in-good-stories-and-smart-tasks/
Level 2: Understand (K2) – the candidate can select the reasons or explanations for statements related
to the topic, and can summarize, compare, classify and give examples for the testing concept.
Action verbs: classify, compare, contrast, differentiate, distinguish, exemplify, explain, give examples,
interpret, summarize.
Examples:
• “Classify the different options for writing acceptance criteria.”
• “Compare the different roles in testing” (look for similarities, differences or both).
• “Distinguish between project risks and product risks” (allows concepts to be differentiated).
• “Exemplify the purpose and content of a test plan.”
• “Explain the impact of context on the test process.”
• “Summarize the activities of the review process.”
Level 3: Apply (K3) – the candidate can carry out a procedure when confronted with a familiar task, or
select the correct procedure and apply it to a given context.
Action verbs: apply, implement, prepare, use.
Examples:
• “Apply test case prioritization” (should refer to a procedure, technique, process, algorithm etc.).
• “Prepare a defect report.”
• “Use boundary value analysis to derive test cases.”
References for the cognitive levels of learning objectives:
Anderson, L. W. and Krathwohl, D. R. (eds) (2001) A Taxonomy for Learning, Teaching, and Assessing: A
Revision of Bloom's Taxonomy of Educational Objectives, Allyn & Bacon
Business Outcomes: Foundation Level
Chapter/section/subsection | Learning objective | K-level | Business outcomes addressed (FL-BO1 to FL-BO14)
2.1.1 Explain the impact of the chosen software development lifecycle on testing K2 X
2.1.2 Recall good testing practices that apply to all software development lifecycles K1 X
2.1.6 Explain how retrospectives can be used as a mechanism for process improvement K2 X X
3.1.1 Recognize types of products that can be examined by the different static test techniques K1 X X
3.1.2 Explain the value of static testing K2 X X X
4.5.1 Explain how to write user stories in collaboration with developers and business representatives K2 X X
4.5.2 Classify the different options for writing acceptance criteria K2 X
5.1.2 Recognize how a tester adds value to iteration and release planning K1 X X X
5.1.7 Summarize the testing quadrants and their relationships with test levels and test types K2 X X
5.2.1 Identify risk level by using risk likelihood and risk impact K1 X X
5.2.3 Explain how product risk analysis may influence thoroughness and scope of testing K2 X X X
5.2.4 Explain what measures can be taken in response to analyzed product risks K2 X X X
5.3.2 Summarize the purposes, content, and audiences for test reports K2 X X X
ISTQB® Foundation Syllabus v4.0 is a major update based on the Foundation Level syllabus (v3.1.1) and
the Agile Tester 2014 syllabus. For this reason, there are no detailed release notes per chapter and section.
However, a summary of principal changes is provided below. Additionally, in a separate Release Notes
document, ISTQB® provides traceability between the learning objectives (LO) in the version 3.1.1 of the
Foundation Level Syllabus, 2014 version of the Agile Tester Syllabus, and the learning objectives in the
new Foundation Level v4.0 Syllabus, showing which LOs have been added, updated, or removed.
At the time this syllabus was written (2022-2023), more than one million people in more than 100 countries
had taken the Foundation Level exam, and more than 800,000 were certified testers worldwide. Since all of
them are expected to have read the Foundation Syllabus in order to pass the exam, it is likely the most widely
read software testing document ever. This major update was made with respect for this heritage and with the
aim of improving how the hundreds of thousands of future readers perceive the level of quality that ISTQB®
delivers to the global testing community.
In this version, all LOs have been edited to make them atomic and to create one-to-one traceability between
LOs and syllabus sections, so that there is no content without a corresponding LO. The goal is to make this
version easier to read, understand, learn, and translate, with a focus on increasing practical usefulness and
the balance between knowledge and skills.
This major release has made the following changes:
• Size reduction of the overall syllabus. A syllabus is not a textbook, but a document that outlines
the basic elements of an introductory course on software testing, including which topics should be
covered and at what level. Therefore, in particular:
o In most cases, examples are excluded from the text; it is the task of the training provider to
provide examples, as well as exercises, during the training
o The “Syllabus writing checklist” was followed, which suggests the maximum text size for
LOs at each K-level (K1 = max. 10 lines, K2 = max. 15 lines, K3 = max. 25 lines)
• Reduction of the number of LOs compared to the Foundation v3.1.1 and Agile v2014 syllabi
o 14 K1 LOs compared with 21 LOs in FL v3.1.1 (15) and AT 2014 (6)
o 42 K2 LOs compared with 53 LOs in FL v3.1.1 (40) and AT 2014 (13)
o 8 K3 LOs compared with 15 LOs in FL v3.1.1 (7) and AT 2014 (8)
• More extensive references to classic and/or respected books and articles on software testing and
related topics are provided
• Major changes in chapter 1 (Fundamentals of Testing)
o Section on testing skills expanded and improved
o Section on the whole team approach (K1) added
o Section on the independence of testing moved to Chapter 1 from Chapter 5
• Major changes in chapter 2 (Testing Throughout the SDLCs)
o Sections 2.1.1 and 2.1.2 rewritten and improved, the corresponding LOs are modified
o More focus on practices like: test-first approach (K1), shift-left (K2), retrospectives (K2)
o New section on testing in the context of DevOps (K2)
o Integration testing level split into two separate test levels: component integration testing
and system integration testing
• Major changes in chapter 3 (Static Testing)
o Section on review techniques, together with the K3 LO (apply a review technique)
removed
• Major changes in chapter 4 (Test Analysis and Design)
o Use case testing removed (but still present in the Advanced Test Analyst syllabus)
o More focus on collaboration-based approach to testing: new K3 LO about using ATDD to
derive test cases and two new K2 LOs about user stories and acceptance criteria
o Decision testing and coverage replaced with branch testing and coverage (first, branch
coverage is more commonly used in practice; second, different standards define “decision”
differently, as opposed to “branch”; third, this fixes a subtle but serious flaw in the old FL 2018
claim that “100% decision coverage implies 100% statement coverage”, which is not true for
programs with no decisions)
o Section on the value of white-box testing improved
• Major changes in chapter 5 (Managing the Test Activities)
o Section on test strategies/approaches removed
o New K3 LO on estimation techniques for estimating the test effort
o More focus on the well-known Agile-related concepts and tools in test management:
iteration and release planning (K1), test pyramid (K1), and testing quadrants (K2)
o Section on risk management better structured by describing four main activities: risk
identification, risk assessment, risk mitigation and risk monitoring
• Major changes in chapter 6 (Test Tools)
o Content on some test automation issues was reduced as being too advanced for the
foundation level – the section on tool selection, performing pilot projects and introducing
tools into an organization was removed
11. Index