Certified Tester
Foundation Level Syllabus
Released Version 2011
International Software Testing Qualifications Board
Copyright Notice
This document may be copied in its entirety, or extracts made, if the source is acknowledged.

Copyright © International Software Testing Qualifications Board (hereinafter called ISTQB). ISTQB is a registered trademark of the International Software Testing Qualifications Board.

Copyright © 2011 the authors for the update 2011 (Thomas Müller (chair), Debra Friedenberg, and the ISTQB WG Foundation Level)
Copyright © 2010 the authors for the update 2010 (Thomas Müller (chair), Armin Beer, Martin Klonk, Rahul Verma)
Copyright © 2007 the authors for the update 2007 (Thomas Müller (chair), Dorothy Graham, Debra Friedenberg and Erik van Veenendaal)
Copyright © 2005, the authors (Thomas Müller (chair), Rex Black, Sigrid Eldh, Dorothy Graham, Klaus Olsen, Maaret Pyhäjärvi, Geoff Thompson and Erik van Veenendaal).
All rights reserved.

The authors hereby transfer the copyright to the International Software Testing Qualifications Board (ISTQB). The authors (as current copyright holders) and ISTQB (as the future copyright holder) have agreed to the following conditions of use:
1) Any individual or training company may use this syllabus as the basis for a training course if the authors and the ISTQB are acknowledged as the source and copyright owners of the syllabus, and provided that any advertisement of such a training course may mention the syllabus only after submission for official accreditation of the training materials to an ISTQB recognized National Board.
2) Any individual or group of individuals may use this syllabus as the basis for articles, books, or other derivative writings if the authors and the ISTQB are acknowledged as the source and copyright owners of the syllabus.
3) Any ISTQB-recognized National Board may translate this syllabus and license the syllabus (or its translation) to other parties.
Version      Date                    Remark
ISTQB 2011   Effective 1-Apr-2011    Certified Tester Foundation Level Syllabus (see Release Notes)
ISTQB 2010   Effective 30-Mar-2010   Certified Tester Foundation Level Syllabus (see Release Notes)
ISTQB 2007   01-May-2007             Certified Tester Foundation Level Syllabus
ISTQB 2005   01-July-2005            Certified Tester Foundation Level Syllabus
ASQF V2.2    July-2003               ASQF Syllabus Foundation Level Version 2.2
ISEB V2.0    25-Feb-1999             ISEB Software Testing Foundation Syllabus V2.0, 25 February 1999
Table of Contents
Acknowledgements .......... 7
The Examination .......... 8
Accreditation .......... 8
1. Fundamentals of Testing (K2) .......... 10
1.1 Why is Testing Necessary (K2) .......... 11
1.1.1 Software Systems Context (K1) .......... 11
1.1.2 Causes of Software Defects (K2) .......... 11
1.1.3 Role of Testing in Software Development, Maintenance and Operations (K2) .......... 11
1.1.4 Testing and Quality (K2) .......... 11
1.2 What is Testing? (K2) .......... 13
1.3 Seven Testing Principles (K2) .......... 14
1.4 Fundamental Test Process (K1) .......... 15
1.4.1 Test Planning and Control (K1) .......... 15
1.4.2 Test Analysis and Design (K1) .......... 15
1.4.3 Test Implementation and Execution (K1) .......... 16
1.4.4 Evaluating Exit Criteria and Reporting (K1) .......... 16
1.4.5 Test Closure Activities (K1) .......... 16
1.5 The Psychology of Testing (K2) .......... 18
1.6 Code of Ethics .......... 20
2. Testing Throughout the Software Life Cycle (K2) .......... 21
2.1 Software Development Models (K2) .......... 22
2.1.1 V-model (Sequential Development Model) (K2) .......... 22
2.1.2 Iterative-incremental Development Models (K2) .......... 22
2.1.3 Testing within a Life Cycle Model (K2) .......... 22
2.2 Test Levels (K2) .......... 24
2.2.1 Component Testing (K2) .......... 24
2.2.2 Integration Testing (K2) .......... 25
2.2.3 System Testing (K2) .......... 26
2.2.4 Acceptance Testing (K2) .......... 26
2.3 Test Types (K2) .......... 28
2.3.1 Testing of Function (Functional Testing) (K2) .......... 28
2.3.2 Testing of Non-functional Software Characteristics (Non-functional Testing) (K2) .......... 28
2.3.3 Testing of Software Structure/Architecture (Structural Testing) (K2) .......... 29
2.3.4 Testing Related to Changes: Re-testing and Regression Testing (K2) .......... 29
2.4 Maintenance Testing (K2) .......... 30
3. Static Techniques (K2) .......... 31
3.1 Static Techniques and the Test Process (K2) .......... 32
3.2 Review Process (K2) .......... 33
3.2.1 Activities of a Formal Review (K1) .......... 33
3.2.2 Roles and Responsibilities (K1) .......... 33
3.2.3 Types of Reviews (K2) .......... 34
3.2.4 Success Factors for Reviews (K2) .......... 35
3.3 Static Analysis by Tools (K2) .......... 36
4. Test Design Techniques (K4) .......... 37
4.1 The Test Development Process (K3) .......... 38
4.2 Categories of Test Design Techniques (K2) .......... 39
4.3 Specification-based or Black-box Techniques (K3) .......... 40
4.3.1 Equivalence Partitioning (K3) .......... 40
4.3.2 Boundary Value Analysis (K3) .......... 40
4.3.3 Decision Table Testing (K3) .......... 40
4.3.4 State Transition Testing (K3) .......... 41
4.3.5 Use Case Testing (K2) .......... 41
4.4 Structure-based or White-box Techniques (K4) .......... 42
4.4.1 Statement Testing and Coverage (K4) .......... 42
4.4.2 Decision Testing and Coverage (K4) .......... 42
4.4.3 Other Structure-based Techniques (K1) .......... 42
4.5 Experience-based Techniques (K2) .......... 43
4.6 Choosing Test Techniques (K2) .......... 44
5. Test Management (K3) .......... 45
5.1 Test Organization (K2) .......... 47
5.1.1 Test Organization and Independence (K2) .......... 47
5.1.2 Tasks of the Test Leader and Tester (K1) .......... 47
5.2 Test Planning and Estimation (K3) .......... 49
5.2.1 Test Planning (K2) .......... 49
5.2.2 Test Planning Activities (K3) .......... 49
5.2.3 Entry Criteria (K2) .......... 49
5.2.4 Exit Criteria (K2) .......... 49
5.2.5 Test Estimation (K2) .......... 50
5.2.6 Test Strategy, Test Approach (K2) .......... 50
5.3 Test Progress Monitoring and Control (K2) .......... 51
5.3.1 Test Progress Monitoring (K1) .......... 51
5.3.2 Test Reporting (K2) .......... 51
5.3.3 Test Control (K2) .......... 51
5.4 Configuration Management (K2) .......... 52
5.5 Risk and Testing (K2) .......... 53
5.5.1 Project Risks (K2) .......... 53
5.5.2 Product Risks (K2) .......... 53
5.6 Incident Management (K3) .......... 55
6. Tool Support for Testing (K2) .......... 57
6.1 Types of Test Tools (K2) .......... 58
6.1.1 Tool Support for Testing (K2) .......... 58
6.1.3 Tool Support for Management of Testing and Tests (K1) .......... 59
6.1.5 Tool Support for Test Specification (K1) .......... 59
6.1.6 Tool Support for Test Execution and Logging (K1) .......... 60
6.1.8 Tool Support for Specific Testing Needs (K1) .......... 60
6.2 Effective Use of Tools: Potential Benefits and Risks (K2) .......... 62
6.2.1 Potential Benefits and Risks of Tool Support for Testing (for all tools) (K2) .......... 62
6.2.2 Special Considerations for Some Types of Tools (K1) .......... 62
6.3 Introducing a Tool into an Organization (K1) .......... 64
7. References .......... 65
Books .......... 65
10.1.1 General Rules .......... 71
10.1.2 Current Content .......... 71
10.1.3 Learning Objectives .......... 71
10.1.4 Overall Structure .......... 71
11. Appendix D Notice to Training Providers .......... 73
12. Appendix E Release Notes .......... 74
Release 2010 .......... 74
Release 2011 .......... 74
13. Index .......... 76
Acknowledgements

International Software Testing Qualifications Board Working Group Foundation Level (Edition 2011): Thomas Müller (chair), Debra Friedenberg. The core team thanks the review team (Dan Almog, Armin Beer, Rex Black, Julie Gardiner, Judy McKay, Tuula Pääkkönen, Eric Riou du Cosquier, Hans Schaefer, Stephanie Ulrich, Erik van Veenendaal) and all National Boards for the suggestions for the current version of the syllabus.

International Software Testing Qualifications Board Working Group Foundation Level (Edition 2010): Thomas Müller (chair), Rahul Verma, Martin Klonk and Armin Beer. The core team thanks the review team (Rex Black, Mette Bruhn-Pedersen, Debra Friedenberg, Klaus Olsen, Judy McKay, Tuula Pääkkönen, Meile Posthuma, Hans Schaefer, Stephanie Ulrich, Pete Williams, Erik van Veenendaal) and all National Boards for their suggestions.

International Software Testing Qualifications Board Working Group Foundation Level (Edition 2007): Thomas Müller (chair), Dorothy Graham, Debra Friedenberg, and Erik van Veenendaal. The core team thanks the review team (Hans Schaefer, Stephanie Ulrich, Meile Posthuma, Anders Pettersson, and Wonil Kwon) and all the National Boards for their suggestions.

International Software Testing Qualifications Board Working Group Foundation Level (Edition 2005): Thomas Müller (chair), Rex Black, Sigrid Eldh, Dorothy Graham, Klaus Olsen, Maaret Pyhäjärvi, Geoff Thompson and Erik van Veenendaal and the review team and all National Boards for their suggestions.
K4: analyze

Further details and examples of learning objectives are given in Appendix B.

All terms listed under "Terms" just below chapter headings shall be remembered (K1), even if not explicitly mentioned in the learning objectives.
The Examination

The Foundation Level Certificate examination will be based on this syllabus. Answers to examination questions may require the use of material based on more than one section of this syllabus. All sections of the syllabus are examinable.

The format of the examination is multiple choice.

Exams may be taken as part of an accredited training course or taken independently (e.g., at an examination center or in a public exam). Completion of an accredited training course is not a prerequisite for the exam.
Accreditation

An ISTQB National Board may accredit training providers whose course material follows this syllabus. Training providers should obtain accreditation guidelines from the board or body that performs the accreditation. An accredited course is recognized as conforming to this syllabus, and is allowed to have an ISTQB examination as part of the course.

Further guidance for training providers is given in Appendix D.
Level of Detail

The level of detail in this syllabus allows internationally consistent teaching and examination. In order to achieve this goal, the syllabus consists of:
o General instructional objectives describing the intention of the Foundation Level
o A list of information to teach, including a description, and references to additional sources if required
o Learning objectives for each knowledge area, describing the cognitive learning outcome and mindset to be achieved
o A list of terms that students must be able to recall and understand
o A description of the key concepts to teach, including sources such as accepted literature or standards

The syllabus content is not a description of the entire knowledge area of software testing; it reflects the level of detail to be covered in Foundation Level training courses.

This heading shows that Chapter 2 has learning objectives of K1 (assumed when a higher level is shown) and K2 (but not K3), and it is intended to take 115 minutes to teach the material in the chapter. Within each chapter there are a number of sections. Each section also has the learning objectives and the amount of time required. Subsections that do not have a time given are included within the time for the section.
Terms
Bug, defect, error, failure, fault, mistake, quality, risk

Failures can be caused by environmental conditions as well. For example, radiation, magnetism, electronic fields, and pollution can cause faults in firmware or influence the execution of software by changing the hardware conditions.
1.1.3 Role of Testing in Software Development, Maintenance and Operations (K2)

Rigorous testing of systems and documentation can help to reduce the risk of problems occurring during operation and contribute to the quality of the software system, if the defects found are corrected before the system is released for operational use.

Software testing may also be required to meet contractual or legal requirements, or industry-specific standards.
1.1.4 Testing and Quality (K2)

With the help of testing, it is possible to measure the quality of software in terms of defects found, for both functional and non-functional software requirements and characteristics (e.g., reliability, usability, efficiency, maintainability and portability). For more information on non-functional testing see Chapter 2; for more information on software characteristics see Software Engineering - Software Product Quality (ISO 9126).

Testing can give confidence in the quality of the software if it finds few or no defects. A properly designed test that passes reduces the overall level of risk in a system. When testing does find defects, the quality of the software system increases when those defects are fixed.

Lessons should be learned from previous projects. By understanding the root causes of defects found in other projects, processes can be improved, which in turn should prevent those defects from reoccurring and, as a consequence, improve the quality of future systems. This is an aspect of quality assurance.

Testing should be integrated as one of the quality assurance activities (i.e., alongside development standards, training and defect analysis).
Terms
Debugging, requirement, review, test case, testing, test objective

Background
A common perception of testing is that it only consists of running tests, i.e., executing the software. This is part of testing, but not all of the testing activities.

Test activities exist before and after test execution. These activities include planning and control, choosing test conditions, designing and executing test cases, checking results, evaluating exit criteria, reporting on the testing process and system under test, and finalizing or completing closure activities after a test phase has been completed. Testing also includes reviewing documents (including source code) and conducting static analysis.

Both dynamic testing and static testing can be used as a means for achieving similar objectives, and will provide information that can be used to improve both the system being tested and the development and testing processes.

Testing can have the following objectives:
o Finding defects
o Gaining confidence about the level of quality
o Providing information for decision-making

The thought process and activities involved in designing tests early in the life cycle (verifying the test basis via test design) can help to prevent defects from being introduced into code. Reviews of documents (e.g., requirements) and the identification and resolution of issues also help to prevent defects appearing in the code.

Different viewpoints in testing take different objectives into account. For example, in development testing (e.g., component, integration and system testing), the main objective may be to cause as many failures as possible so that defects in the software are identified and can be fixed. In acceptance testing, the main objective may be to confirm that the system works as expected, to gain confidence that it has met the requirements. In some cases the main objective of testing may be to assess the quality of the software (with no intention of fixing defects), to give information to stakeholders of the risk of releasing the system at a given time. Maintenance testing often includes testing that no new defects have been introduced during development of the changes. During operational testing, the main objective may be to assess system characteristics such as reliability or availability.

Debugging and testing are different. Dynamic testing can show failures that are caused by defects. Debugging is the development activity that finds, analyzes and removes the cause of the failure. Subsequent re-testing by a tester ensures that the fix does indeed resolve the failure. The responsibility for these activities is usually that testers test and developers debug.

The process of testing and the testing activities are explained in Section 1.4.
Terms
Exhaustive testing

Principles
A number of testing principles have been suggested over the past 40 years and offer general guidelines common for all testing.

Principle 1: Testing shows presence of defects
Testing can show that defects are present, but cannot prove that there are no defects. Testing reduces the probability of undiscovered defects remaining in the software but, even if no defects are found, it is not a proof of correctness.

Principle 2: Exhaustive testing is impossible
Testing everything (all combinations of inputs and preconditions) is not feasible except for trivial cases. Instead of exhaustive testing, risk analysis and priorities should be used to focus testing efforts.
Principle 3: Early testing
To find defects early, testing activities shall be started as early as possible in the software or system development life cycle, and shall be focused on defined objectives.

Principle 4: Defect clustering
Testing effort shall be focused proportionally to the expected and later observed defect density of modules. A small number of modules usually contains most of the defects discovered during pre-release testing, or is responsible for most of the operational failures.

Principle 5: Pesticide paradox
If the same tests are repeated over and over again, eventually the same set of test cases will no longer find any new defects. To overcome this "pesticide paradox", test cases need to be regularly reviewed and revised, and new and different tests need to be written to exercise different parts of the software or system to find potentially more defects.
Terms
Confirmation testing, re-testing, exit criteria, incident, regression testing, test basis, test condition, test coverage, test data, test execution, test log, test plan, test procedure, test policy, test suite, test summary report, testware

Background
The most visible part of testing is test execution. But to be effective and efficient, test plans should also include time to be spent on planning the tests, designing test cases, preparing for execution and evaluating results.

The fundamental test process consists of the following main activities:
o Test planning and control
o Test analysis and design
o Test implementation and execution
o Evaluating exit criteria and reporting
o Test closure activities

Although logically sequential, the activities in the process may overlap or take place concurrently. Tailoring these main activities within the context of the system and the project is usually required.

The test analysis and design activity has the following major tasks:
o Reviewing the test basis (such as requirements, software integrity level¹ (risk level), risk analysis reports, architecture, design, interface specifications)
o Evaluating testability of the test basis and test objects
o Identifying and prioritizing test conditions based on analysis of test items, the specification, behavior and structure of the software
o Designing and prioritizing high level test cases
o Identifying necessary test data to support the test conditions and test cases
o Designing the test environment setup and identifying any required infrastructure and tools
o Creating bi-directional traceability between test basis and test cases

¹ The degree to which software complies or must comply with a set of stakeholder-selected software and/or software-based system characteristics (e.g., software complexity, risk assessment, safety level, security level, desired performance, reliability, or cost) which are defined to reflect the importance of the software to its stakeholders.
o Reporting discrepancies as incidents and analyzing them in order to establish their cause (e.g., a defect in the code, in specified test data, in the test document, or a mistake in the way the test was executed)
o Repeating test activities as a result of action taken for each discrepancy, for example, re-execution of a test that previously failed in order to confirm a fix (confirmation testing), execution of a corrected test and/or execution of tests in order to ensure that defects have not been introduced in unchanged areas of the software or that defect fixing did not uncover other defects (regression testing)
o Closing incident reports or raising change records for any that remain open
o Finalizing and archiving testware, the test environment and the test infrastructure for later reuse
o Handing over the testware to the maintenance organization
o Analyzing lessons learned to determine changes needed for future releases and projects
o Using the information gathered to improve test maturity
Terms
Error guessing, independence

Background
The mindset to be used while testing and reviewing is different from that used while developing software. With the right mindset developers are able to test their own code, but separation of this responsibility to a tester is typically done to help focus effort and provide additional benefits, such as an independent view by trained and professional testing resources. Independent testing may be carried out at any level of testing.

A certain degree of independence (avoiding the author bias) often makes the tester more effective at finding defects and failures. Independence is not, however, a replacement for familiarity, and developers can efficiently find many defects in their own code. Several levels of independence can be defined, as shown here from low to high:
o Tests designed by the person(s) who wrote the software under test (low level of independence)
o Tests designed by another person(s) (e.g., from the development team)
o Tests designed by a person(s) from a different organizational group (e.g., an independent test team) or test specialists (e.g., usability or performance test specialists)
o Tests designed by a person(s) from a different organization or company (i.e., outsourcing or certification by an external body)

People and projects are driven by objectives. People tend to align their plans with the objectives set by management and other stakeholders, for example, to find defects or to confirm that software meets its objectives. Therefore, it is important to clearly state the objectives of testing.

Identifying failures during testing may be perceived as criticism against the product and against the author. As a result, testing is often seen as a destructive activity, even though it is very constructive in the management of product risks. Looking for failures in a system requires curiosity, professional pessimism, a critical eye, attention to detail, good communication with development peers, and experience on which to base error guessing.

If errors, defects or failures are communicated in a constructive way, bad feelings between the testers and the analysts, designers and developers can be avoided. This applies to defects found during reviews.
o Start with collaboration rather than battles; remind everyone of the common goal of better quality systems
o Communicate findings on the product in a neutral, fact-focused way without criticizing the person who created it; for example, write objective and factual incident reports and review findings
o Try to understand how the other person feels and why they react as they do
o Confirm that the other person has understood what you have said and vice versa
Involvement in software testing enables individuals to learn confidential and privileged information. A code of ethics is necessary, among other reasons, to ensure that the information is not put to inappropriate use. Recognizing the ACM and IEEE code of ethics for engineers, the ISTQB states the following code of ethics:

PUBLIC - Certified software testers shall act consistently with the public interest
CLIENT AND EMPLOYER - Certified software testers shall act in a manner that is in the best interests of their client and employer, consistent with the public interest
PRODUCT - Certified software testers shall ensure that the deliverables they provide (on the products and systems they test) meet the highest professional standards possible
JUDGMENT - Certified software testers shall maintain integrity and independence in their professional judgment
MANAGEMENT - Certified software test managers and leaders shall subscribe to and promote an ethical approach to the management of software testing
PROFESSION - Certified software testers shall advance the integrity and reputation of the profession consistent with the public interest
COLLEAGUES - Certified software testers shall be fair to and supportive of their colleagues, and promote cooperation with software developers
SELF - Certified software testers shall participate in lifelong learning regarding the practice of their profession and shall promote an ethical approach to the practice of the profession

References
1.1.5 Black, 2001, Kaner, 2002
1.2 Beizer, 1990, Black, 2001, Myers, 1979
1.3 Beizer, 1990, Hetzel, 1988, Myers, 1979
1.4 Hetzel, 1988
1.4.5 Black, 2001, Craig, 2002
1.5 Black, 2001, Hetzel, 1988
LO-2.4.1 Compare maintenance testing (testing an existing system) to testing a new application with respect to test types, triggers for testing and amount of testing (K2)
LO-2.4.2 Recognize indicators for maintenance testing (modification, migration and retirement) (K1)
LO-2.4.3 Describe the role of regression testing and impact analysis in maintenance (K2)
Terms
Commercial Off-The-Shelf (COTS), iterative-incremental development model, validation, verification, V-model

Background
Testing does not exist in isolation; test activities are related to software development activities. Different development life cycle models need different approaches to testing.

In practice, a V-model may have more, fewer or different levels of development and testing, depending on the project and the software product. For example, there may be component integration testing after component testing, and system integration testing after system testing.

Software work products (such as business scenarios or use cases, requirements specifications, design documents and code) produced during development are often the basis of testing in one or more test levels. References for generic work products include Capability Maturity Model Integration (CMMI) or Software life cycle processes (IEEE/IEC 12207). Verification and validation (and early test design) can be carried out during the development of the software work products.
A system produced using these models may be tested at several test levels during each iteration. An increment, added to others developed previously, forms a growing partial system, which should also be tested. Regression testing is increasingly important on all iterations after the first one. Verification and validation can be carried out on each increment.
integration to the infrastructure and other systems, or system deployment) and acceptance testing (functional and/or non-functional, and user and/or operational testing).
Terms
Alpha testing, beta testing, component testing, driver, field testing, functional requirement, integration, integration testing, non-functional requirement, robustness testing, stub, system testing, test environment, test level, test-driven development, user acceptance testing

Background
For each of the test levels, the following can be identified: the generic objectives, the work product(s) being referenced for deriving test cases (i.e., the test basis), the test object (i.e., what is being tested), typical defects and failures to be found, test harness requirements and tool support, and specific approaches and responsibilities.
Testing of a system's configuration data shall be considered during test planning.

o Code
Component testing may include testing of functionality and specific non-functional characteristics, such as resource-behavior (e.g., searching for memory leaks) or robustness testing, as well as structural testing (e.g., decision coverage). Test cases are derived from work products such as a specification of the component, the software design or the data model.

Typically, component testing occurs with access to the code being tested and with the support of a development environment, such as a unit test framework or debugging tool. In practice, component testing usually involves the programmer who wrote the code. Defects are typically fixed as soon as they are found, without formally managing these defects.

One approach to component testing is to prepare and automate test cases before coding. This is called a test-first approach or test-driven development. This approach is highly iterative and is based on cycles of developing test cases, then building and integrating small pieces of code, and executing the component tests, correcting any issues and iterating until they pass.
o Architecture
o Workflows
o Use cases
o Subsystems
o Infrastructure
o Interfaces
Integration testing tests interfaces between components, interactions with different parts of a system, such as the operating system, file system and hardware, and interfaces between systems.

There may be more than one level of integration testing and it may be carried out on test objects of varying size as follows:
o Component integration testing tests the interactions between software components and is done after component testing
o System integration testing tests the interactions between different systems or between hardware and software and may be done after system testing. In this case, the developing organization may control only one side of the interface. This might be considered as a risk.

Business processes implemented as workflows may involve a series of systems. Cross-platform issues may be significant.

The greater the scope of integration, the more difficult it becomes to isolate defects to a specific component or system, which may lead to increased risk and additional time for troubleshooting.

Systematic integration strategies may be based on the system architecture (such as top-down and bottom-up), functional tasks, transaction processing sequences, or some other aspect of the system or components. In order to ease fault isolation and detect defects early, integration should normally be incremental rather than "big bang".

Testing of specific non-functional characteristics (e.g., performance) may be included in integration testing as well as functional testing.

At each stage of integration, testers concentrate solely on the integration itself. For example, if they are integrating module A with module B they are interested in testing the communication between the modules, not the functionality of the individual module, as that was done during component testing. Both functional and structural approaches may be used.
Ideally, testers should understand the architecture and influence integration planning. If integration tests are planned before components or systems are built, those components can be built in the order required for most efficient testing.
o Use cases
o Functional specification
o Risk analysis reports
System testing is concerned with the behavior of a whole system/product. The testing scope shall be clearly addressed in the Master and/or Level Test Plan for that test level.

In system testing, the test environment should correspond to the final target or production environment as much as possible in order to minimize the risk of environment-specific failures not being found in testing.

System testing may include tests based on risks and/or on requirements specifications, business processes, use cases, or other high level text descriptions or models of system behavior, interactions with the operating system, and system resources.

System testing should investigate functional and non-functional requirements of the system, and data quality characteristics. Testers also need to deal with incomplete or undocumented requirements. System testing of functional requirements starts by using the most appropriate specification-based (black-box) techniques for the aspect of the system to be tested. For example, a decision table may be created for combinations of effects described in business rules. Structure-based techniques (white-box) may then be used to assess the thoroughness of the testing with respect to a structural element.
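As an illustration of the decision-table idea (the shipping rule and all values below are hypothetical, not part of the syllabus), each combination of business-rule conditions becomes one row, and each row becomes at least one system-level test case:

    # Hypothetical business rule as a decision table:
    # (customer is a member, order total is large) -> expected shipping fee
    DECISION_TABLE = [
        (True,  True,  0.0),   # rule 1: member, large order -> free shipping
        (True,  False, 2.5),   # rule 2: member, small order
        (False, True,  5.0),   # rule 3: non-member, large order
        (False, False, 7.5),   # rule 4: non-member, small order
    ]

    def shipping_fee(is_member, large_order):
        """Stand-in for the system behavior being verified."""
        if is_member:
            return 0.0 if large_order else 2.5
        return 5.0 if large_order else 7.5

    for is_member, large_order, expected in DECISION_TABLE:
        assert shipping_fee(is_member, large_order) == expected
    print("all decision table rules verified")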
o User requirements
use, although it is not necessarily the final level of testing. For example, a large-scale system integration test may come after the acceptance test for a system.

Acceptance testing may occur at various times in the life cycle.
Terms
Black-box testing, code coverage, functional testing, interoperability testing, load testing, maintainability testing, performance testing, portability testing, reliability testing, security testing, stress testing, structural testing, usability testing, white-box testing
Background
A group of test activities can be aimed at verifying the software system (or a part of a system) based on a specific reason or target for testing.

A test type is focused on a particular test objective, which could be any of the following:
o A function to be performed by the software
o A non-functional quality characteristic, such as reliability or usability
o The structure or architecture of the software or system
o Change related, i.e., confirming that defects have been fixed (confirmation testing) and looking for unintended changes (regression testing)

A model of the software may be developed and/or used in structural testing (e.g., a control flow model or menu structure model), non-functional testing (e.g., performance model, usability model, security threat modeling), and functional testing (e.g., a process flow model, a state transition model or a plain language specification).
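For instance, a state transition model can be written down directly and used to derive functional test cases, one per valid transition plus negative tests for invalid event/state pairs. The document workflow below is a hypothetical illustration, not an example from the syllabus:

    # Hypothetical state transition model for a document workflow.
    TRANSITIONS = {
        ("draft",     "submit"):  "in_review",
        ("in_review", "approve"): "published",
        ("in_review", "reject"):  "draft",
    }

    def next_state(state, event):
        """Return the resulting state, or None if the event is invalid in this state."""
        return TRANSITIONS.get((state, event))

    # Each valid transition yields at least one test case ...
    assert next_state("draft", "submit") == "in_review"
    assert next_state("in_review", "approve") == "published"
    # ... and invalid combinations are worth negative tests.
    assert next_state("draft", "approve") is None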
9126). Non-functional testing considers the external behavior of the software and in most cases uses black-box test design techniques to accomplish that.
Terms
Impact analysis, maintenance testing

Background
Once deployed, a software system is often in service for years or decades. During this time the system, its configuration data, or its environment are often corrected, changed or extended. The planning of releases in advance is crucial for successful maintenance testing. A distinction has to be made between planned releases and hot fixes. Maintenance testing is done on an existing operational system, and is triggered by modifications, migration, or retirement of the software or system.

Modifications include planned enhancement changes (e.g., release-based), corrective and emergency changes, and changes of environment, such as planned operating system or database upgrades, planned upgrade of Commercial-Off-The-Shelf software, or patches to correct newly exposed or discovered vulnerabilities of the operating system.

Maintenance testing for migration (e.g., from one platform to another) should include operational tests of the new environment as well as of the changed software. Migration testing (conversion testing) is also needed when data from another application will be migrated into the system being maintained.

Maintenance testing for the retirement of a system may include the testing of data migration or archiving if long data-retention periods are required.

In addition to testing what has been changed, maintenance testing includes regression testing to parts of the system that have not been changed. The scope of maintenance testing is related to the risk of the change, the size of the existing system and to the size of the change. Depending on the changes, maintenance testing may be done at any or all test levels and for any or all test types. Determining how the existing system may be affected by changes is called impact analysis, and is used to help decide how much regression testing to do. The impact analysis may be used to determine the regression test suite.
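A minimal sketch of that idea, with entirely hypothetical module and test names, might map parts of the system to their regression tests and select the suite from the changed and impacted parts:

    # Hypothetical traceability from system parts to regression tests.
    TESTS_BY_MODULE = {
        "billing":   ["test_invoice_totals", "test_tax_rounding"],
        "reporting": ["test_monthly_summary"],
        "login":     ["test_password_reset", "test_lockout"],
    }

    def regression_suite(changed_modules, impacted_modules):
        """Select regression tests for the changed parts plus the parts
        that impact analysis flags as affected by the change."""
        selected = set()
        for module in set(changed_modules) | set(impacted_modules):
            selected.update(TESTS_BY_MODULE.get(module, []))
        return sorted(selected)

    # A change to billing that impact analysis says also affects reporting:
    print(regression_suite(["billing"], ["reporting"]))
    # ['test_invoice_totals', 'test_monthly_summary', 'test_tax_rounding']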
Maintenance testing can be difficult if specifications are out of date or missing, or testers with domain knowledge are not available.
References
2.1.3 CMMI, Craig, 2002, Hetzel, 1988, IEEE 12207
2.2 Hetzel, 1988
3.1 Static Techniques and the Test Process (K2)
Terms
Dynamic testing, static testing

Background
Unlike dynamic testing, which requires the execution of software, static testing techniques rely on the manual examination (reviews) and automated analysis (static analysis) of the code or other project documentation without the execution of the code.

Reviews are a way of testing software work products (including code) and can be performed well before dynamic test execution. Defects detected during reviews early in the life cycle (e.g., defects found in requirements) are often much cheaper to remove than those detected by running tests on the executing code.

A review could be done entirely as a manual activity, but there is also tool support. The main manual activity is to examine a work product and make comments about it. Any software work product can be reviewed, including requirements specifications, design specifications, code, test plans, test specifications, test cases, test scripts, user guides or web pages.

Benefits of reviews include early defect detection and correction, development productivity improvements, reduced development timescales, reduced testing cost and time, lifetime cost reductions, fewer defects and improved communication. Reviews can find omissions, for example, in requirements, which are unlikely to be found in dynamic testing.

Reviews, static analysis and dynamic testing have the same objective: identifying defects. They are complementary; the different techniques can find different types of defects effectively and efficiently. Compared to dynamic testing, static techniques find causes of failures (defects) rather than the failures themselves.

Typical defects that are easier to find in reviews than in dynamic testing include: deviations from standards, requirement defects, design defects, insufficient maintainability and incorrect interface specifications.
Terms
Entry criteria, formal review, informal review, inspection, metric, moderator, peer review, reviewer, scribe, technical review, walkthrough

Background
The different types of reviews vary from informal, characterized by no written instructions for reviewers, to systematic, characterized by team participation, documented results of the review, and documented procedures for conducting the review. The formality of a review process is related to factors such as the maturity of the development process, any legal or regulatory requirements or the need for an audit trail.

The way a review is carried out depends on the agreed objectives of the review (e.g., find defects, gain understanding, educate testers and new team members, or discussion and decision by consensus).
o Moderator: the person who leads the review of the document or set of documents, including planning the review, running the meeting, and following-up after the meeting. If necessary, the moderator may mediate between the various points of view and is often the person upon whom the success of the review rests.
o Reviewers: chosen to represent different perspectives and roles in the review process, and should take part in any review meetings.
o Scribe (or recorder): documents all the issues, problems and open points that were identified during the meeting.

Looking at software products or related work products from different perspectives and using checklists can make reviews more effective and efficient. For example, a checklist based on various perspectives such as user, maintainer, tester or operations, or a checklist of typical requirements problems may help to uncover previously undetected issues.
Informal Review
o No formal process
o May take the form of pair programming or a technical lead reviewing designs and code
o Results may be documented
o Varies in usefulness depending on the reviewers

Walkthrough
o Meeting led by author
o May take the form of scenarios, dry runs, peer group participation
o Open-ended sessions
o Optional pre-meeting preparation of reviewers
o Optional preparation of a review report including list of findings

Technical Review
o Documented, defined defect-detection process that includes peers and technical experts with optional management participation
o May be performed as a peer review without management participation
o Ideally led by trained moderator (not the author)
o Pre-meeting preparation by reviewers
o Optional use of checklists
o Preparation of a review report which includes the list of findings, the verdict whether the software product meets its requirements and, where appropriate, recommendations related to findings
o May vary in practice from quite informal to very formal
o Main purposes: discussing, making decisions, evaluating alternatives, finding defects, solving technical problems and checking conformance to specifications, plans, regulations, and standards
Inspection
o Led by trained moderator (not the author)
o Usually conducted as a peer examination
o Defined roles
o Includes metrics gathering
o Formal process based on rules and checklists
o Specified entry and exit criteria for acceptance of the software product
o Pre-meeting preparation
o Inspection report including list of findings
o Formal follow-up process (with optional process improvement components)
o Optional reader

Walkthroughs, technical reviews and inspections can be performed within a peer group, i.e., colleagues at the same organizational level. This type of review is called a "peer review".
Success factors for reviews include:
o Defects found are welcomed and expressed objectively
o People issues and psychological aspects are dealt with (e.g., making it a positive experience for the author)
o The review is conducted in an atmosphere of trust; the outcome will not be used for the evaluation of the participants
o Review techniques are applied that are suitable to achieve the objectives and to the type and level of software work products and reviewers
o Checklists or roles are used if appropriate to increase effectiveness of defect identification
o Training is given in review techniques, especially the more formal techniques such as inspection
o Management supports a good review process (e.g., by incorporating adequate time for review activities in project schedules)
Terms
Compiler, complexity, control flow, data flow, static analysis

Background
The objective of static analysis is to find defects in software source code and software models. Static analysis is performed without actually executing the software being examined by the tool; dynamic testing does execute the software code. Static analysis can locate defects that are hard to find in dynamic testing. As with reviews, static analysis finds defects rather than failures. Static analysis tools analyze program code (e.g., control flow and data flow), as well as generated output such as HTML and XML.
The value of static analysis is:
o Early detection of defects prior to test execution
o Early warning about suspicious aspects of the code or design by the calculation of metrics, such as a high complexity measure
o Prevention of defects, if lessons are learned in development

Typical defects discovered by static analysis tools include:
o Inconsistent interfaces between modules and components
o Variables that are not used or are improperly declared
o Unreachable (dead) code
o Missing and erroneous logic (potentially infinite loops)
o Overly complicated constructs
o Programming standards violations
o Security vulnerabilities
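As a purely illustrative fragment (hypothetical code, not from the syllabus), a static analysis tool run over the following function would typically report several of the defects listed above without ever executing it:

    def apply_discount(price, rate, customer):   # 'customer' parameter is never used
        discount = price * rate
        total = price + discount                 # variable assigned but never read
        if rate > 1:
            return price
        return price - discount
        print("discount applied")                # unreachable (dead) code after return

A dynamic test could pass through this function without ever revealing the unused names or the unreachable statement; the static tool finds them by analyzing the code itself.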
Static analysis tools are typically used by developers (checking against predefined rules or programming standards) before and during component and integration testing or when checking in code to configuration management tools, and by designers during software modeling. Static analysis tools may produce a large number of warning messages, which need to be well-managed to allow the most effective use of the tool.
Compilers may offer some support for static analysis, including the calculation of metrics.
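As a minimal illustration (a hypothetical Python fragment, not part of the syllabus), the function below contains two of the defect types listed above; a static analysis tool or linter would typically report both findings without ever executing the code.

```python
# Illustrative only: defects of the kind static analysis reports without executing the code.
def apply_discount(price, rate):
    unused_total = price * 2       # variable that is assigned but never used
    if rate > 1:
        rate = 1                   # cap the rate at 100%
    return price * (1 - rate)
    print("discount applied")      # unreachable (dead) code after the return statement
```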
References
3.2 IEEE 1028
3.2.2 Gilb, 1993, van Veenendaal, 2004
3.2.4 Gilb, 1993, IEEE 1028
3.3 van Veenendaal, 2004
LO-4.2.2 Explain the characteristics, commonalities, and differences between specification-based testing, structure-based testing and experience-based testing (K2)
LO-4.4.2 Explain the concepts of statement and decision coverage, and give reasons why these concepts can also be used at test levels other than component testing (e.g., on business procedures at system level) (K2)
LO-4.4.3 Write test cases from given control flows using statement and decision test design techniques (K3)
LO-4.4.4 Assess statement and decision coverage for completeness with respect to defined exit criteria (K4)
Terms
Test case specification, test design, test execution schedule, test procedure specification, test script, traceability
Background
The test development process described in this section can be done in different ways, from very informal with little or no documentation, to very formal (as it is described below). The level of formality depends on the context of the testing, including the maturity of testing and development processes, time constraints, safety or regulatory requirements, and the people involved.
During test analysis, the test basis documentation is analyzed in order to determine what to test, i.e., to identify the test conditions. A test condition is defined as an item or event that could be verified by one or more test cases (e.g., a function, transaction, quality characteristic or structural element).
Establishing traceability from test conditions back to the specifications and requirements enables both effective impact analysis when requirements change, and determining requirements coverage for a set of tests. During test analysis the detailed test approach is implemented to select the test design techniques to use based on, among other considerations, the identified risks (see Chapter 5 for more on risk analysis).
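As a minimal sketch (the requirement and test condition identifiers below are invented for this example), traceability can be pictured as a simple mapping that supports both requirements coverage measurement and impact analysis when a requirement changes.

```python
# Minimal sketch, not from the syllabus: requirement IDs mapped to the
# test conditions that verify them.
traceability = {
    "REQ-01": ["TC-001", "TC-002"],
    "REQ-02": ["TC-003"],
    "REQ-03": [],                          # requirement with no test condition yet
}

def requirements_coverage(matrix):
    """Share of requirements that have at least one test condition."""
    covered = sum(1 for conditions in matrix.values() if conditions)
    return covered / len(matrix)

def impacted_tests(matrix, changed_requirement):
    """Impact analysis: test conditions to revisit when a requirement changes."""
    return matrix.get(changed_requirement, [])

print(requirements_coverage(traceability))    # 0.66... -> coverage gap on REQ-03
print(impacted_tests(traceability, "REQ-01")) # ['TC-001', 'TC-002']
```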
During test design the test cases and test data are created and specified. A test case consists of a set of input values, execution preconditions, expected results and execution postconditions, defined to cover a certain test objective(s) or test condition(s). The Standard for Software Test Documentation (IEEE STD 829-1998) describes the content of test design specifications (containing test conditions) and test case specifications.
Expected results should be produced as part of the specification of a test case and include outputs, changes to data and states, and any other consequences of the test. If expected results have not been defined, then a plausible, but erroneous, result may be interpreted as the correct one. Expected results should ideally be defined prior to test execution.
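The sketch below shows one test case with the elements named above held as a simple record; the field names and values are assumptions for this example and do not reproduce the IEEE 829 test case specification template.

```python
# Illustrative sketch only: a single test case with inputs, preconditions,
# expected results and postconditions (field names are assumed, not IEEE 829).
test_case = {
    "id": "TC-002",
    "objective": "Withdrawal above the daily limit is rejected",
    "preconditions": ["account balance is 500", "daily limit is 300"],
    "inputs": {"withdrawal_amount": 400},
    "expected_results": {
        "outcome": "transaction rejected",
        "balance_after": 500,              # no change to data or state
        "message": "daily limit exceeded",
    },
    "postconditions": ["account remains active"],
}
```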
During test implementation the test cases are developed, implemented, prioritized and organized in the test procedure specification (IEEE STD 829-1998). The test procedure specifies the sequence of actions for the execution of a test. If tests are run using a test execution tool, the sequence of actions is specified in a test script (which is an automated test procedure).
The various test procedures and automated test scripts are subsequently formed into a test execution schedule that defines the order in which the various test procedures, and possibly automated test scripts, are executed. The test execution schedule will take into account such factors as regression tests, prioritization, and technical and logical dependencies.
Categories of Test Design Techniques (K2)
Terms
Black-box test design technique, experience-based test design technique, test design technique, white-box test design technique
Background
The purpose of a test design technique is to identify test conditions, test cases, and test data.
It is a classic distinction to denote test techniques as black-box or white-box. Black-box test design techniques (also called specification-based techniques) are a way to derive and select test conditions, test cases, or test data based on an analysis of the test basis documentation. This includes both functional and non-functional testing. Black-box testing, by definition, does not use any information regarding the internal structure of the component or system to be tested. White-box test design techniques (also called structural or structure-based techniques) are based on an analysis of the structure of the component or system. Black-box and white-box testing may also be combined with experience-based techniques to leverage the experience of developers, testers and users to determine what should be tested.
Some techniques fall clearly into a single category; others have elements of more than one category. This syllabus refers to specification-based test design techniques as black-box techniques and structure-based test design techniques as white-box techniques. In addition, experience-based test design techniques are covered.
Common characteristics of specification-based test design techniques include:
o Models, either formal or informal, are used for the specification of the problem to be solved, the software or its components
o Test cases can be derived systematically from these models

Common characteristics of structure-based test design techniques include:
o Information about how the software is constructed is used to derive the test cases (e.g., code and detailed design information)
o The extent of coverage of the software can be measured for existing test cases, and further test cases can be derived systematically to increase coverage

Common characteristics of experience-based test design techniques include:
o The knowledge and experience of people are used to derive the test cases
o The knowledge of testers, developers, users and other stakeholders about the software, its usage and its environment is one source of information
o Knowledge about likely defects and their distribution is another source of information
Specification-Based or Black-Box Techniques (K3) - 150 minutes
Terms
Boundary value analysis, decision table testing, equivalence partitioning, state transition testing, use case testing
The strength of decision table testing is that it creates combinations of conditions that otherwise might not have been exercised during testing. It may be applied to all situations when the action of the software depends on several logical decisions.
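As an illustration (the conditions and the business rule below are hypothetical, not from the syllabus), a small decision table can be built by enumerating every combination of the conditions and recording the expected action for each combination; each row then becomes a candidate test.

```python
from itertools import product

# Illustrative sketch only: a two-condition decision table for a made-up loan rule.
conditions = ["is_existing_customer", "amount_within_limit"]

def expected_action(is_existing_customer, amount_within_limit):
    # Hypothetical rule used only to fill in the action column of the table.
    if is_existing_customer and amount_within_limit:
        return "approve"
    if amount_within_limit:
        return "manual review"
    return "reject"

for combination in product([True, False], repeat=len(conditions)):
    row = dict(zip(conditions, combination))
    print(row, "->", expected_action(*combination))   # one test per table row
```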
A state table shows the relationship between the states and inputs, and can highlight possible transitions that are invalid.
Tests can be designed to cover a typical sequence of states, to cover every state, to exercise every transition, to exercise specific sequences of transitions or to test invalid transitions.
State transition testing is much used within the embedded software industry and technical automation in general. However, the technique is also suitable for modeling a business object having specific states or testing screen-dialogue flows (e.g., for Internet applications or business scenarios).
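A minimal sketch (the states, inputs and dialogue are invented for this example) of a state table and of tests that cover a typical sequence of states and also exercise an invalid transition:

```python
# Illustrative sketch only: state table for a made-up login dialogue.
# Keys are (current_state, input); combinations not listed are invalid transitions.
state_table = {
    ("logged_out", "valid_credentials"):     "logged_in",
    ("logged_out", "invalid_credentials"):   "locked_warning",
    ("locked_warning", "valid_credentials"): "logged_in",
    ("logged_in", "logout"):                 "logged_out",
}

def run_transitions(start, inputs):
    """Execute a sequence of inputs; reject any invalid transition."""
    state = start
    for event in inputs:
        if (state, event) not in state_table:
            raise ValueError(f"invalid transition: {state} + {event}")
        state = state_table[(state, event)]
    return state

# Test covering a typical sequence of states:
assert run_transitions("logged_out", ["valid_credentials", "logout"]) == "logged_out"
# Test exercising an invalid transition (expected to be rejected):
try:
    run_transitions("logged_in", ["valid_credentials"])
except ValueError as error:
    print(error)
```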
Structure-Based or White-Box Techniques (K4)
Terms
Code coverage, decision coverage, statement coverage, structure-based testing
Background
Structure-based or white-box testing is based on an identified structure of the software or the system, as seen in the following examples:
o Component level: the structure of a software component, i.e., statements, decisions, branches or even distinct paths
o Integration level: the structure may be a call tree (a diagram in which modules call other modules)
o System level: the structure may be a menu structure, business process or web page structure
In this section, three code-related structural test design techniques for code coverage, based on statements, branches and decisions, are discussed. For decision testing, a control flow diagram may be used to visualize the alternatives for each decision. Branches originate from decision points in the code and show the transfer of control to different locations in the code.
Decision coverage is determined by the number of all decision outcomes covered by (designed or executed) test cases, divided by the number of all possible decision outcomes in the code under test. Decision testing is a form of control flow testing as it follows a specific flow of control through the decision points. Decision coverage is stronger than statement coverage; 100% decision coverage guarantees 100% statement coverage, but not vice versa.
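As a worked illustration (the function is hypothetical, not from the syllabus), the sketch below shows how a single test can achieve 100% statement coverage while covering only 50% of the decision outcomes, and how a second test completes decision coverage.

```python
# Illustrative sketch only: statement coverage versus decision coverage.
def grant_discount(age):
    discount = 0
    if age >= 65:              # one decision with two outcomes (True / False)
        discount = 10
    return discount

# A single test with age=70 executes every statement (100% statement coverage)
# but covers only the True outcome: 1 of 2 decision outcomes = 50% decision coverage.
# Adding age=30 also covers the False outcome: 2 of 2 = 100% decision coverage,
# which in turn guarantees 100% statement coverage (the reverse does not hold).
tests = [70, 30]
outcomes_covered = {age >= 65 for age in tests}
decision_coverage = len(outcomes_covered) / 2
print(decision_coverage)       # 1.0
```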
Terms
Exploratory testing, (fault) attack
Background
Experience-based testing is where tests are derived from the tester's skill and intuition and their experience with similar applications and technologies. When used to augment systematic techniques, these techniques can be useful in identifying special tests not easily captured by formal techniques, especially when applied after more formal approaches. However, this technique may yield widely varying degrees of effectiveness, depending on the testers' experience.
A commonly used experience-based technique is error guessing. Generally testers anticipate defects based on experience. A structured approach to the error guessing technique is to enumerate a list of possible defects and to design tests that attack these defects. This systematic approach is called fault attack. These defect and failure lists can be built based on experience, available defect and failure data, and from common knowledge about why software fails.
Exploratory testing is concurrent test design, test execution, test logging and learning, based on a test charter containing test objectives, and carried out within time-boxes. It is an approach that is most useful where there are few or inadequate specifications and severe time pressure, or in order to augment or complement other, more formal testing. It can serve as a check on the test process, to help ensure that the most serious defects are found.
Terms
No specific terms.
Background
The choice of which test techniques to use depends on a number of factors, including the type of system, regulatory standards, customer or contractual requirements, level of risk, type of risk, test objective, documentation available, knowledge of the testers, time and budget, development life cycle, use case models and previous experience with the types of defects found.
Some techniques are more applicable to certain situations and test levels; others are applicable to all test levels.
When creating test cases, testers generally use a combination of test techniques, including process, rule and data-driven techniques, to ensure adequate coverage of the object under test.
References
4.1 Craig, 2002, Hetzel, 1988, IEEE Std 829-1998
4.2 Beizer, 1990, Copeland, 2004
4.3.1 Copeland, 2004, Myers, 1979
4.3.2 Copeland, 2004, Myers, 1979
4.3.3 Beizer, 1990, Copeland, 2004
4.3.4 Beizer, 1990, Copeland, 2004
4.3.5 Copeland, 2004
4.4.3 Beizer, 1990, Copeland, 2004
4.5 Kaner, 2002
4.6 Beizer, 1990, Copeland, 2004
planning (K1)
LO-5.2.7 Recall typical factors that influence the effort related to testing (K1)
LO-5.2.8 Differentiate between two conceptually different estimation approaches: the metrics-based approach and the expert-based approach (K2)
LO-5.2.9 Recognize/justify adequate entry and exit criteria for specific test levels and groups of test cases (e.g., for integration testing, acceptance testing or test cases for usability testing) (K2)
LO-5.6.2 Write an incident report covering the observation of a failure during testing (K3)
Terms
Tester, test leader, test manager
For large, complex or safety critical projects, it is usually best to have multiple levels of testing, with some or all of the levels done by independent testers. Development staff may participate in testing, especially at the lower levels, but their lack of objectivity often limits their effectiveness. The independent testers may have the authority to require and define test processes and rules, but testers should take on such process-related roles only in the presence of a clear management mandate to do so.
The benefits of independence include:
o Independent testers see other and different defects, and are unbiased
o An independent tester can verify assumptions people made during specification and implementation of the system
Drawbacks include:
o Isolation from the development team (if treated as totally independent)
o Developers may lose a sense of responsibility for quality
o Independent testers may be seen as a bottleneck or blamed for delays in release
Testing tasks may be done by people in a specific testing role, or may be done by someone in another role, such as a project manager, quality manager, developer, business and domain expert, infrastructure or IT operations.
Typical test leader tasks may include:
o Contribute the testing perspective to other project activities, such as integration planning
o Plan the tests considering the context and understanding the test objectives and risks, including selecting test approaches, estimating the time, effort and cost of testing, and acquiring resources
o Write test summary reports based on the information gathered during testing
Terms
Test approach, test strategy
Test planning activities for an entire system or part of a system may include:
o Defining the overall approach of testing, including the definition of the test levels and entry and exit criteria
o Integrating and coordinating the testing activities into the software life cycle activities (acquisition, supply, development, operation and maintenance)
o Making decisions about what to test, what roles will perform the test activities, how the test activities should be done, and how the test results will be evaluated
o Assigning resources for the different activities defined
o Defining the amount, level of detail, structure and templates for the test documentation
o Selecting metrics for monitoring and controlling test preparation and execution, defect resolution and risk issues
o Setting the level of detail for test procedures in order to provide enough information to support reproducible test preparation and execution
Typically exit criteria may consist of measures such as:
o Thoroughness measures, such as coverage of code, functionality or risk
o Estimates of defect density or reliability measures
o Cost
o Residual risks, such as defects not fixed or lack of test coverage in certain areas
o Schedules such as those based on time to market
o The outcome of testing: the number of defects and the amount of rework required
Test Progress Monitoring and Control (K2)
Terms
Defect density, failure rate, test control, test monitoring, test summary report
Analyzed information and metrics to support recommendations and decisions about future actions, such as an assessment of defects remaining, the economic benefit of continued testing, outstanding risks, and the level of confidence in the tested software
The outline of a test summary report is given in the Standard for Software Test Documentation (IEEE Std 829-1998).
Metrics should be collected during and at the end of a test level in order to assess the adequacy of the test objectives and the test approach for that level.
An example of a test control activity is setting an entry criterion requiring fixes to have been re-tested (confirmation tested) by a developer before accepting them into a build.
Terms
Configuration management, version control
Background
The purpose of configuration management is to establish and maintain the integrity of the products (components, data and documentation) of the software or system through the project and product life cycle.
For testing, configuration management may involve ensuring the following:
o All items of testware are identified, version controlled, tracked for changes, related to each other and related to development items (test objects) so that traceability can be maintained throughout the test process
o All identified documents and software items are referenced unambiguously in test documentation
For the tester, configuration management helps to uniquely identify (and to reproduce) the tested item, test documents, the tests and the test harness(es).
During test planning, the configuration management procedures and infrastructure (tools) should be chosen, documented and implemented.
Terms
Product risk, project risk, risk, risk-based testing
Background
Risk can be defined as the chance of an event, hazard, threat or situation occurring and resulting in undesirable consequences or a potential problem. The level of risk will be determined by the likelihood of an adverse event happening and the impact (the harm resulting from that event).
Project risks include:
o Improper attitude toward or expectations of testing (e.g., not appreciating the value of finding defects during testing)
o Technical issues:
  Problems in defining the right requirements
  The extent to which requirements cannot be met given existing constraints
  Test environment not ready on time
  Late data conversion, migration planning and development, and testing data conversion/migration tools
  Low quality of the design, code, configuration data, test data and tests
o Supplier issues:
  Failure of a third party
  Contractual issues
When analyzing, managing and mitigating these risks, the test manager is following well-established project management principles. The Standard for Software Test Documentation (IEEE Std 829-1998) outline for test plans requires risks and contingencies to be stated.
Risks are used to decide where to start testing and where to test more; testing is used to reduce the risk of an adverse effect occurring, or to reduce the impact of an adverse effect.
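As a minimal sketch (the risk items and the 1-5 scales are invented for this example, and multiplying likelihood by impact is just one common way of combining them), the level of risk can be used to decide where to test first and most thoroughly:

```python
# Illustrative sketch only: level of risk derived from likelihood and impact,
# then used to order product risk items for risk-based testing.
risk_items = [
    {"item": "payment calculation", "likelihood": 4, "impact": 5},
    {"item": "report layout",       "likelihood": 3, "impact": 1},
    {"item": "data migration",      "likelihood": 2, "impact": 5},
]

for risk in risk_items:
    risk["level"] = risk["likelihood"] * risk["impact"]

# Higher level of risk -> start testing earlier and test more.
for risk in sorted(risk_items, key=lambda r: r["level"], reverse=True):
    print(risk["item"], risk["level"])
```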
In addition, testing may support the identification of new risks, may help to determine what risks should be reduced, and may lower uncertainty about risks.
Terms
Incident logging, incident management, incident report
Background
Since one of the objectives of testing is to find defects, the discrepancies between actual and expected outcomes need to be logged as incidents. An incident must be investigated and may turn out to be a defect. Appropriate actions to dispose of incidents and defects should be defined. Incidents and defects should be tracked from discovery and classification to correction and confirmation of the solution. In order to manage all incidents to completion, an organization should establish an incident management process and rules for classification.
Incidents may be raised during development, review, testing or use of a software product. They may be raised for issues in code or the working system, or in any type of documentation including requirements, development documents, test documents, and user information such as Help or installation guides.
Incident reports have the following objectives:
o Provide developers and other parties with feedback about the problem to enable identification, isolation and correction as necessary
o Provide test leaders a means of tracking the quality of the system under test and the progress of the testing
o Provide ideas for test process improvement
Details of the incident report may include:
o Description of the incident to enable reproduction and resolution, including logs, database dumps or screen shots
o Scope or degree of impact on stakeholder(s) interests
o Severity of the impact on the system
o Status of the incident (e.g., open, deferred, duplicate, waiting to be fixed, fixed awaiting re-test, closed)
o Conclusions, recommendations and approvals
o Global issues, such as other areas that may be affected by a change resulting from the incident
o Change history, such as the sequence of actions taken by project team members with respect to the incident
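A minimal sketch of an incident report held as a simple record; the field names and values are assumptions for this example and are not the IEEE 829 test incident report template.

```python
# Illustrative sketch only: one incident report with typical details.
incident_report = {
    "id": "INC-0042",
    "date": "2011-03-15",
    "reported_by": "tester A",
    "test_case": "TC-002",
    "description": "Withdrawal above the daily limit was accepted; log attached",
    "expected_result": "transaction rejected with 'daily limit exceeded'",
    "actual_result": "transaction accepted, balance reduced by 400",
    "severity": "high",
    "priority": "urgent",
    "status": "open",      # e.g., open, deferred, fixed awaiting re-test, closed
    "change_history": ["2011-03-15 raised", "2011-03-16 assigned to developer B"],
}
```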
References
5.1.1 Black, 2001, Hetzel, 1988
5.1.2 Black, 2001, Hetzel, 1988
5.2.5 Black, 2001, Craig, 2002, IEEE Std 829-1998, Kaner 2002
5.3.3 Black, 2001, Craig, 2002, Hetzel, 1988, IEEE Std 829-1998
5.4 Craig, 2002
5.5.2 Black, 2001, IEEE Std 829-1998
5.6 Black, 2001, IEEE Std 829-1998
Terms
Configuration management tool, coverage tool, debugging tool, dynamic analysis tool, incident management tool, load testing tool, modeling tool, monitoring tool, performance testing tool, probe effect, requirements management tool, review tool, security tool, static analysis tool, stress testing tool, test comparator, test data preparation tool, test design tool, test harness, test execution tool, test management tool, unit test framework tool
Tool support for testing can have one or more of the following purposes, depending on the context:
o Automate activities that require significant resources when done manually (e.g., static testing)
o Automate activities that cannot be executed manually (e.g., large scale performance testing of client-server applications)
o Increase reliability of testing (e.g., by automating large data comparisons or simulating behavior)
The term "test frameworks" is also frequently used in the industry, in at least three meanings:
o Reusable and extensible testing libraries that can be used to build testing tools (called test harnesses as well)
o A type of design of test automation (e.g., data-driven, keyword-driven)
o Overall process of execution of testing
For the purpose of this syllabus, the term "test frameworks" is used in its first two meanings, as described in Section 6.1.6.
Some tools clearly support one activity; others may support more than one activity, but are classified under the activity with which they are most closely associated. Tools from a single provider, especially those that have been designed to work together, may be bundled into one package.
Some types of test tools can be intrusive, which means that they can affect the actual outcome of the test. For example, the actual timing may be different due to the extra instructions that are executed by the tool, or you may get a different measure of code coverage. The consequence of intrusive tools is called the probe effect.
Some tools offer support more appropriate for developers (e.g., tools that are used during component and component integration testing). Such tools are marked with "(D)" in the list below.
Tool Support for Management of Testing and Tests (K1)
Management tools apply to all test activities over the entire software life cycle.
Test Management Tools
These tools provide interfaces for executing tests, tracking defects and managing requirements, along with support for quantitative analysis and reporting of the test objects. They also support tracing the test objects to requirement specifications and might have an independent version control capability or an interface to an external one.
Requirements Management Tools
These tools store requirement statements, store the attributes for the requirements (including priority), provide unique identifiers and support tracing the requirements to individual tests. These tools may also help with identifying inconsistent or missing requirements.
Incident Management Tools (Defect Tracking Tools)
These tools store and manage incident reports, i.e., defects, failures, change requests or perceived problems and anomalies, and help in managing the life cycle of incidents, optionally with support for statistical analysis.
Configuration Management Tools
Although not strictly test tools, these are necessary for storage and version management of testware and related software, especially when configuring more than one hardware/software environment in terms of operating system versions, compilers, browsers, etc.
Tool Support for Test Execution and Logging (K1)
Test Execution Tools
These tools enable tests to be executed automatically, or semi-automatically, using stored inputs and expected outcomes, through the use of a scripting language, and usually provide a test log for each test run. They can also be used to record tests, and usually support scripting languages or GUI-based configuration for parameterization of data and other customization in the tests.
Dynamic Analysis Tools (D)
Dynamic analysis tools find defects that are evident only while the software is executing, such as time dependencies or memory leaks. They are typically used in component and component integration testing, and when testing middleware.
Performance Testing/Load Testing/Stress Testing Tools
Performance testing tools monitor and report on how a system behaves under a variety of simulated usage conditions in terms of the number of concurrent users, their ramp-up pattern, frequency and relative percentage of transactions. The simulation of load is achieved by means of creating virtual users carrying out a selected set of transactions, spread across various test machines commonly known as load generators.
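As a rough illustration only (a handful of threads standing in for virtual users; a real performance testing tool would distribute the load across load generators and collect detailed response-time statistics), the sketch below simulates a few concurrent virtual users each repeating a transaction.

```python
import random
import threading
import time

# Illustrative sketch only: "virtual users" repeating a transaction concurrently.
def transaction(user_id):
    start = time.time()
    time.sleep(random.uniform(0.05, 0.2))   # stand-in for calling the system under test
    print(f"user {user_id}: response time {time.time() - start:.3f}s")

def virtual_user(user_id, iterations=3):
    for _ in range(iterations):
        transaction(user_id)

threads = [threading.Thread(target=virtual_user, args=(i,)) for i in range(5)]
for thread in threads:      # ramp-up: start all virtual users
    thread.start()
for thread in threads:
    thread.join()
```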
Monitoring Tools
Monitoring tools continuously analyze, verify and report on usage of specific system resources, and give warnings of possible service problems.
migration rules to ensure that the processed data is correct, complete and complies with a pre-defined context-specific standard.
Other testing tools exist for usability testing.
Terms
Data-driven testing, keyword-driven testing, scripting language
6.2.1 Potential Benefits and Risks of Tool Support for Testing (for all tools) (K2)
Simply purchasing or leasing a tool does not guarantee success with that tool. Each type of tool may require additional effort to achieve real and lasting benefits. There are potential benefits and opportunities with the use of tools in testing, but there are also risks.
Potential benefits of using tools include:
o Repetitive work is reduced (e.g., running regression tests, re-entering the same test data, and checking against coding standards)
o Greater consistency and repeatability (e.g., tests executed by a tool in the same order and with the same frequency, and tests derived from requirements)
o Objective assessment (e.g., static measures, coverage)
o Ease of access to information about tests or testing (e.g., statistics and graphs about test progress, incident rates and performance)
Risks of using tools include:
o Unrealistic expectations for the tool (including functionality and ease of use)
o Underestimating the time, cost and effort for the initial introduction of a tool (including training and external expertise)
o Underestimating the time and effort needed to achieve significant and continuing benefits from the tool (including the need for changes in the testing process and continuous improvement of the way the tool is used)
o Underestimating the effort required to maintain the test assets generated by the tool
o Over-reliance on the tool (replacement for test design, or use of automated testing where manual testing would be better)
o Neglecting relationships and interoperability issues between critical tools, such as requirements management tools, version control tools, incident management tools, defect tracking tools and tools from multiple vendors
A data-driven testing approach separates out the test inputs (the data), usually into a spreadsheet, and uses a more generic test script that can read the input data and execute the same test script with different data. Testers who are not familiar with the scripting language can then create the test data for these predefined scripts.
There are other techniques employed in data-driven techniques, where, instead of hard-coded data combinations placed in a spreadsheet, data is generated using algorithms based on configurable parameters at run time and supplied to the application. For example, a tool may use an algorithm which generates a random user ID, and for repeatability in pattern, a seed is employed for controlling randomness.
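A minimal sketch of the data-driven approach (the data columns and the function under test are invented for this example): one generic script reads rows of test data and runs the same steps with each row. In practice the rows would come from a spreadsheet or CSV file maintained by testers; here they are inlined to keep the example self-contained.

```python
import csv
import io

# Illustrative sketch only: test data as rows, one generic script executing them.
test_data = io.StringIO(
    "balance,amount,daily_limit,expected\n"
    "500,100,300,accepted\n"
    "500,400,300,rejected\n"
)

def withdraw(balance, amount, daily_limit):
    # Hypothetical function under test, present only to make the example executable.
    return "rejected" if amount > daily_limit else "accepted"

for row in csv.DictReader(test_data):
    actual = withdraw(int(row["balance"]), int(row["amount"]), int(row["daily_limit"]))
    status = "PASS" if actual == row["expected"] else "FAIL"
    print(status, dict(row))
```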
In a keyword-driven testing approach, the spreadsheet contains keywords describing the actions to be taken (also called action words), and test data. Testers (even if they are not familiar with the scripting language) can then define tests using the keywords, which can be tailored to the application being tested.
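A minimal sketch of the keyword-driven approach (the keywords, arguments and test table are invented for this example): each action word is mapped to a small scripted function, and the test itself is just a table of keywords and data that a tester can edit without touching the scripting code.

```python
# Illustrative sketch only: keywords (action words) mapped to scripted actions.
def open_account(name):
    print(f"open account for {name}")

def deposit(name, amount):
    print(f"deposit {amount} into {name}'s account")

def check_balance(name, expected):
    print(f"check that {name}'s balance equals {expected}")

keywords = {
    "open_account": open_account,
    "deposit": deposit,
    "check_balance": check_balance,
}

# The test as a tester would write it: rows of keyword + test data.
test_table = [
    ("open_account", ["Alice"]),
    ("deposit", ["Alice", 100]),
    ("check_balance", ["Alice", 100]),
]

for keyword, arguments in test_table:
    keywords[keyword](*arguments)
```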
Technical expertise in the scripting language is needed for all approaches (either by testers or by specialists in test automation).
Regardless of the scripting technique used, the expected results for each test need to be stored for later comparison.
Static Analysis Tools
Static analysis tools applied to source code can enforce coding standards, but if applied to existing code may generate a large quantity of messages. Warning messages do not stop the code from being translated into an executable program, but ideally should be addressed so that maintenance of the code is easier in the future. A gradual implementation of the analysis tool with initial filters to exclude some messages is an effective approach.
Test Management Tools
Test management tools need to interface with other tools or spreadsheets in order to produce useful information in a format that fits the needs of the organization.
Introducing a Tool into an Organization (K1)
Terms
No specific terms.
Background
The main considerations in selecting a tool for an organization include:
o Assessment of organizational maturity, strengths and weaknesses, and identification of opportunities for an improved test process supported by tools
o Evaluation against clear requirements and objective criteria
o A proof-of-concept, by using a test tool during the evaluation phase to establish whether it performs effectively with the software under test and within the current infrastructure, or to identify changes needed to that infrastructure to effectively use the tool
o Evaluation of the vendor (including training, support and commercial aspects) or service support suppliers in case of non-commercial tools
o Identification of internal requirements for coaching and mentoring in the use of the tool
o Evaluation of training needs, considering the current test team's test automation skills
o Estimation of a cost-benefit ratio based on a concrete business case
Introducing the selected tool into an organization starts with a pilot project, which has the following objectives:
o Learn more detail about the tool
o Evaluate how the tool fits with existing processes and practices, and determine what would need to change
o Decide on standard ways of using, managing, storing and maintaining the tool and the test assets (e.g., deciding on naming conventions for files and tests, creating libraries and defining the modularity of test suites)
o Assess whether the benefits will be achieved at reasonable cost
Success factors for the deployment of the tool within an organization include:
o Rolling out the tool to the rest of the organization incrementally
o Adapting and improving processes to fit with the use of the tool
o Providing training and coaching/mentoring for new users
o Defining usage guidelines
o Implementing a way to gather usage information from the actual use
o Monitoring tool use and benefits
o Providing support for the test team for a given tool
o Gathering lessons learned from all teams
References
6.2.2 Buwalda, 2001, Fewster, 1999
6.3 Fewster, 1999
References
Standards
ISTQB Glossary of Terms used in Software Testing, Version 2.1
[CMMI] Chrissis, M.B., Konrad, M. and Shrum, S. (2004) CMMI, Guidelines for Process Integration and Product Improvement, Addison Wesley: Reading, MA
See Section 2.1
[IEEE Std 829-1998] IEEE Std 829 (1998) IEEE Standard for Software Test Documentation
See Sections 2.3, 2.4, 4.1, 5.2, 5.3, 5.5, 5.6
[IEEE 1028] IEEE Std 1028 (2008) IEEE Standard for Software Reviews and Audits
See Section 3.2
[IEEE 12207] IEEE 12207/ISO/IEC 12207-2008, Software life cycle processes
See Section 2.1
[ISO 9126] ISO/IEC 9126-1:2001, Software Engineering - Software Product Quality
See Section 2.3
Books
[Beizer, 1990] Beizer, B. (1990) Software Testing Techniques (2nd edition), Van Nostrand Reinhold: Boston
See Sections 1.2, 1.3, 2.3, 4.2, 4.3, 4.4, 4.6
[Black, 2001] Black, R. (2001) Managing the Testing Process (3rd edition), John Wiley & Sons: New York
See Sections 1.1, 1.2, 1.4, 1.5, 2.3, 2.4, 5.1, 5.2, 5.3, 5.5, 5.6
[Buwalda, 2001] Buwalda, H. et al. (2001) Integrated Test Design and Automation, Addison Wesley: Reading, MA
See Section 6.2
[Copeland, 2004] Copeland, L. (2004) A Practitioner's Guide to Software Test Design, Artech House: Norwood, MA
See Sections 2.2, 2.3, 4.2, 4.3, 4.4, 4.6
[Craig, 2002] Craig, Rick D. and Jaskiel, Stefan P. (2002) Systematic Software Testing, Artech House: Norwood, MA
See Sections 1.4.5, 2.1.3, 2.4, 4.1, 5.2.5, 5.3, 5.4
[Fewster, 1999] Fewster, M. and Graham, D. (1999) Software Test Automation, Addison Wesley: Reading, MA
See Sections 6.2, 6.3
[Gilb, 1993] Gilb, Tom and Graham, Dorothy (1993) Software Inspection, Addison Wesley: Reading, MA
See Sections 3.2.2, 3.2.4
[Hetzel, 1988] Hetzel, W. (1988) Complete Guide to Software Testing, QED: Wellesley, MA
See Sections 1.3, 1.4, 1.5, 2.1, 2.2, 2.3, 2.4, 4.1, 5.1, 5.3
[Kaner, 2002] Kaner, C., Bach, J. and Pettichord, B. (2002) Lessons Learned in Software Testing, John Wiley & Sons: New York
See Sections 1.1, 4.5, 5.2
[Myers, 1979] Myers, Glenford J. (1979) The Art of Software Testing, John Wiley & Sons: New York
See Sections 1.2, 1.3, 2.2, 4.3
[van Veenendaal, 2004] van Veenendaal, E. (ed.) (2004) The Testing Practitioner (Chapters 6, 8, 10), UTN Publishers: The Netherlands
See Sections 3.2, 3.3
o To have more impact/value as an internationally-based initiative than from any country-specific approach
o To develop a common international body of understanding and knowledge about testing through the syllabus and terminology, and to increase the level of knowledge about testing for all participants
Take a course that has been accredited to ISTQB standards (by one of the ISTQB-recognized National Boards).
Actual deviation of the component or system from its expected delivery, service or result
Similarities: testing more than one component, and can test non-functional aspects
Differences: integration testing concentrates on interfaces and interactions, and system testing concentrates on whole-system aspects, such as end-to-end processing
Example
o Analyze product risks and propose preventive and corrective mitigation activities
o Describe which portions of an incident report are factual and which are inferred from results
Reference
(For the cognitive levels of learning objectives)
Anderson, L. W. and Krathwohl, D. R. (eds) (2001) A Taxonomy for Learning, Teaching, and Assessing: A Revision of Bloom's Taxonomy of Educational Objectives, Allyn & Bacon
References
SR1. Sources and references will be given for concepts in the syllabus to help training providers find out more information about the topic. (REFS)
SR2. Where there are not readily identified and clear sources, more detail should be provided in the syllabus. For example, definitions are in the Glossary, so only the terms are listed in the syllabus. (NON-REF DETAIL)
test levels can be created, as well as a test plan on the project level covering multiple test levels. The latter is named Master Test Plan in this syllabus and in the ISTQB Glossary.
The Code of Ethics has been moved from the CTAL to the CTFL.
Release 2011
Changes made with the maintenance release 2011:
1. General: "Working Party" replaced by "Working Group"
2. Replaced "post-conditions" by "postconditions" in order to be consistent with the ISTQB Glossary 2.1
3. First occurrence: ISTQB replaced by ISTQB®
4. Introduction to this Syllabus: Descriptions of Cognitive Levels of Knowledge removed, because this was redundant to Appendix B
Section 1.6: Because the intent was not to define a Learning Objective for the Code of Ethics, the cognitive level for the section has been removed.
Sections 2.2.1, 2.2.2, 2.2.3, 2.2.4 and 3.2.3: Fixed formatting issues in lists.
Section 2.2.2: The word "failure" was not correct for "isolate failures to a specific component". It has therefore been replaced with "defect" in that sentence.
Section 2.3: Corrected formatting of the bullet list of test objectives related to test terms in the section Test Types (K2).
Section 2.3.4: Updated the description of debugging to be consistent with Version 2.1 of the ISTQB Glossary.
Section 2.4: Removed the word "extensive" from "includes extensive regression testing", because how extensive the regression testing is depends on the change (size, risks, value, etc.), as written in the next sentence.
Section 3.2: The word "including" has been removed to clarify the sentence.
Section 3.2.1: Because the activities of a formal review had been incorrectly formatted, the review process had 12 main activities instead of six, as intended. It has been changed back to six, which makes this section compliant with the Syllabus 2007 and the ISTQB Advanced Level Syllabus 2007.
Section 4: The word "developed" was replaced by "defined" because test cases get defined, not developed.
Section 4.2: Text changed to clarify how black-box and white-box testing could be used in conjunction with experience-based techniques.
Section 4.3.5: Text changed from "..between actors, including users and the system.." to "between actors (users or systems), ...".
Section 4.3.5: "Alternative path" replaced by "alternative scenario".
Section 4.4.2: In order to clarify the term branch testing in the text of Section 4.4, a sentence clarifying the focus of branch testing has been changed.
Section 4.5, Section 5.2.6: The term "experienced-based" testing has been replaced by the correct term "experience-based".
Section 6.1: Heading "6.1.1 Understanding the Meaning and Purpose of Tool Support for Testing (K2)" replaced by "6.1.1 Tool Support for Testing (K2)".
Section 7 / Books: The 3rd edition of [Black, 2001] is listed, replacing the 2nd edition.
Appendix D: Chapters requiring exercises have been replaced by the generic requirement that all Learning Objectives K3 and higher require exercises. This is a requirement specified in the ISTQB Accreditation Process (Version 1.26).
Appendix E: The changed learning objectives between Version 2007 and 2010 are now correctly listed.
Index
action word
architecture
archiving
automation
benefits of independence
benefits of using tools
beta testing
black-box technique
bottom-up
boundary value analysis
bug
captured script
checklists
choosing test techniques
code coverage
complexity
configuration management
coverage
coverage tool
custom-developed software
data flow
data-driven approach
data-driven testing
debugging
debugging tool
decision coverage
decision table testing
decision testing
defect
development model
driver
exploratory testing
factory acceptance testing
failure
fault
fault attack
field testing
follow-up
functional requirement
functional specification
functional task
functional test
functional testing
functionality
impact analysis
integration
ISO 9126
re-testing (see confirmation testing)
reliability
reliability testing
requirement
requirements management tool
requirements specification
responsibilities
review
review tool
reviewer
risk
risks of using tools
robustness testing
roles
root cause
scribe
security
software development
software development model
special considerations for some types of tools
specification-based technique
specification-based testing
stakeholders
state transition testing
static analysis tool
static technique
stress testing tool
test case
test cases
test closure
test condition
test control
test data
test data preparation tool
test design
test design specification
test design tool
test effort
test objective
test strategy
test suite
test summary report
test type
test-driven development
tester
tester tasks
testing principles
testware
tool support
usability
usability testing
use case testing
use cases
user acceptance testing
validation
verification
version control
V-model
walkthrough
white-box test design technique
white-box testing