
Certified Tester
Foundation Level Syllabus

Released
Version 2011

International Software Testing Qualifications Board

Copyright Notice
This document may be copied in its entirety, or extracts made, if the source is acknowledged.

Copyright Notice © International Software Testing Qualifications Board (hereinafter called ISTQB).
ISTQB is a registered trademark of the International Software Testing Qualifications Board.

Copyright © 2011 the authors for the update 2011 (Thomas Müller (chair), Debra Friedenberg, and the ISTQB WG Foundation Level)
Copyright © 2010 the authors for the update 2010 (Thomas Müller (chair), Armin Beer, Martin Klonk, Rahul Verma)
Copyright © 2007 the authors for the update 2007 (Thomas Müller (chair), Dorothy Graham, Debra Friedenberg and Erik van Veenendaal)
Copyright © 2005, the authors (Thomas Müller (chair), Rex Black, Sigrid Eldh, Dorothy Graham, Klaus Olsen, Maaret Pyhäjärvi, Geoff Thompson and Erik van Veenendaal).
All rights reserved.

The authors hereby transfer the copyright to the International Software Testing Qualifications Board (ISTQB). The authors (as current copyright holders) and ISTQB (as the future copyright holder) have agreed to the following conditions of use:
1) Any individual or training company may use this syllabus as the basis for a training course if the authors and the ISTQB are acknowledged as the source and copyright owners of the syllabus, and provided that any advertisement of such a training course may mention the syllabus only after submission for official accreditation of the training materials to an ISTQB-recognized National Board.
2) Any individual or group of individuals may use this syllabus as the basis for articles, books, or other derivative writings if the authors and the ISTQB are acknowledged as the source and copyright owners of the syllabus.
3) Any ISTQB-recognized National Board may translate this syllabus and license the syllabus (or its translation) to other parties.


Revision History

Version      Date                    Remark
ISTQB 2011   Effective 1-Apr-2011    Certified Tester Foundation Level Syllabus Maintenance Release, see Appendix E Release Notes
ISTQB 2010   Effective 30-Mar-2010   Certified Tester Foundation Level Syllabus Maintenance Release, see Appendix E Release Notes
ISTQB 2007   01-May-2007             Certified Tester Foundation Level Syllabus Maintenance Release
ISTQB 2005   01-July-2005            Certified Tester Foundation Level Syllabus
ASQF V2.2    July-2003               ASQF Syllabus Foundation Level, Version 2.2 "Lehrplan Grundlagen des Software-testens"
ISEB V2.0    25-Feb-1999             ISEB Software Testing Foundation Syllabus V2.0, 25 February 1999


Table of Contents

Acknowledgements
Introduction to this Syllabus
  Purpose of this Document
  The Certified Tester Foundation Level in Software Testing
  Learning Objectives/Cognitive Level of Knowledge
  The Examination
  Accreditation
  Level of Detail
  How this Syllabus is Organized
1. Fundamentals of Testing (K2)
  1.1 Why is Testing Necessary? (K2)
    1.1.1 Software Systems Context (K1)
    1.1.2 Causes of Software Defects (K2)
    1.1.3 Role of Testing in Software Development, Maintenance and Operations (K2)
    1.1.4 Testing and Quality (K2)
    1.1.5 How Much Testing is Enough? (K2)
  1.2 What is Testing? (K2)
  1.3 Seven Testing Principles (K2)
  1.4 Fundamental Test Process (K1)
    1.4.1 Test Planning and Control (K1)
    1.4.2 Test Analysis and Design (K1)
    1.4.3 Test Implementation and Execution (K1)
    1.4.4 Evaluating Exit Criteria and Reporting (K1)
    1.4.5 Test Closure Activities (K1)
  1.5 The Psychology of Testing (K2)
  1.6 Code of Ethics
2. Testing Throughout the Software Life Cycle (K2)
  2.1 Software Development Models (K2)
    2.1.1 V-model (Sequential Development Model) (K2)
    2.1.2 Iterative-incremental Development Models (K2)
    2.1.3 Testing within a Life Cycle Model (K2)
  2.2 Test Levels (K2)
    2.2.1 Component Testing (K2)
    2.2.2 Integration Testing (K2)
    2.2.3 System Testing (K2)
    2.2.4 Acceptance Testing (K2)
  2.3 Test Types (K2)
    2.3.1 Testing of Function (Functional Testing) (K2)
    2.3.2 Testing of Non-functional Software Characteristics (Non-functional Testing) (K2)
    2.3.3 Testing of Software Structure/Architecture (Structural Testing) (K2)
    2.3.4 Testing Related to Changes: Re-testing and Regression Testing (K2)
  2.4 Maintenance Testing (K2)
3. Static Techniques (K2)
  3.1 Static Techniques and the Test Process (K2)
  3.2 Review Process (K2)
    3.2.1 Activities of a Formal Review (K1)
    3.2.2 Roles and Responsibilities (K1)
    3.2.3 Types of Reviews (K2)
    3.2.4 Success Factors for Reviews (K2)
  3.3 Static Analysis by Tools (K2)
4. Test Design Techniques (K4)
  4.1 The Test Development Process (K3)
  4.2 Categories of Test Design Techniques (K2)
  4.3 Specification-based or Black-box Techniques (K3)
    4.3.1 Equivalence Partitioning (K3)
    4.3.2 Boundary Value Analysis (K3)
    4.3.3 Decision Table Testing (K3)
    4.3.4 State Transition Testing (K3)
    4.3.5 Use Case Testing (K2)
  4.4 Structure-based or White-box Techniques (K4)
    4.4.1 Statement Testing and Coverage (K4)
    4.4.2 Decision Testing and Coverage (K4)
    4.4.3 Other Structure-based Techniques (K1)
  4.5 Experience-based Techniques (K2)
  4.6 Choosing Test Techniques (K2)
5. Test Management (K3)
  5.1 Test Organization (K2)
    5.1.1 Test Organization and Independence (K2)
    5.1.2 Tasks of the Test Leader and Tester (K1)
  5.2 Test Planning and Estimation (K3)
    5.2.1 Test Planning (K2)
    5.2.2 Test Planning Activities (K3)
    5.2.3 Entry Criteria (K2)
    5.2.4 Exit Criteria (K2)
    5.2.5 Test Estimation (K2)
    5.2.6 Test Strategy, Test Approach (K2)
  5.3 Test Progress Monitoring and Control (K2)
    5.3.1 Test Progress Monitoring (K1)
    5.3.2 Test Reporting (K2)
    5.3.3 Test Control (K2)
  5.4 Configuration Management (K2)
  5.5 Risk and Testing (K2)
    5.5.1 Project Risks (K2)
    5.5.2 Product Risks (K2)
  5.6 Incident Management (K3)
6. Tool Support for Testing (K2)
  6.1 Types of Test Tools (K2)
    6.1.1 Tool Support for Testing (K2)
    6.1.2 Test Tool Classification (K2)
    6.1.3 Tool Support for Management of Testing and Tests (K1)
    6.1.4 Tool Support for Static Testing (K1)
    6.1.5 Tool Support for Test Specification (K1)
    6.1.6 Tool Support for Test Execution and Logging (K1)
    6.1.7 Tool Support for Performance and Monitoring (K1)
    6.1.8 Tool Support for Specific Testing Needs (K1)
  6.2 Effective Use of Tools: Potential Benefits and Risks (K2)
    6.2.1 Potential Benefits and Risks of Tool Support for Testing (for all tools) (K2)
    6.2.2 Special Considerations for Some Types of Tools (K1)
  6.3 Introducing a Tool into an Organization (K1)
7. References
  Standards
  Books
8. Appendix A – Syllabus Background
  History of this Document
  Objectives of the Foundation Certificate Qualification
  Objectives of the International Qualification (adapted from ISTQB meeting at Sollentuna, November 2001)
  Entry Requirements for this Qualification
  Background and History of the Foundation Certificate in Software Testing
9. Appendix B – Learning Objectives/Cognitive Level of Knowledge
  Level 1: Remember (K1)
  Level 2: Understand (K2)
  Level 3: Apply (K3)
  Level 4: Analyze (K4)
10. Appendix C – Rules Applied to the ISTQB Foundation Syllabus
  10.1.1 General Rules
  10.1.2 Current Content
  10.1.3 Learning Objectives
  10.1.4 Overall Structure
11. Appendix D – Notice to Training Providers
12. Appendix E – Release Notes
  Release 2010
  Release 2011
13. Index


Acknowledgements

International Software Testing Qualifications Board Working Group Foundation Level (Edition 2011): Thomas Müller (chair), Debra Friedenberg. The core team thanks the review team (Dan Almog, Armin Beer, Rex Black, Julie Gardiner, Judy McKay, Tuula Pääkkönen, Eric Riou du Cosquier, Hans Schaefer, Stephanie Ulrich, Erik van Veenendaal) and all National Boards for the suggestions for the current version of the syllabus.

International Software Testing Qualifications Board Working Group Foundation Level (Edition 2010): Thomas Müller (chair), Rahul Verma, Martin Klonk and Armin Beer. The core team thanks the review team (Rex Black, Mette Bruhn-Pedersen, Debra Friedenberg, Klaus Olsen, Judy McKay, Tuula Pääkkönen, Meile Posthuma, Hans Schaefer, Stephanie Ulrich, Pete Williams, Erik van Veenendaal) and all National Boards for their suggestions.

International Software Testing Qualifications Board Working Group Foundation Level (Edition 2007): Thomas Müller (chair), Dorothy Graham, Debra Friedenberg, and Erik van Veenendaal. The core team thanks the review team (Hans Schaefer, Stephanie Ulrich, Meile Posthuma, Anders Pettersson, and Wonil Kwon) and all the National Boards for their suggestions.

International Software Testing Qualifications Board Working Group Foundation Level (Edition 2005): Thomas Müller (chair), Rex Black, Sigrid Eldh, Dorothy Graham, Klaus Olsen, Maaret Pyhäjärvi, Geoff Thompson and Erik van Veenendaal and the review team and all National Boards for their suggestions.


Introduction to this Syllabus

Purpose of this Document
This syllabus forms the basis for the International Software Testing Qualification at the Foundation Level. The International Software Testing Qualifications Board (ISTQB) provides it to the National Boards for them to accredit the training providers and to derive examination questions in their local language. Training providers will determine appropriate teaching methods and produce courseware for accreditation. The syllabus will help candidates in their preparation for the examination. Information on the history and background of the syllabus can be found in Appendix A.

The Certified Tester Foundation Level in Software Testing
The Foundation Level qualification is aimed at anyone involved in software testing. This includes people in roles such as testers, test analysts, test engineers, test consultants, test managers, user acceptance testers and software developers. This Foundation Level qualification is also appropriate for anyone who wants a basic understanding of software testing, such as project managers, quality managers, software development managers, business analysts, IT directors and management consultants. Holders of the Foundation Certificate will be able to go on to a higher-level software testing qualification.

Learning Objectives/Cognitive Level of Knowledge
Learning objectives are indicated for each section in this syllabus and classified as follows:
o K1: remember
o K2: understand
o K3: apply
o K4: analyze

Further details and examples of learning objectives are given in Appendix B.
All terms listed under "Terms" just below chapter headings shall be remembered (K1), even if not explicitly mentioned in the learning objectives.

The Examination
The Foundation Level Certificate examination will be based on this syllabus. Answers to examination questions may require the use of material based on more than one section of this syllabus. All sections of the syllabus are examinable.
The format of the examination is multiple choice.

Exams may be taken as part of an accredited training course or taken independently (e.g., at an examination center or in a public exam). Completion of an accredited training course is not a prerequisite for the exam.

Accreditation
An ISTQB National Board may accredit training providers whose course material follows this syllabus. Training providers should obtain accreditation guidelines from the board or body that performs the accreditation. An accredited course is recognized as conforming to this syllabus, and is allowed to have an ISTQB examination as part of the course.
Further guidance for training providers is given in Appendix D.


Level of Detail
The level of detail in this syllabus allows internationally consistent teaching and examination. In order to achieve this goal, the syllabus consists of:
o General instructional objectives describing the intention of the Foundation Level
o A list of information to teach, including a description, and references to additional sources if required
o Learning objectives for each knowledge area, describing the cognitive learning outcome and mindset to be achieved
o A list of terms that students must be able to recall and understand
o A description of the key concepts to teach, including sources such as accepted literature or standards

The syllabus content is not a description of the entire knowledge area of software testing; it reflects the level of detail to be covered in Foundation Level training courses.

How this Syllabus is Organized
There are six major chapters. The top-level heading for each chapter shows the highest level of learning objectives that is covered within the chapter and specifies the time for the chapter. For example:

2. Testing Throughout the Software Life Cycle (K2)
115 minutes

This heading shows that Chapter 2 has learning objectives of K1 (assumed when a higher level is shown) and K2 (but not K3), and it is intended to take 115 minutes to teach the material in the chapter. Within each chapter there are a number of sections. Each section also has the learning objectives and the amount of time required. Subsections that do not have a time given are included within the time for the section.


1. Fundamentals of Testing (K2)
155 minutes

Learning Objectives for Fundamentals of Testing
The objectives identify what you will be able to do following the completion of each module.

1.1 Why is Testing Necessary? (K2)
LO-1.1.1 Describe, with examples, the way in which a defect in software can cause harm to a person, to the environment or to a company (K2)
LO-1.1.2 Distinguish between the root cause of a defect and its effects (K2)
LO-1.1.3 Give reasons why testing is necessary by giving examples (K2)
LO-1.1.4 Describe why testing is part of quality assurance and give examples of how testing contributes to higher quality (K2)
LO-1.1.5 Explain and compare the terms error, defect, fault, failure, and the corresponding terms mistake and bug, using examples (K2)

1.2 What is Testing? (K2)
LO-1.2.1 Recall the common objectives of testing (K1)
LO-1.2.2 Provide examples for the objectives of testing in different phases of the software life cycle (K2)
LO-1.2.3 Differentiate testing from debugging (K2)

1.3 Seven Testing Principles (K2)
LO-1.3.1 Explain the seven principles in testing (K2)

1.4 Fundamental Test Process (K1)
LO-1.4.1 Recall the five fundamental test activities and respective tasks from planning to closure (K1)

1.5 The Psychology of Testing (K2)
LO-1.5.1 Recall the psychological factors that influence the success of testing (K1)
LO-1.5.2 Contrast the mindset of a tester and of a developer (K2)


1.1 Why is Testing Necessary? (K2)
20 minutes

Terms
Bug, defect, error, failure, fault, mistake, quality, risk

1.1.1 Software Systems Context (K1)
Software systems are an integral part of life, from business applications (e.g., banking) to consumer products (e.g., cars). Most people have had an experience with software that did not work as expected. Software that does not work correctly can lead to many problems, including loss of money, time or business reputation, and could even cause injury or death.

1.1.2 Causes of Software Defects (K2)
A human being can make an error (mistake), which produces a defect (fault, bug) in the program code, or in a document. If a defect in code is executed, the system may fail to do what it should do (or do something it shouldn't), causing a failure. Defects in software, systems or documents may result in failures, but not all defects do so.

Defects occur because human beings are fallible and because there is time pressure, complex code, complexity of infrastructure, changing technologies, and/or many system interactions.

Failures can be caused by environmental conditions as well. For example, radiation, magnetism, electronic fields, and pollution can cause faults in firmware or influence the execution of software by changing the hardware conditions.
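
The error, defect and failure chain can be made concrete with a small, purely hypothetical code fragment (the function and values below are illustrative and not part of the syllabus): the programmer's mistake introduces a defect in the loop bound, and the defect only becomes a visible failure when the code is executed with input that reaches it.

# Illustrative sketch only: a programmer's error produces a defect in the code;
# the defect causes a failure only when the faulty statement is executed.
def average(values):
    total = 0
    for i in range(len(values) - 1):   # defect: the last element is never added
        total += values[i]
    return total / len(values)

print(average([2, 2, 2]))   # expected 2.0, actual 1.33... -> observable failure
print(average([4]))         # expected 4.0, actual 0.0 -> observable failure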

1.1.3 Role of Testing in Software Development, Maintenance and Operations (K2)
Rigorous testing of systems and documentation can help to reduce the risk of problems occurring during operation and contribute to the quality of the software system, if the defects found are corrected before the system is released for operational use.
Software testing may also be required to meet contractual or legal requirements, or industry-specific standards.

1.1.4 Testing and Quality (K2)
With the help of testing, it is possible to measure the quality of software in terms of defects found, for both functional and non-functional software requirements and characteristics (e.g., reliability, usability, efficiency, maintainability and portability). For more information on non-functional testing see Chapter 2; for more information on software characteristics see "Software Engineering – Software Product Quality" (ISO 9126).
Testing can give confidence in the quality of the software if it finds few or no defects. A properly designed test that passes reduces the overall level of risk in a system. When testing does find defects, the quality of the software system increases when those defects are fixed.

Lessons should be learned from previous projects. By understanding the root causes of defects found in other projects, processes can be improved, which in turn should prevent those defects from reoccurring and, as a consequence, improve the quality of future systems. This is an aspect of quality assurance.
Testing should be integrated as one of the quality assurance activities (i.e., alongside development standards, training and defect analysis).

1.1.5 How Much Testing is Enough? (K2)
Deciding how much testing is enough should take account of the level of risk, including technical, safety, and business risks, and project constraints such as time and budget. Risk is discussed further in Chapter 5.
Testing should provide sufficient information to stakeholders to make informed decisions about the release of the software or system being tested, for the next development step or handover to customers.


1.2 What is Testing? (K2)
30 minutes

Terms
Debugging, requirement, review, test case, testing, test objective

Background
A common perception of testing is that it only consists of running tests, i.e., executing the software. This is part of testing, but not all of the testing activities.
Test activities exist before and after test execution. These activities include planning and control, choosing test conditions, designing and executing test cases, checking results, evaluating exit criteria, reporting on the testing process and system under test, and finalizing or completing closure activities after a test phase has been completed. Testing also includes reviewing documents (including source code) and conducting static analysis.
Both dynamic testing and static testing can be used as a means for achieving similar objectives, and will provide information that can be used to improve both the system being tested and the development and testing processes.

Testing can have the following objectives:
o Finding defects
o Gaining confidence about the level of quality
o Providing information for decision-making
o Preventing defects

The thought process and activities involved in designing tests early in the life cycle (verifying the test basis via test design) can help to prevent defects from being introduced into code. Reviews of documents (e.g., requirements) and the identification and resolution of issues also help to prevent defects appearing in the code.
Different viewpoints in testing take different objectives into account. For example, in development testing (e.g., component, integration and system testing), the main objective may be to cause as many failures as possible so that defects in the software are identified and can be fixed. In acceptance testing, the main objective may be to confirm that the system works as expected, to gain confidence that it has met the requirements. In some cases the main objective of testing may be to assess the quality of the software (with no intention of fixing defects), to give information to stakeholders of the risk of releasing the system at a given time. Maintenance testing often includes testing that no new defects have been introduced during development of the changes. During operational testing, the main objective may be to assess system characteristics such as reliability or availability.

Debugging and testing are different. Dynamic testing can show failures that are caused by defects. Debugging is the development activity that finds, analyzes and removes the cause of the failure. Subsequent re-testing by a tester ensures that the fix does indeed resolve the failure. The responsibility for these activities is usually that testers test and developers debug.
The process of testing and the testing activities are explained in Section 1.4.


1.3 Seven Testing Principles (K2)
35 minutes

Terms
Exhaustive testing

Principles
A number of testing principles have been suggested over the past 40 years and offer general guidelines common for all testing.

Principle 1 – Testing shows presence of defects
Testing can show that defects are present, but cannot prove that there are no defects. Testing reduces the probability of undiscovered defects remaining in the software but, even if no defects are found, it is not a proof of correctness.

Principle 2 – Exhaustive testing is impossible
Testing everything (all combinations of inputs and preconditions) is not feasible except for trivial cases. Instead of exhaustive testing, risk analysis and priorities should be used to focus testing efforts.
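
To make the scale of "everything" concrete, here is a rough, back-of-the-envelope sketch with entirely hypothetical numbers (not from the syllabus): a single screen with five drop-down fields of ten options each and one 32-bit numeric field already has an input space that cannot be executed in any realistic time.

# Illustrative calculation only; all figures are assumed.
combinations = (10 ** 5) * (2 ** 32)      # five 10-option fields, one 32-bit field
tests_per_second = 1000                   # very optimistic execution rate
years = combinations / tests_per_second / (3600 * 24 * 365)
print(f"{combinations:.3e} combinations, about {years:,.0f} years at {tests_per_second} tests/s")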
Principle 3 – Early testing
To find defects early, testing activities shall be started as early as possible in the software or system development life cycle, and shall be focused on defined objectives.

Principle 4 – Defect clustering
Testing effort shall be focused proportionally to the expected and later observed defect density of modules. A small number of modules usually contains most of the defects discovered during pre-release testing, or is responsible for most of the operational failures.

Principle 5 – Pesticide paradox
If the same tests are repeated over and over again, eventually the same set of test cases will no longer find any new defects. To overcome this "pesticide paradox", test cases need to be regularly reviewed and revised, and new and different tests need to be written to exercise different parts of the software or system to find potentially more defects.

Principle 6 – Testing is context dependent
Testing is done differently in different contexts. For example, safety-critical software is tested differently from an e-commerce site.

Principle 7 – Absence-of-errors fallacy
Finding and fixing defects does not help if the system built is unusable and does not fulfill the users' needs and expectations.


1.4 Fundamental Test Process (K1)
35 minutes

Terms
Confirmation testing, re-testing, exit criteria, incident, regression testing, test basis, test condition, test coverage, test data, test execution, test log, test plan, test procedure, test policy, test suite, test summary report, testware

Background
The most visible part of testing is test execution. But to be effective and efficient, test plans should also include time to be spent on planning the tests, designing test cases, preparing for execution and evaluating results.

The fundamental test process consists of the following main activities:
o Test planning and control
o Test analysis and design
o Test implementation and execution
o Evaluating exit criteria and reporting
o Test closure activities

Although logically sequential, the activities in the process may overlap or take place concurrently. Tailoring these main activities within the context of the system and the project is usually required.

1.4.1 Test Planning and Control (K1)
Test planning is the activity of defining the objectives of testing and the specification of test activities in order to meet the objectives and mission.
Test control is the ongoing activity of comparing actual progress against the plan, and reporting the status, including deviations from the plan. It involves taking actions necessary to meet the mission and objectives of the project. In order to control testing, the testing activities should be monitored throughout the project. Test planning takes into account the feedback from monitoring and control activities.
Test planning and control tasks are defined in Chapter 5 of this syllabus.

1.4.2 Test Analysis and Design (K1)
Test analysis and design is the activity during which general testing objectives are transformed into tangible test conditions and test cases.

The test analysis and design activity has the following major tasks:
o Reviewing the test basis (such as requirements, software integrity level [1] (risk level), risk analysis reports, architecture, design, interface specifications)
o Evaluating testability of the test basis and test objects
o Identifying and prioritizing test conditions based on analysis of test items, the specification, behavior and structure of the software
o Designing and prioritizing high level test cases
o Identifying necessary test data to support the test conditions and test cases
o Designing the test environment setup and identifying any required infrastructure and tools
o Creating bi-directional traceability between test basis and test cases

[1] The degree to which software complies or must comply with a set of stakeholder-selected software and/or software-based system characteristics (e.g., software complexity, risk assessment, safety level, security level, desired performance, reliability, or cost) which are defined to reflect the importance of the software to its stakeholders.


1.4.3 Test Implementation and Execution (K1)
Test implementation and execution is the activity where test procedures or scripts are specified by combining the test cases in a particular order and including any other information needed for test execution, the environment is set up and the tests are run.

Test implementation and execution has the following major tasks:
o Finalizing, implementing and prioritizing test cases (including the identification of test data)
o Developing and prioritizing test procedures, creating test data and, optionally, preparing test harnesses and writing automated test scripts (see the sketch after this list)
o Creating test suites from the test procedures for efficient test execution
o Verifying that the test environment has been set up correctly
o Verifying and updating bi-directional traceability between the test basis and test cases
o Executing test procedures either manually or by using test execution tools, according to the planned sequence
o Logging the outcome of test execution and recording the identities and versions of the software under test, test tools and testware
o Comparing actual results with expected results
o Reporting discrepancies as incidents and analyzing them in order to establish their cause (e.g., a defect in the code, in specified test data, in the test document, or a mistake in the way the test was executed)
o Repeating test activities as a result of action taken for each discrepancy, for example, re-execution of a test that previously failed in order to confirm a fix (confirmation testing), execution of a corrected test and/or execution of tests in order to ensure that defects have not been introduced in unchanged areas of the software or that defect fixing did not uncover other defects (regression testing)
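
As referenced in the task list above, the following is a minimal sketch of an automated test script (illustrative only; the component, test IDs and version strings are hypothetical, not prescribed by the syllabus) that executes test cases, compares actual results with expected results, and logs the outcome together with the identity and version of the software under test.

import logging

SUT_NAME, SUT_VERSION = "calculator", "1.2.0"          # hypothetical software under test

def add(a, b):                                          # stand-in for the component being tested
    return a + b

test_cases = [                                          # (test id, inputs, expected result)
    ("TC-01", (2, 3), 5),
    ("TC-02", (-1, 1), 0),
]

logging.basicConfig(level=logging.INFO, format="%(message)s")
for test_id, inputs, expected in test_cases:
    actual = add(*inputs)                               # execute the test procedure
    outcome = "PASS" if actual == expected else "FAIL"  # compare actual with expected results
    logging.info("%s on %s v%s: inputs=%s expected=%s actual=%s -> %s",
                 test_id, SUT_NAME, SUT_VERSION, inputs, expected, actual, outcome)

Discrepancies found this way would then be reported as incidents and analyzed, as described in the tasks above.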

1.4.4 Evaluating Exit Criteria and Reporting (K1)
Evaluating exit criteria is the activity where test execution is assessed against the defined objectives. This should be done for each test level (see Section 2.2).

Evaluating exit criteria has the following major tasks:
o Checking test logs against the exit criteria specified in test planning
o Assessing if more tests are needed or if the exit criteria specified should be changed
o Writing a test summary report for stakeholders
1.4.5 Test Closure Activities (K1)
Test closure activities collect data from completed test activities to consolidate experience, testware, facts and numbers. Test closure activities occur at project milestones such as when a software system is released, a test project is completed (or cancelled), a milestone has been achieved, or a maintenance release has been completed.


Test closure activities include the following major tasks:
o Checking which planned deliverables have been delivered
o Closing incident reports or raising change records for any that remain open
o Documenting the acceptance of the system
o Finalizing and archiving testware, the test environment and the test infrastructure for later reuse
o Handing over the testware to the maintenance organization
o Analyzing lessons learned to determine changes needed for future releases and projects
o Using the information gathered to improve test maturity


1.5 The Psychology of Testing (K2)
25 minutes

Terms
Error guessing, independence

Background
The mindset to be used while testing and reviewing is different from that used while developing software. With the right mindset developers are able to test their own code, but separation of this responsibility to a tester is typically done to help focus effort and provide additional benefits, such as an independent view by trained and professional testing resources. Independent testing may be carried out at any level of testing.

A certain degree of independence (avoiding the author bias) often makes the tester more effective at finding defects and failures. Independence is not, however, a replacement for familiarity, and developers can efficiently find many defects in their own code. Several levels of independence can be defined as shown here from low to high:
o Tests designed by the person(s) who wrote the software under test (low level of independence)
o Tests designed by another person(s) (e.g., from the development team)
o Tests designed by a person(s) from a different organizational group (e.g., an independent test team) or test specialists (e.g., usability or performance test specialists)
o Tests designed by a person(s) from a different organization or company (i.e., outsourcing or certification by an external body)

People and projects are driven by objectives. People tend to align their plans with the objectives set by management and other stakeholders, for example, to find defects or to confirm that software meets its objectives. Therefore, it is important to clearly state the objectives of testing.

Identifying failures during testing may be perceived as criticism against the product and against the author. As a result, testing is often seen as a destructive activity, even though it is very constructive in the management of product risks. Looking for failures in a system requires curiosity, professional pessimism, a critical eye, attention to detail, good communication with development peers, and experience on which to base error guessing.

If errors, defects or failures are communicated in a constructive way, bad feelings between the testers and the analysts, designers and developers can be avoided. This applies to defects found during reviews as well as in testing.

The tester and test leader need good interpersonal skills to communicate factual information about defects, progress and risks in a constructive way. For the author of the software or document, defect information can help them improve their skills. Defects found and fixed during testing will save time and money later, and reduce risks.

Communication problems may occur, particularly if testers are seen only as messengers of unwanted news about defects. However, there are several ways to improve communication and relationships between testers and others:


o Start with collaboration rather than battles; remind everyone of the common goal of better quality systems
o Communicate findings on the product in a neutral, fact-focused way without criticizing the person who created it, for example, write objective and factual incident reports and review findings
o Try to understand how the other person feels and why they react as they do
o Confirm that the other person has understood what you have said and vice versa


1.6 Code of Ethics
10 minutes

Involvement in software testing enables individuals to learn confidential and privileged information. A code of ethics is necessary, among other reasons to ensure that the information is not put to inappropriate use. Recognizing the ACM and IEEE code of ethics for engineers, the ISTQB states the following code of ethics:

PUBLIC - Certified software testers shall act consistently with the public interest
CLIENT AND EMPLOYER - Certified software testers shall act in a manner that is in the best interests of their client and employer, consistent with the public interest
PRODUCT - Certified software testers shall ensure that the deliverables they provide (on the products and systems they test) meet the highest professional standards possible
JUDGMENT - Certified software testers shall maintain integrity and independence in their professional judgment
MANAGEMENT - Certified software test managers and leaders shall subscribe to and promote an ethical approach to the management of software testing
PROFESSION - Certified software testers shall advance the integrity and reputation of the profession consistent with the public interest
COLLEAGUES - Certified software testers shall be fair to and supportive of their colleagues, and promote cooperation with software developers
SELF - Certified software testers shall participate in lifelong learning regarding the practice of their profession and shall promote an ethical approach to the practice of the profession

References
1.1.5 Black, 2001, Kaner, 2002
1.2 Beizer, 1990, Black, 2001, Myers, 1979
1.3 Beizer, 1990, Hetzel, 1988, Myers, 1979
1.4 Hetzel, 1988
1.4.5 Black, 2001, Craig, 2002
1.5 Black, 2001, Hetzel, 1988


2. Testing Throughout the Software Life Cycle (K2)
115 minutes

Learning Objectives for Testing Throughout the Software Life Cycle
The objectives identify what you will be able to do following the completion of each module.

2.1 Software Development Models (K2)
LO-2.1.1 Explain the relationship between development, test activities and work products in the development life cycle, by giving examples using project and product types (K2)
LO-2.1.2 Recognize the fact that software development models must be adapted to the context of project and product characteristics (K1)
LO-2.1.3 Recall characteristics of good testing that are applicable to any life cycle model (K1)

2.2 Test Levels (K2)
LO-2.2.1 Compare the different levels of testing: major objectives, typical objects of testing, typical targets of testing (e.g., functional or structural) and related work products, people who test, types of defects and failures to be identified (K2)

2.3 Test Types (K2)
LO-2.3.1 Compare four software test types (functional, non-functional, structural and change-related) by example (K2)
LO-2.3.2 Recognize that functional and structural tests occur at any test level (K1)
LO-2.3.3 Identify and describe non-functional test types based on non-functional requirements (K2)
LO-2.3.4 Identify and describe test types based on the analysis of a software system's structure or architecture (K2)
LO-2.3.5 Describe the purpose of confirmation testing and regression testing (K2)

2.4 Maintenance Testing (K2)
LO-2.4.1 Compare maintenance testing (testing an existing system) to testing a new application with respect to test types, triggers for testing and amount of testing (K2)
LO-2.4.2 Recognize indicators for maintenance testing (modification, migration and retirement) (K1)
LO-2.4.3 Describe the role of regression testing and impact analysis in maintenance (K2)


2.1 Software Development Models (K2)
20 minutes

Terms
Commercial Off-The-Shelf (COTS), iterative-incremental development model, validation, verification, V-model

Background
Testing does not exist in isolation; test activities are related to software development activities. Different development life cycle models need different approaches to testing.

2.1.1 V-model (Sequential Development Model) (K2)
Although variants of the V-model exist, a common type of V-model uses four test levels, corresponding to the four development levels.

The four levels used in this syllabus are:
o Component (unit) testing
o Integration testing
o System testing
o Acceptance testing

In practice, a V-model may have more, fewer or different levels of development and testing, depending on the project and the software product. For example, there may be component integration testing after component testing, and system integration testing after system testing.

Software work products (such as business scenarios or use cases, requirements specifications, design documents and code) produced during development are often the basis of testing in one or more test levels. References for generic work products include Capability Maturity Model Integration (CMMI) or "Software life cycle processes" (IEEE/IEC 12207). Verification and validation (and early test design) can be carried out during the development of the software work products.

2.1.2 Iterative-incremental Development Models (K2)
Iterative-incremental development is the process of establishing requirements, designing, building and testing a system in a series of short development cycles. Examples are: prototyping, Rapid Application Development (RAD), Rational Unified Process (RUP) and agile development models. A system that is produced using these models may be tested at several test levels during each iteration. An increment, added to others developed previously, forms a growing partial system, which should also be tested. Regression testing is increasingly important on all iterations after the first one. Verification and validation can be carried out on each increment.

2.1.3 Testing within a Life Cycle Model (K2)
In any life cycle model, there are several characteristics of good testing:
o For every development activity there is a corresponding testing activity
o Each test level has test objectives specific to that level
o The analysis and design of tests for a given test level should begin during the corresponding development activity
o Testers should be involved in reviewing documents as soon as drafts are available in the development life cycle

Test levels can be combined or reorganized depending on the nature of the project or the system architecture. For example, for the integration of a Commercial Off-The-Shelf (COTS) software product into a system, the purchaser may perform integration testing at the system level (e.g.,
integration to the infrastructure and other systems, or system deployment) and acceptance testing (functional and/or non-functional, and user and/or operational testing).


2.2 Test Levels (K2)

40 minutes

Terms
Alpha testing, beta testing, component testing, driver, field testing, functional requirement, integration, integration testing, non-functional requirement, robustness testing, stub, system testing, test environment, test level, test-driven development, user acceptance testing

Background
For each of the test levels, the following can be identified: the generic objectives, the work product(s) being referenced for deriving test cases (i.e., the test basis), the test object (i.e., what is being tested), typical defects and failures to be found, test harness requirements and tool support, and specific approaches and responsibilities.
Testing a system's configuration data shall be considered during test planning.

2.2.1 Component Testing (K2)

Test basis:
o Component requirements
o Detailed design
o Code

Typical test objects:
o Components
o Programs
o Data conversion / migration programs
o Database modules

Component testing (also known as unit, module or program testing) searches for defects in, and verifies the functioning of, software modules, programs, objects, classes, etc., that are separately testable. It may be done in isolation from the rest of the system, depending on the context of the development life cycle and the system. Stubs, drivers and simulators may be used.

Component testing may include testing of functionality and specific non-functional characteristics, such as resource-behavior (e.g., searching for memory leaks) or robustness testing, as well as structural testing (e.g., decision coverage). Test cases are derived from work products such as a specification of the component, the software design or the data model.

Typically, component testing occurs with access to the code being tested and with the support of a development environment, such as a unit test framework or debugging tool. In practice, component testing usually involves the programmer who wrote the code. Defects are typically fixed as soon as they are found, without formally managing these defects.
One approach to component testing is to prepare and automate test cases before coding. This is called a test-first approach or test-driven development. This approach is highly iterative and is based on cycles of developing test cases, then building and integrating small pieces of code, and executing the component tests, correcting any issues and iterating until they pass.
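
To illustrate the test-first approach described above, the following is a minimal sketch in Python using the standard unittest framework; the discount() component and its 10% rule are assumptions made only for this example. In a test-first cycle the two test cases would be written and run (failing) before the function body is implemented.

    import unittest

    def discount(order_total):
        # Hypothetical component under test: 10% discount for orders of 100 or more
        if order_total >= 100:
            return order_total * 0.9
        return order_total

    class DiscountComponentTest(unittest.TestCase):
        def test_no_discount_below_threshold(self):
            self.assertEqual(discount(99), 99)

        def test_discount_at_threshold(self):
            self.assertAlmostEqual(discount(100), 90.0)

    if __name__ == "__main__":
        unittest.main()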


2.2.2 Integration Testing (K2)

Test basis:
o Software and system design
o Architecture
o Workflows
o Use cases

Typical test objects:
o Subsystems
o Database implementation
o Infrastructure
o Interfaces
o System configuration and configuration data

Integration testing tests interfaces between components, interactions with different parts of a system, such as the operating system, file system and hardware, and interfaces between systems.

There may be more than one level of integration testing and it may be carried out on test objects of varying size as follows:
o Component integration testing tests the interactions between software components and is done after component testing
o System integration testing tests the interactions between different systems or between hardware and software and may be done after system testing. In this case, the developing organization may control only one side of the interface. This might be considered as a risk.

Business processes implemented as workflows may involve a series of systems. Cross-platform issues may be significant.
The greater the scope of integration, the more difficult it becomes to isolate defects to a specific component or system, which may lead to increased risk and additional time for troubleshooting.

Systematic integration strategies may be based on the system architecture (such as top-down and bottom-up), functional tasks, transaction processing sequences, or some other aspect of the system or components. In order to ease fault isolation and detect defects early, integration should normally be incremental rather than big bang.
Testing of specific non-functional characteristics (e.g., performance) may be included in integration testing as well as functional testing.
At each stage of integration, testers concentrate solely on the integration itself. For example, if they are integrating module A with module B they are interested in testing the communication between the modules, not the functionality of the individual module as that was done during component testing. Both functional and structural approaches may be used.
Ideally, testers should understand the architecture and influence integration planning. If integration tests are planned before components or systems are built, those components can be built in the order required for most efficient testing.
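
As a sketch of component integration testing with a stub (an assumption made for illustration, not part of the syllabus), the following Python test exercises only the interface between a hypothetical pricing function and the catalogue module it calls; the catalogue is replaced by a stub from unittest.mock.

    import unittest
    from unittest import mock

    def priced_with_tax(catalogue, item_id):
        # Hypothetical module A: asks the catalogue (module B) for a net price
        return catalogue.price(item_id) * 1.2  # adds 20% tax

    class ComponentIntegrationTest(unittest.TestCase):
        def test_interface_between_pricing_and_catalogue(self):
            catalogue_stub = mock.Mock()             # stands in for module B
            catalogue_stub.price.return_value = 10.0
            self.assertAlmostEqual(priced_with_tax(catalogue_stub, "A1"), 12.0)
            # The integration test checks the communication, not B's internal functionality
            catalogue_stub.price.assert_called_once_with("A1")

    if __name__ == "__main__":
        unittest.main()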


2.2.3 System Testing (K2)

Test basis:
o System and software requirement specification
o Use cases
o Functional specification
o Risk analysis reports

Typical test objects:
o System, user and operation manuals
o System configuration and configuration data

System testing is concerned with the behavior of a whole system/product. The testing scope shall be clearly addressed in the Master and/or Level Test Plan for that test level.
In system testing, the test environment should correspond to the final target or production environment as much as possible in order to minimize the risk of environment-specific failures not being found in testing.
System testing may include tests based on risks and/or on requirements specifications, business processes, use cases, or other high level text descriptions or models of system behavior, interactions with the operating system, and system resources.
System testing should investigate functional and non-functional requirements of the system, and data quality characteristics. Testers also need to deal with incomplete or undocumented requirements. System testing of functional requirements starts by using the most appropriate specification-based (black-box) techniques for the aspect of the system to be tested. For example, a decision table may be created for combinations of effects described in business rules. Structure-based techniques (white-box) may then be used to assess the thoroughness of the testing with respect to a structural element, such as menu structure or web page navigation (see Chapter 4).

An independent test team often carries out system testing.

2.2.4 Acceptance Testing (K2)

Test basis:
o User requirements
o System requirements
o Use cases
o Business processes
o Risk analysis reports

Typical test objects:
o Business processes on fully integrated system
o Operational and maintenance processes
o User procedures
o Forms
o Reports
o Configuration data

Acceptance testing is often the responsibility of the customers or users of a system; other stakeholders may be involved as well.
The goal in acceptance testing is to establish confidence in the system, parts of the system or specific non-functional characteristics of the system. Finding defects is not the main focus in acceptance testing. Acceptance testing may assess the system's readiness for deployment and
use, although it is not necessarily the final level of testing. For example, a large-scale system integration test may come after the acceptance test for a system.
Acceptance testing may occur at various times in the life cycle, for example:
o A COTS software product may be acceptance tested when it is installed or integrated
o Acceptance testing of the usability of a component may be done during component testing
o Acceptance testing of a new functional enhancement may come before system testing

Typical forms of acceptance testing include the following:

User acceptance testing
Typically verifies the fitness for use of the system by business users.

Operational (acceptance) testing
The acceptance of the system by the system administrators, including:
o Testing of backup/restore
o Disaster recovery
o User management
o Maintenance tasks
o Data load and migration tasks
o Periodic checks of security vulnerabilities

Contract and regulation acceptance testing
Contract acceptance testing is performed against a contract's acceptance criteria for producing custom-developed software. Acceptance criteria should be defined when the parties agree to the contract. Regulation acceptance testing is performed against any regulations that must be adhered to, such as government, legal or safety regulations.

Alpha and beta (or field) testing
Developers of market, or COTS, software often want to get feedback from potential or existing customers in their market before the software product is put up for sale commercially. Alpha testing is performed at the developing organization's site but not by the developing team. Beta testing, or field-testing, is performed by customers or potential customers at their own locations.
Organizations may use other terms as well, such as factory acceptance testing and site acceptance testing for systems that are tested before and after being moved to a customer's site.


2.3 Test Types (K2)

40 minutes

Terms
Black-box testing, code coverage, functional testing, interoperability testing, load testing, maintainability testing, performance testing, portability testing, reliability testing, security testing, stress testing, structural testing, usability testing, white-box testing

Background
A group of test activities can be aimed at verifying the software system (or a part of a system) based on a specific reason or target for testing.
A test type is focused on a particular test objective, which could be any of the following:
o A function to be performed by the software
o A non-functional quality characteristic, such as reliability or usability
o The structure or architecture of the software or system
o Change related, i.e., confirming that defects have been fixed (confirmation testing) and looking for unintended changes (regression testing)

A model of the software may be developed and/or used in structural testing (e.g., a control flow model or menu structure model), non-functional testing (e.g., performance model, usability model, security threat modeling), and functional testing (e.g., a process flow model, a state transition model or a plain language specification).

2.3.1 Testing of Function (Functional Testing) (K2)

The functions that a system, subsystem or component are to perform may be described in work products such as a requirements specification, use cases, or a functional specification, or they may be undocumented. The functions are what the system does.
Functional tests are based on functions and features (described in documents or understood by the testers) and their interoperability with specific systems, and may be performed at all test levels (e.g., tests for components may be based on a component specification).
Specification-based techniques may be used to derive test conditions and test cases from the functionality of the software or system (see Chapter 4). Functional testing considers the external behavior of the software (black-box testing).

A type of functional testing, security testing, investigates the functions (e.g., a firewall) relating to detection of threats, such as viruses, from malicious outsiders. Another type of functional testing, interoperability testing, evaluates the capability of the software product to interact with one or more specified components or systems.

2.3.2 Testing of Non-functional Software Characteristics (Non-functional Testing) (K2)

Non-functional testing includes, but is not limited to, performance testing, load testing, stress testing, usability testing, maintainability testing, reliability testing and portability testing. It is the testing of how the system works.
Non-functional testing may be performed at all test levels. The term non-functional testing describes the tests required to measure characteristics of systems and software that can be quantified on a varying scale, such as response times for performance testing. These tests can be referenced to a quality model such as the one defined in Software Engineering - Software Product Quality (ISO
9126). Non-functional testing considers the external behavior of the software and in most cases uses black-box test design techniques to accomplish that.

2.3.3 Testing of Software Structure/Architecture (Structural Testing) (K2)

Structural (white-box) testing may be performed at all test levels. Structural techniques are best used after specification-based techniques, in order to help measure the thoroughness of testing through assessment of coverage of a type of structure.
Coverage is the extent that a structure has been exercised by a test suite, expressed as a percentage of the items being covered. If coverage is not 100%, then more tests may be designed to test those items that were missed to increase coverage. Coverage techniques are covered in Chapter 4.
At all test levels, but especially in component testing and component integration testing, tools can be used to measure the code coverage of elements, such as statements or decisions. Structural testing may be based on the architecture of the system, such as a calling hierarchy.
Structural testing approaches can also be applied at system, system integration or acceptance testing levels (e.g., to business models or menu structures).

2.3.4 Testing Related to Changes: Re-testing and Regression Testing (K2)

After a defect is detected and fixed, the software should be re-tested to confirm that the original defect has been successfully removed. This is called confirmation testing. Debugging (locating and fixing a defect) is a development activity, not a testing activity.
Regression testing is the repeated testing of an already tested program, after modification, to discover any defects introduced or uncovered as a result of the change(s). These defects may be either in the software being tested, or in another related or unrelated software component. It is performed when the software, or its environment, is changed. The extent of regression testing is based on the risk of not finding defects in software that was working previously.
Tests should be repeatable if they are to be used for confirmation testing and to assist regression testing.
Regression testing may be performed at all test levels, and includes functional, non-functional and structural testing. Regression test suites are run many times and generally evolve slowly, so regression testing is a strong candidate for automation.
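
A minimal sketch of this distinction in an automated suite (the parse_amount() function and the defect it refers to are assumptions made only for illustration):

    import unittest

    def parse_amount(text):
        # Hypothetical fix: leading/trailing spaces previously caused a failure
        return float(text.strip())

    class ChangeRelatedTests(unittest.TestCase):
        def test_spaces_are_accepted(self):
            # Confirmation test: re-run after the fix to show the original defect is gone
            self.assertEqual(parse_amount(" 12.50 "), 12.5)

        def test_plain_number_still_works(self):
            # Regression test: previously working behavior that must not break
            self.assertEqual(parse_amount("3.00"), 3.0)

    if __name__ == "__main__":
        unittest.main()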


2.4 Maintenance Testing (K2)

15 minutes

Terms
Impact analysis, maintenance testing

Background
Once deployed, a software system is often in service for years or decades. During this time the system, its configuration data, or its environment are often corrected, changed or extended. The planning of releases in advance is crucial for successful maintenance testing. A distinction has to be made between planned releases and hot fixes. Maintenance testing is done on an existing operational system, and is triggered by modifications, migration, or retirement of the software or system.
Modifications include planned enhancement changes (e.g., release-based), corrective and emergency changes, and changes of environment, such as planned operating system or database upgrades, planned upgrade of Commercial-Off-The-Shelf software, or patches to correct newly exposed or discovered vulnerabilities of the operating system.
Maintenance testing for migration (e.g., from one platform to another) should include operational tests of the new environment as well as of the changed software. Migration testing (conversion testing) is also needed when data from another application will be migrated into the system being maintained.
Maintenance testing for the retirement of a system may include the testing of data migration or archiving if long data-retention periods are required.
In addition to testing what has been changed, maintenance testing includes regression testing to parts of the system that have not been changed. The scope of maintenance testing is related to the risk of the change, the size of the existing system and to the size of the change. Depending on the changes, maintenance testing may be done at any or all test levels and for any or all test types. Determining how the existing system may be affected by changes is called impact analysis, and is used to help decide how much regression testing to do. The impact analysis may be used to determine the regression test suite.
Maintenance testing can be difficult if specifications are out of date or missing, or testers with domain knowledge are not available.

References
2.1.3 CMMI, Craig, 2002, Hetzel, 1988, IEEE 12207
2.2 Hetzel, 1988
2.2.4 Copeland, 2004, Myers, 1979
2.3.1 Beizer, 1990, Black, 2001, Copeland, 2004
2.3.2 Black, 2001, ISO 9126
2.3.3 Beizer, 1990, Copeland, 2004, Hetzel, 1988
2.3.4 Hetzel, 1988, IEEE STD 829-1998
2.4 Black, 2001, Craig, 2002, Hetzel, 1988, IEEE STD 829-1998


3. Static Techniques (K2)

60 minutes

Learning Objectives for Static Techniques
The objectives identify what you will be able to do following the completion of each module.

3.1 Static Techniques and the Test Process (K2)

LO-3.1.1 Recognize software work products that can be examined by the different static techniques (K1)
LO-3.1.2 Describe the importance and value of considering static techniques for the assessment of software work products (K2)
LO-3.1.3 Explain the difference between static and dynamic techniques, considering objectives, types of defects to be identified, and the role of these techniques within the software life cycle (K2)

3.2 Review Process (K2)

LO-3.2.1 Recall the activities, roles and responsibilities of a typical formal review (K1)
LO-3.2.2 Explain the differences between different types of reviews: informal review, technical review, walkthrough and inspection (K2)
LO-3.2.3 Explain the factors for successful performance of reviews (K2)

3.3 Static Analysis by Tools (K2)

LO-3.3.1 Recall typical defects and errors identified by static analysis and compare them to reviews and dynamic testing (K1)
LO-3.3.2 Describe, using examples, the typical benefits of static analysis (K2)
LO-3.3.3 List typical code and design defects that may be identified by static analysis tools (K1)


3.1 Static Techniques and the Test Process (K2)

15 minutes

Terms
Dynamic testing, static testing

Background
Unlike dynamic testing, which requires the execution of software, static testing techniques rely on the manual examination (reviews) and automated analysis (static analysis) of the code or other project documentation without the execution of the code.
Reviews are a way of testing software work products (including code) and can be performed well before dynamic test execution. Defects detected during reviews early in the life cycle (e.g., defects found in requirements) are often much cheaper to remove than those detected by running tests on the executing code.
A review could be done entirely as a manual activity, but there is also tool support. The main manual activity is to examine a work product and make comments about it. Any software work product can be reviewed, including requirements specifications, design specifications, code, test plans, test specifications, test cases, test scripts, user guides or web pages.
Benefits of reviews include early defect detection and correction, development productivity improvements, reduced development timescales, reduced testing cost and time, lifetime cost reductions, fewer defects and improved communication. Reviews can find omissions, for example, in requirements, which are unlikely to be found in dynamic testing.
Reviews, static analysis and dynamic testing have the same objective: identifying defects. They are complementary; the different techniques can find different types of defects effectively and efficiently. Compared to dynamic testing, static techniques find causes of failures (defects) rather than the failures themselves.
Typical defects that are easier to find in reviews than in dynamic testing include: deviations from standards, requirement defects, design defects, insufficient maintainability and incorrect interface specifications.


3.2 Review Process (K2)

25 minutes

Terms
Entry criteria, formal review, informal review, inspection, metric, moderator, peer review, reviewer, scribe, technical review, walkthrough

Background
The different types of reviews vary from informal, characterized by no written instructions for reviewers, to systematic, characterized by team participation, documented results of the review, and documented procedures for conducting the review. The formality of a review process is related to factors such as the maturity of the development process, any legal or regulatory requirements or the need for an audit trail.
The way a review is carried out depends on the agreed objectives of the review (e.g., find defects, gain understanding, educate testers and new team members, or discussion and decision by consensus).

3.2.1 Activities of a Formal Review (K1)

A typical formal review has the following main activities:
1. Planning
o Defining the review criteria
o Selecting the personnel
o Allocating roles
o Defining the entry and exit criteria for more formal review types (e.g., inspections)
o Selecting which parts of documents to review
o Checking entry criteria (for more formal review types)
2. Kick-off
o Distributing documents
o Explaining the objectives, process and documents to the participants
3. Individual preparation
o Preparing for the review meeting by reviewing the document(s)
o Noting potential defects, questions and comments
4. Examination/evaluation/recording of results (review meeting)
o Discussing or logging, with documented results or minutes (for more formal review types)
o Noting defects, making recommendations regarding handling the defects, making decisions about the defects
o Examining/evaluating and recording issues during any physical meetings or tracking any group electronic communications
5. Rework
o Fixing defects found (typically done by the author)
o Recording updated status of defects (in formal reviews)
6. Follow-up
o Checking that defects have been addressed
o Gathering metrics
o Checking on exit criteria (for more formal review types)

3.2.2 Roles and Responsibilities (K1)

A typical formal review will include the roles below:
o Manager: decides on the execution of reviews, allocates time in project schedules and determines if the review objectives have been met.
o Moderator: the person who leads the review of the document or set of documents, including planning the review, running the meeting, and following-up after the meeting. If necessary, the moderator may mediate between the various points of view and is often the person upon whom the success of the review rests.
o Author: the writer or person with chief responsibility for the document(s) to be reviewed.
o Reviewers: individuals with a specific technical or business background (also called checkers or inspectors) who, after the necessary preparation, identify and describe findings (e.g., defects) in the product under review. Reviewers should be chosen to represent different perspectives and roles in the review process, and should take part in any review meetings.
o Scribe (or recorder): documents all the issues, problems and open points that were identified during the meeting.

Looking at software products or related work products from different perspectives and using checklists can make reviews more effective and efficient. For example, a checklist based on various perspectives such as user, maintainer, tester or operations, or a checklist of typical requirements problems may help to uncover previously undetected issues.

3.2.3 Types of Reviews (K2)

A single software product or related work product may be the subject of more than one review. If more than one type of review is used, the order may vary. For example, an informal review may be carried out before a technical review, or an inspection may be carried out on a requirements specification before a walkthrough with customers. The main characteristics, options and purposes of common review types are:

Informal Review
o No formal process
o May take the form of pair programming or a technical lead reviewing designs and code
o Results may be documented
o Varies in usefulness depending on the reviewers
o Main purpose: inexpensive way to get some benefit

Walkthrough
o Meeting led by author
o May take the form of scenarios, dry runs, peer group participation
o Open-ended sessions
o Optional pre-meeting preparation of reviewers
o Optional preparation of a review report including list of findings
o Optional scribe (who is not the author)
o May vary in practice from quite informal to very formal
o Main purposes: learning, gaining understanding, finding defects

Technical Review
o Documented, defined defect-detection process that includes peers and technical experts with optional management participation
o May be performed as a peer review without management participation
o Ideally led by trained moderator (not the author)
o Pre-meeting preparation by reviewers
o Optional use of checklists
o Preparation of a review report which includes the list of findings, the verdict whether the software product meets its requirements and, where appropriate, recommendations related to findings
o May vary in practice from quite informal to very formal
o Main purposes: discussing, making decisions, evaluating alternatives, finding defects, solving technical problems and checking conformance to specifications, plans, regulations, and standards

Inspection
o Led by trained moderator (not the author)
o Usually conducted as a peer examination
o Defined roles
o Includes metrics gathering
o Formal process based on rules and checklists
o Specified entry and exit criteria for acceptance of the software product
o Pre-meeting preparation
o Inspection report including list of findings
o Formal follow-up process (with optional process improvement components)
o Optional reader
o Main purpose: finding defects

Walkthroughs, technical reviews and inspections can be performed within a peer group, i.e., colleagues at the same organizational level. This type of review is called a peer review.

3.2.4 Success Factors for Reviews (K2)

Success factors for reviews include:
o Each review has clear predefined objectives
o The right people for the review objectives are involved
o Testers are valued reviewers who contribute to the review and also learn about the product, which enables them to prepare tests earlier
o Defects found are welcomed and expressed objectively
o People issues and psychological aspects are dealt with (e.g., making it a positive experience for the author)
o The review is conducted in an atmosphere of trust; the outcome will not be used for the evaluation of the participants
o Review techniques are applied that are suitable to achieve the objectives and to the type and level of software work products and reviewers
o Checklists or roles are used if appropriate to increase effectiveness of defect identification
o Training is given in review techniques, especially the more formal techniques such as inspection
o Management supports a good review process (e.g., by incorporating adequate time for review activities in project schedules)
o There is an emphasis on learning and process improvement


3.3 Static Analysis by Tools (K2)

20 minutes

Terms
Compiler, complexity, control flow, data flow, static analysis

Background
The objective of static analysis is to find defects in software source code and software models. Static analysis is performed without actually executing the software being examined by the tool; dynamic testing does execute the software code. Static analysis can locate defects that are hard to find in dynamic testing. As with reviews, static analysis finds defects rather than failures. Static analysis tools analyze program code (e.g., control flow and data flow), as well as generated output such as HTML and XML.
The value of static analysis is:
o Early detection of defects prior to test execution
o Early warning about suspicious aspects of the code or design by the calculation of metrics, such as a high complexity measure
o Identification of defects not easily found by dynamic testing
o Detecting dependencies and inconsistencies in software models such as links
o Improved maintainability of code and design
o Prevention of defects, if lessons are learned in development

Typical defects discovered by static analysis tools include:
o Referencing a variable with an undefined value
o Inconsistent interfaces between modules and components
o Variables that are not used or are improperly declared
o Unreachable (dead) code
o Missing and erroneous logic (potentially infinite loops)
o Overly complicated constructs
o Programming standards violations
o Security vulnerabilities
o Syntax violations of code and software models
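
The following deliberately defective Python fragment (a hypothetical example, not taken from the syllabus) illustrates several of the defect types listed above that a typical static analysis tool or linter could report without executing the code:

    def apply_discount(price, rate):
        unused_limit = 100            # variable that is never used
        if rate > 1:
            return price              # defensive early return
            rate = 1                  # unreachable (dead) code after the return
        total = price * (1 - rat)     # reference to an undefined name ("rat")
        return total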

Static analysis tools are typically used by developers (checking against predefined rules or programming standards) before and during component and integration testing or when checking-in code to configuration management tools, and by designers during software modeling. Static analysis tools may produce a large number of warning messages, which need to be well-managed to allow the most effective use of the tool.
Compilers may offer some support for static analysis, including the calculation of metrics.

References
3.2 IEEE 1028
3.2.2 Gilb, 1993, van Veenendaal, 2004
3.2.4 Gilb, 1993, IEEE 1028
3.3 van Veenendaal, 2004


4. Test Design Techniques (K4)

285 minutes

Learning Objectives for Test Design Techniques
The objectives identify what you will be able to do following the completion of each module.

4.1 The Test Development Process (K3)

LO-4.1.1 Differentiate between a test design specification, test case specification and test procedure specification (K2)
LO-4.1.2 Compare the terms test condition, test case and test procedure (K2)
LO-4.1.3 Evaluate the quality of test cases in terms of clear traceability to the requirements and expected results (K2)
LO-4.1.4 Translate test cases into a well-structured test procedure specification at a level of detail relevant to the knowledge of the testers (K3)

4.2 Categories of Test Design Techniques (K2)

LO-4.2.1 Recall reasons that both specification-based (black-box) and structure-based (white-box) test design techniques are useful and list the common techniques for each (K1)
LO-4.2.2 Explain the characteristics, commonalities, and differences between specification-based testing, structure-based testing and experience-based testing (K2)

4.3 Specification-based or Black-box Techniques (K3)

LO-4.3.1 Write test cases from given software models using equivalence partitioning, boundary value analysis, decision tables and state transition diagrams/tables (K3)
LO-4.3.2 Explain the main purpose of each of the four testing techniques, what level and type of testing could use the technique, and how coverage may be measured (K2)
LO-4.3.3 Explain the concept of use case testing and its benefits (K2)

4.4 Structure-based or White-box Techniques (K4)

LO-4.4.1 Describe the concept and value of code coverage (K2)
LO-4.4.2 Explain the concepts of statement and decision coverage, and give reasons why these concepts can also be used at test levels other than component testing (e.g., on business procedures at system level) (K2)
LO-4.4.3 Write test cases from given control flows using statement and decision test design techniques (K3)
LO-4.4.4 Assess statement and decision coverage for completeness with respect to defined exit criteria (K4)

4.5 Experience-based Techniques (K2)

LO-4.5.1 Recall reasons for writing test cases based on intuition, experience and knowledge about common defects (K1)
LO-4.5.2 Compare experience-based techniques with specification-based testing techniques (K2)

4.6 Choosing Test Techniques (K2)

LO-4.6.1 Classify test design techniques according to their fitness to a given context, for the test basis, respective models and software characteristics (K2)


4.1 The Test Development Process (K3)

15 minutes

Terms
Test case specification, test design, test execution schedule, test procedure specification, test script, traceability

Background
The test development process described in this section can be done in different ways, from very informal with little or no documentation, to very formal (as it is described below). The level of formality depends on the context of the testing, including the maturity of testing and development processes, time constraints, safety or regulatory requirements, and the people involved.

During test analysis, the test basis documentation is analyzed in order to determine what to test, i.e., to identify the test conditions. A test condition is defined as an item or event that could be verified by one or more test cases (e.g., a function, transaction, quality characteristic or structural element).
Establishing traceability from test conditions back to the specifications and requirements enables both effective impact analysis when requirements change, and determining requirements coverage for a set of tests. During test analysis the detailed test approach is implemented to select the test design techniques to use based on, among other considerations, the identified risks (see Chapter 5 for more on risk analysis).
During test design the test cases and test data are created and specified. A test case consists of a set of input values, execution preconditions, expected results and execution postconditions, defined to cover a certain test objective(s) or test condition(s). The Standard for Software Test Documentation (IEEE STD 829-1998) describes the content of test design specifications (containing test conditions) and test case specifications.
Expected results should be produced as part of the specification of a test case and include outputs, changes to data and states, and any other consequences of the test. If expected results have not been defined, then a plausible, but erroneous, result may be interpreted as the correct one. Expected results should ideally be defined prior to test execution.
During test implementation the test cases are developed, implemented, prioritized and organized in the test procedure specification (IEEE STD 829-1998). The test procedure specifies the sequence of actions for the execution of a test. If tests are run using a test execution tool, the sequence of actions is specified in a test script (which is an automated test procedure).

The various test procedures and automated test scripts are subsequently formed into a test execution schedule that defines the order in which the various test procedures, and possibly automated test scripts, are executed. The test execution schedule will take into account such factors as regression tests, prioritization, and technical and logical dependencies.
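
As an informal sketch only (the field names and values are assumptions, loosely following the elements of a test case described above), a single test case might be recorded as:

    # Hypothetical test case record for a withdrawal rule
    test_case = {
        "id": "TC-042",
        "test_condition": "Withdrawal above the daily limit is rejected",
        "preconditions": ["Account balance is 500", "Daily limit is 200"],
        "input_values": {"withdrawal_amount": 250},
        "expected_results": {"outcome": "rejected", "balance_after": 500},
        "postconditions": ["No transaction is recorded"],
        "traceability": ["REQ-17"],   # link back to the requirement
    }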


4.2 Categories of Test Design Techniques (K2)

15 minutes

Terms
Black-box test design technique, experience-based test design technique, test design technique, white-box test design technique

Background
The purpose of a test design technique is to identify test conditions, test cases, and test data.
It is a classic distinction to denote test techniques as black-box or white-box. Black-box test design techniques (also called specification-based techniques) are a way to derive and select test conditions, test cases, or test data based on an analysis of the test basis documentation. This includes both functional and non-functional testing. Black-box testing, by definition, does not use any information regarding the internal structure of the component or system to be tested. White-box test design techniques (also called structural or structure-based techniques) are based on an analysis of the structure of the component or system. Black-box and white-box testing may also be combined with experience-based techniques to leverage the experience of developers, testers and users to determine what should be tested.
Some techniques fall clearly into a single category; others have elements of more than one category.
This syllabus refers to specification-based test design techniques as black-box techniques and structure-based test design techniques as white-box techniques. In addition, experience-based test design techniques are covered.

Common characteristics of specification-based test design techniques include:
o Models, either formal or informal, are used for the specification of the problem to be solved, the software or its components
o Test cases can be derived systematically from these models

Common characteristics of structure-based test design techniques include:
o Information about how the software is constructed is used to derive the test cases (e.g., code and detailed design information)
o The extent of coverage of the software can be measured for existing test cases, and further test cases can be derived systematically to increase coverage

Common characteristics of experience-based test design techniques include:
o The knowledge and experience of people are used to derive the test cases
o The knowledge of testers, developers, users and other stakeholders about the software, its usage and its environment is one source of information
o Knowledge about likely defects and their distribution is another source of information


4.3 Specification-based or Black-box Techniques (K3)

150 minutes

Terms
Boundary value analysis, decision table testing, equivalence partitioning, state transition testing, use case testing

4.3.1 Equivalence Partitioning (K3)

In equivalence partitioning, inputs to the software or system are divided into groups that are expected to exhibit similar behavior, so they are likely to be processed in the same way. Equivalence partitions (or classes) can be found for both valid data, i.e., values that should be accepted, and invalid data, i.e., values that should be rejected. Partitions can also be identified for outputs, internal values, time-related values (e.g., before or after an event) and for interface parameters (e.g., integrated components being tested during integration testing). Tests can be designed to cover all valid and invalid partitions. Equivalence partitioning is applicable at all levels of testing.
Equivalence partitioning can be used to achieve input and output coverage goals. It can be applied to human input, input via interfaces to a system, or interface parameters in integration testing.
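
A minimal sketch (the age rule is an assumption made only for illustration): for an input field that accepts ages 18 to 65, three partitions and one representative test value per partition could be chosen as follows:

    # Hypothetical requirement: ages 18..65 are accepted, everything else is rejected
    partitions = {
        "invalid_below": range(0, 18),     # invalid partition
        "valid":         range(18, 66),    # valid partition
        "invalid_above": range(66, 130),   # invalid partition
    }
    # One representative value per partition covers all valid and invalid partitions
    test_values = [17, 40, 70]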

4.3.2 Boundary Value Analysis (K3)

Behavior at the edge of each equivalence partition is more likely to be incorrect than behavior within the partition, so boundaries are an area where testing is likely to yield defects. The maximum and minimum values of a partition are its boundary values. A boundary value for a valid partition is a valid boundary value; the boundary of an invalid partition is an invalid boundary value. Tests can be designed to cover both valid and invalid boundary values. When designing test cases, a test for each boundary value is chosen.
Boundary value analysis can be applied at all test levels. It is relatively easy to apply and its defect-finding capability is high. Detailed specifications are helpful in determining the interesting boundaries.
This technique is often considered as an extension of equivalence partitioning or other black-box test design techniques. It can be used on equivalence classes for user input on screen as well as, for example, on time ranges (e.g., time out, transactional speed requirements) or table ranges (e.g., table size is 256*256).
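
Continuing the hypothetical 18..65 age example used for equivalence partitioning above, boundary value analysis selects the values at the edges of each partition:

    valid_boundaries = [18, 65]      # smallest and largest accepted values
    invalid_boundaries = [17, 66]    # just outside each valid boundary
    test_values = sorted(valid_boundaries + invalid_boundaries)   # [17, 18, 65, 66]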

4.3.3 Decision Table Testing (K3)

Decision tables are a good way to capture system requirements that contain logical conditions, and to document internal system design. They may be used to record complex business rules that a system is to implement. When creating decision tables, the specification is analyzed, and conditions and actions of the system are identified. The input conditions and actions are most often stated in such a way that they must be true or false (Boolean). The decision table contains the triggering conditions, often combinations of true and false for all input conditions, and the resulting actions for each combination of conditions. Each column of the table corresponds to a business rule that defines a unique combination of conditions and which result in the execution of the actions associated with that rule. The coverage standard commonly used with decision table testing is to have at least one test per column in the table, which typically involves covering all combinations of triggering conditions.


The strength of decision table testing is that it creates combinations of conditions that otherwise might not have been exercised during testing. It may be applied to all situations when the action of the software depends on several logical decisions.
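
A minimal sketch of a decision table (the loan rules are assumptions made only for illustration); with two Boolean conditions there are four columns, and the commonly used coverage standard of at least one test per column means four test cases:

    # Conditions: existing_customer, good_credit; action: grant_loan
    decision_table = [
        {"existing_customer": True,  "good_credit": True,  "grant_loan": True},
        {"existing_customer": True,  "good_credit": False, "grant_loan": False},
        {"existing_customer": False, "good_credit": True,  "grant_loan": True},
        {"existing_customer": False, "good_credit": False, "grant_loan": False},
    ]
    # One test case per column (rule) exercises every combination of conditions
    for rule in decision_table:
        print(rule["existing_customer"], rule["good_credit"], "->", rule["grant_loan"])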

4.3.4 State Transition Testing (K3)

A system may exhibit a different response depending on current conditions or previous history (its state). In this case, that aspect of the system can be shown with a state transition diagram. It allows the tester to view the software in terms of its states, transitions between states, the inputs or events that trigger state changes (transitions) and the actions which may result from those transitions. The states of the system or object under test are separate, identifiable and finite in number.

A state table shows the relationship between the states and inputs, and can highlight possible transitions that are invalid.
Tests can be designed to cover a typical sequence of states, to cover every state, to exercise every transition, to exercise specific sequences of transitions or to test invalid transitions.
State transition testing is much used within the embedded software industry and technical automation in general. However, the technique is also suitable for modeling a business object having specific states or testing screen-dialogue flows (e.g., for Internet applications or business scenarios).
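
A minimal sketch of a state table and a few transition tests (the card-reader states and events are assumptions made only for illustration):

    # (current state, event) -> next state; pairs not listed are invalid transitions
    transitions = {
        ("idle",        "insert_card"): "waiting_pin",
        ("waiting_pin", "valid_pin"):   "ready",
        ("waiting_pin", "invalid_pin"): "waiting_pin",
        ("ready",       "eject_card"):  "idle",
    }

    def next_state(state, event):
        # Returns None for an invalid transition, which a test can then detect
        return transitions.get((state, event))

    # Covering every entry of the table exercises every valid transition once
    assert next_state("idle", "insert_card") == "waiting_pin"
    assert next_state("ready", "insert_card") is None   # test of an invalid transition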

4.3.5 Use Case Testing (K2)

Tests can be derived from use cases. A use case describes interactions between actors (users or systems), which produce a result of value to a system user or the customer. Use cases may be described at the abstract level (business use case, technology-free, business process level) or at the system level (system use case on the system functionality level). Each use case has preconditions which need to be met for the use case to work successfully. Each use case terminates with postconditions which are the observable results and final state of the system after the use case has been completed. A use case usually has a mainstream (i.e., most likely) scenario and alternative scenarios.
Use cases describe the process flows through a system based on its actual likely use, so the test cases derived from use cases are most useful in uncovering defects in the process flows during real-world use of the system. Use cases are very useful for designing acceptance tests with customer/user participation. They also help uncover integration defects caused by the interaction and interference of different components, which individual component testing would not see. Designing test cases from use cases may be combined with other specification-based test techniques.


4.4 Structure-based or White-box Techniques (K4)

60 minutes

Terms
Code coverage, decision coverage, statement coverage, structure-based testing

Background
Structure-based or white-box testing is based on an identified structure of the software or the system, as seen in the following examples:
o Component level: the structure of a software component, i.e., statements, decisions, branches or even distinct paths
o Integration level: the structure may be a call tree (a diagram in which modules call other modules)
o System level: the structure may be a menu structure, business process or web page structure

In this section, three code-related structural test design techniques for code coverage, based on statements, branches and decisions, are discussed. For decision testing, a control flow diagram may be used to visualize the alternatives for each decision.

4.4.1 Statement Testing and Coverage (K4)

In component testing, statement coverage is the assessment of the percentage of executable statements that have been exercised by a test case suite. The statement testing technique derives test cases to execute specific statements, normally to increase statement coverage.
Statement coverage is determined by the number of executable statements covered by (designed or executed) test cases divided by the number of all executable statements in the code under test.
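
A small worked example applying the formula above (the grade() function is an assumption made only for illustration):

    def grade(score):
        result = "fail"       # statement 1
        if score >= 50:       # statement 2
            result = "pass"   # statement 3
        return result         # statement 4

    # The single test case grade(70) executes statements 1-4: 4/4 = 100% statement coverage.
    # The single test case grade(30) executes statements 1, 2 and 4 only: 3/4 = 75%.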

4.4.2 Decision Testing and Coverage (K4)

Decision coverage, related to branch testing, is the assessment of the percentage of decision outcomes (e.g., the True and False options of an IF statement) that have been exercised by a test case suite. The decision testing technique derives test cases to execute specific decision outcomes. Branches originate from decision points in the code and show the transfer of control to different locations in the code.
Decision coverage is determined by the number of all decision outcomes covered by (designed or executed) test cases divided by the number of all possible decision outcomes in the code under test.
Decision testing is a form of control flow testing as it follows a specific flow of control through the decision points. Decision coverage is stronger than statement coverage; 100% decision coverage guarantees 100% statement coverage, but not vice versa.
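
Continuing the hypothetical grade() example from 4.4.1, its single IF statement has two decision outcomes (True and False); a worked calculation using the formula above:

    def grade(score):
        result = "fail"
        if score >= 50:       # one decision point with two outcomes (True / False)
            result = "pass"
        return result

    # grade(70) exercises only the True outcome:  1 of 2 outcomes = 50% decision coverage
    # grade(30) adds the False outcome:           2 of 2 outcomes = 100% decision coverage
    # Note that grade(70) alone already gives 100% statement coverage but only 50% decision
    # coverage, illustrating that decision coverage is the stronger criterion.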

4.4.3 Other Structure-based Techniques (K1)

There are stronger levels of structural coverage beyond decision coverage, for example, condition coverage and multiple condition coverage.
The concept of coverage can also be applied at other test levels. For example, at the integration level the percentage of modules, components or classes that have been exercised by a test case suite could be expressed as module, component or class coverage.
Tool support is useful for the structural testing of code.

4.5 Experience-based Techniques (K2)

30 minutes

Terms
Exploratory testing, (fault) attack

Background
Experience-based testing is where tests are derived from the tester's skill and intuition and their experience with similar applications and technologies. When used to augment systematic techniques, these techniques can be useful in identifying special tests not easily captured by formal techniques, especially when applied after more formal approaches. However, this technique may yield widely varying degrees of effectiveness, depending on the testers' experience.
A commonly used experience-based technique is error guessing. Generally testers anticipate defects based on experience. A structured approach to the error guessing technique is to enumerate a list of possible defects and to design tests that attack these defects. This systematic approach is called fault attack. These defect and failure lists can be built based on experience, available defect and failure data, and from common knowledge about why software fails.

Exploratory testing is concurrent test design, test execution, test logging and learning, based on a test charter containing test objectives, and carried out within time-boxes. It is an approach that is most useful where there are few or inadequate specifications and severe time pressure, or in order to augment or complement other, more formal testing. It can serve as a check on the test process, to help ensure that the most serious defects are found.


4.6 Choosing Test Techniques (K2)

15 minutes

Terms
No specific terms.

Background
The choice of which test techniques to use depends on a number of factors, including the type of system, regulatory standards, customer or contractual requirements, level of risk, type of risk, test objective, documentation available, knowledge of the testers, time and budget, development life cycle, use case models and previous experience with types of defects found.
Some techniques are more applicable to certain situations and test levels; others are applicable to all test levels.
When creating test cases, testers generally use a combination of test techniques including process, rule and data-driven techniques to ensure adequate coverage of the object under test.

References
4.1 Craig, 2002, Hetzel, 1988, IEEE STD 829-1998
4.2 Beizer, 1990, Copeland, 2004
4.3.1 Copeland, 2004, Myers, 1979
4.3.2 Copeland, 2004, Myers, 1979
4.3.3 Beizer, 1990, Copeland, 2004
4.3.4 Beizer, 1990, Copeland, 2004
4.3.5 Copeland, 2004
4.4.3 Beizer, 1990, Copeland, 2004
4.5 Kaner, 2002
4.6 Beizer, 1990, Copeland, 2004


5. Test Management (K3)

170 minutes

Learning Objectives for Test Management
The objectives identify what you will be able to do following the completion of each module.

5.1 Test Organization (K2)

LO-5.1.1 Recognize the importance of independent testing (K1)
LO-5.1.2 Explain the benefits and drawbacks of independent testing within an organization (K2)
LO-5.1.3 Recognize the different team members to be considered for the creation of a test team (K1)
LO-5.1.4 Recall the tasks of a typical test leader and tester (K1)

5.2 Test Planning and Estimation (K3)

LO-5.2.1 Recognize the different levels and objectives of test planning (K1)
LO-5.2.2 Summarize the purpose and content of the test plan, test design specification and test procedure documents according to the Standard for Software Test Documentation (IEEE Std 829-1998) (K2)
LO-5.2.3 Differentiate between conceptually different test approaches, such as analytical, model-based, methodical, process/standard compliant, dynamic/heuristic, consultative and regression-averse (K2)
LO-5.2.4 Differentiate between the subject of test planning for a system and scheduling test execution (K2)
LO-5.2.5 Write a test execution schedule for a given set of test cases, considering prioritization, and technical and logical dependencies (K3)
LO-5.2.6 List test preparation and execution activities that should be considered during test planning (K1)
LO-5.2.7 Recall typical factors that influence the effort related to testing (K1)
LO-5.2.8 Differentiate between two conceptually different estimation approaches: the metrics-based approach and the expert-based approach (K2)
LO-5.2.9 Recognize/justify adequate entry and exit criteria for specific test levels and groups of test cases (e.g., for integration testing, acceptance testing or test cases for usability testing) (K2)

5.3 Test Progress Monitoring and Control (K2)


LO-5.3.1 Recall common metrics used for monitoring test preparation and execution (K1)
LO-5.3.2 Explain and compare test metrics for test reporting and test control (e.g., defects found and fixed, and tests passed and failed) related to purpose and use (K2)
LO-5.3.3 Summarize the purpose and content of the test summary report document according to the Standard for Software Test Documentation (IEEE Std 829-1998) (K2)

5.4 Configuration Management (K2)


LO-5.4.1 Summarize how configuration management supports testing (K2)

5.5 Risk and Testing (K2)


LO-5.5.1 Describe a risk as a possible problem that would threaten the achievement of one or more stakeholders' project objectives (K2)
LO-5.5.2 Remember that the level of risk is determined by likelihood (of happening) and impact (harm resulting if it does happen) (K1)
LO-5.5.3 Distinguish between the project and product risks (K2)
LO-5.5.4 Recognize typical product and project risks (K1)
LO-5.5.5 Describe, using examples, how risk analysis and risk management may be used for test planning (K2)


5.6 Incident Management (K3)


LO-5.6.1 Recognize the content of an incident report according to the Standard for Software Test Documentation (IEEE Std 829-1998) (K1)
LO-5.6.2 Write an incident report covering the observation of a failure during testing (K3)


5.1 Test Organization (K2)


30 minutes

Terms
Tester, test leader, test manager

Test Organization and Independence (K2)


The effectiveness of finding defects by testing and reviews can be improved by using independent testers. Options for independence include the following:
o No independent testers; developers test their own code
o Independent testers within the development teams
o Independent test team or group within the organization, reporting to project management or executive management
o Independent testers from the business organization or user community
o Independent test specialists for specific test types such as usability testers, security testers or certification testers (who certify a software product against standards and regulations)
o Independent testers outsourced or external to the organization

For large, complex or safety critical projects, it is usually best to have multiple levels of testing, with some or all of the levels done by independent testers. Development staff may participate in testing, especially at the lower levels, but their lack of objectivity often limits their effectiveness. The independent testers may have the authority to require and define test processes and rules, but testers should take on such process-related roles only in the presence of a clear management mandate to do so.
The benefits of independence include:
o Independent testers see other and different defects, and are unbiased
o An independent tester can verify assumptions people made during specification and implementation of the system
Drawbacks include:
o Isolation from the development team (if treated as totally independent)
o Developers may lose a sense of responsibility for quality
o Independent testers may be seen as a bottleneck or blamed for delays in release
Testing tasks may be done by people in a specific testing role, or may be done by someone in another role, such as a project manager, quality manager, developer, business and domain expert, infrastructure or IT operations.

Tasks of the Test Leader and Tester (K1)


In this syllabus two test positions are covered, test leader and tester. The activities and tasks performed by people in these two roles depend on the project and product context, the people in the roles, and the organization.
Sometimes the test leader is called a test manager or test coordinator. The role of the test leader may be performed by a project manager, a development manager, a quality assurance manager or the manager of a test group. In larger projects two positions may exist: test leader and test manager. Typically the test leader plans, monitors and controls the testing activities and tasks as defined in Section 1.4.
Typical test leader tasks may include:
o Coordinate the test strategy and plan with project managers and others
o Write or review a test strategy for the project, and test policy for the organization
o Contribute the testing perspective to other project activities, such as integration planning
o Plan the tests, considering the context and understanding the test objectives and risks, including selecting test approaches, estimating the time, effort and cost of testing, acquiring resources, defining test levels, cycles, and planning incident management
o Initiate the specification, preparation, implementation and execution of tests, monitor the test results and check the exit criteria
o Adapt planning based on test results and progress (sometimes documented in status reports) and take any action necessary to compensate for problems
o Set up adequate configuration management of testware for traceability
o Introduce suitable metrics for measuring test progress and evaluating the quality of the testing and the product
o Decide what should be automated, to what degree, and how
o Select tools to support testing and organize any training in tool use for testers
o Decide about the implementation of the test environment
o Write test summary reports based on the information gathered during testing

Typical tester tasks may include:
o Review and contribute to test plans
o Analyze, review and assess user requirements, specifications and models for testability
o Create test specifications
o Set up the test environment (often coordinating with system administration and network management)
o Prepare and acquire test data
o Implement tests on all test levels, execute and log the tests, evaluate the results and document the deviations from expected results
o Use test administration or management tools and test monitoring tools as required
o Automate tests (may be supported by a developer or a test automation expert)
o Measure performance of components and systems (if applicable)
o Review tests developed by others

People who work on test analysis, test design, specific test types or test automation may be specialists in these roles. Depending on the test level and the risks related to the product and the project, different people may take over the role of tester, keeping some degree of independence. Typically, testers at the component and integration level would be developers, testers at the acceptance test level would be business experts and users, and testers for operational acceptance testing would be operators.


5.2 Test Planning and Estimation (K3)


40 minutes

Terms
Test approach, test strategy

Test Planning (K2)


This section covers the purpose of test planning within development and implementation projects, and for maintenance activities. Planning may be documented in a master test plan and in separate test plans for test levels such as system testing and acceptance testing. The outline of a test-planning document is covered by the Standard for Software Test Documentation (IEEE Std 829-1998).
Planning is influenced by the test policy of the organization, the scope of testing, objectives, risks, constraints, criticality, testability and the availability of resources. As the project and test planning progress, more information becomes available and more detail can be included in the plan.
Test planning is a continuous activity and is performed in all life cycle processes and activities. Feedback from test activities is used to recognize changing risks so that planning can be adjusted.

Test Planning Activities (K3)


Test planning activities for an entire system or part of a system may include:
o Determining the scope and risks and identifying the objectives of testing
o Defining the overall approach of testing, including the definition of the test levels and entry and exit criteria
o Integrating and coordinating the testing activities into the software life cycle activities (acquisition, supply, development, operation and maintenance)
o Making decisions about what to test, what roles will perform the test activities, how the test activities should be done, and how the test results will be evaluated
o Scheduling test analysis and design activities
o Scheduling test implementation, execution and evaluation
o Assigning resources for the different activities defined
o Defining the amount, level of detail, structure and templates for the test documentation
o Selecting metrics for monitoring and controlling test preparation and execution, defect resolution and risk issues
o Setting the level of detail for test procedures in order to provide enough information to support reproducible test preparation and execution

Entry Criteria (K2)


Entry criteria define when to start testing, such as at the beginning of a test level or when a set of tests is ready for execution.
Typically entry criteria may cover the following:
o Test environment availability and readiness
o Test tool readiness in the test environment
o Testable code availability
o Test data availability

Exit Criteria (K2)


Exit criteria define when to stop testing, such as at the end of a test level or when a set of tests has achieved a specific goal.

Typically exit criteria may cover the following:
o Thoroughness measures, such as coverage of code, functionality or risk
o Estimates of defect density or reliability measures
o Cost
o Residual risks, such as defects not fixed or lack of test coverage in certain areas
o Schedules such as those based on time to market
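As a hedged illustration (not prescribed by the syllabus), exit criteria like these can be checked mechanically at the end of a test level; in the Python sketch below the thresholds (90% coverage, no open critical defects, no failing tests) are example values only.

    # Illustrative sketch: evaluating simple exit criteria for a test level.
    # The specific thresholds (90% coverage, no open critical defects) are
    # example values only, not requirements from the syllabus.

    def exit_criteria_met(coverage_pct, open_critical_defects, tests_failed):
        """Return (decision, reasons) for a simple set of exit criteria."""
        reasons = []
        if coverage_pct < 90.0:
            reasons.append(f"coverage {coverage_pct:.1f}% is below the 90% target")
        if open_critical_defects > 0:
            reasons.append(f"{open_critical_defects} critical defect(s) still open")
        if tests_failed > 0:
            reasons.append(f"{tests_failed} test case(s) still failing")
        return (len(reasons) == 0, reasons)

    if __name__ == "__main__":
        done, why_not = exit_criteria_met(coverage_pct=87.5,
                                          open_critical_defects=2,
                                          tests_failed=1)
        print("Exit criteria met" if done else "Keep testing: " + "; ".join(why_not))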

Test Estimation (K2)


Two approaches for the estimation of test effort are:
o The metrics-based approach: estimating the testing effort based on metrics of former or similar projects, or based on typical values
o The expert-based approach: estimating the tasks based on estimates made by the owner of the tasks or by experts
Once the test effort is estimated, resources can be identified and a schedule can be drawn up.
The testing effort may depend on a number of factors, including:
o Characteristics of the product: the quality of the specification and other information used for test models (i.e., the test basis), the size of the product, the complexity of the problem domain, the requirements for reliability and security, and the requirements for documentation
o Characteristics of the development process: the stability of the organization, tools used, test process, skills of the people involved, and time pressure
o The outcome of testing: the number of defects and the amount of rework required
Test Strategy, Test Approach (K2)


The test approach is the implementation of the test strategy for a specific project. The test approach is defined and refined in the test plans and test designs. It typically includes the decisions made based on the (test) project's goal and risk assessment. It is the starting point for planning the test process, for selecting the test design techniques and test types to be applied, and for defining the entry and exit criteria.
The selected approach depends on the context and may consider risks, hazards and safety, available resources and skills, the technology, the nature of the system (e.g., custom built vs. COTS), test objectives, and regulations.
Typical approaches include:
o Analytical approaches, such as risk-based testing where testing is directed to areas of greatest risk
o Model-based approaches, such as stochastic testing using statistical information about failure rates (such as reliability growth models) or usage (such as operational profiles)
o Methodical approaches, such as failure-based (including error guessing and fault attacks), experience-based, checklist-based, and quality characteristic-based
o Process- or standard-compliant approaches, such as those specified by industry-specific standards or the various agile methodologies
o Dynamic and heuristic approaches, such as exploratory testing where testing is more reactive to events than pre-planned, and where execution and evaluation are concurrent tasks
o Consultative approaches, such as those in which test coverage is driven primarily by the advice and guidance of technology and/or business domain experts outside the test team
o Regression-averse approaches, such as those that include reuse of existing test material, extensive automation of functional regression tests, and standard test suites
Different approaches may be combined, for example, a risk-based dynamic approach.


5.3 Test Progress Monitoring and Control (K2)


20 minutes

Terms
Defect density, failure rate, test control, test monitoring, test summary report

Test Progress Monitoring (K1)


The purpose of test monitoring is to provide feedback and visibility about test activities. Information to be monitored may be collected manually or automatically and may be used to measure exit criteria, such as coverage. Metrics may also be used to assess progress against the planned schedule and budget.
Common test metrics include:
o Percentage of work done in test case preparation (or percentage of planned test cases prepared)
o Percentage of work done in test environment preparation
o Test case execution (e.g., number of test cases run/not run, and test cases passed/failed)
o Defect information (e.g., defect density, defects found and fixed, failure rate, and re-test results)
o Test coverage of requirements, risks or code
o Subjective confidence of testers in the product
o Dates of test milestones
o Testing costs, including the cost compared to the benefit of finding the next defect or to run the next test
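Purely as an illustration, a few of these metrics (preparation progress, execution status and defect density) can be computed from raw counts as in the Python sketch below; the numbers and the use of KLOC as the size measure are assumptions for the example.

    # Illustrative sketch: computing a few common test monitoring metrics
    # from raw counts. All input numbers are example values.

    def preparation_progress(prepared, planned):
        """Percentage of planned test cases already prepared."""
        return 100.0 * prepared / planned

    def execution_summary(passed, failed, not_run):
        """Simple execution status breakdown."""
        total = passed + failed + not_run
        return {"run_pct": 100.0 * (passed + failed) / total,
                "pass_pct": 100.0 * passed / total,
                "fail_pct": 100.0 * failed / total}

    def defect_density(defects_found, size_kloc):
        """Defects per thousand lines of code (one possible size measure)."""
        return defects_found / size_kloc

    if __name__ == "__main__":
        print(f"Prepared: {preparation_progress(180, 240):.1f}% of planned test cases")
        print("Execution:", execution_summary(passed=120, failed=15, not_run=45))
        print(f"Defect density: {defect_density(48, 32.0):.2f} defects/KLOC")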

Test Reporting (K2)


Test reporting is concerned with summarizing information about the testing endeavor, including:
o What happened during a period of testing, such as dates when exit criteria were met
o Analyzed information and metrics to support recommendations and decisions about future actions, such as an assessment of defects remaining, the economic benefit of continued testing, outstanding risks, and the level of confidence in the tested software
The outline of a test summary report is given in the Standard for Software Test Documentation (IEEE Std 829-1998).
Metrics should be collected during and at the end of a test level in order to assess:
o The adequacy of the test objectives for that test level
o The adequacy of the test approaches taken
o The effectiveness of the testing with respect to the objectives

Test Control (K2)


Test control describes any guiding or corrective actions taken as a result of information and metrics gathered and reported. Actions may cover any test activity and may affect any other software life cycle activity or task.
Examples of test control actions include:
o Making decisions based on information from test monitoring
o Re-prioritizing tests when an identified risk occurs (e.g., software delivered late)
o Changing the test schedule due to availability or unavailability of a test environment
o Setting an entry criterion requiring fixes to have been re-tested (confirmation tested) by a developer before accepting them into a build


5.4 Configuration Management (K2)


10 minutes

Terms
Configuration management, version control

Background
The purpose of configuration management is to establish and maintain the integrity of the products (components, data and documentation) of the software or system through the project and product life cycle.
For testing, configuration management may involve ensuring the following:
o All items of testware are identified, version controlled, tracked for changes, related to each other and related to development items (test objects) so that traceability can be maintained throughout the test process
o All identified documents and software items are referenced unambiguously in test documentation
For the tester, configuration management helps to uniquely identify (and to reproduce) the tested item, test documents, the tests and the test harness(es).
During test planning, the configuration management procedures and infrastructure (tools) should be chosen, documented and implemented.


5.5 Risk and Testing (K2)


30 minutes

Terms
Product risk, project risk, risk, risk-based testing

Background
Risk can be defined as the chance of an event, hazard, threat or situation occurring and resulting in undesirable consequences or a potential problem. The level of risk will be determined by the likelihood of an adverse event happening and the impact (the harm resulting from that event).
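A minimal illustration of this definition, using assumed 1-5 scales and invented risk items, is the following Python sketch, which scores each risk as likelihood multiplied by impact and ranks the list so that the highest exposures can be addressed first.

    # Illustrative sketch: risk level as likelihood x impact, used to rank risks.
    # The 1-5 scales and the example risks are assumptions, not syllabus content.

    risks = [
        {"name": "Payment calculation wrong", "likelihood": 3, "impact": 5},
        {"name": "Slow report generation",    "likelihood": 4, "impact": 2},
        {"name": "Help text outdated",        "likelihood": 5, "impact": 1},
    ]

    def exposure(risk):
        """Risk level = likelihood (of happening) x impact (harm if it happens)."""
        return risk["likelihood"] * risk["impact"]

    if __name__ == "__main__":
        for risk in sorted(risks, key=exposure, reverse=True):
            print(f"{exposure(risk):>2}  {risk['name']}")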

Project Risks (K2)


Project risks are the risks that surround the project's capability to deliver its objectives, such as:
o Organizational factors:
   - Skill, training and staff shortages
   - Personnel issues
   - Political issues, such as:
      - Problems with testers communicating their needs and test results
      - Failure by the team to follow up on information found in testing and reviews (e.g., not improving development and testing practices)
      - Improper attitude toward or expectations of testing (e.g., not appreciating the value of finding defects during testing)
o Technical issues:
   - Problems in defining the right requirements
   - The extent to which requirements cannot be met given existing constraints
   - Test environment not ready on time
   - Late data conversion, migration planning and development and testing data conversion/migration tools
   - Low quality of the design, code, configuration data, test data and tests
o Supplier issues:
   - Failure of a third party
   - Contractual issues
When analyzing, managing and mitigating these risks, the test manager is following well-established project management principles. The Standard for Software Test Documentation (IEEE Std 829-1998) outline for test plans requires risks and contingencies to be stated.

Product Risks (K2)


Potential failure areas (adverse future events or hazards) in the software or system are known as product risks, as they are a risk to the quality of the product. These include:
o Failure-prone software delivered
o The potential that the software/hardware could cause harm to an individual or company
o Poor software characteristics (e.g., functionality, reliability, usability and performance)
o Poor data integrity and quality (e.g., data migration issues, data conversion problems, data transport problems, violation of data standards)
o Software that does not perform its intended functions
Risks are used to decide where to start testing and where to test more; testing is used to reduce the risk of an adverse effect occurring, or to reduce the impact of an adverse effect.


Product risks are a special type of risk to the success of a project. Testing as a risk-control activity provides feedback about the residual risk by measuring the effectiveness of critical defect removal and of contingency plans.
A risk-based approach to testing provides proactive opportunities to reduce the levels of product risk, starting in the initial stages of a project. It involves the identification of product risks and their use in guiding test planning and control, specification, preparation and execution of tests. In a risk-based approach the risks identified may be used to:
o Determine the test techniques to be employed
o Determine the extent of testing to be carried out
o Prioritize testing in an attempt to find the critical defects as early as possible
o Determine whether any non-testing activities could be employed to reduce risk (e.g., providing training to inexperienced designers)
Risk-based testing draws on the collective knowledge and insight of the project stakeholders to determine the risks and the levels of testing required to address those risks.
To ensure that the chance of a product failure is minimized, risk management activities provide a disciplined approach to:
o Assess (and reassess on a regular basis) what can go wrong (risks)
o Determine what risks are important to deal with
o Implement actions to deal with those risks
In addition, testing may support the identification of new risks, may help to determine what risks should be reduced, and may lower uncertainty about risks.


5.6 Incident Management (K3)


40 minutes

Terms
Incident logging, incident management, incident report

Background
Since one of the objectives of testing is to find defects, the discrepancies between actual and expected outcomes need to be logged as incidents. An incident must be investigated and may turn out to be a defect. Appropriate actions to dispose of incidents and defects should be defined. Incidents and defects should be tracked from discovery and classification to correction and confirmation of the solution. In order to manage all incidents to completion, an organization should establish an incident management process and rules for classification.
Incidents may be raised during development, review, testing or use of a software product. They may be raised for issues in code or the working system, or in any type of documentation including requirements, development documents, test documents, and user information such as Help or installation guides.
Incident reports have the following objectives:
o Provide developers and other parties with feedback about the problem to enable identification, isolation and correction as necessary
o Provide test leaders a means of tracking the quality of the system under test and the progress of the testing
o Provide ideas for test process improvement
Details of the incident report may include:
o Date of issue, issuing organization, and author
o Expected and actual results
o Identification of the test item (configuration item) and environment
o Software or system life cycle process in which the incident was observed
o Description of the incident to enable reproduction and resolution, including logs, database dumps or screen shots
o Scope or degree of impact on stakeholder(s) interests
o Severity of the impact on the system
o Urgency/priority to fix
o Status of the incident (e.g., open, deferred, duplicate, waiting to be fixed, fixed awaiting re-test, closed)
o Conclusions, recommendations and approvals
o Global issues, such as other areas that may be affected by a change resulting from the incident
o Change history, such as the sequence of actions taken by project team members with respect to the incident to isolate, repair, and confirm it as fixed
o References, including the identity of the test case specification that revealed the problem
The structure of an incident report is also covered in the Standard for Software Test Documentation (IEEE Std 829-1998).
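Purely as an illustration of how such fields might be captured in a lightweight script or tool (the selection below is a subset, and the field names are assumptions rather than the IEEE Std 829-1998 wording), a minimal Python sketch:

    # Illustrative sketch: a minimal incident report record holding a subset of
    # the fields listed above. Field names and example values are assumptions.

    from dataclasses import dataclass, field
    from datetime import date

    @dataclass
    class IncidentReport:
        author: str
        date_of_issue: date
        test_item: str                 # identification of the configuration item
        environment: str
        expected_result: str
        actual_result: str
        description: str               # enough detail to reproduce the failure
        severity: str = "medium"       # impact on the system
        priority: str = "normal"       # urgency to fix
        status: str = "open"           # e.g., open, deferred, fixed awaiting re-test, closed
        references: list = field(default_factory=list)  # e.g., test case specification IDs

    if __name__ == "__main__":
        report = IncidentReport(
            author="A. Tester",
            date_of_issue=date.today(),
            test_item="billing-service build 1.4.2",
            environment="system test environment, browser X",
            expected_result="invoice total is 100.00",
            actual_result="invoice total is 0.00",
            description="Steps: create order with two items, open invoice view...",
            references=["TC-BILL-017"],
        )
        print(report.status, "-", report.description[:30])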


References
Black, 2001, Hetzel, 1988
Black, 2001, Hetzel, 1988
5.2.5 Black, 2001, Craig, 2002, IEEE Std 829-1998, Kaner, 2002
5.3.3 Black, 2001, Craig, 2002, Hetzel, 1988, IEEE Std 829-1998
5.4 Craig, 2002
5.5.2 Black, 2001, IEEE Std 829-1998
5.6 Black, 2001, IEEE Std 829-1998


6. Tool Support for Testing (K2)


80 minutes
Learning Objectives for Tool Support for Testing
The objectives identify what you will be able to do following the completion of each module.

6.1 Types of Test Tools (K2)


LO-6.1.1 Classify different types of test tools according to their purpose and to the activities of the fundamental test process and the software life cycle (K2)
LO-6.1.3 Explain the term test tool and the purpose of tool support for testing (K2)

6.2 Effective Use of Tools: Potential Benefits and Risks (K2)


LO-6.2.1 Summarize the potential benefits and risks of test automation and tool support for testing (K2)
LO-6.2.2 Remember special considerations for test execution tools, static analysis, and test management tools (K1)

6.3 Introducing a Tool into an Organization (K1)


LO-6.3.1 State the main principles of introducing a tool into an organization (K1)
LO-6.3.2 State the goals of a proof-of-concept for tool evaluation and a piloting phase for tool implementation (K1)
LO-6.3.3 Recognize that factors other than simply acquiring a tool are required for good tool support (K1)

LO-6.1.2 Intentionally skipped


6.1 Types of Test Tools (K2)


45 minutes

Terms
Configuration management tool, coverage tool, debugging tool, dynamic analysis tool, incident management tool, load testing tool, modeling tool, monitoring tool, performance testing tool, probe effect, requirements management tool, review tool, security tool, static analysis tool, stress testing tool, test comparator, test data preparation tool, test design tool, test harness, test execution tool, test management tool, unit test framework tool

Tool Support for Testing (K2)


Test tools can be used for one or more activities that support testing. These include:
o Tools that are directly used in testing such as test execution tools, test data generation tools and result comparison tools
o Tools that help in managing the testing process such as those used to manage tests, test results, data, requirements, incidents, defects, etc., and for reporting and monitoring test execution
o Tools that are used in reconnaissance, or, in simple terms: exploration (e.g., tools that monitor file activity for an application)
o Any tool that aids in testing (a spreadsheet is also a test tool in this meaning)
Tool support for testing can have one or more of the following purposes depending on the context:
o Improve the efficiency of test activities by automating repetitive tasks or supporting manual test activities like test planning, test design, test reporting and monitoring
o Automate activities that require significant resources when done manually (e.g., static testing)
o Automate activities that cannot be executed manually (e.g., large scale performance testing of client-server applications)
o Increase reliability of testing (e.g., by automating large data comparisons or simulating behavior)
The term "test frameworks" is also frequently used in the industry, in at least three meanings:
o Reusable and extensible testing libraries that can be used to build testing tools (called test harnesses as well)
o A type of design of test automation (e.g., data-driven, keyword-driven)
o Overall process of execution of testing
For the purpose of this syllabus, the term "test frameworks" is used in its first two meanings, as described in Section 6.1.6.

Test Tool Classification (K2)


There are a number of tools that support different aspects of testing. Tools can be classified based on several criteria such as purpose, commercial / free / open-source / shareware, technology used and so forth. Tools are classified in this syllabus according to the testing activities that they support.
Some tools clearly support one activity; others may support more than one activity, but are classified under the activity with which they are most closely associated. Tools from a single provider, especially those that have been designed to work together, may be bundled into one package.
Some types of test tools can be intrusive, which means that they can affect the actual outcome of the test. For example, the actual timing may be different due to the extra instructions that are executed by the tool, or you may get a different measure of code coverage. The consequence of intrusive tools is called the probe effect.

Some tools offer support more appropriate for developers (e.g., tools that are used during component and component integration testing). Such tools are marked with "(D)" in the list below.

Tool Support for Management of Testing and Tests (K1)
Management tools apply to all test activities over the entire software life cycle.
Test Management Tools
These tools provide interfaces for executing tests, tracking defects and managing requirements, along with support for quantitative analysis and reporting of the test objects. They also support tracing the test objects to requirement specifications and might have an independent version control capability or an interface to an external one.
Requirements Management Tools
These tools store requirement statements, store the attributes for the requirements (including priority), provide unique identifiers and support tracing the requirements to individual tests. These tools may also help with identifying inconsistent or missing requirements.
Incident Management Tools (Defect Tracking Tools)
These tools store and manage incident reports, i.e., defects, failures, change requests or perceived problems and anomalies, and help in managing the life cycle of incidents, optionally with support for statistical analysis.
Configuration Management Tools
Although not strictly test tools, these are necessary for storage and version management of testware and related software, especially when configuring more than one hardware/software environment in terms of operating system versions, compilers, browsers, etc.

Tool Support for Static Testing (K1)


Static testing tools provide a cost effective way of finding more defects at an earlier stage in the development process.
Review Tools
These tools assist with review processes, checklists, review guidelines and are used to store and communicate review comments and report on defects and effort. They can be of further help by providing aid for online reviews for large or geographically dispersed teams.

Static Analysis Tools (D)


These tools help developers and testers find defects prior to dynamic testing by providing support for enforcing coding standards (including secure coding), analysis of structures and dependencies. They can also help in planning or risk analysis by providing metrics for the code (e.g., complexity).

Modeling Tools (D)


These tools are used to validate software models (e.g., a physical data model (PDM) for a relational database), by enumerating inconsistencies and finding defects. These tools can often aid in generating some test cases based on the model.

Tool Support for Test Specification (K1)


Test Design Tools
These tools are used to generate test inputs or executable tests and/or test oracles from requirements, graphical user interfaces, design models (state, data or object) or code.


Test Data Preparation Tools


Test data preparation tools manipulate databases, files or data transmissions to set up test data to be used during the execution of tests to ensure security through data anonymity.

Tool Support for Test Execution and Logging (K1)
Test Execution Tools
These tools enable tests to be executed automatically, or semi-automatically, using stored inputs and expected outcomes, through the use of a scripting language, and usually provide a test log for each test run. They can also be used to record tests, and usually support scripting languages or GUI-based configuration for parameterization of data and other customization in the tests.

Test Harness/Unit Test Framework Tools (D)


A unit test harness or framework facilitates the testing of components or parts of a system by simulating the environment in which that test object will run, through the provision of mock objects as stubs or drivers.
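A minimal sketch of this idea, using Python's standard unittest framework with a mock object acting as a stub; the component under test and its dependency are invented for the example.

    # Illustrative sketch: a unit test harness in which a stub (mock object)
    # simulates the environment of the test object. The component and its
    # dependency are hypothetical.

    import unittest
    from unittest.mock import Mock

    def price_with_tax(net_price, tax_service):
        """Component under test: adds the tax rate obtained from a dependency."""
        return net_price * (1 + tax_service.rate_for("DE"))

    class PriceWithTaxTest(unittest.TestCase):
        def test_adds_tax_from_rate_service(self):
            stub = Mock()                     # stands in for the real tax service
            stub.rate_for.return_value = 0.19 # canned answer for this test
            self.assertAlmostEqual(price_with_tax(100.0, stub), 119.0)
            stub.rate_for.assert_called_once_with("DE")

    if __name__ == "__main__":
        unittest.main()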
Test Comparators
Test comparators determine differences between files, databases or test results. Test execution tools typically include dynamic comparators, but post-execution comparison may be done by a separate comparison tool. A test comparator may use a test oracle, especially if it is automated.
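As a hedged illustration, a simple post-execution comparator can be as small as the following Python sketch, which diffs an actual output file against an expected (golden) file; the file names are placeholders.

    # Illustrative sketch: a very small post-execution test comparator that
    # reports line-by-line differences between expected and actual output files.
    # File names are placeholders for the example.

    import difflib
    from pathlib import Path

    def compare_outputs(expected_path, actual_path):
        """Return a list of unified-diff lines; an empty list means the files match."""
        expected = Path(expected_path).read_text().splitlines(keepends=True)
        actual = Path(actual_path).read_text().splitlines(keepends=True)
        return list(difflib.unified_diff(expected, actual,
                                         fromfile="expected", tofile="actual"))

    if __name__ == "__main__":
        diff = compare_outputs("expected_report.txt", "actual_report.txt")
        print("PASS" if not diff else "FAIL\n" + "".join(diff))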

Coverage Measurement Tools (D)


These tools, through intrusive or non-intrusive means, measure the percentage of specific types of code structures that have been exercised (e.g., statements, branches or decisions, and module or function calls) by a set of tests.
Security Testing Tools
These tools are used to evaluate the security characteristics of software. This includes evaluating the ability of the software to protect data confidentiality, integrity, authentication, authorization, availability, and non-repudiation. Security tools are mostly focused on a particular technology, platform, and purpose.

Tool Support for Performance and Monitoring (K1)


Dynamic Analysis Tools (D)
Dynamic analysis tools find defects that are evident only when software is executing, such as time dependencies or memory leaks. They are typically used in component and component integration testing, and when testing middleware.
Performance Testing/Load Testing/Stress Testing Tools
Performance testing tools monitor and report on how a system behaves under a variety of simulated usage conditions in terms of number of concurrent users, their ramp-up pattern, frequency and relative percentage of transactions. The simulation of load is achieved by means of creating virtual users carrying out a selected set of transactions, spread across various test machines commonly known as load generators.
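To make the notion of virtual users concrete, here is a deliberately simplified Python sketch (not how a commercial performance testing tool is implemented): each thread plays one virtual user executing a placeholder transaction repeatedly while response times are collected for reporting.

    # Illustrative sketch: simulating concurrent "virtual users", each executing
    # a transaction several times while response times are recorded.
    # The transaction is a placeholder; a real tool would drive the system under test.

    import statistics
    import threading
    import time

    def transaction():
        """Placeholder for one business transaction against the system under test."""
        time.sleep(0.01)

    def virtual_user(iterations, response_times, lock):
        for _ in range(iterations):
            start = time.perf_counter()
            transaction()
            elapsed = time.perf_counter() - start
            with lock:
                response_times.append(elapsed)

    if __name__ == "__main__":
        response_times, lock = [], threading.Lock()
        users = [threading.Thread(target=virtual_user, args=(20, response_times, lock))
                 for _ in range(10)]           # 10 concurrent virtual users
        for u in users:
            u.start()
        for u in users:
            u.join()
        print(f"{len(response_times)} transactions, "
              f"avg {statistics.mean(response_times)*1000:.1f} ms, "
              f"max {max(response_times)*1000:.1f} ms")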
Monitoring Tools
Monitoring tools continuously analyze, verify and report on usage of specific system resources, and give warnings of possible service problems.

Tool Support for Specific Testing Needs (K1)


Data Quality Assessment
Data is at the center of some projects such as data conversion/migration projects and applications like data warehouses, and its attributes can vary in terms of criticality and volume. In such contexts, tools need to be employed for data quality assessment to review and verify the data conversion and
migration rules to ensure that the processed data is correct, complete and complies with a pre-defined context-specific standard.
Other testing tools exist for usability testing.


6.2 Effective Use of Tools: Potential Benefits and Risks (K2)


20 minutes

Terms
Data-driven testing, keyword-driven testing, scripting language

6.2.1 Potential Benefits and Risks of Tool Support for Testing (for all tools) (K2)
Simply purchasing or leasing a tool does not guarantee success with that tool. Each type of tool may require additional effort to achieve real and lasting benefits. There are potential benefits and opportunities with the use of tools in testing, but there are also risks.
Potential benefits of using tools include:
o Repetitive work is reduced (e.g., running regression tests, re-entering the same test data, and checking against coding standards)
o Greater consistency and repeatability (e.g., tests executed by a tool in the same order with the same frequency, and tests derived from requirements)
o Objective assessment (e.g., static measures, coverage)
o Ease of access to information about tests or testing (e.g., statistics and graphs about test progress, incident rates and performance)
Risks of using tools include:
o Unrealistic expectations for the tool (including functionality and ease of use)
o Underestimating the time, cost and effort for the initial introduction of a tool (including training and external expertise)
o Underestimating the time and effort needed to achieve significant and continuing benefits from the tool (including the need for changes in the testing process and continuous improvement of the way the tool is used)
o Underestimating the effort required to maintain the test assets generated by the tool
o Over-reliance on the tool (replacement for test design or use of automated testing where manual testing would be better)
o Neglecting version control of test assets within the tool
o Neglecting relationships and interoperability issues between critical tools, such as requirements management tools, version control tools, incident management tools, defect tracking tools and tools from multiple vendors
o Risk of tool vendor going out of business, retiring the tool, or selling the tool to a different vendor
o Poor response from vendor for support, upgrades, and defect fixes
o Risk of suspension of open-source / free tool project
o Unforeseen, such as the inability to support a new platform

Special Considerations for Some Types of Tools (K1)


Test Execution Tools
Test execution tools execute test objects using automated test scripts. This type of tool often requires significant effort in order to achieve significant benefits.
Capturing tests by recording the actions of a manual tester seems attractive, but this approach does not scale to large numbers of automated test scripts. A captured script is a linear representation with specific data and actions as part of each script. This type of script may be unstable when unexpected events occur.

A data-driven testing approach separates out the test inputs (the data), usually into a spreadsheet, and uses a more generic test script that can read the input data and execute the same test script with different data. Testers who are not familiar with the scripting language can then create the test data for these predefined scripts.
There are other techniques employed in data-driven techniques, where instead of hard-coded data combinations placed in a spreadsheet, data is generated using algorithms based on configurable parameters at run time and supplied to the application. For example, a tool may use an algorithm which generates a random user ID, and for repeatability in pattern, a seed is employed for controlling randomness.
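A minimal sketch of the data-driven idea, under the assumptions that the test data is held in CSV form (a spreadsheet export) and that the function under test is a trivial invented example:

    # Illustrative sketch: a data-driven test script. The same generic test logic
    # is executed once per row of external test data; here the data is read from
    # a CSV source (a spreadsheet export) with columns: a, b, expected_sum.

    import csv
    import io

    def add(a, b):
        """Hypothetical function under test."""
        return a + b

    # Inline CSV so the sketch is self-contained; normally this would be a file.
    TEST_DATA = """a,b,expected_sum
    1,2,3
    10,-4,6
    0,0,0
    """

    def run_data_driven_tests(csv_text):
        failures = 0
        for row in csv.DictReader(io.StringIO(csv_text)):
            actual = add(int(row["a"]), int(row["b"]))
            expected = int(row["expected_sum"])
            if actual != expected:
                failures += 1
                print(f"FAIL: add({row['a']}, {row['b']}) = {actual}, expected {expected}")
        print(f"{failures} failure(s)")

    if __name__ == "__main__":
        run_data_driven_tests(TEST_DATA)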
In a keyword-driven testing approach, the spreadsheet contains keywords describing the actions to be taken (also called action words), and test data. Testers (even if they are not familiar with the scripting language) can then define tests using the keywords, which can be tailored to the application being tested.
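Under similar assumptions, a keyword-driven test can be sketched as follows: each step names an action word plus its data, and a small dispatcher maps keywords onto the functions that drive the (here simulated) application; the keywords themselves are invented for the example.

    # Illustrative sketch: keyword-driven testing. Test steps are rows of
    # (keyword, argument); a dispatcher maps each action word onto a function
    # that drives the (here simulated) application under test.

    application_state = {"logged_in": False, "basket": []}

    def login(user):
        application_state["logged_in"] = True
        print(f"logged in as {user}")

    def add_to_basket(item):
        application_state["basket"].append(item)
        print(f"added {item}")

    def check_basket_count(expected):
        assert len(application_state["basket"]) == int(expected), "basket count mismatch"
        print("basket count OK")

    KEYWORDS = {"Login": login, "AddToBasket": add_to_basket,
                "CheckBasketCount": check_basket_count}

    # A test case written only with action words and data, e.g. by a domain expert.
    test_case = [("Login", "alice"), ("AddToBasket", "book"),
                 ("AddToBasket", "pen"), ("CheckBasketCount", "2")]

    if __name__ == "__main__":
        for keyword, argument in test_case:
            KEYWORDS[keyword](argument)   # dispatch each step to its implementation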
Technical expertise in the scripting language is needed for all approaches (either by testers or by specialists in test automation).
Regardless of the scripting technique used, the expected results for each test need to be stored for later comparison.
Static Analysis Tools
Static analysis tools applied to source code can enforce coding standards, but if applied to existing code may generate a large quantity of messages. Warning messages do not stop the code from being translated into an executable program, but ideally should be addressed so that maintenance of the code is easier in the future. A gradual implementation of the analysis tool with initial filters to exclude some messages is an effective approach.
Test Management Tools
Test management tools need to interface with other tools or spreadsheets in order to produce useful information in a format that fits the needs of the organization.


6.3 Introducing a Tool into an Organization (K1)


15 minutes

Terms
No specific terms.

Background
The main considerations in selecting a tool for an organization include:
o Assessment of organizational maturity, strengths and weaknesses and identification of opportunities for an improved test process supported by tools
o Evaluation against clear requirements and objective criteria
o A proof-of-concept, by using a test tool during the evaluation phase to establish whether it performs effectively with the software under test and within the current infrastructure, or to identify changes needed to that infrastructure to effectively use the tool
o Evaluation of the vendor (including training, support and commercial aspects) or of service support suppliers in the case of non-commercial tools
o Identification of internal requirements for coaching and mentoring in the use of the tool
o Evaluation of training needs considering the current test team's test automation skills
o Estimation of a cost-benefit ratio based on a concrete business case

Introducing the selected tool into an organization starts with a pilot project, which has the following objectives:
o Learn more detail about the tool
o Evaluate how the tool fits with existing processes and practices, and determine what would need to change
o Decide on standard ways of using, managing, storing and maintaining the tool and the test assets (e.g., deciding on naming conventions for files and tests, creating libraries and defining the modularity of test suites)
o Assess whether the benefits will be achieved at reasonable cost
Success factors for the deployment of the tool within an organization include:
o Rolling out the tool to the rest of the organization incrementally
o Adapting and improving processes to fit with the use of the tool
o Providing training and coaching/mentoring for new users
o Defining usage guidelines
o Implementing a way to gather usage information from the actual use
o Monitoring tool use and benefits
o Providing support for the test team for a given tool
o Gathering lessons learned from all teams

References
6.2.2 Buwalda, 2001, Fewster, 1999
6.3 Fewster, 1999


References
Standards
ISTQB Glossary of Terms used in Software Testing, Version 2.1
[CMMI] Chrissis, M.B., Konrad, M. and Shrum, S. (2004) CMMI, Guidelines for Process Integration and Product Improvement, Addison Wesley: Reading, MA
See Section 2.1
[IEEE Std 829-1998] IEEE Std 829 (1998) IEEE Standard for Software Test Documentation, See Sections 2.3, 2.4, 4.1, 5.2, 5.3, 5.5, 5.6
[IEEE 1028] IEEE Std 1028 (2008) IEEE Standard for Software Reviews and Audits, See Section 3.2
[IEEE 12207] IEEE 12207/ISO/IEC 12207-2008, Software life cycle processes, See Section 2.1
[ISO 9126] ISO/IEC 9126-1:2001, Software Engineering - Software Product Quality, See Section 2.3

Books
[Beizer, 1990] Beizer, B. (1990) Software Testing Techniques (2nd edition), Van Nostrand Reinhold: Boston
See Sections 1.2, 1.3, 2.3, 4.2, 4.3, 4.4, 4.6
[Black, 2001] Black, R. (2001) Managing the Testing Process (3rd edition), John Wiley & Sons: New York
See Sections 1.1, 1.2, 1.4, 1.5, 2.3, 2.4, 5.1, 5.2, 5.3, 5.5, 5.6
[Buwalda, 2001] Buwalda, H. et al. (2001) Integrated Test Design and Automation, Addison Wesley: Reading, MA
See Section 6.2
[Copeland, 2004] Copeland, L. (2004) A Practitioner's Guide to Software Test Design, Artech House: Norwood, MA
See Sections 2.2, 2.3, 4.2, 4.3, 4.4, 4.6
[Craig, 2002] Craig, Rick D. and Jaskiel, Stefan P. (2002) Systematic Software Testing, Artech House: Norwood, MA
See Sections 1.4.5, 2.1.3, 2.4, 4.1, 5.2.5, 5.3, 5.4
[Fewster, 1999] Fewster, M. and Graham, D. (1999) Software Test Automation, Addison Wesley: Reading, MA
See Sections 6.2, 6.3
[Gilb, 1993] Gilb, Tom and Graham, Dorothy (1993) Software Inspection, Addison Wesley: Reading, MA
See Sections 3.2.2, 3.2.4
[Hetzel, 1988] Hetzel, W. (1988) Complete Guide to Software Testing, QED: Wellesley, MA
See Sections 1.3, 1.4, 1.5, 2.1, 2.2, 2.3, 2.4, 4.1, 5.1, 5.3
[Kaner, 2002] Kaner, C., Bach, J. and Pettichord, B. (2002) Lessons Learned in Software Testing, John Wiley & Sons: New York
See Sections 1.1, 4.5, 5.2

[Myers, 1979] Myers, Glenford J. (1979) The Art of Software Testing, John Wiley & Sons: New York
See Sections 1.2, 1.3, 2.2, 4.3
[van Veenendaal, 2004] van Veenendaal, E. (ed.) (2004) The Testing Practitioner (Chapters 6, 8, 10), UTN Publishers: The Netherlands
See Sections 3.2, 3.3


Appendix A - Syllabus Background


History of this Document
This document was prepared between 2004 and 2011 by a Working Group comprised of members appointed by the International Software Testing Qualifications Board (ISTQB). It was initially reviewed by a selected review panel, and then by representatives drawn from the international software testing community. The rules used in the production of this document are shown in Appendix C.
This document is the syllabus for the International Foundation Certificate in Software Testing, the first level international qualification approved by the ISTQB (www.istqb.org).

Objectives of the Foundation Certificate Qualification


o To gain recognition for testing as an essential and professional software engineering specialization
o To provide a standard framework for the development of testers' careers
o To enable professionally qualified testers to be recognized by employers, customers and peers, and to raise the profile of testers
o To promote consistent and good testing practices within all software engineering disciplines
o To identify testing topics that are relevant and of value to industry
o To enable software suppliers to hire certified testers and thereby gain commercial advantage over their competitors by advertising their tester recruitment policy
o To provide an opportunity for testers and those with an interest in testing to acquire an internationally recognized qualification in the subject

Objectives of the International Qualification (adapted from ISTQB meeting at Sollentuna, November 2001)


o To be able to compare testing skills across different countries
o To enable testers to move across country borders more easily
o To enable multinational/international projects to have a common understanding of testing issues
o To increase the number of qualified testers worldwide
o To have more impact/value as an internationally-based initiative than from any country-specific approach
o To develop a common international body of understanding and knowledge about testing through the syllabus and terminology, and to increase the level of knowledge about testing for all participants
o To promote testing as a profession in more countries
o To enable testers to gain a recognized qualification in their native language
o To enable sharing of knowledge and resources across countries
o To provide international recognition of testers and this qualification due to participation from many countries

Entry Requirements for this Qualification


The entry criterion for taking the ISTQB Foundation Certificate in Software Testing examination is that candidates have an interest in software testing. However, it is strongly recommended that candidates also:
o Have at least a minimal background in either software development or software testing, such as six months experience as a system or user acceptance tester or as a software developer

o Take a course that has been accredited to ISTQB standards (by one of the ISTQB-recognized National Boards).

Background and History of the Foundation Certificate in Software Testing


The independent certification of software testers began in the UK with the British Computer Society's Information Systems Examination Board (ISEB), when a Software Testing Board was set up in 1998 (www.bcs.org.uk/iseb). In 2002, ASQF in Germany began to support a German tester qualification scheme (www.asqf.de). This syllabus is based on the ISEB and ASQF syllabi; it includes reorganized, updated and additional content, and the emphasis is directed at topics that will provide the most practical help to testers.
An existing Foundation Certificate in Software Testing (e.g., from ISEB, ASQF or an ISTQB-recognized National Board) awarded before this International Certificate was released will be deemed to be equivalent to the International Certificate. The Foundation Certificate does not expire and does not need to be renewed. The date it was awarded is shown on the Certificate.
Within each participating country, local aspects are controlled by a national ISTQB-recognized Software Testing Board. Duties of National Boards are specified by the ISTQB, but are implemented within each country. The duties of the country boards are expected to include accreditation of training providers and the setting of exams.


9. Appendix B - Learning Objectives/Cognitive Level of Knowledge


The following learning objectives are defined as applying to this syllabus. Each topic in the syllabus will be examined according to the learning objective for it.

Level 1: Remember (K1)


The candidate will recognize, remember and recall a term or concept.
Keywords: Remember, retrieve, recall, recognize, know
Example
Can recognize the definition of failure as:
o Non-delivery of service to an end user or any other stakeholder, or
o Actual deviation of the component or system from its expected delivery, service or result

Level 2: Understand (K2)


The candidate can select the reasons or explanations for statements related to the topic, and can summarize, compare, classify, categorize and give examples for the testing concept.
Keywords: Summarize, generalize, abstract, classify, compare, map, contrast, exemplify, interpret, translate, represent, infer, conclude, categorize, construct models
Examples
Can explain the reason why tests should be designed as early as possible:
o To find defects when they are cheaper to remove
o To find the most important defects first
Can explain the similarities and differences between integration and system testing:
o Similarities: testing more than one component, and can test non-functional aspects
o Differences: integration testing concentrates on interfaces and interactions, and system testing concentrates on whole-system aspects, such as end-to-end processing

Level 3: Apply (K3)


The candidate can select the correct application of a concept or technique and apply it to a given context.
Keywords: Implement, execute, use, follow a procedure, apply a procedure
Example
o Can identify boundary values for valid and invalid partitions
o Can select test cases from a given state transition diagram in order to cover all transitions

Level 4: Analyze (K4)


The candidate can separate information related to a procedure or technique into its constituent parts for better understanding, and can distinguish between facts and inferences. Typical application is to analyze a document, software or project situation and propose appropriate actions to solve a problem or task.
Keywords: Analyze, organize, find coherence, integrate, outline, parse, structure, attribute, deconstruct, differentiate, discriminate, distinguish, focus, select


Example
o Analyze product risks and propose preventive and corrective mitigation activities
o Describe which portions of an incident report are factual and which are inferred from results

Reference
(For the cognitive levels of learning objectives)
Anderson, L. W. and Krathwohl, D. R. (eds) (2001) A Taxonomy for Learning, Teaching, and Assessing: A Revision of Bloom's Taxonomy of Educational Objectives, Allyn & Bacon


10. Appendix C - Rules Applied to the ISTQB Foundation Syllabus


The rules listed here were used in the development and review of this syllabus. (A TAG is shown after each rule as a shorthand abbreviation of the rule.)

10.1.1 General Rules


SG1. The syllabus should be understandable and absorbable by people with zero to six months (or more) experience in testing. (6-MONTH)
SG2. The syllabus should be practical rather than theoretical. (PRACTICAL)
SG3. The syllabus should be clear and unambiguous to its intended readers. (CLEAR)
SG4. The syllabus should be understandable to people from different countries, and easily translatable into different languages. (TRANSLATABLE)
SG5. The syllabus should use American English. (AMERICAN-ENGLISH)

10.1.2 Current Content
SC1. The syllabus should include recent testing concepts and should reflect current best practices in software testing where this is generally agreed. The syllabus is subject to review every three to five years. (RECENT)
SC2. The syllabus should minimize time-related issues, such as current market conditions, to enable it to have a shelf life of three to five years. (SHELF-LIFE)
10.1.3 Learning Objectives
LO1. Learning objectives should distinguish between items to be recognized/remembered (cognitive level K1), items the candidate should understand conceptually (K2), items the candidate should be able to practice/use (K3), and items the candidate should be able to use to analyze a document, software or project situation in context (K4). (KNOWLEDGE-LEVEL)
LO2. The description of the content should be consistent with the learning objectives. (LO-CONSISTENT)
LO3. To illustrate the learning objectives, sample exam questions for each major section should be issued along with the syllabus. (LO-EXAM)
10.1.4 Overall Structure
ST1. The structure of the syllabus should be clear and allow cross-referencing to and from other parts, from exam questions and from other relevant documents. (CROSS-REF)
ST2. Overlap between sections of the syllabus should be minimized. (OVERLAP)
ST3. Each section of the syllabus should have the same structure. (STRUCTURE-CONSISTENT)
ST4. The syllabus should contain version, date of issue and page number on every page. (VERSION)
ST5. The syllabus should include a guideline for the amount of time to be spent in each section (to reflect the relative importance of each topic). (TIME-SPENT)
References
SR1. Sources and references will be given for concepts in the syllabus to help training providers find out more information about the topic. (REFS)
SR2. Where there are not readily identified and clear sources, more detail should be provided in the syllabus. For example, definitions are in the Glossary, so only the terms are listed in the syllabus. (NON-REF DETAIL)
Sources of Information
Terms used in the syllabus are defined in the ISTQB Glossary of Terms used in Software Testing. A version of the Glossary is available from ISTQB.
A list of recommended books on software testing is also issued in parallel with this syllabus. The main book list is part of the References section.
Appendix D Notice to Training Providers
Each major subject heading in the syllabus is assigned an allocated time in minutes. The purpose of this is both to give guidance on the relative proportion of time to be allocated to each section of an accredited course, and to give an approximate minimum time for the teaching of each section. Training providers may spend more time than is indicated, and candidates may spend more time again in reading and research. A course curriculum does not have to follow the same order as the syllabus.
The syllabus contains references to established standards, which must be used in the preparation of training material. Each standard used must be the version quoted in the current version of this syllabus. Other publications, templates or standards not referenced in this syllabus may also be used and referenced, but will not be examined.
All K3 and K4 Learning Objectives require a practical exercise to be included in the training materials.
Appendix E Release Notes

Release 2010
Changes to Learning Objectives (LO) include some clarification.
Wording changed for the following LOs (content and level of LO remain unchanged): LO-1.2.2, LO-1.3.1, LO-1.4.1, LO-1.5.1, LO-2.1.1, LO-2.1.3, LO-2.4.2, LO-4.1.3, LO-4.2.1, LO-4.2.2, LO-4.3.1, LO-4.3.2, LO-4.3.3, LO-4.4.1, LO-4.4.2, LO-4.4.3, LO-4.6.1, LO-5.1.2, LO-5.2.2, LO-5.3.2, LO-5.3.3, LO-5.5.2, LO-5.6.1, LO-6.1.1, LO-6.2.2, LO-6.3.2.
LO-1.1.5 has been reworded and upgraded to K2, because a comparison of defect-related terms can be expected.
LO-1.2.3 (K2) has been added. The content was already covered in the 2007 syllabus.
LO-3.1.3 (K2) now combines the content of LO-3.1.3 and LO-3.1.4.
LO-3.1.4 has been removed from the 2010 syllabus, as it is partially redundant with LO-3.1.3.
LO-3.2.1 has been reworded for consistency with the 2010 syllabus content.
LO-3.3.2 has been modified, and its level has been changed from K1 to K2, for consistency with LO-3.1.2.
LO-4.4.4 has been modified for clarity, and has been changed from a K3 to a K4. Reason: LO-4.4.4 had already been written in a K4 manner.
LO-6.1.2 (K1) was dropped from the 2010 syllabus and was replaced with a K2 learning objective. There is no LO-6.1.2 in the 2010 syllabus.
Consistent use of "test approach" according to the definition in the glossary. The term "test strategy" will not be required as a term to recall.
Chapter 1.4 now contains the concept of traceability between the test basis and test cases.
Chapter 2.x now contains test objects and test basis.
Re-testing is now the main term in the glossary instead of confirmation testing.
The aspect of data quality and testing has been added at several locations in the syllabus: data quality and risk in Chapters 2.2, 5.5 and 6.1.8.
Chapter 5.2.3 Entry Criteria added as a new subchapter. Reason: consistency with Exit Criteria (entry criteria added to LO-5.2.9).
Consistent use of the terms test strategy and test approach with their definition in the glossary.
Chapter 6.1 shortened because the tool descriptions were too large for a 45-minute lesson.
IEEE Std 829:2008 has been released. This version of the syllabus does not yet consider this new edition. Section 5.2 refers to the document Master Test Plan. The content of the Master Test Plan is covered by the concept that the document Test Plan covers different levels of planning: test plans for the test levels can be created, as well as a test plan on the project level covering multiple test levels. The latter is named Master Test Plan in this syllabus and in the ISTQB Glossary.
Code of Ethics has been moved from the CTAL to CTFL.
Release 2011
Changes made with the maintenance release 2011:
General: "Working Party" replaced by "Working Group".
Replaced "post-conditions" by "postconditions" in order to be consistent with the ISTQB Glossary 2.1.
First occurrence: ISTQB replaced by ISTQB®.
Introduction to this Syllabus: Descriptions of Cognitive Levels of Knowledge removed, because this was redundant with Appendix B.
Section 1.6: Because the intent was not to define a Learning Objective for the Code of Ethics, the cognitive level for the section has been removed.
Sections 2.2.1, 2.2.2, 2.2.3, 2.2.4 and 3.2.3: Fixed formatting issues in lists.
Section 2.2.2: The word "failure" was not correct for "isolate failures to a specific component", and has therefore been replaced with "defect" in that sentence.
Section 2.3: Corrected formatting of the bullet list of test objectives related to test terms in the section Test Types (K2).
Section 2.3.4: Updated the description of debugging to be consistent with Version 2.1 of the ISTQB Glossary.
Section 2.4: Removed the word "extensive" from "includes extensive regression testing", because the extent depends on the change (size, risks, value, etc.), as written in the next sentence.
Section 3.2: The word "including" has been removed to clarify the sentence.
Section 3.2.1: Because the activities of a formal review had been incorrectly formatted, the review process had 12 main activities instead of six, as intended. It has been changed back to six, which makes this section compliant with the Syllabus 2007 and the ISTQB Advanced Level Syllabus 2007.
Section 4: The word "developed" replaced by "defined", because test cases get defined and not developed.
Section 4.2: Text changed to clarify how black-box and white-box testing could be used in conjunction with experience-based techniques.
Section 4.3.5: Text changed from "...between actors, including users and the system..." to "...between actors (users or systems)...".
Section 4.3.5: "alternative path" replaced by "alternative scenario".
Section 4.4.2: In order to clarify the term "branch testing" in the text of Section 4.4, a sentence clarifying the focus of branch testing has been changed.
Sections 4.5 and 5.2.6: The term "experienced-based testing" has been replaced by the correct term "experience-based".
Section 6.1: Heading 6.1.1 "Understanding the Meaning and Purpose of Tool Support for Testing (K2)" replaced by 6.1.1 "Tool Support for Testing (K2)".
Section 7 / Books: The 3rd edition of [Black, 2001] is listed, replacing the 2nd edition.
Appendix D: Chapters requiring exercises have been replaced by the generic requirement that all Learning Objectives K3 and higher require exercises. This is a requirement specified in the ISTQB Accreditation Process (Version 1.26).
Appendix E: The changed learning objectives between Version 2007 and 2010 are now correctly listed.
Index
action word, alpha testing, architecture, archiving, automation
benefits of independence, benefits of using tools, beta testing, black-box technique, black-box test design technique, black-box testing, bottom-up, boundary value analysis, bug
captured script, checklists, choosing test techniques, code coverage, commercial off-the-shelf (COTS), compiler, complexity, component integration testing, component testing, configuration management, configuration management tool, confirmation testing, contract acceptance testing, control flow, coverage, coverage tool, custom-developed software
data flow, data-driven approach, data-driven testing, debugging, debugging tool, decision coverage, decision table testing, decision testing, defect, defect density, defect tracking tool, development, development model, drawbacks of independence, driver, dynamic analysis tool, dynamic testing
emergency change, enhancement, entry criteria, equivalence partitioning, error, error guessing, exhaustive testing, exit criteria, expected result, experience-based technique, experience-based test design technique, exploratory testing
factory acceptance testing, failure, failure rate, fault, fault attack, field testing, follow-up, formal review, functional requirement, functional specification, functional task, functional testing, functionality
impact analysis, incident, incident logging, incident management, incident management tool, incident report, independence, informal review, inspection, inspection leader, integration, integration testing, interoperability testing, introducing a tool into an organization, ISO 9126, iterative-incremental development model
keyword-driven approach, keyword-driven testing, kick-off
learning objective, load testing, load testing tool
maintainability testing, maintenance testing, management tool, maturity, metric, mistake, modelling tool, moderator, monitoring tool
non-functional requirement, non-functional testing
objectives for testing, off-the-shelf, operational acceptance testing, operational test
patch, peer review, performance testing, performance testing tool, pesticide paradox, portability testing, probe effect, procedure, product risk, project risk, prototyping
quality
rapid application development (RAD), Rational Unified Process (RUP), recorder, regression testing, regulation acceptance testing, reliability, reliability testing, requirement, requirements management tool, requirements specification, responsibilities, re-testing (see confirmation testing), review, review tool, reviewer, risk, risk-based approach, risk-based testing, risks of using tools, robustness testing, roles, root cause
scribe, scripting language, security, security testing, security tool, simulators, site acceptance testing, software development, software development model, special considerations for some types of tool, specification-based technique, specification-based testing, stakeholders, state transition testing, statement coverage, statement testing, static analysis, static analysis tool, static technique, static testing, stress testing, stress testing tool, structural testing, structure-based technique, structure-based test design technique, structure-based testing, stub, success factors, system integration testing, system testing
technical review, test analysis, test approach, test basis, test case, test case specification, test closure, test condition, test control, test coverage, test data, test data preparation tool, test design, test design specification, test design technique, test design tool, test development process, test effort, test environment, test estimation, test execution, test execution schedule, test execution tool, test harness, test implementation, test leader, test leader tasks, test level, test log, test management, test management tool, test manager, test monitoring, test objective, test oracle, test organization, test plan, test planning, test planning activities, test procedure, test procedure specification, test progress monitoring, test report, test reporting, test script, test strategy, test suite, test summary report, test tool classification, test type, test-driven development, test-first approach, tester, tester tasks, testing and quality, testing principles, testware, tool support, tool support for management of testing and tests, tool support for performance and monitoring, tool support for static testing, tool support for test execution and logging, tool support for test specification, tool support for testing, top-down, traceability, transaction processing sequences, types of test tool
unit test framework, unit test framework tool, upgrades, usability, usability testing, use case testing, use cases, user acceptance testing
validation, verification, version control, V-model
walkthrough, white-box test design technique, white-box testing