
ISTQB ADVANCED LEVEL

Theory summary document

Instructor: Tạ Thị Thinh

Email: thinhtt0204@gmail.com

Zalo/Phone: 0986775464

Skype: ta.thinh0204

Website: qr-solutions.com.vn

Answer tips .................................................................................................................................................................. 5

CHAPTER 1-Test basics and Test Metrics............................................................................................................ 5

1.1 Development models ......................................................................................................................... 5

1.2 Test Levels ......................................................................................................................................... 5

1.3 Specific Systems ................................................................................................................................ 6

1.4 Test metrics ........................................................................................................................................ 7

1.5 Business Value of Testing ............................................................................................................... 10

CHAPTER 2- Test processes ................................................................................................................................. 11

2.2 Test plan, monitoring and control ................................................................................................. 11

2.3 Test analysis..................................................................................................................................... 14

2.4 Test Design ...................................................................................................................................... 15

2.6 Test Execution ................................................................................................................................. 16

2.8 Test Closure Activities..................................................................................................................... 16

2.9 Test document templates.................................................................................................................. 16

CHAPTER 3. Risk management........................................................................................................................... 18

3.4 How to Do Risk-Based Testing, related-test project risk ................................................................. 19

3.5 Distributed, Outsourced, and Insourced Testing .............................................................................. 20

CHAPTER 4- Test Techniques ............................................................................................................................ 22

4.1 Black-box Test Techniques (Specification based/ Requirement based) .......................................... 25

4.1.1 Equivalence partitioning/ class (EP) ............................................................................................. 25

4.1.2 Boundary value analysis (BVA) ................................................................................................... 25

4.2 White-box test design (Structure based) ....................................................................................................... 28

4.4 Experience-based techniques ........................................................................................... 29

4.4.1 Defect Taxonomies/ Category ...................................................................................................... 29

4.4.2 Error Guessing ............................................................................................................................. 29

4.4.3 Checklist-based Testing ................................................................................................................ 30

4.4.4 Exploratory Testing ( Free test, Monkey test, Random test) ........................................................ 30

CHAPTER 5- Characteristics ................................................................................................................................ 31

5.1 Quality Characteristics for Business Domain Testing For Test Analyst includes:........................... 31

5.2 Quality Characteristics for Technical Domain Testing for Technical Test Analyst......................... 32

CHAPTER 6. Static testing ................................................................................................................................... 32

6.1 Review process ................................................................................................................................ 32

CHAPTER7. Defect Management ........................................................................................................................ 37

7.1 Introduction...................................................................................................................................... 37

7.2 The Defect Lifecycle and the Software Development Lifecycle ..................................................... 37

7.3 Defect Classification ........................................................................................................................ 38

CHAPTER 8. Improving the Testing Process ..................................................................................................... 39

8.2 Test Improvement Process ............................................................................................................... 39

8.4 Improving the Testing Process with TMMi ..................................................................................... 39

8.5 Improving the Testing Process with TPI Next ................................................................................. 40

8.6 Improving the Testing Process with CTP ........................................................................................ 40

8.7 Improving the Testing Process with STEP ...................................................................................... 40

CHAPTER 9. Test Tools and Automation ........................................................................................................... 41

9.1 Introduction...................................................................................................................................... 41

CHAPTER 10. People Skills and Team Composition......................................................................................... 42

10.1 Individual Skills ............................................................................................................................. 42

10.3 Fitting Testing Within an Organization ......................................................................................... 43

10.4 Motivation...................................................................................................................................... 45

Answer tips

Answers that are usually correct: "should be", "may be", "can be".
Answers that are usually wrong: "must be", "have to", "only", "all", "full", "prove".

CHAPTER 1-Test basics and Test Metrics


1.1 Development models
A- V-model

• Analysis & designing tests start early in the project


• Prevent defect
• Activities of each test level occur concurrently with project activities

B- Agile project:

• Use a less formalized process


• Less test documentation
• Require the earliest involvement of the Test Analyst, continuing throughout the project lifecycle
• Good change management and configuration management are critical for testing

1.2 Test Levels


Component testing (unit test):

• Objective: focuses on components that are separately testable
• Special: done in isolation; use of harnesses, stubs, and drivers
• Environment: development environment, with framework, debug tool, ...
• Approach: test-first approach, test-driven development (TDD)

Integration testing:

• Objective: focuses on interactions and interfaces
• Special: done at two different levels of integration testing: component integration testing and system integration testing
• Approach: big-bang integration (merge code; used for component integration testing); incremental integration (used for system integration testing, hardware-software): top-down, bottom-up

System testing:

• Objective: focuses on the behavior and capabilities of a whole system or product
• Special: often works from an incomplete or undocumented specification
• Environment: a specific test environment
• Approach: black-box testing, experience-based testing

Acceptance testing:

• Objective: establishing confidence and validating the system
• Forms: user acceptance testing (UAT), operational acceptance testing (OAT), contractual and regulatory acceptance testing, alpha and beta testing (field testing)
• Environment: corresponds to the production environment
• Approach: acceptance testing may also occur at other times

Characteristics for each level:

• Clear goals and scope


• Traceability (if available)
• Entry and exit criteria,
• Test deliverables,
• Test techniques
• Measurements and metrics
• Test tools,
• other standards

1.3 Specific Systems

Systems of systems

• The integration of commercial off-the-shelf (COTS) software, along with some
amount of custom development, often taking place over a long period.
• Significant technical, lifecycle, and organizational complexity and heterogeneity.
• Different development lifecycles and other processes among disparate teams.
• Serious potential reliability issues due to intersystem coupling, where one inherently
weaker system creates ripple-effect failures across the entire system of systems.
• System integration testing, including interoperability testing

Safety-Critical Systems:

• Defects can cause death.


• Focus on quality as a very important project priority.
• Various regulations and standards
• Traceability helps demonstrate compliance.

1.4 Test metrics


The measurements objectives:

• Analysis, to discover what trends and causes may be discernible via the test results
• Reporting, to communicate test findings to stakeholders
• Control, to change the course of the testing or the project as a whole and to monitor the
results of that course correction

Metric benefit:

• Enables testers to report results in a consistent way


• Enables coherent tracking of progress over time.
• Determine the overall success of a project

Test progress monitoring and control dimensions:

Metrics related to product risks include:

• Percentage of risks completely covered by passing tests


• Percentage of risks for which some or all tests fail
• Percentage of risks not yet completely tested
• Percentage of risks covered, sorted by risk category
• Percentage of risks identified after the initial quality risk analysis

Metrics related to defects include:

• Cumulative number reported (found) versus cumulative number resolved (fixed)


• Mean time between failure or failure arrival rate
• Breakdown of the number or percentage of defects categorized by the following:

o Particular test items or components


o Root causes
o Source of defect (e.g., requirement specification, new feature, regression, etc.)
o Test releases
o Phase introduced, detected, and removed
o Priority/severity
o Reports rejected or duplicated

• Trends in the lag time from defect reporting to resolution


• Number of defect fixes that introduced new defects (sometimes called daughter bugs)
Metrics related to tests include:

• Total number of tests planned, specified (implemented), run, passed, failed, blocked, and
skipped
• Regression and confirmation test status, including trends and totals for regression test and
confirmation test failures
• Hours of testing planned per day versus actual hours achieved
• Availability of the test environment (percentage of planned test hours when the test
environment is usable by the test team)

Metrics related to test coverage include:

• Requirements and design elements coverage


• Risk coverage
• Environment/configuration coverage
• Code coverage

Project metrics- measurements about progress


Analysis:
• Tasks: review and evaluate the test basis; identify test conditions (the list of features to be tested)
• Measurements: number of defects found during test analysis (Q&A); number of test conditions identified

Design:
• Tasks: design and prioritize test cases; identify test data, the test environment, tools, infrastructure
• Measurements: percentage of test conditions covered by test cases; number of defects found during test design

Implementation activities:
• Tasks: test automation (test procedures, test scripts, test suites); build or verify the test environment; prepare test data
• Measurements: percentage of test cases automated; percentage of test environments configured; percentage of test data records loaded

Execution activities:
• Tasks: run tests; analyze anomalies; report defects; log test results; retest and regression test
• Measurements: test case status; percentage of test conditions covered by executed tests; defect status; code coverage; retest and regression test status

Evaluate exit criteria & report:
• Tasks: decide when to stop a test level; decide when to release
• Measurements: 5 dimensions: coverage (thoroughness), defects (functional, non-functional), cost, time, remaining risk (open serious defects, untested items, issues); for the acceptance test team: confidence (by survey)

Closure activities:
• Tasks: create CRs for unresolved defects; create a test summary report; collect and hand over testware; analyze lessons learned; improve the test process
• Measurement: all metrics were collected

Product metrics - measurements about the quality of testing:


a. Measurements about code coverage.
b. Measurements about defects
c. Measurements about confidence (acceptance test)

Basili's Goal Question Metric (GQM) technique is one way to evolve meaningful metrics.

1.5 Business Value of Testing


Quantitative values:
• Finding defects
• Reducing risk by running tests
• Delivering information on project, process, and product status
• Reducing the risk of loss of whole missions or even lives

Qualitative values:
• Improved reputation for quality
• Smoother and more-predictable releases
• Increased confidence
• Protection from legal liability

Cost of quality:

• Costs of prevention: training, early testing, build process
• Costs of detection: writing test cases, reviewing documents, executing tests
• Costs of internal failure: fixing bugs prior to delivery, re-testing, regression testing
• Costs of external failure: supporting customers, fixing bugs after delivery

CHAPTER 2- Test processes


2.2 Test plan, monitoring and control
Test policy- The document that describes the organization's philosophy, objectives, and key metrics
for testing—and possibly quality assurance as well.

Test strategy - describes the organization’s general, project-independent methods for testing

Test approach- The implementation of the test strategy for a specific project, including any deviations from the strategy

Master test plan (or project test plan) - describes the implementation of the test strategy for a
particular project

Level test plan (or phase test plan) - describes the particular activities to be carried out within each
test level

2.2.1 Test plan

Test plan template:

a. Test items (those things that will be delivered for testing)
b. Features to be tested & Features not to be tested
c. Approach (including test strategies, test activities (test process), test levels, test types, test
techniques,…, and the extent of testing)
d. Test criteria (for example, entry criteria, exit criteria, suspension and resumption criteria)
e. Test deliverables (such as reports, charts, and so forth)
f. Test tasks
g. Environmental needs
h. Responsibilities
i. Staffing and training needs
j. Schedule
k. Risks and contingencies

The strategies:
1. Analytical: requirement-based or risk-based
2. Model-based: based on an aspect of the product (model, embedded)
3. Methodical: error guessing or checklists from a standard (ISO 25010)
4. Process-compliant: follow a process's rules (e.g., Agile)
5. Consultative: guided by experts or users
6. Regression-averse: highly automated testing
7. Reactive/dynamic/heuristic: exploratory testing

Entry Criteria and Exit Criteria

Entry criteria (smoke test) - when to start a test level? Check availability (readiness) of:

- Documents
- Prior test levels met exit criteria
- Test environment
- Test tools
- Test data

Exit criteria - when to stop a test level? When to release? How much testing is enough? Check 5 criteria:

- Coverage
- Defects (functional, non-functional)
- Cost/effort
- Time
- (most important) Residual risks: issues, open serious defects, untested items

2.2.2 Test Estimation


Factors Affecting Estimation:

Process factors:
• testing activities
• hand-offs
• control processes
• lifecycle
• schedules and budgets

People factors:
• managers and technical leaders
• project stakeholders
• project team
• stability, turnover
• test environment support
• honesty, commitment, transparency, and openness

Material factors:
• automation and tools
• test environment
• debugging environment
• test oracle
• project documentation
• similar projects

Delaying factors:
• high complexity
• lots of stakeholders
• many sub-teams
• ramp up, train, and orient
• develop new tools, techniques, or technologies
• custom hardware
• tricky timing
• fragile test data
• mission-critical and safety-critical systems
• a large, complex system

Estimating Techniques:

Expert-based: predictions by the owner or an expert (test manager, PM)

• Intuition, guesses, and experience


• Work breakdown structures

• Wideband Delphi
• Test Point Analysis

Metrics-based: based on historical of similar projects, or typical values

• Company standards and norms


• The typical or believed-typical percentages
• Organizational history and metric
• Industry averages, metrics

2.2.3 Monitoring and control

Test monitoring: compare actuals against the plan to assess test progress, quality, cost, and time; create test reports.

Test control: take decisions (corrective actions):

+ Re-prioritizing
+ Changing
+ Re-evaluating
+ Setting an entry criterion for bug fixing

2.7 Contents for Test Reports (IEEE 829):

- Summary of testing
- Analysis
- (Variances) Deviations from plan
- Metrics
- (Evaluation) Residual risks

2.3 Test analysis


Test case: A set of input values, execution preconditions, expected results, and execution post conditions
developed for a particular objective or test condition

Test condition: An item or event of a component or system that could be verified by one or more test
cases (e.g., a function, transaction, feature, quality attribute, or structural element).

Test execution: The process of running a test on the component or system under test, producing actual
result(s).

Test procedure: A document specifying a sequence of actions for the execution of a test. Also known as
test script or manual test script.

Some advantages of specifying test conditions at a detailed level include:

• Providing better and more detailed monitoring and control for a Test Manager
• Contributing to defect prevention
• Helping stakeholders understand the testing
• Enabling test design
• Clearer horizontal traceability within a test level

Some disadvantages of specifying test conditions at a detailed level include:
• Potentially time-consuming
• Maintainability can become difficult in a changing environment
• Level of formality needs to be defined and implemented across the team

Specify test conditions in more detail when:

• Lightweight test design documentation is used
• There are little or no formal requirements
• The project is large-scale, complex, or high risk and requires a level of monitoring and control

Specify test conditions in less detail when:
• Component level testing
• Less complex projects
• Acceptance testing
2.4 Test Design
Concrete test cases (low level): test cases with implementation-level inputs and outputs.

Advantages:
• Provide all the specific information and procedures needed for the test
• Can be run by inexperienced testers
• Produce the same results each time
• Satisfy a required level of documentation detail

Disadvantages:
• Require a significant amount of maintenance effort
• Limit tester ingenuity during execution
• Require that the test basis be well defined
• Their traceability to test conditions may take more effort

Logical test cases (high level): test cases without implementation-level inputs and outputs.

Advantages:
• Provide guidelines for the test
• Provide better risk coverage
• Can be defined early in the requirements process
• Need no detailed and formal documentation
• Suit experienced testers

Disadvantages:
• Less reproducible, making verification difficult
• More experienced testing staff may be needed to execute them
• Lack the details needed for automation

2.4.3. Test Oracles


A test oracle is a source we use to determine the expected results of a test. A test oracle can be:

• The existing system ( legacy system)


• Specification
• A user manual
• An individual's specialized knowledge
• Never use the code itself as an oracle, even for structural testing.

2.6 Test Execution
Test execution begins (precondition) when:

• Readiness of the test environment


• Configuration and release management for the system under test
• The readiness of a defect management and a test management system

Pre-conditions for test execution include: testware, the test environment, configuration management, and defect management.

2.8 Test Closure Activities

Test completion check:
• All planned tests should be either run or skipped
• All known defects should be either fixed, deferred, or accepted
• Regression tests documented?

Test artifacts handover:
• Known defects communicated to the operation and support team?
• Tests and test environments handed over to the maintenance team?

Lessons learned:
• Quality risk analysis?
• Metrics analysis?
• Root cause analysis and actions defined?
• Process improvement?
• Any unanticipated deviations?

Archiving:
• All test work products archived in configuration management?

2.9 Test document templates

IEEE-829 templates: test design template, test case template, test procedure template, test transmittal report template.

CHAPTER 3. Risk management
3.2.2 Risk categories
Risk: an event that could happen in the future and result in negative consequences.

Risk level (or risk priority number) = likelihood (probability) x impact (harm)

Project risk: a risk to the project's capability to deliver its products (scope, cost, time). Related to management and control of the (test) project:
- Skills, training
- People
- Customer, vendor
- Technical issues, tools
- Schedules
- Budget

Actions: PM and Test Manager - mitigate or reduce the risks.

Product risk (quality risk): a risk to the quality of the product. Directly related to the test object:
- Failure-prone software delivered
- The potential that the software could cause harm to an individual or company
- Poor software characteristics (e.g., functionality, reliability, usability, and performance)
- Poor data integrity and quality
- Software that does not perform its intended functions
- Work products: SRS, code, design, test documents

Actions: Testers (TA, TTA) - risk-based testing:
+ Test techniques
+ Levels and types of testing
+ The extent of testing
+ Prioritization of testing
+ Any activities in addition to testing

Project Risks

Typical test-related project risks include:

• Test environment and tool readiness


• Test staff availability and qualification
• Low quality of test deliverables
• Too much change in scope or product definition
• Sloppy, ad hoc testing effort

However, some project risks can and should be mitigated successfully by the Test
Manager and Test Analyst.
Product risk: Failure Mode and Effect Analysis (FMEA) is appropriate when the system is both complex and safety-critical.

3.4 How to Do Risk-Based Testing, related-test project risk

Risk management includes these primary activities:

a. Risk identification, using techniques such as:

- Test Manager: expert interviews, independent assessments, use of risk templates, project retrospectives, risk workshops and brainstorming, checklists, calling on past experience
- Test Analyst (no expert interviews): project retrospectives, checklists. Sample risks: functional, usability, portability
- Technical Test Analyst (no expert interviews): project retrospectives, use of risk templates, independent assessments. Sample risks: performance, security, reliability

b. Risk analysis, assessing the level of risk

Risk level = likelihood (probability) x impact (harm)
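
A minimal Python sketch of how this formula drives prioritization: score each risk item, compute likelihood x impact, and sort so the riskiest areas are analyzed and tested first. The risk items and the 1-5 scales below are invented for illustration.

```python
# Minimal sketch: risk level = likelihood x impact, sorted descending
# so the highest-risk items are tested first.
# The risk items and the 1-5 scales are invented examples.

risks = [
    # (risk item, likelihood 1-5, impact 1-5)
    ("Payment total calculated incorrectly", 3, 5),
    ("Search is slow on a large catalog",    4, 3),
    ("Typo on the help page",                2, 1),
]

def risk_level(likelihood: int, impact: int) -> int:
    """Risk level (risk priority number) = likelihood x impact."""
    return likelihood * impact

for name, likelihood, impact in sorted(
        risks, key=lambda r: risk_level(r[1], r[2]), reverse=True):
    print(f"risk level {risk_level(likelihood, impact):>2}: {name}")
```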

Technical factors we should consider (affecting likelihood):

• Complexity
• Personnel and training issues
• Conflict/communication issues
• Supplier and vendor issues
• Geographical distribution
• New technologies and designs
• Lack of quality in the tools and technology used
• Bad managerial or technical leadership
• Time, resource, and management pressure
• Lack of earlier testing and quality assurance tasks in the lifecycle
• High rates of change
• High defect rates
• Lack of sufficiently documented requirements

Business factors we should consider (affecting impact):

• The frequency of use and importance


• Potential damage to image
• Loss of customers and business
• Potential financial, ecological, or social losses or liability
• Civil or criminal legal sanctions
• Loss of licenses, permits, and the like
• The lack of reasonable workarounds
• The visibility of failure and the associated negative publicity

Technical factors (about likelihood): input from developers, designers, technical leaders; assessed by the TTA.
Business factors (about impact): input from BAs, product owners, customer care, customers; assessed by the TA.

Lightweight techniques: risk level = likelihood x impact

Heavyweight techniques (the formal end of the scale):

• Hazard analysis
• Cost of exposure
• Failure Mode and Effect Analysis (FMEA): risk level = severity (minor, major, serious, critical) x priority (L, M, H) x detection (%). Used when the system is both complex and safety-critical.
• Quality Function Deployment (QFD)
• Fault Tree Analysis (FTA)

c. Risk mitigation (also called "risk control" because it consists of mitigation, contingency, transference, and acceptance actions for various risks)

3.5 Distributed, Outsourced, and Insourced Testing


Distributed testing: performed by employees of the same company, at multiple locations.
Insourced testing: performed by people who are not employees, at a single location or multiple locations, co-located with the project team.
Outsourced testing: performed by people who are not employees, at a single location or multiple locations, not co-located with the project team.

Main issues:

• Divide the test work across the multiple locations explicitly


• Gaps and areas of overlap between teams
• Inefficiency, potential rework during test execution
• Confusion about the meaning of the test results if the results for similar tests disagree

Test Manager Solutions:

• Clear channels of communication and well-defined expectations for missions, tasks,


and deliverables
• Alignment of methodologies
• For distributed testing, the division of the test work across the multiple locations must be
explicit and intelligently decided
• Develop and maintain trust that all of the test team(s) will carry out their roles properly in spite of organizational, cultural, language, and geographical differences

Test Analyst solutions:

• Pay special attention to effective communication and information transfer


• Defects that are accurately recorded can be routed to co-workers for follow-up as
needed

3.6.3 Nonfunctional Testing Issues


In iterative lifecycles, Test design and implementation activities that take longer than the
timescales of a single iteration should be organized as separate work activities outside of the
iterations.
3.6.4 Managing reactive test strategies and experience-based test techniques

• Using session-based test management: you break the test execution effort into test
sessions. A test session is the basic unit of testing work. It is generally limited in time,
typically between 30 and 120 minutes in length. The period assigned to the test session is
called the time box.

Some proponents of session-based test management use an acronym, PROOF:

• Past: What did you do, and what did you see?
• Results: What bugs did you find? What worked?
• Outlook: What still needs to be done?
• Obstacles: What got in the way of good testing?
• Feelings: How do you feel about this test session?

CHAPTER 4- Test Techniques
Categories of test cases or test data:

Valid (the system works): successful, happy, normal, constructive
Invalid (the system doesn't work): unsuccessful, unhappy, abnormal, negative

Test case format: concrete test cases (low level) vs. logical test cases (high level); see 2.4 Test Design.

Other: test case types

GUI test cases:
- Purpose: test each field or item on a screen; confirm that data is in the correct format
- Test level: integration test
- Techniques: equivalence partitioning, boundary value analysis, checklist

Function test cases:
- Purpose: test combinations of inputs, events, pre-conditions
- Test level: integration test
- Techniques: decision table, state transition testing

Flow test cases:
- Purpose: test end to end of the system, in an environment corresponding to the production environment
- Test levels: system test or system integration test, acceptance test
- Technique: use case

Categories of Test Techniques and Their Characteristics

Black-box (specification-based or requirement-based):
- Design tests from documents
- Formal or systematic
- Techniques: 1. equivalence partitioning, 2. boundary value analysis, 3. decision table, 4. cause-effect graph, 5. state transition testing, 6. classification tree, 7. orthogonal array, 8. use case testing, 9. user story

White-box (structure-based):
- Design tests from how the software is constructed
- Measure code coverage
- Formal or systematic
- Techniques: 1. statement coverage, 2. decision coverage, 3. path coverage, 4. LCSAJ, 5. condition coverage, 6. condition decision coverage, 7. condition determination coverage, 8. multiple condition coverage

Experience-based:
- Design tests from knowledge or experience
- Find defects that were missed by black-box and white-box techniques
- Informal
- Techniques: 1. defect taxonomy, 2. error guessing, 3. exploratory testing, 4. checklist

4.1 Black-box Test Techniques (Specification based/ Requirement based)

4.1.1 Equivalence partitioning/ class (EP)


- Divide (partition) the inputs, outputs, etc. into areas

- Test one value from each area, covering both valid and invalid areas

4.1.2 Boundary value analysis (BVA)


- Test at the edges of each equivalence partition

- Two-point boundary values: the minimum and maximum values

- Three-point boundary values: before, on, and over each boundary
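
A minimal sketch of how EP and BVA values fall out of a partition definition; the 1..100 valid range below is an invented example, and the two-point/three-point sets follow the definitions above.

```python
# Minimal sketch: EP and BVA test values for a field that accepts 1..100
# (the range is an invented example).

LOW, HIGH = 1, 100

# Equivalence partitioning: one representative value per partition.
ep_values = {
    "invalid (below)": LOW - 10,
    "valid":           (LOW + HIGH) // 2,
    "invalid (above)": HIGH + 10,
}

# Two-point boundary values: the minimum and maximum, plus the nearest
# neighbors just outside the valid partition.
two_point = [LOW - 1, LOW, HIGH, HIGH + 1]

# Three-point boundary values: before, on, and over each boundary.
three_point = [LOW - 1, LOW, LOW + 1, HIGH - 1, HIGH, HIGH + 1]

print(ep_values)
print(two_point)
print(three_point)
```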

4.1.3 Decision tables

- Combinations of inputs, situations, or events
- Input conditions expressed as TRUE or FALSE
- Full decision table and collapsed decision table

Example: Gmail login. Full decision table = all combinations of inputs = 2 x 2 x 2 = 8:

valid username?        F F F F T T T T
valid password?        F F T T F F T T
space is enough?       F T F T F T F T
login success          F F F F F F T T
restricted access on   - - - - - - T F

Collapsed decision table ("-" means don't care):

valid username?        F T T T
valid password?        - F T T
space is enough?       - - F T
login success          F F T T
restricted access on   - - T F
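
A full decision table can be generated by enumerating all condition combinations. The minimal sketch below reproduces the login example; the output rules ("login succeeds only when username and password are valid; restricted access turns on when space is not enough") are read off the collapsed table above.

```python
# Minimal sketch: enumerate all 2*2*2 = 8 columns of the full decision
# table for the login example and derive the expected outputs.

from itertools import product

for username_ok, password_ok, space_ok in product([False, True], repeat=3):
    login_success = username_ok and password_ok        # rule 1
    restricted    = login_success and not space_ok     # rule 2
    print(f"username={username_ok!s:5} password={password_ok!s:5} "
          f"space={space_ok!s:5} -> login={login_success!s:5} "
          f"restricted={restricted}")
```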

4.1.4 Cause-effect graph:


• Generated from any source which describes the functional logic (i.e., the "rules") of a
program, such as user stories or flow charts.
• Useful to gain a graphical overview of a program's logical structure. Used as the
basis for creating decision tables
4.1.5 State transition testing

Four basic parts:

- State, transition, event, action (the action is optional)

State transition testing is much used for embedded software and automotive systems.

Two test case types:

- A typical scenario (a normal situation, start to end), extended to cover every state / every transition
- Specific sequences of transitions: N-1 switch coverage → sequences of N transitions

State table: columns are states, rows are events.
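
A state table maps (state, event) pairs to next states, and every-transition (0-switch) coverage falls out of iterating over it. The minimal sketch below uses an invented card-reader example; the states and events are not from the syllabus.

```python
# Minimal sketch: a state table as a dict of (state, event) -> next state,
# and the test steps for all-transitions (0-switch) coverage.
# The card-reader states and events are invented for illustration.

transitions = {
    ("idle",          "insert card"):   "card inserted",
    ("card inserted", "enter PIN ok"):  "authenticated",
    ("card inserted", "enter PIN bad"): "idle",
    ("authenticated", "eject card"):    "idle",
}

# 0-switch coverage: exercise every single transition once.
for (state, event), target in transitions.items():
    print(f"in '{state}', trigger '{event}' -> expect '{target}'")

# N-1 switch coverage would instead exercise every valid sequence of
# N consecutive transitions (e.g., 1-switch = pairs of transitions).
```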

4.1.6 Use case testing

- Tests the whole system
- Applied from the system test level upward
- Describes interactions between actors (user, system) and the system
- Useful to uncover defects:

+ integration defects caused by interaction and interference

+ defects in the process flows during real-world use of the system

One use case includes:

+ 1 basic flow (mainstream)

+ n alternate flows (exceptions)

+ some error flows

4.1.7 Classification Tree Method

• Similar to equivalence partitioning: a leaf in a classification tree is similar to a class in equivalence partitioning

• Tests combinations of configuration values

4.1.8 Pairwise Testing / Orthogonal Arrays / 2-wise
- Tests combinations of configuration values
The process is the following:

• Identify the inputs/preconditions (IPs)
• For each IP, find and count the possible values it can have (e.g., (IP1; n=2), (IP2; n=4), and so on)
• Find out how many occurrences you have of each n (e.g., 3 times n=2 and 1 time n=4, written 2^3 4^1)
• Choose a suitable orthogonal array; if the array is too big, fill in the superfluous cells with valid values chosen at random
• Design test cases corresponding to each row in the orthogonal array
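
For small parameter sets, a pairwise subset can also be picked with a simple greedy loop instead of a published orthogonal array. The minimal sketch below (the browser/OS/language values are invented) repeatedly keeps the combination that covers the most not-yet-covered value pairs.

```python
# Minimal sketch of pairwise (2-wise) selection: greedily keep the row
# covering the most uncovered value pairs until every pair is covered.
# The parameters and values are invented for illustration.

from itertools import combinations, product

parameters = {
    "browser":  ["Chrome", "Firefox"],
    "os":       ["Windows", "macOS", "Linux"],
    "language": ["en", "vi"],
}
names = list(parameters)
all_rows = list(product(*parameters.values()))

def pairs(row):
    """All (parameter, value) pairs present in one combination."""
    return set(combinations(zip(names, row), 2))

uncovered = set().union(*(pairs(r) for r in all_rows))
tests = []
while uncovered:
    best = max(all_rows, key=lambda r: len(pairs(r) & uncovered))
    tests.append(best)
    uncovered -= pairs(best)

for row in tests:   # far fewer rows than the 2*3*2 = 12 full combinations
    print(dict(zip(names, row)))
```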
4.1.9 User Story Testing
User stories include functionality and non-functional criteria, and also include acceptance criteria
that must be met for the user story to be considered complete

4.2 White-box test design (Structure based)

4.3.1 Statement coverage (statement testing)


Lines of code are statements, comments (//, /* */), or blank lines.

Percentage of executable statements exercised.

4.3.2 Decision coverage/ Decision testing (Branch coverage)


True and False are the decision outcomes.

Percentage of decision outcomes exercised
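
The difference between statement and decision coverage is easiest to see on a tiny function with one if and no else. The minimal sketch below uses an invented discount function.

```python
# Minimal sketch: statement coverage vs. decision coverage on an
# invented one-decision function.

def grant_discount(total: float) -> float:
    if total > 100:      # one decision with two outcomes (True/False)
        total -= 10      # statement executed only on the True outcome
    return total

# One test with total=150 executes every statement (100% statement
# coverage) but only the True outcome, so decision coverage is 50%.
assert grant_discount(150) == 140

# A second test exercising the False outcome reaches 100% decision
# coverage.
assert grant_discount(50) == 50
```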

4.3.3 Path coverage (path testing)


Percentage of paths exercised.

4.3.4 LCSAJ coverage (Linear Code Sequence And Jump)

Summary of white-box techniques:

Control flow: statement coverage, decision coverage, path coverage, LCSAJ
Condition-based: condition coverage, condition decision coverage, condition determination coverage, multiple condition coverage

4.4 Experience-based techniques

4.4.1 Defect Taxonomies/ Category


Purpose: used to classify failures and to:
- Control the design of test cases
- Provide a precise statement about release quality
- Allocate testing resources
4.4.2 Error Guessing
Design tests from past failures or common developer mistakes

4.4.3 Checklist-based Testing
A list of questions used as reminders and checked off (questions derived from standards or common defects)

4.4.4 Exploratory Testing ( Free test, Monkey test, Random test)


- informal tests ( no process, no documents) are designed, executed, logged, and evaluated
dynamically

- Use session-based test management: write a test charter containing some guidelines for the test within a defined time-box

- Most useful when there are few or inadequate specifications, or when there is time pressure

CHAPTER 5- Characteristics
ISO 25010 (formerly ISO 9126)

5.1 Quality Characteristics for Business Domain Testing For Test Analyst includes:

5.1.1 Functional (Suitability)

• Correctness: data accuracy, data consistency, handling of incorrect values, computational accuracy

• Appropriateness: meets the needs of users, functional suitability for use

• Completeness: coverage of specified tasks, completion of the user's objectives

- Test types: specification-based techniques (EP, BVA, decision table, use case, ...) and review

5.1.2 Interoperability (= Interaction test) (Compatibility)
- Cover all the intended target environments (including variations in the hardware,
software, middleware, operating system, etc.) to ensure the data exchange will work
properly
- Test types: use cases, pairwise testing, classification trees
5.1.3 Usability
Test techniques: Heuristic evaluation (Reactive), review, surveys and questionnaires (such as
SUMI (Software Usability Measurement Inventory) and WAMMI (Website Analysis and
MeasureMent Inventory)

5.2 Quality Characteristics for Technical Domain Testing for Technical Test Analyst
• Reliability, • Security, • Performance, • Maintainability

5.2.1 Security Testing


Test techniques: reviews, static analysis tools (for security vulnerabilities), and checklist-based techniques to prevent past defects from recurring

5.2.2 Reliability Testing


- Maturity - the degree to which a component or system meets needs for reliability under normal operation (defined in the SLA)
- Recoverability

5.2.3 Performance Efficiency Testing


- Load testing
- Stress testing
- Scalability testing

5.2.4 Maintainability Testing

5.2.5 Portability Testing

CHAPTER 6. Static testing


6.1 Review process
Review process

32
Planning

Kick off meeting

Individual preparation

Review meeting

Rework

Follow-up

Types of reviews in the Foundation syllabus:

• Informal review (peer review; no process, not documented)
• Walkthrough (led by the author; follows a process, optional preparation; used for demos or training)
• Technical review (peer review; follows a process, optional review meeting)
• Inspection (the most formal process; finds the most defects; led by a moderator)

In addition to these, Test Managers may also be involved in:
• Management reviews
• Audits

A formal review should include the following essential roles and responsibilities:

• The manager: The manager allocates resources, schedules reviews, and the like.
However, they might not be allowed to attend based on the review type.
• The moderator or leader: This is the chair of the review meeting.
• The author: the person who wrote the item under review; fixes the defects found in it.
• The reviewers: The people who review the item under review, possibly finding defects in
it.
• The scribe or secretary or recorder: The person who writes down the findings.

6.1.1 Management Reviews


Objective:

• Monitor progress, assess status, and make decisions about future actions
• Decisions about the future of the project, such as adapting the level of resources,
implementing corrective actions or changing the scope of the project

6.1.2 Audits
Objectives:
• Demonstrate conformance to a defined set of criteria, most likely an applicable
standard, regulatory constraint, or a contractual obligation.
• Provide independent evaluation of compliance to processes, regulations, standards, etc.
6.1.3 Managing Reviews
Reasons defects escape reviews (risks):
• Problems with the review process (e.g., poor entry/exit criteria)
• Improper composition of the review team
• Inadequate review tools (checklists, etc.)
• Insufficient reviewer training and experience
• Too little preparation and review meeting time
6.1.4 Managing Formal Reviews

Formal reviews have a number of characteristics such as:


• Defined entry and exit criteria
• Checklists to be used by the reviewers
• Deliverables such as reports, evaluation sheets or other review summary sheets
• Metrics for reporting on the review effectiveness, efficiency, and progress

6.1.5 Differences between Static and Dynamic Testing

Static testing:
- Finds defects without executing the code
- Includes reviews (manual) and static analysis (tools, e.g., a compiler, Jenkins)
- Typical defects found by review: requirement defects, design defects, incorrect interface specifications
- Typical defects found by static analysis: coding defects, deviations from standards (coding conventions), security vulnerabilities, maintainability defects

Dynamic testing:
- Finds failures by executing the code (running the package)
- Includes techniques to design test cases, test data, test inputs, and expected results
- Includes retesting, regression testing, automated testing, dynamic analysis
- Typical findings: failures, poor non-functional behavior (performance, security), code coverage, memory leaks

6.1.6 Checklist review


Requirements review:
• Source of the requirement (e.g., person, department)

• Testability of each requirement

• Priority of each requirement

• Acceptance criteria for each requirement

• Availability of a use case calling structure, if applicable

• Unique identification of each requirement/use case/user story

• Versioning of each requirement/use case/user story

• Traceability for each requirement from business/marketing requirements

• Traceability between requirements and/or use cases (if applicable)

• Use of consistent terminology (e.g., uses a glossary)

A simple checklist for use case reviews:

• Is the basic behavior (path) clearly defined?

• Are all alternative behaviors (paths) identified, complete with error handling?

• Are the user interface messages defined?

• Is there only one basic behavior (there should be, otherwise there are multiple use cases)?

• Is each behavior testable?

User Story Reviews:

• Is the story appropriate for the target iteration/sprint?

• Is the story written from the view of the person who is requesting it?

• Are the acceptance criteria defined and testable?

• Is the feature clearly defined and distinct?

• Is the story independent of any others?

• Is the story prioritized?

• Does the story follow the commonly used format: As a < type of user >, I want < some goal >
so that < some reason > [Cohn04]

CHAPTER 7. Defect Management

7.1 Introduction
BS 7925-1 defines an incident as any (significant) unplanned event observed during testing that requires further investigation.
IEEE 1044 uses the term "anomaly" instead of "incident": "Any condition that deviates from the expected based on requirements specifications, design documents, user documents, standards, etc. or from someone's perception or experience."
7.2 The Defect Lifecycle and the Software Development Lifecycle
The lifecycle phases defined for an incident center on the phase in which the defect was introduced and the phase in which it was detected.

If the two phases are the same, then perfect phase containment has been achieved:
• Phase containment means the defect was introduced and found in the same phase and didn't "escape" to a later phase.
• Phase containment is an effective way to reduce the costs of defects.
7.2.2 Defect Report Fields
A defect report includes the fields:

- A title and a short summary


- Date
- Identification of the test item (version) and environment
- A description including logs, database dumps, screenshots, or recordings
- Expected and actual results
- Severity (impact)
- Priority (business importance)
- State
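
As a minimal sketch, these fields map naturally onto a record type; the field names, example values, and the default state below are illustrative only, not an IEEE-mandated schema.

```python
# Minimal sketch: the defect report fields above as a Python dataclass.
# Field names, values, and states are illustrative only.

from dataclasses import dataclass
from datetime import date

@dataclass
class DefectReport:
    title: str               # title and short summary
    reported: date
    test_item: str           # test item and its version
    environment: str
    description: str         # logs, database dumps, screenshots, recordings
    expected_result: str
    actual_result: str
    severity: str            # impact
    priority: str            # business importance
    state: str = "open"      # current lifecycle state

bug = DefectReport(
    title="Login fails for usernames containing accents",
    reported=date(2024, 1, 15),
    test_item="webshop v2.3.1",
    environment="Chrome 120 / Windows 11",
    description="Server log and screenshot attached.",
    expected_result="User is logged in.",
    actual_result="HTTP 500 error page.",
    severity="major",
    priority="high",
)
print(bug.title, "->", bug.state)
```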

Defect reports require:

• The information: clearly identify the scenario in which the problem was detected
• Non-functional defect reports: require more details regarding the environment, other
performance parameters (e.g., size of the load), sequence of steps and expected
results
• Usability failure: state what the user expected the software to do
• In some cases, the tester may use the "reasonable person" test to determine that the usability is unacceptable

7.3 Defect Classification

IEEE 1044 classification information:

Recognition:
• Project activity
• Project phase defect detected
• Suspected cause
• Repeatability
• Symptom

Investigation:
• Project phase defect introduced
• Root cause
• Source
• Type

Other fields:
• Severity
• Priority
• Disposition
Defect terminal status may be:
• Closed: defect is fixed and the fix verified through a confirmation test
• Cancelled: the defect report is invalid
• Irreproducible: the anomaly can no longer be observed
• Deferred: the anomaly relates to a real defect, but that defect will not be fixed during
the project

CHAPTER 8. Improving the Testing Process

8.2 Test Improvement Process


Test improvement models:
• Test Maturity Model integration (TMMi),
• Systematic Test and Evaluation Process (STEP),
• Critical Testing Processes (CTP)
• TPI Next

8.2.2 Types of Process Improvement

Process reference models:
- Provide a maturity measurement
- Compare the organization with the model
- Evaluate the organization within the framework
- Provide a roadmap for improving
- Two types: staged models, continuous models

Content reference models:
- Provide business-driven evaluations
- Guide the organization to address its highest-priority issues
- Help select the appropriate roadmap
- One type: continuous models

Without models:
- Analytical approaches and retrospective meetings

Test process improvement models:

- Process reference models: staged representation (TMMi), continuous representation (TPI Next)
- Content reference models: continuous representation (CTP, STEP)

8.4 Improving the Testing Process with TMMi

The Testing Maturity Model integration (TMMi) is composed of five maturity levels and is
intended to complement CMMI.

8.5 Improving the Testing Process with TPI Next


The TPI Next model defines 20 key areas, each of which covers a specific aspect of the test
process, such as test strategy, metrics, test tools and test environment.
8.6 Improving the Testing Process with CTP
The premise of the Critical Testing Processes (CTP) assessment model is that certain testing processes are critical.

8.7 Improving the Testing Process with STEP

Basic premises of the methodology include:

o A requirements-based testing strategy

o Testing starts at the beginning of the lifecycle

o Tests are used as requirements and usage models

o Testware design leads software design

o Defects are detected earlier or prevented altogether

o Defects are systematically analyzed

o Testers and developers work together

CHAPTER 9. Test Tools and Automation
9.1 Introduction
Support for management:
- Test management tools
- Requirements management tools (traceability matrix)
- Defect management tools
- Configuration management tools
- Continuous integration tools (D)

Support for static testing:
- Tools that support reviews
- Static analysis tools (D)

Support for test execution and logging (test automation):
- Test execution tools
- Coverage tools
- Test harnesses (D)
- Unit test framework tools (D)

Support for performance measurement and dynamic analysis:
- Performance testing tools
- Monitoring tools
- Dynamic analysis tools (D)

Support for test design and implementation:
- Test design tools (test inputs, test cases)
- Model-based testing tools
- Test data preparation tools

Support for specialized testing needs:
- Usability testing
- Accessibility testing
- Security testing
- Portability testing

Test execution tools use two main scripting techniques:

Data-driven scripting: data files store test inputs and expected results in a table or spreadsheet; typically supported by capture/playback tools.

Keyword-driven scripting: data files store test inputs, expected results, and keywords in a table or spreadsheet; the scripts behind the keywords are written manually.

Keyword-driven testing involves the Test Analyst in providing the main inputs: keywords and
data.
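
A minimal sketch of that division of labor: testers maintain a keyword table, and a small driver maps each keyword to implementing code. The keywords and the print-only "actions" below are invented for illustration.

```python
# Minimal sketch of keyword-driven scripting: a table of (keyword, args)
# rows is dispatched to implementing functions. Keywords and actions
# are invented; a real framework would drive a UI or API instead.

def open_page(url):           print(f"opening {url}")
def enter_text(field, text):  print(f"typing '{text}' into {field}")
def click(button):            print(f"clicking '{button}'")
def verify_text(expected):    print(f"verifying page shows '{expected}'")

KEYWORDS = {
    "open":   open_page,
    "type":   enter_text,
    "click":  click,
    "verify": verify_text,
}

# In practice this table lives in a spreadsheet maintained by testers.
test_table = [
    ("open",   "https://example.test/login"),
    ("type",   "username", "alice"),
    ("type",   "password", "s3cret"),
    ("click",  "Login"),
    ("verify", "Welcome, alice"),
]

for keyword, *args in test_table:
    KEYWORDS[keyword](*args)    # dispatch each row to its implementation
```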
9.2.3 Return on Investment (ROI)
Non-recurring costs (initial cost) include the following:

- Defining tool requirements to meet the objectives and goals
- Evaluating and selecting the correct tool and tool vendor
- Purchasing, adapting or developing the tool
- Performing the initial training for the tool
- Integrating the tool with other tools
- Procuring the hardware/software needed to support the tool
Recurring costs include the following:
- Owning the tool
- Licensing and support fees
- Maintenance costs for the tool itself
- Maintenance of artifacts created by the tool
- Ongoing training and mentoring costs
- Porting the tool to different environments
- Adapting the tool to future needs
- Improving the quality and processes to ensure optimal use of the selected tools
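
Putting the two cost categories into a simple ROI calculation; every figure below is invented, and the assumed benefit is hours of manual regression testing saved by the tool.

```python
# Minimal sketch of a tool ROI calculation; all figures are invented.
# ROI = (total benefit - total cost) / total cost over the period.

non_recurring = 20_000        # selection, purchase, training, integration
recurring_per_year = 5_000    # licenses, maintenance, ongoing training

hours_saved_per_year = 600    # assumed manual effort replaced by the tool
hourly_rate = 40
benefit_per_year = hours_saved_per_year * hourly_rate    # 24,000

years = 3
total_cost = non_recurring + recurring_per_year * years  # 35,000
total_benefit = benefit_per_year * years                 # 72,000

roi = (total_benefit - total_cost) / total_cost
print(f"ROI over {years} years: {roi:.0%}")              # about 106%
```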

CHAPTER 10. People Skills and Team Composition


10.1 Individual Skills

1. Tester’s skills:
o Skills from the Foundation syllabus: curiosity, professional pessimism, a critical eye, attention to detail, good communication
o The ability to analyze a specification, perform risk analysis, design test cases, run tests, and record the results
2. Test Manager’s skill
o Project management (making a plan, tracking progress and reporting to stakeholders)
o Technical skills, interpersonal skills
o Work effectively with others, the successful test professional must also be well-
organized, attentive to detail and possess strong written and verbal communication skills.

To build the best team:

- Skills assessment spreadsheet
- Strong and weak areas
- Hired for the long term
- Environment of continuous learning
- Balancing the strengths and weaknesses of individuals
- Performance assessment and goals for individuals

10.3 Fitting Testing Within an Organization

Options for locating testing (increasing independence):

- Author developers test their own code
- Developers or testers within the development team
- A test team or group within the organization
- Testers from the business or organization
- Outsourced: testers external to the organization
- Others

Benefits of test independence include

- Recognize different kinds of failures

- Verify, challenge, or disprove assumptions

Drawbacks of test independence include

- Isolation from the development team


- Developers may lose a sense of responsibility for quality
- Independent testers may be seen as a bottleneck or blamed for delays in release
- Independent testers may lack some important information (e.g., about the test object)
Cross-team testing has 3 types:

• Distributed testing
• In-sourced testing
• Out-sourced testing
Advantages: the inherent independence of the testing, plus a lower budget and overcoming staff shortages.
Disadvantages: People not having taken close part in the development will be less prepared for
the testing task and might take longer to produce test specifications.
The risks related to distributed, in-sourced, and outsourced testing fall within the areas of:

• Process descriptions
• Distribution of work
• Quality of work
• Culture
• Trust
Tasks of a Test Manager and Tester

Planning:
- Test Manager: write and update the test plan; coordinate
- Tester: review and contribute to test plans; create the detailed schedule; share testing perspectives

Analysis and design:
- Test Manager: initiate; support
- Tester: analyze, review, and assess the test basis; identify test conditions

Implementation and execution:
- Test Manager: choose tools; set up configuration management; decide priorities
- Tester: design, set up, and verify test environment(s); design test cases and test procedures; prepare and acquire test data; execute tests and evaluate the results; automate tests (decide, implement); evaluate non-functional characteristics; review tests developed by others

Monitoring and control:
- Test Manager: monitor test progress and results and check the status of exit criteria; create test progress reports; adapt planning; take corrective actions (decisions)
- Tester: use test management tools

10.4 Motivation
• Recognition
• Approval

• Respect
• Responsibility
• Rewards
