ISTQB Advanced Summary Notes
Email: thinhtt0204@gmail.com
Zalo/SDT: 0986775464
Skype: ta.thinh0204
Website: qr-solutions.com.vn
Answering tips
4.2 White-box test design (structure-based)
4.4.4 Exploratory Testing (free test, monkey test, random test)
CHAPTER 5 - Characteristics
5.1 Quality Characteristics for Business Domain Testing for Test Analyst
5.2 Quality Characteristics for Technical Domain Testing for Technical Test Analyst
7.1 Introduction
7.2 The Defect Lifecycle and the Software Development Lifecycle
9.1 Introduction
10.4 Motivation
Answering tips
- Wording that usually signals a correct answer: "should be", "may be"
- Wording that usually signals a wrong answer: "must be", "have to", "only", "prove"
B- Agile project:
(Diagram fragment) Test levels and strategies:
- System integration testing
- Acceptance testing: operational acceptance testing (OAT), contractual and regulatory acceptance testing
- Integration strategies: top-down, bottom-up
- System of systems
• The integration of commercial off-the-shelf (COTS) software, along with some
amount of custom development, often taking place over a long period.
• Significant technical, lifecycle, and organizational complexity and heterogeneity.
• Different development lifecycles and other processes among disparate teams.
• Serious potential reliability issues due to intersystem coupling, where one inherently
weaker system creates ripple-effect failures across the entire system of systems.
• System integration testing, including interoperability testing
Safety-Critical Systems:
• Analysis, to discover what trends and causes may be discernible via the test results
• Reporting, to communicate test findings to stakeholders
• Control, to change the course of the testing or the project as a whole and to monitor the
results of that course correction
Metric benefits:
Metrics related to product risks include:
• Total number of tests planned, specified (implemented), run, passed, failed, blocked, and
skipped
• Regression and confirmation test status, including trends and totals for regression test and
confirmation test failures
• Hours of testing planned per day versus actual hours achieved
• Availability of the test environment (percentage of planned test hours when the test
environment is usable by the test team)
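As a sketch of how such raw counts combine into derived metrics (the function and field names here are my own, not taken from the syllabus):

```python
# Hypothetical example: turning raw test counts into progress metrics.

def progress_metrics(planned, run, passed, failed, blocked, skipped):
    """Return pass rate and execution progress as percentages."""
    executed = passed + failed          # tests with a definite result
    pass_rate = 100.0 * passed / executed if executed else 0.0
    execution_progress = 100.0 * run / planned if planned else 0.0
    return {
        "executed": executed,
        "pass_rate_pct": round(pass_rate, 1),
        "execution_progress_pct": round(execution_progress, 1),
        "not_run": planned - run,
        "blocked": blocked,
        "skipped": skipped,
    }

m = progress_metrics(planned=200, run=150, passed=120, failed=30,
                     blocked=10, skipped=5)
print(m["pass_rate_pct"], m["execution_progress_pct"])  # 80.0 75.0
```

Tracking these numbers per day gives the trend data (regression and confirmation failures, planned versus actual hours) mentioned above.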
- Cost
- Time
- Confidence (by survey)
Basili's technique: the Goal Question Metric (GQM) approach is one way to evolve meaningful metrics.
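A toy illustration of the GQM idea (the goal, questions, and metrics below are invented examples, not syllabus content): every metric must answer a question, and every question must serve the measurement goal.

```python
# Invented GQM example: goal -> questions -> metrics.
gqm = {
    "goal": "Assess readiness to release",
    "questions": {
        "Are open defects under control?": [
            "open critical defect count", "defect arrival rate"],
        "Is planned coverage achieved?": [
            "requirements coverage %", "code coverage %"],
    },
}

# Flatten: every metric traces back to a question, every question to the goal.
metrics = [m for ms in gqm["questions"].values() for m in ms]
print(metrics)
```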
Cost of quality
Cost of quality components:
- Costs of prevention
- Costs of detection
- Costs of internal failure
- Costs of external failure
Test strategy - describes the organization’s general, project-independent methods for testing
Test approach- The implementation of the test strategy for a specific project & deviations
Master test plan (or project test plan) - describes the implementation of the test strategy for a
particular project
Level test plan (or phase test plan) - describes the particular activities to be carried out within each
test level
a. Test items (those things that will be delivered for testing)
b. Features to be tested & Features not to be tested
c. Approach (including test strategies, test activities (test process), test levels, test types, test
techniques,…, and the extent of testing)
d. Test criteria (for example, entry criteria, exit criteria, suspension and resumption criteria)
e. Test deliverables (such as reports, charts, and so forth)
f. Test tasks
g. Environmental needs
h. Responsibilities
i. Staffing and training needs
j. Schedule
k. Risks and contingencies
The strategies:
1. Analytical: requirements-based or risk-based
2. Model-based: based on an aspect of the product (a model, e.g., embedded)
3. Methodical: error guessing or checklists from a standard (e.g., ISO 25010)
4. Process-compliant: follow a process's rules (e.g., Agile)
5. Consultative: guided by experts or users
6. Regression-averse: heavy use of test automation
7. Reactive/dynamic/heuristic: exploratory testing
When to release?
Check availability (readiness):
- Documents
- Prior test levels met exit criteria
- Test environment
- Test tools
- Test data

Check 5 exit criteria:
- Coverage
- Defects (functional, non-functional)
- Cost/effort
- Time
- (most important) Residual risks: open serious defects, untested areas
Estimating Techniques:
• Wideband Delphi
• Test Point Analysis
Test control:
- Compare actual results with the plan to assess test progress, quality, cost, and time
- Create test reports
- Take decisions (corrective actions):
  + Re-prioritizing
  + Changing
  + Re-evaluating
  + Setting an entry criterion for bug fixing

2.7 Contents for Test Reports (IEEE 829)
- Summary of testing
- Analysis
- (Variances) Deviations from plan
- Metrics
- (Evaluation) Residual risks
Test condition: An item or event of a component or system that could be verified by one or more test
cases (e.g., a function, transaction, feature, quality attribute, or structural element).
Test execution: The process of running a test on the component or system under test, producing actual
result(s).
Test procedure: A document specifying a sequence of actions for the execution of a test. Also known as
test script or manual test script.
Advantages of specifying test conditions at a detailed level include:
• Providing better and more detailed monitoring and control for a Test Manager
• Contributing to defect prevention
• Helping stakeholders understand the testing
• Enabling test design
• Clearer horizontal traceability within a test level
Some disadvantages of specifying test conditions at a detailed level include:
• Potentially time-consuming
• Maintainability can become difficult in a changing environment
• Level of formality needs to be defined and implemented across the team
2.6 Test Execution
Test execution begins (precondition) when:
Test completion check:
- All planned tests should be either run or skipped
- All known defects should be either fixed, deferred, or accepted
- Regression tests documented?

Test artifacts handover:
- Known defects communicated to the operation and support team?
- Tests and test environments handed over to the maintenance team?

Lessons learned:
- Quality risk analysis?
- Metrics analysis?
- Root cause analysis and actions defined?
- Process improvement?
- Any unanticipated deviation?

Archiving:
- All test work products archived in configuration management (CM)?
IEEE-829 Templates
CHAPTER 3. Risk management
3.2.2 Risk categories
Risk: an event that could happen in the future and result in negative consequences
Risk level or Risk priority number = likelihood (probability) x impact (harm)
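A minimal illustration of the formula (the risks and the 1-5 scales are invented for this sketch):

```python
# Invented example: risk level = likelihood x impact on 1-5 scales,
# then rank risks so testing starts with the highest-level one.

def risk_level(likelihood, impact):
    return likelihood * impact  # higher level = test earlier and deeper

risks = {                                 # name -> (likelihood, impact)
    "payment fails under load": (4, 5),   # level 20
    "typo in help text": (3, 1),          # level 3
    "login rejects valid users": (2, 5),  # level 10
}
ranked = sorted(risks, key=lambda r: risk_level(*risks[r]), reverse=True)
print(ranked[0])  # payment fails under load
```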
Project risk:
- A risk to the project's capability to deliver its products: scope, cost, time
- Related to management and control of the (test) project:
  - Skills, training
  - People
  - Customer, vendor
  - Technical issues, tools
  - Schedules
  - Budget
  - Work products: SRS, code, design, test documents

Product risk (quality risk):
- A risk to the quality of the product, directly related to the test object:
  - Failure in delivered software
  - Potential to cause harm to an individual or company
  - Poor software characteristics (e.g., functionality, reliability, usability, performance)
  - Poor data integrity and quality
  - Software that does not perform its intended functions
Project Risks
However, some project risks can and should be mitigated successfully by the Test
Manager and Test Analyst.
Product risk:
Failure Mode and Effect Analysis (FMEA) is used when the system is both complex and safety-critical.
3.4 How to Do Risk-Based Testing: risk identification by role

Test Manager (TM):
• Expert interviews
• Independent assessments
• Use of risk templates
• Project retrospectives
• Risk workshops and brainstorming
• Checklists
• Calling on past experience

Test Analyst (TA) (no expert interviews):
• Project retrospectives
• Checklists
Sample risks: functional, usability, portability

Technical Test Analyst (TTA) (no expert interviews):
• Project retrospectives
• Use of risk templates
• Independent assessments
Sample risks: performance, security, reliability
Sample project risks:
• Complexity
• Personnel and training issues
• Conflict/communication problems
• Supplier and vendor issues
• Geographical distribution
• New technologies and designs
• Lack of quality in the tools and technology used
• Bad managerial or technical leadership
• Time, resource, and management pressure
• Lack of earlier testing and quality assurance tasks in the lifecycle
• High rates of change
• High defect rates
• Lack of sufficiently documented requirements
Risk assessment techniques:
• Hazard analysis
• Cost of exposure
• Failure Mode and Effect Analysis (FMEA): risk level = severity (Minor, Major, Serious, Critical) x priority (L, M, H) x detection (%). Used when the system is both complex and safety-critical.
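A hedged sketch of the FMEA-style calculation above; the numeric values assigned to each severity and priority level here are assumptions for illustration, not the syllabus's official scales.

```python
# Assumed scales (illustrative only) for an FMEA-style risk number.
SEVERITY = {"Minor": 1, "Major": 2, "Serious": 3, "Critical": 4}
PRIORITY = {"L": 1, "M": 2, "H": 3}

def fmea_risk(severity, priority, detection_pct):
    """detection_pct: estimated chance (0-100) the failure escapes detection."""
    return SEVERITY[severity] * PRIORITY[priority] * (detection_pct / 100.0)

print(fmea_risk("Critical", "H", 50))  # 6.0
```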
Main issues:
• Using session-based test management: you break the test execution effort into test
sessions. A test session is the basic unit of testing work. It is generally limited in time,
typically between 30 and 120 minutes in length. The period assigned to the test session is
called the time box.
• Past: What did you do, and what did you see?
• Results: What bugs did you find? What worked?
• Outlook: What still needs to be done?
• Obstacles: What got in the way of good testing?
• Feelings: How do you feel about this test session?
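One way to capture a session and its PROOF debrief is a simple data structure; this class design is my own sketch, not part of session-based test management itself.

```python
# Sketch: recording a time-boxed test session and its debrief answers.
from dataclasses import dataclass, field

@dataclass
class TestSession:
    charter: str
    time_box_minutes: int            # typically 30-120 minutes
    past: str = ""                   # what did you do, and what did you see?
    results: str = ""                # bugs found, what worked
    outlook: str = ""                # what still needs to be done
    obstacles: str = ""              # what got in the way of good testing
    feelings: str = ""               # how you feel about this session
    bugs: list = field(default_factory=list)

s = TestSession(charter="Explore CSV import error handling",
                time_box_minutes=90)
s.bugs.append("crash on empty file")
print(len(s.bugs))  # 1
```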
CHAPTER 4 - Test Techniques
Categories of test cases or test data (positive vs. negative):
- Successful vs. unsuccessful
- Happy vs. unhappy
- Normal vs. abnormal
- Constructive vs. negative
Concrete test cases (low level): test cases with implementation-level inputs and outputs.
Logical test cases (high level): test cases without implementation-level inputs and outputs.
Purpose:
- Test each field or item on a screen
- Test combinations of inputs, events, and pre-conditions
- Test the system end to end
Acceptance test
Categories of Test Techniques and Their Characteristics
- Black-box: design tests from documents (specifications)
- White-box: design tests from how the software is constructed; measure code coverage
- Experience-based: design tests from knowledge or experience; find defects missed by black-box and white-box techniques
4.1 Black-box Test Techniques (Specification based/ Requirement based)
- One value for each area, test both valid and invalid areas
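The "one value per partition, plus boundaries" idea can be sketched as follows; the 1..100 input range is an invented example, not from the source.

```python
# Sketch: pick equivalence-partition and boundary values for an
# integer field that accepts lo..hi inclusive (here 1..100, invented).

def ep_bva_values(lo, hi):
    valid = [lo, lo + 1, (lo + hi) // 2, hi - 1, hi]  # boundaries + middle
    invalid = [lo - 1, hi + 1]                        # one value per invalid area
    return valid, invalid

valid, invalid = ep_bva_values(1, 100)
print(valid, invalid)  # [1, 2, 50, 99, 100] [0, 101]
```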
- A typical scenario (a normal situation, from start to end), then cover every state / every transition
- Specific sequences of transitions: N-1 switch coverage → sequences of N consecutive transitions
State table: columns are states, rows are events
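A state table can be derived mechanically from the transition list; the toy lock machine below is invented purely for illustration.

```python
# Invented two-state machine: build a state table with columns = states
# and rows = events; undefined cells ("-") expose the negative tests.
transitions = {               # (state, event) -> next state
    ("Locked", "unlock"): "Unlocked",
    ("Unlocked", "lock"): "Locked",
}
states = ["Locked", "Unlocked"]
events = ["unlock", "lock"]

table = {e: [transitions.get((s, e), "-") for s in states] for e in events}
print(table["unlock"])  # ['Unlocked', '-']
```

The "-" cells (e.g., "unlock" while already Unlocked) are exactly the event/state combinations a full state table forces you to test.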
+ integration defects caused by interaction and interference
+ some errors
4.3.4 LCSAJ coverage (Linear Code Sequence And Jump)
4.4.3 Checklist-based Testing
A list of questions to remind the tester what to check (questions drawn from standards or common defects)
- Use session-based test management: write a test charter containing some guidelines for testing within a defined time box
- Most useful when there are few or inadequate specifications, or under time pressure
CHAPTER 5 - Characteristics
ISO 25010 (formerly ISO 9126)
5.1 Quality Characteristics for Business Domain Testing For Test Analyst includes:
• Data accuracy
• Data consistency
• Incorrect handling
• Computational accuracy
• Appropriateness
• Completeness
- Test types: specification-based techniques: EP, BVA, decision tables, use cases, etc., and reviews
5.1.2 Interoperability (= Interaction test) (Compatibility)
- Cover all the intended target environments (including variations in the hardware,
software, middleware, operating system, etc.) to ensure the data exchange will work
properly
- Test types: use cases, pairwise testing, classification trees
5.1.3 Usability
Test techniques: heuristic evaluation (reactive), reviews, surveys, and questionnaires such as SUMI (Software Usability Measurement Inventory) and WAMMI (Website Analysis and MeasureMent Inventory)
5.2 Quality Characteristics for Technical Domain Testing for Technical Test Analyst
• Reliability
• Security
• Performance
• Maintainability
The phases of a formal review:
- Planning
- Individual preparation
- Review meeting
- Rework
- Follow-up
A formal review should include the following essential roles and responsibilities:
• The manager: The manager allocates resources, schedules reviews, and the like.
However, they might not be allowed to attend based on the review type.
• The moderator or leader: This is the chair of the review meeting.
• The author: The person who wrote the item under review and who fixes the defects found in it.
• The reviewers: The people who review the item under review, possibly finding defects in
it.
• The scribe or secretary or recorder: The person who writes down the findings.
• Monitor progress, assess status, and make decisions about future actions
• Decisions about the future of the project, such as adapting the level of resources,
implementing corrective actions or changing the scope of the project
6.1.2 Audits
Objectives:
• Demonstrate conformance to a defined set of criteria, most likely an applicable
standard, regulatory constraint, or a contractual obligation.
• Provide independent evaluation of compliance to processes, regulations, standards, etc.
6.1.3 Managing Reviews
Reasons defects may escape a review (risks):
• Problems with the review process (e.g., poor entry/exit criteria)
• Improper composition of the review team
• Inadequate review tools (checklists, etc.)
• Insufficient reviewer training and experience
• Too little preparation and review meeting time
6.1.4 Managing Formal Reviews
Static analysis finds: coding defects, deviations from standards (coding conventions), security vulnerabilities, and maintainability defects.
• Are all alternative behaviors (paths) identified, complete with error handling?
• Is there only one basic behavior (there should be, otherwise there are multiple use cases)?
• Is the story appropriate for the target iteration/sprint?
• Is the story written from the view of the person who is requesting it?
• Does the story follow the commonly used format: As a < type of user >, I want < some goal >
so that < some reason > [Cohn04]
CHAPTER 7. Defect Management
7.1 Introduction
BS 7925-1 defines an incident as any (significant) unplanned event observed during testing that requires further investigation.
IEEE 1044: uses the term “anomaly” instead of “incident.” “Any condition that deviates from the
expected based on requirements specifications, design documents, user documents, standards,
etc. or from someone’s perception or experience.”
7.2 The Defect Lifecycle and the Software Development Lifecycle
The life cycle phases defined for an incident are:
If the two phases are the same, then perfect phase containment has been achieved:
• Phase containment means the defect was introduced and found in the same phase
and didn’t “escape” to a later phase.
• Phase containment is an effective way to reduce the costs of defects.
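Phase containment can be measured directly from defect records; the phase names and data below are made up for illustration.

```python
# Invented defect records: phase containment effectiveness is the share
# of defects found in the same phase that introduced them.
defects = [
    {"introduced": "design", "found": "design"},
    {"introduced": "design", "found": "system test"},  # escaped
    {"introduced": "coding", "found": "coding"},
]

contained = sum(d["introduced"] == d["found"] for d in defects)
print(round(100 * contained / len(defects)))  # 67
```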
7.2.2 Defect Report Fields
A defect report includes these fields:
Defect reports require:
• Information that clearly identifies the scenario in which the problem was detected
• Non-functional defect reports: more detail regarding the environment, other performance parameters (e.g., size of the load), the sequence of steps, and expected results
• Usability failures: state what the user expected the software to do
• In some cases, the tester may use the "reasonable person" test to determine that the usability is unacceptable
IEEE 1044 information:
- Defect recognition: project activity, project phase in which the defect was introduced, suspected cause, repeatability, symptom
- Investigation: root cause, source, type
- Severity
- Priority
- Disposition
Defect terminal status may be:
• Closed: defect is fixed and the fix verified through a confirmation test
• Cancelled: the defect report is invalid
• Irreproducible: the anomaly can no longer be observed
• Deferred: the anomaly relates to a real defect, but that defect will not be fixed during
the project
CHAPTER 8. Improving the Testing Process
Test process improvement models
The Testing Maturity Model integration (TMMi) is composed of five maturity levels and is
intended to complement CMMI.
CHAPTER 9. Test Tools and Automation
9.1 Introduction
Support for management:
- Test management tools
- Requirements management tools (traceability matrix)
- Defect management tools
- Configuration management tools
- Continuous integration tools (D)

Support for static testing:
- Tools that support reviews
- Static analysis tools (D)

Support for test execution and logging (test automation):
- Test execution tools
- Coverage tools
- Test harnesses (D)
- Unit test framework tools (D)

Support for performance measurement and dynamic analysis:
- Performance testing tools
- Monitoring tools
- Dynamic analysis tools (D)

Support for test design and implementation:
- Test design tools (test inputs, test cases)
- Model-based testing tools
- Test data preparation tools

Support for specialized testing needs:
- Usability testing
- Accessibility testing
- Security testing
- Portability testing

((D) marks tools more likely to be used by developers.)
One-time costs include the following:
- Defining tool requirements to meet the objectives and goals
- Evaluating and selecting the correct tool and tool vendor
- Purchasing, adapting or developing the tool
- Performing the initial training for the tool
- Integrating the tool with other tools
- Procuring the hardware/software needed to support the tool
Recurring costs include the following:
- Owning the tool
- Licensing and support fees
- Maintenance costs for the tool itself
- Maintenance of artifacts created by the tool
- Ongoing training and mentoring costs
- Porting the tool to different environments
- Adapting the tool to future needs
- Improving the quality and processes to ensure optimal use of the selected tools
1. Tester’s skills:
o Foundation-level skills (curiosity, professional pessimism, a critical eye, attention to detail, good communication)
o The ability to analyze a specification, perform risk analysis, design test cases, run tests, and record the results.
2. Test Manager’s skill
o Project management (making a plan, tracking progress and reporting to stakeholders)
o Technical skills, interpersonal skills
o Work effectively with others; the successful test professional must also be well-organized, attentive to detail, and possess strong written and verbal communication skills.
To build the best team:
- Skills assessment spreadsheet (strong and weak areas)
- Performance goals for individuals
- Hiring for the long term
- An environment of continuous learning
- Balancing the strengths and weaknesses of the individuals

Where a test team or group's testers come from:
- Testers from the business
- Testers external to the organization (outsourced)
- Developers or author-developers testing their own code
- Others
• Distributed testing
• In-sourced testing
• Out-sourced testing
Advantages: the inherent independence of testing, a lower budget, and overcoming staff shortages.
Disadvantages: people who have not taken a close part in the development will be less prepared for the testing task and might take longer to produce test specifications.
The risks related to distributed, in-sourced, and outsourced testing fall within the areas of:
• Process descriptions
• Distribution of work
• Quality of work
• Culture
• Trust
Tasks of a Test Manager and Tester
- Create test progress reports
- Adapt planning
- Take corrective actions (decision)
10.4 Motivation
• Recognition
• Approval
• Respect
• Responsibility
• Rewards