Test Strategy Template
TEST STRATEGY
DOCUMENT VERSION:
DATE:
READERSHIP:
SUMMARY:
Amendment History
V0.1
Approval
Project Manager
CONTENTS
1. Introduction
   1.1 Context
   1.2 Purpose
   1.3 Scope to be Tested
   1.4 Out of Scope (Not Tested)
2. Testing Approach
   2.1 Purpose
   2.2 Test Objectives
   2.3 Traditional Testing Approach
   2.4 Overview of Test Phases
       2.4.1 Component (unit) Testing
       2.4.2 System Functional Testing
       2.4.3 End to End (E2E) Testing
       2.4.4 Technical (non-functional) Testing
       2.4.5 User Acceptance Testing (UAT)
       2.4.6 Operational Acceptance Testing (OAT)
       2.4.7 Regression Testing
   2.5 Proposed Test Approach
       2.5.1 Release Schedule
       2.5.2 Testing Schedule
   2.6 Risk Approach
3. Test Deliverables
   3.1 Testing Deliverables
   3.2 Detailed Test Plans
   3.3 Test Scripts
   3.4 Test Progress Reporting
4. Test Management
   4.1 Resource Management
   4.2 Assumptions and Dependencies
5. Defect Management
   5.1 Defect Management Approach
   5.2 Defect Status and Process
   5.3 Defect Severity
   5.4 Defect Priority
   5.5 Test Progress Reporting Metrics
6. Test Tools
   6.1 Introduction
   6.2 Overview of Testing Tool
   6.3 Test Tool Requirement and Description
APPENDIX A – Example Testing Risk Log
APPENDIX B – Example Detailed Test Phase Description
APPENDIX C – Test Plan Contents
APPENDIX D – Testing Roles and Responsibilities
1. INTRODUCTION
1.1 Context
Project context
1.2 Purpose
This document sets the strategy for all testing within the scope of the project.
The delivery of the solution and the overall business strategy are excluded from the scope of this
document.
All aspects of the non-functional requirements
2. TESTING APPROACH
2.1 Purpose
This subsection describes the testing approach that will be adopted by the project.
2.2 Test Objectives
The test objectives are:
- To demonstrate that the solution meets all requirements
- To identify Defects (faults and failures to meet the actual requirements) with an agreed rectification
plan
- To mitigate risk and demonstrate that the release is fit for purpose and meets user expectations.
[Figure: V-Model diagram mapping development stages (e.g. Component Build) to their corresponding test phases]
It shows that, for each requirement, specification or design document, there is an associated testing phase (i.e. Component design is associated with Component testing).
Where possible, testing should be carried out according to the V-Model approach, using the Requirements Traceability Matrix as a key input to test design and planning.
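As an illustration only, the Requirements Traceability Matrix can be pictured as a simple mapping from requirements to the test cases that cover them. The minimal Python sketch below uses hypothetical requirement and test case identifiers to show how such a mapping makes coverage gaps visible during test design and planning.

# Minimal sketch of a Requirements Traceability Matrix (RTM).
# All identifiers (REQ-001, TC-SYS-01, ...) are hypothetical examples.
rtm = {
    "REQ-001": ["TC-COMP-01", "TC-SYS-01"],   # requirement -> covering test cases
    "REQ-002": ["TC-SYS-02"],
    "REQ-003": [],                            # no coverage yet -> a gap to address
}

uncovered = [req for req, tests in rtm.items() if not tests]
print("Requirements without test coverage:", uncovered)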
Each Test Phase outlined below (for example, User Acceptance Tests and Operational Acceptance Tests) should be described, including the following details:
- Owner
- Objective of the phase
- Test Approach; execution, environments, data, resources & location
- Scope
- Exclusions
- Entry & Exit criteria
- Sign-off procedures
- Testing tools to be used
This is the testing that is carried out within the early stages of the development lifecycle:
Describe here the key components and the Owners (e.g. THE CLIENT team, Vendor, etc.) that are responsible for testing each component.
All the System Functional Tests to be carried out should be documented in the Detailed System Functional
Test Plan to be produced before testing begins.
Non-functional requirements should have been gathered in the Requirements Traceability Matrix.
A range of test volume scenarios will be specified in the Non-Functional Testing Detailed Test Plan.
The scenarios will be comparable with the expected operational volumes. A set of exceptional volume Tests
will also be specified to demonstrate the robustness of the solution in exceptional volume conditions.
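As a purely illustrative sketch of how such volume scenarios might be parameterised in a detailed test plan (the baseline volume and the multipliers below are hypothetical assumptions, not figures from this strategy):

# Illustrative volume-test scenario definitions (all figures are hypothetical).
EXPECTED_DAILY_TRANSACTIONS = 10_000  # assumed operational baseline

scenarios = [
    {"name": "normal load",      "multiplier": 1.0},   # comparable with expected volumes
    {"name": "peak load",        "multiplier": 2.0},
    {"name": "exceptional load", "multiplier": 5.0},   # robustness under exceptional volumes
]

for s in scenarios:
    volume = int(EXPECTED_DAILY_TRANSACTIONS * s["multiplier"])
    print(f'{s["name"]}: {volume} transactions/day')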
A subset of these volume tests will also be executed (i.e. re-run) as part of Operational Acceptance Testing (OAT).
Once these tests are passed, the solution can be promoted to operational status.
If there are any unresolved priority 1 or priority 2 defects, the Application Manager may reserve the right not
to accept the system into operational support.
The following table outlines the delivery schedule of different code releases:
2.5.2 Testing Schedule
It is often impractical to perform a full, exhaustive set of tests for a solution, since this would be very costly in terms of both money and time, and because the vendors should have tested their products prior to release to THE CLIENT.
The objective is to optimise the testing resources and reduce test time without compromising the quality of the final solution.
Therefore, all major test activities will carry risks, and an impact and likelihood analysis should be carried out to validate the choices being made.
3. TEST DELIVERABLES
Within the Detailed Test Plan, a full description of the following should be provided:
- the test environment
- all required test scripts
- test data
- interfaces (Integration) required.
Once the Detailed Test Plans have been approved, the test scripts can be documented.
This ensures that all requirements have been covered by the test scripts and enables the Testing team to track issues related to specific test scripts. Each test script will typically include the following details (a minimal record sketch follows this list):
- Test Name – A unique reference number followed by the test name identifying the test
- Requirement cross reference - A reference to the requirement(s) and source documentation
- Revision History - with original, review and update details related to specific changes to the
test
- Prerequisites – reference to any scripts that need to be run before individual scripts can be
executed.
- Test Description - A summary description of the purpose of the test
- Test Data – The test data to be used
- Test Steps – The instructions for running the test, e.g. the actions that need to be performed
in order to exercise the piece of functionality being tested
- Expected Results – A definition of the test results that are expected to be observed if the test is successful. Enough information should be supplied to enable the tester to determine unambiguously whether or not the test has been passed
- Actual Results – The Actual results that were observed and a reference to any test evidence.
As a rule the tester will store evidence of the test results where possible. This will include a
record of the build being tested, whether the test passed or failed and a list of any test
observations raised
- Pass / Fail - A record of whether the test was passed or failed.
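A minimal sketch of such a test script record, assuming the fields listed above; the example values used to populate it are hypothetical.

# Minimal sketch of a test script record, mirroring the fields listed above.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class TestScript:
    test_name: str                  # unique reference number followed by the test name
    requirement_refs: List[str]     # cross-references to requirement(s) and source documents
    revision_history: List[str]     # original, review and update details
    prerequisites: List[str]        # scripts that must run before this one
    description: str                # summary of the purpose of the test
    test_data: str                  # the test data to be used
    test_steps: List[str]           # instructions for running the test
    expected_results: str           # what should be observed if the test is successful
    actual_results: str = ""        # observed results and reference to test evidence
    passed: Optional[bool] = None   # pass / fail record (None until executed)

# Hypothetical example record:
ts = TestScript(
    test_name="TC-SYS-01 Login with valid credentials",
    requirement_refs=["REQ-001"],
    revision_history=["v0.1 initial draft"],
    prerequisites=[],
    description="Verify that a valid user can log in",
    test_data="User account 'tester01' (hypothetical)",
    test_steps=["Open the application", "Enter valid credentials", "Submit"],
    expected_results="User reaches the home page",
)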
3.4 Test Progress Reporting
Progress reports will be produced at regular intervals (typically weekly). The report will show (a minimal counting sketch is given at the end of this subsection):
- Test Phase
- System Under Test
- Test environment
- No of total tests
- No of tests completed
- No of tests passed
- No of tests failed
Where appropriate, a detailed report highlighting all outstanding risks and potential business and/or
operational impacts will also be produced.
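A minimal counting sketch for the figures listed above, assuming each test in the execution log carries a simple status; the test names, statuses and environment names are hypothetical.

# Minimal sketch: deriving the weekly progress counts listed above.
results = [
    {"test": "TC-SYS-01", "status": "passed"},
    {"test": "TC-SYS-02", "status": "failed"},
    {"test": "TC-SYS-03", "status": "not run"},
]

report = {
    "Test Phase": "System Functional Testing",
    "System Under Test": "Example system",
    "Test environment": "SYS-TEST-01",
    "No of total tests": len(results),
    "No of tests completed": sum(r["status"] in ("passed", "failed") for r in results),
    "No of tests passed": sum(r["status"] == "passed" for r in results),
    "No of tests failed": sum(r["status"] == "failed" for r in results),
}

for key, value in report.items():
    print(f"{key}: {value}")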
4. TEST MANAGEMENT
Depending on the scale and nature of the system (e.g. one provided by an external vendor), it may be possible to combine roles, so that a combination of a Test Manager and Test Analysts can fulfil all the testing responsibilities.
4.2 Assumptions and Dependencies
Assumptions
List here any assumptions e.g.
- The vendors are responsible for fully testing their software before it is released to THE
CLIENT.
- Vendors are available to review any test results and defects that the team feel may be associated with the product software.
- It is expected that all users are on IE 7+.
- The project Business Analysts are available to input into the creation of the test cases.
- The test documentation will be created by the test analysts.
Dependencies
List here any dependencies.
5. DEFECT MANAGEMENT
Status                 Description
Identified             A new incident is identified.
Assigned               An owner has been agreed and a fix is being created.
Fixed                  Development (i.e. Vendor) has a fix for the defect.
Released For Retest    The fix is released (i.e. code drop by the vendor) for the test team to re-test.
Closed                 Fix has been successfully tested or it is agreed no action is required.
Wherever possible, the description of the Defect, or at least its impact, will be written in non-technical terms.
Defects will be logged in the following situations:
- When the actual result does not match the expected result and the expected result is correct
- When an expected result does not match an actual result but the actual result is found to be
correct. In this case the action will be to correct the expected result and the Defect log will
provide an audit trail
- When there is an unexpected outcome to a test that is not covered by the expected result.
This may result in the creation of a new entry in the requirement catalogue
- When a Defect is raised to which no immediate acceptable response is available.
Once the project enters the System Test execution phase, the Testing Team will typically meet each morning during test execution to review all Defects raised since the previous meeting and to determine any conflicts or impacts across the various phases of test.
After each review session, the status of the defect will be updated, and any re-testing of the defect fix and regression testing will be carried out under the guidance of the Test Manager.
The following flow chart provides an overview of the Defect management process.
[Flow chart: Raise Defect → Assign Defect → Defect Fixed → Retest (pass) → Defect Closed]
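A minimal sketch of this lifecycle as an allowed-transition map; the reopen path from a failed retest back to Assigned is an assumption, since the flow chart above only shows the passing route.

# Minimal sketch of the defect lifecycle from the status table and flow chart above.
# The "Released For Retest" -> "Assigned" transition (failed retest) is an assumption.
TRANSITIONS = {
    "Identified":          ["Assigned"],
    "Assigned":            ["Fixed"],
    "Fixed":               ["Released For Retest"],
    "Released For Retest": ["Closed", "Assigned"],  # retest passes -> Closed; fails -> reassigned
    "Closed":              [],
}

def move(status: str, new_status: str) -> str:
    """Return the new status if the transition is allowed, otherwise raise."""
    if new_status not in TRANSITIONS[status]:
        raise ValueError(f"Illegal defect transition: {status} -> {new_status}")
    return new_status

status = "Identified"
for nxt in ("Assigned", "Fixed", "Released For Retest", "Closed"):
    status = move(status, nxt)
print("Final status:", status)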
5.3 Defect Severity
If the defect cannot be resolved in the specified period, the level of risk at Go-Live will be assessed.
4 - Low: the incident has little or no impact on testing progress. Target resolution: as agreed.
The Key Performance Indicators that will be used to measure the success of testing are (a minimal aggregation sketch follows this list):
- Test Execution:
o Number of Planned Test Cases (total)
o Number of Planned Test Cases (Cum)
o Number of Passed Test Cases (Cum)
o Number of Failed Test Cases (Cum)
o Number of Test Cases in Progress (Cum)
- Defects
o Total defects raised (and by priority)
o Total defects fixed (and by priority)
o Total defects in progress (and by priority)
o Total defects closed (and by priority)
o Total defects by functional area
o Defect severity by Root cause
o Defect severity by application
o Defect severity by Defect Type
o Defect state by application
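As a purely illustrative sketch of how these defect metrics might be aggregated from a defect log export (the defect records and field names below are hypothetical):

# Minimal sketch: aggregating defect metrics by priority and functional area.
from collections import Counter

defects = [
    {"id": "D-001", "priority": 1, "status": "fixed",       "area": "Login"},
    {"id": "D-002", "priority": 2, "status": "in progress", "area": "Reports"},
    {"id": "D-003", "priority": 2, "status": "closed",      "area": "Reports"},
]

total_by_priority = Counter(d["priority"] for d in defects)
fixed_by_priority = Counter(d["priority"] for d in defects if d["status"] == "fixed")
by_functional_area = Counter(d["area"] for d in defects)

print("Total defects raised by priority:", dict(total_by_priority))
print("Total defects fixed by priority:", dict(fixed_by_priority))
print("Total defects by functional area:", dict(by_functional_area))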
6. TEST TOOLS
6.1 Introduction
This section describes the types of tools that are required to manage the testing activities contained
within this document.
Describe here which tool is going to be used, and how it allows the user to organise and manage the
testing activities.
APPENDIX A – EXAMPLE TESTING RISK LOG
APPENDIX B – EXAMPLE DETAILED TEST PHASE DESCRIPTION
Security Testing
- Error handling
- Penetration testing
Entry Criteria    The following entry criteria must be met before the commencement of System Testing:
- 100% of agreed functionality has been delivered (subject to the functionality contained in the release being tested)
- The System Functional Test Plan has been reviewed and signed off by the agreed reviewers and approvers; this will primarily be the Project Team members
- process defined and implemented
Entry criteria will be assessed prior to test execution. Variances will be noted and documented by the Test Manager and the System Test Team Lead in a report, along with a risk assessment and a recommendation to proceed. Where entry criteria have not been met, the decision to proceed with test execution is at the discretion of the:
- IT&S Project Manager
Sign-off    Completion is achieved when the exit criteria have been met. E-mail sign-off of the System Test Test Summary Report (TSR) is performed by the Approvers outlined in the Testing Strategy and the System Test Plan.
Tools    The Test Manager is responsible for monitoring progress of the System Testing and ensuring all tests and results are documented. Test cases will be entered into Test Director or Excel and then executed. Actual results will be compared to expected results. Test results (passed or failed) are logged in Test Director or Excel, along with any defects found. Status reports will be prepared from Test Director or Excel.
APPENDIX C – TEST PLAN CONTENTS
NOTES:
Within the Test Plan, a full description will be provided of the test environment, all required test scripts and
harnesses, and all interfaces required with third party systems. Where the environment is not fully
representative of the live environment, the reasons for the limitation will be provided and a risk assessment
undertaken to determine the impact of this on the validity of the results obtained during the tests.
The Test Plan will also specify the input data sources and any expected outputs, including volumes and types of data. An explanation will be provided for each data type and flow, relating it to the predicted or measured live environment.
APPENDIX D – SAMPLE TESTING ROLES AND RESPONSIBILITIES
The following table outlines the test team roles and their responsibilities:
Testing Role    Responsibility/Accountable

- Create, maintain and ensure sign-off of the Test Summary Reports.

Test Analysts (TA)
- Provide input into the Test Plans.
- Undertake testing activities, ensuring these meet agreed specifications.
- Create, maintain and execute the test cases in Test Director or Excel.
- Devise, create and maintain test data.
- Analyse and store test results in Test Director or Excel.
- Raise, maintain and retest defects in Test Director or Excel.
- Provide input into the Test Summary Reports.

Technical Test Analysts (TTA)
- Provide input into the Technical Test Plans.
- Undertake technical testing activities, ensuring these meet agreed specifications.
- Create, maintain and execute the test cases in Test Director or Excel.
- Devise, create and maintain test data.
- Analyse and store test results in Test Director or Excel.
- Raise, maintain and retest defects in Test Director or Excel.
- Provide input into the Test Summary Reports.

Business Analysts (BA)
- Provide business input into the Test Plans, test cases and test data.
- Execute test cases stored in Test Director or Excel.
- Analyse and store test results in Test Director or Excel.
- Raise, maintain and retest defects in Test Director or Excel.
- Provide input into the Test Summary Reports, i.e. business workarounds and impact assessment.

Technical Lead (Team)
- Provide solution details to Test Analysts.
- Review detailed test plans produced by Test Analysts.
- Input into and review test cases produced by Test Analysts.
- Review and categorise/prioritise test results.
- Validate, raise and progress defects to resolution.

Vendors
- Input into the test cases.
- Review and sign off the DTP and test cases/scripts.
- Review of test results.
- Ownership of defects associated with the vendor solution.
- Responsibility for issue resolution if associated with the vendor product/solution.
- Assist in testing and defect reproduction for de-bug information purposes.

Global Operations (GO)
- Deliver the OAT Detailed Test Plan.
- Delivery and reporting of OAT testing results and progress.
- Management of the OAT environments.
- Execute OAT tests.
- Validate, raise and progress defects to resolution.
- Sign off OAT.

Business User from THE CLIENT Legal
- Input into the development of the User Acceptance test scripts.
- Review and sign off User Acceptance Detailed Test Plans.
- Review and sign off User Acceptance test requirements and scripts.
- Agree acceptance criteria based on the successful completion of test execution.
- Perform User Acceptance Testing.
- Sign off UAT.