
Road to success:

1. Summarize, read, learn and memorize the Certified Tester Foundation Level syllabus of the ISTQB
certification.
2. Look for common interview questions and prepare structured responses to them
(including current work experience).
3. Extract some basic concepts on SQL and the other software tools listed in the Apymsa work
request.
4. Review Agile.
5. Do some exercises on deriving tests for a particular piece of software.
ISTQB®
What is testing?
Software testing is a set of activities to discover defects and evaluate the quality of software
artifacts. These artifacts, when being tested, are known as test objects.
Testing does not consist only of executing tests; it also includes other activities and must be aligned
with the software development lifecycle.
Testing does not focus entirely on verifying the test object. It involves verification, like
checking whether the system meets specified requirements, but it also involves validation, which
means checking whether the system meets users' and other stakeholders' needs in its operational
environment.
Testing may be dynamic or static. Dynamic testing involves the execution of software; static testing
does not. Static testing includes reviews and static analysis. Dynamic testing uses different types of
test techniques and test approaches to derive test cases.
Test Objectives:

 Evaluating work products such as requirements, user stories, designs and code.
 Triggering failures and finding defects.
 Ensuring required coverage of a test object.
 Reducing the level of risk of inadequate software quality.
 Verifying whether specified requirements have been fulfilled.
 Verifying that a test object complies with contractual, legal, and regulatory requirements.
 Providing information to stakeholders to allow them to make informed decisions.
 Building confidence in the quality of the test object.
 Validating whether the test object is complete and works as expected by the stakeholders.
Testing and Debugging are separate activities. Testing can trigger failures that are caused by defects
in the software (dynamic testing) or can directly find defects in the test object (static testing). When
dynamic testing triggers a failure, debugging is concerned with finding causes of this failure
(defects), analyzing these causes, and eliminating them. The typical debugging process in these cases
involves:

 Reproduction of a failure
 Diagnosis (finding the root cause)
 Fixing the cause
Subsequent confirmation testing checks whether the fixes resolved the problem. Preferably,
confirmation testing is done by the same person who performed the initial test. Subsequent
regression testing can also be performed, to check whether the fixes are causing failures in other
parts of the test object.
When static testing identifies a defect, debugging is concerned with removing it. There is no need
for reproduction or diagnosis, since static testing directly finds defects and cannot cause failures.
Why is testing necessary?
Testing provides a cost-effective means of detecting defects. These defects can then be removed (by
debugging - a non-testing activity), so testing indirectly contributes to higher quality test objects.
Testing provides a means of directly evaluating the quality of a test object at various stages in the
SDLC. These measures are used as part of a larger project management activity, contributing to
decisions to move to the next stage of the SDLC, such as the release decision. Testers ensure that their
understanding of users' needs is considered throughout the development lifecycle. The alternative
is to involve a representative set of users as part of the development project, which is not usually
possible due to the high costs and lack of availability of suitable users.
While people often use the terms "testing" and "quality assurance (QA)" interchangeably, testing
and QA are not the same. Testing is a form of quality control (QC). QC is a product-oriented,
corrective approach that focuses on those activities supporting the achievement of appropriate levels
of quality. Testing is a major form of quality control, while others include formal methods (model
checking and proof of correctness), simulation and prototyping.
QA is a process-oriented, preventive approach that focuses on the implementation and improvement
of processes. It works on the basis that if a good process is followed correctly, then it will generate a
good product. QA applies to both the development and testing processes, and is the responsibility of
everyone on a project. Test results are used by QA and QC. In QC they are used to fix defects, while
in QA they provide feedback on how well the development and testing processes are performing.
Errors, Defects, Failures and Root Causes
Human beings make errors (mistakes), which produce defects (faults, bugs), which in turn may
result in failures. Humans make errors for various reasons, such as time pressure, complexity of
work products, processes, infrastructure or interactions, or simply because they are tired or lack
adequate training.
Defects can be found in documentation, such as a requirements specification or a test script, in source
code, or in a supporting artifact such as a build file. Defects in artifacts produced earlier in the
SDLC, if undetected, often lead to defective artifacts later in the lifecycle. If a defect in code is
executed, the system may fail to do what it should do, or do something it shouldn't, causing a
failure. Some defects will always result in a failure if executed, while others will only result in a
failure in specific circumstances, and some may never result in a failure.
A root cause is a fundamental reason for the occurrence of a problem (a situation that leads to an
error). Root causes are identified through root cause analysis, which is typically performed when a
failure occurs or a defect is identified. It is believed that further similar failures or defects can be
prevented or their frequency reduced by addressing the root cause, such as by removing it.
The seven testing principles:
1. Testing shows the presence, not the absence of defects: Testing can show that defects are
present in the test object, but cannot prove that there are no defects. Testing reduces the
probability of defects remaining undiscovered in the test object, but even if no defects are
found, testing cannot prove test object correctness.
2. Exhaustive testing is impossible: Testing everything is not feasible except in trivial cases.
Rather than attempting to test exhaustively, test techniques, test case prioritization, and risk-
based testing should be used to focus testing efforts.
3. Early testing saves time and money: Defects that are removed early in the process will
not cause subsequent defects in derived work products. The cost of quality will be reduced
since fewer failures will occur later in the SDLC. To find defects early, both static testing
and dynamic testing should be started as early as possible.
4. Defects cluster together: A small number of system components usually contain most of
the defects discovered or are responsible for most of the operational failures. This
phenomenon is an illustration of the Pareto principle. Predicted defect clusters, and actual
defect clusters observed during testing or in operation, are an important input for risk-based
testing.
5. Tests wear out: If the same tests are repeated many times, they become increasingly
ineffective in detecting new defects. To overcome this effect, existing tests and test data
may need to be modified, and new tests may need to be written. However, in some cases,
repeating the same tests can have a beneficial outcome, like in automated regression testing.
6. Testing is context dependent: There is no single universally applicable approach to
testing. Testing is done differently in different contexts.
7. Absence-of-defects fallacy: It is a fallacy to expect that software verification will ensure
the success of a system. Thoroughly testing all the specified requirements and fixing all the
defects found could still produce a system that does not fulfill the users' needs and
expectations, that does not help in achieving the customer's business goals, and that is
inferior compared to other competing systems. In addition to verification, validation should
also be carried out.
Test activities, Testware and Test Roles.
Testing is context dependent, but there are some common sets of test activities at a high level
without which testing is less likely to achieve test objectives. These sets of test activities form a test
process. The test process can be tailored to a given situation based on various factors. Which test
activities are included in this test process, how they are implemented, and when they occur is
normally decided as part of the test planning for the specific situation.
A test process usually consists of the main groups of activities described below. Although many of
these activities may appear to follow a logical sequence, they are often implemented iteratively or in
parallel. These testing activities usually need to be tailored to the system and the project.

 Test planning: Defining the test objectives and then selecting an approach that best
achieves the objectives within the constraints imposed by the overall context.
 Test monitoring and control: Test monitoring involves the ongoing checking of all test
activities and the comparison of actual progress against the plan. Test control involves
taking the actions necessary to meet the objectives of testing.
 Test analysis: Includes analyzing the test basis to identify testable features and to define
and prioritize associated test conditions, together with the related risks and risk levels. The
test basis and the test objects are also evaluated to identify defects they may contain and to
assess their testability. Test analysis is often supported using test techniques. Test analysis
answers the question "what to test?" in terms of measurable coverage criteria.
 Test design: Includes elaborating the test conditions into test cases and other testware (like
test charters). This activity often involves the identification of coverage items, which serve
as a guide to specify test case inputs. Test techniques can be used to support this activity.
Test design also includes defining the test data requirements, designing the test environment
and identifying any other required infrastructure and tools. Test design answers the question
"how to test?".
 Test implementation: It includes creating or acquiring the testware necessary for test
execution (like test data). Test cases can be organized into test procedures and are often
assembled into test suites. Manual and automated test scripts are created. Test procedures
are prioritized and arranged within a test execution schedule for efficient test execution.
The test environment is built and verified to be set up correctly.
 Test execution: Includes running the tests in accordance with the test execution schedule
(test runs). Test execution may be manual or automated. Test execution can take many
forms, including continuous testing or pair testing sessions. Actual test results are compared
with the expected results. The test results are logged. Anomalies are analyzed to identify
their likely causes. This analysis allows us to report the anomalies based on the failures
observed.
 Test completion: These activities usually occur at project milestones (release, end of
iteration, test level completion). For any unresolved defects, change requests or product
backlog items are created. Any testware that may be useful in the future is identified and
archived or handed over to the appropriate teams. The test environment is shut down to an
agreed state. The test activities are analyzed to identify lessons learned and improvements
for future iterations, releases, or projects. A test completion report is created and
communicated to the stakeholders.

Testing is not performed in isolation, test activities are an integral part of the development processes
carried out within an organization. Testing is also funded by stakeholders and its final goal is to help
fulfill the stakeholders’ business needs. Therefore, the way the testing is carried out will depend on
a number of contextual factors including:

 Stakeholders: Needs, expectations, requirements, willingness to cooperate, etc.
 Team members: Skills, knowledge, level of experience, availability, training needs, etc.
 Business domain: Criticality of the test object, identified risks, market needs, specific legal
regulations, etc.
 Technical factors: Type of software, product architecture, technology used, etc.
 Project constraints: Scope, time, budget, resources, etc.
 Organizational factors: Organizational structure, existing policies, practices used, etc.
 Software development lifecycle: Engineering practices, development methods, etc.
 Tools: Availability, usability, compliance, etc.
Testware:
Is created as output work products from the test activities. There is significant variation in how
different organizations produce, shape, name, organize and manage their work products.
 Test planning work products: Test plan, test schedule, risk register, and entry and exit
criteria. Risk register is a list of risks together with risk likelihood, risk impact and
information about risk mitigation. Test schedule, risk register and entry and exit criteria are
often a part of the test plan.
 Test monitoring and control work products: Test progress reports, documentation of
control directives and risk information.
 Test analysis work products: test conditions (acceptance criteria), defect reports regarding
defects in the test basis (if not fixed directly).
 Test design work products: test cases, test charters, coverage items, test data requirements
and test environment requirements.
 Test implementation work products: Test procedures, automated test scripts, test suites,
test data, test execution schedule and test environment elements like stubs, drivers,
simulators and service virtualizations.
 Test execution work products: Test logs, defect reports.
 Test completion work products: Test completion report, action items for improvement of
subsequent projects or iterations, documented lessons learned, and change requests (as
product backlog items).
Traceability between the Test Basis and Testware.
For monitoring and control it is important to maintain traceability throughout the test process
between the test basis elements, the testware associated with these elements (test conditions, risks, test
cases), test results, and detected defects.
Accurate traceability supports coverage evaluation. Measurable coverage criteria should be defined
in the test basis. The coverage criteria can function as key performance indicators to drive the
activities that show to what extent the test objectives have been achieved.

 Traceability of test cases to requirements can verify that the requirements are covered by
test cases.
 Traceability of test results to risks can be used to evaluate the level of residual risk in a
test object.
Good traceability makes it possible to determine the impact of changes, facilitates test audits, and
helps meet IT governance criteria. It makes test progress and completion reports more easily
understandable by including the status of test basis elements. It can help to communicate the
technical aspects of testing to stakeholders in an understandable manner.
Roles in testing.
There are two principal roles: a test management role and a testing role. The activities assigned to these roles
depend on several factors such as the project and product context, skills or the organization.
Test management role takes overall responsibility for the test process, test team and leadership of
the test activities. It is mainly focused on the activities of test planning, test monitoring and control,
and test completion. In Agile software development some of the test management tasks may be
handled by the Agile team, while tasks that are covered by multiple teams or the entire organization
may be performed by test managers outside of the development team.
The testing role takes overall responsibility for the engineering (technical) aspect of testing. The testing
role is mainly focused on the activities of test analysis, test design, test implementation and test
execution.
Different organizations may implement these roles in different ways. It is also possible for one person to
take on the roles of testing and test management at the same time.
Essential skills and good practices in testing.

 Testing knowledge: to increase effectiveness of testing, like using test techniques.
 Thoroughness, carefulness, curiosity, attention to details, being methodical: to identify
defects, especially the ones that are difficult to find.
 Good communication skills, active listening, being a team player: to interact effectively
with all stakeholders, to convey information to others, to be understood, and to report and
discuss defects.
 Analytical thinking, critical thinking, creativity: To increase effectiveness of testing.
 Technical knowledge: To increase efficiency of testing, like using appropriate test tools.
 Domain knowledge: To be able to understand and to communicate with end users/business
representatives.
Testers are often the bearers of bad news. It is a common human trait to blame the bearer of bad
news. This makes communication skills crucial for testers. Communicating test results may be
perceived as criticism of the product and of its author. Confirmation bias can make it difficult to
accept information that disagrees with currently held beliefs. Some people may perceive testing as a
destructive activity, even though it contributes greatly to project success and product quality. To try
to improve this view, information about defects and failures should be communicated in a
constructive way.
A tester needs to be able to work effectively in a team context and contribute positively to the team
goals. Testers work closely with other team members to ensure that the desired quality levels are
achieved. This includes collaborating with business representatives to help them create suitable
acceptance tests and working with developers to agree on the test strategy and decide on test
automation approaches. Testers can thus transfer testing knowledge to other team members and
influence the development of the product.
Independence of testing.
Independence makes the tester more effective at finding defects due to differences between the
author’s and the tester’s cognitive biases. Independence, however, is not a replacement for familiarity;
developers can efficiently find many defects in their own code.
Work products can be tested by their author (no independence), by the author’s peers from the same
team (some independence), by testers from outside the author’s team but within the organization
(high independence), or by testers from outside the organization (very high independence). For
most projects, it is usually best to carry out testing with multiple levels of independence
(Developers performing component and component integration testing, test team performing system
and system integration testing, and business representatives performing acceptance testing).
The main benefit of independence of testing is that independent testers are likely to recognize
different kinds of failures and defects compared to developers because of their backgrounds,
technical perspectives, and biases. Moreover, an independent tester can verify, challenge, or
disprove assumptions made by stakeholders during specification and implementation of the system.
There are some drawbacks. Independent testers may be isolated from the development team, which
may lead to a lack of collaboration, communication problems, or an adversarial relationship with
the development team. Developers may lose a sense of responsibility for quality. Independent
testers may be seen as a bottleneck or be blamed for delays in release.
Testing Throughout the Software Development Lifecycle.
Testing must be adapted to the SDLC to succeed. SDLC impacts:

 Scope and timing of the test activities: Test levels and test types.
 Level of detail of test documentation.
 Choice of test techniques and test approach.
 Extent of test automation.
 Role and responsibilities of a tester.
In sequential development models, in the initial phases testers typically participate in requirement
reviews, test analysis, and test design. The executable code is usually created in the later phases, so
typically dynamic testing cannot be performed early in the SDLC.
In some iterative and incremental development models, it is assumed that each iteration delivers a
working prototype or product increment. This implies that in each iteration both static and dynamic
testing may be performed at all test levels. Frequent delivery of increments requires fast feedback
and extensive regression testing.
Agile software development assumes that change may occur throughout the project. Therefore,
lightweight work product documentation and extensive test automation to make regression testing
easier are favored in agile projects. Also, most of the manual testing tends to be done using
experience-based test techniques that do not require extensive prior test analysis and design.
SDLC and good Testing Practices.

 For every software development activity, there is a corresponding test activity, so that all
development activities are subject to quality control.
 Different test levels have specific and different test objectives, which allows for testing to
be appropriately comprehensive while avoiding redundancy.
 Test analysis and design for a given test level begins during the corresponding development
phase of the SDLC, so that testing can adhere to the principle of early testing.
 Testers are involved in reviewing work products as soon as drafts of this documentation are
available, so that this earlier testing and defect detection can support the shift-left strategy.
TDD, ATDD and BDD are similar development approaches in which tests are defined as a means to
direct development. Each of these approaches implements the principle of early testing and follows
a shift-left approach, since the tests are defined before the code is written. They support an
iterative development model. These approaches are characterized as follows:
TDD:

 Directs the coding through test cases (instead of extensive software design)
 Tests are written first, then the code is written to satisfy the tests, and then the tests and code
are refactored.
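
To make the test-first cycle concrete, here is a minimal sketch in Python using the standard unittest module; the is_leap_year function and its rules are invented for this illustration, not taken from the syllabus:

```python
import unittest

# Step 1: the tests are written first and initially fail,
# because is_leap_year does not exist yet.
class TestLeapYear(unittest.TestCase):
    def test_divisible_by_4_is_leap(self):
        self.assertTrue(is_leap_year(2024))

    def test_century_not_divisible_by_400_is_not_leap(self):
        self.assertFalse(is_leap_year(1900))

    def test_divisible_by_400_is_leap(self):
        self.assertTrue(is_leap_year(2000))

# Step 2: just enough code is written to make the tests pass.
# Step 3: tests and code may then be refactored while the tests stay green.
def is_leap_year(year: int) -> bool:
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

if __name__ == "__main__":
    unittest.main()
```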
Acceptance Test-Driven Development (ATDD):

 Derives tests from acceptance criteria as part of the system design process
 Tests are written before the part of the application is developed to satisfy the tests
Behaviour-Driven Development (BDD):

 Expresses the desired behaviour of an application with test cases written in a simple form of
natural language, which is easy to understand by stakeholders – usually using the
Given/When/Then format.
 Test cases are then automatically translated into executable tests.
For all the above approaches, tests may persist as automated tests to ensure the code quality in
future adaptations/refactoring.
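
As a hedged sketch of the Given/When/Then structure, the following plain Python test marks each phase with a comment; the ShoppingCart class is a hypothetical example, and real BDD tooling would instead generate such a test from a natural-language scenario:

```python
# Hypothetical BDD-style scenario:
#   Given an empty shopping cart
#   When the user adds an item priced at 10.00 twice
#   Then the cart total is 20.00

class ShoppingCart:
    def __init__(self):
        self.items = []

    def add(self, price: float) -> None:
        self.items.append(price)

    def total(self) -> float:
        return sum(self.items)

def test_adding_two_items_updates_total():
    # Given an empty shopping cart
    cart = ShoppingCart()
    # When the user adds an item priced at 10.00 twice
    cart.add(10.00)
    cart.add(10.00)
    # Then the cart total is 20.00
    assert cart.total() == 20.00
```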
DevOps and Testing
DevOps is an organizational approach aiming to create synergy by getting development (including
testing) and operations to work together to achieve a set of common goals. DevOps requires a
cultural shift within an organization to bridge the gaps between development and operations while
treating their functions with equal value. DevOps promotes team autonomy, fast feedback,
integrated toolchains, and technical practices like continuous integration and continuous delivery.
This enables the teams to build, test and release high-quality code faster through a DevOps delivery
pipeline.
Benefits of DevOps from Testing perspective:

 Fast feedback on the code quality, and whether changes adversely affect existing code
 CI promotes a shift-left approach in testing by encouraging developers to submit high-quality
code accompanied by component tests and static analysis.
 Promotes automated processes like CI/CD that facilitate establishing stable test
environments.
 Increases the view on non-functional quality characteristics: performance, reliability.
 Automation through a delivery pipeline reduces the need for repetitive manual testing
 The risk in regression is minimized due to the scale and range of automated regression tests.
Risks and challenges of DevOps:

 The DevOps delivery pipeline must be defined and established
 CI/CD tools must be introduced and maintained
 Test automation requires additional resources and may be difficult to establish and
maintain.
Although DevOps comes with a high level of automated testing, manual testing – especially from
the user perspective – will still be needed.
Shift-Left Approach
Early testing is sometimes referred to as shift-left because it is an approach where testing is
performed earlier in the SDLC. Shift-left normally suggests that testing should be done earlier (not
waiting for code to be implemented or for components to be integrated), but it does not mean that
testing later in the SDLC should be neglected.
Good practices on how to achieve a “shift-left” in testing:

 Reviewing the specification from the perspective of testing. These review activities on
specifications often find potential defects, such as ambiguities, incompleteness, and
inconsistencies.
 Writing test cases before the code is written and have the code run in a test harness during
code implementation.
 Using CI and even better CD as it comes with fast feedback and automated component tests
to accompany source code when it is submitted to the code repository.
 Completing static analysis of source code prior to dynamic testing, or as part of an
automated process.
 Performing non-functional testing starting at the component test level, where possible. This
is a form of shift-left as these non-functional test types tend to be performed later in the
SDLC when a complete system and a representative test environment are available.
A shift-left approach might result in extra training, effort and/or costs earlier in the process but is
expected to save efforts and/or costs later in the process. For this approach it is important that
stakeholders are convinced and bought into this concept.
Project retrospectives.
Are often held at the end of a project or an iteration, at a release milestone, or when
needed. The timing depends on the particular SDLC model being followed. Participants in these
meetings include testers, developers, architects, product owners and business analysts, and they discuss:

 What was successful and should be retained?
 What was not successful and could be improved?
 How to incorporate the improvements and retain the successes in the future?
The results should be recorded and are normally part of the test completion report. Retrospectives
are critical for the successful implementation of continuous improvement.
Benefits:

 Increased test effectiveness by implementing suggestions for process improvement
 Increased quality of testware by jointly reviewing the test processes
 Team bonding and learning as a result of the opportunity to raise issues and propose
improvement points.
 Improved quality of test basis since deficiencies in the extent and quality of requirements
could be addressed and solved.
 Better cooperation between development and testing since collaboration is reviewed and
optimized regularly.
Test Levels and Test Types.
Test Levels: Groups of test activities that are organized and managed together. Each test level is an
instance of the test process, performed in relation to software at a given stage of development, from
individual components to complete systems or, where applicable, systems of systems. In sequential
SDLC models, the test levels are often defined such that the exit criteria of one level are part of the
entry criteria for the next level. In some iterative models, this may not apply. Development activities
may span multiple test levels. Test levels may overlap in time.

 Component Testing (Unit Testing): Focuses on testing components in isolation. It often
requires specific support, such as test harnesses or unit test frameworks. Component testing
is normally performed by developers in their development environment.
 Component Integration testing (Integration testing): Focuses on testing the interfaces
and interactions between components. Component integration testing is heavily dependent
on the integration strategy approaches like bottom-up, top-down or big-bang.
 System testing: Focuses on the overall behaviour and capabilities of an entire system or
product, often including functional testing of end-to-end tasks and the non-functional
testing of quality characteristics. For some non-functional quality characteristics, it is
preferable to test them on a complete system in a representative environment (usability).
Using simulations of sub-systems is also possible. System testing may be performed by an
independent test team, and is related to specifications for the system.
 System integration testing: focuses on testing the interfaces of the system under test and
other systems and external services. System integration testing requires suitable test
environments preferably similar to the operational environment.
 Acceptance testing: focuses on validation and on demonstrating readiness for deployment,
which means that the system fulfills the user’s business needs. Ideally, acceptance testing
should be performed by the intended users. The main forms of acceptance testing are: user
acceptance testing (UAT), operational acceptance testing, contractual and regulatory
acceptance testing, alpha testing and beta testing.
Test Types: Groups of testing activities related to specific quality characteristics and most of those
test activities can be performed at every test level.

 Functional Testing: Evaluates the functions that a component or a system should perform.
The functions are “what” the test object should do. The main objective of functional testing
is checking the functional completeness, functional correctness and functional
appropriateness.
 Non-Functional testing: Evaluates attributes other than functional characteristics of a
component or system. Non-functional testing is the testing of “how well the system
behaves”. The main objective of non-functional testing is checking the non-functional
software quality characteristics.
o Performance efficiency
o Compatibility
o Usability
o Reliability
o Security
o Maintainability
o Portability
Many non-functional tests are derived from functional tests as they use the same functional tests,
but check that while performing the function, a non-functional constraint is satisfied (checking that a
function performs within a specified time, or that a function can be ported to a new platform). The late
discovery of non-functional defects can pose a serious threat to the success of a project. Non-
functional testing sometimes needs a very specific test environment, such as a usability lab for
usability testing.

 Black-box testing: Is specification-based and derives tests from documentation external to
the test object. The main objective of black-box testing is checking the system’s behavior
against its specifications.
 White-box testing: Is structure-based and derives tests from the system’s implementation
or internal structure (code, architecture, work-flows and data-flows). The main objective of
white-box testing is to cover the underlying structure with tests to an acceptable level.
All four of the test types mentioned above can be applied to all test levels, although the focus will be
different at each level. Different test techniques can be used to derive test conditions and test cases
from all the mentioned test types.
Confirmation Testing and Regression Testing.

 Confirmation Testing: Confirms that an original defect has been successfully fixed.
Depending on the risk, one can test the fixed version of the software in several ways,
including:
o Executing all test cases that previously have failed due to the defect
o Adding new tests to cover any changes that were needed to fix the defect
o However, when time or money is short when fixing defects, confirmation testing
might be restricted to simply exercising the steps that should reproduce the failure
caused by the defect and checking that the failure does not occur.
 Regression Testing: Confirms that no adverse consequences have been caused by a change,
including a fix that has already been confirmation tested. These adverse consequences could
affect the same component where the change was made, other components in the same
system, or even other connected systems. Regression testing may not be restricted to the
test object itself but can also be related to the environment. It is advisable first to perform
an impact analysis to optimize the extent of the regression testing. Impact analysis shows
which parts of the software could be affected.
Regression test suites are run many times and generally the number of regression test cases
will increase with each iteration or release, so regression testing is a strong candidate for
automation. Automation of these tests should start early in the project. Where CI is used,
such as in DevOps, it is good practice to also include automated regression tests.
Depending on the situation, this may include regression tests on different levels.
Static Testing:
In static testing the software under test does not need to be executed. Code, process specification,
system architecture specification or other work products are evaluated through manual examination
or with the help of a tool. Test objectives include improving quality, detecting defects and assessing
characteristics like readability, completeness, correctness, testability and consistency. Static testing
can be applied for both verification and validation.
Testers, business representatives and developers work together during example mappings,
collaborative user story writing and backlog refinement sessions to ensure that user stories and
related work products meet defined criteria, such as the definition of ready. Review techniques can be
applied to ensure user stories are complete and understandable and include testable acceptance
criteria. By asking the right questions, testers explore, challenge and help improve the proposed
user stories.
Static analysis can identify problems prior to dynamic testing with less effort since no test cases are
required, and tools are typically used. Static analysis is often incorporated into CI frameworks.
While largely used to detect specific code defects, it is also used to evaluate maintainability and
security. Spelling checkers and readability tools are other examples of static analysis tools.
Work Products Examinable by Static Testing
Requirement specification documents, source code, test plans, test cases, product backlog items, test
charters, project documentation, contracts and models.
Any product that can be read and understood can be the subject of a review. However, for static testing,
work products need a structure against which they can be checked.
Value of Static Testing
It can detect defects in the earliest phases of the SDLC. It can also identify defects which cannot be
detected by dynamic testing like unreachable code, design patterns not implemented as desired,
defects in non-executable work products.
It is recommended to involve a wide variety of stakeholders in static testing. Even though reviews
can be costly to implement, the overall project costs are usually much lower than when no reviews
are performed because less time and effort needs to be spent on fixing defects later in the project.
Code defects can be detected using static analysis more efficiently than in dynamic testing, usually
resulting in both fewer code defects and a lower overall development effort.

 Static and dynamic testing can both lead to the detection of defects, however there are some
defect types that can only be found by either static or dynamic testing.
 Static testing finds defects directly, while dynamic testing causes failures from which the
associated defects are determined through subsequent analysis.
 Static testing may more easily detect defects that lie on paths through the code that are
rarely executed or hard to reach using dynamic testing.
 Static testing can be applied to non-executable work products, while dynamic testing can
only be applied to executable work products.
 Static testing can be used to measure quality characteristics that are not dependent on
executing code like maintainability, while dynamic testing can be used to measure quality
characteristics that are dependent on executing code like performance efficiency.
Defects easier or cheaper to find with static testing:

 Defects in requirements: Inconsistencies, ambiguities, contradictions, omissions,
inaccuracies, duplications.
 Design defects: inefficient database structures, poor modularization.
 Certain types of coding defects: Variables with undefined values, undeclared variables,
unreachable or duplicated code, excessive code complexity.
 Deviation from standards: lack of adherence to naming conventions in coding standards.
 Incorrect interface specifications: mismatched number, type or order of parameters.
 Specific types of security vulnerabilities: buffer overflows.
 Gaps or inaccuracies in test basis coverage: missing tests for an acceptance criterion.
Review types
There exist many review types ranging from informal reviews to formal reviews. The required level
of formality depends on factors such as the SDLC being followed, criticality and complexity of the
work product being reviewed, legal requirements. Selecting the right review type is key to
achieving the required review objectives. The selection is not only based on the objectives, but also
on factors such as the project needs, available resources, risks, business domain, etc.

 Informal review: Does not follow a defined process and does not require formal documented
output. The main objective is detecting anomalies.
 Walkthrough: Is led by the author and can serve many objectives, such as evaluating
quality and building confidence in the work product, educating reviewers, gaining
consensus, generating new ideas.
 Technical Review: Is performed by technically qualified reviewers and led by a moderator.
The objectives of a technical review are to gain consensus and make decisions regarding a
technical problem, but also detect anomalies, evaluate quality and build confidence in the
work product.
 Inspection: Is the most formal type of review; it follows a complete generic process. The
main objective is to find the maximum number of anomalies.
Test techniques.
Test techniques support the tester in test analysis (what to test) and in test design (how to test). Test
techniques help to develop a relatively small, but sufficient, set of test cases in a systematic way.
Test techniques also help the tester to define test conditions, identify coverage items, and identify
test data during the test analysis and design.

 Black-box test techniques (specification-based techniques): Are based on an analysis of the
specified behaviour of the test object without reference to its internal structure. Therefore,
the test cases are independent of how the software is implemented. Consequently, if the
implementation changes, but the required behavior stays the same, then the test cases are
still useful.
o Equivalence Partitioning (EP): Divides data into partitions based on the expectation
that all the elements of a given partition are to be processed in the same way by the
test object. The theory is that if a test case that tests one value from an equivalence
partition detects a defect, this defect should also be detected by test cases that test
any other value from the same partition. Partitions can be identified for any data element
related to the test object, including inputs, outputs, configuration items, internal
values, time-related values, and interface parameters. The partitions must not
overlap and must be non-empty sets.
For simple test objects EP can be easy, but in practice understanding how the test
object will treat different values is often complicated, so partitioning should be
done with care.
A partition containing valid values is called a valid partition. A partition containing
invalid values is called an invalid partition. The definition of valid and invalid
values may vary among teams and organizations. For example, valid values may be
interpreted as those that should be processed by the test object or as those for which
the specification defines their processing. Invalid values may be interpreted as
those that should be ignored or rejected by the test object or as those for which no
processing is defined in the test object specification.
In EP the coverage items are the equivalence partitions. To achieve 100% coverage
with this technique, test cases must exercise all identified partitions (including
invalid partitions) by covering each partition at least once. Many test objects
include multiple sets of partitions, like test objects with more than one input
parameter, which means that a test case will cover partitions from different sets of
partitions. The simplest coverage criterion in the case of multiple sets of partitions
is called Each Choice coverage. Each Choice coverage requires test cases to
exercise each partition from each set of partitions at least once. Each Choice
coverage does not take into account combinations of partitions.
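
A minimal pytest sketch of EP, assuming a hypothetical discount_rate function whose specification defines three valid partitions of the purchase amount and one invalid partition (negative amounts); the partition boundaries and behaviour are invented for this example:

```python
import pytest

def discount_rate(amount: float) -> float:
    """Hypothetical spec: 0% below 100, 5% from 100 to under 1000,
    10% from 1000 upward; negative amounts are rejected."""
    if amount < 0:
        raise ValueError("amount must be non-negative")
    if amount < 100:
        return 0.0
    if amount < 1000:
        return 0.05
    return 0.10

# One representative value per partition covers each partition once,
# achieving 100% EP coverage (including the invalid partition below).
@pytest.mark.parametrize("amount,expected", [
    (50, 0.0),      # valid partition [0, 100)
    (500, 0.05),    # valid partition [100, 1000)
    (5000, 0.10),   # valid partition [1000, ...)
])
def test_valid_partitions(amount, expected):
    assert discount_rate(amount) == expected

def test_invalid_partition_rejected():
    with pytest.raises(ValueError):
        discount_rate(-10)  # invalid partition (negative amounts)
```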
o Boundary Value Analysis: Is a technique based on exercising the boundaries of
equivalence partitions. Therefore BVA can only be used for ordered partitions. The
minimum and maximum values of a partition are its boundary values. In the case of
BVA if two elements belong to the same partition, all elements between them must
also belong to that partition. BVA focuses on the boundary values because
developers are more likely to make errors with these boundary values. Typical
defects found by BVA are located where implemented boundaries are misplaced to
positions above or below their intended positions, or are omitted altogether.
 2-value BVA: For each boundary value there are two coverage items. This
boundary value and its closest neighbor belonging to the adjacent partition.
To achieve 100% coverage with 2-value BVA, test cases must exercise all
identified boundary values.
 3-value BVA: For each boundary value there are three coverage items. This
boundary value and both its neighbours. Therefore in 3-value BVA some of
the coverage items may not be boundary values. To achieve 100% coverage
with 3-value BVA, test cases must exercise all identified boundary values
and their neighbours.
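
Continuing the hypothetical discount_rate example from the EP sketch, this sketch applies 2-value BVA to the boundary between the [0, 100) and [100, 1000) partitions, assuming integer amounts so that closest neighbours differ by 1:

```python
def discount_rate(amount: float) -> float:
    # Same hypothetical function as in the EP sketch above.
    if amount < 0:
        raise ValueError("amount must be non-negative")
    return 0.0 if amount < 100 else (0.05 if amount < 1000 else 0.10)

# 2-value BVA: 99 is the maximum of the [0, 100) partition and 100 is
# the minimum of the [100, 1000) partition; each value is the other's
# closest neighbour in the adjacent partition.
def test_boundary_99_max_of_lower_partition():
    assert discount_rate(99) == 0.0

def test_boundary_100_min_of_upper_partition():
    assert discount_rate(100) == 0.05

# 3-value BVA would additionally exercise both neighbours of each
# boundary value, e.g., 98, 99, 100 and 99, 100, 101.
```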
o Decision Table Testing: Is used for testing the implementation of system
requirements that specify how different combinations of conditions result in
different outcomes. Decision tables are an effective way of recording complex
logic, such as business rules.
When creating decision tables, the conditions and the resulting actions of the
system are defined. These form the rows of a table. Each column corresponds to a
decision rule that defines a unique combination of conditions, along with the
associated actions. In limited-entry decision tables all the values of the conditions
and actions (except for irrelevant or infeasible ones) are shown as Boolean values.
Alternatively in extended-entry decision tables some or all the conditions and
actions may also take on multiple values (range of numbers, equivalence partitions,
discrete values).
In decision table testing, the coverage items are the columns containing feasible
combinations of conditions. To achieve 100% coverage with this technique, test
cases must exercise all these columns. This provides a systematic approach to
identify all the combinations of conditions, some of which might otherwise be
overlooked. It also helps to find any gaps or contradictions in the requirements. If
there are many conditions, exercising all decision rules may be time consuming,
since the number of rules grows exponentially with the number of conditions. In
such cases, to reduce the number of rules that need to be exercised, a minimized
decision table or a risk-based approach may be used.
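
A hedged sketch of decision table testing for an invented loan-approval rule with two Boolean conditions; 100% coverage here means exercising all four feasible columns:

```python
import pytest

def approve_loan(good_credit: bool, sufficient_income: bool) -> bool:
    """Hypothetical business rule: approve only when both conditions hold."""
    return good_credit and sufficient_income

# Limited-entry decision table (conditions and actions as Booleans):
#   Rule:              R1     R2     R3     R4
#   good_credit        T      T      F      F
#   sufficient_income  T      F      T      F
#   approve            T      F      F      F
@pytest.mark.parametrize("good_credit,sufficient_income,approve", [
    (True, True, True),     # R1
    (True, False, False),   # R2
    (False, True, False),   # R3
    (False, False, False),  # R4
])
def test_all_decision_rules(good_credit, sufficient_income, approve):
    assert approve_loan(good_credit, sufficient_income) == approve
```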
o State Transition Testing: A state transition diagram models the behaviour of a
system by showing its possible states, and valid state transitions. A transition is
initiated by an event, which may be additionally qualified by a guard condition. The
transitions are assumed to be instantaneous and may sometimes result in software
taking action. The common transition labeling syntax is as follows: “event [guard
condition] / action”. Guard conditions and actions can be omitted if they do not
exist or are irrelevant for the tester.
A state table is a model equivalent to a state transition diagram. Its rows represent
states, and its columns represent events (together with guard conditions if they
exist). Table entries (cells) represent transitions, and contain the target state, as well
as the resulting actions, if defined. The state table explicitly shows invalid
transitions, which are represented by empty cells.
One test case may, and usually will, cover several transitions between states. In all
states coverage the coverage items are the states; to achieve 100% all states
coverage, test cases must ensure that all the states are visited.
In valid transitions coverage (0-switch coverage), coverage items are single valid
transitions. To achieve 100% valid transitions coverage, test cases must exercise all
the valid transitions.
In all transitions coverage, test coverage items are all the transitions shown in a
state table. To achieve 100% coverage, test cases must exercise all the valid
transitions and attempt to execute invalid transitions. Testing only one invalid
transition in a single test case helps to avoid fault masking, which is a situation in
which one defect prevents the detection of another.
All states coverage is weaker than valid transitions coverage because it can
typically be achieved without exercising all the transitions. Valid transitions
coverage is the most widely used coverage criterion. Full all transitions coverage
should be a minimum requirement for mission and safety-critical software.
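
A minimal sketch, assuming an invented two-state door model, that exercises every valid transition (valid transitions / 0-switch coverage) and attempts one invalid transition per test case:

```python
import pytest

class Door:
    """Hypothetical state machine: CLOSED --open--> OPEN --close--> CLOSED.
    The empty cells of its state table (open while OPEN, close while CLOSED)
    are the invalid transitions."""
    VALID = {("CLOSED", "open"): "OPEN", ("OPEN", "close"): "CLOSED"}

    def __init__(self):
        self.state = "CLOSED"

    def handle(self, event: str) -> None:
        key = (self.state, event)
        if key not in self.VALID:
            raise RuntimeError(f"invalid transition: {key}")
        self.state = self.VALID[key]

def test_valid_transitions_coverage():
    # One test case exercising both valid transitions achieves 100%
    # valid transitions (0-switch) coverage and also visits all states.
    door = Door()
    door.handle("open")
    assert door.state == "OPEN"
    door.handle("close")
    assert door.state == "CLOSED"

def test_one_invalid_transition():
    # All transitions coverage also attempts invalid transitions; only
    # one invalid transition per test case, to avoid fault masking.
    door = Door()
    with pytest.raises(RuntimeError):
        door.handle("close")  # close while already CLOSED
```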
 White-box test techniques (structure-based techniques): Are based on an analysis of the test
object’s internal structure and processing. As the test cases are dependent on how the
software is designed, they can only be created after the design or implementation of the test
object.
o Statement testing: In this testing the coverage items are executable statements. The
aim is to design test cases that exercise statements in the code until an acceptable
level of coverage is achieved. When 100% statement coverage is achieved, it
ensures that all executable statements in the code have been exercised at least once.
This means that each statement with a defect will be executed, which may cause a
failure demonstrating the presence of the defect. However, exercising a statement
with a test case will not detect defects in all cases. For example, it may not detect
defects that are data dependent (like division by zero that only fails when a
denominator is set to zero). Also 100% statement coverage does not ensure that all
the decision logic has been tested as, for instance, it may not exercise all the
branches in the code.
o Branch testing: A branch is a transfer of control between two nodes in the control
flow graph, which shows the possible sequences in which source code statements
are executed in the test object. Each transfer of control can be either unconditional
(straight-line code) or conditional (decision outcome).
Here the coverage items are branches and the aim is to design test cases to exercise
branches in the code until an acceptable level of coverage is achieved. When 100%
branch coverage is achieved, all branches in the code, unconditional and
conditional, are exercised by test cases. Conditional branches typically correspond
to a true or false outcome from an if.. then decision. An outcome from a switch/case
statement, or a decision to exit or continue in a loop. However exercising a branch
with a test case will not detect defects in all cases. For example, it may not detect
defects requiring the execution of an specific path in a code. Any set of test cases
achieving 100% of branch coverage also achieves 100% statement coverage.
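
A hedged sketch with an invented absolute_value function, showing that a single test can achieve 100% statement coverage while leaving the false outcome of the if (a conditional branch) unexercised:

```python
def absolute_value(x: int) -> int:
    if x < 0:        # conditional branch: true and false outcomes
        x = -x
    return x

def test_statement_coverage_only():
    # Executes every statement (the if body and the return),
    # i.e., 100% statement coverage...
    assert absolute_value(-5) == 5

def test_false_branch_needed_for_branch_coverage():
    # ...but 100% branch coverage also requires the false outcome,
    # where the if body is skipped.
    assert absolute_value(5) == 5
```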
 Experience-based test techniques: Effectively using the knowledge and experience of
testers for the design and implementation of the test cases. The effectiveness of these
techniques depends on the tester’s skills. They can detect defects that may be missed using the
black-box and white-box test techniques.
o Error guessing: Is a technique to anticipate the occurrence of errors, defects, and
failures, based on the tester’s knowledge, including:
 How the application has worked in the past
 The types of errors the developers tend to make and the types of defects
that result from these errors.
 The types of failures that have occurred in other, similar applications.
Errors and defects may be related to input (correct input not accepted,
parameters wrong or missing), output (wrong format, wrong result), logic
(missing cases, wrong operator), computation (incorrect operand, wrong
computation), interfaces (parameter mismatch, incompatible types), or data
(incorrect initialization, wrong type).
Fault attacks are a methodical approach to the implementation of error
guessing. This technique requires the tester to create or acquire a list of possible
errors, defects and failures, and to design tests that will identify defects
associated with the errors, expose the defects, or cause the failures. These lists
can be built based on experience, defect and failure data, or from common
knowledge about why software fails.
o Exploratory testing: Here tests are simultaneously designed, executed and evaluated
while the tester learns about the test object. The testing is used to learn more about
the test object, to explore it more deeply with focused tests, and to create tests for
untested areas. This is sometimes conducted using session-based testing to structure
the testing. In this approach, exploratory testing is conducted within a defined time-
box. The tester uses a test charter containing test objectives to guide the testing.
The test session is usually followed by a debriefing that involves discussion
between the tester and stakeholders interested in the test results of the test session.
In this approach test objectives may be treated as high-level test conditions.
Coverage items are identified and exercised during the test session. The tester may
use test session sheets to document the steps followed and the discoveries made.
This testing is useful when there are few or inadequate specifications or there is
significant time pressure on the testing. It is also a useful complement to other more
formal testing techniques.
o Checklist-based testing: In this testing, a tester designs, implements, and executes
tests to cover test conditions from a checklist. These checklists can be built from
experience, knowledge about what is important for the user, or an understanding of
why and how software fails. Checklists should not contain items that can be
checked automatically, items better suited as entry/exit criteria, or items that are too
general.
Checklist items are usually phrased in the form of a question. It should be possible
to check each item separately and directly. These items may refer to requirements,
graphical interface properties, quality characteristics, or other forms of test
conditions. Checklists can be created to support various test types including
functional and non-functional testing. Some checklist entries may gradually
become less effective over time because the developers will learn to avoid making
the same errors. New entries may also need to be added to reflect newly found high
severity defects. Therefore, checklists should be regularly updated based on defect
analysis.
In the absence of detailed test cases, checklist-based testing can provide guidelines
and some degree of consistency for the testing.
Collaboration-based Test Approaches
Collaborative User Story Writing
A user story represents a feature that will be valuable to either a user or a purchaser of a system or software.
User stories have three critical aspects:

 Card: The medium describing a user story
 Conversation: Explains how the software will be used
 Confirmation: The acceptance criteria.
The most common format for a user story is “As a [role], I want [goal to be accomplished], so that I
can [resulting business value for the role]”, followed by the acceptance criteria. Collaborative
authorship of the user story can use techniques such as brainstorming and mind mapping. The
collaboration allows the team to obtain a shared vision of what should be delivered, by taking into
account three perspectives: business, development and testing. Good user stories should be
independent, negotiable, valuable, estimable, small and testable. If a stakeholder doesn’t know how
to test a user story, this may indicate the user story is not clear enough.
Acceptance Criteria.
For a user story, the acceptance criteria are the conditions that an implementation of the user story
must meet to be accepted by stakeholders. From this perspective, acceptance criteria may be viewed
as the test conditions that should be exercised by the tests. Acceptance criteria are used to:

 Define the scope of the user story
 Reach consensus among stakeholders
 Describe both positive and negative scenarios
 Serve as a basis for the user story acceptance testing
 Allow accurate planning and estimation
Test planning
A test plan describes objectives, resources and processes for a test project. A test plan:

 Documents the means and schedule for achieving test objectives.
 Helps to ensure that the performed test activities will meet the established criteria.
 Serves as a means of communication with team members and other stakeholders.
 Demonstrates that testing will adhere to the existing test policy and test strategy
Typical content of a test plan:

 Context of testing (scope, test objectives, constraints, test basis).
 Assumptions and constraints of the test project.
 Stakeholders (roles, responsibilities, relevance to testing, hiring and training needs).
 Communication (forms and frequency of communication, documentation templates).
 Risk register (product risk, project risks).
 Test approach (test levels, test types, test techniques, test deliverables, entry criteria and exit
criteria, independence of testing, metrics to be collected, test data requirements, test
environment requirements, deviations from the organizational test policy and test strategy).
 Budget and schedule.

In iterative SDLCs, typically two kinds of planning occur: release planning and iteration planning.
Release planning looks ahead to the release of a product, defines and re-defines the product
backlog, and may involve refining larger user stories into a set of smaller user stories. It also serves
as the basis for the test approach and test plan across all iterations. Testers involved in release
planning participate in writing testable user stories and acceptance criteria, participate in project and
quality risk analyses, estimate test effort associated with user stories, determine the test approach,
and plan the testing for the release.
Iteration planning looks ahead to the end of a single iteration and is concerned with the iteration
backlog. Testers involved in iteration planning participate in the detailed risk analysis of user
stories, determine the testability of user stories, break down user stories into tasks, estimate test
effort for all testing tasks, and identify and refine functional and non-functional aspects of the test
object.
Entry Criteria and Exit Criteria.
Entry criteria define the preconditions for undertaking a given activity. If entry criteria are not met,
it is likely that the activity will prove to be more difficult, time-consuming, costly, and riskier. Exit
criteria define what must be achieved in order to declare an activity completed. Entry criteria and
exit criteria should be defined for each test level, and will differ based on the test objectives.
Typical entry criteria include: availability of resources (people, tools, environment, test data, budget, time), availability of testware (test basis, testable requirements, user stories, test cases), and initial quality level of a test object (e.g., all smoke tests have passed).
Typical exit criteria include: measures of thoroughness (achieved level of coverage, number of
unresolved defects, defect density, number of failed test cases), and completion criteria (planned
tests have been executed, static testing has been performed, all defects found are reported, all
regression tests are automated).
Running out of time or budget can also be viewed as valid exit criteria. Even without other exit
criteria being satisfied, it can be acceptable to end testing under such circumstances, if the
stakeholders have reviewed and accepted the risk to go live without further testing.
In Agile software development, exit criteria are often called the Definition of Done, defining the team's objective metrics for a releasable item. Entry criteria that a user story must fulfill to start the development and/or testing activities are called the Definition of Ready.
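As a hypothetical illustration (the specific items are assumptions, not part of the syllabus), a team's Definition of Ready and Definition of Done might look like this:

Definition of Ready (entry criteria for a user story):
 Acceptance criteria are agreed with the product owner.
 The story is estimated and fits within one iteration.
 Required test data and test environments are identified.

Definition of Done (exit criteria for a user story):
 All acceptance criteria pass.
 Code is reviewed and merged; regression tests pass.
 No open defects of high severity remain.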
Estimation techniques
Test effort estimation involves predicting the amount of test-related work needed to meet the objectives of a test project.
It is important to make it clear to the stakeholders that the estimate is based on a number of
assumptions and is always subject to estimation error. Estimation for small tasks is usually more
accurate than for large ones. Therefore, a large task can be decomposed into a set of smaller tasks, which in turn can be estimated.
 Estimation based on ratios: A metrics-based technique in which figures are collected from previous projects within the organization, so that it is possible to derive "standard" ratios from similar projects. The ratios of an organization's own projects (from historical data) are generally the best source to use in the estimation process. For example, if in the previous project the development-to-test effort ratio was 3:2, and in the current project the development effort is expected to be 600 person-days, the test effort can be estimated to be 400 person-days.
 Extrapolation: Is a metrics-based technique, in which measurements are made as early as
possible in the current project to gather the data. Having enough observations, the effort
required for the remaining work can be approximated by extrapolating this data (by a
mathematical model). This method is very suitable in iterative SDLCs. For example, the team may estimate the test effort for the forthcoming iteration as the average effort from the last three iterations.
 Wideband Delphi: In this iterative, expert-based technique, experts make experience-based
estimations. Each expert, in isolation, estimates the effort. The results are collected and if
there are deviations that are out of range of the agreed upon boundaries, the experts discuss
their current estimates. Each expert is then asked to make a new estimation, based on this
feedback, again in isolation. This process is repeated until a consensus is reached. Planning
Poker is a variant of Wideband Delphi, commonly used in Agile software development.
 Three-point estimation: An expert-based technique in which three estimations are made by the experts: the most optimistic estimation (a), the most likely estimation (m) and the most pessimistic estimation (b). The final estimate is their weighted average. In the most popular version of this technique, the estimate is calculated as E = (a + 4*m + b) / 6. The advantage of this technique is that it allows the experts to calculate the measurement error, or standard deviation: SD = (b - a) / 6. Both formula-based techniques are illustrated in the sketch after this list.
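A minimal Python sketch of the two formula-based techniques above, reusing the numbers from the examples in this list; the function names are mine, for illustration only.

# Ratio-based estimation: derive test effort from development effort
# using a historical development-to-test ratio (here 3:2, as above).
def ratio_based_estimate(dev_effort, ratio=(3, 2)):
    dev_part, test_part = ratio
    return dev_effort * test_part / dev_part

# Three-point estimation: a = optimistic, m = most likely, b = pessimistic.
def three_point_estimate(a, m, b):
    estimate = (a + 4 * m + b) / 6      # weighted average E
    standard_deviation = (b - a) / 6    # measurement error SD
    return estimate, standard_deviation

print(ratio_based_estimate(600))        # 400.0 person-days, as above
print(three_point_estimate(6, 9, 18))   # (10.0, 2.0)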
Test Case prioritization
Once test cases and test procedures are specified and assembled into test suites, these suites can be
arranged in a test execution schedule that defines the order in which they are to be run. When
prioritizing test cases, different factors can be taken into account. The most commonly used test
case prioritization strategies are as follows:
 Risk-based prioritization: The order of test execution is based on the results of risk analysis. Test cases covering the most important risks are executed first.
 Coverage-based prioritization: The order of test execution is based on coverage. Test cases achieving the highest coverage are executed first.
 Requirements-based prioritization: The order of test execution is based on the priorities of the requirements traced back to the corresponding test cases. Requirement priorities are defined by stakeholders. Test cases related to the most important requirements are executed first.
The order of test execution must also take into account the availability of resources. For example, the required test tools, test environments or people may only be available for a specific time window.
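As a hypothetical Python sketch, risk-based prioritization can be reduced to sorting the suite by a risk level assigned during risk analysis (the identifiers and scores below are invented):

# Run the test cases covering the highest risks first.
test_cases = [
    {"id": "TC-01", "risk_level": 6},
    {"id": "TC-02", "risk_level": 9},
    {"id": "TC-03", "risk_level": 2},
]
schedule = sorted(test_cases, key=lambda tc: tc["risk_level"], reverse=True)
print([tc["id"] for tc in schedule])  # ['TC-02', 'TC-01', 'TC-03']

In practice, resource availability (see above) can still override the pure risk order.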
Risk Management
Risk management allows an organization to increase the likelihood of achieving objectives, improve the quality of its products, and increase stakeholders' confidence and trust.
The main risk management activities are:

 Risk analysis: Risk identification and risk assessment.
 Risk control: Risk mitigation and risk monitoring.
The test approach in which test activities are selected, prioritized, and managed based on risk analysis and risk control is called risk-based testing.
Risk definition and Risk attributes
A risk can be characterized by two factors:

 Risk likelihood: The probability of the risk occurrence.
 Risk impact: The consequences of this occurrence.
These two factors express the risk level, which is a measure of the risk. The higher the risk level, the more important its treatment is.
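The syllabus does not mandate a formula, but one common quantification (an assumption here, not the only option) multiplies the two factors on small ordinal scales:

# Risk level as likelihood x impact, on assumed 1-5 scales.
def risk_level(likelihood, impact):
    return likelihood * impact

print(risk_level(4, 5))  # 20 of a maximum 25 -> treat first
print(risk_level(1, 2))  # 2 -> low risk, treat later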
Project risks and Product Risks
Project Risks: Relate to the management and control of the project.

 Organizational issues: Delays in work product deliveries, inaccurate estimates, cost-cutting.
 People issues: Insufficient skills, conflicts, communication problems, shortage of staff.
 Technical issues: Scope creep, poor tool support.
 Supplier issues: Third-party delivery failure, bankruptcy of the supporting company.
Product risks: Are related to the product quality characteristics, like missing or wrong functionality,
incorrect calculations, runtime errors, poor architecture, inefficient algorithms, inadequate response
time, poor user experience, security vulnerabilities. They may have several negative consequences
such as:
 User dissatisfaction
 Loss of revenue, trust, reputation
 Damage to third parties
 High maintenance costs, overload of the helpdesk
 Criminal penalties
 In extreme cases, physical damage, injuries or even death.
Product Risk Analysis
The goal is to provide an awareness of product risk in order to focus the testing in a way that
minimizes the residual level of product risk. Ideally, product risk analysis begins early in the SDLC.
It consists of risk identification and risk assessment. Risk identification is about generating a comprehensive list of risks. Stakeholders can identify risks by using various techniques and tools, like brainstorming, workshops, interviews, or cause-effect diagrams. Risk assessment involves categorization of identified risks, determining their risk likelihood, risk impact and risk level, prioritizing, and proposing ways to handle them. Categorization helps in assigning mitigation
prioritizing, and proposing ways to handle them. Categorization helps in assigning mitigation
actions, because usually risks falling into the same category can be mitigated using a similar
approach.
Product risk analysis may influence the thoroughness and scope of testing. Its results are used to:

 Determine the scope of testing carried out.
 Determine the particular test levels and propose test types to be performed.
 Determine the test techniques to be employed and the coverage to be achieved.
 Estimate the test effort required for each task.
 Prioritize testing in an attempt to find the critical defects as early as possible.
 Determine whether any activities in addition to testing could be employed to reduce risk.
Product Risk Control
Product risk control comprises all measures that are taken in response to identified and assessed product risks. It consists of risk mitigation and risk monitoring. Risk mitigation involves implementing the actions proposed in risk assessment to reduce the risk level. The aim of risk monitoring is to ensure that the mitigation actions are effective, to obtain further information to improve risk assessment, and to identify emerging risks. Possible risk mitigation measures include: selecting testers with the right level of experience and skills, applying an appropriate level of independence of testing, conducting reviews and performing static analysis, applying the appropriate test techniques and coverage levels, applying the appropriate test types addressing the affected quality characteristics, and performing dynamic testing, including regression testing.
Metrics used in testing.
 Project progress metrics: task completion, resource usage, test effort.
 Test progress metrics: test case implementation progress, test environment preparation
progress, number of test cases run/not run, passed/failed, test execution time.
 Product quality metrics: availability, response time, mean time to failure.
 Defect metrics: number and priorities of defects found/fixed, defect density, defect detection percentage (see the sketch after this list).
 Risk metrics: residual risk level
 Coverage metrics: requirements coverage, code coverage.
 Cost metrics: cost of testing, organizational cost of quality.
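A short Python sketch of the two defect metrics named above, using invented figures; the formulas follow common usage (defect density per thousand lines of code, and defect detection percentage as the share of defects found before release):

defects_in_testing = 45
defects_after_release = 5
size_kloc = 30  # size of the test object in thousand lines of code (KLOC)

defect_density = defects_in_testing / size_kloc
ddp = defects_in_testing / (defects_in_testing + defects_after_release)

print(f"Defect density: {defect_density:.1f} defects/KLOC")  # 1.5
print(f"Defect detection percentage: {ddp:.0%}")             # 90%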
Test Reports
Test reports summarize and communicate test information during and after testing. Test progress reports support the ongoing control of the testing and must provide enough information to make modifications to the test schedule, resources, or test plan, when such changes are needed due to deviation from the plan or changed circumstances. A test completion report summarizes a specific stage of testing and can give information for subsequent testing.
Test progress reports are usually generated on a regular basis (daily, weekly) and include:
 Test period
 Test progress, including any notable deviations
 Impediments for testing
 Test metrics
 New and changed risks within testing period
 Testing planned for the next period.
A test completion report is prepared during test completion. This report uses test progress reports and other data. Typical test completion reports include:
 Test summary
 Testing and product quality evaluation based on the original test plan.
 Deviations from the test plan
 Testing impediments and workarounds
 Test metrics based on test progress reports
 Unmitigated risks, defects not fixed.
 Lessons learned that are relevant to the testing.
Defect Management
Typical defect reports have the following objectives:
 Provide those responsible for handling and resolving reported defects with sufficient
information to resolve the issue
 Provide a means of tracking the quality of the work product.
 Provide ideas for improvement of the development and test process.
A defect report logged during dynamic testing typically includes:
 Unique identifier
 Title with a short summary of the anomaly being reported
 Date when the anomaly was observed, issuing organization, and author, including their role.
 Identification of the test object and test environment.
 Context of the defect (test case being run, test activity being performed, SDLC phase, and
other relevant information such as the test technique, checklist or test data being used).
 Description of the failure to enable reproduction and resolution, including the steps that detected the anomaly, and any relevant test logs, database dumps, screenshots, or recordings.
 Expected results and actual results
 Severity of the impact of the defect on the interests of the stakeholders or requirements
 Priority to fix
 Status of the defect: Open, deferred, duplicate, waiting to be fixed, awaiting confirmation
testing, re-opened, closed, rejected.
 References to the test case.
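A hypothetical Python sketch of a defect report record covering the fields listed above; the field names are assumptions for illustration, not any specific tool's schema:

from dataclasses import dataclass, field

@dataclass
class DefectReport:
    identifier: str           # unique identifier
    title: str                # short summary of the anomaly
    date_observed: str
    author: str               # including their role
    test_object: str
    test_environment: str
    steps_to_reproduce: list  # steps that detected the anomaly
    expected_result: str
    actual_result: str
    severity: str             # impact on stakeholder interests
    priority: str             # urgency of the fix
    status: str = "open"      # open, deferred, duplicate, closed, ...
    test_case_refs: list = field(default_factory=list)

report = DefectReport(
    identifier="DEF-102",
    title="Login fails with valid credentials after password reset",
    date_observed="2024-05-01",
    author="J. Tester (system tester)",
    test_object="Login service build 1.3.0",
    test_environment="staging",
    steps_to_reproduce=["Reset password via email link", "Log in with the new password"],
    expected_result="User is logged in",
    actual_result="HTTP 500 error page",
    severity="high",
    priority="high",
    test_case_refs=["TC-17"],
)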
Test Tools
 Management tools: Increase the test process efficiency by facilitating management of the
SDLC, requirements, tests, defects, configuration.
 Static testing tools: Support the tester in performing reviews and static analysis.
 Test design and implementation tools: Facilitate the generation of test cases, test data and
test procedures.
 Test execution and coverage tools: Facilitate automated test execution and coverage
measurement.
 Non-functional testing tools: Allow the tester to perform non-functional testing that is
difficult or impossible to perform manually.
 DevOps tools: Support the DevOps delivery pipeline, workflow tracking, automated build
process, CI/CD.
 Collaboration tools: Facilitate communication.
 Tools supporting scalability and deployment standardization: Virtual Machines,
Containerization tools.
 Any other tool that assists in testing (a spreadsheet is a test tool in the context of testing).
Test Automation
Benefits of Test Automation
 Time saved by reducing repetitive manual work: executing regression tests, re-entering the same test data, comparing expected results to actual results, and checking against coding standards.
 Prevention of simple human errors through greater consistency and repeatability: tests are consistently derived from requirements, test data is created in a systematic manner, and tests are executed by a tool in the same order with the same frequency.
 More objective assessment and providing measures that are too complicated for humans to
derive.
 Easier access to information about testing to support test management and test reporting:
statistics, graphs, and aggregated data about test progress, defect rates, and test execution
duration.
 Reduced test execution times, providing earlier defect detection, faster feedback and faster time to market.
 More time for testers to design new, deeper and more effective tests.
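A minimal sketch of what an automated regression test could look like, assuming pytest is available; discount_price and its rules are hypothetical:

# Function under test (hypothetical).
def discount_price(price, percent):
    return round(price * (1 - percent / 100), 2)

# Automated checks a tool can re-run in the same order, on every build.
def test_discount_applied():
    assert discount_price(200.0, 10) == 180.0

def test_no_discount():
    assert discount_price(200.0, 0) == 200.0

Run with pytest; re-executing these tests on every change in a CI/CD pipeline is exactly the repeatability and fast-feedback benefit described above.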
Potential risks of using automation:
 Unrealistic expectations about the benefits of a tool: including functionality and ease of use.
 Inaccurate estimations of the time, costs and effort required to introduce a tool, maintain test scripts, and change the existing manual test process.
 Using a test tool when manual testing is more appropriate.
 Relying on a tool too much: ignoring the need for human critical thinking.
 Dependency on the tool vendor, which may go out of business, retire the tool, sell the tool to a different vendor, or provide poor support.
 Using open-source software which might be abandoned, meaning that no further updates are available, or whose internal components may require quite frequent updates as the tool is further developed.
 The automation tool is not compatible with the development platform.
 Choosing an unsuitable tool that does not comply with the regulatory requirements and/or safety standards.
