Test Execution For Software Testing
The term Test Execution means that the tests for a product or application are executed in order to obtain actual results and compare them with the expected results. After the development phase, the testing phase takes place, in which various levels of testing are carried out and test cases are created and executed. This article focuses on test execution.
Test Execution is the process of executing the tests written by the tester to check whether the developed code, functions, or modules provide the expected results as per the client or business requirements. Test Execution is one of the phases of the Software Testing Life Cycle (STLC).
In the test execution process, the tester will usually write and execute a certain number of test cases and test scripts, or perform automated testing. If execution reveals any errors, they are reported to the respective development team so that the issues in the code can be corrected. If the test execution process shows successful results, the product is ready for the deployment phase once the deployment environment has been properly set up.
The project runs efficiently: Test execution ensures that the project runs smoothly
and efficiently.
Application competency: It also helps to ensure the application's competitiveness in
the global market.
Requirements are correctly collected: Test execution makes sure that the
requirements are collected correctly and incorporated correctly into the design and
architecture.
Application built in accordance with requirements: It also checks whether the
software application is built in accordance with the requirements.
1. Defect Finding and Reporting: Defect finding is the process of identifying the bugs
or errors raised while executing the test cases on the developed code or modules. If
any error appears or any test case fails, it is recorded and reported to the respective
development team. Sometimes, end users may also find errors during user acceptance
testing and report them to the team. All the recorded details are reported to the
respective team, which then works on the recorded errors or bugs.
2. Defect Mapping: After an error has been detected and reported to the development
team, the development team works on those errors and fixes them as per the
requirement. Once the development team has done its job, the testing team again maps
the test cases or test scripts to the developed module or code and runs the entire set of
tests to ensure the correct output.
3. Re-Testing: From the name itself, we can easily understand that Re-Testing is the
process of testing the modules or the entire product again to ensure a smooth release.
In some cases, a new module or functionality is developed after the product release. In
this case, all the modules are re-tested for a smooth release, so that no other defects
arise after the release of the product or application.
4. Regression Testing: Regression Testing is a type of software testing that ensures that
newly made changes to the code, or newly developed modules or functions, do not
affect the normal processing of the application or product.
5. System Integration Testing: System Integration Testing is a testing technique used to
check all the components or modules of the system in a single run. It ensures that the
whole system is checked in a single test environment instead of checking each module
or function separately.
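To make the regression idea in item 4 concrete, here is a minimal Python sketch; the `discount` function and its recorded baseline are hypothetical stand-ins for a real module and its previously verified outputs.

```python
# Minimal regression-check sketch. After a code change, the same inputs
# are re-run and compared against the previously recorded expected
# outputs; any mismatch signals a regression.

def discount(price, percent):
    """Hypothetical function under test: apply a percentage discount."""
    return round(price * (1 - percent / 100), 2)

# Baseline of (input -> expected output) pairs recorded before the change.
baseline = {(100, 10): 90.0, (80, 25): 60.0, (50, 0): 50.0}

def run_regression(func, baseline):
    """Return a list of (input, expected, actual) triples that mismatch."""
    failures = []
    for args, expected in baseline.items():
        actual = func(*args)
        if actual != expected:
            failures.append((args, expected, actual))
    return failures

print(run_regression(discount, baseline))  # an empty list means no regression
```

An empty failure list gives quick confidence that the change did not disturb previously working behaviour.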
The Test Execution process consists of three different phases, which are carried out to
process the test results and ensure the correctness of the required results. In each phase,
various activities are carried out by various team members. The three main phases of test
execution are the creation of test cases, test case execution, and validation of test results.
Let us discuss each phase.
1. Creation of Test Cases: The first phase is to create suitable test cases for each module or
function. A tester with good domain knowledge is required to create suitable test cases. It is
always preferable to create simple test cases, and their creation should not be delayed, or it
will add extra time before the product can be released. The created test cases should not be
duplicated, and they should cover all the possible scenarios that can arise in the application.
2. Test Case Execution: After the test cases have been created, execution of the test cases
takes place. Here, the Quality Analyst team performs either automated or manual testing
depending on the test case scenario. It is preferable to do both automated and manual testing
to gain greater assurance of correctness. The selection of testing tools is also important for
executing the test cases.
3. Validating Test Results: After executing the test cases, note down the results of each test
case in a separate file or report. Check whether the executed test cases achieved the expected
results, and record the time required to complete each test case, i.e., measure the performance
of each test case. If any test case fails or does not satisfy its condition, report it to the
development team so that they can correct the code.
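The validation phase above can be sketched in a few lines of Python; the `add` function and the test case data are illustrative stand-ins for a real module and its test cases.

```python
# Sketch: execute each test case, record pass/fail status and the time
# taken, as described in the "Validating Test Results" phase.
import time

def add(a, b):
    """Hypothetical module under test."""
    return a + b

# Each test case: (name, input arguments, expected result).
test_cases = [
    ("adds positives", (2, 3), 5),
    ("adds negatives", (-1, -4), -5),
]

report = []
for name, args, expected in test_cases:
    start = time.perf_counter()
    actual = add(*args)
    elapsed = time.perf_counter() - start          # per-case performance
    status = "Pass" if actual == expected else "Fail"
    report.append({"test": name, "status": status, "seconds": elapsed})

for row in report:
    print(row["test"], row["status"])
```

Failed entries in `report` would then be handed to the development team, as the text describes.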
Testers can choose from the below list of preferred methods to carry out test execution:
1. Run test cases: The simplest approach is to run test cases on the local machine; this
can be coupled with other artifacts like test plans, test suites, test environments, etc.
2. Run test suites: A test suite is a collection of manual and automated test cases, and
the test cases can be executed sequentially or in parallel. Sequential execution is
useful in cases where the result of a test case depends on the success of the previous
one.
3. Record test case and test suite executions: Recording test case and test suite
executions is a key activity in the test process and helps to reduce errors, making the
testing process more efficient.
4. Generate test results without execution: Recording results for test cases that were
not executed can be helpful in achieving comprehensive test coverage reporting.
5. Modify execution variables: Execution variables can be modified in the test scripts
for particular test runs.
6. Run automated and manual tests: Test execution can be done manually or can be
automated.
7. Schedule test artifacts: Test artifacts include videos, screenshots, data reports, etc.
These are very helpful, as they document the results of past test executions and
provide information about what needs to be done in future test executions.
8. Defect tracking: Without defect tracking, test execution is not possible, as during
testing one should be able to track the defects and identify what went wrong and
where.
Test Execution Priorities means prioritizing the test cases depending on several factors, so
that the highest-priority test cases are executed before the others. Let us discuss some of the
factors to be considered while prioritizing the test cases.
Complexity: The complexity of a test case can be determined from several factors,
such as its boundary values, the features or components it touches, its data entry, and
how much of the given business problem it covers.
Risk Covered: How much risk a certain test case may undergo to achieve its result;
risk in the form of the time required to complete the test case, space complexity
(whether it can be executed in the given memory space), etc.
Platforms Covered: This simply indicates on which platform or operating system the
test cases are executed, i.e., whether the test cases run on Windows OS, Mac OS, a
mobile OS, etc.
Depth: This covers how deeply the given test cases cover each functionality or module
in the application, i.e., how well a given test procedure covers all the possible
conditions in a single functionality or module.
Breadth: This covers how broadly the given test cases cover the functionalities or
modules in the application, i.e., how well a given test procedure covers all the possible
conditions across the entire set of functionalities or modules in the product or
application.
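One simple way to turn the factors above into an execution order is a weighted score per test case; the weights, factor scores, and test case names below are assumptions for illustration, not a standard scheme.

```python
# Sketch: score each test case on prioritization factors (complexity,
# risk, breadth) and execute the highest-scoring cases first.

WEIGHTS = {"complexity": 1, "risk": 2, "breadth": 1}   # assumed weights

def priority(tc):
    """Weighted sum of the factor scores for one test case."""
    return sum(tc[k] * w for k, w in WEIGHTS.items())

# Factor scores (1 = low, 5 = high) are illustrative.
test_cases = [
    {"name": "payment flow", "complexity": 5, "risk": 5, "breadth": 4},
    {"name": "tooltip text", "complexity": 1, "risk": 1, "breadth": 1},
    {"name": "login",        "complexity": 3, "risk": 4, "breadth": 3},
]

execution_order = sorted(test_cases, key=priority, reverse=True)
print([tc["name"] for tc in execution_order])
# ['payment flow', 'login', 'tooltip text']
```

Risk is weighted double here only to show that the factors need not count equally; a real team would calibrate the weights to its own project.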
The tester or the Quality Analyst team notes the result of each test case and records it in
their documentation or file. Various results can arise when executing the test cases. They
are:
Pass: The test cases executed for the module or function were successful.
Fail: The test cases executed for the module or function were not successful and
produced different outputs.
Not Run: The test cases are yet to be executed.
Partially Executed: Only a certain number of test cases passed, and the others did not
meet the given requirements.
Inconclusive: The test cases were executed but require further analysis before the
final submission.
In Progress: The test cases are currently being executed.
Unexpected Result: The test cases were executed but produced different, unexpected
results.
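The statuses above can be represented directly in code and summarised per run; the status names mirror the list, while the run data is illustrative.

```python
# Sketch: model the execution statuses listed above and summarise a run.
from enum import Enum
from collections import Counter

class Status(Enum):
    PASS = "Pass"
    FAIL = "Fail"
    NOT_RUN = "Not Run"
    PARTIALLY_EXECUTED = "Partially Executed"
    INCONCLUSIVE = "Inconclusive"
    IN_PROGRESS = "In Progress"
    UNEXPECTED_RESULT = "Unexpected Result"

# Illustrative results of one test run.
run = {"tc1": Status.PASS, "tc2": Status.FAIL,
       "tc3": Status.PASS, "tc4": Status.NOT_RUN}

# Count how many test cases ended in each status.
summary = Counter(status.value for status in run.values())
print(dict(summary))
```

Such a summary is exactly the kind of information recorded in the test execution report discussed later in the article.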
A test execution cycle is an iterative approach that helps in detecting errors. The test
execution cycle includes various processes. These are:
1. Requirement Analysis: In this phase, the QA team gathers all the necessary
requirements for test execution, for example, how many testers are needed, which
automation test tools are needed, and what testing can be covered under the given
budget. The QA team also plans according to the client or business requirements.
2. Test Planning: In this phase, the QA team plans when to start and complete the
testing, chooses the correct automation test tool, and decides which testers are needed
for executing the test plan. They further plan who should develop the test cases for
each module/function, who should execute the test cases, how many test cases need to
be executed, etc.
3. Test Case Development: This is the phase in which the QA team assigns a group of
testers to write or generate the test cases for each module. A tester with good domain
knowledge will more easily write the best test cases or test scripts. Prioritizing the
developed test cases is also a main factor.
4. Test Environment Setup: The test environment setup usually differs from project to
project. In some cases it is created by the team itself, and in others it is created by
clients or customers. The test environment is the combination of software and/or
hardware components on which the entire developed product is tested by executing all
the tests. It is essential, and it is sometimes carried out alongside the test case
development process.
5. Test Execution: This stage involves test execution by the team and all the detected
bugs are recorded and reported for remediation and rectification.
6. Test Closure: This is the final stage, in which the entire details of the test execution
process are recorded, including the end users' testing details. If any defects are found
during testing, the testing process is modified and repeated. Hence, it is a repetitive
process.
The Test Execution Report is a document that contains all the information about the test
execution process. It is documentation that is recorded and updated by the QA team, in
which they record all the day-to-day test execution activities, i.e., the tasks related to
testing. The documentation or report covers various activities. They are:
Write the suitable test cases for each module of the function.
Assign suitable test cases to respective modules or functions.
Execute both manual testing as well as automated testing for successful results.
Choose a suitable automated tool for testing the application.
Choose the correct test environment setup.
Note down the execution status of each test case and note down the time taken by the
system to complete the test cases.
Report all the success status and the failure status to the development team or to the
respective team regularly.
Track the test status again for the already failed test cases and report it to the team.
Highly skilled testers are required to perform the testing with few or zero
failures/defects.
Continuous testing is required until a successful test report is achieved.
https://swen90006.github.io/notes/Symbolic-Execution.html
Test Oracles
6.1. Learning outcomes of this chapter
In this chapter, we will discuss test oracles. Recall from Section Black-Box and White-Box
Testing that a test case consists of three elements.
The normal procedure for executing a test case is to execute the program using the inputs in
the test case, record the results, and then to determine if the outputs obtained are failures[1] or
not.
Footnotes
[1]: This is a failure in the sense of Section The Language of Failures, Faults, and Errors
Who or what determines if the results produced by a program are failures? One way is for a
human tester to look at the result of executing the test input and the expected results and
decide if the program has failed the test case. In this case the human tester is playing the role
of a test oracle.
A test oracle is someone or something that determines whether the program has passed or
failed a test case. Of course, it can be another program that returns "yes" if the actual
results are not failures and "no" if they are.
Definition
A test oracle is:
a program;
a process; or
a body of data
that determines whether the actual output from a program is a failure or not.
Ideally, an oracle should be automated [2], because then we can execute a larger volume of
test cases and gain greater coverage of the program; however, this is often extremely hard in
practice.
Footnotes
[2]: Automated means that we can execute the oracle with no human intervention.
Active oracle: A program that, given an input for a program-under-test, can generate the
expected output of that input.
Passive oracle: A program that, given an input for a program-under-test, and the actual
output produced by that program-under-test, verifies whether the actual output is correct.
Passive oracles are generally preferred, for two main reasons.
Firstly, passive oracles are typically easier to implement than active oracles. For example,
consider testing a program that sorts a list of numbers. It is considerably easier to check that
an output produced by the program-under-test is a sorted list than it is to sort the list
ourselves. This not only saves the tester some time, but also means that there is less chance
of introducing a fault into the oracle itself. If an active oracle is required to simulate the
entire program-under-test, it may be as difficult to implement, and therefore just as likely to
contain faults of its own.
The second reason that passive oracles are preferred is that they can handle non-
determinism. Recall from Section Programs that a program is non-deterministic if it can
return more than one output for a single input. If an active oracle is used, there is a good
chance that the output produced by the program-under-test will be different from the output
produced by the active oracle. However, if we use a passive oracle, which simply checks
whether the output is correct, then non-determinism is not an issue.
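The contrast between the two oracle styles can be sketched for the sorting example; in this illustrative code, Python's built-in `sorted` stands in for the program-under-test, and the two oracle functions are hypothetical names.

```python
# Sketch: an active oracle computes the expected output itself, while a
# passive oracle only verifies the actual output it is given.
from collections import Counter

def active_oracle(xs):
    """Active: must produce the expected output, as hard as sorting."""
    return sorted(xs)

def passive_oracle(xs, actual):
    """Passive: is `actual` ordered and a rearrangement of `xs`?"""
    ordered = all(a <= b for a, b in zip(actual, actual[1:]))
    return ordered and Counter(actual) == Counter(xs)

xs = [3, 1, 2]
actual = sorted(xs)            # stand-in for the program-under-test
assert passive_oracle(xs, actual)
assert active_oracle(xs) == actual
```

Note that the passive check never sorts anything itself, which is why it stays simpler than the program it judges and is unaffected by non-deterministic outputs.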
6.4. Types of Test Oracle
In general, to design a test oracle, there are several types of oracle that can be produced,
which are categorised by the way they are derived, or the way they run.
Formal specifications written in a precise mathematical notation are better for selecting and
creating test oracles than informal specifications written in a natural language. They can be
used as active oracles, by generating the expected output using a simulation tool, or as
passive oracles, by proving that the specification is satisfied by the input and the actual
output. These are unlikely in practice, and are mostly reserved for high-integrity applications
such as safety-, security-, or mission-critical systems.
Solved examples are developed by hand, or the results for a test input can be obtained from
texts and other reference works. This is especially useful for complex domains, in which
deriving the expected output automatically requires a process as complicated as the program
itself, and deriving it manually requires expertise in a specific area that a test engineer is
unlikely to have.
Data in the form of tables, documents, graphs, or other recorded results are a good form of
test oracle. The test input and actual results can be looked up in a table or database to see
whether they are correct or not.
These types of oracles have the disadvantage that the inputs chosen are restricted to the
examples that we have access to. Despite this, they are common due to their abundance in
many fields.
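A "recorded data" oracle can be sketched as a simple lookup; the table of verified (input, output) pairs below is illustrative, using exponentiation as a stand-in program.

```python
# Sketch: a lookup-table oracle built from previously verified results.
# Inputs absent from the table cannot be judged, which is exactly the
# restriction described above.

# Verified pairs: (base, exponent) -> base ** exponent.
verified_results = {
    (2, 3): 8,
    (3, 2): 9,
    (10, 0): 1,
}

def table_oracle(test_input, actual_output):
    """True/False if the input is in the table; None if unknown."""
    if test_input not in verified_results:
        return None                     # oracle cannot judge this input
    return verified_results[test_input] == actual_output

print(table_oracle((2, 3), 2 ** 3))     # True: matches the table
print(table_oracle((5, 5), 3125))       # None: not in the table
```

The `None` case makes the main disadvantage concrete: test inputs are restricted to the examples we happen to have recorded.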
In some cases, we can use certain metamorphic properties between tests to check each other.
For example, if we want to test a function that sorts a list of numbers, then we can make use
of the fact that, given a list, any permutation of that list will result in the same output of the
sort function (assuming a non-stable sort).
To do metamorphic testing, we generate an input for a program, execute this input, and then
generate another input whose output will be related to the first. In the sorting example, we
can generate a test input as a list of numbers, and then randomly permute the elements of the
list to get a new list. Then, we execute both tests on the sort function, and compare their
output. If their output is different, then we have produced a failure.
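The permutation relation for sorting can be written as a short metamorphic test; `sorted` again stands in for the program-under-test, and the random seed is fixed only so the sketch is reproducible.

```python
# Sketch of the metamorphic test described above: sorting any
# permutation of a list must give the same output as sorting the
# original list.
import random

def sort_under_test(xs):
    """Stand-in for the sorting program-under-test."""
    return sorted(xs)

random.seed(0)                                   # reproducible sketch
original = [random.randint(0, 100) for _ in range(20)]

permuted = original[:]
random.shuffle(permuted)                         # follow-up test input

# Metamorphic relation: outputs must be identical.
assert sort_under_test(original) == sort_under_test(permuted)
```

No expected output is needed anywhere: the two executions check each other, which is the whole point of a metamorphic oracle.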
Another example is a program for finding the shortest path between two nodes on a graph.
We can select any two nodes on the graph and run the program, returning a path. To check if
this path is correct, we can select any two nodes on that path and run the shortest path
program on those two nodes. The resulting path should be a sub-path of the first path.
The existence of metamorphic properties for programs is surprisingly common, and
metamorphic oracles have been used to test many numerical programs, but also many
applications in non-numerical domains as well, including bioinformatics, search engines,
machine learning, medical imaging, and web services.
An alternate implementation of a program can be executed to get the expected output. This
is not ideal, because experience has shown that faults in different implementations of the
same specification tend to be located on the same inputs. Therefore, an alternate
implementation is likely to have the same faults as the program-under-test, and as a result
some faults would not be detected via testing.
One approach that has shown to be useful is to provide a partial, simplified version of the
implementation, which does not implement the full behaviour of the program-under-test, and
does not consider efficiency or memory.
For example, if we are testing an efficient sorting algorithm, then we can restrict the oracle
(the alternative implementation) to only sorting lists of integers whose values, when sorted,
form a complete integer sequence; e.g., [4, 2, 5, 3], which sorts to [2, 3, 4, 5]. To perform
the sort, the oracle needs only to find the lowest and highest elements in the list using a
linear search, and then return a list with the lowest element at the first index, the highest
element at the last, and the corresponding elements in between. This is partial, and perhaps
not efficient, but it results in a list that is a sorted version of the inputs, and it is less likely
to contain a fault than the original sorting algorithm due to its reduced complexity.
Such an approach restricts the test inputs that can be used, but is often sufficient to find many
faults in a system.
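The partial oracle just described can be sketched directly; the function names are illustrative, and `sorted` stands in for the efficient sorter being tested.

```python
# Sketch of the partial alternate-implementation oracle: it only
# handles lists whose values form a complete integer sequence, so a
# linear scan for the minimum and maximum suffices to build the
# expected sorted output.

def partial_sort_oracle(xs):
    lo, hi = min(xs), max(xs)                    # linear searches
    # Applicable only when the values form a complete integer sequence.
    if len(xs) != hi - lo + 1 or set(xs) != set(range(lo, hi + 1)):
        raise ValueError("oracle only handles complete sequences")
    # Lowest element first, highest last, everything in between filled in.
    return list(range(lo, hi + 1))

def sort_under_test(xs):
    """Stand-in for the efficient sorting program-under-test."""
    return sorted(xs)

xs = [4, 2, 5, 3, 6]
assert sort_under_test(xs) == partial_sort_oracle(xs)
```

The `ValueError` branch is the restriction on test inputs the text mentions: the oracle refuses inputs outside its simplified domain rather than guessing.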
Perhaps the most widely used type of automated oracle in use today is the heuristic oracle.
These are oracles that provide approximate results for inputs, and tests that "fail" must be
scrutinised closely to check whether they are true or false positives.
The trick with heuristic oracles is to find patterns in complex programs (that is, patterns
between the inputs and outputs) and exploit them. For example, a very simple heuristic for
databases is that, when a new record is inserted into a table, the number of records in that
table should increase by 1. We can run thousands of tests inserting single records and
checking that the number of rows in the table increases by 1. This is not complete, though,
because we are not checking that the contents of the row are accurate.
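The row-count heuristic can be run against an in-memory SQLite database from the standard library; the table schema and row contents are illustrative.

```python
# Sketch of the row-count heuristic: after every insert, the number of
# rows must increase by exactly 1. This checks the count only, not the
# contents of the inserted row, so it is an incomplete (heuristic) oracle.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")

def count_rows():
    return conn.execute("SELECT COUNT(*) FROM users").fetchone()[0]

for i in range(100):
    before = count_rows()
    conn.execute("INSERT INTO users VALUES (?, ?)", (i, f"user{i}"))
    after = count_rows()
    assert after == before + 1, "heuristic oracle flagged a failure"

print(count_rows())  # 100
```

Because the check is cheap, it scales to thousands of automated insertions, which is precisely what makes heuristic oracles attractive.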
As a more realistic example, consider an oracle for checking a system that calculates driving
directions for a GPS-enabled device. If the algorithm finds the shortest path between the start
point and the destination, a complete oracle would need to check that this is indeed the
shortest path. However, to check this fully, our oracle would have to re-calculate the shortest
path as well, which would likely be as complicated as the original algorithm, and therefore
just as prone to faults. Instead, we can use a heuristic that states that the shortest path should
be within, e.g. 1.5 times the distance of a straight line between the two points. Anything
outside of this could signal a fault. This is clearly not complete: the distance between two
points may be small, while the shortest path via a road network may have to take a bridge that
is far away from the destination.
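The driving-directions heuristic can be sketched as a distance comparison; the 1.5 factor comes from the example above, while the coordinates and function names are illustrative assumptions (straight-line distance on a plane rather than on the Earth's surface).

```python
# Sketch of the GPS heuristic: flag a computed route if its length
# exceeds 1.5x the straight-line distance between the endpoints.
import math

def straight_line(p, q):
    """Euclidean distance between two planar points (simplifying assumption)."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def heuristic_oracle(start, end, route_length, factor=1.5):
    """False means the route *might* be faulty and needs manual scrutiny;
    it may still be a false positive (e.g., a detour over a bridge)."""
    return route_length <= factor * straight_line(start, end)

start, end = (0.0, 0.0), (3.0, 4.0)        # straight line = 5.0
print(heuristic_oracle(start, end, 6.0))   # True: within 7.5
print(heuristic_oracle(start, end, 9.0))   # False: scrutinise this route
```

As the text notes, a `False` here is only a signal, not a verdict: real road networks can legitimately exceed the factor.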
In fact though, all oracles are heuristic, in that none of them really replicates the expected
behaviour of the corresponding program. However, we use the term heuristic oracle to refer
to oracles that are designed based on some heuristics about the software under test.
The golden program is the ultimate source for a test oracle, but it is rare in practice. A
golden program is an executable specification, a previous version of the same program, or a
trusted system that delivers the same functionality and can be tested in parallel with the
program-under-test.
Still, the golden program is not a pipe dream. In industry, it is not uncommon to use the
previous release of a piece of software as a test oracle.
In these notes, we will not consider how to derive oracles. Current industry practice leaves
much of the test case generation, including the oracle, up to human testers, who typically
derive the oracles by looking at the specification and design of the artifact that they are
testing. In many unfortunate cases, the tester is left to guess what the behaviour of the
software should be.
State-of-the-art testing includes model-based testing, which is used to automate both input
and oracle generation. Using model-based testing, rather than deriving test cases directly,
the test engineer derives a model of the expected behaviour of the program-under-test, using
some formal, executable language. From this model, test inputs are generated automatically
(using the types of criteria that we discuss in these notes), and the expected outputs for those
inputs are calculated by simulating the model. This can be seen as a cross between the first
and last types of oracle: the model is an abstract alternative to the implementation, which is
both formal and executable. However, unlike other alternate implementations, the higher
level of abstraction means that the likelihood of faults being located on the same inputs is
reduced. Empirical evidence demonstrates that model-based testing is, in general, no more
expensive than manual test case generation; in many cases it is significantly more efficient,
and it is just as successful at locating faults.