Testing Material
Error: A discrepancy between a computed, observed, or measured value or condition and the true, specified, or theoretically correct value or condition. See: anomaly, bug, defect, exception, and fault.
Bug: A fault in a program which causes the program to perform in an unintended or unanticipated
manner. See: anomaly, defect, error, exception, fault.
The simple Wikipedia definition of a bug is: “A computer bug is an error, flaw, mistake, failure, or fault in a computer program that prevents it from working correctly or produces an incorrect result. Bugs arise from mistakes and errors, made by people, in either a program’s source code or its design.”
Alternatively, a bug is a fault in a program which causes the program to perform in an unintended or unanticipated manner. Lastly, the general definition of a bug is: “failure to conform to specifications”. If you want to detect and resolve defects in the early development stages, defect tracking should start simultaneously with the software development phases.
We will discuss more on writing an effective bug report in another article. Let’s concentrate here on the bug/defect life cycle.
To the above list you can add some optional fields if you are using a manual bug submission template. These optional fields are: customer name, browser, operating system, and file attachments or screenshots.
Once the bug is successfully logged, it is reviewed by the development or test manager. The test manager can set the bug status to Open, can assign the bug to a developer, or the bug may be deferred until the next release.
When the bug gets assigned to a developer, he/she can start working on it. The developer can set the bug status to Won’t Fix, Could Not Reproduce, Need More Information, or Fixed.
If the bug status set by the developer is either ‘Need More Info’ or ‘Fixed’, then QA responds with a specific action. If the bug is fixed, QA verifies the fix and can set the bug status to Verified Closed or Reopen.
2) Deferred: If the bug is not related to the current build, cannot be fixed in this release, or is not important enough to fix immediately, then the project manager can set the bug status to Deferred.
3) Assigned: The ‘Assigned to’ field is set by the project lead or manager, who assigns the bug to a developer.
4) Resolved/Fixed: When the developer makes the necessary code changes and verifies them, he/she can set the bug status to ‘Fixed’, and the bug is passed to the testing team.
5) Could Not Reproduce: If the developer is not able to reproduce the bug by following the steps given in the bug report by QA, the developer can mark the bug as ‘CNR’. QA then needs to check whether the bug is still reproducible and can reassign it to the developer with detailed reproduction steps.
6) Need More Information: If the developer is not clear about the reproduction steps provided by QA, he/she can mark the bug as ‘Need More Information’. In this case QA needs to add detailed reproduction steps and assign the bug back to the developer for a fix.
7) Reopen: If QA is not satisfied with the fix and the bug is still reproducible even after the fix, QA can mark it as ‘Reopen’ so that the developer can take appropriate action.
8) Closed: If the bug is verified by the QA team and the fix resolves the problem, QA can mark the bug as ‘Closed’.
9) Rejected/Invalid: Sometimes the developer or team lead can mark the bug as Rejected or Invalid if the system is working according to specifications and the bug is simply due to some misinterpretation.
Software Testing:
Software Testing is the process of executing a program or system with the intent of finding errors.
Or, it involves any activity aimed at evaluating an attribute or capability of a program or system and
determining that it meets its required results. Software is not unlike other physical processes where
inputs are received and outputs are produced. Where software differs is in the manner in which it
fails. Most physical systems fail in a fixed (and reasonably small) set of ways. By contrast, software
can fail in many bizarre ways. Detecting all of the different failure modes for software is generally
infeasible.
Severity of Bugs:
It is extremely important to understand the type & importance of every bug detected
during the testing & its subsequent effect on the users of the subject software application
being tested.
Such information is helpful to the developers and the management in deciding the
urgency or priority of fixing the bug during the product-testing phase.
Following Severity Levels are assigned during the Testing Phase:
Critical – the most dangerous level, which does not permit continuation of the testing effort beyond a particular point. A critical situation can arise due to an error message popping up or the system crashing, leading to forced full or partial closure of the application. The criticality of the situation can be judged by the fact that no workaround is feasible. A bug can also fall into the "Critical" category when some menu option is absent or special security permissions are needed to gain access to the function being tested.
High – a major defect level under which the product fails to behave according to the desired expectations, or which can lead to malfunctioning of other functions, thereby causing failure to meet the customer requirements. Bugs under this category can be tackled through some sort of workaround. Examples of this type are a mistake in calculation formulas or an incorrect field format in the database causing record updates to fail. There can be many similar instances.
Medium – defects falling under this category of medium or average severity do not have a performance effect on the application, but they are certainly not acceptable due to non-conformance to standards or company-wide conventions. Medium-level bugs are comparatively easier to tackle since simple workarounds are possible to achieve the desired performance objectives. An example of this type is a mismatch between a visible link and its corresponding text link.
Low – defects falling under the low-priority or minor-defect category are the ones which do not affect the functionality of the product. Low-severity failures generally do not happen during normal usage of the application and have very little effect on the business. Such bugs are generally related to the look & feel of the user interface and are mainly cosmetic in nature.
1. New Bug: When a bug is posted for the first time, its state is called "NEW". This
implies that the bug is not approved yet.
2. Open Bug: Once the software tester posts a bug, the team leader approves it after satisfying himself about its genuineness, and changes its state to "OPEN".
3. Assigned Bug: Once the lead changes the state to "OPEN", the bug is assigned to
the concerned developer team. The state of the bug is changed now to "ASSIGNED".
4. Test Bug: Once the developer fixes the bug, he transfers it to the testing team for the next round of testing. After fixing the bug and prior to releasing it back to the testing team, the state of the bug is changed to "TEST". In other words, the state "Test Bug" implies that the bug has been fixed and released to the testing team.
5. Deferred Bug: When the bug is expected to be fixed in a later release, its state is changed to deferred. Many factors can be responsible for this change: the priority of the bug may be low, there may be a lack of time before the release, or the bug may not have a major effect on the software.
6. Rejected Bug: If the developer feels that the bug is not a genuine one, he rejects it. This leads to a change of the bug's state to "REJECTED".
7. Duplicate Bug: If a particular bug is reported more than once, or two bugs describe the same issue, then the status of one of the bugs is changed to "DUPLICATE".
8. Verified Bug: Once the developer fixes the bug and its status is changed to "TEST",
the software tester confirms the absence of the bug. If the bug is not detected in the
software, the tester approves that the bug is duly fixed and changes its status to
"VERIFIED".
9. Reopened Bug: If the bug is detected again even after it is claimed to be fixed by the developer, the tester changes its status to "REOPENED". The cycle repeats until the bug is ultimately fixed and closed.
10. Closed Bug: Once the bug is fixed & the tester confirms its absence, he changes its
status to "CLOSED". This is the final state which implies that the bug is fixed, tested and
approved.
As is well known, prevention is better than cure; similarly, prevention of defects in software is much more effective and efficient in reducing the number of defects. Some organizations focus on the discovery of defects and their subsequent removal. Since discovering and removing defects is an expensive and inefficient process, it is better and more economical for an organization to focus its attention on activities which prevent defects.
Valid Bug: New -> Assigned -> Fixed but not patched -> Ready for Re-testing -> Closed & Fix has been Verified
Invalid Bug: New -> Not a Bug -> Closed since it is Not a Bug
Duplicate Bug: New -> Duplicate Bug -> Closed since it is a Duplicate Bug
Reopened Bug: New -> Assigned -> Fixed but not patched -> Ready for Re-testing -> Reopened -> Fixed but not patched -> Ready for Re-testing -> Closed & Fix has been Verified
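The status flows above can be viewed as a simple state machine. The sketch below (in Python, with status names taken from the flows above; any real bug tracker will use its own names and rules) shows how a tracking tool might validate a status change:

    # Minimal sketch of the bug life cycle as a state machine.
    # Status names follow the flows described above; real trackers differ.
    ALLOWED_TRANSITIONS = {
        "New": {"Assigned", "Not a Bug", "Duplicate", "Deferred"},
        "Assigned": {"Fixed", "Could Not Reproduce", "Need More Info", "Won't Fix"},
        "Fixed": {"Ready for Re-testing"},
        "Ready for Re-testing": {"Closed", "Reopened"},
        "Reopened": {"Assigned"},
        "Not a Bug": {"Closed"},
        "Duplicate": {"Closed"},
        "Deferred": {"Assigned"},
    }

    def change_status(current, new):
        # Reject any transition that is not part of the defined life cycle.
        if new not in ALLOWED_TRANSITIONS.get(current, set()):
            raise ValueError(f"Illegal transition: {current} -> {new}")
        return new

    # Walking through the 'Valid Bug' flow listed above:
    status = "New"
    for nxt in ["Assigned", "Fixed", "Ready for Re-testing", "Closed"]:
        status = change_status(status, nxt)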
Analysis of Bugs:
Bugs detected & logged during the testing phase provide valuable opportunity to
improve the product as well as the testing processes. The aim of every testing team
remains to achieve zero Customer Bugs. The majority of Customer Bugs start pouring in during the first 6 months to 1 year of product usage.
Immediately after the completion of the product testing, the testing teams should carry
out detailed analysis of the entire set of Invalid Bugs / Duplicate Bugs /
Could_Not_Be_Reproduced Bugs and come up with adequate measures to reduce their
count in future testing efforts.
However, once Customer Bugs start pouring in, the testing team immediately starts analyzing each one of them, tries to find out how and why these bugs were missed during the testing effort, and takes appropriate measures immediately.
Priority vs. Severity:
Priority refers to how soon the bug should be fixed, whereas severity refers to the seriousness of the bug's effect on the functionality of the product. A higher effect on the functionality leads to the assignment of a higher severity to the bug.
Product fixes driven by priority are based on project priorities, whereas product fixes driven by severity are based on the bug's severity.
1) Generally speaking, a "High Severity" bug would also carry a "High Priority" tag along
with it. However this is not a hard & fast rule. There can be many exceptions to this rule
depending on the nature of the application and its schedule of release.
2) High Priority & Low Severity: A spelling mistake in the name of the company on
the home page of the company’s web site is certainly a High Priority issue. But it can be
awarded a Low Severity just because it is not going to affect the functionality of the Web
site / application.
3) High Severity & Low Priority: A system crash encountered in a roundabout scenario, whose likelihood of detection by the client is minimal, will have HIGH severity. In spite of its major effect on the functionality of the product, it may be given a Low Priority by the project manager, since many other important bugs are likely to gain more priority over it simply because they are more visible to the client.
The industry experts, based upon requirements, have categorized many types of software testing. The following list presents a brief introduction to such types:
Unit Testing
System Testing
Integration Testing
Functional Testing
Performance Testing
Beta Testing
Acceptance Testing
Unit Testing:
A unit is the smallest compilable component of the software; a unit typically is the work of one programmer. The unit is tested in isolation with the help of stubs or drivers. Unit testing is functional and reliability testing in an engineering environment: producing tests for the behaviour of components of a product to ensure their correct behaviour prior to system integration. Unit testing is typically done by the programmers and not by the testers.
Integration Testing:
Testing of the application after combining / integrating its various parts to find out whether all parts function together correctly. The parts can be code modules, individual applications, client and server applications on a network, etc. It begins after two or more programs or application components have been successfully unit tested. This type of testing is especially relevant to client/server and distributed systems.
Monkey Testing:
A type of unit testing that runs with no specific test in mind. Here the "monkey" is the producer of any input data (which can be either file data or input from an input device).
Functional Testing:
Validating that an application or Web site conforms to its specifications and correctly performs all its required functions. This entails a series of tests which perform a feature-by-feature validation of behaviour, using a wide range of normal and erroneous input data. It can involve testing of the product's user interface, APIs, database management, security, installation, networking, etc. Functional testing can be performed on an automated or manual basis using black box or white box methodologies. This is usually done by the testers.
Performance Testing:
Performance testing can be applied to understand your application or Web site's scalability, or to benchmark the performance of third-party products such as servers and middleware being considered for purchase. This sort of testing is particularly useful for identifying performance bottlenecks in high-use applications. Performance testing generally involves an automated test suite, as this allows easy simulation of a variety of normal, peak, and exceptional load conditions. It validates that both the online response time and batch run times meet the defined performance requirements.
System Testing:
Falls within the scope of black box testing and, as such, should require no knowledge of the inner design of the code or logic. It is conducted on a complete, integrated system to evaluate the system's compliance with its specified requirements. System testing is a more limiting type of testing; it seeks to detect defects both within the inter-assemblages and within the system as a whole.
Alpha Testing:
Is simulated or actual operational testing by potential users / customers or an
independent test team at the developers' site. Alpha testing is often employed for off-
the-shelf software as a form of internal acceptance testing, before the software goes to
beta testing. It is usually done when the development of the software product is nearing
completion; minor design changes may still be made as a result of such testing.
Beta Testing:
Comes after alpha testing. Versions of the software, known as beta versions, are released
to a limited audience outside of the programming team. The software is released to
groups of people so that further testing can ensure the product has few faults or bugs.
Sometimes, beta versions are made available to the general public to obtain feedback from the maximum possible number of future users. Thus beta testing is done by end-users or others, and not by the programmers or testers.
Acceptance Testing:
Is the best industry practice and is the final testing, based on specifications provided by the end-user or customer, or based on use by end-users/customers over some limited period of time. In theory, when all the acceptance tests pass, the project can be said to be done.
A broad comparison between the two prime testing techniques, i.e. Black Box Testing and White Box Testing, is as under:
Black box testing is applied during later stages of testing, whereas white box testing is performed early in the testing process.
Black box testing broadens our focus on the information domain and might be called "testing in the large", i.e. testing bigger, monolithic programs; white box testing, as described by Hetzel, is "testing in the small", i.e. testing small program components (e.g., modules or a small group of modules).
Using black box testing techniques, we derive a set of test cases that satisfy the following criteria, whereas using white box testing, the software engineer can derive test cases that...
Code Inspection
A review technique carried out at the end of the coding phase for a module. A specification (and design documentation) for the module is distributed to the inspection team in advance. M. E. Fagan recommends an inspection team of about four people. The module programmer explains the module code to the rest of the team. A moderator records detected faults in the code and ensures there is no discussion of corrections. The code designer and code tester complete the team. Any faults are corrected outside the inspection, and re-inspection may take place subject to the quality targets adopted.
Code Walkthroughs
A source code walkthrough often is called a technical code walkthrough or a peer code
review. The typical scenario finds a developer inviting his technical lead, a database
administrator, and one or more peers to a meeting to review a set of source modules
prior to production implementation. Often the modified code is indicated after the fact
on a hardcopy listing with annotations or a highlighting pen, or within the code itself
with comments.
Statement Coverage
Statement coverage identifies which statements in a method or class have been
executed. It is a simple metric to calculate, and a number of open source products exist
that measure this level of coverage.
Ultimately, the benefit of statement coverage is its ability to identify which blocks of
code have not been executed. The problem with statement coverage, however, is that
it does not identify bugs that arise from the control flow constructs in your source code,
such as compound conditions or consecutive switch labels. This means that you easily
can get 100 percent coverage and still have glaring, uncaught bugs.
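A small, hypothetical example (the function and figures are invented purely for illustration) shows the pitfall described above: one test executes every statement of the function, so statement coverage reports 100 percent, yet an inverted condition goes unnoticed.

    def apply_discount(price, is_member):
        # Bug: the discount should apply to members only, but the
        # condition is inverted.
        if not is_member:
            price = price - 10
        return price

    # This single test executes every statement (100% statement coverage)...
    assert apply_discount(100, is_member=False) == 90
    # ...but it never exercises the other outcome of the decision, where a
    # member (is_member=True) silently fails to receive the discount.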
Branch Coverage
A branch is the outcome of a decision, so branch coverage simply measures which
decision outcomes have been tested.
This sounds great because it takes a more in-depth view of the source code than simple
statement coverage, but branch coverage can also leave you wanting more.
Determining the number of branches in a method is easy. Boolean decisions obviously
have two outcomes, true and false, whereas switches have one outcome for each case
—and don't forget the default case! The total number of decision outcomes in a method
is therefore equal to the number of branches that need to be covered plus the entry
branch in the method (after all, even methods with straight line code have one branch).
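As a hypothetical illustration of the counting rule above, the function below contains two boolean decisions, giving 2 + 2 decision outcomes plus the entry branch, i.e. five branches to cover; three inputs are enough to cover them all.

    def classify(n):
        # Decision 1: two outcomes (true / false).
        if n < 0:
            return "negative"
        # Decision 2: two outcomes (true / false).
        if n == 0:
            return "zero"
        return "positive"

    # Branch coverage needs at least these three test inputs:
    assert classify(-1) == "negative"
    assert classify(0) == "zero"
    assert classify(5) == "positive"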
Path Coverage
A path represents the flow of execution from the start of a method to its exit. A method
with N decisions has 2^N possible paths, and if the method contains a loop, it may
have an infinite number of paths. Fortunately, you can use a metric called cyclomatic
complexity to reduce the number of paths you need to test.
Cyclomatic complexity
The concept, although not the method, is somewhat similar to that of general text
complexity measured by the Flesch-Kincaid Readability Test.
Cyclomatic complexity is computed using a graph that describes the control flow of the
program. The nodes of the graph correspond to the commands of a program. A directed
edge connects two nodes if the second command might be executed immediately after
the first command.
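For a rough sketch of the computation, the standard formula is V(G) = E - N + 2P, where E is the number of edges, N the number of nodes and P the number of connected components of the control-flow graph. The graph below is a hand-built example for a function with a single if/else decision, not the output of any tool.

    # Control-flow graph of a tiny function with one if/else decision,
    # written as an adjacency list.
    cfg = {
        "entry": ["decision"],
        "decision": ["then_body", "else_body"],
        "then_body": ["exit"],
        "else_body": ["exit"],
        "exit": [],
    }

    nodes = len(cfg)                                 # N = 5
    edges = sum(len(succ) for succ in cfg.values())  # E = 5
    components = 1                                   # P = 1 (one function)

    cyclomatic_complexity = edges - nodes + 2 * components
    print(cyclomatic_complexity)  # 2: one decision gives two independent paths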
Mutation testing
Mutation testing (or mutation analysis) is a method of software testing which involves modifying a program's source code or byte code in small ways. In short, any tests which still pass after the code has been mutated are considered defective. These so-called mutations are based on well-defined mutation operators that either mimic typical programming errors (such as using the wrong operator or variable name) or force the creation of valuable tests (such as driving each expression to zero). The purpose is to help the tester develop effective tests or locate weaknesses in the test data used for the program or in sections of the code that are seldom or never accessed during execution.
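As a minimal, hand-made illustration (not tied to any particular mutation tool), consider a mutant created by the 'wrong operator' kind of mutation mentioned above; a test suite that only checks one convenient input does not kill it, which exposes a weakness in the test data.

    def max_of(a, b):
        return a if a >= b else b

    # Mutant produced by swapping the comparison operator (>= becomes <=):
    def max_of_mutant(a, b):
        return a if a <= b else b

    # Weak test data: both the original and the mutant pass, so the mutant
    # survives and reveals a gap in the tests (only equal arguments used).
    assert max_of(3, 3) == 3
    assert max_of_mutant(3, 3) == 3

    # Stronger test data kills the mutant: the original passes, the mutant fails.
    assert max_of(5, 2) == 5
    # assert max_of_mutant(5, 2) == 5   # would fail: the mutant returns 2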
Types of Black Box Testing Techniques: The following techniques are used for performing black box testing.
b) Different terminating conditions of for-loops, while-loops and repeat-loops may cause defects to move around the boundary conditions (see the sketch after this list).
c) The requirements themselves may not be clearly understood, especially around the boundaries, thus causing even a correctly coded program to not perform the correct way.
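A small, hypothetical example of point (b): an off-by-one error in a loop's terminating condition only shows up at the boundary itself, which is why boundary-oriented test cases are chosen at, just below, and just above each boundary.

    def sum_first_n(values, n):
        # Intended: sum the first n elements of 'values'.
        # Bug: the loop stops one element early (off-by-one in the
        # terminating condition), a classic boundary defect.
        total = 0
        for i in range(n - 1):        # should be range(n)
            total += values[i]
        return total

    data = [10, 20, 30]
    # Boundary-oriented test cases:
    assert sum_first_n(data, 0) == 0       # empty case: passes even with the bug
    # assert sum_first_n(data, 1) == 10    # boundary of one element: fails, exposing the bug
    # assert sum_first_n(data, 3) == 60    # full list: also fails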
Decision tables are ideal for describing situations in which a number of combinations of
actions are taken under varying sets of conditions.
A “Cause” represents a distinct input condition that brings about an internal change in
the system. An “Effect” represents an output condition, a system transformation or a
state resulting from a combination of causes.
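A decision table can be written down directly as data and then turned into test cases row by row. The sketch below uses hypothetical conditions and actions (a refund approval rule invented only for illustration) to show causes mapping to an effect.

    # Hypothetical decision table: whether to approve a refund request.
    # Causes (conditions): receipt present?, within 30 days?
    # Effect (action): approve / escalate / reject
    DECISION_TABLE = {
        (True, True): "approve",
        (True, False): "escalate",
        (False, True): "escalate",
        (False, False): "reject",
    }

    def refund_action(has_receipt, within_30_days):
        return DECISION_TABLE[(has_receipt, within_30_days)]

    # Each row of the table becomes one black box test case:
    assert refund_action(True, True) == "approve"
    assert refund_action(False, False) == "reject"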
Non-Functional Testing:
1) User Interface Testing:
a) Adequacy of look and feel factors like background color, font size, spelling mistakes, etc.
c) Ease of navigation.
2) Performance Testing:
To verify the speed of the process for completing a transaction. The following performance testing techniques are employed here.
a) Load Testing or Scalability Testing: To verify whether the application supports the customer-expected load across the desired number of configured systems.
b) Stress Testing: Aimed at estimating the peak limit of load the application can handle; a minimal sketch follows this list. For such load testing and stress testing, automation tools like LoadRunner are deployed.
c) Data Volume Testing: To verify the maximum storage capacity of the application database.
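A very small, hypothetical load-test sketch follows (plain Python threads against a placeholder transaction, not a real tool such as LoadRunner); it only shows the idea of firing many concurrent transactions and recording response times, with the user counts chosen arbitrarily.

    import threading, time

    def transaction():
        # Placeholder for one user transaction (e.g. an HTTP request).
        time.sleep(0.01)

    def run_load(concurrent_users):
        timings = []
        def worker():
            start = time.time()
            transaction()
            timings.append(time.time() - start)
        threads = [threading.Thread(target=worker) for _ in range(concurrent_users)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        print(f"{concurrent_users} users, worst response {max(timings):.3f}s")

    run_load(50)    # expected (load) level
    run_load(500)   # stress: keep raising the load until response times degrade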
3) Security Testing:
To verify the privacy of user operations. During security testing, the major focus is laid on the following two factors.
a) Authorization: To verify whether the application permits valid users while preventing invalid users.
6) Configuration Testing:
To verify whether the application supports hardware devices based on different technologies, e.g. the application is to be checked with printers based upon various technologies.
8) Installation Testing:
This test is aimed to verify the following factors.
9) Sanitation Testing:
This test is aimed to find out the presence of extra features in the application, although
not specified in the client requirements.
Functional Testing:
1)Unit Testing:
In software engineering, unit testing is a test (often automated) that validates that
individual units of source code are working properly. A unit is the smallest testable part
of an application. In procedural programming a unit may be an individual program,
function, procedure, etc., while in object-oriented programming, the smallest unit is a
method, which may belong to a base / super class, abstract class or derived / child class.
Ideally, each test case is independent from the others; test doubles such as stubs, mocks, or fake objects, as well as test harnesses, can be used to assist in testing a module in isolation.
Unit testing is typically done by software developers to ensure that the code they have
written meets software requirements and behaves as the developer intended.
The goal of unit testing is to isolate each part of the program and show that the
individual parts are correct. A unit test provides a strict, written contract that the piece of
code must satisfy. As a result, it affords several benefits.
The following three steps of unit testing effectively address the goal of finding faults in software modules.
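A minimal sketch of a unit test follows, using Python's standard unittest module and a small hypothetical function under test (word_count), to make the 'written contract' idea above concrete.

    import unittest

    def word_count(text):
        # Unit under test: count whitespace-separated words.
        return len(text.split())

    class WordCountTest(unittest.TestCase):
        # Each test case is independent and documents one expected behaviour.
        def test_empty_string_has_zero_words(self):
            self.assertEqual(word_count(""), 0)

        def test_counts_words_separated_by_spaces(self):
            self.assertEqual(word_count("unit testing in isolation"), 4)

    if __name__ == "__main__":
        unittest.main()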
2) Sanity Testing:
A sanity test or sanity check is a basic test to quickly evaluate the validity of a claim or calculation. In mathematics, for example, when multiplying by three or nine, verifying that the sum of the digits of the result is a multiple of 3 or 9 (casting out nines) is a sanity test.
In software development, a sanity test (a form of software testing which offers "quick, broad, and shallow testing") determines whether it is reasonable to proceed with further testing.
Software sanity tests are commonly conflated with smoke tests. A smoke test determines whether it is possible to continue testing, as opposed to whether it is reasonable. A software smoke test determines whether the program launches and whether its interfaces are accessible and responsive (for example, the responsiveness of a web page or an input button). If the smoke test fails, it is impossible to conduct a sanity test.
In contrast, the ideal sanity test exercises the smallest subset of application functions
needed to determine whether the application logic is generally functional and correct (for
example, an interest rate calculation for a financial application). If the sanity test fails, it
is not reasonable to attempt more rigorous testing. Both sanity tests and smoke tests are
ways to avoid wasting time and effort by quickly determining whether an application is
too flawed to merit any rigorous testing. Many companies run sanity tests on a weekly
build as part of their development process.
The Hello World program is often used as a sanity test for a development environment. If Hello World fails to compile, the basic environment (or the compile process the user is attempting) has a configuration problem; if it works, the environment is likely set up correctly.
3) Smoke Testing:
Smoke testing is a term used in plumbing, woodwind repair, electronics, and computer
software development. It refers to the first test made after repairs or first assembly to
provide some assurance that the system under test will not catastrophically fail. After a
smoke test proves that the pipes will not leak, the keys seal properly, the circuit will not
burn, or the software will not crash outright, the assembly is ready for more stressful
testing.
In the software testing area, smoke testing is a preliminary to further testing, intended to reveal simple failures severe enough to reject a prospective software release. In this case, the smoke is metaphorical.
Smoke testing is done by developers before the build is released or by testers before
accepting a build for further testing.
In software engineering, a smoke test generally consists of a collection of tests that can be applied to a newly created or repaired computer program. Sometimes the tests are performed by the automated system that builds the final software. In this sense a smoke test is the process of validating code changes before the changes are checked into the larger product’s official source code collection. After code reviews, smoke testing is the most cost-effective method for identifying and fixing defects in software; some even believe that it is the most effective of all.
In software testing, a smoke test is a collection of written tests that are performed on a
system prior to being accepted for further testing. This is also known as a build
verification test. This is a "shallow and wide" approach to the application. The tester
"touches" all areas of the application without getting too deep, looking for answers to
basic questions like, "Can I launch the test item at all?", "Does it open to a window?", "Do
the buttons on the window do things?". There is no need to get down to field validation or
business flows. If you get a "No" answer to basic questions like these, then the
application is so badly broken, there's effectively nothing there to allow further testing.
These written tests can either be performed manually or using an automated tool. When
automated tools are used, the tests are often initiated by the same process that
generates the build itself.
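The sketch below shows what such a "shallow and wide" build verification script might look like in Python; the package name myapp and the --check-only flag are hypothetical, chosen only to show the shape of a smoke test rather than any specific product.

    import importlib
    import sys

    def smoke_test():
        checks = []
        # 1. Can the application package be imported at all?
        try:
            app = importlib.import_module("myapp")   # hypothetical package name
            checks.append(("import myapp", True))
        except ImportError:
            checks.append(("import myapp", False))
            return checks
        # 2. Does the main entry point exist and run without crashing?
        try:
            app.main(["--check-only"])               # hypothetical entry point and flag
            checks.append(("launch", True))
        except Exception:
            checks.append(("launch", False))
        return checks

    if __name__ == "__main__":
        results = smoke_test()
        for name, ok in results:
            print(f"{name}: {'PASS' if ok else 'FAIL'}")
        sys.exit(0 if all(ok for _, ok in results) else 1)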
4) Integration Testing:
Integration testing (sometimes called Integration and Testing, abbreviated as I&T) is the
phase of software testing in which individual software modules are combined and tested
as a group. It follows unit testing and precedes system testing.
Integration testing takes as its input modules that have been unit tested, groups them in
larger aggregates, applies tests defined in an integration test plan to those aggregates,
and delivers as its output the integrated system ready for system testing.
Purpose:
The purpose of integration testing is to verify functional, performance and reliability
requirements placed on major design items. These "design items", i.e. assemblages (or
groups of units), are exercised through their interfaces using black box testing, success
and error cases being simulated via appropriate parameter and data inputs. Simulated
usage of shared data areas and inter-process communication is tested and individual
subsystems are exercised through their input interface. Test cases are constructed to
test that all components within assemblages interact correctly, for example across
procedure calls or process activations, and this is done after testing individual modules,
i.e. unit testing.
The overall idea is a "building block" approach, in which verified assemblages are added
to a verified base which is then used to support the integration testing of further
assemblages.
a) Top Down Integration Testing
Disadvantages
Stubs have to be written with utmost care, as they will simulate the setting of output parameters. It is difficult to have other people or third parties perform this testing; mostly developers will have to spend time on it.
b) Bottom Up Integration Testing
Advantages
Behaviour of the interaction points is crystal clear, as components are added in a controlled manner and tested repetitively.
Appropriate for applications where a bottom-up design methodology is used.
Disadvantages
Writing and maintaining test drivers or harnesses is more difficult than writing stubs.
This approach is not suitable for software developed using a top-down approach.
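A small sketch of the stub idea discussed above (the PaymentGatewayStub and OrderService names are hypothetical): in top-down integration the higher-level module is exercised through its real interface while the not-yet-integrated lower-level component is replaced by a stub that simulates its output parameters.

    # The real lower-level module is not yet integrated; a stub stands in for it.
    class PaymentGatewayStub:
        # Simulates the lower-level component's outputs with canned values.
        def charge(self, amount):
            return {"status": "approved", "amount": amount}

    # Higher-level module under test, exercised through its real interface.
    class OrderService:
        def __init__(self, gateway):
            self.gateway = gateway

        def place_order(self, amount):
            result = self.gateway.charge(amount)
            return result["status"] == "approved"

    # Top-down integration step: real OrderService plus the stubbed gateway.
    service = OrderService(PaymentGatewayStub())
    assert service.place_order(100) is True

In a bottom-up approach the roles are reversed: the real lower-level component is exercised by a test driver that plays the part of the missing higher-level module.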
5) Usability Testing:
Usability testing is a black-box testing technique. The aim is to observe people using the
product to discover errors and areas of improvement. Usability testing generally involves
measuring how well test subjects respond in four areas: efficiency, accuracy, recall, and
emotional response. The results of the first test can be treated as a baseline or control
measurement; all subsequent tests can then be compared to the baseline to indicate
improvement.
Performance -- How much time, and how many steps, are required for people to
complete basic tasks? (For example, find something to buy, create a new account,
and order the item.)
Accuracy -- How many mistakes did people make? (And were they fatal or
recoverable with the right information?)
Recall -- How much does the person remember afterwards or after periods of non-
use?
Emotional response -- How does the person feel about the tasks completed? Is the
person confident, stressed? Would the user recommend this system to a friend?
6) System Testing:
System testing of software or hardware is testing conducted on a complete, integrated
system to evaluate the system's compliance with its specified requirements. System
testing falls within the scope of black box testing, and as such, should require no
knowledge of the inner design of the code or logic.
As a rule, system testing takes, as its input, all of the "integrated" software components
that have successfully passed integration testing and also the software system itself
integrated with any applicable hardware system(s). The purpose of integration testing is
to detect any inconsistencies between the software units that are integrated together
(called assemblages) or between any of the assemblages and the hardware. System
testing is a more limiting type of testing; it seeks to detect defects both within the "inter-
assemblages" and also within the system as a whole.
The following types of testing are typically carried out as part of system testing:
2) Usability testing
3) Performance testing
4) Compatibility testing
5) Load testing
6) Volume testing
7) Stress testing
8) Security testing
9) Scalability testing
7) Regression Testing:
Regression testing is any type of software testing which seeks to uncover regression
bugs. Regression bugs occur whenever software functionality that previously worked as
desired, stops working or no longer works in the same way that was previously planned.
Typically regression bugs occur as an unintended consequence of program changes.
More specific forms of regression testing are known as sanity testing, when quickly
checking for erratic behavior, and smoke testing when testing for basic functionality.
8) Alpha & Beta Testing:
Alpha Test: The first test of newly developed hardware or software in a laboratory
setting. When the first round of bugs has been fixed, the product goes into beta test with
actual users. For custom software, the customer may be invited into the vendor's
facilities for an alpha test to ensure the client's vision has been interpreted properly by
the developer.
Beta Test: A test of new or revised hardware or software that is performed by users at
their facilities under normal operating conditions. Beta testing follows alpha testing.
Vendors of packaged software often offer their customers the opportunity of beta testing
new releases or versions, and the beta testing of elaborate products such as operating systems can take months.
9)Acceptance Testing:
Acceptance testing generally involves running a suite of tests on the completed system.
Each individual test, known as a case, exercises a particular operating condition of the
user's environment or feature of the system, and will result in a pass or fail Boolean
outcome.
Process:
The acceptance test suite is run against the supplied input data or using an acceptance
test script to direct the testers. Then the results obtained are compared with the
expected results. If there is a correct match for every case, the test suite is said to pass.
If not, the system may either be rejected or accepted on conditions previously agreed
between the sponsor and the manufacturer.
The objective is to provide confidence that the delivered system meets the business requirements of both sponsors and users. The acceptance phase may also act as the final quality gateway, where any quality defects not previously detected may be uncovered.
Static Testing:
The verification activities fall into the category of static testing. During static testing, you have a checklist to check whether the work you are doing follows the set standards of the organization. These standards can be for coding, integration and deployment. Reviews, inspections and walkthroughs are static testing methodologies.
Dynamic Testing:
Dynamic testing involves working with the software, giving input values and checking if the output is as expected. These are the validation activities. Unit tests, integration tests, system tests and acceptance tests are a few of the dynamic testing methodologies.
Verification:
Verification ensures the product is designed to deliver all functionality to the customer; it
typically involves reviews and meetings to evaluate documents, plans, code,
requirements and specifications; this can be done with checklists, issues lists,
walkthroughs and inspection meetings.
Validation:
Test case:
Test Suite:
In software development, a test suite, less commonly known as a validation suite, is a
collection of test cases that are intended to be used to test a software program to show
that it has some specified set of behaviours. A test suite often contains detailed
instructions or goals for each collection of test cases and information on the system configuration to be used during testing.
Test Plan:
A test plan typically records, among other things:
Who will do it
How long it will take (although this may vary, depending upon resource
availability).
What the test coverage will be, i.e. what quality level is required
Test Design Specification:
detailing test conditions and the expected results as well as test pass criteria.
Test Case Specification:
specifying the test data for use in running the test conditions identified in the Test Design Specification.
Test Procedure Specification:
detailing how to run each test, including any set-up preconditions and the steps that need to be followed.
Test Item Transmittal Report:
reporting on when tested software components have progressed from one stage of testing to the next.
Test Log:
recording which test cases were run, who ran them, in what order, and whether each test passed or failed.
Test Incident Report:
detailing, for any test that failed, the actual versus expected result, and other
information intended to throw light on why a test has failed. This document is
deliberately named as an incident report, and not a fault report. The reason is that a
discrepancy between expected and actual results can occur for a number of reasons
other than a fault in the system. These include the expected results being wrong, the
test being run wrongly, or inconsistency in the requirements meaning that more than one
interpretation could be made. The report consists of all details of the incident such as
actual and expected results, when it failed, and any supporting evidence that will help in
its resolution. The report will also include, if possible, an assessment of the impact of an
incident upon testing.
Test Strategy:
It is a company-level document developed by the quality assurance manager or quality analyst category of people. It defines the testing approach to reach the standards. During test strategy document preparation, QA people concentrate on the factors below:
1. Scope and objective
2. Budget control
3. Testing approach
4. Test deliverables
5. Roles and responsibilities
6. Communication and status reporting
7. Automation tools (if needed)
8. Testing measurements
9. Risks and mitigations
10. Change/configuration management
11. Training plan
Test Script:
A test script in software testing is a set of instructions that will be performed on the
system under test to test that the system functions as expected.
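Such a script can be as simple as an ordered list of steps, each with an expected result; a manual test script would list the same steps in prose. The login function, credentials, and step descriptions below are hypothetical, used only to show the shape of a small automated test script.

    # Hypothetical test script: each step performs an instruction on the
    # system under test and checks the expected result.
    def login(username, password):
        # Placeholder for the system under test.
        return username == "admin" and password == "secret"

    steps = [
        ("Valid credentials are accepted", lambda: login("admin", "secret") is True),
        ("Wrong password is rejected", lambda: login("admin", "wrong") is False),
        ("Unknown user is rejected", lambda: login("guest", "secret") is False),
    ]

    for description, check in steps:
        print(f"{description}: {'PASS' if check() else 'FAIL'}")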
Exit Criteria:
Deadlines (release deadlines, testing deadlines, etc.)
Test cases completed with certain percentage passed
Test budget depleted
Coverage of code/functionality/requirements reaches a
specified point
Bug rate falls below a certain level
Beta or alpha testing period ends