
CCS366- SOFTWARE TESTING AND AUTOMATION

UNIT I
FOUNDATIONS OF SOFTWARE TESTING
Why do we test Software?, Black-Box Testing and White-Box Testing, Software Testing
Life Cycle, V-model of Software Testing, Program Correctness and Verification,
Reliability versus Safety, Failures, Errors and Faults (Defects), Software Testing
Principles, Program Inspections, Stages of Testing: Unit Testing, Integration Testing,
System Testing

1.1 INTRODUCTION
Software testing is a method for finding out if the software meets requirements
and is error-free. It involves running software or system components manually or
automatically in order to evaluate one or more characteristics. Finding faults and unfulfilled
requirements, measured against the documented specifications, is the aim of software testing.
Some prefer to use the terms white-box and black-box testing to describe the
concept of software testing. To put it simply, software testing is the process of
validating the application under test.

1.1.1 : What is Software Testing


Software testing is the process of determining whether a piece of software is correct by
taking into account all of its characteristics (reliability, scalability, portability, reusability
and usability) and analyzing how its various components operate in order to
detect any bugs, faults or flaws.
Software testing delivers assurance of the software's fitness and offers a detached,
objective view of the program. It entails testing each component that makes up
the necessary services to see whether or not it satisfies the set criteria. Additionally, the
procedure informs the customer about the software's quality.
In simple words, "Testing is the process of executing a program with the intent of
finding faults."
Testing is required because failure of the program at any point owing to a lack of
testing would be harmful. Software cannot be released to the end user without being tested.

1.1.2 : What is Testing


Testing is a collection of methods to evaluate an application's suitability for use in
accordance with a predetermined script; however, testing is not able to detect every
application flaw. The basic goal of testing is to find application flaws so that they may be
identified and fixed. It merely shows that a product doesn't work in certain particular
circumstances, not that it works correctly under all circumstances.
Testing compares software behaviour against mechanisms that may identify
problems in the software. Such mechanisms may incorporate previous
iterations of the same or similar items, comparable goods, expected-purpose interfaces,
pertinent standards or other criteria, but are not restricted to these.
Testing includes both the analysis and execution of the code in different settings and
environments, as well as whole-code analysis. In the present software development
scenario, a testing team may be independent from the development team so that information
obtained from testing may be utilized to improve the software development process.


The intended audience's adoption of the software, its user-friendly graphical user
interface, its robustness under functional and load tests, etc., are all factors in its success. For instance, the
target market for banking software and for a video game are very different. As a result, an organization
can determine if a software product it produces will be useful to its customers and other
audience members.

1.1.3 : Why Software Testing is Important? (What is the need of Software Testing?)
Software testing is a method for finding out if the software meets requirements
and is error-free. Software testing is a very expensive and critical activity, but releasing the
software without testing is definitely more expensive and dangerous. We should try to find
more errors in the early phases of software development. The cost of removal of such errors
will be very reasonable as compared to those errors which we may find in the later phases of
software development. The cost to fix errors increases drastically from the specification
phase to the test phase and finally to the maintenance phase, as shown in Figure 1.1.

Figure 1.1 Phase wise cost of fixing an error

If an error is found and fixed in the specification and analysis phase, it hardly costs anything.
We may term this as “1 unit of cost” for fixing an error during specifications and analysis
phase. The same error, if propagated to design, may cost 10 units and if, further propagated
to coding, may cost 100 units. If it is detected and fixed during the testing phase, it may lead
to 1000 units of cost. If it could not be detected even during testing and is found by the
customer after release, the cost becomes very high. We may not be able to predict the cost of
failure for a life critical system’s software. The world has seen many failures and these
failures have been costly to the software companies.
The fact is that we release software that still contains errors, even after doing
sufficient testing. No software would ever be released by its developers if they were asked to
certify that the software is free of errors. Testing, therefore, continues up to the point where it is
considered that the cost of further testing significantly outweighs the returns.


1.1.4. Consequences of errors in Software in real life situations


Software flaws may be costly or even harmful, so testing is crucial. History is
replete with instances in which software defects led to financial and personal loss:
 Over 300,000 traders in the financial markets were impacted after a software error
caused the London Bloomberg terminal to collapse in April 2015. It made the
government delay a 3-billion-pound debt auction.
 Nissan recalled nearly 1 million vehicles from the market because the airbag sensor
detector software was flawed. Due to this software flaw, two accidents were
documented.
 Starbucks' POS system malfunctioned, forcing it to shut nearly 60% of its
locations in the United States and Canada. Some stores served free coffee because
they could not process purchases.
 Due to a technical error, some of Amazon's third-party sellers had their product
prices slashed to 1p. They suffered severe losses as a result.
 A weakness in Windows 10: due to a defect in the win32k system, users were able to
bypass security sandboxes.
 In 2015, a software flaw rendered the F-35 fighter jet incapable of accurately
detecting targets.
 On April 26, 1994, an Airbus A300 operated by China Airlines
crashed due to a software error, killing 264 people.
 Three patients died and three others were badly injured in 1985 when a software
glitch caused Canada's Therac-25 radiation treatment system to fail and deliver
deadly radiation doses to patients.
 In May 1996, a software error led to the crediting of 920 million US dollars to the
bank accounts of 823 clients of a large U.S. bank.
 In April 1999, a software error resulted in the failure of a $1.2 billion military
satellite launch, making it the most expensive accident in history.

1.1.5 : What are the Benefits of Software Testing?


The following are advantages of employing software testing:
Cost effective: One of the key benefits of software testing is that it is cost-effective.
Timely testing of any IT project enables long-term financial savings. If flaws are
found earlier in the software testing process, fixing them is less expensive.
Security: This is an important advantage of software testing. People look for reliable
products, and testing assists in eradicating hazards and issues early.
Product quality: Meeting quality criteria is essential for any software product. Testing guarantees that
buyers get a high-quality product.
Customer satisfaction: Providing consumers with satisfaction is the primary goal of every
product. The optimum user experience is guaranteed through UI/UX testing.

1.1.6 : Type of Software Testing


1. Manual testing:
The process of checking the functionality of an application as per the customer needs
without the help of automation tools is known as manual testing. While performing
manual testing on any application, we do not need specific knowledge of any testing
tool; rather, we need a proper understanding of the product so we can easily prepare the test
document.


Manual testing can be further divided into three types of testing, which are as
follows:
 White box testing
 Black box testing
 Grey box testing.

Figure: Types of Testing

2. Automation testing:
Automation testing is the process of converting manual test cases into test scripts
with the help of automation tools or a programming language. With the help of
automation testing, we can enhance the speed of our test execution because it does not
require human effort during execution.
Manual Testing Vs Automation Testing
• In manual testing, the test cases are executed by a human tester. In automation testing, the test cases are executed by software tools.
• Manual testing is time-consuming. Automation testing is faster than manual testing.
• Manual testing takes up human resources. Automation testing takes up automation tools and trained employees.
• Exploratory testing is possible in manual testing. Exploratory testing is not possible in automation testing.
• In manual testing, the initial investment is less. In automation testing, the initial investment is more.


1.2 WHITE-BOX TESTING, BLACK-BOX TESTING AND GREY-BOX TESTING


Black box testing (also called functional testing) is testing that ignores the internal mechanism
of a system or component and focuses solely on the outputs generated in response to selected
inputs and execution conditions. White box testing (also called structural testing and glass box
testing) is testing that takes into account the internal mechanism of a system or component.

1.2.1 What is White-Box Testing


 White box testing is a type of software testing that examines the internal
structure and design of a program or application.
 Because of the system's internal viewpoint, the phrase "white box" is employed. The
terms "clear box", "white box" or "transparent box" refer to the capability of seeing
the software's inner workings through its exterior layer.
 Developers carry it out before sending the program to the testing team, who then
conducts black-box testing. Testing the infrastructure of the application is the
primary goal of white-box testing. As it covers unit testing and integration testing, it
is performed at the lower levels. Given that it primarily focuses on the code structure,
paths, conditions and branches of a program or piece of software, it necessitates
programming skills. Focusing on the flow of inputs and outputs through the program and
enhancing its security are the main objectives of white-box testing.
 It is also referred to as transparent testing, code-based testing, structural testing and
clear-box testing. It is a good fit and is recommended for testing algorithms.

1.2.1.1 Types of White Box Testing in Software Testing


The following are some common types of white box testing:
 Unit testing: Tests individual units or components of the software to ensure they
function as intended.
 Integration testing: Tests the interactions between different units or components of
the software to ensure they work together correctly.
 Functional testing: Tests the functionality of the software to ensure it meets the
requirements and specifications.
 Performance testing: Tests the performance of the software under various loads and
conditions to ensure it meets performance requirements.
 Security testing: Tests the software for vulnerabilities and weaknesses to ensure it is
secure.
 Code coverage testing: Measures the percentage of code that is executed during
testing, to ensure that all parts of the code are tested.
 Regression testing: Tests the software after changes have been made to ensure that
the changes did not introduce new bugs or issues.
1.2.1.2 Techniques of White Box Testing
There are several techniques used for white-box testing:
 Statement coverage: This testing approach involves going over every statement in
the code to make sure that each one has been run at least once. As a result, the code is
checked line by line.
 Branch coverage: This is a testing approach in which test cases are created to ensure
that each branch (each true and false outcome of every decision) is tested at least once.

 Path coverage: Path coverage is a software testing approach that defines and covers
all potential pathways. From system entrance to exit points, pathways are statements
that may be executed. It takes a lot of time.
 Loop testing: With the help of this technique, loops and the values in both independent
and dependent code are examined. Errors often happen at the start and end of
loops. This method includes testing simple loops, nested loops and concatenated
loops.
 Basis path testing: In this methodology, a control flow graph is created from the
code and its cyclomatic complexity is calculated. Cyclomatic complexity specifies
the number of independent paths, which determines the minimum number of test
cases to be designed.
o Cyclomatic complexity is a software metric used to indicate the complexity of
a program. It is computed using the Control Flow Graph of the program.
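As a rough illustration of these techniques, consider the small hypothetical function below (classify_discount and its values are assumed for the example, not taken from the text). Its two if-statements give a cyclomatic complexity of 3, so basis path testing calls for at least three test cases; the accompanying tests sketch how statement and branch coverage can be exercised with Python's built-in unittest module, and a coverage tool such as coverage.py could be run over them to confirm the coverage figures.

```python
import unittest

def classify_discount(amount, is_member):
    """Hypothetical unit under test: two decisions -> cyclomatic complexity 3."""
    discount = 0
    if amount > 100:      # decision 1
        discount = 10
    if is_member:         # decision 2
        discount += 5
    return discount

class TestClassifyDiscount(unittest.TestCase):
    def test_both_decisions_true(self):
        # covers the True outcome of both decisions
        self.assertEqual(classify_discount(150, True), 15)

    def test_both_decisions_false(self):
        # covers the False outcome of both decisions
        self.assertEqual(classify_discount(50, False), 0)

    def test_mixed_decisions(self):
        # a third independent path, as required by basis path testing
        self.assertEqual(classify_discount(150, False), 10)

if __name__ == "__main__":
    unittest.main()
```

Together the three cases execute every statement at least once (statement coverage) and take every branch in both directions (branch coverage); loop testing would extend the same idea by adding inputs that run a loop zero, one and many times.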
1.2.1.3 Advantages of White Box Testing
 Complete coverage.
 Better understanding of the system.
 Improved code quality.
 Increased efficiency.
 Early detection of errors.

1.2.1.4 Disadvantages of White Box Testing


 This testing is very expensive and time-consuming.
 Redesign of the code requires the test cases to be written again.
 Missing functionalities cannot be detected.
 This technique can be very complex and at times not realistic.
 White-box testing requires a programmer with a high level of knowledge due to the
complexity of the level of testing that needs to be done.

1.2.2 What is Black Box Testing


Testing a system in a "black box" means doing so without knowing anything about how it
operates within, i.e., it is a form of testing that is performed with no knowledge of a system's
internals. A tester inputs data and monitors the output produced by the system being tested.
This allows for the identification of the system's reaction time, usability difficulties and
reliability concerns, as well as how the system reacts to anticipated and unexpected user
activities. It is also called Functional Testing.
Because it tests a system from beginning to end, black box testing is a powerful
testing method. A tester may imitate user actions to check if the system fulfills its promises. A
black box test assesses every important subsystem along the route, including the UI/UX,
database, dependencies and integrated systems, as well as the web server or application
server.

1.2.2.1 Black Box Testing Pros and Cons

Advantages:
1. Testers do not require technical knowledge, programming or IT skills.
2. Testers do not need to learn implementation details of the system
3. Tests can be executed by outsourced testers.
4. Low chance of false positives.
5. Tests have lower complexity, since they simply model common user behavior


Disadvantages:
1. Difficult to automate.
2. Requires prioritization, typically infeasible to tests all user paths.
3. Difficult to calculate test coverage.
4. If a test fails, it can be difficult to understand the root cause of the issues.
5. Tests may be conducted at a small scale or in a non-production-like environment.

1.2.2.2 Types of Black Box Testing


Black box testing can be applied to three main types of tests: Functional, non-
functional and regression testing.
1. Functional Testing:
Specific aspects or operations of the program being tested may be verified via
black box testing; for instance, making sure that the right user credentials can be used to log
in and that incorrect ones cannot.
Functional testing may concentrate on the most important features of the program, on
the integration of its essential components, or on how well the system works as a whole
(system testing).
2. Non-functional Testing:
 Beyond features and functioning, black box testing allows for the inspection of
additional software qualities. A non-functional test examines "how" rather than "if"
the program can carry out a certain task.
 Black box testing may determine whether software is:
a) Usable and simple for its users to comprehend;
b) Performant under predicted or peak loads;
c) Compatible with relevant devices, screen sizes, browsers or operating systems;
d) Exposed to security flaws or frequent security threats.
3. Regression Testing:
To determine if a new software version displays a regression or a decrease in
capabilities, from one version to the next, black box testing may be employed. Regression
testing may be used to evaluate both functional and non-functional features of the program,
such as when a particular feature no longer functions as anticipated in the new version or
when a formerly fast-performing action becomes much slower in the new version.
1.2.2.3 Black Box Testing Techniques
1. Equivalence partitioning:
Testing professionals may organize potential inputs into "partitions" and test just one
sample input from each category. For instance, it is sufficient for testers to verify one birth
date in the "under 18" group and one date in the "over 18" group if a system asks for a user's
birth date and returns the same answer for users under the age of 18 and a different response
for users over 18.

2. Boundary value analysis:


Testers can determine whether a system responds differently around a certain boundary
value. For instance, a particular field might only support values in the range 0 to 99.
Testing personnel may concentrate on the boundary values (-1, 0, 99 and 100) to determine if
the system is appropriately accepting and rejecting inputs.
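Both techniques translate directly into test data. The sketch below is a hypothetical example (the validate_age function and the 0-99 range are assumed for illustration): one representative value per equivalence partition plus the boundary values, expressed with pytest's parametrize feature.

```python
import pytest

def validate_age(age):
    """Hypothetical system under test: accepts only values from 0 to 99."""
    return 0 <= age <= 99

# Equivalence partitioning: one representative per partition
# (below the range, inside the range, above the range).
@pytest.mark.parametrize("age, expected", [
    (-10, False),   # invalid partition: below the range
    (45, True),     # valid partition: inside the range
    (150, False),   # invalid partition: above the range
])
def test_equivalence_partitions(age, expected):
    assert validate_age(age) == expected

# Boundary value analysis: values just outside and exactly on the boundaries.
@pytest.mark.parametrize("age, expected", [
    (-1, False), (0, True), (99, True), (100, False),
])
def test_boundary_values(age, expected):
    assert validate_age(age) == expected
```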

3. Decision Table Testing
Numerous systems provide results depending on a set of parameters. Once rules that
are combinations of criteria have been identified, each rule's conclusion can then be
determined and test cases may then be created for each rule.
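A decision table can be turned into test cases almost mechanically: each rule (a combination of conditions) becomes one test case with its expected outcome. The snippet below encodes an assumed login example, not one given in the text, as a table of rules driving a hypothetical login function.

```python
def login(valid_user, valid_password):
    """Hypothetical system under test."""
    if valid_user and valid_password:
        return "home page"
    return "error message"

# Decision table: each row is one rule (condition combination -> expected action).
decision_table = [
    # valid_user, valid_password, expected outcome
    (True,  True,  "home page"),
    (True,  False, "error message"),
    (False, True,  "error message"),
    (False, False, "error message"),
]

def test_login_decision_table():
    for valid_user, valid_password, expected in decision_table:
        assert login(valid_user, valid_password) == expected
```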
1.2.3 Gray Box Testing:
 Gray Box Testing is a combination of the Black Box Testing technique and the White
Box Testing technique in software testing.
 Gray-box testing involves the inputs and outputs of a program for testing purposes, but the
tests are designed using knowledge of the code.
 Gray-box testing is well suited for web application testing because it factors in a high-
level design environment and the inter-operability conditions.

1.2.4 Differences between Black Box Testing, Gray Box Testing and White Box Testing:

Granularity:
• Black Box Testing has low granularity.
• Gray Box Testing has a medium level of granularity.
• White Box Testing has high-level granularity.

Who performs it:
• Black Box Testing is done by end-users and also by testers and developers.
• Gray Box Testing is done by end-users (called user acceptance testing) and also by testers and developers.
• White Box Testing is generally done by testers and developers.

Knowledge of internals:
• In Black Box Testing, internals are not required to be known.
• In Gray Box Testing, internals relevant to the testing are known.
• In White Box Testing, the internal code of the application and database is known.

Basis of test cases:
• Black Box Testing is based on requirements, with test cases built on the functional specifications, as the internals are not known.
• Gray Box Testing provides better variety/depth in test cases on account of high-level knowledge of the internals.
• White Box Testing can exercise code with a relevant variety of data.

What the testing involves:
• Black Box Testing involves validating the outputs for given inputs, the application being tested as a black box.
• Gray Box Testing offers a better variety of inputs and the ability to extract test results from the database for comparison with expected results.
• White Box Testing involves structural testing and enables coverage of logic, decisions, etc. within the code.

Other names:
• Black Box Testing is also called Opaque-box testing, Closed-box testing, Input-output testing, Data-driven testing, Behavioral testing and Functional testing.
• Gray Box Testing is also called Translucent-box testing.
• White Box Testing is also called Glass-box testing, Clear-box testing, Design-based testing, Logic-based testing, Structural testing and Code-based testing.

Test design techniques:
• Some Black-box test design techniques: Equivalence partitioning, Error guessing.
• Some Gray-box test design techniques: Matrix testing, Regression testing.
• Some White-box test design techniques: Control flow testing, Data flow testing.

Resilience against viral attacks:
• Black Box testing provides resilience and security against viral attacks.
• Gray Box testing does not provide resilience and security against viral attacks.
• White Box testing does not provide resilience and security against viral attacks.


1.3 SOFTWARE TESTING LIFE CYCLE


The Software Testing Life Cycle (STLC) is a systematic approach to testing a
software application to ensure that it meets the requirements and is free of defects. It is a
process that follows a series of steps or phases, and each phase has specific objectives and
deliverables. The STLC is used to ensure that the software is of high quality, reliable, and
meets the needs of the end-users. The main goal of the STLC is to identify and document any
defects or issues in the software application as early as possible in the development process.
This allows for issues to be addressed and resolved before the software is released to the
public.
The stages of the STLC include Requirement Analysis, Test Planning, Test case
Development, Test Environment Setup, Test Execution and Test Closure. Each of these
stages includes specific activities and deliverables that help to ensure that the software is
thoroughly tested and meets the requirements of the end users.
Overall, the STLC is an important process that helps to ensure the quality of software
applications and provides a systematic approach to testing. It allows organizations to release
high-quality software that meets the needs of their customers, ultimately leading to customer
satisfaction and business success.

Phases of STLC:

1. Requirement Analysis: Requirement Analysis is the first step of the Software Testing
Life Cycle (STLC). In this phase, the quality assurance team understands the requirements, i.e.,
what is to be tested. If anything is missing or not understandable, the quality assurance
team meets with the stakeholders to gain detailed knowledge of the
requirements.
The activities that take place during the Requirement Analysis stage include:
• Reviewing the software requirements document (SRD) and other related documents
• Interviewing stakeholders to gather additional information
• Identifying any ambiguities or inconsistencies in the requirements
• Identifying any missing or incomplete requirements
• Identifying any potential risks or issues that may impact the testing process
• Creating a requirement traceability matrix (RTM) to map requirements to test
cases
At the end of this stage, the testing team should have a clear understanding of the
software requirements and should have identified any potential issues that may impact the
testing process. This will help to ensure that the testing process is focused on the most
important areas of the software and that the testing team is able to deliver high-quality
results.
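The requirement traceability matrix (RTM) mentioned above is essentially a mapping from requirements to the test cases that cover them. A minimal sketch, using made-up requirement and test-case IDs, is shown below; a real RTM is usually kept in a spreadsheet or test-management tool, but the underlying structure is the same.

```python
# Hypothetical requirement traceability matrix: requirement ID -> covering test cases.
rtm = {
    "REQ-001": ["TC-001", "TC-002"],
    "REQ-002": ["TC-003"],
    "REQ-003": [],          # no test case yet -> a coverage gap to flag
}

def uncovered_requirements(rtm):
    """Return requirement IDs that no test case traces back to."""
    return [req for req, tests in rtm.items() if not tests]

print(uncovered_requirements(rtm))   # ['REQ-003']
```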


2. Test Planning: Test Planning is the phase of the software testing life cycle
where all the testing plans are defined. In this phase, the manager of the testing team calculates the
estimated effort and cost of the testing work. This phase starts once the requirement-
gathering phase is completed.
The activities that take place during the Test Planning stage include:
• Identifying the testing objectives and scope
• Developing a test strategy: selecting the testing methods and techniques
• Identifying the testing environment and resources needed
• Identifying the test cases that will be executed and the test data that will be used
• Estimating the time and cost required for testing
• Identifying the test deliverables and milestones
• Assigning roles and responsibilities to the testing team
• Reviewing and approving the test plan
At the end of this stage, the testing team should have a detailed plan for the testing activities
that will be performed, and a clear understanding of the testing objectives, scope, and
deliverables. This will help to ensure that the testing process is well-organized and that the
testing team is able to deliver high-quality results.

3. Test Case Development: The test case development phase starts once the test
planning phase is completed. In this phase, the testing team writes down detailed test cases.
The testing team also prepares the required test data for the testing. Once the test cases are
prepared, they are reviewed by the quality assurance team.
The activities that take place during the Test Case Development stage include:
• Identifying the test cases that will be developed
• Writing test cases that are clear, concise, and easy to understand
• Creating test data and test scenarios that will be used in the test cases
• Identifying the expected results for each test case
• Reviewing and validating the test cases
• Updating the requirement traceability matrix (RTM) to map requirements to test
cases
At the end of this stage, the testing team should have a set of comprehensive and
accurate test cases that provide adequate coverage of the software or application. This will
help to ensure that the testing process is thorough and that any potential issues are identified
and addressed before the software is released.

4. Test Environment Setup: Test environment setup is a vital part of the STLC. Basically,
the test environment decides the conditions under which the software is tested. This is an independent
activity and can be started along with test case development. The testing team
is not involved in this process; either the developer or the customer creates the testing environment.

5. Test Execution: After test case development and test environment setup, the test execution
phase starts. In this phase, the testing team executes the test cases prepared in the earlier step.
The activities that take place during the test execution stage of the Software Testing
Life Cycle (STLC) include:
• Test execution: The test cases and scripts created in the test design stage are run
against the software application to identify any defects or issues.
• Defect logging: Any defects or issues that are found during test execution are
logged in a defect tracking system, along with details such as the severity, priority, and
description of the issue.


• Test data preparation: Test data is prepared and loaded into the system for test
execution
• Test environment setup: The necessary hardware, software, and network
configurations are set up for test execution
• Test execution: The test cases and scripts are run, and the results are collected and
analyzed.
• Test result analysis: The results of the test execution are analyzed to determine the
software's performance and identify any defects or issues.
• Defect retesting: Any defects that are identified during test execution are retested
to ensure that they have been fixed correctly.
• Test Reporting: Test results are documented and reported to the relevant
stakeholders.
It is important to note that test execution is an iterative process and may need to be
repeated multiple times until all identified defects are fixed and the software is deemed fit for
release.

6. Test Closure: Test closure is the final stage of the Software Testing Life Cycle (STLC)
where all testing-related activities are completed and documented. The main objective of the
test closure stage is to ensure that all testing-related activities have been completed and that
the software is ready for release.
At the end of the test closure stage, the testing team should have a clear
understanding of the software’s quality and reliability, and any defects or issues that were
identified during testing should have been resolved. The test closure stage also includes
documenting the testing process and any lessons learned so that they can be used to improve
future testing processes
The main activities that take place during the test closure stage include:
• Test summary report: A report is created that summarizes the overall testing
process, including the number of test cases executed, the number of defects found, and the
overall pass/fail rate.
• Defect tracking: All defects that were identified during testing are tracked and
managed until they are resolved.
• Test environment clean-up: The test environment is cleaned up, and all test data
and test artifacts are archived.
• Test closure report: A report is created that documents all the testing-related
activities that took place, including the testing objectives, scope, schedule, and resources
used.
• Knowledge transfer: Knowledge about the software and testing process is shared
with the rest of the team and any stakeholders who may need to maintain or support the
software in the future.
• Feedback and improvements: Feedback from the testing process is collected and
used to improve future testing processes

1.4 V-MODEL OF SOFTWARE TESTING


The V-Model provides a systematic and visual representation of the software
development process. The V-Model is also referred to as the Verification and Validation
Model. Testing of the product is planned in parallel with the corresponding stage of
development.
Verification: It involves a static analysis method (review) done without executing
code. It is the process of evaluation of the product development process to find whether
specified requirements are met. Verification ensures that we build the product right.
Validation: It involves dynamic analysis method (functional, non-functional), testing
is done by executing code. Validation is the process to check the software after the
completion of the development process to determine whether the software meets the
customer expectations and requirements. Validation ensures that we build the right product.
So the V-Model contains Verification phases on one side and Validation phases on the
other side. The Verification and Validation phases are joined by the coding phase in a V shape; thus it
is known as the V-Model.

[Figure: V-Model diagram, showing the Verification phases on one arm and the Validation phases on the other, joined by the coding phase.]

The various phases of the Verification side of the V-Model are:

1. Business requirement analysis: This is the first step where product requirements
are understood from the customer's side. This phase contains detailed communication to
understand customer's expectations and exact requirements.
2. System Design: In this stage, system engineers analyze and interpret the business
logic of the proposed system by studying the user requirements document.
3. Architecture Design: The baseline in selecting the architecture is that it should
realize everything the system needs, which typically consists of the list of modules, brief functionality of each
module, their interface relationships, dependencies, database tables, architecture diagrams,
technology details, etc. Integration test planning is carried out in this phase.
4. Module Design: In the module design phase, the system breaks down into small
modules. The detailed design of the modules is specified, which is known as Low-Level
Design
5. Coding Phase: After designing, the coding phase is started. Based on the
requirements, a suitable programming language is decided. There are some guidelines and
standards for coding. Before checking in the repository, the final build is optimized for better
performance, and the code goes through many code reviews to check the performance.

The various phases of the Validation side of the V-Model are:


1. Unit Testing: In the V-Model, Unit Test Plans (UTPs) are developed during the
module design phase. These UTPs are executed to eliminate errors at code level or unit level.
A unit is the smallest entity which can independently exist, e.g., a program module. Unit
testing verifies that the smallest entity can function correctly when isolated from the rest of
the codes/ units.

2. Integration Testing: Integration Test Plans are developed during the Architectural
Design Phase. These tests verify that groups created and tested independently can coexist
and communicate among themselves.
3. System Testing: System Test Plans are developed during the System Design Phase.
Unlike Unit and Integration Test Plans, System Test Plans are composed by the client's
business team. System testing ensures that expectations from the developed application are met.
4. Acceptance Testing: Acceptance testing is related to the business requirement
analysis part. It includes testing the software product in the user environment. Acceptance tests
reveal the compatibility problems with the other systems available within the
user environment. They also discover non-functional problems, like load and
performance defects, within the real user environment.

When to use V-Model?


 When the requirement is well defined and not ambiguous.
 The V-shaped model should be used for small to medium-sized projects where
requirements are clearly defined and fixed.
 The V-shaped model should be chosen when ample technical resources are
available with essential technical expertise.

Advantages (Pros) of V-Model:


1. Easy to understand.
2. Testing methods like planning and test designing happen well before coding.
3. This saves a lot of time.
4. Avoids the downward flow of the defects.
5. Works well for small plans where requirements are easily understood.
Disadvantages (Cons) of V-Model:
1. Very rigid and least flexible.
2. Not a good for a complex project.
3. Software is developed during the implementation stage, so no early prototypes of
the software are produced.
4. If any changes happen midway, then the test documents, along with the
requirement documents, have to be updated.

1.5 PROGRAM CORRECTNESS AND VERIFICATION


Program correctness is the condition or state of the software in which
it is able to perform as expected and as per the user requirements. We discuss software
correctness from two perspectives: the operational approach and the symbolic approach. To show
that a program is correct,
• from the operational perspective, we use testing;
• from the symbolic perspective, we use proof.
The two perspectives, and with them testing and proof, are tightly related, and we make
ample use of this relationship.
Testing a Simple Fragment (Version 1)
Knowing about the relationship between values and facts, we can formulate a simple
testing method for program fragments. The fragments have the following general shape,
consisting of three parts
initialize variables
carry out computation
check condition
The initialize variables part sets up the input values for the fragment. Usually, the
input values are chosen taking into account conditions concerning the input values, e.g., to avoid
division by zero. The carry out computation part contains the "program". The check
condition part specifies a condition to determine whether the program is correct.
Testing a Simple Fragment (Version 2)
Instead of giving an initialization as in Version 1, we can also use an assume
statement to impose an initial condition for a test. We can specify,
assume initial condition on variables
specify computation
assert final condition on variables
The fragment terminates gently if the initial condition is not met and aborts with an
error if the initial condition was met but the final condition is not. This way of specifying
tests turns out to be a foundation for deriving test cases: we can systematically develop test
cases in this manner.
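In Python, the assume/assert style of fragment can be sketched as below. The integer-division example and the divmod_test name are assumptions made for illustration: the assume part makes the fragment terminate gently when the initial condition does not hold, and the assert part checks the final condition (the post-condition).

```python
def divmod_test(a, b):
    # assume: initial condition on the variables (the pre-condition)
    if not (a >= 0 and b > 0):
        return "skipped"          # terminate gently: pre-condition not met

    # carry out computation: the "program" under test
    q, r = a // b, a % b

    # assert: final condition on the variables (the post-condition)
    assert a == q * b + r and 0 <= r < b, "post-condition violated"
    return "passed"

# Test cases are states satisfying the pre-condition.
print(divmod_test(17, 5))   # passed
print(divmod_test(17, 0))   # skipped: pre-condition not met
```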

Program Correctness
Following the preceding discussion, we base our notion of program correctness by
considering two variants:
• Pairs of initialize/check statements in program fragments and tests. These are
executable and can be evaluated during verification.
• Pairs of assume/assert statements in program fragments and tests. These are
executable and can be evaluated at run-time.
We call the first component (the initialize or assume statement) the pre-condition. We call
the second component (the check or assert statement) the post-condition.

Program Verification
To demonstrate that a program is correct we verify it. We consider two principles
methods for verifying programs.
 Proof
Using logical deduction, we show that any execution of the program starting in a
state satisfying the pre-condition terminates in a state satisfying its post-condition. In
other words, we show that the program is correct.
 Testing
Executing a program for specific states satisfying the pre-condition, we check
whether on termination a state is reached that satisfies the post-condition. It is up to us to
determine suitable pairs of states, called test cases. This approach does not show that a
program is correct. In practice, we assume that programs that have been subjected to a
sufficient number of tests are correct. This kind of reasoning is called induction: from a
collection of tests that confirm correctness for precisely those tests, we infer that this is
the case for all possible tests. Testing is a validation method: it is entirely possible that all
the tests we have provided appear to confirm correctness, but later we find a test case
that refutes the conclusion. Either the program contains an error or the test case is wrong.

Verification Vs Validation
• Verification refers to the set of activities that ensure software correctly implements the specific function. Validation refers to the set of activities that ensure that the software that has been built is traceable to customer requirements.
• Verification includes checking documents, designs and code. Validation includes testing and validating the actual product.
• Verification is static testing. Validation is dynamic testing.
• Methods used in verification are reviews, walkthroughs, inspections and desk-checking. Methods used in validation are Black Box Testing, White Box Testing and non-functional testing.
• Verification checks whether the software conforms to specifications or not. Validation checks whether the software meets the requirements and expectations of a customer or not.
• Verification means: Are we building the product right? Validation means: Are we building the right product?


1.6 RELIABILITY VERSUS SAFETY

1.6.1 Software Reliability


Software reliability is a measure of how the software is capable of maintaining
its level of performance under stated conditions for a stated period of time. Software
reliability engineering involves much more than analyzing test results, estimating remaining
faults, and modeling future failure probabilities.
Although in most organizations, software test is no longer an afterthought,
management is almost always surprised by the cost and schedule requirements of the test
program, and it is often downgraded in favor of design activities. Often adding a new feature
will seem more beneficial than performing a complete test on existing features. A good
software reliability engineering program, introduced early in the development cycle, will
mitigate these problems by reliability program tasks.
Reliability Program Tasks:
1. Reliability Allocation
Reliability allocation is the task of defining the necessary reliability of a software
item. The item may be a part of an integrated hardware/software system, may be a relatively
independent software application, or, more and more rarely, a standalone software program.
In any of these cases, goal is to bring system reliability within either a strict constraint
required by a customer or optimize reliability within schedule and cost constraints.
2. Defining and Analyzing Operational Profiles
The reliability of software is strongly tied to the operational usage of an application -
much more strongly than the reliability of hardware is. A software fault may lead to a system failure
only if that fault is encountered during operational usage. If a fault is not accessed in a
specific operational mode, it will not cause failures at all. It will cause failure more often if it
is located in code that is part of a frequently used "operation" (An operation is defined as a
major logical task, usually repeated multiple times within an hour of application usage).
Therefore in software reliability engineering, we focus on the operational profile of the
software which weighs the occurrence probabilities of each operation. Unless safety
requirements indicate a modification of this approach we will prioritize our testing according
to this profile.
Software engineers have to complete the following tasks required to generate a
useable operational profile:
• Determine the operational modes (high traffic, low traffic, high
maintenance, remote use, local use, etc)
• Determine operation initiators (components that initiate the operations in
the system)
• Determine and group "Operations" so that the list includes only operations that
are significantly different from each other (and therefore may present different faults)
• Determine occurrence rates for the different operations
• Construct the operational profile based on the individual operation probabilities
of occurrence.
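As a rough sketch of how an operational profile built from the task list above can be represented and used to weight testing, the snippet below assumes made-up operations and occurrence rates; in practice the rates would come from field data or usage logs for the application.

```python
import random

# Hypothetical occurrence rates (operations per hour) for one operational mode.
occurrence_rates = {
    "view_balance": 450,
    "transfer_funds": 120,
    "update_profile": 25,
    "generate_statement": 5,
}

total = sum(occurrence_rates.values())
# Operational profile: occurrence probability of each operation.
profile = {op: rate / total for op, rate in occurrence_rates.items()}

# Pick the next operation to exercise, weighted by the profile, so that
# frequently used operations are tested proportionally more often.
next_operation = random.choices(list(profile), weights=profile.values(), k=1)[0]
print(profile)
print("next operation to test:", next_operation)
```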

3. Test Preparation and Plan


Test preparation is a crucial step in the implementation of an effective software
reliability program. A test plan that is based on the operational profile on the one hand, and
subject to the reliability allocation constraints on the other, will be effective in achieving the
program's reliability goals in the least amount of time and cost.
Software Reliability Engineering is concerned not only with feature and regression
test, but also with load test and performance test. All these should be planned based on the
activities outlined above. The reliability program will inform and often determine the
following test preparation activities:
• Assessing the number of new test cases required for the current release

• New test case allocation among the systems (if multi-system)
• New test case allocation for each system among its new operations
• Specifying new test cases
• Adding the new test cases to the existing test cases from previous releases

4. Software Reliability Models


Software reliability engineering is often identified with reliability models, in
particular reliability growth models. These models, when applied correctly, are successful at
providing guidance to management decisions such as:
• Test schedule
• Test resource allocation
• Time to market
• Maintenance resource allocation
The application of reliability models to software testing results allows us to infer the
rate at which failures are encountered (depending on usage profile) and, more importantly,
the changes in this rate (reliability growth). The ability to make these inferences depends
critically on the quality of test results. It is essential that testing be performed in such a way
that each failure incident is accurately reported.

1.6.2 Software Safety


Software safety is preventing a system from reaching dangerous states. As systems
and products become more and more dependent on software components it is no longer
realistic to develop a system safety program that does not include the software elements.

Does software fail?


We tend to believe that well written and well tested safety critical software would
never fail. Experience proves otherwise with software making headlines when it actually
does fail, sometimes critically. Software does not fail the same way as hardware does, and
the various failure behaviors we are accustomed to from the world of hardware are often not
applicable to software. However, software does fail, and when it does, it can be just as
catastrophic as hardware failures.

Safety-critical software
Safety-critical software is very different from both non-critical software and safety-
critical hardware. The difference lies in the massive testing program that such software
undergoes.

What are "software failure modes"?


Software, especially in critical systems, tends to fail where least expected. Software
does not "break" but it must be able to deal with "broken" input and conditions, which often
cause the "software failures". The task of dealing with abnormal conditions and inputs is
handled by the exception code dispersed throughout the program. Setting up a test plan and
exhaustive test cases for the exception code is by definition difficult and somewhat
subjective.
Failures can be due to:
• failed hardware
• timing problems
• harsh/unexpected environmental conditions
• multiple changes in conditions and inputs that are beyond what the hardware
is able to deal with
• unanticipated conditions during software mode changes
• bad or unexpected user input
Often the conditions most difficult to predict are multiple, coinciding, irregular
inputs and conditions.

Safety-critical software is usually tested to the point that no new critical failures are
observed. This of course does not mean that the software is fault-free at this point, only that
failures are no longer observed in test.
Why are the faults leading to these types of failures overlooked in test? These are faults
that are not tested for, due to any of the following reasons:
• Faults in code that is not frequently used and therefore not well represented in the
operational profiles used for testing
• Faults caused by multiple abnormal conditions that are difficult to test
• Faults related to interfaces and controls of failed hardware
• Faults due to missing requirements
It is clear why these types of faults may remain outside of a normal, reliability-focused
test plan.

1.7 FAILURES, ERRORS AND FAULTS (DEFECTS)

Defect:
A defect refers to a situation when the application is not working as per the requirement, and the
actual and expected results of the application or software are not in sync with each other.
 The defect is an issue in the application code that can affect the whole program.
 It represents the inefficiency and inability of the application to meet the criteria and prevents
the software from performing the desired work.
 A defect can arise when a developer makes major or minor mistakes during the
development phase.

Error
An error is a situation that happens when the development team or the developer fails to
understand a requirement definition and hence that misunderstanding gets translated into buggy
code. This situation is referred to as an Error, and it is mainly a term coined by developers.
 Errors are generated due to wrong logic, syntax or loops, and they can impact the end-user
experience.
 An error is calculated by comparing the expected results with the actual results.
 It arises due to several reasons, like design issues, coding issues or system specification
issues, and leads to issues in the application.

Fault:
Sometimes, due to certain factors such as a lack of resources or not following proper steps, a fault
occurs in software, which means that the logic to handle errors was not incorporated into the
application. This is an undesirable situation, but it mainly happens due to invalid documented
steps or a lack of data definitions.
 It is an unintended behavior of an application program.
 It causes a warning in the program.
 If a fault is left untreated, it may lead to failure in the working of the deployed code.
 A minor fault may, in some cases, lead to a high-end error.
 There are several ways to prevent faults, such as adopting sound programming techniques
and development methodologies, peer review, and code analysis.

Failure:
Failure is the accumulation of several defects that ultimately leads to software failure and results
in the loss of information in critical modules, thereby making the system unresponsive. A failure
is the result of execution of a fault and is dynamic in nature. Generally, such situations happen
very rarely, because before releasing a product all possible scenarios and test cases for the code
are simulated. Failure is detected by end-users once they face a particular issue in the software.
 Failure can happen due to human error or can also be caused intentionally in the system by
an individual.

 It is a term that comes after the production stage of the software.
 It can be identified in the application when the defective part is executed.
Bug vs Defect vs Error vs Fault vs Failure:
• Bug: It is an informal name given to a defect.
• Defect: The defect is the difference between the actual outcomes and the expected outputs.
• Error: An error is a mistake made in the code, because of which we cannot execute or compile the code.
• Fault: A fault is a state that causes the software to fail to accomplish its essential function.
• Failure: A failure is the result of execution of a fault and is dynamic in nature.
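The distinction can be made concrete with a tiny assumed example: the function below contains a fault (it does not guard against an empty list), yet a failure is only observed when that faulty code is actually executed with input that triggers it.

```python
def average(values):
    # Fault: no guard for an empty list, so len(values) can be zero.
    return sum(values) / len(values)

print(average([2, 4, 6]))   # works: the fault is present but not triggered

try:
    print(average([]))      # failure: the fault is executed and the program misbehaves
except ZeroDivisionError as exc:
    print("failure observed:", exc)
```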

1.8. SOFTWARE TESTING PRINCIPLES

Software testing is a process that involves putting software or an application to use
in order to find faults or flaws. Following certain guidelines can help testers to test
software without creating any problems; it will also save the test engineers' time and
effort. The seven different software testing principles are given below:

1. Testing shows the presence of defects
2. Exhaustive testing is not possible
3. Early testing
4. Defect clustering
5. Beware of the pesticide paradox
6. Testing is context dependent
7. Absence-of-errors is a fallacy

1. Testing shows the presence of defects:

• The test engineer puts the application through testing to ensure that there
are no bugs or flaws. When testing, we can only pinpoint the existence of problems in the
application or program, not prove their absence. The main goal of testing is to find, by using a
variety of methods and testing techniques, any flaws that might prevent the product from
fulfilling the client's needs, since every test should be traceable back to a customer
requirement.
• Testing reduces the number of flaws in any program, but this does not imply that the
application is defect-free, since software can seem to be bug-free despite extensive
testing. The end-user may still run into flaws that weren't discovered during testing once
the software is deployed on the production server.

2. Exhaustive testing is not possible:

It is practically impossible to test all the modules and their features
during the real testing process using every effective and ineffective combination of the input
data, because it would require endless test cases and the majority of that hard labour would be
unproductive. Selective, prioritized testing is preferred instead. As a result, we may limit this sort of
variation in accordance with the significance of the modules.


3. Early testing:

• Here, early testing refers to the idea that all testing activities should begin in the
early stages of the requirement analysis stage of the software development life cycle in order
to identify the defects. If we find the bugs at an early stage, we can fix them right away,
which could end up costing us much less than if they are discovered in a later phase of the
testing process.
• Since we need the requirement definition documents in order to conduct testing, any
requirements that are specified incorrectly can be identified and corrected at this early stage,
rather than later during the development process.

4. Defect clustering:

• Defect clustering states that, during the testing procedure, we can observe that most of
the problems are associated with a limited number of modules. There are a number of
explanations for this, including the possibility of intricate modules, difficult code
and more.

• According to the Pareto principle, these kinds of software or applications roughly
follow the rule that twenty percent of the modules contain eighty percent of the defects.
This allows us to locate the problematic modules, but the approach has limitations: if the
same tests are run repeatedly, they will not be able to spot any newly introduced flaws.
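A quick way to observe defect clustering is to count logged defects per module. The sketch below uses made-up defect data and Python's collections.Counter to show a small share of modules accounting for most of the defects.

```python
from collections import Counter

# Hypothetical defect log: each entry names the module in which a defect was found.
defect_log = ["payments", "payments", "reports", "payments", "auth",
              "payments", "reports", "payments", "payments", "ui"]

counts = Counter(defect_log)
total = sum(counts.values())
for module, n in counts.most_common():
    print(f"{module:10s} {n:3d} defects ({n / total:.0%})")
# Here a single module ("payments") accounts for 60% of the defects,
# illustrating the Pareto-style clustering described above.
```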

5. Beware of the pesticide paradox:

This is based on the theory that when you use pesticide repeatedly on crops, insects
will eventually build up an immunity, rendering it ineffective. Similarly, with testing, if the
same tests are run continuously then – while they might confirm the software is working –
eventually they will fail to find new issues. It is important to keep reviewing your tests and
modifying or adding to your scenarios to help prevent the pesticide paradox from occurring –
maybe using varying methods of testing techniques, methods and approaches in parallel.

6. Testing is context dependent:

Testing is ALL about the context. The methods and types of testing carried out can
depend completely on the context of the software or system – for example, an e-commerce
website can require different types of testing and approaches than an API application or a
database reporting application. What you are testing will always affect your approach.

7. Absence-of-errors is a fallacy (myth):

If your software or system is unusable (or does not fulfill users' wishes) then it does
not matter how many defects are found and fixed – it is still unusable. So in this sense, it is
irrelevant how issue- or error-free your system is; if the usability is so poor that users are
unable to navigate, and/or it does not match the business requirements, then it has failed, despite
having few bugs.
It is important, therefore, to run tests that are relevant to the system’s requirements.
You should also be testing your software with users – this can be done against early
prototypes (at the usability testing phase), to gather feedback that can be used to ensure and
improve usability. Remember, just because there might be a low number of issues, it does not
mean your software is shippable – meeting client expectations and requirements are just as
important as ensuring quality.

1.9. PROGRAM INSPECTIONS
Program or Software inspection refers to a peer review of software to identify
bugs or defects at the early stages of SDLC. It is a formal review that ensures the
documentation produced during a given stage is consistent with previous stages and
conforms to pre- established rules and standards.
Software inspection involves people examining the software product to discover
defects and inconsistencies. Since it doesn’t require system execution, inspection is usually
done before implementation.

Purpose / Advantages of software inspection:


Software inspection aims to identify software defects and deviations, ensuring the
product meets customer requirements, wants, and needs. Software inspection is designed to
uncover defects or bugs, unlike testing, which is done to make corrections. The purposes can
be given as below:
○ Identifying and resolving defects early
○ Enhancing code readability
○ Improving team collaboration
○ Enhancing code maintainability
○ Improving code efficiency
○ Enhancing security
○ Improving the overall quality of the software

Types of Software Inspections:


1. Document inspection: Here, the documents produced for a given phase are inspected,
further focusing on their quality, correctness, and relevance.
2. Code inspection: The code, program source files, and test scenarios are inspected and
reviewed.

Who are the key parties involved?


 Moderator: A facilitator who organizes and reports on the inspection.
 Author: The person who produced the work product being inspected.
 Reader: A person who guides the examination of the software.
 Recorder: An inspector who logs all the defects found.
 Inspector: An inspection team member responsible for identifying defects.

Software Inspection Process:


Software inspection involves six steps – Planning, Overview, Preparation, Meeting,
Rework, and Follow-up.

1. Planning
The planning phase starts with the selection of a group review team. A moderator
plans the activities performed during the inspection and verifies that the software entry
criteria are met.

2. Overview
The overview phase intends to disseminate information regarding the background of
the product under review. Here, a presentation is given to the inspector with some
background information needed to review the software product properly.

3. Preparation
In the individual preparation phase, the inspector collects all the materials needed for
inspection. Each reviewer studies the project individually and notes the issues they
encounter.

4. Meeting
The moderator conducts the meeting to collect and review defects. Here, the reader
reads through the product line by line while the inspector points out the flaws. All issues are
raised, and suggestions may be recorded.

5. Rework
Based on meeting notes, the author changes the work product.

6. Follow-up
In the last phase, the moderator verifies if necessary changes are made to the software
product, compiling a defect summary report.
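
The recorder's defect log and the defect summary compiled at follow-up can be kept as simple
structured records. The Python sketch below is only a rough illustration; the fields and
severity labels are assumptions, not a prescribed standard.

# A minimal sketch of the kind of defect log a recorder might keep during an
# inspection meeting and the summary the moderator compiles at follow-up.
# The fields and severity labels are illustrative, not a prescribed standard.
from collections import Counter
from dataclasses import dataclass


@dataclass
class InspectionDefect:
    location: str      # file and line, or document section
    description: str
    severity: str      # e.g. "major" or "minor"
    found_by: str      # inspector who raised it


defect_log = [
    InspectionDefect("order.py:120", "Null customer id not handled", "major", "Inspector A"),
    InspectionDefect("order.py:145", "Misleading variable name", "minor", "Inspector B"),
    InspectionDefect("design.doc section 3.2", "Interface inconsistent with section 2.4", "major", "Inspector A"),
]

# Follow-up: the moderator compiles a defect summary report.
summary = Counter(d.severity for d in defect_log)
print(f"Defect summary: {summary['major']} major, {summary['minor']} minor")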

Disadvantages of Software Inspection:


 It is a time-consuming process.
 Software inspection requires discipline.
 Can be subject to bias
 Limited to static defects; it cannot detect run-time or performance problems, since the code is not executed
 Can be costly

1.10. STAGES OF TESTING / LEVELS OF TESTING

1.10.1 UNIT TESTING


Unit testing is a software development approach in which the functionality of the
smallest testable components, or units, of an application is checked one by one. Unit
tests are carried out by software developers and sometimes by QA personnel. A unit is a
single testable part of a software system and is tested during the development phase of the
application software. Unit testing's primary goal is to isolate a piece of written code and
verify that it functions as intended.
 Teams should run unit tests often, whether manually or, more commonly, automatically.
 Automated methods often create test cases using a testing framework. In addition to
presenting a summary of the test cases, these frameworks are also configured to flag
and report any failed test cases.

 Unit Test Lifecycle:

The life cycle of a unit test covers planning, implementation, review and maintenance.
1. Review the code written: According to the unit test life cycle, you first outline the
requirements of your code and then attempt to create a test case for each of them. You
then review the code that was written.
2. Check in code from repository : The reviewed unit is put into the repository for further
testing
3. Check out code from repository : Select the Unit for which the testing has to be done
4. Make suitable changes: When the time comes, make suitable changes to the unit after
analyzing each function or method. This gives the tester insight into what is going
on in that piece of code, for example:
 Parameters being passed in
 Code doing its job
 Code returning something
5. Execute the test and compare the expected and actual results: This phase of the unit
testing life cycle involves developing a test by creating a test object, selecting input
values to execute the test, executing the test, and comparing the expected and actual
results (a minimal sketch of this step is shown after this list).
6. Fix the detected bugs in the code: Fixing defects as soon as they are found also gives
developers peace of mind when adding or modifying code, because they know that if they
break something they will be notified immediately during testing. This way, you can fix
problems before they ever reach production and cause issues for end users.
7. Re-execute the tests to verify them: Unit testing is a great way for developers to keep
track of their changes, which can be especially important when it comes to life cycle
methods that may not have a visual representation. Re-executing the tests to verify them
after each change can help ensure everything is still working as expected.
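
As a concrete illustration of steps 5 to 7, the following minimal sketch uses Python's built-in
unittest framework; the calculate_total function is a hypothetical unit invented for this
example, and the framework flags and reports any failed test cases when the tests are
re-executed.

# A minimal sketch of step 5: create a test object, choose input values,
# execute the test and compare expected and actual results.
# `calculate_total` is a hypothetical unit chosen only for illustration.
import unittest


def calculate_total(prices, tax_rate=0.0):
    """Unit under test: sums prices and applies a tax rate."""
    subtotal = sum(prices)
    return round(subtotal * (1 + tax_rate), 2)


class TestCalculateTotal(unittest.TestCase):
    def test_total_without_tax(self):
        # Parameters passed in, code doing its job, code returning something
        self.assertEqual(calculate_total([10.0, 5.0]), 15.0)

    def test_total_with_tax(self):
        self.assertEqual(calculate_total([10.0, 5.0], tax_rate=0.1), 16.5)

    def test_empty_cart(self):
        self.assertEqual(calculate_total([]), 0.0)


if __name__ == "__main__":
    unittest.main()  # the framework flags and reports any failed test cases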

Unit testing advantages:


There are many advantages to unit testing, including the following:
 The earlier an issue is discovered, the less often it compounds into further mistakes.
 Fixing issues as they arise is often less expensive than waiting until they become serious.
 Debugging procedures are simplified.
 Developers can modify the codebase more easily and with confidence.
 Code may be transferred to new projects and reused by developers.


Unit testing disadvantages:


While unit testing is integral to any software development and testing strategy, there are some
aspects to be aware of. Disadvantages of unit testing include the following:
 Not all bugs will be found during tests.
 Unit testing does not identify integration flaws; it just checks data sets and their
functionality.
 To test one line of code, more lines of test code may need to be developed, which
might require additional time.
 To successfully apply unit testing, developers may need to pick up new skills, such as
how to utilize certain automated software tools.

1.10.2 INTEGRATION TESTING


● The second stage of the software testing process, after unit testing, is known as
integration testing. Integration testing is the process of inspecting various parts or units
of a software project to reveal flaws and ensure that they function as intended.
● Integration testing is the process of testing the interface between two software units
or modules. It focuses on determining the correctness of the interface.
● The purpose of integration testing is to expose faults in the interaction between
integrated units.
● The typical software project often comprises multiple software modules, many of
which were created by different programmers. Integration testing demonstrates to the
team how effectively these dissimilar components interact.
Why to perform integration testing?
 There are many particular reasons why developers should do integration testing, in
addition to the basic reality that they must test all software programs before making
them available to the general public.
 Errors might result from incompatibility between program components.
 Every software module must be able to communicate with the database, and
requirements are subject to change as a result of customer feedback; if those changed
requirements have not been extensively tested yet, integration testing needs to cover them.
 Every software developer has their own conceptual framework and coding logic.
Integration testing ensures that these diverse elements work together flawlessly.
 Modules often interface with third-party APIs or tools; thus we require integration
testing to confirm that the data these tools receive is accurate.
 There may be possible hardware compatibility issues.
Types of Integration Testing:


Big bang Integration Testing :


 All the modules of the system are simply put together and tested.
 This approach is practicable only for very small systems. If an error is found during the
integration testing, it is very difficult to localize the error as the error may potentially belong
to any of the modules being integrated
Bottom-Up Integration Testing:
 In bottom-up testing, each module at the lower levels is tested with higher-level modules
until all modules have been tested.
Top-Down Integration Testing:
 First, high-level modules are tested, then low-level modules, and finally the low-level
modules are integrated with the high-level ones to ensure the system is working as intended.
Mixed/ Sandwich Integration Testing:
 A mixed integration testing follows a combination of top down and bottom-up testing
approaches
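
To make the idea of testing the interface between two units concrete, the sketch below
integrates two hypothetical modules, an OrderService that depends on an InventoryService;
both classes and their methods are invented for illustration only.

# A minimal sketch of an integration test: two hypothetical units,
# InventoryService and OrderService, are exercised together so that the
# interface between them (not each unit in isolation) is what gets verified.
import unittest


class InventoryService:
    def __init__(self, stock):
        self._stock = dict(stock)

    def reserve(self, item, quantity):
        if self._stock.get(item, 0) < quantity:
            raise ValueError(f"insufficient stock for {item}")
        self._stock[item] -= quantity


class OrderService:
    def __init__(self, inventory):
        self._inventory = inventory

    def place_order(self, item, quantity):
        # The interface under test: OrderService calling into InventoryService.
        self._inventory.reserve(item, quantity)
        return {"item": item, "quantity": quantity, "status": "confirmed"}


class TestOrderInventoryIntegration(unittest.TestCase):
    def test_order_reserves_stock(self):
        inventory = InventoryService({"book": 3})
        orders = OrderService(inventory)
        self.assertEqual(orders.place_order("book", 2)["status"], "confirmed")

    def test_order_fails_when_out_of_stock(self):
        inventory = InventoryService({"book": 1})
        orders = OrderService(inventory)
        with self.assertRaises(ValueError):
            orders.place_order("book", 5)


if __name__ == "__main__":
    unittest.main()

Note that the test exercises the call from OrderService into InventoryService, which is exactly
the kind of interface defect that unit tests of each class in isolation would miss.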

Advantages of Integration Testing


 Integration testing ensures that every integrated module functions correctly.
 Integration testing uncovers interface errors.
 Testers can initiate integration testing once a module is completed, without having to wait
for every other module to be done and ready for testing.
 Testers can detect bugs, defects and security issues.
 Integration testing provides testers with a comprehensive analysis of the whole
system, dramatically reducing the likelihood of severe connectivity issues.

Challenges of Integration Testing:


Unfortunately, integration testing has some difficulties to overcome as well.
 Questions will arise about how components from two distinct systems produced by
two different suppliers will impact and interact with one another during testing.
 Integrating new and old systems requires extensive testing and possible revisions.
 Integration testing requires testing not just the integration links but the environment
itself, adding another level of complexity to the process.

1.10.3 SYSTEM TESTING


 System testing is a type of software testing done on a whole integrated system to
determine if it complies with the necessary criteria.
 Components that have passed integration testing are used as input during system
testing. Integration testing's objective is to find any discrepancies between the
integrated components.
 System testing finds flaws in the integrated modules as well as in the whole system. The
outcome of system testing is the observed behavior of the component or system when it
is tested. System testing is done on the whole system under the guidance of functional
requirement specifications, system requirement specifications, or both.
 The design, behavior and customer expectations of the system are all tested during
system testing. It is also used to test the system beyond the bounds specified in the
Software Requirements Specification (SRS).
 In essence, system testing is carried out by a testing team that is separate from the
development team and helps to objectively assess the system's quality. The system is
tested in both functional and non-functional ways. System testing is a form of black-box
testing. It is carried out after integration testing and before acceptance testing.
Process for system testing:
The steps for system testing are as follows:
1. Setup of the test environment: Establish a test environment for higher-quality
testing.
2. Produce a test case: Produce a test case for the testing
3. Produce test data: Produce the data that will be used in the test.
4. Execute test case: Test cases are carried out after the production of the test case and
the test data.
5. Defect reporting: Defects found in the system are reported.
6. Regression testing: This technique is used to check that fixes made during testing have
not introduced side effects elsewhere in the system.
7. Log defects: In this stage, detected defects are recorded and then fixed.
8. Retest: If a test is unsuccessful, it is run again after the defect has been fixed.

Main Types of System Testing:


Performance testing: This type of software testing is used to evaluate the speed,
scalability, stability and dependability of software applications and products (a rough
sketch of the idea is given below, after these definitions).
Load testing: This sort of software testing is used to ascertain how a system or software
product will behave under high loads.

Stress testing: Stress testing is a sort of software testing carried out to examine the
system's resilience under changing loads.
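
The rough Python sketch below illustrates the idea behind a simple performance or load check:
run a hypothetical operation many times and compare the measured timings against acceptance
thresholds. The process_request function and the thresholds are invented for this example;
real performance and load testing normally relies on dedicated tools.

# A rough sketch of a simple performance/load check: call a hypothetical
# operation repeatedly and assert that typical and worst-case timings stay
# within invented thresholds. Treat this only as an illustration of the idea.
import time


def process_request(payload):
    """Stand-in for the operation under test."""
    return sorted(payload)


def measure(operation, payload, iterations=100):
    timings = []
    for _ in range(iterations):
        start = time.perf_counter()
        operation(payload)
        timings.append(time.perf_counter() - start)
    return timings


if __name__ == "__main__":
    timings = measure(process_request, list(range(10_000, 0, -1)))
    average = sum(timings) / len(timings)
    worst = max(timings)
    print(f"average: {average * 1000:.2f} ms, worst: {worst * 1000:.2f} ms")
    # Example acceptance criteria (made-up thresholds):
    assert average < 0.05, "average response time exceeded 50 ms"
    assert worst < 0.2, "worst response time exceeded 200 ms"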

Advantages of system testing:


 The testers don't need to have programming experience to do this testing.
 It tests the complete product or piece of software, allowing us to quickly find any
faults or flaws that slipped through integration and unit testing.
 The testing environment resembles a real-world production or commercial setting.
 It addresses the technical and business needs of customers and uses various test
scripts to verify the system's full operation.
 Following this testing, the product will have practically all potential flaws or faults
fixed, allowing the development team to safely go on to acceptance testing.

Disadvantages of system testing:


 Because this testing involves checking the complete product or piece of software, it
takes longer than other testing methods.
 As it involves testing the complete piece of software, the cost will be
considerable.
 Without a proper debugging tool, the hidden faults won't be discovered.
1.10.4 ACCEPTANCE TESTING

Acceptance Testing is an important aspect of software testing which ensures that the
software aligns with user needs and business requirements. Acceptance testing is a quality
assurance (QA) process. The major aim of this test is to evaluate the compliance of the system
with the business requirements and to assess whether it is acceptable for delivery or not.
Acceptance Testing is the last phase of software testing performed after System Testing and
before making the system available for actual use.
Some situations in which acceptance testing is usually performed are listed below:
 End of development: After developers complete coding, acceptance testing is performed
to verify that all requirements are met.
 Before user acceptance: Conducted before the software is released to end-users to
ensure it aligns with business objectives and user needs.
 Pre-release: Performed as the final check to catch any last-minute issues or defects
before the software goes live.

Types of Acceptance Testing


 User Acceptance Testing (UAT)
o User acceptance testing is used to determine whether the product works correctly
for the user (a small scenario sketch is given after this list).
 Business Acceptance Testing (BAT)
o BAT is used to determine whether the product meets the business goals and
purposes or not
 Contract Acceptance Testing (CAT)
o In CAT, a contract specifies that once the product goes live, the acceptance test
must be performed within a predetermined period, and it should pass all the
acceptance use cases.
 Regulations Acceptance Testing (RAT)
o RAT is used to determine whether the product violates the rules and
regulations that are defined by the government of the country where it is
being released
 Operational Acceptance Testing (OAT)
o OAT is used to determine the operational readiness of the product and is
non-functional testing.
 Alpha Testing
o Alpha testing is used to determine the product in the development testing
environment by a specialized testers team usually called alpha testers.
 Beta Testing
o Beta testing is used to assess the product by exposing it to the real end-users,
typically called beta testers in their environment.
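
As an illustration of how a user acceptance criterion can be turned into an executable check,
the sketch below expresses one scenario in a given/when/then style; the Cart class and the
free-shipping rule are invented purely for this example.

# A minimal sketch of a user acceptance test scenario expressed in a
# given/when/then style. The Cart class and the free-shipping rule are
# invented purely to illustrate how an acceptance criterion from the
# business requirements can be turned into an executable check.


class Cart:
    FREE_SHIPPING_THRESHOLD = 50.0

    def __init__(self):
        self.items = []

    def add(self, name, price):
        self.items.append((name, price))

    def total(self):
        return sum(price for _, price in self.items)

    def shipping_fee(self):
        return 0.0 if self.total() >= self.FREE_SHIPPING_THRESHOLD else 4.99


def test_free_shipping_over_threshold():
    # Given a cart whose total meets the advertised free-shipping threshold
    cart = Cart()
    cart.add("headphones", 45.00)
    cart.add("cable", 9.99)
    # When the customer reaches checkout
    fee = cart.shipping_fee()
    # Then no shipping fee is charged, as the business requirement states
    assert fee == 0.0


if __name__ == "__main__":
    test_free_shipping_over_threshold()
    print("acceptance scenario passed")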

Advantages of Acceptance Testing


1. This testing helps the project team to learn further requirements directly from the users, as
it involves the users in testing.
2. It brings confidence and satisfaction to the clients as they are directly involved in the testing
process.

Disadvantages of Acceptance Testing


1. Users should have basic knowledge about the product or application.
2. Sometimes, users don’t want to participate in the testing process.
3. Feedback from the testing takes a long time, as it involves many users and opinions
may differ from one user to another.
