ST Unit 3 and 4 Merged
Unit 3
Black Box Testing & Grey Box Testing
Black Box Testing
List of Contents
1. What is Black Box Testing
2. Generic Steps of Black Box Testing
3. What is a Test Case
4. Types of Black Box Testing
5. Differences b/w Black Box and White Box Testing
6. When is Black Box Testing Done
7. Advantages of Black Box Testing
8. Techniques for Black Box Testing
9. Requirements Based Testing
10. Positive & Negative Testing
11. Specification Based Testing
a. Equivalence Partitioning
b. Boundary Value Analysis
c. Decision Tables
d. State Transitioning
12. Practice Question
Generic Steps Of Black Box Testing
Types of Black Box Testing
There are many types of Black Box Testing but the following are the prominent ones –
● Functional testing – This black box testing type is related to the functional
requirements of a system; it is done by software testers.
● Non-functional testing – This type of black box testing is not related to testing of
specific functionality, but non-functional requirements such as performance,
scalability, usability.
● Regression testing – Regression testing is done after code fixes, upgrades or any other system maintenance to check that the new code has not affected the existing code.
● The tools used for black box testing largely depend on the type of black box testing you are doing.
● For functional/regression tests you can use – QTP, Selenium
● For non-functional tests, you can use – LoadRunner, JMeter
Differences b/w Black Box and White Box Testing
When do we do Black Box Testing?
● Unlike traditional white box testing, black box testing is beneficial for testing software usability.
● It verifies the overall functionality of the system under test.
● Black box testing gives you a broader picture of the software.
● This testing approach sees an application from a user’s perspective.
● It tests the software as a whole system rather than as different modules.
Advantages of Black Box Testing
Techniques for Black Box Testing
Requirements-Based Testing
● Done to ensure that all requirements in the SRS are tested
● Distinguishes between implicit and explicit requirements
● Requirements are reviewed first to ensure they are consistent, correct, complete and testable
● The review enables translation of (some of) the implied requirements into stated requirements
● A reviewed SRS tabulates requirements, along with a requirements id and a priority
● This is the genesis of a Requirements Traceability Matrix
Positive and Negative Testing
Positive Testing
● Type of testing performed on a software application by providing valid data sets as input.
● It checks whether the software application behaves as expected with positive inputs or not.
● Positive testing is performed in order to check whether the software application does exactly
what it is expected to do.
Negative Testing
● Testing method performed on the software application by providing invalid or improper data
sets as input.
● The purpose of negative testing is to ensure that the software application does not crash and
remains stable with invalid data inputs.
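The two notions above can be sketched as paired test sets against a hypothetical input validator. The username rule and its 5–10 character limit below are assumptions for illustration, not taken from the slides:

```python
import re

# Hypothetical field under test, used only for illustration: a username
# must be 5-10 alphanumeric characters (this rule is an assumption).
def accepts_username(name):
    """Return True for valid input; reject invalid input without crashing."""
    if not isinstance(name, str):
        return False
    return re.fullmatch(r"[A-Za-z0-9]{5,10}", name) is not None

# Positive testing: valid data sets, the application behaves as expected.
assert accepts_username("tester1")
assert accepts_username("abcde")

# Negative testing: invalid or improper data, the application must not
# crash and should remain stable.
assert not accepts_username("ab")        # too short
assert not accepts_username("x" * 11)    # too long
assert not accepts_username("bad name")  # space not allowed
assert not accepts_username(None)        # improper type, no crash
```

The point of the negative cases is that every one of them returns a clean rejection rather than raising an exception.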
Positive and Negative Testing
Illustration Example for Positive and Negative Testing
Example 2 – Positive and Negative Testing
Positive Testing
Negative Testing
Table columns: Test Case ID, Req ID, Input 1, Input 2, Current State, Expected State
Specification-Based Testing
● A specification can be anything like a written document, collection of use cases, a set of
models or a prototype.
● Specification-based testing technique is also known as ‘black-box’ or input/output driven
testing techniques because they view the software as a black-box with inputs and
outputs.
● Specification-based techniques are appropriate at all levels of testing (component testing
through to acceptance testing) where a specification exists.
○ For example, when performing system or acceptance testing, the requirements
specification or functional specification may form the basis of the tests.
Types of Specification Based Testing Techniques
1. Equivalence Partitioning: Software Testing technique that divides the input data of a
software unit into partitions of equivalent data from which test cases can be derived.
2. Boundary Value Analysis: Software Testing technique in which tests are designed to include
representatives of boundary values in a range.
3. Decision Tables: Software Testing technique in which tests are more focused on business
logic or business rules. A decision table is a good way to deal with combinations of inputs. *
4. State Transitioning: Software Testing technique which is used when the system is defined in
terms of a finite number of states and the transitions between the states is governed by the
rules of the system.
Equivalence Partitioning
Equivalence Partitioning - Example 1
● Assume there is a function in a software application that accepts a fixed number of digits – no more and no fewer.
● For example, an OTP number contains exactly six digits; any input with fewer or more than six digits will not be accepted, and the application will redirect the user to the error page.
Equivalence Partitioning - Example 2
● As shown in the image, the “AGE” text field accepts only numbers from 18 to 60. There will be three sets of classes or groups.
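A minimal sketch of these three classes, assuming that one representative value per class is enough to stand for the whole partition:

```python
# Sketch of equivalence partitioning for the AGE field (valid range 18-60);
# the three classes below mirror the slide.
partitions = {
    "invalid_below": range(0, 18),    # rejected: below 18
    "valid":         range(18, 61),   # accepted: 18 to 60 inclusive
    "invalid_above": range(61, 120),  # rejected: above 60
}

def pick_representatives(parts):
    """One representative value per equivalence class is sufficient:
    every member of a class is expected to behave the same way."""
    return {name: r[len(r) // 2] for name, r in parts.items()}

reps = pick_representatives(partitions)
```

Three test cases (one per class) then cover the same behavior that testing every age from 0 to 119 would.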
Boundary Value Analysis - Example 1
Imagine there is an Age function that accepts a number between 18 and 30, where 18 is the minimum and 30 is the maximum value of the valid partition; the other values of this partition are 19 through 29.
The invalid partitions consist of numbers less than 18, such as 12 and 14, and numbers greater than 30, such as 31, 32, 34, 36 and 40.
The tester develops test cases for both valid and invalid partitions to capture the behavior of the system under different input conditions. Rather than testing each and every value in the range, boundary value analysis concentrates the test cases on the values at and immediately around the partition boundaries.
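The boundary values for this Age function can be sketched as follows, using the conventional min−1, min, min+1, max−1, max, max+1 selection:

```python
# Boundary value analysis for the Age function from the slide (valid 18-30).
def bva_values(minimum, maximum):
    """Standard BVA picks the values at and adjacent to each boundary."""
    return sorted({minimum - 1, minimum, minimum + 1,
                   maximum - 1, maximum, maximum + 1})

def accepts_age(age):
    """Stand-in for the function under test: accepts 18..30 inclusive."""
    return 18 <= age <= 30

# Each boundary value is checked against the expected verdict.
for age in bva_values(18, 30):
    expected = 18 <= age <= 30
    assert accepts_age(age) == expected
```

Six values (17, 18, 19, 29, 30, 31) exercise both boundaries from the valid and invalid sides.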
Decision Tables
This technique deals with combinations of inputs and determines the result of various input combinations.
To design test cases with the decision table technique, we treat conditions as inputs and actions as outputs.
Example 1
Most of us use an email account, and when you want to use an email account, for this you
need to enter the email and its associated password.
If both email and password are correctly matched, the user will be directed to the email
account's homepage; otherwise, it will come back to the login page with an error message
specified with "Incorrect Email" or "Incorrect Password."
Decision Tables - Example 1 contd.
Let's see how a decision table is created for the login function in which we can log in by
using email and password.
Both the email and the password are the conditions, and the expected result is action.
Decision Tables - Example 1 contd.
There are four conditions, or test cases, to test the login function. In the first condition, if both email and password are correct, the user should be directed to the account's homepage.
In the second condition, if the email is correct but the password is incorrect, the function should display "Incorrect Password."
In the third condition, if the email is incorrect but the password is correct, it should display "Incorrect Email."
In the fourth and last condition, if both email and password are incorrect, the function should display "Incorrect Email."
The tester uses the 2^n formula, where n denotes the number of conditions and each condition takes two values (true and false). In this example n = 2, so:
Number of possible conditions = 2^n = 2^2 = 4
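The four rules of this login decision table can be enumerated programmatically. The function below is a stand-in for the real login logic, matching the actions described above:

```python
from itertools import product

# Decision-table sketch for the login example: two boolean conditions
# (email correct, password correct) give 2**2 = 4 rules.
def login_action(email_ok, password_ok):
    """Return the action for one rule of the decision table."""
    if email_ok and password_ok:
        return "Homepage"
    if not email_ok:
        return "Incorrect Email"
    return "Incorrect Password"

# Enumerate every combination of condition values - one test per rule.
table = {(e, p): login_action(e, p)
         for e, p in product([True, False], repeat=2)}
assert len(table) == 2 ** 2   # matches the 2^n formula

assert table[(True, True)] == "Homepage"
assert table[(True, False)] == "Incorrect Password"
assert table[(False, True)] == "Incorrect Email"
assert table[(False, False)] == "Incorrect Email"
```

Enumerating `product([True, False], repeat=n)` guarantees no rule of the table is forgotten.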
Decision Tables - Example 2
Take the example of XYZ bank, which provides an interest rate of 10% for male senior citizens and 9% for everyone else.
Condition C1 has two values as true and false,
Condition C2 also has two values as true and false.
The number of total possible combinations would then be four.
This way we can derive test cases using a decision table.
State Transition Testing
This technique is based on the idea that a system can be in one of a number of states, and
that when an event occurs, the system transitions from one state to another. The events
that can cause a state transition are known as triggers and the states that can be reached
from a given state are known as targets.
● To carry out state transition testing, a tester first needs to identify all of the possible
states that a system can be in, and all of the possible triggers that can cause a state
transition.
● They then need to create a test case for each state transition that they want to test.
● When carrying out the test, the tester will start in the initial state, and then trigger
the event that they want to test.
● They will then observe the system to see if it transitions to the expected state.
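These steps can be sketched against a small hypothetical state machine; the login-lockout states and triggers below are illustrative assumptions, not from the slides:

```python
# Hypothetical state machine for illustration: a login that locks after
# two wrong PIN attempts. Each (state, trigger) pair maps to a target state.
transitions = {
    ("first_try",  "wrong_pin"): "second_try",
    ("second_try", "wrong_pin"): "locked",
    ("first_try",  "right_pin"): "logged_in",
    ("second_try", "right_pin"): "logged_in",
}

def fire(state, trigger):
    """Return the target state for a (state, trigger) pair, or stay in
    the same state if the trigger causes no transition."""
    return transitions.get((state, trigger), state)

# One test case per transition: start state, fire the trigger, observe target.
assert fire("first_try", "wrong_pin") == "second_try"
assert fire("second_try", "wrong_pin") == "locked"
assert fire("second_try", "right_pin") == "logged_in"
assert fire("locked", "right_pin") == "locked"   # no transition out of locked
```

The last assertion is a negative transition test: verifying that an event which should have no effect really leaves the state unchanged.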
Practice Question
The basic membership fee to a club only for adults above 18 years is $200,
however, this fee can increase or decrease depending on three factors: their age,
income, and their marital status. Customers that are above the age of 65 have
their fee decreased by $50. Customers that have income above $10000 per month
have their fee increased by $50. Customers that are married get a fee increase of
10%.
Derive test cases and test data for the following Black Box Software testing
methods:
● Equivalence Partitioning
● Boundary Value Analysis
● Decision Table
Grey Box Testing
● Grey box testing is a software testing method to test the software application with
partial knowledge of the internal working structure.
● It is a combination of black box and white box testing: it involves access to internal code to design test cases, as in white box testing, while testing is done at the functionality level, as in black box testing.
Grey Box Testing Strategy
● Grey box testing does not require the tester to design test cases from source code.
● To perform this testing, test cases can be designed on the basis of knowledge of architectures, algorithms, internal states, or other high-level descriptions of program behavior.
● It uses all the straightforward techniques of black box testing for functional testing.
Example of Grey Box Testing: While testing a website's features such as links or orphan links, if the tester encounters any problem with these links, they can make the changes straight away in the HTML code and check them in real time.
Generic Steps to Perform Grey Box Testing
1. Select and identify inputs from BlackBox and WhiteBox testing inputs.
2. Identify expected outputs from these selected inputs.
3. Identify all the major paths to traverse through during the testing period.
4. Identify sub-functions which are the part of main functions to perform deep level
testing.
5. Identify inputs for subfunctions.
6. Identify expected outputs for subfunctions.
7. Executing a test case for Subfunctions.
8. Verification of the correctness of result.
Advantages of Grey Box Testing
● It provides the combined benefits of both black box testing and white box testing
● It combines the input of developers as well as testers and improves overall product
quality
● It reduces the overhead of long process of testing functional and non-functional
types
● It gives enough free time for a developer to fix defects
● Testing is done from the user point of view rather than a designer point of view
Techniques used for Grey Box Testing
1. Matrix Testing: This testing technique involves defining all the variables that exist in the program.
2. Regression Testing: To check whether a change in the previous version has regressed other aspects of the program in the new version. It is done using testing strategies like retest all, retest risky use cases, and retest within a firewall.
3. OAT or Orthogonal Array Testing: It provides maximum code coverage with
minimum test cases.
4. Pattern Testing: This testing is performed on the historical data of the previous
system defects. Unlike black box testing, gray box testing digs within the code and
determines why the failure happened
THANK YOU
Software Testing
Unit 3
Test Management
Test Management
List of Contents
- Testing Phase as a project
- Testing Strategy
- Facets of Test Planning
- Test Management
- Test Process
- Test Reporting
- Best Practices
Defining a Project
● A project, as per the Project Management Institute (PMI-2004), is “a temporary endeavor
to create a unique product or service.”
● It is aimed at creating a unique product or service, setting it apart from others in some
distinctive way.
● Every project has a clear and predefined starting point and a specific end date.
● The end result of a project, be it a product or a service, possesses distinctive features that
set it apart from others
● Notably, even testing can qualify as a project on its own. This implies that activities like
testing need to be meticulously planned, executed, tracked, and reported, just like any
other project.
Testing Strategy
1. High-Level Approach and Philosophy
Define the overarching principles that guide your testing approach.
Deciding the test strategy draws on: Risk Management, Test Estimation, Identifying Skill Sets / Training, Identifying Test Deliverables, and Identifying Environment Needs.
Scope Management
● Understanding what constitutes a release of a product
● Breaking down the release into features
● Prioritizing the features for testing. This includes
○ Features that are new and critical for the product release
○ Features whose failures will be catastrophic
○ Features that are re-extensions of features that have a problem-prone track
record
○ Consideration on environmental and other combinatorial factors
● Deciding which features will be tested and which will not be
● Gathering details to prepare for estimation
Deciding the test approach / strategy for the
chosen features
● What type of testing would you use for testing the functionality of each feature?
● What are the configurations or scenarios for testing the features?
● What integration testing would you do to ensure these features work together?
● What localization validations would be needed?
● What “non-functional” tests would you need to do?
Setting Up Criteria for Testing
Entry criteria
Defining when to start testing for each type of testing.
Exit criteria
Considering completeness, risk of release – how much to test and when to stop.
Resumption criteria
Defining when suspended testing can resume, once the above-mentioned hurdles have been cleared.
Identifying Staff Skill Sets and Training
Organizational Structure Establishment
Defining the framework of your organization for efficient operations.
Roles are Defined to:
● Have clear accountability for a given task, so that each person knows what they have to
do
● Clearly list the responsibilities for various people involved so that everyone knows how
his or her work fits into the entire project
● Complement each other, ensuring no one steps on others’ toes
● Supplement each other, so that no task is left unassigned
● Establish management and reporting responsibilities
● Match job requirements with people’s skills and aspirations as best as possible
● Identify and provide necessary training (Why? To identify skill and knowledge gaps, provide essential tools for excellence, and improve employee performance and satisfaction)
Identifying Environment Needs
Basically refers to identifying resource requirements such as:
● Hardware requirements, including RAM, processor, and disk space.
● Test tool prerequisites.
● Supporting tools like compilers, test data generators, and configuration management
tools.
● Different software configurations (e.g., operating systems) needed.
● Special considerations for resource-intensive tests (e.g., load and performance tests).
● Ensure sufficient software licenses.
● Infrastructure and resources, including office space and support functions.
● Planning based on limitations and constraints.
Identifying Test Deliverables
● Test Plan: Including master test plan and project-specific test plans.
● Test Case Design Specs: Detailed specifications for test case creation.
● Test Cases: Inclusive of any specified automation.
● Post-Testing Use: Understanding the purpose of test cases after testing.
● Test Data/Test Bed: Data and environment setup for testing.
● Test Logs: Records generated during test execution.
● Test Summary Reports: Summarizing the overall test results.
Test Estimation
There are different phases in Test Estimation. We are supposed to estimate:
1. Size
2. Effort
3. Resources
4. Elapsed Time required
Test Estimation : Size
● Refers to actual amount of testing to be done
● Depends on
○ Size of product under test
○ Extent of automation required
○ Number of platforms and interoperability requirements
● Expressed as
○ Number of test cases
○ Number of test scenarios
○ Number of configurations to be tested
Test Estimation : Effort
● Effort estimation impacts project cost directly.
● Factors influencing effort estimate:
○ Productivity metrics (e.g., test case creation, automation, execution, analysis per
day).
○ Potential for reusing existing resources.
○ Process robustness and efficiency.
Test WBS and Scheduling
Activity Breakdown and Scheduling
Test WBS and Scheduling (contd.)
Scheduling of Activities
● Identifying external and internal dependencies among the activities
● Sequencing the activities based on the expected duration as well as on the
dependencies
● Identifying the time required for each of the WBS activities, taking into account the
above two factors
● Monitoring the progress in terms of time and effort
● Rebalancing the schedules and resources as necessary
Dependencies
External Dependencies
● Product developer availability for support.
● Access to required documentation.
● Hiring resources and considerations.
● Availability of training resources.
● Procurement of necessary hardware/software.
● Availability of translated message files for testing.
Internal Dependencies
● Finalize test specifications.
● Code/script the tests.
● Execute the tests.
● Resolve conflicts if environment sharing is necessary.
Risk Management
Common Risks and Mitigation in Testing Projects
● Unclear Requirements:
○ Engage the testing team from the start for clarity.
● Schedule Dependency and Downstream Positioning:
○ Implement backup tasks, multiplexing, and parallel automation.
● Insufficient Testing Time and Excessive Caution:
○ Utilize the V Model and establish clear entry/exit criteria.
● Critical Defects:
○ Identify and address "show stopper" defects promptly.
● Non Availability of Skilled Testers:
○ Showcase career paths to attract and retain skilled testers.
● Challenges in obtaining the required test automation tool.
Test Management
Test management comprises: Test Process Management, Test Infrastructure Management, Integrating with Product Release, and People Management.
Test Process Management
● Test Artifacts Naming and Storage Guidelines
● Ensuring alignment between product features and corresponding test suites.
● Documentation and Test Coding Standards
● Separate Test Reporting Standards
● Test Configuration Management
Test Infrastructure Management
Essential Components of Test Infrastructure Management
● Test case repository (TCDB)
● Artefact configuration database (SCM)
● Defect repository
Integrating with Product Release
● Synchronize development and testing timelines for various testing phases (e.g.,
integration, system testing).
● Establish Service Level Agreements (SLAs) to define testing completion timeframes.
● Maintain uniform definitions of defect priorities and severities.
● Establish communication channels with documentation team for defect-related
updates and workarounds.
Test Process
1. Putting together and baselining a test plan
2. Develop comprehensive test specifications, covering:
● Purpose of test
● Items being tested, with their version numbers
● Environmental setup needed
● Input data required
● Steps to be followed
● Expected results
● Any relationship to other tests
3. Update of RTM
4. Identifying possible automation candidates
5. Developing and “baselining” test cases
Test Process (contd.)
Test Reporting
● Test Incidence Report
○ Update to DR (Defect Repository)
○ Includes Test ID, Product & Component Info, Defect Description, and Fix Info
● Test Cycle Report
○ Provides a summary of cycle activities
○ Highlights uncovered defects, categorized by severity and impact
○ Tracks progress from the previous cycle
○ Lists outstanding defects for the current cycle
○ Notes any variations in effort or schedule
● Test Summary Report
○ Phase-wise summary and final test summary included
Test Reporting - Test Summary Report
● Test Summary Report Overview : Provides an overview of test cycle or phase activities.
● Activities Variance : Highlights variances from planned activities, including:
○ Tests that couldn't be executed (with explanations).
○ Test modifications compared to original specifications (update TCDB).
○ Additional tests conducted (not in the original plan).
○ Differences in effort and time compared to the plan.
○ Any other deviations from the plan.
● Summary of Test Results : Covers
○ Failed tests with root-cause descriptions (if available).
○ Severity of defects uncovered by the tests.
● Comprehensive Assessment : Evaluates the product's "fit-for-release."
● Recommendation for Release : Provides a release recommendation.
Test Reporting - Recommendation for Release
● Utilize Dijkstra's Doctrine: testing can show the presence of defects, but never their absence.
● Testing Team's Role: Inform management about defects and potential risks.
● Quality Profile: Assess the overall quality of the product.
● Final Decision: Management decides whether to release as-is or allocate additional
resources for defect resolution.
Best Practices
● Usage of Process Models: Adopt CMMI or TMM for efficient processes.
● Foster Collaboration: Cultivate strong cooperation between Development and
Testing teams.
● Leverage Technology: Utilize an integrated SCM-DR-TCDB infrastructure and
prioritize test automation.
THANK YOU
Prof Raghu B. A. Rao
Department of Computer Science and Engineering
Software Testing
Unit 4
Acceptance Testing
Acceptance Testing
List of Contents
Acceptance Testing
- What is It?
- Importance
- Acceptance Testing [Criteria & Execution]
- Acceptance Testing – Challenges
- Alpha Testing
- Beta Testing
- Alpha vs. Beta Testing
Acceptance Testing - Importance
● Critical for business purposes. Without defined acceptance criteria, managing closure is difficult.
● Though software should work as per requirements, it is not easy to demonstrate ALL required functionalities under the practical constraints of time and resources.
Ex: Think of all-path testing for a medium-complexity application
● Very important for software services/project companies.
Acceptance Testing - Criteria
● Typically involves specifying business functionality of some
complexity
Ex. Should do tax deduction at source during each month's salary disbursement
● Most of the high priority requirements are covered in the criteria.
● Legal & Statutory requirements
● May cover some non-functional requirements also
Ex. Should process 100 million call records in 2 hours!
Acceptance Testing - Criteria
Criteria could also be process / procedure requirements
Ex:
Test reports should show a coverage of > 85% at component level
80 staff members trained in using the data entry
All help documents must open in “Star Office”
Acceptance Testing - Criteria
Typical Test cases cover
● Critical functionality
● Most used functionality
● End-to-end scenarios
● New functionalities – during upgrade
● Legal / statutory needs
● Functionality to work on a defined corpus of data.
Acceptance Testing - Execution
● Typically happens “On-Site” after careful environment setting
● There should be ‘stand-by’ dev team to address blocking issues
● Needs a team with very good business functional knowledge and
application working knowledge
● Major defects – also defined as part of criteria – break the
acceptance test
○ Has to be re-started after the defect is addressed
● Careful execution documentation is a must & final reporting.
Acceptance Testing – Practical Challenges
1. Customers are wary of providing
2. The development team might omit what is not in the acceptance test
3. Not easy to specify – needs good effort
4. Most of the time, the vendor company defines the criteria and gets the customer's concurrence
5. Multiple iterations may be required, and multiple customer representatives may be involved
6. Results need to be carefully analyzed.
Alpha Testing
Alpha Testing - Alpha testing is done by internal developer and QA
teams, with its main goal being to ensure that the software is
functional, reliable, and free of any defects or errors
- Prior to Beta
- Product stability still poor; more ad-hoc process
Beta Testing
Typically for products; More so for new product releases
Product rejection in the marketplace is a huge risk.
Reasons –
● Implicit requirements not addressed
● Changed needs/perceptions after the initial specs
● Usability
● Competitive comparisons by users
Beta Testing
Process
1. Select & list representative customers
2. Work out a beta test plan
3. Initiate the product and support throughout
4. Carefully monitor / watch the progress and the feedback – both good and bad
5. Have a good response system to avoid frustration for customers
6. Analyze the whole feedback and plough it back for product
improvement
Incentivized participation
Alpha Testing vs. Beta Testing
Alpha testing vs. Beta testing
Alpha and beta tests are both types of acceptance tests.
During alpha testing, data is not real and, typically, the data set is very small in order to make debugging and root cause analysis easier.
Beta test participants are potential customers who have agreed to test
a possibly unstable application.
The users create their own data sets and the test focus changes to
usability and evaluating real-life performance with multiple users using
their own hardware.
THANK YOU
Software Testing
Unit 4
Non-Functional Testing
Non-Functional Testing
List of Contents
- Overview
- Non Functionality Tests
- Definitions
- Test Planning & Phases
- Scalability
- Reliability
- Stress Test
- Performance Test
- Test Automation
- Test Execution
- Test Analysis
- NF Test Tools
- Security Testing
- Other NF Tests
Overview
Non-Functional test: Testing the abilities of the system.
● Reliability
● Scalability
● Performance
● Stress
● Interoperability
● Security
● Compatibility
Why Non-Functionality Tests?
➔ To find design faults and to help in fixing them.
➔ To find the limits of the product
➔ Max no. of concurrent access, min memory, max # of rows, max
➔ To get tunable parameters for the best performance
➔ Optimal combination of values
➔ To find out whether resource upgrades can improve performance (ROI)
➔ To find out whether the product can behave gracefully under stress and load conditions
➔ To ensure that the product can work without degrading for a long duration
➔ To compare with other products and previous versions of the product under test.
➔ To avoid unintended side effects.
Common Characteristics of NFT
NF behavior
● Depends heavily on the deployment environment
● “multi-” in most of the control parameters
Quick Definitions
Scalability test
Testing conducted to find out the maximum capability of the product's parameters.
Performance test
Testing conducted to evaluate the time taken or response time of the
product to perform its required functions under stated conditions in
comparison with different versions of same product and competitive
products.
Quick Definitions
Reliability test
Testing conducted to evaluate the ability of the product to perform its required functions under stated conditions for a specified period of time or number of iterations.
Stress test
Testing conducted to evaluate a system beyond the limits of the specified
requirements or environment resources (such as disk space, memory,
processor utilization, network congestion) to ensure the product behavior is
acceptable.
Test Phases
Test Planning – Test Strategy Basis
● What are all the inputs that can be used to design the test cases
○ Product Requirement Document
○ Customer Deployment Information.
○ Key NF requirements & priorities
● Industry / competitor products / current customer product behavior data for
benchmarking.
● What automation can be used.
● Test execution in stages
● How much % of TCs need to/can be automated.
Test Planning - Scope
● The new features which can be attributed to non-functional quality
factors
● Old features – design, code change, or side effects
● May skip unchanged and unaffected features
Test Planning - Estimations
● Effort estimation – more time
● Resource estimation – more hardware
● Defect estimation – defects are complex and costly
● Reason? Difficulty in visualizing non-functional behavior
● All estimates are higher than for functional test runs; how much higher is a matter of judgement.
NF Test Planning – Entry/Exit Criteria
When to start execution
Product does not have basic issues
Meets the minimum criteria
Entry/Exit Criteria - Examples
Test Planning – Defect Management
How is it different in NFT
Definition of defect is different!
Test Design – Typical TC Contains
1. Ensures all the non-functional and design requirements are
implemented as specified in the documentation.
2. Inputs – No. of clients, Resources, No. of iterations, Test Configuration
3. Steps to execute, with some verification steps.
4. Tunable Parameters if any
5. Output (pass/fail definition): time taken, resource utilization, operations per unit time, …
6. What data to be collected, at what intervals.
7. Data presentation format
8. The test case priority
Test Design – Test Scenarios
Based on the quality factors and test requirements different scenarios
can be selected
What is Scalability?
Ability of a system to handle increasing amounts of work without an unacceptable level of performance degradation
Test Design – Scalability Test
The test cases focus on testing the maximum limits of the features and utilities while performing some basic operations
Few Examples:
Backup and restore of a DB with 1 GB of records.
Add records until the DB size grows beyond 2 GB.
Repair a DB of 2 million records.
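One way to sketch such a scalability probe in code; the doubling workload, the 0.5-second target, and the toy workload below are all illustrative assumptions rather than product facts:

```python
import time

# Illustrative scalability probe: grow the workload until the operation
# no longer meets a response-time target; the largest size that met the
# target is the "maximum capability" referred to on the slide.
def find_scalability_limit(run_workload, target_seconds,
                           start=1_000, cap=1_000_000):
    """Return the largest workload size that still met the target time."""
    size, last_ok = start, 0
    while size <= cap:
        t0 = time.perf_counter()
        run_workload(size)
        if time.perf_counter() - t0 > target_seconds:
            break          # target missed: the previous size is the limit
        last_ok = size
        size *= 2          # double the load each step
    return last_ok

# Toy workload standing in for, e.g., adding records until the DB grows.
limit = find_scalability_limit(lambda n: sum(range(n)), target_seconds=0.5)
```

In a real run, `run_workload` would drive the product (bulk inserts, backups, repairs) instead of a CPU-bound stand-in.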
Example of Scalability Test
Scalability Test– Outcomes
1. Based on a number of tests, documenting data and analysis:
● Scalability increases N% for m times the resources – e.g., a 50% increase if memory is doubled
● Suitable, optimal configuration settings for the required scalability
2. This is an important part of application “sizing”
3. If the required scalability is not achieved, analysis and remediation are done and the tests are re-run.
Reliability
Probability of failure-free software operation for a specified period
of time OR number of operations in a specified environment
Test Design – Reliability Test
Reliability Test – Outcomes
● Number of failures in a given run
● Interval of defect-free operation – MTBF
● Sensitivity to some configuration parameters – which cause more or less unreliability
● Identifying the causes of failures is the hardest part.
● Possible failure reasons:
○ Memory leaks
○ Deep defects due to uninitialized values
○ Weak error handling
○ Unstable environment
○ Unintended “side effects”
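A minimal sketch of how MTBF might be derived from a reliability run, assuming MTBF is computed as total operating time divided by the number of observed failures:

```python
# Illustrative sketch: deriving MTBF (mean time between failures) from the
# hour marks at which failures were observed during a long reliability run.
# The formula assumed here: MTBF = total operating time / number of failures.
def mtbf(failure_times, total_runtime):
    """Return MTBF in the same unit as total_runtime (hours here)."""
    if not failure_times:
        return total_runtime  # no failures observed during the run
    return total_runtime / len(failure_times)

# A hypothetical 1000-hour run with four observed failures:
failures = [120.0, 470.0, 610.0, 980.0]
run_mtbf = mtbf(failures, 1000.0)   # 1000 / 4 = 250 hours
```

Comparing MTBF across configuration settings is one way to surface the parameter sensitivities mentioned above.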
Test Design – Stress Test
To test the behavior of the system under very severe conditions – high load / low resource
A good system should show graceful degradation in output and safe/acceptable behavior under extremes
Example:
Performing login, query, add, repair, backup etc operations randomly from 50 clients
simultaneously at half the rated resource levels – say half memory / low speed
processor…
Since stressed conditions are randomly applied over a period of time, this is similar
to reliability tests
Stress Test - Techniques
● Run the application with very high load
● Run the app with very low resource levels
● Run multiple concurrent sessions
● Vary the loads/ resources randomly
● Tools to
○ create artificial scarcity of resources
○ Create artificial high loads
27
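The “multiple concurrent sessions” and “vary loads randomly” techniques above can be sketched with threads, each issuing a randomized mix of operations; the operation names mirror the login/query/add example and the loop body is a stand-in for real work:

```python
import random
import threading

results = []
lock = threading.Lock()

def session(client_id: int, rng: random.Random) -> None:
    """One simulated client issuing a random mix of operations."""
    ops = rng.choices(["login", "query", "add", "repair", "backup"],
                      k=rng.randint(5, 20))
    for op in ops:
        pass  # stand-in for the real operation under test
    with lock:
        results.append((client_id, len(ops)))

# 50 concurrent clients, each with its own randomized workload
threads = [threading.Thread(target=session, args=(i, random.Random(i)))
           for i in range(50)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(f"{len(results)} sessions completed without hangs or crashes")
```

In a real stress test the tool would additionally throttle resources (memory, CPU) to create the artificial scarcity the slide mentions.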
Stress Test - Outcomes
● As the resource / load ratio decreases, performance should go down
gradually.
● In extreme conditions, should stop / pause gracefully
● Should recover symmetrically when stress conditions ease.
28
Stress Test – Failure Cases
1. The system hangs permanently.
2. Runtime crash – unexpectedly
3. Other programs in the system are badly affected, or worse, the full
system is made unstable.
4. Once the stress limit is reached, the system does not recover when
stress levels are reduced
29
Test Design – Performance Test
The test cases focus on measuring response time and throughput
for different operations, under a defined environment, and tracking
resource consumption when resources are shared with other systems
Example:
Performance for adding 1 million records
30
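A sketch of measuring response time and throughput for the “adding records” example, using an in-memory SQLite table and 10,000 records instead of 1 million so it runs quickly:

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE records (id INTEGER PRIMARY KEY, payload TEXT)")

N = 10_000
start = time.perf_counter()
conn.executemany(
    "INSERT INTO records (payload) VALUES (?)",
    (("x" * 64,) for _ in range(N)),
)
conn.commit()
elapsed = time.perf_counter() - start

throughput = N / elapsed    # records per second
avg_response = elapsed / N  # seconds per record
print(f"added {N} records in {elapsed:.3f}s "
      f"({throughput:,.0f} rec/s, {avg_response * 1e6:.1f} µs/record)")
```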
Performance Test – Multiple Needs
● To measure and improve the product performance
● To gauge performance against the competing or standard products
● To generate data for “product sizing” / capacity planning and product
positioning in the market-place
31
Performance Test – Typical Outcome
32
Performance Test – Methodology
● Establish the performance requirements
● Design the performance test cases
● Automate
● Conduct testing and collect results
● Analyze results
● Tune performance
● Benchmark
● Establish needed configuration size.
33
Test Automation – Considerations
● Test automation is itself a software development activity
● Specialized tools and/or Shell script driven batch programs
● Input/Configuration parameters, which are not hard-coded
● Modularization and Reusability
● Selective and Random execution of test cases
● Reporting Data and test logs
● Handling abnormal test termination
● Tool should be maintainable and reliable
34
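Several of these considerations – parameters not hard-coded, selective and random execution, reporting a log, handling abnormal termination – can be combined in a tiny driver sketch; the test-case names and outcomes are hypothetical:

```python
import random

def run_suite(cases: dict, select=None, shuffle=False, seed=None) -> dict:
    """Run callables from `cases`, optionally filtering by name and
    randomizing order; log PASS/FAIL/ERROR rather than hard-coding."""
    names = [n for n in cases if select is None or n in select]
    if shuffle:
        random.Random(seed).shuffle(names)  # reproducible random order
    log = {}
    for name in names:
        try:
            cases[name]()
            log[name] = "PASS"
        except AssertionError:
            log[name] = "FAIL"
        except Exception:            # abnormal termination is caught,
            log[name] = "ERROR"      # not allowed to kill the whole run
    return log

# Hypothetical test cases
def tc_login():
    pass

def tc_query():
    assert False, "query returned wrong rows"

def tc_backup():
    raise RuntimeError("backup media unavailable")

suite = {"tc_login": tc_login, "tc_query": tc_query, "tc_backup": tc_backup}
print(run_suite(suite, shuffle=True, seed=42))
```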
Test Setup
● Non-functional test setups are usually huge
● Different kinds of tests may need different setups
● It is a good idea to build the setup before execution.
Test objectives:
● Testing to improve the product quality factors by finding and helping in fixing
the defects
● Testing to gain confidence on the product quality factors
● Tunable parameters
36
Test Execution – Data Collection
Important part of NFT is data collection.
Mostly tool driven
Example
37
Test Execution – Outcomes
Observed during execution
System crash / instability
System not responding
Repeated failures
38
Test Analysis - Sample Charts
We can also draw the throughput chart and response charts, combined
with the utilization of all the resources, by multiplying and
normalizing the data.
39
Test Analysis - Scalability
Memory ∝ number of clients ⇒
ROI is higher when memory is upgraded as the
number of clients increases.
40
Test Analysis - Performance
CPU and memory ∝ number of clients
Better performance by increasing these resources
Memory
After 100 clients, process creation fails or the
memory allocator in the software fails
This indicates a defect, or that a memory upgrade beyond a
certain limit does not have any ROI
41
Test Analysis - ROI
Assuming the customers already
have 128 MB RAM, the ROI from a
memory upgrade is short term, as
after 256 MB there is no
improvement in performance.
42
NF Test Tools
1. LoadRunner (HP)
2. Jmeter (Apache)
3. PerformanceTester(Rational)
4. LoadUI ( Smart Bear)
5. Silk Performer (Borland)
43
Security Testing
● Both static and dynamic
● Weak spots are called security vulnerabilities
● Many test tools identify vulnerabilities at the application level, e.g.:
○ Access control – application level
○ Direct usage of resources through low-level code
○ SQL injection through input
○ Buffer overflow
○ Usage of encryption
○ Sensitive info in a non-secure channel (HTTP)
○ API interfaces
● OWASP Certifications / Standard
44
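Of the vulnerabilities listed, “SQL injection through input” is easy to demonstrate, and to fix with a parameterized query; the `users` table and data below are hypothetical:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret'), ('bob', 'hunter2')")

malicious = "nobody' OR '1'='1"

# Vulnerable: user input concatenated into the SQL string
leaked = conn.execute(
    f"SELECT secret FROM users WHERE name = '{malicious}'"
).fetchall()
print("concatenated query leaked:", leaked)   # both secrets come back

# Safe: a parameterized query treats the input as a literal value
safe = conn.execute(
    "SELECT secret FROM users WHERE name = ?", (malicious,)
).fetchall()
print("parameterized query returned:", safe)  # no rows
```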
Few Other NF Test Types
1. Endurance testing
2. Load testing
3. Compatibility testing
4. Standards Compliance testing
5. Usability testing
6. Accessibility & Internationalization testing
45
THANK YOU
Prof Raghu B. A. Rao
Department of Computer Science and Engineering
Software Testing
Unit 4
Regression Testing
Prof Raghu B. A. Rao
Department of Computer Science and Engineering
Regression Testing
List of Contents
- Regression Testing & Types
- Methodology
- Selecting Test Cases
- Classifying Test Cases
- Resetting Test Cases
- How to conclude Results
- Popular Strategies
- Best Practices
Not just defect fixes – other modifications also call for regression
testing.
It happens in fields other than software as well – wherever a bit of
complexity is involved.
3
Regression Testing – Types
I. Final regression testing
● Unchanged build exercised for the minimum period of “cook time” (gold master build)
● To ensure that “the same build of the product that was tested reaches the customer”
● To have full confidence on product prior to release
II. Regular regression testing
● To validate the product builds between test cycles
● Used to get a comfort feeling on the bug fixes, and to carry on with next cycle of
testing
● Also used for making intermediate releases (Beta, Alpha)
4
Regression Testing – Types
5
Regression Testing - Methodology
1. Criteria for selecting regression test cases
2. Performing smoke tests
3. Classifying test cases
4. Selecting test cases
5. Resetting test cases for regression testing & phases of testing
6. Conclude results
7. Popular strategies
7
Performing Initial Smoke Test
Smoke testing ensures that the basic functionality works and indicates
that the build can be considered for further testing.
8
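A sketch of such a gate: a handful of basic checks, and only a build that passes all of them is accepted for further testing. The check names and build descriptor are hypothetical:

```python
def smoke_test(build: dict, checks) -> bool:
    """Return True only if every basic check passes; a failing smoke
    test rejects the build before any deeper testing is attempted."""
    return all(check(build) for check in checks)

# Hypothetical basic checks on a build descriptor
checks = [
    lambda b: b.get("installs") is True,     # product installs
    lambda b: b.get("starts") is True,       # application launches
    lambda b: b.get("login_works") is True,  # core path is alive
]

good_build = {"installs": True, "starts": True, "login_works": True}
bad_build = {"installs": True, "starts": False, "login_works": True}
print(smoke_test(good_build, checks))  # → True: proceed to full testing
print(smoke_test(bad_build, checks))   # → False: reject the build
```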
What Is Needed for Selecting Test Cases?
● Bug fixes and how they affect the system
● Area of frequent defects
● Area that has undergone many / recent code changes
● Area that is highly visible to the users
● Area that has more risks
● Core features of the product which are mandatory requirements of the customer
Points to Remember…
● Emphasis is more on the criticality of bug fixes than the criticality of the defect itself
● More positive test cases than negative test cases for final regression
● “Constant set” of regression test cases is rare
9
Classifying Test Cases
The order of test execution is priority 1, 2, 3 & 4. Priority helps define entry and
exit criteria.
10
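Executing in priority order, as described above, amounts to a stable sort on the priority field; the case names and priorities are hypothetical:

```python
# Hypothetical regression cases tagged with priorities 1 (highest) to 4
cases = [
    ("tc_report_export", 4),
    ("tc_login", 1),
    ("tc_backup", 2),
    ("tc_search", 3),
    ("tc_payment", 1),
]

# A stable sort keeps the original order within the same priority
execution_order = [name for name, prio in sorted(cases, key=lambda c: c[1])]
print(execution_order)
# → ['tc_login', 'tc_payment', 'tc_backup', 'tc_search', 'tc_report_export']
```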
Classifying Test Cases
11
Selecting Test Cases
Criteria
● Bug fixes work
● No side-effects
12
Resetting Test Cases
● The results of a test case can be guessed by looking at history.
● There is nothing wrong in communicating expected results before
executing (but don’t conclude).
● In many organizations not all types of testing and all test cases are
repeated for each cycle (but the management generally wants
overall statistics and ‘gut feel’)
● Re-setting a test case is nothing but setting a flag called
NOTRUN or EXECUTE AGAIN and not getting biased with
previous runs
13
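Re-setting, as described above, amounts to flipping every result back to a NOTRUN flag so the next cycle starts unbiased by previous runs:

```python
def reset_for_regression(results: dict) -> dict:
    """Flip every test case back to NOTRUN, discarding previous outcomes
    so the new cycle is not biased by history."""
    return {name: "NOTRUN" for name in results}

previous_cycle = {"tc_login": "PASS", "tc_backup": "FAIL", "tc_search": "PASS"}
print(reset_for_regression(previous_cycle))
# → {'tc_login': 'NOTRUN', 'tc_backup': 'NOTRUN', 'tc_search': 'NOTRUN'}
```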
Resetting Test Cases (Contd.)
It is done
● When there is a major change in the product
● When there is a change in the build procedure that affects the product
● In a large release cycle where some test cases have not been executed for
a long time
● When you are in the final regression test cycle with a few selected test
cases
● In a situation in which the expected results could be quite different from
history
14
How to Conclude Results
15
Popular Strategies
1. Regress all: Rerun all priority 1, 2 & 3 TCs. Time becomes the constraint and ROI is
low.
2. Priority-based regression: Rerun priority 1, 2 & 3 TCs; the cut-off is based on time
availability.
3. Regress changes: Compare code changes and select test cases based on
impact (grey box strategy).
4. Random regression: Select random test cases and execute. Tests can include both
automated and non-automated test cases.
5. Context-based dynamic regression: Execute a few of the priority-1 TCs and, based on
context (e.g., new defects found, boundary values) and outcome, select additional
related cases.
An effective regression strategy is a combination of all of the above, not any of them in isolation.
16
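Strategy 2 (priority-based regression with a time cut-off) can be sketched as: take test cases in priority order and stop once the time budget is exhausted. The per-case duration estimates are hypothetical:

```python
def priority_based_selection(cases, budget_minutes):
    """Pick cases in priority order (1 = highest) until the estimated
    run time exceeds the budget; the cut-off is time availability."""
    selected, used = [], 0
    for name, prio, minutes in sorted(cases, key=lambda c: c[1]):
        if used + minutes > budget_minutes:
            break
        selected.append(name)
        used += minutes
    return selected, used

cases = [
    ("tc_login", 1, 10),
    ("tc_payment", 1, 30),
    ("tc_backup", 2, 45),
    ("tc_search", 3, 60),
]
print(priority_based_selection(cases, budget_minutes=90))
# → (['tc_login', 'tc_payment', 'tc_backup'], 85)
```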
Some Guidelines
● Do not select test cases that are bound to fail and have little or no relevance to
the bug fixes.
● Select more positive test cases than negative test cases for the final regression test cycle,
as more of the latter may create some confusion and unexpected heat.
● The regression guidelines are equally applicable when a major release of a product
has executed all test cycles and a regression test cycle is being planned.
17
Best Practices
1. Regression can be used for all types of testing and all phases of
testing
2. Mapping defect numbers with test case result improves regression
quality
3. Create and execute regression test bed daily.
4. Assign your best test engineers for regression.
5. Detect defects, protect your product from defects and defect fixes
18
Summary
19
THANK YOU
Prof Raghu B. A. Rao
Department of Computer Science and Engineering
Software Testing
Unit 3
Defect Management
1
Defect Management
List of Contents
- Defect Management Process
- Defect in Software Testing
- The Objective of Defect Management Process (DMP)
- Various Stages of Defect Management Process
Key Stages:
1. Discovery of Defects
2. Categorization of Defects
3. Defect Resolution by Developers
4. Verification by Testers
5. Defect Closure
6. Generation of Defect Reports (End of Project)
3
Defect in Software Testing
4
The Objective of Defect Management Process (DMP)
● Early Detection of Defects: One key aim of the Defect Management Process (DMP) is to
identify and address defects in software development at an early stage.
● Process Enhancement: Implementing the DMP contributes to refining the software
development process and its implementation.
● Mitigating Defect Impact: The DMP is geared towards minimizing the negative effects and
impact of defects on the software.
● Defect Prevention: DMP plays a role in preventing the occurrence of defects in software.
● Defect Resolution: The primary objective of the Defect Management Process is to
efficiently resolve and fix identified defects.
5
Various Stages of Defect Management Process
The defect management process includes several stages, which are as follows:
1. Defect Prevention
2. Deliverable Baseline
3. Defect Discovery
4. Defect Resolution
5. Process Improvement
6. Management Reporting
6
1. Defect Prevention
● The first stage of the defect management process is defect prevention. In this stage, the
execution of procedures, methodology, and standard approaches decreases the risk of
defects.
● Removing a defect at the initial phase is the best approach to reduce its impact.
● The defect prevention stage includes the following significant steps:
○ Estimate Predictable Impact
○ Minimize expected impact
○ Identify Critical Risk
7
2. Deliverable Baseline
● The second stage of the defect management process is the deliverable baseline. Here, a
deliverable is the system, a document, or the product.
● A deliverable is baselined as soon as it reaches its pre-defined milestone.
8
3. Defect Discovery
● The next stage of the defect management process is defect discovery. Discovering a
defect at an early stage of the process is very significant; discovered later, it might
cause greater damage.
● A defect is considered discovered only when developers have acknowledged and
documented it as a valid one.
9
4. Defect Resolution
● Defect resolution is a step-by-step procedure for fixing the defects; this
process also helps to specify and track them.
● The process begins with handing over the defects to the development team. The
developers then proceed to resolve the defects and fix them based on priority.
● We need to follow the below steps in order to accomplish the defect resolution stage.
○ Prioritize the risk
○ Fix the defect
○ Report the Resolution
10
5. Process Improvement
● In the process improvement phase, we also look into the lower-priority defects, because
these defects too are essential and impact the system.
● From the process improvement perspective, all acknowledged defects are treated like
critical defects and need to be fixed.
● The people involved in this particular stage need to trace back to where the
defect originated.
● Depending on that, we can make modifications to the validation process, the baselined
documents, and the review process, so that flaws are found earlier and the
process becomes less costly.
11
6. Management Reporting
12
THANK YOU
Prof Raghu B. A. Rao
Department of Computer Science and Engineering
13
Software Testing
Unit 4
Acceptance Testing
Prof Raghu B. A. Rao
Department of Computer Science and Engineering
Acceptance Testing
List of Contents
- Alpha & Beta Testing
- Acceptance Testing [Criteria & Execution]
- Acceptance Testing - Challenges
- Prior to Beta
- Product stability still poor; more ad-hoc process
3
System Testing – Beta Testing
Typically for products; more so for new product releases
Product rejection in the marketplace is a huge risk.
Reasons –
● Implicit requirements not addressed
● Changed needs/perceptions after the initial specs
● Usability
● Competitive comparisons by users
4
System Testing – Beta Testing
Process
1. Select & list representative customers
2. Work out a beta test plan
3. Initiate the product and support throughout
4. Carefully monitor / watch the progress and the feedback –
both good and bad
5. Have a good response system to avoid frustration for customers
6. Analyze the whole feedback and plough it back for product
improvement
Incentivized participation
5
Acceptance Testing
What is it – Testing done in accordance with customer-specified criteria
(test cases, scenarios, results)
6
Acceptance Testing - Importance
● Critical for business purposes. Without defined acceptance criteria,
managing the closure is difficult
● Though software should work as per requirements, it is not easy to
demonstrate ALL required functionality, under practical constraints
of time and resources
Ex: Think of all-path testing for a medium-complexity application
● Very important for software services/project companies.
7
Acceptance Testing - Criteria
● Typically involves specifying business functionality of some
complexity
Ex. Should do tax deduction at source during each month's salary
disbursement
● Most of the high priority requirements are covered in the criteria.
● Legal & Statutory requirements
● May cover some non-functional requirements also
Ex. Should process 100 million call records in 2 hours!
8
Acceptance Testing - Criteria
Criteria could also be process / procedure requirements
Ex:
Test reports should show a coverage of > 85% at component level
80 staff members trained in using the data entry
All help documents must open in “Star Office”
9
Acceptance Testing - Criteria
Typical Test cases cover
● Critical functionality
● Most used functionality
● End-to-end scenarios
● New functionalities – during upgrade
● Legal / statutory needs
● Functionality to work on a defined corpus of data.
10
Acceptance Testing - Execution
● Typically happens “On-Site” after careful environment setting
● There should be ‘stand-by’ dev team to address blocking issues
● Needs a team with very good business functional knowledge and
application working knowledge
● Major defects – also defined as part of criteria – break the
acceptance test
○ Has to be re-started after the defect is addressed
● Careful execution documentation is a must & final reporting.
11
Acceptance Testing – Practical Challenges
1. Customers are wary of providing detailed acceptance criteria
2. Development team might omit what is not in the Acceptance Test
3. Not easy to specify – Needs good effort.
4. Most of the times vendor company defines and gets concurrence
5. Multiple iterations may be required and multiple customer
representatives may be involved
6. Results need to be carefully analyzed.
12
THANK YOU
Prof Raghu B. A. Rao
Department of Computer Science and Engineering
Software Testing
Unit 4
Agile & AdHoc Testing
Prof Raghu B. A. Rao
Department of Computer Science and Engineering
Agile & AdHoc Testing
List of Contents
- Iterative Testing
- Agile Testing
- Methodology
- AdHoc Testing
- Defect Seeding
- Examples of AdHoc Testing
3
Agile Testing
Agile testing is software testing that follows the best practices of the
Agile development framework. Agile development takes an incremental
approach to development; similarly, Agile testing takes an
incremental approach to testing.
4
Advantages of Agile Testing
1. Early Detection of Defects
2. Continuous Integration
3. Risk Reduction
4. Enhanced Product Quality
5. Cost-Efficiency
5
Agile Testing Methodology
1. Impact assessment
2. Planning
3. Daily stand-ups
4. Reviews
6
Defect Seeding
Defect seeding is a method of intentionally introducing defects into a product to
check the rate of detection and residual defects.
7
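The detection rate on seeded defects gives a classic estimate of how many real defects remain: if testing finds S of N seeded defects and R real defects, the estimated total number of real defects is R·N/S. A sketch with illustrative counts:

```python
def estimate_residual_defects(seeded: int, seeded_found: int,
                              real_found: int) -> int:
    """Lincoln-index style estimate: if testing caught seeded_found of
    `seeded` planted defects, assume the same detection rate applies
    to real defects, and return how many real defects remain latent."""
    estimated_total_real = real_found * seeded / seeded_found
    return round(estimated_total_real) - real_found

# 100 defects seeded, 50 of them detected, 20 genuine defects found:
# detection rate 50% → estimated 40 real defects → ~20 still latent.
print(estimate_residual_defects(seeded=100, seeded_found=50, real_found=20))
# → 20
```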
AdHoc Testing
When software testing is performed without proper planning and
documentation, it is said to be ad-hoc testing.
Ad-hoc tests are done after formal testing is performed on the application.
Ad-hoc methods are the least formal type of testing, as it is NOT a structured
approach. Hence, defects found using this method are hard to replicate, as
there are no test cases aligned to those scenarios.
8
AdHoc Testing Examples
9
THANK YOU
Prof Raghu B. A. Rao
Department of Computer Science and Engineering
Software Testing
Unit 4
Software Testing Tools
Prof Raghu B. A. Rao
Department of Computer Science and Engineering
1
Software Testing Tools
List of Contents
- Software Testing Tools
- Selenium
- Advantages & Disadvantages of Selenium
- Test Management Tools
- Bugzilla
- Advantages & Disadvantages of Bugzilla
- Jira
- Advantages & Disadvantages of Jira
- Bugzilla vs Jira (A Comparison)
3
Selenium
Selenium is an open-source, automated
testing tool used to test web
applications across various browsers.
4
Advantages & Disadvantages of Selenium
5
Test Management Tools
Test management tools are used to store information on how testing is to be done, plan
testing activities and report the status of quality assurance activities.
Examples include Bugzilla & Jira
6
Bugzilla
Bugzilla is an open-source tool used to
track bugs and issues of a project or a
software. It helps the developers and other
stakeholders to keep track of unresolved
problems with the product.
7
Features of Bugzilla
Reference
8
Advantages & Disadvantages of Bugzilla
9
Jira
10
Jira
11
Some Jira Use-Cases
12
Advantages & Disadvantages of Jira
13
Bugzilla vs Jira
14
THANK YOU
Prof Raghu B. A. Rao
Department of Computer Science and Engineering
15