Software Testing MODULE2 NOTES

The document discusses functional testing, which verifies that software functions according to specified requirements, focusing on user perspective and predefined functional requirements. It outlines various types of functional testing, including unit, integration, system, and acceptance testing, along with examples and key aspects such as boundary value analysis and robustness testing. Additionally, it highlights the importance of testing edge cases and handling unexpected inputs to ensure software stability and performance.


NOTES-UNIT2

Functional Testing
Functional testing is a type of software testing that focuses on verifying
whether the software system functions according to the specified requirements.
It ensures that the application performs its intended functions correctly. During
functional testing, the tester evaluates the system's behavior against predefined
functional requirements or use cases, often without considering the internal
workings of the application. It helps ensure that the core features and functions
of the software deliver on the promise outlined in the requirements. It is
essential in ensuring that software behaves as expected in real-world scenarios,
meeting both user and business requirements. By performing tests like login
verification, form validation, and e-commerce functionality checks, you can
make sure that the software functions correctly, providing value to end users.
Key Points of Functional Testing:
1. Focus on Features and Functions: The primary focus is on validating
the functionality of the software. Test cases are designed to check if the
system behaves as expected for each function specified in the
requirements.
2. Test the User's Perspective: Functional testing is done from an end-user
perspective. It checks if the application meets the user's needs, inputs,
and outputs correctly.
3. No Concern for Code or Internal Logic: Functional testing doesn't
focus on the underlying code or internal workings of the application,
unlike structural or white-box testing.
4. Based on Functional Requirements: Functional testing uses
requirement documents, user stories, or functional specifications as a
reference point for creating test cases.
Types of Functional Testing:
1. Unit Testing: Testing individual components or modules of the software
in isolation to ensure that each part works as expected.
2. Integration Testing: Verifying the interaction between integrated
components or systems to ensure that they function correctly when
combined.
3. System Testing: Testing the complete integrated system to check if it
meets the specified functional requirements.
4. Sanity Testing: A quick round of testing to check if the basic
functionalities are working after a new build or code change.
5. Smoke Testing: A preliminary test to ensure that the most critical
functions of the software are working before proceeding with more
detailed testing.
6. Regression Testing: Ensuring that new code changes or fixes have not
adversely affected existing functionalities.
7. Acceptance Testing: Typically done by the end users or stakeholders to
determine whether the software meets their requirements and is ready for
production.
Examples of Functional Testing:
• Verifying if a login function accepts valid credentials and rejects invalid
ones.
• Checking if a user can correctly add items to a shopping cart in an e-
commerce website.
• Ensuring that data entered into a form is saved and retrieved correctly.
Key Aspects of Functional Testing:
• Objective: To verify that each function of the software application
operates according to the defined specifications.
• Scope: Focuses on user-facing functionality and system behavior.
• Test Basis: Functional requirements, user stories, and use cases.
• Non-Goal: Does not focus on internal code, performance, or system
design.
Example 1: Login Functionality in a Web Application
• Requirement: The system should allow users to log in using a valid
username and password.
• Test Case:
• Input: Correct username and password.
• Expected Outcome: The system logs the user in and redirects to the
homepage.
• Input: Incorrect username or password.
• Expected Outcome: The system displays an error message
indicating invalid credentials.
• Input: Empty fields for username or password.
• Expected Outcome: The system prompts the user to fill in the
required fields.
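The login test cases above can be sketched as a small executable check. The `validate_login` function and the stored credentials are hypothetical stand-ins for a real authentication service; only the expected outcomes come from the requirement.

```python
# Hypothetical credential store standing in for a real authentication backend.
VALID_USERS = {"alice": "s3cret"}

def validate_login(username, password):
    """Return a status message mirroring the expected outcomes above."""
    if not username or not password:
        return "Please fill in the required fields"
    if VALID_USERS.get(username) == password:
        return "Login successful"
    return "Invalid credentials"

# One functional test per row of the test case:
assert validate_login("alice", "s3cret") == "Login successful"
assert validate_login("alice", "wrong") == "Invalid credentials"
assert validate_login("", "") == "Please fill in the required fields"
```

Each assertion pairs one input from the test case with its expected outcome, which is the essence of functional testing: inputs and outputs only, no inspection of internals.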
Example 2: E-Commerce Shopping Cart
• Requirement: Users should be able to add products to their shopping
cart and view the total amount.
• Test Case:
• Input: Select a product and click "Add to Cart."
• Expected Outcome: The product is added to the shopping cart, and
the cart count is updated.
• Input: View the cart after adding items.
• Expected Outcome: The cart displays the correct list of items and
the total price.
• Input: Remove a product from the cart.
• Expected Outcome: The item is removed, and the total price is
updated accordingly.
Example 3: Form Submission in a Web Application
• Requirement: A user should be able to submit a contact form with valid
information.
• Test Case:
• Input: Fill out the form with valid details (name, email, message)
and click "Submit."
• Expected Outcome: The system displays a confirmation message
saying "Thank you for contacting us."
• Input: Leave the email field empty.
• Expected Outcome: The system displays an error message
requesting a valid email.
• Input: Enter an invalid email format.
• Expected Outcome: The system displays an error message
indicating the email format is invalid.
Example 4: Search Functionality on a Website
• Requirement: The website should allow users to search for products by
entering keywords.
• Test Case:
• Input: Enter a valid search term (e.g., "laptop").
• Expected Outcome: The system returns a list of products related to
the term "laptop."
• Input: Enter a non-existent product name.
• Expected Outcome: The system displays a message like "No results
found."
• Input: Enter a blank search term.
• Expected Outcome: The system should show a message requesting
the user to enter a search term.

Boundary Value Analysis (BVA) is a black-box testing technique in software testing that focuses on testing the boundaries or edge conditions of input
domains. The idea behind boundary value analysis is that defects are often
found at the "edges" of input values, rather than in the "middle" of the input
range. The test cases are designed around the boundary values of input
parameters. Since many errors occur at the boundaries of input ranges, this
method helps ensure that the software behaves correctly under extreme
conditions. It is a valuable technique in software testing for ensuring that the
software handles edge conditions correctly, especially when working with
input ranges or values that may lead to errors. By testing the boundaries and
values near them, testers can catch defects that might otherwise go unnoticed.
Key Concepts of Boundary Value Analysis:
1. Boundary Conditions: BVA tests the boundaries, such as the minimum and
maximum values, as well as values just inside and just outside the boundary.
This helps in detecting off-by-one errors and other defects related to
boundary conditions.
2. Equivalence Partitioning: Often used in conjunction with BVA,
equivalence partitioning divides the input data into valid and invalid
partitions, while BVA focuses specifically on testing the boundaries of these
partitions.
Steps Involved in Boundary Value Analysis:
1. Identify the Input Range: Determine the valid input range or set of values.
2. Identify Boundaries: Determine the minimum, maximum, and values just
outside these boundaries.
3. Create Test Cases: Create test cases that include the boundary values,
values just inside the boundary, and values just outside the boundary.
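The three steps above can be sketched as a small helper that, given an inclusive valid range, produces the six classic boundary-value test inputs. The function name is illustrative, not part of any standard library.

```python
def boundary_values(minimum, maximum):
    """Return the six classic BVA test inputs for an inclusive [min, max] range."""
    return {
        "min": minimum,              # on the lower boundary
        "min_inside": minimum + 1,   # just inside the range
        "min_outside": minimum - 1,  # just outside (invalid)
        "max": maximum,              # on the upper boundary
        "max_inside": maximum - 1,   # just inside the range
        "max_outside": maximum + 1,  # just outside (invalid)
    }

# The age-verification range 18..60 yields exactly the six test inputs
# used in Scenario 1:
assert boundary_values(18, 60) == {
    "min": 18, "min_inside": 19, "min_outside": 17,
    "max": 60, "max_inside": 59, "max_outside": 61,
}
```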
Examples of Boundary Value Analysis:
Scenario 1: Age Verification
• Requirement: A software system that checks the user's eligibility for a
particular service based on age, where the valid age range is between 18
and 60 (inclusive).
• Valid input range: 18 ≤ Age ≤ 60
• Test Cases:
• Minimum Boundary: Age = 18 (on the boundary)
• Just Inside Minimum: Age = 19 (just above the boundary)
• Just Outside Minimum: Age = 17 (just below the boundary,
invalid)
• Maximum Boundary: Age = 60 (on the boundary)
• Just Inside Maximum: Age = 59 (just below the boundary)
• Just Outside Maximum: Age = 61 (just above the boundary,
invalid)
These test cases ensure that the system correctly handles inputs at the
boundaries of the age range and rejects those outside the valid range.
Scenario 2: Input Validation for a Password Field
• Requirement: The system requires a password between 6 and 12
characters long.
• Valid input range: 6 ≤ Password Length ≤ 12
• Test Cases:
• Minimum Boundary: Password length = 6 characters
• Just Inside Minimum: Password length = 7 characters
• Just Outside Minimum: Password length = 5 characters (invalid)
• Maximum Boundary: Password length = 12 characters
• Just Inside Maximum: Password length = 11 characters
• Just Outside Maximum: Password length = 13 characters
(invalid)
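Scenario 2 translates directly into a table-driven check: each boundary value becomes one row. The `is_valid_password` validator is hypothetical and enforces only the length rule stated in the requirement.

```python
def is_valid_password(password):
    """Hypothetical validator for the 6..12 character length rule."""
    return 6 <= len(password) <= 12

# Length -> expected validity, one entry per BVA test case above:
cases = {5: False, 6: True, 7: True, 11: True, 12: True, 13: False}
for length, expected in cases.items():
    assert is_valid_password("x" * length) == expected
```

Driving the validator from a table keeps the six boundary cases visible at a glance and makes it easy to extend if the length rule changes.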
Scenario 3: Discount Calculation Based on Purchase Amount
• Requirement: A discount is applied to customers whose purchase
amount is between $100 and $500.
• Valid input range: $100 ≤ Purchase Amount ≤ $500
• Test Cases:
• Minimum Boundary: Purchase amount = $100 (on the boundary)
• Just Inside Minimum: Purchase amount = $101 (just above the
boundary)
• Just Outside Minimum: Purchase amount = $99 (just below the
boundary, invalid)
• Maximum Boundary: Purchase amount = $500 (on the boundary)
• Just Inside Maximum: Purchase amount = $499 (just below the
boundary)
• Just Outside Maximum: Purchase amount = $501 (just above the
boundary, invalid)
Advantages of Boundary Value Analysis:
1. Effective Test Coverage: Since boundary values are often where defects
occur, BVA can uncover errors that may not be detected through regular
testing.
2. Simplicity: BVA is a simple and effective technique that helps identify
edge cases with minimal effort.
3. Reduces the Number of Test Cases: By focusing on boundary values,
it reduces the number of test cases compared to testing every possible
input value.

Disadvantages of Boundary Value Analysis:


1. Limited Coverage: BVA only tests the boundaries and may miss other
errors that occur in the "interior" of the input range.
2. Not Suitable for Complex Conditions: BVA is most effective with
simple input ranges. It may not be as effective for more complex
scenarios or when there are multiple conditions to consider.

Robustness Testing is a technique used to evaluate how well a system handles unexpected, invalid, or extreme conditions, such as unexpected inputs,
boundary conditions, or stressful situations like high user load. The purpose is
to ensure that the software behaves reliably even when subjected to adverse
conditions and doesn't crash or exhibit unstable behavior. Robustness testing
is critical in ensuring that software systems are capable of handling extreme or
unexpected conditions gracefully without crashing. By testing the system
under unexpected inputs, boundary conditions, and load scenarios, robustness
testing helps ensure that the application can maintain its stability, performance,
and usability even in the face of user errors, system failures, or high-stress
situations.

Key Aspects of Robustness Testing:


1. Input Validation: Ensures that the application correctly handles
unexpected, invalid, or out-of-range inputs. For instance, providing
incorrect data types, malformed values, or extreme inputs that might
otherwise cause the system to crash.
2. Error Handling: Verifies that the software properly handles errors and
exceptions, providing meaningful feedback to the user or logging the
issue for later diagnosis without causing the system to fail.
3. System Stability: Assesses how the system performs under extreme load
conditions, such as very large data sets or simultaneous requests from
multiple users.
4. Boundary Testing: Tests the system’s reaction to boundary conditions,
ensuring that the application handles the edge cases and doesn’t break or
behave unexpectedly at the limits of acceptable input.
Objectives of Robustness Testing:
• Identify Crashes or Freezes: Detecting if the system crashes or freezes
when it encounters unexpected input or extreme conditions.
• Validate Exception Handling: Ensuring that the system properly
handles error messages and provides useful feedback to the user, rather
than simply crashing or locking up.
• Handle Invalid Inputs: Test how the system handles unexpected or
invalid inputs.
• Error Handling: Verify that the system gracefully handles errors
without crashing.
• System Stability: Ensure that the system can handle extreme conditions,
such as large volumes of data or simultaneous requests.
• Graceful Failure: Test if the application fails gracefully when errors or
unexpected situations occur.
Example 1: Handling Invalid Input
• Scenario: A login form accepts only valid email addresses as input.
• Test Cases for Robustness:
• Input a string that is too short, such as "abc@".
• Input special characters like "@#$%^&*".
• Input an empty field (i.e., leave the email field blank).
• Input a string that exceeds the character limit (e.g., 255 characters).
• Expected Outcome: The system should not crash. Instead, it
should display appropriate error messages like "Invalid email
address" or "Email field cannot be empty."
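Example 1 can be sketched as a robustness check: whatever the input, the validator must return an error message rather than raise an exception. The regular expression, length limit, and messages are illustrative assumptions, not a complete email specification.

```python
import re

# Deliberately simple pattern: one non-empty local part, an @, a dotted domain.
EMAIL_RE = re.compile(r"^[^@\s]{1,64}@[^@\s]+\.[^@\s]+$")

def check_email(value):
    """Never raises: always returns an (ok, message) pair."""
    if not value:
        return (False, "Email field cannot be empty")
    if len(value) > 255:
        return (False, "Email address too long")
    if not EMAIL_RE.match(value):
        return (False, "Invalid email address")
    return (True, "OK")

# Robustness cases from Example 1: none of these may crash the validator.
for bad in ["abc@", "@#$%^&*", "", "a" * 300]:
    ok, _ = check_email(bad)
    assert ok is False
assert check_email("user@example.com")[0] is True
```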
Example 2: Handling Large Data Volumes
• Scenario: A data processing system receives user data, including
multiple fields such as name, age, address, etc.
• Test Cases for Robustness:
• Provide a dataset with 1,000,000 entries (large volume of data).
• Input a very large file size (e.g., 1 GB file).
• Expected Outcome: The system should be able to process the data
without crashing or freezing. It should either process the data
correctly or display a user-friendly error message if the input
exceeds capacity (e.g., "File too large").
Example 3: Boundary Conditions
• Scenario: A banking application allows the transfer of funds, with the
minimum transfer limit being $1 and the maximum limit being $10,000.
• Test Cases for Robustness:
• Input a transfer amount of $0 (below the minimum boundary).
• Input a transfer amount of $10,000,000 (well above the maximum
limit).
• Expected Outcome: The system should not crash. Instead, it
should display a message such as "Amount must be between $1 and
$10,000" without crashing.
Key Techniques Used in Robustness Testing:
1. Stress Testing: Involves testing the system under heavy load or extreme
conditions to see how it handles overload situations (e.g., too many
requests, high data volume).
2. Negative Testing: Tests how the application behaves when invalid,
incorrect, or unexpected data is provided. This includes inputting invalid
values, empty fields, or unexpected data types.
3. Fault Injection: Involves intentionally introducing faults or errors into
the system to test how it responds to these unexpected conditions. For
example, disconnecting the database while the system is in use to see if
it can handle such interruptions.
4. Error Path Testing: Ensures that the application follows the correct
error paths, providing meaningful feedback to the user without causing
system crashes.
5. Security Testing: Part of robustness testing often involves evaluating
how the system responds to security threats, such as SQL injection or
buffer overflows.
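Fault injection (technique 3) is often simulated in unit tests by forcing a dependency to fail on demand. The sketch below uses Python's `unittest.mock` to make a hypothetical database call raise mid-operation, then asserts that the caller degrades gracefully instead of crashing; all class and function names are illustrative.

```python
from unittest import mock

class Database:
    """Stand-in for a real database client."""
    def fetch_balance(self, account):
        return 100

def get_balance_safe(db, account):
    """Wrap the DB call so a connection failure yields an error, not a crash."""
    try:
        return ("ok", db.fetch_balance(account))
    except ConnectionError:
        return ("error", "Service temporarily unavailable")

db = Database()
assert get_balance_safe(db, "A1") == ("ok", 100)

# Inject the fault: simulate the database disconnecting while in use.
with mock.patch.object(Database, "fetch_balance", side_effect=ConnectionError):
    assert get_balance_safe(db, "A1") == ("error", "Service temporarily unavailable")
```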
Advantages of Robustness Testing:
1. Ensures System Stability: Ensures that the system remains stable and
functional, even when subjected to invalid or extreme inputs.
2. Improves User Experience: By handling errors gracefully, the system
provides a better user experience, as users won't encounter crashes or
freezes.
3. Identifies Hidden Defects: It helps in identifying rare or difficult-to-
reproduce defects that might arise under unexpected conditions.
4. Prevents System Downtime: Robustness testing can help prevent
unexpected crashes, reducing the potential for system downtime in
production.
Challenges in Robustness Testing:
1. High Complexity: Identifying and simulating all possible edge cases and
extreme scenarios can be complex and time-consuming.
2. Resource Intensive: Stressing the system with a large volume of users
or data can require significant computational resources and time.
3. Difficult to Automate: While tools exist, automating robustness testing,
especially with random inputs or extreme conditions, can be challenging.

Equivalence Class Testing (also called Equivalence Partitioning) is a software testing technique that divides the input data of a system into groups
(or classes) that are expected to behave similarly. Instead of testing every
possible input, which may be impractical, equivalence class testing reduces the
number of test cases by selecting a representative from each class. It is a
powerful technique that helps reduce the number of test cases by grouping
similar inputs into classes. This method is especially useful in functional
testing where the goal is to test the system’s behavior with different types of
inputs while keeping the testing process efficient and manageable. By focusing
on representative values from each equivalence class, it ensures that the system
handles various input scenarios correctly without needing exhaustive testing of
every possible input.

Key Concepts of Equivalence Class Testing:


1. Equivalence Class: A set of inputs that are treated the same by the
system. If one value in the equivalence class is valid or invalid, then all
values in the class are assumed to behave the same.
2. Valid Equivalence Classes: These are input ranges or sets of values that
the system is expected to handle correctly.
3. Invalid Equivalence Classes: These are input ranges or sets of values
that the system is not expected to handle correctly and should produce
errors.
4. Test Case Optimization: Instead of testing all possible inputs, the
technique helps in selecting a few representative values from each
equivalence class, reducing the number of test cases while still achieving
good coverage.
Steps in Equivalence Class Testing:
1. Identify the Input Domain: Define the range of valid and invalid inputs
for the system.
2. Divide the Input Domain into Classes: Break down the input space into
equivalence classes based on the system’s expected behavior.
3. Select Test Cases: Choose one or more representative values from each
equivalence class.
4. Test the System with the Chosen Values: Run the test cases to check if
the system behaves as expected for these values.
Types of Equivalence Classes:
• Valid Equivalence Classes: Represent input values that should be
processed successfully by the system.
• Invalid Equivalence Classes: Represent input values that should be
rejected or handled with errors by the system.

Example 1: Age Verification System


Scenario: A system allows access only to people aged between 18 and 65
(inclusive).
• Valid Equivalence Class:
• Age between 18 and 65 (e.g., 25, 45, 60).
• Invalid Equivalence Classes:
• Age less than 18 (e.g., 17, 10, 5).
• Age greater than 65 (e.g., 66, 80, 100).
Test Cases:
1. Valid Test Case: Age = 30 (valid input).
2. Invalid Test Case: Age = 15 (invalid input).
3. Invalid Test Case: Age = 70 (invalid input).
Here, instead of testing every age between 18 and 65, we just select
representative values from the valid equivalence class (e.g., 30) and invalid
equivalence classes (e.g., 15, 70).
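Example 1 can be sketched in a few lines: one representative value per equivalence class is enough to exercise every class. The `is_eligible` function is a hypothetical implementation of the 18-to-65 rule.

```python
def is_eligible(age):
    """Hypothetical access check for the 18..65 (inclusive) rule above."""
    return 18 <= age <= 65

# One representative per equivalence class:
assert is_eligible(30) is True    # valid class: 18 <= age <= 65
assert is_eligible(15) is False   # invalid class: age < 18
assert is_eligible(70) is False   # invalid class: age > 65
```

Three test cases cover all three classes; testing every age from 18 to 65 individually would add effort without adding coverage, which is exactly the optimization this technique provides.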

Example 2: Input Validation for a Bank Account Balance


Scenario: A system accepts a bank account balance between $0 and $100,000,
inclusive.
• Valid Equivalence Class:
• Account balance between $0 and $100,000 (e.g., $500, $10,000,
$99,999).
• Invalid Equivalence Classes:
• Account balance less than $0 (e.g., -$1, -$50).
• Account balance greater than $100,000 (e.g., $200,000, $500,000).
Test Cases:
1. Valid Test Case: Balance = $10,000 (valid input).
2. Invalid Test Case: Balance = -$50 (invalid input).
3. Invalid Test Case: Balance = $200,000 (invalid input).

Advantages of Equivalence Class Testing:


1. Reduces the Number of Test Cases: By selecting only representative
values from each equivalence class, it reduces the number of test cases
needed while still achieving good coverage.
2. Efficiency: It helps to identify defects early while ensuring that the
system handles typical input values as expected.
3. Improves Test Coverage: By systematically covering valid and invalid
input ranges, it increases the likelihood of detecting errors.
4. Better Use of Resources: Reduces testing effort and time by focusing on
a smaller number of key test cases.

Disadvantages of Equivalence Class Testing:


1. Not Suitable for All Testing Types: This technique is best for functional
testing but may not be effective for systems with complex algorithms or
non-obvious input-output relationships.
2. May Miss Edge Cases: If boundaries or edge conditions are not carefully
considered, equivalence class testing might miss critical test cases, such
as those at the edges of valid input ranges.
The Decision Table Method is a systematic and structured approach to
software testing that helps in modeling complex business logic and decision-
making scenarios. It is especially useful when there are multiple conditions and
actions that need to be tested. A decision table provides a clear representation
of different combinations of inputs (conditions) and their corresponding
outputs (actions). It is an effective tool in software testing, particularly when
dealing with complex decision logic. It helps organize and visualize how
different input conditions lead to specific actions, ensuring that all possible
scenarios are tested. While it can become complex for systems with many
conditions, it is still one of the best ways to handle scenarios where multiple
combinations of conditions must be validated
Key Concepts of the Decision Table Method:
• Conditions: These are the inputs or factors that influence the system’s
decision.
• Actions: These are the outcomes or system responses based on the
combinations of conditions.
• Rules: Each row in the decision table represents a rule, which shows a
specific combination of conditions and the corresponding action.
• Decision Table: A table that helps to visualize how different conditions
interact and what actions should be taken in each case.
Structure of a Decision Table:
A decision table generally has four parts:
1. Conditions: The different factors or inputs that influence the outcome.
2. Actions: The outputs or results that the system should produce based on
the combinations of conditions.
3. Rules: Rows in the decision table, each of which represents a specific
combination of conditions and their corresponding actions.
4. Condition Alternatives: Different possible values for each condition
(e.g., True/False, Yes/No).
Components of a Decision Table:
• Condition Stubs: List of conditions that will affect the decision.
• Action Stubs: List of actions that the system can take.
• Condition Alternatives: Possible values for each condition (e.g.,
True/False, Yes/No).
• Action Entries: The expected outcome (action) corresponding to each
combination of condition alternatives.
How Decision Tables Work:
1. Identify the Conditions: Start by identifying the conditions (inputs) that
affect the system’s behavior.
2. Identify the Actions: Define the possible actions (outputs or results) that
the system may produce based on the conditions.
3. Construct the Table: Create the decision table by assigning all possible
combinations of conditions to specific actions.
4. Test the Rules: For each rule (row in the table), create test cases to verify
that the system produces the correct action for the given combination of
conditions.
Advantages of the Decision Table Method:
1. Clarity: Helps visualize complex decision logic in an easy-to-understand
table format.
2. Exhaustive Coverage: Ensures that all possible combinations of
conditions and actions are considered.
3. Efficiency: Can reduce redundant test cases by focusing on the most
important combinations of conditions.
4. Easy to Manage: The method works well for systems with complex
condition-action rules, making it easier to manage and track decision-
making scenarios.
Disadvantages of the Decision Table Method:
1. Complexity in Large Tables: The number of rules can grow
exponentially as the number of conditions increases. For example, if you
have 3 conditions with two possible values (True/False) each, the number
of rules is 2³ = 8.
2. Hard to Maintain: As conditions or actions change, the decision table
needs to be updated, which can be cumbersome in complex systems.

Example of a Decision Table:


Scenario: A discount system that gives a discount based on the customer’s
membership status and the total amount spent.
• Conditions:
1. Customer is a member? (Yes/No)
2. Total amount spent (Above $100 / Below $100)
• Actions:
1. Apply 10% discount
2. Apply 5% discount
3. No discount
Constructing the Decision Table:
Condition 1:   Condition 2:      Action 1:    Action 2:   Action 3:
Member?        Amount > $100     Apply 10%    Apply 5%    No Discount
Yes            Yes               Yes          No          No
Yes            No                No           Yes         No
No             Yes               No           No          Yes
No             No                No           No          Yes
Explanation:
1. Rule 1: If the customer is a member and the total amount spent is more
than $100, apply a 10% discount.
2. Rule 2: If the customer is a member and the total amount spent is less
than $100, apply a 5% discount.
3. Rule 3: If the customer is not a member and the total amount spent is
more than $100, apply no discount.
4. Rule 4: If the customer is not a member and the total amount spent is less
than $100, apply no discount.
Test Cases from the Decision Table:
1. Test Case 1: Customer is a member, and the total amount spent is $150.
• Expected Outcome: Apply 10% discount.
2. Test Case 2: Customer is a member, and the total amount spent is $80.
• Expected Outcome: Apply 5% discount.
3. Test Case 3: Customer is not a member, and the total amount spent is
$120.
• Expected Outcome: No discount.
4. Test Case 4: Customer is not a member, and the total amount spent is
$50.
• Expected Outcome: No discount.
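The four rules can be encoded directly, which turns each row of the decision table into an executable test case. `discount_rate` is an illustrative implementation of the table, not a prescribed one.

```python
def discount_rate(is_member, amount):
    """Return the discount fraction according to the decision table above."""
    if is_member and amount > 100:
        return 0.10   # Rule 1: member, spent more than $100
    if is_member:
        return 0.05   # Rule 2: member, spent $100 or less
    return 0.0        # Rules 3 and 4: non-members get no discount

# One test case per rule:
assert discount_rate(True, 150) == 0.10   # Test Case 1
assert discount_rate(True, 80) == 0.05    # Test Case 2
assert discount_rate(False, 120) == 0.0   # Test Case 3
assert discount_rate(False, 50) == 0.0    # Test Case 4
```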

Advantages of Using Decision Tables in Testing:


1. Comprehensive Coverage: Decision tables ensure that all combinations
of conditions and actions are considered, minimizing the risk of missing
edge cases or specific combinations.
2. Easy to Understand: The tabular format is easy to read, making it clear
how different conditions lead to different actions.
3. Reduces Redundancy: By organizing the conditions and actions
logically, decision tables help avoid redundant tests.

When to Use Decision Table Testing:


• Complex Business Logic: When there are multiple conditions that result
in different actions or decisions.
• Rule-Based Systems: In systems where behavior is governed by a set of
rules based on various conditions.
• Clear Action-Condition Mapping: When conditions and actions have a
clear mapping that can be represented in tabular form.
Structural Testing (also known as White-box Testing or Glass-box
Testing) is a type of software testing where the tester has knowledge of the
internal workings or structure of the software. This approach focuses on testing
the system's internal structure, logic, and design, as opposed to its external
functionality. It is an essential testing technique, especially when the focus is
on verifying the internal workings and logic of an application. It helps ensure
that the software behaves correctly and consistently by checking different
paths, conditions, and loops. While it requires access to the source code and
can be time-consuming, it is particularly useful for developers and testers to
improve code quality and catch issues early.

Key Features of Structural Testing:


• Internal Focus: Test cases are designed based on the internal structure
or logic of the application (code, control flow, data flow, etc.).
• Knowledge of the System: Testers have access to the source code,
algorithms, data structures, and control flow of the system being tested.
• Testing Logic and Code Paths: Structural testing ensures that all parts
of the code are executed and validated to uncover potential issues or bugs.
• Coverage: The primary objective is to achieve code coverage, which
involves testing different parts of the software to ensure that all possible
paths, branches, conditions, and statements in the code are executed.

Types of Structural Testing Techniques:


1. Statement Coverage:
• This technique ensures that each line of code (or statement) is
executed at least once during the testing process.
• Goal: Achieve 100% statement coverage, meaning every line in the
code is tested.
Example: If there is an if condition in the code, both the true and false
branches of the statement should be tested to ensure every statement is
executed.

2. Branch Coverage:
• Also known as Decision Coverage, this technique ensures that
every decision (i.e., every branch of conditional statements
like if, switch, for, while, etc.) is tested.
• Goal: Achieve 100% branch coverage by ensuring that every
possible outcome of each condition is tested.

Example: For an if-else statement, both the "if" branch and the "else" branch
should be tested.
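A minimal illustration: the two assertions below together achieve 100% branch coverage of `classify`, because each one drives execution down a different branch of the if-else. The function itself is a made-up example.

```python
def classify(n):
    """Toy function with one decision and two branches."""
    if n >= 0:
        return "non-negative"   # the "if" branch
    else:
        return "negative"       # the "else" branch

assert classify(5) == "non-negative"   # exercises the true branch
assert classify(-3) == "negative"      # exercises the false branch
```

Note that a single test (say, `classify(5)`) would already give 100% statement coverage of the first return, but only 50% branch coverage; this is why branch coverage is the stronger criterion.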
3. Path Coverage:
• Path coverage ensures that all possible paths through a given
program are tested. A path represents a unique sequence of events
or conditions in the program.
• Goal: Test all possible execution paths within the code.
Example: If a program has multiple if conditions, path coverage tests all
possible combinations of conditions, ensuring that every possible route through
the code is executed.
4. Condition Coverage:
• This technique involves testing each boolean condition within a
decision to ensure that each condition has been evaluated to both
true and false at least once.
• Goal: Achieve 100% condition coverage by testing each
condition's possible outcomes.
Example: For a condition if (a > 0 && b < 5), both a > 0 and b < 5 should be
tested to evaluate both their true and false states.
5. Multiple Condition Coverage:
• This technique ensures that all possible combinations of conditions
within a decision are tested. This is more thorough than simple
condition or branch coverage.
• Goal: Test all combinations of individual conditions within a
compound statement.
Example: For the condition if (a > 0 && b < 5 && c == 10), multiple condition
coverage tests all combinations of a > 0, b < 5, and c == 10 to evaluate the true
and false results.

6. Loop Coverage:
• This testing technique ensures that loops are tested for different
scenarios, including zero iterations, one iteration, and multiple
iterations.
• Goal: Ensure that loop-related logic is thoroughly tested.
Example: If a loop is written as for (int i = 0; i < n; i++), test cases should
ensure that n = 0, n = 1, and other values for n are covered to test the loop's
behavior.
Advantages of Structural Testing:
1. Thoroughness: Structural testing provides a detailed and thorough
evaluation of the internal workings of the application. It helps to identify
issues that might not be visible through black-box testing.
2. Code Quality: By ensuring that all paths, conditions, and statements are
tested, it helps improve code quality and maintainability.
3. Early Detection of Errors: Structural testing helps detect errors early in
the development process, especially in areas related to logic, algorithm,
and performance.
4. Comprehensive Coverage: When performed properly, structural testing
ensures that the entire codebase is covered and tested.
Disadvantages of Structural Testing:
1. Requires Code Knowledge: Unlike black-box testing, structural testing
requires testers to have a deep understanding of the code, which may not
always be possible, especially for external testers.
2. High Complexity: For large and complex systems, achieving complete
coverage (100%) can be difficult due to the sheer number of paths,
conditions, and loops in the code.
3. Time-Consuming: Testing all the paths and conditions in large
applications can be time-consuming and resource-intensive.
4. Not Always Applicable: In some cases, structural testing may be
difficult to apply, particularly in systems where the source code is
unavailable or when the internal logic is too complex to test exhaustively.
1. Statement Coverage
Statement coverage ensures that every statement in the code is executed at least
once during testing. The goal is to ensure that all lines of code are tested,
minimizing untested parts of the code.
Example:
def check_number(x):
    if x > 10:
        print("x is greater than 10")
    else:
        print("x is less than or equal to 10")
• Statement Coverage requires that both print() statements are executed
during testing.
• Test Cases:
• Test Case 1: x = 15 → The condition x > 10 is true, so "x is greater
than 10" is printed.
• Test Case 2: x = 5 → The condition x > 10 is false, so "x is less
than or equal to 10" is printed.
Here, both branches are tested, so Statement Coverage is satisfied.
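The two test cases above can be run as a small sketch. The function is adapted here to return its message instead of printing it, purely so the outcome can be asserted:

```python
def check_number(x):
    # Adapted from the example above: return the message so tests can assert it
    if x > 10:
        return "x is greater than 10"
    else:
        return "x is less than or equal to 10"

# Together these two cases execute every statement in the function:
assert check_number(15) == "x is greater than 10"          # Test Case 1
assert check_number(5) == "x is less than or equal to 10"  # Test Case 2
```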

2. Branch Coverage (Decision Coverage)


Branch coverage ensures that each decision (i.e., each if, else, for,
or while statement) takes all possible outcomes (true and false) at least once.
Example:
def check_sign(x):
    if x > 0:
        print("Positive")
    else:
        print("Non-positive")
• Branch Coverage requires that both branches of the if-else statement are
tested.
• Test Cases:
• Test Case 1: x = 10 → The condition x > 0 is true, so "Positive" is
printed.
• Test Case 2: x = -5 → The condition x > 0 is false, so "Non-positive" is printed.
This ensures both branches (x > 0 and x <= 0) are covered, so Branch
Coverage is satisfied.

3. Path Coverage
Path coverage ensures that all possible execution paths in the code are tested.
This is a more comprehensive technique compared to branch coverage because
it considers all possible combinations of branches.
Example:
def evaluate_numbers(a, b):
    if a > 10:
        if b < 5:
            print("Path 1")
        else:
            print("Path 2")
    else:
        print("Path 3")
• Path Coverage requires testing all possible paths through the code:
• Path 1: a > 10 and b < 5
• Path 2: a > 10 and b >= 5
• Path 3: a <= 10
• Test Cases:
• Test Case 1: a = 15, b = 3 → Path 1 is executed, printing "Path 1".
• Test Case 2: a = 15, b = 6 → Path 2 is executed, printing "Path 2".
• Test Case 3: a = 5, b = 3 → Path 3 is executed, printing "Path 3".
By testing these cases, Path Coverage ensures that every path is executed.
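The three path-coverage test cases can be expressed as assertions. In this sketch the function is rewritten to return the path label rather than print it, solely for testability:

```python
def evaluate_numbers(a, b):
    # Returns the label of the path taken (adapted from the example above)
    if a > 10:
        if b < 5:
            return "Path 1"
        else:
            return "Path 2"
    else:
        return "Path 3"

assert evaluate_numbers(15, 3) == "Path 1"  # a > 10 and b < 5
assert evaluate_numbers(15, 6) == "Path 2"  # a > 10 and b >= 5
assert evaluate_numbers(5, 3) == "Path 3"   # a <= 10
```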

4. Condition Coverage
Condition coverage ensures that each individual condition in a decision (like
in if statements) is tested for both true and false outcomes. It's more granular
than branch coverage because it tests each condition independently.
Example:
def check_conditions(a, b):
    if a > 0 and b < 10:
        print("Condition met")
    else:
        print("Condition not met")
• Condition Coverage requires testing the individual conditions (a >
0 and b < 10) both as true and false.
• Test Cases:
• Test Case 1: a = 5, b = 3 → a > 0 is true, b < 10 is true (both
conditions are true).
• Test Case 2: a = -5, b = 3 → a > 0 is false, b < 10 is true (first
condition is false).
• Test Case 3: a = 5, b = 15 → a > 0 is true, b < 10 is false (second
condition is false).
• Test Case 4: a = -5, b = 15 → a > 0 is false, b < 10 is false (both
conditions are false).
This ensures that both conditions are tested for both outcomes, so Condition
Coverage is satisfied.

5. Multiple Condition Coverage


Multiple condition coverage tests all possible combinations of conditions
within a decision. It's more exhaustive than condition coverage because it tests
all combinations of conditions, not just individual ones.
Example:
def check_multiple_conditions(a, b, c):
    if a > 0 and b < 10 and c == 5:
        print("All conditions met")
    else:
        print("Conditions not met")
• Multiple Condition Coverage ensures all combinations of the
conditions a > 0, b < 10, and c == 5 are tested.
• Test Cases:
• Test Case 1: a = 5, b = 3, c = 5 → All conditions are true.
• Test Case 2: a = -5, b = 3, c = 5 → First condition is false.
• Test Case 3: a = 5, b = 15, c = 5 → Second condition is false.
• Test Case 4: a = 5, b = 3, c = 6 → Third condition is false.
• Test Case 5: a = -5, b = 15, c = 6 → All conditions are false.
This tests all possible combinations of the conditions, ensuring comprehensive
coverage.
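A compact way to check that the five cases above exercise the intended combinations is to evaluate each atomic condition on its own. This is a sketch; the tuple-returning helper is illustrative and not part of the original example:

```python
def conditions(a, b, c):
    # Truth value of each atomic condition in 'a > 0 and b < 10 and c == 5'
    return (a > 0, b < 10, c == 5)

assert conditions(5, 3, 5) == (True, True, True)     # Test Case 1
assert conditions(-5, 3, 5) == (False, True, True)   # Test Case 2
assert conditions(5, 15, 5) == (True, False, True)   # Test Case 3
assert conditions(5, 3, 6) == (True, True, False)    # Test Case 4
assert conditions(-5, 15, 6) == (False, False, False)  # Test Case 5
```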

6. Loop Coverage
Loop coverage ensures that loops are tested in various scenarios, including
cases where the loop runs zero times, once, and multiple times.
Example:
def check_numbers(n):
    for i in range(n):
        print(i)
• Loop Coverage ensures the loop is tested for different values of n:
• Test Case 1: n = 0 → The loop runs zero times.
• Test Case 2: n = 1 → The loop runs once.
• Test Case 3: n = 5 → The loop runs five times.
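The three loop-coverage cases can be checked directly. This sketch uses a list-returning variant of the loop so the number of iterations is assertable:

```python
def collect_numbers(n):
    # List-returning variant of the loop example above
    return [i for i in range(n)]

assert collect_numbers(0) == []                # loop runs zero times
assert collect_numbers(1) == [0]               # loop runs once
assert collect_numbers(5) == [0, 1, 2, 3, 4]   # loop runs five times
```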
Path Testing is a structural testing technique that focuses on ensuring that all
possible paths through a program's code are tested. It is a form of Whitebox
Testing where the goal is to verify the behavior of the software by exercising
different execution paths of the program. It ensures all logical paths in a
program are tested. While it provides thorough code coverage and is
particularly useful for detecting logic errors and edge cases, it can be complex
and resource-intensive, especially for large programs with many conditions
and loops. Path testing works best in combination with other testing techniques
like statement coverage, branch coverage, and condition coverage to ensure
comprehensive testing of both the internal logic and functionality of the
software.

Objective of Path Testing


The main objective of path testing is to identify and test all possible paths of
execution in the software, ensuring that each path produces the expected
behavior and uncovering any potential defects. By doing so, it provides
thorough code coverage and ensures that the program’s internal logic functions
correctly for all scenarios.
Key Concepts in Path Testing:
1. Path: A unique sequence of statements or code segments from the start
to the end of a program. It represents a route that the program can take
depending on the conditions in the code.
2. Path Coverage: The technique of ensuring that all possible execution
paths of the program are tested at least once.
3. Control Flow Graph: A graphical representation of the flow of control
in a program, where nodes represent statements or groups of statements,
and edges represent control flow between those statements.
4. Independent Paths: These are paths that traverse through different
logical paths of a program. They typically involve different decisions or
branches and should be tested independently.
How Path Testing Works:
1. Identify All Paths: First, you need to identify all the possible paths
through the code. This can be done through analyzing the program's flow
and decision points (such as if, else, for, while, etc.).
2. Test Each Path: For each identified path, you design a test case that will
exercise that path during execution. The test case should ensure that the
program behaves as expected for that particular path.
3. Control Flow Graph: Path testing often uses a Control Flow Graph
(CFG) to visualize and understand the different paths. Each edge and
node of the graph represents a specific flow in the code.
4. Minimize Redundancy: While testing all paths is important, redundant
paths (paths that are essentially the same) can be eliminated to focus on
unique paths. This helps optimize the testing process.
Path Testing Techniques:
1. Basic Path Testing:
• Identify the basic paths through the program using the decision
points (like if statements, loops, etc.).
• Test the program through each of these paths to ensure correctness.
• For example, in a program with a simple decision-making structure,
you would test both the true and false outcomes of an if condition.
2. Extended Path Testing:
• Involves more complex scenarios where you test combinations of
different paths and conditions in the program.
• This may include paths that test the combinations of conditions
inside loops or nested if statements.
3. All-Paths Testing:
• This technique involves testing every possible path in the code,
including all loops, branches, and decisions.
• Although exhaustive, it might not always be practical for large or
complex systems due to the exponential growth in the number of
paths with every added condition or loop.
4. Loop Testing:
• A specific form of path testing where the focus is on ensuring that
loops in the program are tested for a variety of conditions, including
cases where the loop runs zero times, one time, and multiple times.

Example of Path Testing


Let’s take a simple example of a program with an if-else condition and a loop:
def check_numbers(a, b):
    if a > 10:
        print("a is greater than 10")
    else:
        print("a is less than or equal to 10")
    for i in range(b):
        print(i)
Step 1: Identify the Paths
• The program has two main parts:
1. The if-else condition to check whether a > 10 or not.
2. A loop that runs from 0 to b-1.
For the first part (the if-else condition), there are two paths:
• Path 1: When a > 10 (true branch).
• Path 2: When a <= 10 (false branch).
For the second part (the loop), there are different paths depending on the value
of b:
• If b > 0, the loop will run at least once.
• If b == 0, the loop will not run at all.
Step 2: Control Flow Graph
We can represent the control flow graph for the program as follows:
Start → [Condition a > 10]
    (True)  → [Path 1] → [Loop b times]
    (False) → [Path 2] → [Loop b times]
• Path 1: a > 10 → Loop executes b times (if b > 0).
• Path 2: a <= 10 → Loop executes b times (if b > 0).
• Path 3: a > 10 → Loop does not execute (if b = 0).
• Path 4: a <= 10 → Loop does not execute (if b = 0).
Step 3: Test Cases for Each Path
1. Test Case 1: a = 15, b = 5
• Expected Output: "a is greater than 10" and loop prints numbers 0
to 4.
• Path Covered: Path 1 → Loop executes 5 times.
2. Test Case 2: a = 5, b = 3
• Expected Output: "a is less than or equal to 10" and loop prints
numbers 0 to 2.
• Path Covered: Path 2 → Loop executes 3 times.
3. Test Case 3: a = 20, b = 0
• Expected Output: "a is greater than 10" and no numbers are
printed (loop doesn’t run).
• Path Covered: Path 1 → Loop does not execute.
4. Test Case 4: a = 5, b = 0
• Expected Output: "a is less than or equal to 10" and no numbers
are printed (loop doesn’t run).
• Path Covered: Path 2 → Loop does not execute.
Step 4: Path Coverage
By testing the above scenarios, we ensure that:
• Both branches of the if statement (a > 10 and a <= 10) are tested.
• The loop is tested for both scenarios: when it runs and when it does not
run (i.e., when b = 0).
Thus, we achieve Path Coverage, meaning all possible execution paths have
been covered by the test cases.
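The four test cases above can be run as one sketch. Output is collected in a list instead of printed, so each path's behaviour can be asserted:

```python
def check_numbers(a, b):
    # Collect output lines instead of printing (adapted for testability)
    lines = []
    if a > 10:
        lines.append("a is greater than 10")
    else:
        lines.append("a is less than or equal to 10")
    for i in range(b):
        lines.append(str(i))
    return lines

assert check_numbers(15, 5)[-1] == "4"                            # Path 1
assert check_numbers(5, 3)[0] == "a is less than or equal to 10"  # Path 2
assert check_numbers(20, 0) == ["a is greater than 10"]           # Path 3
assert check_numbers(5, 0) == ["a is less than or equal to 10"]   # Path 4
```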

Advantages of Path Testing:


1. Thorough Testing: Path testing ensures that the internal logic and all
possible paths in the software are tested, providing a comprehensive
validation of the code's functionality.
2. Early Detection of Bugs: By focusing on the execution paths, path
testing helps in detecting logical errors or bugs in the flow of control early
in the development process.
3. Improved Code Quality: Path testing ensures that all decisions, loops,
and code branches are exercised, leading to more reliable and robust
software.
Disadvantages of Path Testing:
1. High Complexity: For programs with many conditions, loops, and
branches, the number of paths can grow exponentially, making path
testing computationally expensive and time-consuming.
2. Difficult for Large Systems: In large systems, it might be impractical to
test every single path due to the large number of possible paths, making
exhaustive path testing challenging.
3. Redundant Tests: Some paths might be redundant or effectively
equivalent, leading to unnecessary tests if not properly optimized.

DD-Paths in Software Testing


DD-Paths, also known as Definition-Use Paths (DU-Paths), are a concept used in white-
box testing to verify that variables in a program are correctly defined
(assigned values) and used (read) throughout its execution. The technique
tracks the flow of each variable from its definition (where it is assigned a
value) to its uses (where it is read or accessed) to ensure that the program
behaves as expected. By confirming that variables are properly defined before
they are used, it helps detect logical errors and improves the overall quality
and reliability of the software. For large systems, however, tracking every
definition-use chain can become challenging and requires careful planning and
optimization of test cases.

Objective of DD-Paths Testing


The main goal of DD-Paths Testing is to validate the correctness of variable
definitions and their subsequent usage in a program. This helps to ensure that
variables are initialized properly before being used, which can prevent issues
like accessing uninitialized variables or logical errors due to improper
assignments.
Key Concepts in DD-Paths Testing:
1. Definition (Def): A definition of a variable occurs when it is assigned a
value.
2. Use (Use): A use of a variable happens when its value is accessed or read
(i.e., used in an expression, comparison, or any other operation).
3. DD-Path: A DD-Path is a path in the code that starts at the definition
(assignment) of a variable and traces through the program to its
subsequent uses. This ensures that the variable is correctly defined and
used.
4. Definition-Use Chain: The sequence of events starting from the
definition of a variable to its first use, and subsequent uses, forms
a definition-use chain. Testing this chain helps identify situations where
a variable is used before it is properly initialized.
Steps for DD-Paths Testing:
1. Identify Definitions: Determine where a variable is defined (i.e., where
it is assigned a value).
2. Identify Uses: Identify where the variable is used after it is defined. This
could be in an expression, as part of a conditional statement, or passed as
an argument to a function.
3. Create DD-Paths: Form DD-Paths by linking the definitions of variables
to their uses. Each DD-Path will represent a logical flow from the
definition of a variable to its usage.
4. Design Test Cases: For each DD-Path, create a test case that ensures the
program behaves correctly, considering the definition and the use of the
variable along the path.
5. Execute and Verify: Execute the test cases and verify that the variable
is correctly defined before being used and that the program works as
expected for each DD-Path.
Example of DD-Paths Testing
Let’s consider a simple program snippet to explain DD-Paths:
def calculate_total_price(price, quantity):
    total = price * quantity         # Definition of 'total'
    if total > 100:                  # Use of 'total'
        print("Total is large.")
    else:
        print("Total is small.")
    discount = total * 0.1           # Definition of 'discount'
    final_price = total - discount   # Use of 'total' and 'discount'
    return final_price
Step 1: Identify Definitions and Uses
• Definition 1: total = price * quantity (variable total is assigned a value).
• Use 1: if total > 100: (variable total is used in the condition).
• Definition 2: discount = total * 0.1 (variable discount is assigned a
value).
• Use 2: final_price = total - discount (both total and discount are used).
Step 2: Create DD-Paths
The DD-Paths for this program would be:
1. DD-Path 1: total = price * quantity → if total > 100:
• This path tests whether the total value is correctly assigned and
used in the condition.
2. DD-Path 2: total = price * quantity → final_price = total - discount
• This path ensures that after total is defined, it is correctly used in
the calculation of final_price after discount is calculated.
Step 3: Design Test Cases
For each of these DD-Paths, test cases can be created to verify that the
definitions and uses of the variables occur correctly:
1. Test Case 1 (DD-Path 1):
• Input: price = 50, quantity = 3
• Expected Output: Since total = 50 * 3 = 150, the condition if total
> 100 will be true, and the program should print "Total is large."
2. Test Case 2 (DD-Path 2):
• Input: price = 50, quantity = 3
• Expected Output: After calculating total = 150, discount = total *
0.1 = 15, so final_price = 150 - 15 = 135. The program should
return 135.
Step 4: Execute and Verify
Run the above test cases and ensure that each path behaves as expected:
• DD-Path 1: Ensure that total is defined before it’s used in the condition.
• DD-Path 2: Ensure that the variables total and discount are correctly
used in the final calculation.
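Both DD-path test cases can be executed as assertions. The function from the example is repeated here so the sketch is self-contained:

```python
def calculate_total_price(price, quantity):
    total = price * quantity         # Definition of 'total'
    if total > 100:                  # Use of 'total' (DD-Path 1)
        print("Total is large.")
    else:
        print("Total is small.")
    discount = total * 0.1           # Definition of 'discount'
    final_price = total - discount   # Use of 'total' and 'discount' (DD-Path 2)
    return final_price

# price = 50, quantity = 3: total = 150, discount = 15, final_price = 135
assert calculate_total_price(50, 3) == 135.0
```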
Advantages of DD-Paths Testing
1. Prevents Logical Errors: DD-Paths testing helps identify situations
where a variable is used before it is assigned a valid value, preventing
issues such as using uninitialized variables.
2. Improved Code Quality: By verifying the flow of variables from
definition to use, this technique ensures that the code works as intended
and helps in detecting logical bugs early.
3. Comprehensive Coverage: DD-Paths testing ensures that all critical
definition-use relationships are tested, leading to more thorough code
coverage and better program reliability.
Disadvantages of DD-Paths Testing
1. Complexity: For large and complex programs with many variables,
identifying and tracking all the definition-use chains can become
cumbersome.
2. Exponential Growth: In programs with many variables and complex
control structures, the number of DD-Paths can grow exponentially,
making it difficult to exhaustively test all possible paths.
3. Requires Access to Source Code: Like other white-box testing
techniques, DD-Paths testing requires access to the program's source
code, which might not always be possible in some environments.
Test Coverage is a critical metric in software testing that measures the extent
to which the source code of an application is tested. It helps evaluate the
effectiveness of the testing process and ensures that the most important parts
of the software are properly tested. Test coverage metrics quantify how much
of the application's code, requirements, or functionalities have been covered
by the executed tests.
Why Test Coverage is Important
Test coverage is important for several reasons:
• Identify Gaps: It helps identify areas of the code that have not been
tested, which may lead to undetected bugs.
• Assess Testing Effectiveness: It provides insight into how thoroughly
the software has been tested, indicating the overall quality of the test
suite.
• Improve Test Suite Quality: By identifying untested areas, test
coverage can help guide the development of additional test cases, leading
to better test quality and more reliable software.
Types of Test Coverage Metrics
There are several different test coverage metrics that measure different aspects
of the software, including code coverage, requirements coverage,
and branch coverage. Let's explore the most commonly used ones:

1. Code Coverage (Statement Coverage)


Code coverage measures the percentage of the codebase (lines of code) that
has been executed by the tests.
Formula:
Code Coverage (%) = (Number of lines of code executed / Total number of lines of code) × 100
Example:
Consider a simple function:
def add(a, b):
    result = a + b    # Line 1
    print(result)     # Line 2
    return result     # Line 3
If you have a test case that calls this function:
add(3, 4)
In this case:
• Executed lines: Line 1, Line 2, Line 3.
• Total lines: 3. Thus, Code Coverage = (3 / 3) × 100 = 100%.
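Statement execution can be observed with Python's built-in sys.settrace hook. This is a minimal sketch of the idea; real projects would typically use a dedicated coverage tool instead:

```python
import sys

executed = set()  # line numbers inside add() that actually ran

def tracer(frame, event, arg):
    # Record each 'line' event that occurs inside the add() function
    if event == "line" and frame.f_code.co_name == "add":
        executed.add(frame.f_lineno)
    return tracer

def add(a, b):
    result = a + b
    return result

sys.settrace(tracer)
add(3, 4)
sys.settrace(None)

# Both executable lines of add() were hit -> 100% statement coverage
assert len(executed) == 2
```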
2. Branch Coverage (Decision Coverage)
Branch coverage ensures that every possible decision (i.e.,
every if, else, for, while, etc.) is tested for both true and false outcomes.
Formula:
Branch Coverage (%) = (Number of branches executed / Total number of branches) × 100
Example:
Consider the following code:
def check_number(a):
    if a > 0:
        print("Positive")
    else:
        print("Non-positive")
If you have the following test cases:
• Test Case 1: a = 5 (True branch)
• Test Case 2: a = -1 (False branch)
Here, both branches (true and false) are covered. Therefore, Branch
Coverage = 100%.
If only Test Case 1 was executed, the Branch Coverage would be 50%, as
only one of the two possible branches (true or false) would be executed.

3. Path Coverage
Path coverage measures the percentage of all possible paths through the
program that have been tested. Path coverage ensures that all potential
execution sequences are tested.
Formula:
Path Coverage (%) = (Number of paths tested / Total number of paths) × 100
Example:
Consider a simple function with an if-else condition:
def check_number(a, b):
    if a > 0:
        if b > 0:
            print("Both positive")
        else:
            print("a positive, b non-positive")
    else:
        print("a non-positive")
There are three possible execution paths (when a <= 0, the inner condition b > 0
is never evaluated):
1. a > 0 and b > 0
2. a > 0 and b <= 0
3. a <= 0
To achieve 100% path coverage, all three paths must be exercised by test cases.

4. Condition Coverage
Condition coverage measures whether each individual condition in a decision
(like if, else, or while) has been evaluated to both true and false at least once.
Formula:
Condition Coverage (%) = (Number of conditions evaluated / Total number of conditions) × 100
Example:
Consider the following code:
def check_number(a, b):
    if a > 0 and b < 10:
        print("Valid")
    else:
        print("Invalid")
There are two conditions: a > 0 and b < 10. To achieve 100% condition
coverage, both conditions must be tested for both true and false:
• Test Case 1: a = 5, b = 3 (both conditions true)
• Test Case 2: a = -5, b = 3 (first condition false)
• Test Case 3: a = 5, b = 20 (second condition false)
By running these tests, you ensure that both conditions have been evaluated for
both true and false, achieving 100% Condition Coverage.
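These three test cases can be verified with a small helper that evaluates each condition on its own. The helper is illustrative and not part of the original example:

```python
def outcomes(a, b):
    # Truth value of each atomic condition in 'a > 0 and b < 10'
    return (a > 0, b < 10)

results = [outcomes(5, 3), outcomes(-5, 3), outcomes(5, 20)]
assert results == [(True, True), (False, True), (True, False)]

# Across the test set, each condition takes both True and False:
first = {r[0] for r in results}
second = {r[1] for r in results}
assert first == {True, False} and second == {True, False}
```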

5. Function Coverage
Function coverage measures whether each function in the program has been
called during the test. It ensures that all functions in the code are tested at least
once.
Formula:
Function Coverage (%) = (Number of functions called / Total number of functions) × 100
Example:
For a program with the following functions:
def add(a, b):
    return a + b

def subtract(a, b):
    return a - b
If the test suite contains a test that calls both add() and subtract(), the Function
Coverage will be 100%.

6. Loop Coverage
Loop coverage ensures that every loop in the program is tested for multiple
conditions:
• When the loop runs 0 times (e.g., the loop condition is initially false).
• When the loop runs 1 time.
• When the loop runs more than once.
Formula:
Loop Coverage (%) = (Number of loop conditions tested / Total number of loop conditions) × 100
Example:
For a loop like:
for i in range(n):
    print(i)
• Test Case 1: range(0) → Loop does not execute (0 times).
• Test Case 2: range(5) → Loop executes 5 times.
For full loop coverage, you must test both the case where the loop executes 0
times and where it executes multiple times.

7. Requirements Coverage
Requirements coverage measures whether all specified requirements of the
software have been covered by test cases. This is a form of black-box
testing where the focus is on ensuring that the system meets its requirements
as specified in the documentation.
Formula:
Requirements Coverage (%) = (Number of requirements tested / Total number of requirements) × 100
Example:
If a system has 10 requirements and 8 of those are tested by the current test
suite, the Requirements Coverage would be (8 / 10) × 100 = 80%.

8. Mutation Coverage
Mutation coverage is a more advanced metric that involves introducing small
changes (mutations) to the code and ensuring that the test suite detects these
changes. The goal is to check whether the tests are effective in detecting subtle
changes or errors.
Formula:
Mutation Coverage (%) = (Number of mutations killed / Total number of mutations) × 100
Example:
If you introduce 10 mutations into the code and the test suite detects 8 of them,
the Mutation Coverage would be (8 / 10) × 100 = 80%.
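The idea can be illustrated with a single hand-made mutant. This is only a sketch; real mutation-testing tools generate mutants automatically:

```python
def add(a, b):
    return a + b

def add_mutant(a, b):
    return a - b  # mutant: '+' operator replaced with '-'

def suite_passes(fn):
    # A tiny "test suite" for the addition function
    return fn(2, 3) == 5 and fn(0, 0) == 0

assert suite_passes(add)             # original implementation passes
assert not suite_passes(add_mutant)  # mutant is detected ("killed")
```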
Basis Path Testing is a white-box testing technique that focuses on ensuring
that all possible paths in a program are covered through a minimal set of test
cases. It is based on Control Flow Graphs (CFG) and involves identifying
independent paths that cover all possible decision points, ensuring the entire
codebase has been tested for potential execution paths.
The goal of basis path testing is to provide maximum test coverage with the
least number of test cases, by selecting paths that independently cover all
unique paths through the program's flow.
Key Concepts in Basis Path Testing
1. Control Flow Graph (CFG):
A CFG is a graphical representation of a program’s control structure,
where:
• Nodes represent program statements or blocks of code.
• Edges represent possible control flow between those statements.
The basis path testing technique relies on building a CFG to analyze all paths
through the program.
2. Independent Paths:
Independent paths are paths that introduce at least one new decision
point or edge. These paths are crucial for ensuring complete coverage,
as they represent different execution scenarios that cannot be replicated
by any other path in the program.
3. Cyclomatic Complexity:
Cyclomatic complexity is a metric used in basis path testing to measure
the number of linearly independent paths through a program. It helps
determine the minimum number of test cases needed for complete path
coverage.
Cyclomatic Complexity (V) can be calculated using the formula:
V(G) = E − N + 2P
where:
• V(G) = Cyclomatic Complexity
• E = Number of edges in the control flow graph
• N = Number of nodes in the control flow graph
• P = Number of connected components (usually 1 for a single
program)
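The formula is simple enough to compute directly. A minimal sketch (the helper name is our own, not a standard API):

```python
def cyclomatic_complexity(edges, nodes, components=1):
    # V(G) = E - N + 2P
    return edges - nodes + 2 * components

# A CFG with 7 edges, 6 nodes, and one connected component:
assert cyclomatic_complexity(7, 6) == 3
```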
Steps in Basis Path Testing
1. Create a Control Flow Graph (CFG):
• Convert the program into a control flow graph by identifying nodes
(statements or blocks of code) and edges (possible control flows
between those statements).
2. Calculate Cyclomatic Complexity:
• Use the formula to calculate the cyclomatic complexity. This will
tell you the number of independent paths that need to be tested.
3. Identify Independent Paths:
• Identify the set of independent paths through the CFG. These are
the paths that need to be tested to ensure maximum coverage.
Independent paths can be selected based on decision points (such
as if statements, loops, and switches).
4. Design Test Cases:
• For each independent path, create a corresponding test case that
exercises that specific path. These test cases will ensure all paths in
the program are covered.
5. Execute the Test Cases:
• Run the designed test cases and observe whether each path behaves
as expected. This helps to ensure the program works under all
possible execution scenarios.
Example of Basis Path Testing
Let's consider a simple function in Python:
def check_number(a, b):
    if a > 0:
        if b > 0:
            return "Both positive"
        else:
            return "a positive, b non-positive"
    else:
        return "a non-positive"
Step 1: Create the Control Flow Graph (CFG)
• Nodes:
• Node 1: a > 0
• Node 2: b > 0
• Node 3: return "Both positive"
• Node 4: return "a positive, b non-positive"
• Node 5: return "a non-positive"
• Node 6: program exit (the three return statements converge here)
• Edges:
• Edge 1: From Node 1 to Node 2 (if a > 0)
• Edge 2: From Node 1 to Node 5 (if a <= 0)
• Edge 3: From Node 2 to Node 3 (if b > 0)
• Edge 4: From Node 2 to Node 4 (if b <= 0)
• Edges 5–7: From Nodes 3, 4, and 5 to Node 6 (exit)
Step 2: Calculate Cyclomatic Complexity
Using the formula V(G) = E − N + 2P, where:
• E = 7 (edges),
• N = 6 (nodes),
• P = 1 (since it's a single program).
V(G) = 7 − 6 + 2(1) = 3
This means there are 3 linearly independent paths in the program, so a minimal
set of 3 test cases is needed for complete path coverage.
Step 3: Identify Independent Paths
The independent paths in this case would be:
• Path 1: a <= 0 (goes to Node 5)
• Path 2: a > 0 and b > 0 (goes to Node 3)
• Path 3: a > 0 and b <= 0 (goes to Node 4)
These three paths cover all decision points and unique execution flows in the
program.
Step 4: Design Test Cases
For each independent path:
• Test Case 1: check_number(-5, 3) → Path 1 (a <= 0)
• Test Case 2: check_number(5, 3) → Path 2 (a > 0 and b > 0)
• Test Case 3: check_number(5, -3) → Path 3 (a > 0 and b <= 0)
Step 5: Execute the Test Cases
• Test Case 1: check_number(-5, 3) → Expected output: "a non-positive"
• Test Case 2: check_number(5, 3) → Expected output: "Both positive"
• Test Case 3: check_number(5, -3) → Expected output: "a positive, b
non-positive"
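The three test cases map one-to-one onto assertions. The function is repeated here so the sketch is self-contained:

```python
def check_number(a, b):
    if a > 0:
        if b > 0:
            return "Both positive"
        else:
            return "a positive, b non-positive"
    else:
        return "a non-positive"

# One test case per independent path:
assert check_number(-5, 3) == "a non-positive"              # Path 1
assert check_number(5, 3) == "Both positive"                # Path 2
assert check_number(5, -3) == "a positive, b non-positive"  # Path 3
```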

Advantages of Basis Path Testing


1. Thorough Testing: Basis path testing ensures all independent paths are
covered, which maximizes test coverage.
2. Improved Test Quality: By identifying and testing different decision
points, it ensures better defect detection.
3. Reduces Redundancy: By focusing on independent paths, it minimizes
the number of test cases needed to achieve good coverage.
4. Efficiency: Helps in identifying critical paths through the program,
making it easier to prioritize testing efforts.
Disadvantages of Basis Path Testing
1. Complexity in Large Programs: For large and complex applications
with many branches and loops, building a control flow graph and
calculating cyclomatic complexity may become challenging.
2. Does Not Ensure Full Functional Coverage: It mainly focuses on the
control flow and may not guarantee complete functional testing, as some
logic or edge cases may be missed.

Dataflow Testing

Dataflow Testing is a white-box testing technique used to identify and test the
flow of data within a program. It focuses on the points at which variables
receive values (definitions) and where those values are used (uses). The
primary aim of dataflow testing is to ensure that the program correctly handles
the flow of data between variables and functions, with particular attention
given to variables' definitions and their uses.
Dataflow testing involves:
1. Definition of a variable: When a variable is assigned a value.
2. Use of a variable: When the value of a variable is read or used.
The technique examines the paths of data through the program, helping to
identify potential problems like undefined variables, incorrect variable use,
or data corruption. It uses the concept of Control Flow Graphs (CFG) to
track data flow between variable definitions and uses.

Key Concepts in Dataflow Testing


1. Definition (Def): A point where a variable is assigned a value.
2. Use (Use): A point where the value of a variable is accessed or used in
an expression.
• P-use (Predicate Use): A use that appears in a predicate
(e.g., if, while).
• C-use (Computational Use): A use that appears in a calculation or
computation.
3. Dataflow Anomaly: Occurs when there is an inconsistency or error in
how variables are defined and used, such as:
• Undefined variable use: Using a variable before it is defined.
• Variable not used after definition: A variable is defined but never
used in the program.
• Dead code: Code that defines or uses variables but never gets
executed.
4. Control Flow Graph (CFG): Used to represent the program’s flow,
where:
• Nodes represent statements (or blocks of code).
• Edges represent the flow of control between those statements.
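These concepts can be seen together in a small hypothetical function:

```python
def scale(x):
    y = 2 * x        # Def of y
    if y > 10:       # P-use of y (predicate)
        z = y + 1    # C-use of y; Def of z
        return z     # C-use of z
    return 0         # on this path, z is never defined, and y's only
                     # use is the predicate above
```

Here `scale(6)` computes y = 12, passes the predicate, and returns 13; `scale(2)` computes y = 4 and returns 0 without ever defining z.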

Types of Dataflow Testing


1. Definition-Use Testing (DU Testing):
• Ensures that for each variable, all definitions are tested with their
corresponding uses. This tests whether a defined variable is used
correctly later in the program.
• Example: For a variable x, ensure that after it is assigned a value,
it is used in an expression.
2. All-Defs Testing:
• Focuses on ensuring that all variable definitions (assignments) are
executed during testing.
3. All-Uses Testing:
• Ensures that every use of a variable is tested, which checks whether
variables are being accessed correctly.
4. Definition-Definition (DD) Paths:
• This is a specialized type of testing where paths between two
successive definitions of a variable are tested to ensure correctness.
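The difference between these criteria shows up even in a tiny hypothetical function:

```python
def classify(n):
    sign = "non-negative"   # Def1 of sign
    if n < 0:               # P-use of n
        sign = "negative"   # Def2 of sign
    return sign             # C-use of sign

# All-defs coverage requires a test reaching a use of each definition:
assert classify(5) == "non-negative"   # exercises Def1 -> use
assert classify(-3) == "negative"      # exercises Def2 -> use
```

A single test input would execute the function but leave one of the two definitions of `sign` untested, which is exactly what all-defs testing guards against.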
Steps in Dataflow Testing
1. Identify the Definitions and Uses:
• Determine where variables are defined and where they are used in
the program.
2. Create the Control Flow Graph (CFG):
• Build a graph representing the program’s control flow to visualize
the relationships between the definitions and uses of variables.
3. Identify the Dataflow Paths:
• Identify paths through the program where data flows from
definitions to uses.
4. Design Test Cases:
• Create test cases that will exercise all the paths from variable
definitions to uses, ensuring all possible dataflow anomalies are
covered.
5. Execute the Test Cases:
• Run the test cases and check for errors related to variable use, such
as accessing an undefined variable or using a variable in an
incorrect manner.

Example of Dataflow Testing


Let's consider the following simple function in Python:
def calculate_area(length, width):
    area = length * width            # Def1: area is defined
    if length > 0 and width > 0:
        result = area                # Use1: area is used (Def2: result)
        return result                # Use2: result is used
    else:
        return -1
Step 1: Identify Definitions and Uses
• Definition of area: At the statement area = length * width (Def1).
• Use of area: At the statement result = area (Use1).
• Definition of result: At the statement result = area (Def2).
• Use of result: In the return statement return result (Use2).
Step 2: Create the Control Flow Graph (CFG)
The CFG for this program might look like this:
[Start]
   |
   v
[Def1: area = length * width]
   |
   v
[Decision: length > 0 and width > 0?]
   | True                     | False
   v                          v
[Def2: result = area]     [Return -1]
   |
   v
[Return result]
Step 3: Identify the Dataflow Paths
• Path 1: From Def1 to Use1 (through the true branch of the decision).
• This path represents the dataflow from the definition of area to its
first use in the calculation of result.
• Path 2: If the condition fails, area is defined but never used before the
program returns -1. There is no def-use pair for area on this path, but the
path should still be executed to confirm the early return.
Step 4: Design Test Cases
To test the dataflow, we design test cases for the following scenarios:
1. Test Case 1 (Valid Input):
• Input: calculate_area(5, 3)
• Expected Output: 15
• Path Executed: Path 1 (Definition of area to the use of area)
2. Test Case 2 (Invalid Input):
• Input: calculate_area(0, 3)
• Expected Output: -1
• Path Executed: No path uses the variable area for computation
after the definition in this scenario, as the program returns early.
Step 5: Execute the Test Cases
1. Test Case 1:
calculate_area(5, 3) → Output: 15 (This tests the path
from Def1 to Use1).
2. Test Case 2:
calculate_area(0, 3) → Output: -1 (This skips using area after its
definition, as the condition fails).
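Both test cases can be run directly against the function (restated here so the snippet is self-contained):

```python
def calculate_area(length, width):
    area = length * width            # Def1: area is defined
    if length > 0 and width > 0:
        result = area                # Use1 of area; Def of result
        return result
    else:
        return -1

assert calculate_area(5, 3) == 15   # Test Case 1: Def1 -> Use1 path
assert calculate_area(0, 3) == -1   # Test Case 2: area defined but unused
```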

Advantages of Dataflow Testing


1. Improves Variable Integrity: Dataflow testing ensures that variables
are defined and used correctly, avoiding issues like undefined variables
or incorrect variable use.
2. Helps Identify Dataflow Anomalies: It can reveal anomalies such as the
use of variables before they are defined or variables that are defined but
never used.
3. Ensures Correct Variable Usage: By focusing on variable definitions
and uses, it helps ensure that the program’s logic handles data
consistently and correctly.
Disadvantages of Dataflow Testing
1. Complex for Large Programs: In large programs with many variables,
tracking data flow can become complex and cumbersome.
2. Does Not Test Functionality: Dataflow testing is mainly focused on the
flow of data, not on the overall functionality of the program. It may not
catch issues related to business logic or incorrect algorithmic behavior.
3. Requires Detailed Knowledge: It requires an in-depth understanding of
the code and control flow, which can be challenging for testers unfamiliar
with the implementation.
Definition/Use Testing

Definition/Use Testing, also known as Definition-Use Testing, is a white-box
testing technique that focuses on ensuring that variables are correctly defined
and used throughout the program. It checks the flow of data between variable
definitions (where a variable is assigned a value) and uses (where the value of
a variable is accessed or used). This testing technique aims to ensure that for
each variable definition, there is a corresponding use in the program, and that
the data is used in a way that is consistent with its definition. It helps to identify
potential problems, such as undefined variables, unused variables,
or incorrect data usage.

Key Concepts of Definition/Use Testing


1. Definition (Def):
A point in the code where a variable is assigned a value (e.g., x = 5).
This is where the variable is defined or initialized.
2. Use (Use):
A point in the code where a variable's value is accessed or used in
expressions, comparisons, or calculations. There are two types of uses:
• P-use (Predicate Use): The use of a variable in a decision-making
context, such as an if or while condition (e.g., if x > 0).
• C-use (Computational Use): The use of a variable in a
computation or arithmetic operation (e.g., y = x + 5).
3. Path:
A path in the program between a variable’s definition and its use.
Dataflow paths are the core concept of definition/use testing, and they
are essential to ensure that variables are correctly initialized before use.
4. Anomalies:
• Undefined Variable Use: When a variable is used before it is
defined.
• Redundant Definition: When a variable is defined but never used.
• Dead Code: When a definition or use does not contribute to the
program’s functionality.

Steps in Definition/Use Testing


1. Identify Variables:
Identify all the variables in the program that are defined and used.
2. Create the Control Flow Graph (CFG):
Construct a Control Flow Graph to map the program's flow and to
determine the relationships between definitions and uses of variables.
3. Identify Definition-Use Pairs:
For each variable, identify the definitions (where the variable is
assigned a value) and the uses (where the variable's value is accessed).
Each pair represents a potential definition-use path.
4. Design Test Cases:
Create test cases to ensure that each definition-use pair is covered. This
will validate that the program correctly handles the flow of data from
definition to use.
5. Execute the Test Cases:
Run the tests and verify that each definition is followed by its
corresponding use. Also, ensure that variables are not used before they
are defined.

Example of Definition/Use Testing


Consider the following Python function:
def calculate_discount(price, discount):
    total = price - (price * discount)   # Def1: total is defined
    if total > 100:                      # P-use1: total is used in a predicate
        return total
    else:
        return 0
Step 1: Identify Definitions and Uses
• Definition of total:
The variable total is defined by the statement: total = price - (price *
discount). This is the definition of total.
• Use of total:
The variable total is used in the if condition: if total > 100:. This is
a predicate use (P-use), where total is used to make a decision.
• Return Use:
total is used in the return statement return total.
Step 2: Create the Control Flow Graph (CFG)
The CFG for this program might look like this:
[Start]
   |
   v
[Def1: total = price - (price * discount)]
   |
   v
[P-use1: if total > 100]
   | True             | False
   v                  v
[Return total]    [Return 0]
Step 3: Identify Definition-Use Paths
• Path 1 (Def1 → P-use1):
This path tests the flow from the definition of total to its use in
the if statement.
Step 4: Design Test Cases
To test the definition-use path, we design test cases:
1. Test Case 1 (Valid Input):
• Input: calculate_discount(200, 0.1)
• Expected Output: 180
• This test case covers the scenario where total = 200 - (200 * 0.1) =
180, which satisfies the condition if total > 100.
2. Test Case 2 (Invalid Input):
• Input: calculate_discount(50, 0.2)
• Expected Output: 0
• This test case tests the situation where total = 50 - (50 * 0.2) = 40,
which does not satisfy if total > 100.
Step 5: Execute the Test Cases
• Test Case 1:
calculate_discount(200, 0.1) → Output: 180
This test case ensures that the definition of total is correctly used in
the if condition and returns the correct result.
• Test Case 2:
calculate_discount(50, 0.2) → Output: 0
This test case ensures that the definition of total is used correctly, even
when the condition is not met.
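The two definition-use test cases can be executed directly (function restated so the snippet is self-contained):

```python
def calculate_discount(price, discount):
    total = price - (price * discount)   # Def1 of total
    if total > 100:                      # P-use1 of total
        return total
    else:
        return 0

assert calculate_discount(200, 0.1) == 180   # true branch of the P-use
assert calculate_discount(50, 0.2) == 0      # false branch of the P-use
```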

Advantages of Definition/Use Testing


1. Thorough Data Flow Coverage: Ensures that variables are defined and
used correctly, which helps catch issues like uninitialized variables or
incorrectly handled data.
2. Helps Identify Logical Errors: By checking the flow from definition to
use, it helps identify logical errors where a variable may not be used as
expected or its value is not propagated correctly.
3. Improved Code Quality: Encourages better code practices by ensuring
variables are defined before they are used, and unused variables are
eliminated.
Disadvantages of Definition/Use Testing
1. Complexity for Large Programs: In larger programs with many
variables, managing and tracking all definition-use pairs can become
difficult.
2. Does Not Cover All Functional Testing: While it ensures correct data
flow, definition/use testing does not guarantee that the program behaves
correctly for all functional requirements. It focuses solely on data flow,
not business logic.
3. Overhead in Tracking Variables: Requires careful tracking of variable
definitions and their subsequent uses, which can increase testing effort.
Slice-Based Testing

Slice-Based Testing is a white-box testing technique that focuses on analyzing
specific "slices" or sections of a program that affect a particular output or
variable. It involves isolating and testing parts of a program (called "slices")
that are relevant to specific inputs or outputs, allowing testers to narrow their
focus to particular sections of code. The goal is to test the program with respect
to specific variables and their influences, without testing the entire program at
once. A slice in this context refers to a subset of a program, typically consisting
of statements or conditions that directly or indirectly influence the value of a
particular variable. By concentrating on these slices, slice-based testing helps
in isolating and diagnosing issues in specific parts of the program more
efficiently.

Key Concepts of Slice-Based Testing


1. Program Slice:
A program slice is a subset of the program that includes all the
statements and control flow that directly or indirectly affect the value of
a given variable or output. A slice can be forward or backward:
• Forward Slice: Includes all statements that are affected by a
particular variable or output, moving forward in the program.
• Backward Slice: Includes all statements that affect a particular
variable, moving backward in the program from the point where the
variable is used or defined.
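The distinction can be illustrated on a hypothetical snippet, slicing backward on z at the return statement:

```python
def demo(input_value):
    a = input_value
    b = a * 2      # in the backward slice on z (z depends on b, b on a)
    c = 99         # NOT in the slice: c never influences z
    z = b + 1
    return z       # slicing criterion: (z, this return)

assert demo(3) == 7
```

The backward slice on z consists of the assignments to a, b, and z; the assignment to c can be ignored when testing z's behavior.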
2. Static Slicing vs. Dynamic Slicing:
• Static Slicing: A static slice is computed without executing the
program. It is determined based on the program’s control flow and
data dependencies.
• Dynamic Slicing: A dynamic slice is computed by executing the
program with specific input values, tracing the actual execution
flow, and determining which parts of the program influence the
given variable's value during execution.
3. Slicing Criterion:
The slicing criterion specifies the variable of interest (the variable
whose behavior is being analyzed) and the point in the program where
the slicing begins. This criterion helps in defining the relevant slice.
4. Control Flow:
Slice-based testing depends on understanding the control flow of a
program. By analyzing how data flows through the program and how it
is modified, testers can determine the program slices relevant to a given
criterion.
Steps in Slice-Based Testing
1. Identify the Slicing Criterion:
Decide the variable or output of interest that needs to be tested and the
specific program point to begin the analysis.
2. Construct the Program Slice:
Based on the slicing criterion, compute the program slice either
statically or dynamically. This will include all statements that affect the
variable or output of interest.
3. Design Test Cases:
Create test cases that focus on the identified slice of the program. The
test cases should cover different paths and scenarios that may affect the
value of the chosen variable or output.
4. Execute the Test Cases:
Run the test cases to verify the behavior of the sliced portion of the
program. During execution, analyze whether the identified slice
produces the expected output for the given input.
5. Evaluate Test Results:
After executing the test cases, analyze the results to check for errors,
inconsistencies, or unexpected behaviors within the sliced portion of the
program.

Example of Slice-Based Testing


Consider the following example in Python:
def calculate_discount(price, discount):
    if price <= 0:
        return "Invalid price"               # Def1
    total = price - (price * discount)       # Def2
    if total > 100:                          # Condition 1
        return total
    else:
        return 0
Let’s say we want to perform slice-based testing on the variable total and
examine how it is affected by the price and discount inputs.
Step 1: Identify the Slicing Criterion
The slicing criterion is total, and we will analyze how the value of total is
influenced by the values of price and discount.
Step 2: Construct the Program Slice
• The slice will include:
• The statement where price is checked (if price <= 0:),
since price affects the calculation of total.
• The statement where total is calculated (total = price - (price *
discount)), since this directly defines the value of total.
• The statement if total > 100: as it checks the value of total.
Step 3: Design Test Cases
Based on the slice, we can design the following test cases to verify the behavior
of total:
1. Test Case 1 (Valid Inputs):
• Input: calculate_discount(200, 0.1)
• Expected Output: 180 (since total = 200 - (200 * 0.1) = 180)
2. Test Case 2 (Invalid Price):
• Input: calculate_discount(-100, 0.1)
• Expected Output: "Invalid price" (since the program returns early
when price <= 0)
3. Test Case 3 (Discount Leading to Small Total):
• Input: calculate_discount(50, 0.2)
• Expected Output: 0 (since total = 50 - (50 * 0.2) = 40, and the
condition total > 100 is not satisfied)
Step 4: Execute the Test Cases
Run each of the test cases and verify whether the output matches the expected
results:
1. Test Case 1: calculate_discount(200, 0.1) → Output: 180
This tests the slice that calculates the total and ensures that it is
computed correctly.
2. Test Case 2: calculate_discount(-100, 0.1) → Output: "Invalid price"
This tests the slice for the price check and confirms that the program
handles invalid prices correctly.
3. Test Case 3: calculate_discount(50, 0.2) → Output: 0
This tests the condition where the total is small and ensures that the
correct branch is taken.
Step 5: Evaluate Test Results
Review the test case results to ensure that the slices are functioning as expected
and that the program behaves correctly for the given inputs.
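The three test cases above can be executed directly (function restated so the snippet is self-contained):

```python
def calculate_discount(price, discount):
    if price <= 0:
        return "Invalid price"
    total = price - (price * discount)       # the slice on total
    if total > 100:
        return total
    return 0

# Each test exercises a different path through the slice on total:
assert calculate_discount(200, 0.1) == 180
assert calculate_discount(-100, 0.1) == "Invalid price"
assert calculate_discount(50, 0.2) == 0
```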

Advantages of Slice-Based Testing


1. Focused Testing:
Slice-based testing allows testers to focus on specific parts of the
program that are relevant to particular outputs or variables, improving
test efficiency.
2. Helps Isolate Errors:
By isolating portions of the code that affect a specific variable, it
becomes easier to pinpoint and debug errors related to data flow and
dependencies.
3. Reduced Complexity:
Rather than testing the entire program, slice-based testing allows for a
reduced scope of testing, which can make it easier to identify problems.
4. Improved Coverage:
Slice-based testing provides more granular coverage by focusing on
how specific variables are defined and used, potentially improving code
quality.
Disadvantages of Slice-Based Testing
1. High Complexity for Large Programs:
For large programs with many variables and dependencies, constructing
and analyzing slices can become very complex and time-consuming.
2. Limited by Criterion Selection:
The effectiveness of slice-based testing depends on the proper selection
of the slicing criterion. If the wrong variable is selected, the tests might
miss critical paths in the program.
3. Static vs. Dynamic Slicing:
Static slicing might not account for runtime-specific behaviors, such as
values affected by inputs, while dynamic slicing may require significant
resources to track actual execution flow.
4. May Miss Larger Context:
While slice-based testing focuses on specific variables or outputs, it
may overlook the broader context or interactions between different parts
of the program.
