SQA-note
This note provides a comprehensive overview, integrating and explaining the key concepts
related to Software Quality Assurance (SQA) and Testing.
Testing is a method used to "check whether the actual software product matches expected requirements
and to ensure that the software product is defect-free". It involves the execution of software or system
components, either manually or using automated tools. The primary purpose of software testing is to
identify errors, gaps, or missing requirements when compared to the actual requirements.
The typical objectives of testing include:
Preventing defects by evaluating work products like requirements, user stories, design, and code.
Verifying that all specified requirements have been fulfilled.
Checking if the test object is complete and validating that it works as expected by users and other
stakeholders.
Building confidence in the level of quality of the test object.
Finding defects and failures, thereby reducing the risk of inadequate software quality.
Providing sufficient information to stakeholders for informed decisions, especially regarding the
quality level of the test object.
Complying with contractual, legal, or regulatory requirements or standards, and/or verifying
the test object’s compliance.
Reducing costs associated with defects.
Showing the system meets user needs.
Assessing the software quality.
The relationship is sequential: A person makes an error that creates a fault in the software, which can
then cause a failure during operation.
Software is written by human beings who are not perfect and make mistakes (errors).
Developers are often under increasing pressure to deliver to strict deadlines, leaving less time to
check assumptions that may be wrong, which results in incomplete systems.
What do software faults cost? Software faults can be incredibly expensive, causing huge sums of money
(e.g., Ariane 5 rocket failure at $7 billion, Mariner space probe at $250 million, American Airlines at
$50 million). Conversely, some faults may cost very little or nothing, causing only minor inconvenience
or no visible detrimental impact. Software is not "linear"; a small input can sometimes have a very large
effect. In safety-critical systems, software faults can even cause death or injury (e.g., Therac-25 radiation
treatment, aircraft crashes).
Why not just "test everything"? Exhaustive testing, which means exercising all combinations of inputs
and preconditions, is not practical as it would take an "impractical amount of time" or even "infinite
time". For example, a scenario with 20 inputs, each having multiple values, could lead to 480,000 tests,
taking over 17 days without accounting for retesting.
How much testing is enough? It's never truly "enough" in an absolute sense, but it depends heavily on
risk.
Testing and Quality: Testing measures software quality, and when faults are found and removed,
software quality (and possibly reliability) is improved. Testing assesses system function, correctness, and
non-functional qualities like reliability, usability, maintainability, reusability, and testability.
4. Psychology of Testing
The purpose of testing can be seen paradoxically: it's to find faults, but finding faults can destroy
confidence. However, the best way to build confidence is actually to try and find faults, showing what the
system shouldn't do or doesn't do as it should. This approach, focused on finding faults, results in fewer
faults remaining in the system. A traditional approach, aiming to show the system works, might leave
faults undiscovered.
The Tester's Mindset: Testers perform a critical process, often delivering "bad news" (e.g., "your baby is
ugly") and working under intense time pressure. They need a different mindset, questioning "What if it
isn’t?" or "What could go wrong?".
Rights: Accurate information, insight from developers, delivered code tested to an agreed standard,
professional regard, the right to find faults, challenge specifications, have reported faults taken
seriously, make predictions, and improve their own process.
Responsibilities: Follow test plans, report faults objectively and factually, check tests are correct,
remember it's the software (not the programmer) being tested, assess risk objectively, prioritise
reporting, and communicate the truth.
Independence in Testing: Testing one's own work is less effective (finding only 30-50% of faults) due to
shared assumptions, emotional attachment, and a desire not to find faults. Levels of independence range
from no independence (developer testing own code) to external organisation testing, or even tool-
generated tests.
5. Fundamental Test Process
Planning tasks include:
Defining test strategy and policies to establish a clear roadmap for stakeholders.
Determining scope, risks, and test objectives (ensuring each requirement is covered).
Defining the test approach (procedures, techniques, teams, environment, data) to identify
feasibility.
Implementing the test policy/strategy.
Determining and allocating test resources (environment, people).
Scheduling all test activities (plan, design, implementation, execution, evaluation).
Determining and agreeing on exit criteria with stakeholders (e.g., test coverage, number of
tests executed), which define the end of testing and allow for software release.
Controlling activities involves measuring and analysing results (e.g., tests executed, defects
found by severity), monitoring and documenting progress for stakeholders, initiating corrective
actions, and making GO/NO-GO decisions for release.
Preparation tasks: Developing and prioritising step-by-step test cases, creating test suites, and
verifying the test environment is ready.
Execution: Running prescribed test cases, prioritising the most important ones. This can be
manual or automated. Execution may be stopped if many faults are found early or under time
pressure.
Recording: Documenting the test process, including identities and versions of the software and
test specifications. Mark progress, document actual outcomes, capture new test case ideas.
Crucially, compare actual outcome with expected outcome and log discrepancies as software
faults, test faults, environment faults, or incorrect test runs. Record coverage levels. Retest after
defects are fixed and perform regression tests.
Ending testing by checking planned acceptance/rejection deliverables and whether defects are
resolved or deferred.
Finalising and archiving testware (scripts, data, tools, environment) for future reuse.
6. Test Plan
A Test Plan is a formal document that details the testing strategies, processes, workflow, and
methodologies for a project. Key components include:
Introduction: A brief overview of the test strategies, process, workflow, and methodologies.
Scope: Defines what features and requirements (functional or non-functional) will be tested (In
Scope) and what will not (Out of Scope).
Quality Objective: States the overall goal of the testing project, such as ensuring conformance to
requirements, meeting quality specifications, and identifying/fixing bugs before go-live.
Roles and Responsibilities: Details the roles and duties of team members (e.g., QA Analyst, Test
Manager, Developers).
Test Methodology: Explains the chosen test methodology (e.g., Waterfall, Iterative, Agile, Extreme
Programming) and factors influencing its selection.
Test Levels: Defines the types of testing (e.g., Unit, Integration, System, Acceptance) to be executed
based on project scope, time, and budget constraints.
Bug Triage: Outlines the process for defining resolution types for bugs and prioritising/scheduling
bugs to be fixed.
Suspension Criteria and Resumption Requirements: Criteria for suspending and resuming
testing.
Test Completeness: Criteria for deeming testing complete (e.g., 100% test coverage, all test cases
executed, all open bugs fixed or deferred).
Test Deliverables: Lists all artefacts produced during testing (e.g., Test Plan, Test Cases, Bug
Reports, Test Metrics).
Resource & Environment Needs: Lists required testing tools (e.g., Requirements Tracking, Bug
Tracking, Automation Tools) and minimum hardware/software environment requirements.
Terms/Acronyms: A glossary of terms used in the project.
7. Test Estimation
Estimation is a forecast or prediction, an approximation of cost, time, quantity, or worth for a task. It's a
blend of science and art, based on past experience, available documents, knowledge, assumptions, and
calculated risks.
Why do we Estimate? Poor estimation leads to overshooting budgets and timescales; many software
projects fail to finish within allotted schedules and budgets due to factors like unspecified objectives, bad
planning, new technology, inadequate methodology, and insufficient staff.
1. Identify scope.
2. State assumptions.
3. Assess the tasks involved.
4. Estimate effort for each task.
5. Calculate Total Effort.
6. Work out elapsed time/critical path.
7. Check if the total is reasonable and reassess if needed.
Function Point Method for estimation:
Function Points measure the size of computer applications from a functional, user-centric view,
independent of language, methodology, or team capability.
Steps:
1. Define Function Points (e.g., based on use cases).
2. Give Weightage to all Function Points (e.g., Simple for GUI, Medium for database checks,
Complex for API interactions).
3. Define an Estimate Per Function Point based on similar projects or organisational standards.
4. Calculate Total Effort Estimate by multiplying Total Function Points by the Estimate defined
Per Function Point.
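As a rough illustration of step 4, the arithmetic could be sketched as below; the function-point counts,
weights, and effort-per-point figure are invented for the example and would in practice come from
similar past projects or organisational standards.

    # Hypothetical function-point counts from steps 1-2 (grouped by weightage)
    counts = {"simple": 10, "medium": 6, "complex": 4}
    weights = {"simple": 1, "medium": 2, "complex": 3}

    # Step 3: assumed estimate per weighted function point (person-hours)
    hours_per_point = 5

    # Step 4: Total Effort = Total Function Points x Estimate per Function Point
    total_points = sum(counts[c] * weights[c] for c in counts)   # 10*1 + 6*2 + 4*3 = 34
    total_effort = total_points * hours_per_point                # 34 * 5 = 170 person-hours
    print(f"Total function points: {total_points}")
    print(f"Estimated effort: {total_effort} person-hours")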
Tips for Estimation: Allow enough time, use documented data from past projects, use own estimates,
involve several people, re-estimate throughout the lifecycle, create standardised procedures, and focus on
improving the estimation process.
8. Test Design Techniques
Black-box Testing (Functional Testing) Techniques: These techniques analyse the input/output domain
or observable behaviour of the program without knowledge of internal structure.
Equivalence Partitioning: Divides input test data into valid and invalid equivalence classes and
selects one test case from each. This reduces the number of test cases needed. For an input between 1
and 100, examples are 50 (valid), 0 (invalid <1), and 101 (invalid >100).
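A minimal sketch of how these partitions translate into test cases, assuming a hypothetical
accept_value function that accepts integers between 1 and 100:

    def accept_value(x: int) -> bool:
        """Hypothetical unit under test: accepts integers from 1 to 100."""
        return 1 <= x <= 100

    # One representative test case per equivalence class
    equivalence_classes = {
        "valid (1-100)":   (50,  True),
        "invalid (< 1)":   (0,   False),
        "invalid (> 100)": (101, False),
    }

    for name, (value, expected) in equivalence_classes.items():
        assert accept_value(value) == expected, f"class {name} failed for input {value}"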
Boundary Value Analysis: Tests values at the edges and just outside the edges of input ranges. It is
often performed after Equivalence Partitioning. For an input between 1 and 100, examples include 1,
0, 2, 100, 99, 101.
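Continuing the same hypothetical accept_value example, a boundary-value sketch exercises the values
at each edge and just outside it:

    def accept_value(x: int) -> bool:   # same hypothetical unit as above
        return 1 <= x <= 100

    # Boundary values for the range 1-100, with the expected result for each
    boundary_cases = [
        (0, False), (1, True), (2, True),        # lower boundary and its neighbours
        (99, True), (100, True), (101, False),   # upper boundary and its neighbours
    ]

    for value, expected in boundary_cases:
        assert accept_value(value) == expected, f"boundary case {value} failed"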
Decision Table (Cause-Effect Table): A systematic way to deal with combinations of inputs,
focusing on business logic or rules. It provides complete coverage of test cases and guarantees
considering every possible combination of condition values (completeness property). The number of
combinations is 2^n, where n is the number of conditions, each with two possible values.
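As an illustration of the 2^n idea, a decision table with two conditions (2^2 = 4 rules) can be encoded
and checked as below; the membership/coupon conditions and the discount rule are invented purely for
the example.

    from itertools import product

    def discount(is_member: bool, has_coupon: bool) -> int:
        """Hypothetical business rule under test (percentage discount)."""
        if is_member and has_coupon:
            return 20
        if is_member or has_coupon:
            return 10
        return 0

    # Decision table: one expected outcome per combination of condition values
    decision_table = {
        (True,  True):  20,
        (True,  False): 10,
        (False, True):  10,
        (False, False): 0,
    }

    # Completeness: every possible combination of the two conditions is covered
    for conditions in product([True, False], repeat=2):
        assert discount(*conditions) == decision_table[conditions]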
Cause-Effect Graph (Ishikawa or Fishbone Diagram): Identifies possible root causes (distinct
input conditions, "Causes") for a specific effect or problem ("Effect"), relating interactions among
factors affecting a process. It helps determine root causes using a structured approach and indicates
causes of variation.
Error Guessing: Uses a tester's skill, intuition, and experience to identify defects not easily captured
by formal techniques. It's often done after formal techniques and by experts. It's less reproducible,
and coverage is limited by experience. Typical conditions include division by zero, blank input,
empty files, or wrong data types.
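A small sketch of how such an error-guessing checklist can be turned into quick checks; the
divide_100_by function and the chosen inputs are invented for the example.

    def divide_100_by(value):
        """Hypothetical unit under test."""
        return 100 / value

    # Inputs that experience suggests are likely to break things
    suspicious_inputs = [0, "", None, "abc"]   # zero divisor, blank, missing, wrong type

    for value in suspicious_inputs:
        try:
            divide_100_by(value)
            print(f"input {value!r}: no error raised")
        except (ZeroDivisionError, TypeError) as exc:
            print(f"input {value!r} raised {type(exc).__name__}")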
Random Testing (Monkey Testing): A functional black-box technique used when time is limited.
Random inputs are identified, selected independently, executed, and results recorded/compared. It's
used when defects are not identified at regular intervals and can save time and effort.
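A minimal random-testing sketch against the same hypothetical accept_value function: inputs are
selected independently at random, executed, and each actual result is compared with the expected
one. Fixing the random seed is one way to mitigate the poor reproducibility noted above.

    import random

    def accept_value(x: int) -> bool:   # same hypothetical unit as above
        return 1 <= x <= 100

    random.seed(42)                     # fixed seed keeps the run reproducible

    for _ in range(1000):
        value = random.randint(-1000, 1000)   # random input, selected independently
        expected = 1 <= value <= 100          # oracle for the expected outcome
        assert accept_value(value) == expected, f"mismatch for input {value}"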
9. Types of Testing
The sources outline various types of testing, broadly categorised as:
A. Functional Testing: Ensures software functions as per requirements, focusing on features, user
interactions, and business requirements.
Unit Testing: Tests individual components or units of code in isolation to verify correctness, identify
bugs early, and ensure each unit functions as expected. Benefits include early bug detection,
improved code quality, faster debugging, live documentation, and refactoring confidence.
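A minimal unit-test sketch using Python's built-in unittest module; the add function stands in for any
small unit of code tested in isolation.

    import unittest

    def add(a: int, b: int) -> int:
        """Unit under test (illustrative only)."""
        return a + b

    class TestAdd(unittest.TestCase):
        def test_positive_numbers(self):
            self.assertEqual(add(2, 3), 5)

        def test_negative_numbers(self):
            self.assertEqual(add(-2, -3), -5)

    if __name__ == "__main__":
        unittest.main()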
Integration Testing: Combines individual modules and tests them as a group to verify interactions,
ensure combined components work together, and detect issues in data flow/communication.
System Testing: Tests a complete and integrated system to verify it meets specified functional and
non-functional requirements and behaves as expected in a real-world environment.
Acceptance Testing: The final phase, ensuring the system meets user and business requirements and
is ready for production.
User Acceptance Testing (UAT): Performed by end-users to validate the system against their
requirements, focusing on user-friendliness and readiness for production.
Alpha Testing: Conducted by the internal development or QA team in a controlled environment
to identify critical bugs and validate functionality/stability. Benefits include early bug detection,
controlled environment, and internal feedback.
Beta Testing: Conducted by real users in a real-world (production) environment to gather
feedback and identify issues not caught during alpha testing. Benefits include real-world
feedback, compatibility testing, and user validation.
Regression Testing: Performed after code changes (fixes, enhancements, new features) to ensure
they haven't adversely affected existing functionality. It verifies new changes don't introduce new
bugs, maintains quality, saves costs, and builds confidence. Performed after bug fixes, new features,
configuration changes, or performance optimisations.
Smoke Testing: A preliminary test after a new build deployment to check if basic functionalities are
working correctly. It verifies critical features, determines if the build is stable enough for further
testing, enables early detection, saves time, builds confidence, and provides quick feedback.
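A smoke test is often just a short script that hits the critical paths after a new build is deployed; the
endpoints below are hypothetical placeholders.

    from urllib.request import urlopen

    # Hypothetical critical endpoints checked after every new build deployment
    CRITICAL_ENDPOINTS = [
        "http://localhost:8080/health",
        "http://localhost:8080/login",
        "http://localhost:8080/search",
    ]

    def smoke_test() -> bool:
        """Return True only if every critical endpoint responds with HTTP 200."""
        for url in CRITICAL_ENDPOINTS:
            try:
                if urlopen(url, timeout=5).status != 200:
                    return False
            except OSError:
                return False
        return True

    # A failing smoke test means the build is not stable enough for further testing
    print("Build stable for further testing" if smoke_test() else "Reject build")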
B. Black Box Testing: The internal structure, design, or implementation is not known to the tester.
Focuses on "what the system does" and meeting user requirements.
C. White Box Testing: The internal structure, design, and implementation are known to the tester.
Focuses on "how the system works," validating internal logic, code structure, and adherence to design
specifications.
D. Manual Testing: Manually executing test cases without automation tools. Focuses on functionality,
usability, and performance through human observation and interaction. Important for exploratory testing,
user experience (UX) testing, ad-hoc testing, and early-stage testing when the application is unstable.
E. Automated Testing: Uses specialised tools and scripts to execute test cases, compare results, and
generate reports.
Purpose: Increases efficiency, accuracy, reduces manual effort, and enables continuous testing in
CI/CD pipelines.
When to Use: For repetitive test cases (like regression), large-scale execution (load/performance),
cross-browser/platform testing, and high-risk areas.
Types of Testing that can be Automated: Functional, Regression, Performance, Load, Security,
Unit, and Integration Testing.
How to Perform: Identify suitable test cases, select a tool (e.g., Selenium, Appium, TestNG, JUnit,
Cypress), develop scripts, execute, analyse results, and maintain scripts.
Test Automation Framework: A structured set of guidelines and tools that simplify automated
testing, defining how scripts are written, executed, and maintained.
Types: Data-Driven, Keyword-Driven, Hybrid, Behaviour-Driven Development (BDD); a small
data-driven sketch appears after this section.
Use of Frameworks: Improves reusability, enhances maintainability/scalability, reduces tester
dependency, provides structured reporting, and supports CI/CD integration.
Challenges: High initial cost, maintenance overhead, tool limitations, and skill requirements.
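As a tiny illustration of the data-driven idea, test data is kept separate from the test logic so the same
script can be reused for many cases; pytest's parametrize decorator is one common way to do this
(pytest is assumed to be installed, and the login function is a made-up unit under test).

    import pytest

    # Test data kept separate from the test logic (it could equally be read from a CSV file)
    LOGIN_CASES = [
        ("alice", "correct-password", True),
        ("alice", "wrong-password",   False),
        ("",      "any-password",     False),
    ]

    def login(username: str, password: str) -> bool:
        """Hypothetical unit under test."""
        return username == "alice" and password == "correct-password"

    @pytest.mark.parametrize("username,password,expected", LOGIN_CASES)
    def test_login(username, password, expected):
        assert login(username, password) == expected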
F. Non-functional Testing: Evaluates how well the system performs, rather than what it does, assessing
aspects like performance, usability, reliability, and security.
Performance Testing: Evaluates system behaviour under various load conditions to identify
bottlenecks (a minimal latency-measurement sketch follows this list of subtypes).
Load Testing: Assesses behaviour under expected load conditions (e.g., simulating 1,000
users).
Stress Testing: Determines robustness under extreme conditions, pushing the system beyond
normal capacity to find breaking points.
Endurance Testing: Evaluates performance over extended periods to identify issues like
memory leaks or degradation.
Spike Testing: Assesses reaction to sudden, extreme load changes.
Volume Testing: Evaluates performance with large data volumes.
Scalability Testing: Determines how well the system scales up or down.
Capacity Testing: Determines the maximum capacity of the system.
Configuration Testing: Evaluates performance under different configurations.
Failover Testing: Ensures graceful handling of failover scenarios.
Latency Testing: Measures system response time to a request.
Stress Recovery Testing: Evaluates system recovery after a stress test.
Concurrency Testing: Assesses handling of multiple simultaneous users/transactions.
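A toy latency-measurement sketch relating to the load and latency types above; the endpoint and user
count are invented, and a real load test would normally use a dedicated tool such as JMeter or Locust.

    import time
    from concurrent.futures import ThreadPoolExecutor
    from urllib.request import urlopen

    URL = "http://localhost:8080/health"    # hypothetical endpoint
    CONCURRENT_USERS = 50                   # simulated simultaneous users

    def timed_request(_) -> float:
        """Issue one request and return its latency in seconds."""
        start = time.perf_counter()
        urlopen(URL, timeout=10).read()
        return time.perf_counter() - start

    with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
        latencies = list(pool.map(timed_request, range(CONCURRENT_USERS)))

    print(f"average latency: {sum(latencies) / len(latencies):.3f}s")
    print(f"worst latency:   {max(latencies):.3f}s")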
Usability Testing: Evaluates a product by testing it on users, focusing on user-centered interaction
design. Types include Exploratory, Comparative, and Heuristic Evaluation.
Security Testing: Identifies vulnerabilities, threats, and risks to protect data and resources.
Vulnerability Scanning: Automated scanning against known vulnerability signatures.
Penetration Testing: Simulates hacker attacks to find exploitable vulnerabilities.
Configuration Testing: Checks system security settings.
Database Security Testing: Ensures database integrity and confidentiality.
Network Security Testing: Tests network infrastructure security.
Reliability Testing: Ensures consistent performance of intended functions without failure over a
specified period.
Types of Defects:
Defect Life Cycle (Bug Life Cycle): A specific set of states a defect goes through, ensuring systematic
and efficient defect fixing and communication.
Defect Tracking Tools: Examples include JIRA, Bugzilla, Redmine, MantisBT, Trello.
Defect Management in Agile & DevOps: Agile focuses on early and frequent testing, while DevOps
integrates continuous monitoring and feedback loops to minimise defects.
Best practices for defect management include:
Early detection: Find and fix defects as early as possible in the SDLC.
Clear documentation: Provide complete information for reproducibility.
Prioritisation: Focus on critical defects first (based on impact/severity).
Metrics tracking: Measure defect density, resolution time, etc.
Continuous improvement: Analyse defect trends to prevent recurrence.
Example (Netflix Profiles Feature): The Netflix QA teams sent weekly Test Status Reports to executive
leadership. These reports included:
Status Reporting Guidelines: A good report effectively communicates relevant information tailored to
the audience. A typical report includes:
Extra Content
1. Explain the relationship between an "error," a "fault," and a "failure" in software quality assurance.
2. What is the primary purpose of conducting software testing, according to the provided materials?
3. Why is "exhaustive testing" generally considered impractical or impossible in software quality
assurance?
4. Briefly describe the "testing paradox."
5. What are the three main tasks involved in "test specification" within the fundamental test process?
6. List three key benefits of performing Unit Testing.
7. What is the main difference between Alpha Testing and Beta Testing?
8. Define "Regression Testing" and explain its purpose.
9. What is a "Defect Life Cycle" and why is it important in software testing?
10. Name three essential attributes that should be included when reporting a software defect.
Building Confidence: "To build confidence in the level of quality of the test object." This involves
demonstrating that the software conforms to its requirements.
Defect Detection and Risk Reduction: "To find defects and failures thus reduce the level of risk of
inadequate software quality." This is a primary goal, as faults can lead to significant costs and even
loss of life.
Informed Decision-Making: "To provide sufficient information to stakeholders to allow them to
make informed decisions, especially regarding the level of quality of the test object."
Compliance: "To comply with contractual, legal, or regulatory requirements or standards, and/or to
verify the test object’s compliance with such requirements or standards."
Understanding Bugs: The terminology for software imperfections follows a clear progression: an error
(a human mistake) produces a fault (a defect in the software), which, if executed, can cause a failure in operation.
Why Faults Occur and Their Cost: Faults are inherent because software is written by "human beings –
who know something, but not everything – who have skills, but aren’t perfect – who do make mistakes
(errors)." Pressure to meet deadlines also contributes. The cost of software faults can range from "very
little or nothing at all – minor inconvenience" to "huge sums – Ariane 5 ($7 billion) – Mariner space probe
to Venus ($250m) – American Airlines ($50m)." In safety-critical systems, faults can tragically "cause
death or injury – radiation treatment kills patients (Therac-25)."
The Necessity of Testing: Testing is crucial because software is likely to have faults, failures can be very
expensive, and it helps learn about software reliability. However, "exhaustive testing" – exercising "all
combinations of inputs and preconditions" – is impractical and takes an "infinite time" or "impractical
amount of time."
How Much Testing is Enough? The amount of testing required "depends on the risks for your system."
These risks include:
The Tester's Role and Independence: Testers often "Bring bad news ('your baby is ugly')" and are
"Under worst time pressure (at the end)." They require a "different mindset ('What if it isn’t?', 'What
could go wrong?')." To ensure objectivity and effectiveness, testers should ideally be independent from
the developers. Testing one's own work limits fault detection to "30% - 50% of your own faults" due to
"same assumptions and thought processes" and "emotional attachment." Levels of independence range
from "None: tests designed by the person who wrote the software" to tests designed by a "different
person," a "different department or team (e.g. test team)," a "different organisation (e.g. agency)," or even
"Tests generated by a tool."
Fundamental Test Process: The fundamental test process involves five key stages:
1. Planning and Control: This defines the "test strategy and policies," determines "scope, risks and
test objectives," and allocates "test resources (environment, people)." Control involves "measure and
analyze the results," "monitor and document progress," and "initiate corrective actions." Crucially,
this stage sets "exit criteria which could be test coverage (%), number of tests executed."
2. Test Analysis and Design: This involves defining "what's required (feasibility)" and includes
designing the test environment.
3. Test Implementation and Execution: This involves developing and prioritising "test cases by
describing step by step instructions," creating "test suites," and verifying the test environment.
During execution, test cases are run, and outcomes are logged, including "software fault – test fault
(e.g. expected results wrong) – environment or version fault – test run incorrectly."
4. Evaluating Exit Criteria and Reporting: This assesses if "test activities have been carried out as
specified" and if "initial exit criteria has to be reset and agreed again with stakeholders." A
"summary report" is written.
5. Test Closure Activities: These include "check which planned acceptance or rejection deliverable has
been delivered," resolving or deferring defects, and "finalize and archive test ware for an eventual
reuse."
1. Identify Scope
2. State Assumptions
3. Assess the tasks involved
4. Estimate the effort for each task
5. Calculate Total Effort
6. Work out elapsed time / critical path
7. Is the total reasonable? (Reassess if not)
8. Finish and Present the Estimate
A common method is the Function Point Method, which measures the "size of computer
applications...from a functional, or user, point of view," independent of programming language or
methodology. Function points are defined and assigned weightage (Simple, Medium, Complex) to
estimate effort per point.
Equivalence Partitioning: Divides the input test data into partitions of equivalent data, covering each
partition with at least one test case. This reduces test cases. For an input between 1 and 100, a valid
class (e.g., 50) and invalid classes (e.g., 0, 101) are identified.
Boundary Value Analysis: "tests are performed using the boundary values." This is done after
equivalence partitioning and tests "the values at the edges and just outside the edges." For input 1-
100, test values include 0, 1, 2, 99, 100, 101.
Decision Table: A "good way to deal with combinations of Inputs" and "Focused on business logic
or business rules." It provides a systematic way to state complex rules. The number of combinations
is 2^n, where n is the number of inputs.
Cause-Effect Graphing: Also known as Ishikawa or fishbone diagram, used to "Identify the
possible root causes, the reasons for a specific effect, problem, or outcome."
Random Testing (Monkey Testing): "performed when there is not enough time to write and execute
the tests." It is less reproducible and coverage is limited by experience.
5. Types of Testing
Software testing can be broadly categorised into Functional and Non-Functional testing.
Functional Testing: "Testing the functional aspects of the software to ensure it works as per the
requirements." Focuses on features, user interactions, and business requirements.
Unit Testing: "individual components or units of code are tested in isolation." Purpose: "Verify the
correctness of small pieces of code," "Identify bugs early," and "Ensure each unit functions as
expected."
Integration Testing: "individual modules or components are combined and tested as a group."
Purpose: "Verify the interaction between integrated units."
System Testing: "a complete and integrated system is tested to verify that it meets specified
requirements." Purpose: "Validate the system’s compliance with functional and non-functional
requirements."
Acceptance Testing: "the final phase of software testing where the system is tested to ensure it
meets the user and business requirements." Purpose: "Validate that the software is ready for
production."
User Acceptance Testing (UAT): "Testing performed by end-users to validate the system against
their requirements."
Alpha Testing: "conducted by the internal development team or QA team in a controlled
environment." Focuses on critical bugs and stability.
Beta Testing: "conducted by real users in a real-world environment." Gathers feedback on user
experience and compatibility.
Regression Testing: "performed to ensure that recent code changes (e.g., bug fixes, enhancements,
or new features) have not adversely affected existing functionality."
Smoke Testing: A preliminary test to "Determine if the build is stable enough for further testing."
"Identifies major issues early."
Non-Functional Testing: "Validate how well the system performs under various conditions."
Performance Testing: Evaluates system speed and stability under different loads. Includes:
Load Testing: Under "expected load conditions." (e.g., simulating 1,000 users).
Stress Testing: "beyond its normal operational capacity to see how it handles high stress or failure
conditions." (e.g., increasing users until crash).
Endurance Testing: "over an extended period" for issues like "memory leaks."
Spike Testing: "reaction to sudden and extreme changes in load."
Volume Testing: With "a large volume of data."
Scalability Testing: How well the system can "scale up or down."
Capacity Testing: Determines "the maximum capacity of the system."
Configuration Testing: Under "different configurations."
Failover Testing: Ensures "system can handle failover scenarios gracefully."
Latency Testing: Measures "time it takes for a system to respond to a request."
Stress Recovery Testing: How well "the system recovers after a stress test."
Concurrency Testing: Ability to handle "multiple users or transactions simultaneously."
Usability Testing: "evaluate a product by testing it on users." Includes Exploratory, Comparative,
and Heuristic Evaluation.
Security Testing: "identifying vulnerabilities, threats, and risks." Includes Vulnerability Scanning
and Penetration Testing ("Simulates an attack from a malicious hacker").
Reliability Testing: "ensuring that a software application performs its intended functions
consistently and without failure over a specified period of time."
Black-Box Testing: "internal structure, design, or implementation...is not known to the tester."
Focus: "What the system does."
White-Box Testing: "internal structure, design, and implementation...are known to the tester."
Focus: "How the system works."
Manual Testing: "manually executing test cases without using automation tools or scripts."
Automation Testing: Using tools to automate test execution. Automated testing is applicable to
various types of testing, including Functional, Regression, Performance, Load, Security, and Unit
Testing.
Before reporting a bug, testers should ensure: "Have I reproduced the bug 2-3 times," "Have I verified in
the Defect Tracking Tool...whether someone else already posted the same issue," and "Have I written the
detailed steps to reproduce the bug." Popular Defect Tracking Tools include JIRA, Bugzilla, and
Redmine.
1. Provides Transparency: Ensures "clear visibility into what is happening in testing." Stakeholders
know "How much work is completed," "What issues are currently being faced," and "Where the risks
are."
2. Supports Timely Decisions (GO/NO-GO): Provides "accurate test reporting arms decision-makers
with the data needed to: Approve release (GO) [or] Hold and fix issues (NO-GO)."
3. Identifies Risks Early: Flags risks like "High-priority unresolved defects," "Untested critical areas,"
or "Environmental instabilities," allowing proactive "risk mitigation strategies."
Project Name:
Duration: Reporting period.
Report by: Author (Test Lead/Manager).
Report to: Target Audience.
Planned Activities/Tasks: For the reporting period.
Activities/Tasks Accomplished: Since the last report.
Project Milestones Reached: With current status.
Activities/Tasks not Accomplished/missed:
Planned Activities for the next Reporting Period:
Test Project Execution Details: # Pass, # Fail, # Blocked, # Not Executed.
Test Summary: Test Coverage details.
Defect Status: Details of # defects / severity wise.
Status of Defect Re-testing:
Issues: Problems faced, listed by criticality.
Unresolved Issues: From previous periods.
Risks: Important risks affecting the testing schedule.
Environment Downtime Tracking: # hours lost due to environment issues.
In summary, SQA, through meticulous planning, execution, defect management, and transparent
reporting, aims to deliver high-quality software that meets requirements, manages risks, and supports
informed business decisions.
Building Confidence: Testing helps establish confidence in the quality level of the software being
tested.
Defect Detection and Risk Reduction: A primary goal of testing is to find defects (bugs) and
failures, thereby reducing the risk of inadequate software quality. Defects are manifestations of
human errors in software, and if executed, can lead to failures, which are deviations from expected
software delivery or service.
Informed Decision-Making: Testing provides stakeholders with sufficient information to make
informed decisions, particularly regarding the software's quality.
Compliance: Testing ensures compliance with contractual, legal, or regulatory requirements or
standards, and verifies the software's adherence to such stipulations.
Cost Avoidance: Failures caused by software faults can be extremely expensive, ranging from minor
inconveniences to huge sums of money and even loss of life in safety-critical systems. Testing helps
to avoid these costs and potential lawsuits.
Reliability Assessment: Testing helps to learn about the reliability of the software, which is the
probability that it will not cause system failure for a specified time under specified conditions.
Software is inherently prone to faults because it is created by humans under pressure, leading to errors.
Therefore, testing is a necessary and critical process to mitigate these risks and ensure the software's
fitness for purpose.
Error: An error is a human action that produces an incorrect result. This is the root cause, stemming
from human mistakes during development, design, or requirements gathering.
Fault (also known as defect or bug): A fault is a manifestation of an error in the software itself. It's
a flaw in the code or design. If a fault is executed, it has the potential to cause a failure. Faults are
states within the software.
Failure: A failure is a deviation of the software from its expected delivery or service. It is an event
that occurs during operation when a fault is triggered and causes the software to behave incorrectly.
Essentially, a person makes an error, which creates a fault in the software, and if that fault is activated, it
can cause a failure in operation.
Instead of exhaustive testing, the amount of testing considered "enough" is primarily determined by risk.
This principle guides testing efforts by considering:
Risk of missing important faults: The potential for critical defects to go undetected.
Risk of incurring failure costs: The financial or operational consequences of a software failure.
Risk of releasing untested or under-tested software: The impact on reputation, market share, or
customer satisfaction.
Risk of missing a market window: The competitive disadvantage of delaying release due to over-
testing.
Risk of over-testing or ineffective testing: Wasting resources on testing that yields diminishing
returns.
By using risk as the primary determinant, test teams can prioritise tests, allocate available time effectively,
and focus their efforts on the most critical areas. The goal is to perform "the best testing in the time
available," ensuring that the most important conditions are covered first and most thoroughly.
Planning: This involves defining the test strategy and policies, determining the scope, risks, and test
objectives (ensuring each requirement is covered), outlining the test approach (procedures,
techniques, teams, environment, data), implementing the test policy, identifying necessary resources,
scheduling all test activities, and establishing clear exit criteria for testing completion.
Control: This phase ensures that the planned activities are implemented and communicated
effectively. It includes measuring and analysing results (e.g., test execution progress, defect
findings), monitoring and documenting progress for stakeholders, initiating corrective actions if the
strategy needs adjustment, and making key decisions regarding continuing, stopping, or restarting
testing, or confirming a "GO" for release.
Analysis: This involves identifying "what" is to be tested (test conditions) based on specifications
and prioritising them.
Design: This involves determining "how" the identified conditions will be tested by designing test
cases (test inputs and expected results) and sets of tests for various objectives.
Building: This involves implementing the test cases by preparing test scripts and necessary test data.
Implementation: This phase focuses on building the high-level designs into concrete test cases and
procedures, developing and prioritising step-by-step instructions, creating test suites, and verifying
the readiness of the test environment.
Execution: This involves running the planned test cases, usually prioritising the most important
ones, logging the outcomes of each test execution, and recording details such as software identities,
versions, data used, and environment.
Evaluating Exit Criteria and Reporting: This phase involves assessing whether the predefined test completion criteria (e.g., test coverage,
number of faults found, cost/time limits) have been met. If not, further test activities may be
required. A summary report is then written as a test deliverable, documenting clear decisions.
Test Closure Activities: This final phase includes checking which planned deliverables have been completed, ensuring
defects are resolved or deferred, finalising and archiving test ware (scripts, data, tools, environment)
for future reuse, and conducting lessons learned to improve future testing processes.
The importance of Test Status Reporting stems from several key benefits:
A good status report should be tailored to its audience and typically includes project name, reporting
period, author, target audience, planned and accomplished activities, milestones, unaccomplished tasks,
execution details (pass/fail/blocked), test summary, defect status, unresolved issues, risks, and
environment downtime tracking.
1. Equivalence Partitioning:
Definition: This technique divides the input test data into "equivalent" partitions or classes. The idea
is that if a test case from a specific partition reveals a defect, other test cases from the same partition
are likely to reveal similar defects.
Steps: Identify input ranges or conditions, divide the input domain into valid and invalid equivalence
classes, and select one test case from each class.
Example: For a system accepting integers between 1 and 100, valid class: 1-100 (e.g., 50); invalid
classes: <1 (e.g., 0) and >100 (e.g., 101).
2. Decision Table:
Definition: A decision table is a systematic way to deal with combinations of inputs (causes) that
lead to specific outputs (effects). It's focused on business logic or rules and provides complete
coverage of test cases for complex conditions.
Principle: For 'n' inputs, there can be 2^n possible combinations. The table maps these combinations
to expected outcomes.
1. Exploratory Testing:
Definition: This is a less structured approach where testers learn about the software, design tests,
and execute them simultaneously. It's often used when documentation is limited or under time
pressure.
Characteristics: It's often performed by experts, is less reproducible, and its coverage is limited by
the tester's experience and knowledge.
2. Random Testing (Monkey Testing):
Definition: This involves generating random inputs and executing them without a specific test case
design. It's typically used when there's insufficient time for detailed test case creation.
Characteristics: It's a form of black-box functional testing, often used by experts, less reproducible,
and its effectiveness depends on luck.
These techniques help testers to create effective, exemplary, evolvable, and economic test cases that have
a high probability of finding new defects and are traceable to requirements.
Functional Testing: This verifies that each function of the software application operates in conformance
with the functional requirements and specifications. It focuses on "what the system does."
Unit Testing: Tests individual components or units of code in isolation to verify their correctness
and identify bugs early.
Integration Testing: Combines individual modules and tests them as a group to verify interactions
and data flow between integrated units.
System Testing: Tests a complete and integrated system to verify that it meets specified functional
and non-functional requirements as a whole.
Acceptance Testing: The final phase where the system is tested to ensure it meets user and business
requirements, validating readiness for production. This includes:
User Acceptance Testing (UAT): Performed by end-users to validate the system against their
requirements, focusing on user-friendliness and expectations.
Alpha Testing: Conducted by internal development or QA teams in a controlled environment to
identify critical bugs and validate functionality and stability.
Beta Testing: Conducted by real users in a real-world environment to gather feedback, identify
issues not caught internally, and focus on user experience and compatibility.
Regression Testing: Performed to ensure that recent code changes (e.g., bug fixes, enhancements)
have not adversely affected existing functionality.
Smoke Testing: A quick, preliminary test to verify that the critical features of an application are
functioning and that the build is stable enough for further detailed testing.
Non-Functional Testing: This verifies non-functional aspects of the software, such as performance,
usability, security, and reliability. It focuses on "how well the system performs."
Performance Testing: Evaluates how well the system performs under various conditions.
Load Testing: Assesses system behaviour under expected load conditions (e.g., 1,000 simultaneous
users).
Stress Testing: Determines system robustness under extreme conditions, pushing it beyond normal
capacity (e.g., increasing users until crash).
Endurance Testing: Evaluates performance over an extended period to identify issues like memory
leaks or degradation (e.g., running under moderate load for 24 hours).
Spike Testing: Assesses the system's reaction to sudden and extreme changes in load.
Volume Testing: Evaluates performance with a large volume of data.
Scalability Testing: Determines how well the system can scale up or down.
Capacity Testing: Determines the maximum capacity of the system.
Configuration Testing: Evaluates performance under different hardware/software configurations.
Failover Testing: Ensures the system handles failover scenarios gracefully.
Latency Testing: Measures response time to requests.
Stress Recovery Testing: Evaluates system recovery after stress.
Concurrency Testing: Assesses handling of multiple simultaneous users/transactions.
Usability Testing: Evaluates a product by testing it on users to assess user-friendliness and
experience.
Exploratory Usability Testing: Early testing to explore user needs with prototypes.
Comparative Usability Testing: Compares two or more designs for usability.
Heuristic Evaluation: Experts review against usability principles.
Security Testing: Identifies vulnerabilities, threats, and risks to protect data and resources.
Vulnerability Scanning: Automated scans for known security flaws.
Penetration Testing: Simulates a malicious attack to find exploitable vulnerabilities.
Configuration Testing: Checks system security settings.
Database Security Testing: Ensures database integrity and confidentiality.
Network Security Testing: Tests network infrastructure security.
Reliability Testing: Ensures the application performs its intended functions consistently without
failure over a specified period.
Many of these types of testing, both functional and non-functional, can also be automated to improve
efficiency and coverage.
What is the defect life cycle and what are its key stages?
The Defect Life Cycle (or Bug Life Cycle) in software testing describes the specific set of states a defect
or bug goes through from its identification to its resolution. Its purpose is to facilitate coordination and
communication about the defect's status among various team members, making the defect-fixing process
systematic and efficient.
1. New: This is the initial state when a defect is first logged and reported by the tester.
2. Assigned: Once a defect is logged, a test lead or manager reviews and approves it, then assigns it to
a developer or development team for investigation and fixing.
3. Open: The defect's status changes to "Open" when the assigned developer starts analysing the defect
and working on a fix.
4. Fixed: After the developer makes the necessary code changes and performs a preliminary
verification, they change the defect status to "Fixed."
5. Pending Retest: The developer then provides the fixed code to the testing team. Since the retesting
of the fix is still pending on the tester's end, the status is set to "Pending Retest."
6. Retest: The tester retests the code to verify whether the defect has been successfully fixed by the
developer. At this stage, the tester will either confirm the fix or reopen the defect.
7. Closed: If the tester confirms that the defect is no longer reproducible and the fix is satisfactory, the
defect's status is changed to "Closed."
8. Reopened: If, during the retesting phase, the tester finds that the defect still exists or is not fixed
correctly, they will change the status back to "Reopened" and reassign it to the developer.
Other possible states include:
Deferred/Postponed: If the defect is not critical and can be addressed in a future release.
Rejected/Not a Bug: If the developer determines that the reported issue is not a genuine defect (e.g.,
it's a feature, not a bug, or it's due to incorrect usage).
Duplicate: If the reported defect has already been logged.
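One way to picture the cycle is as a small state machine; the sketch below encodes the states and a
simplified reading of the transitions described above (the transition map is illustrative, not a
prescribed standard).

    from enum import Enum

    class DefectState(Enum):
        NEW = "New"
        ASSIGNED = "Assigned"
        OPEN = "Open"
        FIXED = "Fixed"
        PENDING_RETEST = "Pending Retest"
        RETEST = "Retest"
        CLOSED = "Closed"
        REOPENED = "Reopened"
        DEFERRED = "Deferred"
        REJECTED = "Rejected"
        DUPLICATE = "Duplicate"

    # Simplified transitions based on the stages described above
    TRANSITIONS = {
        DefectState.NEW:            {DefectState.ASSIGNED, DefectState.REJECTED,
                                     DefectState.DUPLICATE, DefectState.DEFERRED},
        DefectState.ASSIGNED:       {DefectState.OPEN},
        DefectState.OPEN:           {DefectState.FIXED, DefectState.REJECTED,
                                     DefectState.DEFERRED},
        DefectState.FIXED:          {DefectState.PENDING_RETEST},
        DefectState.PENDING_RETEST: {DefectState.RETEST},
        DefectState.RETEST:         {DefectState.CLOSED, DefectState.REOPENED},
        DefectState.REOPENED:       {DefectState.ASSIGNED},
    }

    def move(current: DefectState, target: DefectState) -> DefectState:
        """Allow only the transitions described in the life cycle above."""
        if target not in TRANSITIONS.get(current, set()):
            raise ValueError(f"invalid transition: {current.value} -> {target.value}")
        return target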
Throughout this cycle, it's crucial to report defects with compulsory attributes like ID, title, description,
steps to recreate, test data, test environment, actual results, screenshots, severity, priority, and
reported/assigned details. Tools like JIRA, Bugzilla, and Redmine are commonly used for defect tracking.
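For illustration, a defect report carrying the compulsory attributes listed above could be modelled as a
simple record; the field names follow the text and the severity/priority defaults are just placeholders.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class DefectReport:
        defect_id: str
        title: str
        description: str
        steps_to_recreate: List[str]
        test_data: str
        test_environment: str
        actual_result: str
        screenshots: List[str] = field(default_factory=list)
        severity: str = "Major"      # e.g. Critical / Major / Minor (illustrative scale)
        priority: str = "High"       # e.g. High / Medium / Low (illustrative scale)
        reported_by: str = ""
        assigned_to: str = ""
        status: str = "New"          # initial state in the defect life cycle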