
SQA Note

Here is a comprehensive overview of the provided sources, integrating and explaining the key concepts
related to Software Quality Assurance (SQA) and Testing:

1. Introduction to Software Quality Assurance (SQA) and Testing


Software Quality Assurance (SQA) is defined as a "set of activities to ensure the quality in software
engineering processes". It ultimately results in quality software products by establishing and evaluating
the processes that produce them, focusing on process-focused action. SQA aims to ensure software meets
quality standards and functions as intended.

Testing is a method used to "check whether the actual software product matches expected requirements
and to ensure that the software product is defect-free". It involves the execution of software or system
components, either manually or using automated tools. The primary purpose of software testing is to
identify errors, gaps, or missing requirements when compared to the actual requirements.

Typical Objectives of Testing include:

Preventing defects by evaluating work products like requirements, user stories, design, and code.
Verifying that all specified requirements have been fulfilled.
Checking if the test object is complete and validating that it works as expected by users and other
stakeholders.
Building confidence in the level of quality of the test object.
Finding defects and failures, thereby reducing the risk of inadequate software quality.
Providing sufficient information to stakeholders for informed decisions, especially regarding the
quality level of the test object.
Complying with contractual, legal, or regulatory requirements or standards, and/or verifying
the test object’s compliance.
Reducing costs associated with defects.
Showing the system meets user needs.
Assessing the software quality.

2. Understanding Software Defects: Error, Fault, and Failure


The sources distinguish between Error, Fault, and Failure.

Error: A human action that produces an incorrect result.


Fault: A manifestation of an error in software, also known as a defect or bug. If a fault is executed, it
may cause a failure.
Failure: A deviation of the software from its expected delivery or service; essentially, a found
defect.

The relationship is sequential: A person makes an error that creates a fault in the software, which can
then cause a failure during operation.

Why do faults occur in software?

Software is written by human beings who are not perfect and make mistakes (errors).
Developers are often under increasing pressure to deliver to strict deadlines, which leaves less time for
checks, encourages assumptions that may turn out to be wrong, and can result in incomplete systems.

What do software faults cost? Software faults can be incredibly expensive, costing huge sums of money
(e.g., the Ariane 5 rocket failure at $7 billion, the Mariner space probe at $250 million, American Airlines at
$50 million). Conversely, some faults may cost very little or nothing, causing only minor inconvenience
or no visible detrimental impact. Software is not "linear"; a small input can sometimes have a very large
effect. In safety-critical systems, software faults can even cause death or injury (e.g., Therac-25 radiation
treatment, aircraft crashes).

3. The Necessity and Extent of Testing


Why is testing necessary?

Because software is likely to have faults.


To learn about the reliability of the software. Reliability is defined as the probability that software
will not cause the failure of the system for a specified time under specified conditions.
Because failures can be very expensive.
To avoid being sued by customers.

Why not just "test everything"? Exhaustive testing, which means exercising all combinations of inputs
and preconditions, is not practical as it would take an "impractical amount of time" or even "infinite
time". For example, a scenario with 20 inputs, each having multiple values, could lead to 480,000 tests,
taking over 17 days without accounting for retesting.
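
A back-of-the-envelope calculation makes the explosion visible. The input and value counts below are hypothetical (they are not the figures from the example above); the point is only that the number of combinations is the product of the values per input.

```python
# Rough illustration of combinatorial explosion (hypothetical numbers).
from math import prod

values_per_input = [4] * 20              # assume 20 inputs, each with 4 representative values
combinations = prod(values_per_input)    # 4**20 = 1,099,511,627,776 combinations

seconds_per_test = 1                     # optimistic: one second per executed test
years = combinations * seconds_per_test / (60 * 60 * 24 * 365)
print(f"{combinations:,} combinations ≈ {years:,.0f} years of non-stop execution")
```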

How much testing is enough? It's never truly "enough" in an absolute sense, but it depends heavily on
risk.

Risk helps determine:

What to test first.
What to test most thoroughly.
Where to place emphasis and allocate available time.

The risks to balance include:

The risk of missing important faults.
The risk of incurring failure costs.
The risk of releasing untested or under-tested software.
The risk of losing credibility and market share.
The risk of missing a market window.
The risk of over-testing or ineffective testing.

The most important principle is to prioritise tests so that, whenever testing stops, the best possible testing
has been done in the time available. Determining how much testing is enough is difficult, but not impossible.

Testing and Quality: Testing measures software quality, and when faults are found and removed,
software quality (and possibly reliability) is improved. Testing assesses system function, correctness, and
non-functional qualities like reliability, usability, maintainability, reusability, and testability.

4. Psychology of Testing
The purpose of testing can be seen paradoxically: it's to find faults, but finding faults can destroy
confidence. However, the best way to build confidence is actually to try and find faults, showing what the
system shouldn't do or doesn't do as it should. This approach, focused on finding faults, results in fewer
faults remaining in the system. A traditional approach, aiming to show the system works, might leave
faults undiscovered.

The Tester's Mindset: Testers perform a critical process, often delivering "bad news" (e.g., "your baby is
ugly") and working under intense time pressure. They need a different mindset, questioning "What if it
isn’t?" or "What could go wrong?".

Tester's Rights and Responsibilities:

Rights: Accurate information, insight from developers, delivered code tested to an agreed standard,
professional regard, the right to find faults, challenge specifications, have reported faults taken
seriously, make predictions, and improve their own process.
Responsibilities: Follow test plans, report faults objectively and factually, check tests are correct,
remember it's the software (not the programmer) being tested, assess risk objectively, prioritise
reporting, and communicate the truth.

Independence in Testing: Testing one's own work is less effective (finding only 30-50% of one's own faults) due to
shared assumptions, emotional attachment, and a desire not to find faults. Levels of independence range
from no independence (tests designed by the developer who wrote the code), through tests designed by a
different person, team, or external organisation, up to tests generated by a tool.

5. The Fundamental Test Process


The Fundamental Test Process comprises five main activities:

1. Planning and Control:

Defining test strategy and policies to establish a clear roadmap for stakeholders.
Determining scope, risks, and test objectives (ensuring each requirement is covered).
Defining the test approach (procedures, techniques, teams, environment, data) to identify
feasibility.
Implementing the test policy/strategy.
Determining and allocating test resources (environment, people).
Scheduling all test activities (plan, design, implementation, execution, evaluation).
Determining and agreeing on exit criteria with stakeholders (e.g., test coverage, number of
tests executed), which define the end of testing and allow for software release.
Controlling activities involves measuring and analysing results (e.g., tests executed, defects
found by severity), monitoring and documenting progress for stakeholders, initiating corrective
actions, and making GO/NO-GO decisions for release.

2. Test Analysis and Design:

Reviewing the test basis to understand software specifications and deliverables.


Identifying test conditions (features, functions, attributes) that can be verified by test cases, and
prioritising them.
Evaluating the testability of requirements, specifying expected results.
Designing the test environment and identifying required tools/infrastructure.
Designing test cases: involves creating test input and data, determining expected results (what
output, what changes, what doesn't), and designing sets of tests for various objectives
(regression, confidence, fault finding).
Building test cases: preparing detailed test scripts (especially for less knowledgeable testers or
automation tools) and test data that must exist in the environment before tests. Expected results
should be defined before execution.

3. Test Implementation and Execution:

Preparation tasks: Developing and prioritising step-by-step test cases, creating test suites, and
verifying the test environment is ready.
Execution: Running prescribed test cases, prioritising the most important ones. This can be
manual or automated. Execution may be stopped if many faults are found early or under time
pressure.
Recording: Documenting the test process, including identities and versions of the software and
test specifications. Mark progress, document actual outcomes, capture new test case ideas.
Crucially, compare actual outcome with expected outcome and log discrepancies as software
faults, test faults, environment faults, or incorrect test runs. Record coverage levels. Retest after
defects are fixed and perform regression tests.

4. Evaluating Exit Criteria and Reporting:

Checking test logs against predefined exit criteria.


Assessing whether more tests are needed or whether the exit criteria need to be reset and agreed again with stakeholders (a small sketch of such a check follows this process list).
Writing a summary report as a test deliverable for official decision-making.

5. Test Closure Activities:

Ending testing by checking planned acceptance/rejection deliverables and whether defects are
resolved or deferred.
Finalising and archiving testware (scripts, data, tools, environment) for future reuse.
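
As a concrete illustration of evaluating exit criteria (step 4) against the criteria agreed during planning (step 1), the sketch below checks a test run against hypothetical thresholds and reports a GO/NO-GO recommendation. The coverage and pass-rate targets are assumptions for the example, not values from the sources.

```python
# Hypothetical exit-criteria check supporting a GO/NO-GO decision.
def evaluate_exit_criteria(executed, planned, passed, open_critical_defects,
                           min_execution=0.95, min_pass_rate=0.98):
    execution_rate = executed / planned
    pass_rate = passed / executed if executed else 0.0
    go = (execution_rate >= min_execution
          and pass_rate >= min_pass_rate
          and open_critical_defects == 0)
    return {"execution_rate": execution_rate, "pass_rate": pass_rate,
            "decision": "GO" if go else "NO-GO"}

print(evaluate_exit_criteria(executed=480, planned=500, passed=472,
                             open_critical_defects=1))
# -> 96% executed and ~98.3% passed, but one open critical defect => NO-GO
```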

6. Test Plan
A Test Plan is a formal document that details the testing strategies, processes, workflow, and
methodologies for a project. Key components include:

Introduction: A brief overview of the test strategies, process, workflow, and methodologies.
Scope: Defines what features and requirements (functional or non-functional) will be tested (In
Scope) and what will not (Out of Scope).
Quality Objective: States the overall goal of the testing project, such as ensuring conformance to
requirements, meeting quality specifications, and identifying/fixing bugs before go-live.
Roles and Responsibilities: Details the roles and duties of team members (e.g., QA Analyst, Test
Manager, Developers).
Test Methodology: Explains the chosen test methodology (e.g., Waterfall, Iterative, Agile, Extreme
Programming) and factors influencing its selection.
Test Levels: Defines the types of testing (e.g., Unit, Integration, System, Acceptance) to be executed
based on project scope, time, and budget constraints.
Bug Triage: Outlines the process for defining resolution types for bugs and prioritising/scheduling
bugs to be fixed.
Suspension Criteria and Resumption Requirements: Criteria for suspending and resuming
testing.
Test Completeness: Criteria for deeming testing complete (e.g., 100% test coverage, all test cases
executed, all open bugs fixed or deferred).
Test Deliverables: Lists all artefacts produced during testing (e.g., Test Plan, Test Cases, Bug
Reports, Test Metrics).
Resource & Environment Needs: Lists required testing tools (e.g., Requirements Tracking, Bug
Tracking, Automation Tools) and minimum hardware/software environment requirements.
Terms/Acronyms: A glossary of terms used in the project.

7. Test Estimation
Estimation is a forecast or prediction, an approximation of cost, time, quantity, or worth for a task. It's a
blend of science and art, based on past experience, available documents, knowledge, assumptions, and
calculated risks.

Why do we Estimate? Poor estimation leads to overshooting budgets and timescales; many software
projects fail to finish within allotted schedules and budgets due to factors like unspecified objectives, bad
planning, new technology, inadequate methodology, and insufficient staff.

Steps involved in Estimation:

1. Identify scope.
2. State assumptions.
3. Assess the tasks involved.
4. Estimate effort for each task.
5. Calculate Total Effort.
6. Work out elapsed time/critical path.
7. Check if the total is reasonable and reassess if needed.

Function Point Method for estimation:

Function Points measure the size of computer applications from a functional, user-centric view,
independent of language, methodology, or team capability.
Steps:
1. Define Function Points (e.g., based on use cases).
2. Give Weightage to all Function Points (e.g., Simple for GUI, Medium for database checks,
Complex for API interactions).
3. Define an Estimate Per Function Point based on similar projects or organisational standards.
4. Calculate Total Effort Estimate by multiplying Total Function Points by the Estimate defined
Per Function Point.
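
A minimal sketch of steps 2-4 above, assuming hypothetical weightings and a hypothetical effort figure per function point; in practice these values would come from similar past projects or organisational standards.

```python
# Hypothetical function point estimate: effort = total weighted points * effort per point.
WEIGHTS = {"simple": 1, "medium": 2, "complex": 3}       # assumed weightings
HOURS_PER_FUNCTION_POINT = 6                             # assumed figure from past projects

function_points = {"simple": 12, "medium": 7, "complex": 4}   # counts per category

total_points = sum(WEIGHTS[kind] * count for kind, count in function_points.items())
total_effort_hours = total_points * HOURS_PER_FUNCTION_POINT

print(f"Total function points: {total_points}")          # 12*1 + 7*2 + 4*3 = 38
print(f"Estimated effort: {total_effort_hours} hours")   # 38 * 6 = 228 hours
```

The resulting total effort would then feed into working out elapsed time and the critical path.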

Tips for Estimation: Allow enough time, use documented data from past projects, use own estimates,
involve several people, re-estimate throughout the lifecycle, create standardised procedures, and focus on
improving the estimation process.

8. Test Design Techniques


A Test Case is a set of test inputs, execution conditions, and expected results developed for a particular
objective. It documents the pretest state of the Application Under Test (AUT) and its environment, inputs,
and the expected output.

Components of a Test Case include:

Test Case ID, Test Title/Description/Objective.


Test Steps, Test Data.
Expected Result, Post Condition, Actual Result.
Status (Pass/Fail), Notes/Comments.
Prerequisite, Severity, Priority, Executed by.
Project name, Module name, Created by, Created date, Reviewed by, Reviewed date.

Good Test Cases should:

Find Defects: Have a high probability of finding new defects.


Have an unambiguous and tangible result.
Be repeatable and predictable.
Be traceable to requirements or design documents.
Be automatable for execution and tracking.
Not mislead and be feasible.

Black-box Testing (Functional Testing) Techniques: These techniques analyse the input/output domain
or observable behaviour of the program without knowledge of internal structure.

Equivalence Partitioning: Divides input test data into valid and invalid equivalence classes and
selects one test case from each. This reduces the number of test cases needed. For an input between 1
and 100, examples are 50 (valid), 0 (invalid <1), and 101 (invalid >100).
Boundary Value Analysis: Tests values at the edges and just outside the edges of input ranges. It is
often performed after Equivalence Partitioning. For an input between 1 and 100, examples include 0, 1,
2, 99, 100, 101 (both techniques are illustrated in the sketch after this list).
Decision Table (Cause-Effect Table): A systematic way to deal with combinations of inputs,
focusing on business logic or rules. It provides complete coverage of test cases and guarantees
considering every possible combination of condition values (completeness property). With n binary
conditions, the number of combinations is 2^n.
Cause-Effect Graph (Ishikawa or Fishbone Diagram): Identifies possible root causes (distinct
input conditions, "Causes") for a specific effect or problem ("Effect"), relating interactions among
factors affecting a process. It helps determine root causes using a structured approach and indicates
causes of variation.
Error Guessing: Uses a tester's skill, intuition, and experience to identify defects not easily captured
by formal techniques. It's often done after formal techniques and by experts. It's less reproducible,
and coverage is limited by experience. Typical conditions include division by zero, blank input,
empty files, or wrong data types.
Random Testing (Monkey Testing): A functional black-box technique used when time is limited.
Random inputs are identified, selected independently, executed, and results recorded/compared. It's
used when defects are not identified at regular intervals and can save time and effort.
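
For the 1 to 100 input used in the Equivalence Partitioning and Boundary Value Analysis bullets above, the sketch below derives both sets of test values; the is_valid function is a hypothetical stand-in for the application under test.

```python
# Equivalence partitioning and boundary value analysis for an input valid from 1 to 100.
LOW, HIGH = 1, 100

def is_valid(value: int) -> bool:          # hypothetical stand-in for the AUT
    return LOW <= value <= HIGH

# One representative per equivalence class: below range, in range, above range.
equivalence_cases = {LOW - 1: False, 50: True, HIGH + 1: False}

# Boundary value analysis: each edge plus the values just inside and just outside it.
boundary_cases = {0: False, 1: True, 2: True, 99: True, 100: True, 101: False}

for value, expected in {**equivalence_cases, **boundary_cases}.items():
    assert is_valid(value) == expected, f"unexpected result for {value}"
print("All equivalence and boundary cases behaved as expected.")
```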

9. Types of Testing
The sources outline various types of testing, broadly categorised as:

A. Functional Testing: Ensures software functions as per requirements, focusing on features, user
interactions, and business requirements.

Unit Testing: Tests individual components or units of code in isolation to verify correctness, identify
bugs early, and ensure each unit functions as expected. Benefits include early bug detection,
improved code quality, faster debugging, living documentation, and refactoring confidence (a minimal
example follows this list).
Integration Testing: Combines individual modules and tests them as a group to verify interactions,
ensure combined components work together, and detect issues in data flow/communication.
System Testing: Tests a complete and integrated system to verify it meets specified functional and
non-functional requirements and behaves as expected in a real-world environment.
Acceptance Testing: The final phase, ensuring the system meets user and business requirements and
is ready for production.
User Acceptance Testing (UAT): Performed by end-users to validate the system against their
requirements, focusing on user-friendliness and readiness for production.
Alpha Testing: Conducted by the internal development or QA team in a controlled environment
to identify critical bugs and validate functionality/stability. Benefits include early bug detection,
controlled environment, and internal feedback.
Beta Testing: Conducted by real users in a real-world (production) environment to gather
feedback and identify issues not caught during alpha testing. Benefits include real-world
feedback, compatibility testing, and user validation.
Regression Testing: Performed after code changes (fixes, enhancements, new features) to ensure
they haven't adversely affected existing functionality. It verifies new changes don't introduce new
bugs, maintains quality, saves costs, and builds confidence. Performed after bug fixes, new features,
configuration changes, or performance optimisations.
Smoke Testing: A preliminary test after a new build deployment to check if basic functionalities are
working correctly. It verifies critical features, determines if the build is stable enough for further
testing, enables early detection, saves time, builds confidence, and provides quick feedback.
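
As a minimal illustration of the unit testing described at the top of this list, the example below tests a small hypothetical function in isolation using Python's built-in unittest module; the discount rule is invented for the example.

```python
# Minimal unit test for a hypothetical discount calculation, using unittest.
import unittest

def apply_discount(price: float, percent: float) -> float:
    """Return the price after applying a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTest(unittest.TestCase):
    def test_typical_discount(self):
        self.assertEqual(apply_discount(200.0, 10), 180.0)

    def test_invalid_percentage_is_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(200.0, 150)

if __name__ == "__main__":
    unittest.main()
```

Running the file executes both tests and fails loudly if either expectation is broken, which is what gives the early-detection and refactoring-confidence benefits listed above.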

B. Black Box Testing: The internal structure, design, or implementation is not known to the tester.
Focuses on "what the system does" and meeting user requirements.

C. White Box Testing: The internal structure, design, and implementation are known to the tester.
Focuses on "how the system works," validating internal logic, code structure, and adherence to design
specifications.

D. Manual Testing: Manually executing test cases without automation tools. Focuses on functionality,
usability, and performance through human observation and interaction. Important for exploratory testing,
user experience (UX) testing, ad-hoc testing, and early-stage testing when the application is unstable.

E. Automated Testing: Uses specialised tools and scripts to execute test cases, compare results, and
generate reports.

Purpose: Increases efficiency, accuracy, reduces manual effort, and enables continuous testing in
CI/CD pipelines.
When to Use: For repetitive test cases (like regression), large-scale execution (load/performance),
cross-browser/platform testing, and high-risk areas.
Types of Testing that can be Automated: Functional, Regression, Performance, Load, Security,
Unit, and Integration Testing.
How to Perform: Identify suitable test cases, select a tool (e.g., Selenium, Appium, TestNG, JUnit,
Cypress), develop scripts, execute, analyse results, and maintain scripts.
Test Automation Framework: A structured set of guidelines and tools that simplify automated
testing, defining how scripts are written, executed, and maintained.
Types: Data-Driven, Keyword-Driven, Hybrid, Behaviour-Driven Development (BDD); a data-driven sketch follows this list.
Use of Frameworks: Improves reusability, enhances maintainability/scalability, reduces tester
dependency, provides structured reporting, and supports CI/CD integration.
Challenges: High initial cost, maintenance overhead, tool limitations, and skill requirements.
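
The data-driven style listed under framework types can be sketched with pytest's parametrize decorator. Here login_allowed is a hypothetical stand-in for the system under test, and the credential rows stand in for an external data source (CSV, spreadsheet, or database).

```python
# Data-driven test sketch with pytest: one test function, many data rows.
import pytest

def login_allowed(username: str, password: str) -> bool:
    """Hypothetical stand-in for the system under test."""
    return username == "alice" and password == "s3cret"

# In a real data-driven framework these rows would come from an external data source.
CREDENTIALS = [
    ("alice", "s3cret", True),
    ("alice", "wrong",  False),
    ("",      "s3cret", False),
]

@pytest.mark.parametrize("username,password,expected", CREDENTIALS)
def test_login(username, password, expected):
    assert login_allowed(username, password) is expected
```

Run with pytest; each data row is reported as its own test case, which is what makes the approach easy to extend and maintain.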

F. Non-functional Testing: Evaluates how well the system performs, rather than what it does, assessing
aspects like performance, usability, reliability, and security.

Performance Testing: Evaluates system behaviour under various load conditions to identify
bottlenecks.
Load Testing: Assesses behaviour under expected load conditions (e.g., simulating 1,000 concurrent
users); a small concurrency sketch follows this list.
Stress Testing: Determines robustness under extreme conditions, pushing the system beyond
normal capacity to find breaking points.
Endurance Testing: Evaluates performance over extended periods to identify issues like
memory leaks or degradation.
Spike Testing: Assesses reaction to sudden, extreme load changes.
Volume Testing: Evaluates performance with large data volumes.
Scalability Testing: Determines how well the system scales up or down.
Capacity Testing: Determines the maximum capacity of the system.
Configuration Testing: Evaluates performance under different configurations.
Failover Testing: Ensures graceful handling of failover scenarios.
Latency Testing: Measures system response time to a request.
Stress Recovery Testing: Evaluates system recovery after a stress test.
Concurrency Testing: Assesses handling of multiple simultaneous users/transactions.
Usability Testing: Evaluates a product by testing it on users, focusing on user-centered interaction
design. Types include Exploratory, Comparative, and Heuristic Evaluation.
Security Testing: Identifies vulnerabilities, threats, and risks to protect data and resources.
Vulnerability Scanning: Automated scanning against known vulnerability signatures.
Penetration Testing: Simulates hacker attacks to find exploitable vulnerabilities.
Configuration Testing: Checks system security settings.
Database Security Testing: Ensures database integrity and confidentiality.
Network Security Testing: Tests network infrastructure security.
Reliability Testing: Ensures consistent performance of intended functions without failure over a
specified period.
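
As a rough sketch of the load testing idea above, the snippet below fires a configurable number of concurrent simulated requests and summarises the observed latencies. send_request is a hypothetical placeholder for a real HTTP call, and the user count is an assumption rather than a recommended load profile.

```python
# Hypothetical load-test sketch: N concurrent simulated users against a fake endpoint.
import random
import time
from concurrent.futures import ThreadPoolExecutor

def send_request(user_id: int) -> float:
    """Placeholder for a real HTTP call; returns a simulated response time in seconds."""
    latency = random.uniform(0.05, 0.30)
    time.sleep(latency)
    return latency

USERS = 100                                   # assumed expected load
with ThreadPoolExecutor(max_workers=USERS) as pool:
    latencies = list(pool.map(send_request, range(USERS)))

print(f"max latency: {max(latencies):.2f}s, "
      f"avg latency: {sum(latencies) / len(latencies):.2f}s")
```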

10. Defect Life Cycle and Reporting


Software Defect Management is a systematic approach in SQA for identifying, tracking, analysing, and
resolving defects. A software defect (or bug) is an error, flaw, or failure causing incorrect or unexpected
results.

Types of Defects:

Functional Defects (incorrect outputs).


Performance Defects (slow response, high resource consumption).
Security Defects (vulnerabilities).
Usability Defects (poor user experience).
Compatibility Defects (issues across environments/devices).

Defect Life Cycle (Bug Life Cycle): A specific set of states a defect goes through, ensuring systematic
and efficient defect fixing and communication.

New: First logged and posted.


Assigned: Approved by test lead and assigned to a developer.
Open: Developer begins analysing and working on the fix.
Fixed: Developer has made the code change and verified it.
Pending Retest: Defect fixed by developer, awaiting retesting by tester.
Retest: Tester retests to confirm the fix.
Other possible statuses include: Verified, Closed, Reopen, Duplicate, Deferred, Rejected, Cannot be
Fixed, Not Reproducible, Need more information.
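
A minimal sketch of the life cycle as a state machine; the allowed transitions below are a simplified assumption built from the statuses listed above, not a prescribed workflow.

```python
# Simplified defect life cycle as allowed state transitions (assumed workflow).
ALLOWED_TRANSITIONS = {
    "New":            {"Assigned", "Rejected", "Duplicate", "Deferred"},
    "Assigned":       {"Open"},
    "Open":           {"Fixed", "Not Reproducible", "Cannot be Fixed"},
    "Fixed":          {"Pending Retest"},
    "Pending Retest": {"Retest"},
    "Retest":         {"Verified", "Reopen"},
    "Verified":       {"Closed"},
    "Reopen":         {"Assigned"},
}

def move(current_status: str, new_status: str) -> str:
    if new_status not in ALLOWED_TRANSITIONS.get(current_status, set()):
        raise ValueError(f"Cannot move a defect from {current_status} to {new_status}")
    return new_status

status = "New"
for step in ("Assigned", "Open", "Fixed", "Pending Retest", "Retest", "Verified", "Closed"):
    status = move(status, step)
print("Final status:", status)   # Closed
```
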
Compulsory Attributes of Software Defects (as seen in a defect report):

ID, Title, Description, Steps to recreate, Test Data, Test Environment.


Expected Results, Actual Results, Screenshots/Logs.
Severity, Priority, Status.
Additional Notes, Reported By, Date, Assigned To.
Reporter Name, Defect Reported Date, Who Detected (designation), How Detected (Testing,
Review, Walkthrough).
Project Name, Release/Build Version, Defect/Enhancement type.
URL, Defect Close Date.

Checklist before reporting a bug:

Reproduced the bug multiple times.


Verified if already reported in the defect tracking tool.
Verified similar issues in related modules.
Detailed steps to reproduce.
Proper defect summary.
Attached relevant screenshots.
All necessary fields in the report are filled.

Defect Tracking Tools: Examples include JIRA, Bugzilla, Redmine, MantisBT, Trello.

Defect Prevention Strategies:

Code Reviews & Pair Programming.


Automated Testing.
Continuous Integration & Continuous Deployment (CI/CD).
Proper Requirements Analysis.
Static Code Analysis.
Early Defect Detection via Unit Testing.

Defect Management in Agile & DevOps: Agile focuses on early and frequent testing, while DevOps
integrates continuous monitoring and feedback loops to minimise defects.

Defect Metrics in SQA:

Defect Density: Number of defects per unit of code size (commonly per thousand lines of code, KLOC).


Defect Leakage: Defects found after release.
Defect Removal Efficiency (DRE): Percentage of defects removed before release.
Mean Time to Detect (MTTD) & Mean Time to Repair (MTTR): Time taken to identify and fix
defects.
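
These metrics can be computed directly from raw counts, as in the sketch below; the figures are hypothetical.

```python
# Hypothetical defect metric calculations.
defects_pre_release = 95        # defects found and removed before release
defects_post_release = 5        # defect leakage: found by users after release
lines_of_code = 40_000

total_defects = defects_pre_release + defects_post_release
defect_density = total_defects / (lines_of_code / 1000)        # defects per KLOC
dre = defects_pre_release / total_defects * 100                # Defect Removal Efficiency

print(f"Defect density: {defect_density:.2f} defects/KLOC")    # 2.50
print(f"DRE: {dre:.1f}%  Leakage: {defects_post_release} defects")  # 95.0%
```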

Best Practices for Defect Management:

Early detection: Find and fix defects as early as possible in the SDLC.
Clear documentation: Provide complete information for reproducibility.
Prioritisation: Focus on critical defects first (based on impact/severity).
Metrics tracking: Measure defect density, resolution time, etc.
Continuous improvement: Analyse defect trends to prevent recurrence.

11. Test Status Reporting


Test Status Reporting is a formal way of communicating the project's status from a testing perspective,
containing quantitative information. It's critical for project management.

Importance of Test Status Reporting:

Prevents last-minute surprises to project sponsors and stakeholders.


Aids in project decision-making (e.g., GO/NO-GO decisions for release) by providing timely and
accurate data.
Provides Transparency to Stakeholders: Ensures everyone knows work completed, current issues,
and risks.
Identifies Risks Early: Flags issues like high-priority unresolved defects, untested critical areas, or
environmental instabilities, allowing proactive mitigation strategies.

Key Stakeholders (Target Audience) for Test Status Reports:

Project Managers: Monitor progress, control risks.


Test Teams: Adjust planning based on issues.
Customers/Product Owners: Ensure quality expectations are met.
Senior Management: Make informed business decisions (e.g., product launch).
Compliance/Regulators: Validate required testing.

Example (Netflix Profiles Feature): The Netflix QA teams sent weekly Test Status Reports to executive
leadership. These reports included:

Test Progress: Percentage of tests completed vs. planned.


Critical Defects: Severe issues found (e.g., profile switching errors).
Risk Areas: Fragile systems (e.g., syncing across devices).
Milestone Updates: Whether app versions passed acceptance tests.
Environment Downtime: Server issues affecting testing.

These reports allowed executives to see testing status in real time, know about risks early, base launch
decisions on solid QA data, and prioritise immediate fixes.

Status Reporting Guidelines: A good report effectively communicates relevant information tailored to
the audience. A typical report includes:

Project Name, Duration, Report by (Author), Report to (Target Audience).


Planned Activities/Tasks (with owners).
Activities/Tasks Accomplished in the Reporting Period.
Project Milestones Reached (with status).
Activities/Tasks Not Accomplished/Missed.
Planned Activities for the Next Reporting Period.
Test Project Execution Details (# Pass, # Fail, # Blocked, # Not Executed).
Test Summary (Test Coverage details).
Defect Status (details of defects by severity).
Status of Defect Re-testing.
Issues (problems affecting planned activities, listed by criticality, e.g., environment problems,
blocking factors).
Unresolved Issues (from previous periods).
Risks (affecting testing schedule).
Environment Downtime Tracking (hours lost).

Extra Content

Software Quality Assurance (SQA) Study Guide


Quiz: Short-Answer Questions
Answer each question in 2-3 sentences.

1. Explain the relationship between an "error," a "fault," and a "failure" in software quality assurance.
2. What is the primary purpose of conducting software testing, according to the provided materials?
3. Why is "exhaustive testing" generally considered impractical or impossible in software quality
assurance?
4. Briefly describe the "testing paradox."
5. What are the three main tasks involved in "test specification" within the fundamental test process?
6. List three key benefits of performing Unit Testing.
7. What is the main difference between Alpha Testing and Beta Testing?
8. Define "Regression Testing" and explain its purpose.
9. What is a "Defect Life Cycle" and why is it important in software testing?
10. Name three essential attributes that should be included when reporting a software defect.

Quiz Answer Key


1. An "error" is a human action that leads to an incorrect result. This incorrect result manifests as a
"fault" (or defect/bug) within the software. If this fault is executed, it can then cause a "failure,"
which is a deviation of the software from its expected delivery or service.
2. The primary purposes of testing are to build confidence in the test object's quality, find defects and
failures to reduce risk, provide information for informed decisions, and ensure compliance with
requirements or standards. Ultimately, testing measures software quality and improves it by enabling
fault removal.
3. Exhaustive testing means exercising all combinations of inputs and preconditions. This is considered
impractical because it would take an infinite or impractically large amount of time, making it
unfeasible for real-world software development.
4. The testing paradox states that while the purpose of testing is to build confidence in software quality,
finding faults (which is also a purpose of testing) can simultaneously destroy confidence. The best
way to build confidence is often to actively try to find ways to destroy it by exposing faults.
5. The three main tasks in test specification are: identifying test conditions (determining 'what' to test
and prioritising), designing test cases (determining 'how' to test by creating inputs and expected
results), and building tests (implementing test scripts and preparing test data).
6. Unit Testing offers early bug detection by isolating issues to small code pieces, improves code
quality by encouraging modularity, and enables faster debugging by pinpointing failures. It also
provides living documentation and builds confidence for refactoring.
7. Alpha Testing is conducted by internal development or QA teams in a controlled environment to
identify critical bugs and validate functionality. Beta Testing, conversely, is performed by real users
in a real-world environment to gather feedback on user experience and identify issues missed
internally.
8. Regression testing is a type of software testing performed to ensure that new code changes, such as
bug fixes or enhancements, do not negatively impact existing functionality. Its purpose is to verify
that previously working features continue to function as expected after modifications.
9. A Defect Life Cycle is the specific sequence of states a defect or bug goes through from its discovery
to its resolution. It's important because it systematises the defect fixing process, allowing for easy
coordination and communication of the defect's current status among various team members.
10. Three essential attributes for reporting a software defect include: a unique ID and Title, a clear
Description, and detailed Steps to recreate the issue. Other crucial attributes are Test Data, Test
Environment, Actual/Expected Results, Severity, and Priority.

Essay Format Questions


1. Discuss the importance of Test Status Reporting within a software development project. Elaborate on
how it provides transparency, supports timely decisions, and identifies risks early, using examples
where appropriate.
2. Analyse the "psychology of testing" as described in the provided materials. Explain why testers need
a different mindset compared to developers, and describe the responsibilities and rights of a tester.
3. Compare and contrast "Functional Testing" and "Non-Functional Testing." Provide at least two
distinct types of testing under each category and explain their respective purposes.
4. Explain the concept of "Test Estimation" and its significance in software quality assurance. Detail
the steps involved in the estimation process and discuss factors that can influence its accuracy.
5. Describe the "Fundamental Test Process" as outlined in the sources. For each stage (Planning and
Control, Test Analysis and Design, Test Implementation and Execution, Evaluating Exit Criteria and
Reporting, and Test Closure Activities), provide a brief explanation of its key activities.

Glossary of Key Terms


Acceptance Testing: The final phase of software testing where the system is tested to ensure it
meets the user and business requirements, validating its readiness for production.
Alpha Testing: Testing conducted by the internal development or QA team in a controlled
environment to identify critical bugs and validate software functionality and stability.
Application Under Test (AUT): The specific software application that is being tested.
Assigned (Defect Status): A defect status indicating that a bug, once posted and approved by the
test lead, has been assigned to the developer team for fixing.
Automation Testing: The process of automating test cases to execute tests using scripts and tools,
often for functional, regression, performance, load, and security testing.
Beta Testing: Testing conducted by real users in a real-world environment to gather feedback and
identify issues not caught during internal testing, focusing on user experience and compatibility.
Black-box Testing: A software testing method where the internal structure, design, or
implementation of the system is not known to the tester, focusing solely on validating functionality
based on requirements.
Boundary Value Analysis (BVA): A black-box testing technique that involves testing values at the
extreme ends or boundaries of input ranges and just outside these boundaries to identify defects.
Bug: Also known as a defect or fault; a manifestation of an error in software that, if executed, may
cause a failure.
Bug Triage: A process to define the type of resolution for each bug, prioritise them, and schedule
"To Be Fixed Bugs."
Confidence (in Testing): A belief in the level of quality of the test object, often built by proving the
software is correct or by finding and removing faults.
Defect: See "Bug."
Defect Close Date: The date recorded once a defect has been verified as resolved and is no longer
reproducible.
Defect Density: A metric representing the number of defects found per specific unit of code, such as
lines of code.
Defect Life Cycle: The specific set of states a defect or bug goes through from its initial logging to
its resolution, ensuring systematic management and communication.
Defect Management: A systematic approach to identifying, tracking, analysing, and resolving
defects in software products to ensure quality standards are met.
Defect Leakage: A metric that measures the number of defects found after the software has been
released to the users or customers.
Defect Removal Efficiency (DRE): A metric calculating the percentage of defects removed from
the software before its release.
Defect Status: The current state of a defect or bug within its life cycle (e.g., New, Assigned, Open,
Fixed, Pending Retest, Retest).
Decision Table: A test case design technique that systematically deals with combinations of inputs,
focusing on business logic or rules, and ensuring complete test case coverage.
Elapsed Time (Estimation): The total calendar time that passes during a project or task, from start
to finish.
Endurance Testing: A type of performance testing that evaluates the system's performance over an
extended period under a consistent load to identify issues like memory leaks or performance
degradation.
Equivalence Partitioning (EP): A black-box testing technique that divides the input test data into
valid and invalid equivalence classes, from which a single test case is selected from each class.
Error: A human action that produces an incorrect result; the root cause of a fault in software.
Estimation: A forecast or prediction of the time, cost, or effort required to complete a task or
project, typically based on past experience, available documents, assumptions, and calculated risks.
Exhaustive Testing: A testing approach that attempts to exercise all combinations of inputs and
preconditions, generally considered impractical due to the infinite or impractically large time
required.
Expected Result: The predicted outcome of a test case, defined before the test is executed, against
which the actual outcome is compared.
Failure: A deviation of the software from its expected delivery or service; the manifestation of a
fault during operation.
Fault: A manifestation of an error in software, also known as a defect or bug. If executed, a fault
may cause a failure.
Fixed (Defect Status): A defect status indicating that a developer has made the necessary code
changes and verified the fix.
Functional Defects: Issues related to the specific functionalities of the software, such as incorrect
outputs or features not working as required.
Functional Testing: A type of software testing that verifies specific functional aspects of the
software to ensure it works according to requirements, focusing on features, user interactions, and
business needs.
Fundamental Test Process: A structured approach to testing typically involving planning and
control, test analysis and design, test implementation and execution, evaluating exit criteria and
reporting, and test closure activities.
Integration Testing: A type of software testing where individual modules or components are
combined and tested as a group to verify their interactions and ensure they work together as
expected.
Load Testing: A type of performance testing to evaluate how the system behaves under expected
load conditions, simulating a typical number of users or transactions.
Manual Testing: The process of executing test cases manually without the use of automation tools
or scripts, focusing on usability, functionality, and performance.
Mean Time to Detect (MTTD): A defect metric measuring the average time taken to identify a
defect.
Mean Time to Repair (MTTR): A defect metric measuring the average time taken to fix a defect
after it has been detected.
New (Defect Status): The initial status assigned to a defect when it is first logged and reported.
Non-Functional Testing: A type of software testing that focuses on non-functional attributes of the
system, such as performance, usability, security, and reliability, rather than specific functionalities.
Open (Defect Status): A defect status indicating that the developer has started analysing and
working on fixing the reported defect.
Pending Retest (Defect Status): A defect status indicating that the developer has fixed the defect
and provided the code for retesting, and the retesting by the tester is still pending.
Performance Testing: A type of non-functional testing that assesses the system's responsiveness,
stability, and scalability under various loads and conditions.
Prioritisation (of Tests/Defects): The process of determining which tests to execute first or which
defects to address first, typically based on risk, impact, and criticality.
Random Testing: A functional black-box testing technique where test cases are generated randomly,
often used when there is limited time for structured test design.
Regression Testing: A type of software testing performed to ensure that new code changes (e.g., bug
fixes, enhancements) have not adversely affected existing, previously working functionality.
Reliability: The probability that software will not cause the failure of the system for a specified time
under specified conditions.
Reliability Testing: A type of non-functional testing that focuses on ensuring a software application
performs its intended functions consistently and without failure over time.
Retest (Defect Status): A defect status indicating that the tester is re-executing the fixed code to
verify that the defect has indeed been resolved by the developer.
Risk (in Testing): The potential for a negative outcome, such as missing important faults, incurring
failure costs, or releasing untested software. It influences how much and what aspects of the software
are tested.
Scope (Test Plan): Defines the features, functional, or non-functional requirements of the software
that will be tested ("In Scope") and those that will not ("Out of Scope").
Security Testing: A type of non-functional testing focused on identifying vulnerabilities, threats,
and risks in a system to protect data and resources from potential intruders.
Severity (Defect): An attribute indicating the impact of a defect on the system's functionality or
performance.
Smoke Testing: A preliminary type of functional testing performed to determine if the deployed
software build is stable enough for more detailed testing, verifying critical features.
Software Quality Assurance (SQA): A systematic approach to ensuring that software products
meet defined quality standards and function as intended throughout the development lifecycle.
Spike Testing: A type of performance testing that assesses the system's reaction to sudden and
extreme changes in load.
Stress Testing: A type of performance testing that pushes the system beyond its normal operational
capacity to determine its robustness under extreme conditions and identify breaking points.
Suspension Criteria: Conditions that, if met, would cause all or part of the testing procedure to be
halted.
System Testing: A level of software testing where a complete and integrated system is tested to
verify that it meets specified functional and non-functional requirements, ensuring it behaves as
expected as a whole.
Test Case: A set of test inputs, execution conditions, and expected results, developed for a particular
objective, used to verify a specific function or feature of the software.
Test Completion Criteria: Predefined conditions, often based on coverage, faults found, or
cost/time, that determine when testing activities at a particular level or for a particular project can
stop.
Test Deliverables: All the artefacts that will be produced during the different phases of the testing
lifecycle (e.g., Test Plan, Test Cases, Bug Reports).
Test Environment: The minimum hardware and software requirements and configurations used to
test the Application Under Test (AUT).
Test Execution: The process of running prescribed test cases, logging outcomes, and recording
discrepancies.
Test Plan: A formal document outlining the scope, objectives, methodology, resources, and schedule
for testing a software project.
Test Planning: The activity of defining the test strategy, scope, objectives, resources, and schedule,
and setting exit criteria for testing activities.
Test Recording: The process of comparing actual test outcomes with expected outcomes and
logging any discrepancies as faults, environment issues, or test faults.
Test Status Reporting: The component that informs key project stakeholders of the critical aspects
of the project's testing status, preventing surprises and aiding decision-making.
Test Strategy: A high-level document defining the overall approach to testing for a company or
project, often outlining principles, policies, and test levels.
Test Suite: A logical collection of common test cases designed to be executed together to ensure a
feature or transaction is thoroughly tested end-to-end.
Unit Testing: A type of software testing where individual components or "units" of code are tested
in isolation to verify their correctness and identify bugs early.
Usability Defects: Issues related to the user experience, such as poor navigation or difficulty in
using the software.
Usability Testing: A type of non-functional testing that evaluates a product by testing it on users to
assess its ease of use and user satisfaction.
User Acceptance Testing (UAT): A type of acceptance testing performed by end-users to validate
the system against their requirements and confirm its readiness for production.
Vulnerability Scanning: An automated security testing technique that scans a system for known
security flaws or vulnerabilities.
White-box Testing: A software testing method where the internal structure, design, and
implementation of the system are known to the tester, allowing validation of internal logic and code
structure.

Software Quality Assurance (SQA) Briefing Document

1. Introduction to SQA and Testing Fundamentals


Software Quality Assurance (SQA) is a systematic approach to ensuring software products meet quality
standards and function as intended. A core component of SQA is testing, which serves multiple
objectives.
Typical Objectives of Testing:

Building Confidence: "To build confidence in the level of quality of the test object." This involves
proving the software is correct and conforms to requirements.
Defect Detection and Risk Reduction: "To find defects and failures thus reduce the level of risk of
inadequate software quality." This is a primary goal, as faults can lead to significant costs and even
loss of life.
Informed Decision-Making: "To provide sufficient information to stakeholders to allow them to
make informed decisions, especially regarding the level of quality of the test object."
Compliance: "To comply with contractual, legal, or regulatory requirements or standards, and/or to
verify the test object’s compliance with such requirements or standards."

Understanding Bugs: The terminology for software imperfections follows a clear progression:

Error: "a human action that produces an incorrect result."


Fault (Defect/Bug): "a manifestation of an error in software – if executed, a fault may cause a
failure."
Failure: "deviation of the software from its expected delivery or service – (found defect)."
This sequence is summarised as: "A person makes an error ... that creates a fault in the software ...
that can cause a failure in operation."

Why Faults Occur and Their Cost: Faults are inherent because software is written by "human beings –
who know something, but not everything – who have skills, but aren’t perfect – who do make mistakes
(errors)." Pressure to meet deadlines also contributes. The cost of software faults can range from "very
little or nothing at all – minor inconvenience" to "huge sums – Ariane 5 ($7 billion) – Mariner space probe
to Venus ($250m) – American Airlines ($50m)." In safety-critical systems, faults can tragically "cause
death or injury – radiation treatment kills patients (Therac-25)."

The Necessity of Testing: Testing is crucial because software is likely to have faults, failures can be very
expensive, and it helps learn about software reliability. However, "exhaustive testing" – exercising "all
combinations of inputs and preconditions" – is impractical and takes an "infinite time" or "impractical
amount of time."

How Much Testing is Enough? The amount of testing required "depends on the risks for your system."
These risks include:

"risk of missing important faults"


"risk of incurring failure costs"
"risk of releasing untested or under-tested software"
"risk of losing credibility and market share"
"risk of missing a market window"
"risk of over-testing, ineffective testing" The "most important principle" is to "Prioritise tests so that,
whenever you stop testing, you have done the best testing in the time available."

2. Psychology and Process of Testing


Psychology of Testing: Traditionally, testing aimed to "show that the system: – does what it should –
doesn't do what it shouldn't." However, a "better testing approach" is to "Show that the system: – does
what it shouldn't – doesn't do what it should," with the "Goal: find faults." This highlights the "testing
paradox": "Purpose of testing: to find faults" but also "Purpose of testing: build confidence." Finding
faults can seem to destroy confidence, but it is ultimately "The best way to build confidence."

The Tester's Role and Independence: Testers often "Bring bad news ('your baby is ugly')" and are
"Under worst time pressure (at the end)." They require a "different mindset ('What if it isn’t?', 'What
could go wrong?')." To ensure objectivity and effectiveness, testers should ideally be independent from
the developers. Testing one's own work limits fault detection to "30% - 50% of your own faults" due to
"same assumptions and thought processes" and "emotional attachment." Levels of independence range
from "None: tests designed by the person who wrote the software" to tests designed by a "different
person," a "different department or team (e.g. test team)," a "different organisation (e.g. agency)," or even
"Tests generated by a tool."

Fundamental Test Process: The fundamental test process involves five key stages:

1. Planning and Control: This defines the "test strategy and policies," determines "scope, risks and
test objectives," and allocates "test resources (environment, people)." Control involves "measure and
analyze the results," "monitor and document progress," and "initiate corrective actions." Crucially,
this stage sets "exit criteria which could be test coverage (%), number of test executed."
2. Test Analysis and Design: This involves defining "what's required (feasibility)" and includes
designing the test environment.
3. Test Implementation and Execution: This involves developing and prioritising "test cases by
describing step by step instructions," creating "test suites," and verifying the test environment.
During execution, test cases are run, and outcomes are logged, including "software fault – test fault
(e.g. expected results wrong) – environment or version fault – test run incorrectly."
4. Evaluating Exit Criteria and Reporting: This assesses if "test activities have been carried out as
specified" and if "initial exit criteria has to be reset and agreed again with stakeholders." A
"summary report" is written.
5. Test Closure Activities: These include "check which planned acceptance or rejection deliverable has
been delivered," resolving or deferring defects, and "finalize and archive test ware for an eventual
reuse."

3. Test Planning and Estimation


Test Plan Template (IEEE 829): A comprehensive test plan ensures structured and effective testing. Key
sections include:

Introduction: Brief overview of strategies, processes, and methodologies.


Scope: Defines "features, functional or non-functional requirements...that will be tested (In Scope)"
and those "that will NOT be tested (Out of Scope)."
Quality Objective: States overall objectives, such as "Ensure the Application Under Test conforms
to functional and non-functional requirements."
Roles and Responsibilities: Details roles for team members like QA Analysts, Test Managers, and
Developers.
Test Methodology: Outlines the chosen approach (e.g., Waterfall, Agile) and "Test Levels" (types of
testing).
Bug Triage: Defines the process for prioritising and scheduling bug fixes.
Suspension Criteria and Resumption Requirements: Criteria for pausing and restarting testing.
Test Completeness: Defines criteria for completion, such as "100% test coverage" or "All open bugs
are fixed or will be fixed in next release."
Test Deliverables: Lists artifacts to be delivered (e.g., "Test Plan," "Bug Reports," "Customer Sign
Off").
Resource & Environment Needs: Specifies "Testing Tools" and "Test Environment"
hardware/software.
Terms/Acronyms: Glossary of project-specific terms.

Test Estimation Techniques: Estimation is a "forecast or prediction" and an approximation of what a task
would cost and how long it would take to complete; it is a blend of both art and science.
Estimates are based on "Past Experience/ Past Data," "Available Documents/ Knowledge,"
"Assumptions," and "Calculated Risks." Proper estimation is vital to avoid "Overshooting Budgets" and
"Exceeding Timescales," which are common causes of project runaways (e.g., "Bad Planning and
Estimating (48%)"). The estimation process involves:

1. Identify Scope
2. State Assumptions
3. Assess the tasks involved
4. Estimate the effort for each task
5. Calculate Total Effort
6. Work out elapsed time / critical path
7. Is the total reasonable? (Reassess if not)
8. Finish and Present the Estimate

A common method is the Function Point Method, which measures the "size of computer
applications...from a functional, or user, point of view," independent of programming language or
methodology. Function points are defined and assigned weightage (Simple, Medium, Complex) to
estimate effort per point.

4. Test Case Design Techniques


A test case is defined as "A set of test inputs, execution conditions and expected results, developed for a
particular objective." Good test cases are:

Effective: "Finds faults," "Have high probability of finding a new defect."


Exemplary: "Represents others."
Evolvable: "Easy to maintain."
Economic: "Cheap to use."
Also, they should be "Unambiguous tangible result," "Repeatable and predictable," "Traceable to
requirements," and "Feasible." Test cases typically include elements like "Test Case ID," "Test Title,"
"Test Steps," "Test Data," and "Expected result."
Black-Box Testing (Functional Testing) Techniques: These techniques focus on the software's
functionality without knowledge of its internal structure.

Equivalence Partitioning: Divides the input test data into partitions of equivalent data and tests at
least one value from each partition. This reduces test cases. For an input between 1 and 100, a valid
class representative (e.g., 50) and invalid class representatives (e.g., 0, 101) are identified.
Boundary Value Analysis: "tests are performed using the boundary values." This is done after
equivalence partitioning and tests "the values at the edges and just outside the edges." For input 1-
100, test values include 0, 1, 2, 99, 100, 101.
Decision Table: A "good way to deal with combinations of Inputs" and "Focused on business logic
or business rules." It provides a systematic way to state complex rules. The number of combinations
is 2^n, where n is the number of inputs.
Cause-Effect Graphing: Also known as Ishikawa or fishbone diagram, used to "Identify the
possible root causes, the reasons for a specific effect, problem, or outcome."
Random Testing (Monkey Testing): "performed when there is not enough time to write and execute
the tests." It is less reproducible and coverage is limited by experience.

5. Types of Testing
Software testing can be broadly categorised into Functional and Non-Functional testing.

Functional Testing: "Testing the functional aspects of the software to ensure it works as per the
requirements." Focuses on features, user interactions, and business requirements.

Unit Testing: "individual components or units of code are tested in isolation." Purpose: "Verify the
correctness of small pieces of code," "Identify bugs early," and "Ensure each unit functions as
expected."
Integration Testing: "individual modules or components are combined and tested as a group."
Purpose: "Verify the interaction between integrated units."
System Testing: "a complete and integrated system is tested to verify that it meets specified
requirements." Purpose: "Validate the system’s compliance with functional and non-functional
requirements."
Acceptance Testing: "the final phase of software testing where the system is tested to ensure it
meets the user and business requirements." Purpose: "Validate that the software is ready for
production."
User Acceptance Testing (UAT): "Testing performed by end-users to validate the system against
their requirements."
Alpha Testing: "conducted by the internal development team or QA team in a controlled
environment." Focuses on critical bugs and stability.
Beta Testing: "conducted by real users in a real-world environment." Gathers feedback on user
experience and compatibility.
Regression Testing: "performed to ensure that recent code changes (e.g., bug fixes, enhancements,
or new features) have not adversely affected existing functionality."
Smoke Testing: A preliminary test to "Determine if the build is stable enough for further testing."
"Identifies major issues early."
Non-Functional Testing: "Validate how well the system performs under various conditions."

Performance Testing: Evaluates system speed and stability under different loads. Includes:
Load Testing: Under "expected load conditions." (e.g., simulating 1,000 users).
Stress Testing: "beyond its normal operational capacity to see how it handles high stress or failure
conditions." (e.g., increasing users until crash).
Endurance Testing: "over an extended period" for issues like "memory leaks."
Spike Testing: "reaction to sudden and extreme changes in load."
Volume Testing: With "a large volume of data."
Scalability Testing: How well the system can "scale up or down."
Capacity Testing: Determines "the maximum capacity of the system."
Configuration Testing: Under "different configurations."
Failover Testing: Ensures "system can handle failover scenarios gracefully."
Latency Testing: Measures "time it takes for a system to respond to a request."
Stress Recovery Testing: How well "the system recovers after a stress test."
Concurrency Testing: Ability to handle "multiple users or transactions simultaneously."
Usability Testing: "evaluate a product by testing it on users." Includes Exploratory, Comparative,
and Heuristic Evaluation.
Security Testing: "identifying vulnerabilities, threats, and risks." Includes Vulnerability Scanning
and Penetration Testing ("Simulates an attack from a malicious hacker").
Reliability Testing: "ensuring that a software application performs its intended functions
consistently and without failure over a specified period of time."
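The latency sketch referenced above: a minimal, illustrative way to time responses using only the Python standard library. The URL, sample count, and threshold are placeholders; a real performance or latency test would normally be run with a dedicated load-testing tool rather than a script like this.

```python
import statistics
import time
import urllib.request

URL = "https://example.com/health"   # placeholder endpoint
SAMPLES = 20                         # number of requests to time
MAX_ACCEPTABLE_MS = 500              # illustrative latency budget


def measure_latency_ms(url: str) -> float:
    """Time a single GET request and return the elapsed milliseconds."""
    start = time.perf_counter()
    with urllib.request.urlopen(url, timeout=10) as response:
        response.read()
    return (time.perf_counter() - start) * 1000


latencies = [measure_latency_ms(URL) for _ in range(SAMPLES)]
print(f"median latency: {statistics.median(latencies):.1f} ms")
print(f"worst sample:   {max(latencies):.1f} ms")
assert statistics.median(latencies) <= MAX_ACCEPTABLE_MS, "latency budget exceeded"
```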

Other Testing Categories:

Black-Box Testing: "internal structure, design, or implementation...is not known to the tester."
Focus: "What the system does."
White-Box Testing: "internal structure, design, and implementation...are known to the tester."
Focus: "How the system works."
Manual Testing: "manually executing test cases without using automation tools or scripts."
Automation Testing: Using tools to automate test execution. Automated testing is applicable to
various types of testing, including Functional, Regression, Performance, Load, Security, and Unit
Testing.
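One common way automation supports categories such as smoke and regression testing is by tagging tests so that a fast subset can run on every build while the fuller suite runs later. A minimal sketch using pytest markers is shown below; the marker names and the tests themselves are invented, and custom markers would normally be registered in pytest.ini to avoid warnings.

```python
import pytest


@pytest.mark.smoke
def test_login_page_loads():
    # Smoke check: the most critical path must work before deeper testing starts.
    assert True  # placeholder for a real check


@pytest.mark.regression
def test_discount_rules_unchanged():
    # Regression check: a previously working business rule still holds after changes.
    assert True  # placeholder for a real check

# Run only the smoke subset:   pytest -m smoke
# Run the regression suite:    pytest -m regression
```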

6. Defect Management and Reporting


Defect Life Cycle: "Software Defect Management in Software Quality Assurance (SQA) is a systematic
approach to identifying, tracking, analyzing, and resolving defects in software products." A defect (or
bug) is "an error, flaw, or failure in a software system that causes it to produce incorrect or unexpected
results." Types of Defects: Functional, Performance, Security, Usability, and Compatibility. The Defect
Life Cycle describes the "specific set of states that defect or bug goes through in its entire life,"
facilitating "coordinate and communicate current status of defect." Defect Statuses:

New: When first logged.
Assigned: Approved and assigned to a developer.
Open: Developer analyses and works on the fix.
Fixed: Developer makes code change and verifies.
Pending retest: Fixed code given to tester, retesting is pending.
Retest: Tester retests to verify the fix.

Compulsory Attributes of Software Defects (Bug Report):

ID: Unique identifier.
Title: Concise summary.
Description: Detailed explanation.
Steps to recreate: Clear instructions.
Test Data: Data used.
Test Environment: Environment details.
Expected Results: What should happen.
Actual Results: What actually happened.
Screenshots / Logs: Visual evidence.
Severity: Impact of the defect (e.g., High, Medium, Low).
Priority: Urgency of fixing (e.g., High, Medium, Low).
Status: Current state in the life cycle.
Additional Notes: Any extra context or observations.
Reported By: Tester's name.
Date: Date reported.
Assigned To: Developer/team responsible.

Before reporting a bug, testers should ensure: "Have I reproduced the bug 2-3 times," "Have I verified in
the Defect Tracking Tool...whether someone else already posted the same issue," and "Have I written the
detailed steps to reproduce the bug." Popular Defect Tracking Tools include JIRA, Bugzilla, and
Redmine.
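As an illustration of the compulsory attributes listed above, a defect could be represented as a simple structured record before being entered into a tracking tool. This is only a sketch of the fields; it is not the actual schema of JIRA, Bugzilla, or Redmine.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List


@dataclass
class BugReport:
    id: str                     # unique identifier, e.g. "BUG-1024"
    title: str                  # concise summary
    description: str            # detailed explanation
    steps_to_recreate: List[str]
    test_data: str
    test_environment: str
    expected_results: str
    actual_results: str
    severity: str               # impact, e.g. High / Medium / Low
    priority: str               # urgency, e.g. High / Medium / Low
    status: str = "New"         # current state in the defect life cycle
    reported_by: str = ""
    assigned_to: str = ""
    reported_on: date = field(default_factory=date.today)
    attachments: List[str] = field(default_factory=list)   # screenshots / logs
    additional_notes: str = ""
```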

Defect Prevention Strategies:

Code Reviews & Pair Programming
Automated Testing
Continuous Integration & Continuous Deployment (CI/CD)
Proper Requirements Analysis
Static Code Analysis
Early Defect Detection via Unit Testing

Defect Metrics and Best Practices: Key metrics include:

Defect Density: "Number of defects per lines of code."


Defect Leakage: "Defects found after release."
Defect Removal Efficiency (DRE): "Percentage of defects removed before release."
Mean Time to Detect (MTTD) & Mean Time to Repair (MTTR): Time taken to identify and fix
defects. Best practices involve "Early detection," "Clear documentation," "Prioritization" (focus on
critical defects), "Metrics tracking," and "Continuous improvement" to prevent recurrence.

7. Test Status Reporting


Purpose of Test Status Reporting: "Test Status Reporting is the component that informs key project
stakeholders of the critical aspects of the project's status." Its main benefits are:

"Prevents last-minute surprises"


"Aids in project decision-making" (e.g., Netflix's Profiles feature launch based on "solid QA data,
not guesswork").

Target Audience: Reports are tailored for various stakeholders, including:

Project Managers: "Monitor progress, control risks."


Test Teams: "Adjust planning based on issues."
Customers/Product Owners: "Ensure quality expectations are met."
Senior Management: "Make informed business decisions."
Compliance/Regulators: "Validate required testing."

Importance of Test Status Reporting:

1. Provides Transparency: Ensures "clear visibility into what is happening in testing." Stakeholders
know "How much work is completed," "What issues are currently being faced," and "Where the risks
are."
2. Supports Timely Decisions (GO/NO-GO): Provides "accurate test reporting arms decision-makers
with the data needed to: Approve release (GO) [or] Hold and fix issues (NO-GO)."
3. Identifies Risks Early: Flags risks like "High-priority unresolved defects," "Untested critical areas,"
or "Environmental instabilities," allowing proactive "risk mitigation strategies."

Status Reporting Guidelines (Typical Content):

Project Name:
Duration: Reporting period.
Report by: Author (Test Lead/Manager).
Report to: Target Audience.
Planned Activities/Tasks: For the reporting period.
Activities/Tasks Accomplished: Since the last report.
Project Milestones Reached: With current status.
Activities/Tasks not Accomplished/missed:
Planned Activities for the next Reporting Period:
Test Project Execution Details: # Pass, # Fail, # Blocked, # Not Executed (a small sketch for computing these follows this list).
Test Summary: Test Coverage details.
Defect Status: Details of # defects / severity wise.
Status of Defect Re-testing:
Issues: Problems faced, listed by criticality.
Unresolved Issues: From previous periods.
Risks: Important risks affecting the testing schedule.
Environment Downtime Tracking: # hours lost due to environment issues.
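The execution-details sketch referenced above: a minimal way to turn raw test outcomes into the pass/fail/blocked figures a status report needs. The outcome counts are invented for illustration.

```python
from collections import Counter

# Illustrative outcomes collected from a test run
outcomes = ["pass"] * 180 + ["fail"] * 12 + ["blocked"] * 5 + ["not_executed"] * 3

counts = Counter(outcomes)
executed = counts["pass"] + counts["fail"]
pass_rate = counts["pass"] / executed * 100 if executed else 0.0

print(f"# Pass: {counts['pass']}  # Fail: {counts['fail']}  "
      f"# Blocked: {counts['blocked']}  # Not Executed: {counts['not_executed']}")
print(f"Pass rate (of executed tests): {pass_rate:.1f}%")
```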

In summary, SQA, through meticulous planning, execution, defect management, and transparent
reporting, aims to deliver high-quality software that meets requirements, manages risks, and supports
informed business decisions.

What is software quality assurance (SQA) and why is testing so important?
Software Quality Assurance (SQA) is a systematic approach to ensuring that software products meet
quality standards and function as intended. Testing is a crucial component of SQA, serving multiple
objectives:

Building Confidence: Testing helps establish confidence in the quality level of the software being
tested.
Defect Detection and Risk Reduction: A primary goal of testing is to find defects (bugs) and
failures, thereby reducing the risk of inadequate software quality. Defects are manifestations of
human errors in software, and if executed, can lead to failures, which are deviations from expected
software delivery or service.
Informed Decision-Making: Testing provides stakeholders with sufficient information to make
informed decisions, particularly regarding the software's quality.
Compliance: Testing ensures compliance with contractual, legal, or regulatory requirements or
standards, and verifies the software's adherence to such stipulations.
Cost Avoidance: Failures caused by software faults can be extremely expensive, ranging from minor
inconveniences to huge sums of money and even loss of life in safety-critical systems. Testing helps
to avoid these costs and potential lawsuits.
Reliability Assessment: Testing helps to learn about the reliability of the software, which is the
probability that it will not cause system failure for a specified time under specified conditions.

Software is inherently prone to faults because it is created by humans under pressure, leading to errors.
Therefore, testing is a necessary and critical process to mitigate these risks and ensure the software's
fitness for purpose.

What is the difference between an error, a fault, and a failure in software?
In the context of software quality, these terms describe a causal chain of events:

Error: An error is a human action that produces an incorrect result. This is the root cause, stemming
from human mistakes during development, design, or requirements gathering.
Fault (also known as defect or bug): A fault is a manifestation of an error in the software itself. It's
a flaw in the code or design. If a fault is executed, it has the potential to cause a failure. Faults are
states within the software.
Failure: A failure is a deviation of the software from its expected delivery or service. It is an event
that occurs during operation when a fault is triggered and causes the software to behave incorrectly.

Essentially, a person makes an error, which creates a fault in the software, and if that fault is activated, it
can cause a failure in operation.

Why can't all software be "exhaustively" tested, and how is "enough"


testing determined?
Exhaustive testing, defined as exercising all combinations of inputs and preconditions, is generally
impractical and often impossible due to the sheer number of permutations. For example, a system with
numerous inputs and variables can quickly lead to an astronomically high number of potential test cases,
requiring an infinite or impractical amount of time to execute.

Instead of exhaustive testing, the amount of testing considered "enough" is primarily determined by risk.
This principle guides testing efforts by considering:

Risk of missing important faults: The potential for critical defects to go undetected.
Risk of incurring failure costs: The financial or operational consequences of a software failure.
Risk of releasing untested or under-tested software: The impact on reputation, market share, or
customer satisfaction.
Risk of missing a market window: The competitive disadvantage of delaying release due to over-
testing.
Risk of over-testing or ineffective testing: Wasting resources on testing that yields diminishing
returns.

By using risk as the primary determinant, test teams can prioritise tests, allocate available time effectively,
and focus their efforts on the most critical areas. The goal is to perform "the best testing in the time
available," ensuring that the most important conditions are covered first and most thoroughly.

What are the main phases of the fundamental test process?


The fundamental test process is typically broken down into five sequential phases:

1. Planning and Control:

Planning: This involves defining the test strategy and policies, determining the scope, risks, and test
objectives (ensuring each requirement is covered), outlining the test approach (procedures,
techniques, teams, environment, data), implementing the test policy, identifying necessary resources,
scheduling all test activities, and establishing clear exit criteria for testing completion.

Control: This phase ensures that the planned activities are implemented and communicated
effectively. It includes measuring and analysing results (e.g., test execution progress, defect
findings), monitoring and documenting progress for stakeholders, initiating corrective actions if the
strategy needs adjustment, and making key decisions regarding continuing, stopping, or restarting
testing, or confirming a "GO" for release.

2. Test Analysis and Design:

Analysis: This involves identifying "what" is to be tested (test conditions) based on specifications
and prioritising them.

Design: This involves determining "how" the identified conditions will be tested by designing test
cases (test inputs and expected results) and sets of tests for various objectives.

Building: This involves implementing the test cases by preparing test scripts and necessary test data.

3. Test Implementation and Execution:

Implementation: This phase focuses on building the high-level designs into concrete test cases and
procedures, developing and prioritising step-by-step instructions, creating test suites, and verifying
the readiness of the test environment.

Execution: This involves running the planned test cases, usually prioritising the most important
ones, logging the outcomes of each test execution, and recording details such as software identities,
versions, data used, and environment.

4. Evaluating Exit Criteria and Reporting:

This phase involves assessing whether the predefined test completion criteria (e.g., test coverage,
number of faults found, cost/time limits) have been met. If not, further test activities may be
required. A summary report is then written as a test deliverable, documenting clear decisions. (A small sketch of such an exit-criteria check follows this list.)

5. Test Closure Activities:

This final phase includes checking which planned deliverables have been completed, ensuring
defects are resolved or deferred, finalising and archiving test ware (scripts, data, tools, environment)
for future reuse, and conducting lessons learned to improve future testing processes.
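The exit-criteria sketch referenced in phase 4: one hypothetical way such criteria (coverage reached, no open critical defects, effort within budget) might be evaluated automatically to support a GO/NO-GO decision. The thresholds and inputs are invented for illustration.

```python
def exit_criteria_met(coverage_pct: float,
                      open_critical_defects: int,
                      hours_spent: float,
                      hours_budgeted: float) -> bool:
    """Return True when the illustrative exit criteria are satisfied."""
    return (coverage_pct >= 90.0
            and open_critical_defects == 0
            and hours_spent <= hours_budgeted)


# Example: coverage target reached, but one critical defect is still open -> NO-GO
print("GO" if exit_criteria_met(92.5, 1, 310, 400) else "NO-GO")   # NO-GO
```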

What is Test Status Reporting and why is it important in software development?
Test Status Reporting is a critical component of software development that informs key project
stakeholders about the current state of the project's testing efforts. It provides a structured way to
communicate progress, issues, and risks, preventing last-minute surprises and aiding in informed
decision-making.

The importance of Test Status Reporting stems from several key benefits:

Transparency to Stakeholders: It provides clear visibility to various stakeholders (e.g., project
managers, product owners, customers, senior management, regulators) regarding completed work,
current issues, and identified risks. This ensures everyone is on the same page.
Supports Timely Decisions (GO/NO-GO): Test reports provide the data needed for critical
decisions, such as whether to release a product or delay it to fix important issues. For example,
Netflix based key decisions on launching their "Profiles" feature on solid QA data.
Identifies Risks Early: Beyond just passing or failing tests, status reports flag potential risks before
they impact end-users. These risks could include high-priority unresolved defects, untested critical
areas, or environmental instabilities. Early identification allows for proactive risk mitigation
strategies.
Facilitates Project Control: Project managers use these reports to monitor progress and control
risks, while test teams can adjust their planning based on reported issues. Customers and product
owners can ensure quality expectations are met, and senior management can make informed business
decisions.

A good status report should be tailored to its audience and typically includes project name, reporting
period, author, target audience, planned and accomplished activities, milestones, unaccomplished tasks,
execution details (pass/fail/blocked), test summary, defect status, unresolved issues, risks, and
environment downtime tracking.

What are some common Black-Box Test Design Techniques?


Black-Box testing (also known as functional or specification-based testing) focuses on the external
behaviour of the software without knowledge of its internal structure or code. Several techniques are used
to design effective test cases:

1. Equivalence Partitioning:

Definition: This technique divides the input test data into "equivalent" partitions or classes. The idea
is that if a test case from a specific partition reveals a defect, other test cases from the same partition
are likely to reveal similar defects.
Steps: Identify input ranges or conditions, divide the input domain into valid and invalid equivalence
classes, and select one test case from each class.
Example: For a system accepting integers between 1 and 100, valid class: 1-100 (e.g., 50); invalid
classes: <1 (e.g., 0) and >100 (e.g., 101).

2. Boundary Value Analysis (BVA):

Definition: This technique complements equivalence partitioning by focusing on the "boundaries"
between partitions. Defects are often found at the extreme ends of input ranges.
Steps: Identify the boundaries of input ranges, and test values at the edges and just outside the edges.
Example: For the 1-100 integer input, BVA would test 0, 1, 2 (lower boundary and adjacent values)
and 99, 100, 101 (upper boundary and adjacent values).

3. Decision Table Testing (Cause-Effect Graphing):

Definition: A decision table is a systematic way to deal with combinations of inputs (causes) that
lead to specific outputs (effects). It's focused on business logic or rules and provides complete
coverage of test cases for complex conditions.
Principle: For 'n' inputs, there can be 2^n possible combinations. The table maps these combinations
to expected outcomes. (A small sketch follows this answer.)

4. Exploratory Testing:
Definition: This is a less structured approach where testers learn about the software, design tests,
and execute them simultaneously. It's often used when documentation is limited or under time
pressure.
Characteristics: It's often performed by experts, is less reproducible, and its coverage is limited by
the tester's experience and knowledge.

5. Random Testing (Monkey Testing):

Definition: This involves generating random inputs and executing them without a specific test case
design. It's typically used when there's insufficient time for detailed test case creation.
Characteristics: It's a form of black-box functional testing, often used by experts, less reproducible,
and its effectiveness depends on luck.

These techniques help testers to create effective, exemplary, evolvable, and economic test cases that have
a high probability of finding new defects and are traceable to requirements.
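The decision-table sketch referenced above: for n binary conditions there are 2^n rules, and each rule maps a combination of causes to an expected effect. The login rule below (access is granted only with valid credentials AND an active account) and the function name are assumptions made for illustration.

```python
from itertools import product


def login_allowed(valid_credentials: bool, account_active: bool) -> bool:
    """Hypothetical business rule under test."""
    return valid_credentials and account_active


# Decision table: 2 conditions -> 2**2 = 4 rules, each with an expected outcome.
decision_table = {
    (True,  True):  True,    # rule 1: correct password, active account
    (True,  False): False,   # rule 2: correct password, suspended account
    (False, True):  False,   # rule 3: wrong password, active account
    (False, False): False,   # rule 4: wrong password, suspended account
}

for causes in product([True, False], repeat=2):
    assert login_allowed(*causes) == decision_table[causes], f"rule failed for {causes}"
print("all", len(decision_table), "rules verified")
```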

What are the different types of functional and non-functional software testing?
Software testing is broadly categorised into two main types:

Functional Testing: This verifies that each function of the software application operates in conformance
with the functional requirements and specifications. It focuses on "what the system does."

Unit Testing: Tests individual components or units of code in isolation to verify their correctness
and identify bugs early.
Integration Testing: Combines individual modules and tests them as a group to verify interactions
and data flow between integrated units.
System Testing: Tests a complete and integrated system to verify that it meets specified functional
and non-functional requirements as a whole.
Acceptance Testing: The final phase where the system is tested to ensure it meets user and business
requirements, validating readiness for production. This includes:
User Acceptance Testing (UAT): Performed by end-users to validate the system against their
requirements, focusing on user-friendliness and expectations.
Alpha Testing: Conducted by internal development or QA teams in a controlled environment to
identify critical bugs and validate functionality and stability.
Beta Testing: Conducted by real users in a real-world environment to gather feedback, identify
issues not caught internally, and focus on user experience and compatibility.
Regression Testing: Performed to ensure that recent code changes (e.g., bug fixes, enhancements)
have not adversely affected existing functionality.
Smoke Testing: A quick, preliminary test to verify that the critical features of an application are
functioning and that the build is stable enough for further detailed testing.

Non-Functional Testing: This verifies non-functional aspects of the software, such as performance,
usability, security, and reliability. It focuses on "how well the system performs."

Performance Testing: Evaluates how well the system performs under various conditions.
Load Testing: Assesses system behaviour under expected load conditions (e.g., 1,000 simultaneous
users).
Stress Testing: Determines system robustness under extreme conditions, pushing it beyond normal
capacity (e.g., increasing users until crash).
Endurance Testing: Evaluates performance over an extended period to identify issues like memory
leaks or degradation (e.g., running under moderate load for 24 hours).
Spike Testing: Assesses the system's reaction to sudden and extreme changes in load.
Volume Testing: Evaluates performance with a large volume of data.
Scalability Testing: Determines how well the system can scale up or down.
Capacity Testing: Determines the maximum capacity of the system.
Configuration Testing: Evaluates performance under different hardware/software configurations.
Failover Testing: Ensures the system handles failover scenarios gracefully.
Latency Testing: Measures response time to requests.
Stress Recovery Testing: Evaluates system recovery after stress.
Concurrency Testing: Assesses handling of multiple simultaneous users/transactions.
Usability Testing: Evaluates a product by testing it on users to assess user-friendliness and
experience.
Exploratory Usability Testing: Early testing to explore user needs with prototypes.
Comparative Usability Testing: Compares two or more designs for usability.
Heuristic Evaluation: Experts review against usability principles.
Security Testing: Identifies vulnerabilities, threats, and risks to protect data and resources.
Vulnerability Scanning: Automated scans for known security flaws.
Penetration Testing: Simulates a malicious attack to find exploitable vulnerabilities.
Configuration Testing: Checks system security settings.
Database Security Testing: Ensures database integrity and confidentiality.
Network Security Testing: Tests network infrastructure security.
Reliability Testing: Ensures the application performs its intended functions consistently without
failure over a specified period.

Many of these types of testing, both functional and non-functional, can also be automated to improve
efficiency and coverage.

What is the defect life cycle and what are its key stages?
The Defect Life Cycle (or Bug Life Cycle) in software testing describes the specific set of states a defect
or bug goes through from its identification to its resolution. Its purpose is to facilitate coordination and
communication about the defect's status among various team members, making the defect-fixing process
systematic and efficient.

The key stages (statuses) in a typical defect life cycle are:

1. New: This is the initial state when a defect is first logged and reported by the tester.
2. Assigned: Once a defect is logged, a test lead or manager reviews and approves it, then assigns it to
a developer or development team for investigation and fixing.
3. Open: The defect's status changes to "Open" when the assigned developer starts analysing the defect
and working on a fix.
4. Fixed: After the developer makes the necessary code changes and performs a preliminary
verification, they change the defect status to "Fixed."
5. Pending Retest: The developer then provides the fixed code to the testing team. Since the retesting
of the fix is still pending on the tester's end, the status is set to "Pending Retest."
6. Retest: The tester retests the code to verify whether the defect has been successfully fixed by the
developer. At this stage, the tester will either confirm the fix or reopen the defect.
7. Closed: If the tester confirms that the defect is no longer reproducible and the fix is satisfactory, the
defect's status is changed to "Closed."
8. Reopened: If, during the retesting phase, the tester finds that the defect still exists or is not fixed
correctly, they will change the status back to "Reopened" and reassign it to the developer.

Other potential statuses include:

Deferred/Postponed: If the defect is not critical and can be addressed in a future release.
Rejected/Not a Bug: If the developer determines that the reported issue is not a genuine defect (e.g.,
it's a feature, not a bug, or it's due to incorrect usage).
Duplicate: If the reported defect has already been logged.

Throughout this cycle, it's crucial to report defects with compulsory attributes like ID, title, description,
steps to recreate, test data, test environment, actual results, screenshots, severity, priority, and
reported/assigned details. Tools like JIRA, Bugzilla, and Redmine are commonly used for defect tracking.
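To illustrate the life cycle as a set of states and allowed transitions, here is a minimal sketch in Python. The transition map follows the stages described above, but it is simplified; real tracking tools such as JIRA, Bugzilla, and Redmine each define their own workflows.

```python
# Allowed transitions between defect statuses (illustrative, simplified)
TRANSITIONS = {
    "New":            {"Assigned", "Rejected", "Duplicate", "Deferred"},
    "Assigned":       {"Open"},
    "Open":           {"Fixed", "Rejected", "Deferred"},
    "Fixed":          {"Pending Retest"},
    "Pending Retest": {"Retest"},
    "Retest":         {"Closed", "Reopened"},
    "Reopened":       {"Assigned"},
    "Closed":         set(),
    "Rejected":       set(),
    "Duplicate":      set(),
    "Deferred":       {"Assigned"},
}


def move(current: str, new: str) -> str:
    """Return the new status, or raise if the transition is not allowed."""
    if new not in TRANSITIONS[current]:
        raise ValueError(f"illegal transition: {current} -> {new}")
    return new


status = "New"
for step in ["Assigned", "Open", "Fixed", "Pending Retest", "Retest", "Closed"]:
    status = move(status, step)
print("final status:", status)   # Closed
```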
