
Summary

The document provides a comprehensive overview of software testing, including definitions, key terms, types of testing, and the importance of specifications. It discusses various testing methodologies, roles within software development teams, and the significance of early testing in the software development lifecycle. Additionally, it covers the Capability Maturity Model (CMM) and Agile Testing principles, emphasizing the need for effective communication and collaboration among teams to ensure software quality.

Uploaded by wanbabsl1

Lecture 1a

Introduction to Software Testing


Definition of Software Testing:
Process of evaluating software to identify any gaps, errors, or missing requirements.
Key Terms:
Software Testing: The process of verifying that a software application meets specified
requirements and is free of defects.
Computer Bug: An error, flaw, or fault in a computer program that causes it to produce incorrect
or unexpected results.
Computer Fault: A defect in the software that can lead to a failure.
Other Terms for Bugs:
Defect, Fault, Problem, Error, Incident, Anomaly, Variance, Failure, Inconsistency, Product
Anomaly, Product Incidence.
Defective Software:
Software that contains defects, which are often hard to predict.
Sources of Problems:
Requirements Definition: Erroneous, incomplete, or inconsistent requirements.
Design: Fundamental design flaws.
Implementation: Mistakes in programming, chip fabrication, or malicious code.
Support Systems: Poor programming languages, faulty compilers, and misleading tools.
Inadequate Testing: Incomplete testing and mistakes in debugging.
Evolution: Sloppy redevelopment, introduction of new flaws, and increased complexity.
Adverse Effects of Faulty Software:
Communications: Loss or corruption of data.
Space Applications: Potential loss of lives and launch delays.
Defense: Misidentification of friend or foe.
Transportation: Deaths, delays, and accidents.
Safety-Critical Applications: Injuries and fatalities.
Electric Power: Power outages and health hazards.
Money Management: Fraud and privacy violations.
Control of Elections: Incorrect results.
Control of Jails: Technology-aided escapes and accidental releases.
Law Enforcement: False arrests.
Examples of Bugs:
Project Mercury’s FORTRAN code error causing incorrect loop execution.
F-18 crash due to missing exception condition.
Year ambiguities leading to incorrect age interpretations.
Nuclear power plants shut down due to simulation program faults.
ATM dispensing incorrect amounts due to software flaws.
Specification Importance:
A specification defines the product and includes functional and non-functional requirements.
Bugs occur when software does not meet specifications.
Cost of Bugs:
Cost to fix bugs increases exponentially over time.
Example: $1 to fix during specification, $10 during design, $100 during coding, $1000 after
release.
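The escalating cost above can be captured in a short sketch; the per-phase dollar figures are the lecture's illustrative numbers, not measured data:

```python
# Rule of thumb from the lecture: fix cost grows ~10x per phase.
PHASE_COST = {
    "specification": 1,
    "design": 10,
    "coding": 100,
    "release": 1000,
}

def cost_to_fix(phase: str, bug_count: int) -> int:
    """Estimated total cost of fixing bug_count bugs discovered in a phase."""
    return PHASE_COST[phase] * bug_count

# The same 5 bugs cost two orders of magnitude more if found during coding.
early = cost_to_fix("specification", 5)   # 5
late = cost_to_fix("coding", 5)           # 500
```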
Goal of Software Tester:
To find bugs early in the development process and ensure they are fixed.
Software Development Process:
Involves requirements specification, design, coding, testing, and documentation.
Customer Requirements:
Understanding what the customer wants through various methods (surveys, feedback, etc.).
Specification Development:
Takes customer data to define software features and constraints.
Scheduling:
Establishes project timelines and task assignments.
Design Phase:
Creation of design documents before coding begins.
Source Code:
The ultimate specification of the software.
Test Documents:
Include test plans, test cases, bug reports, and metrics.
Software Project Staff:
Roles include project managers, software architects, programmers, testers, technical writers, and
configuration managers.
Software Development Lifecycles:
Common models include Waterfall and Spiral.

Lecture 1b
Definition of Testing:
Process of evaluating a system or its components to determine whether it meets specified requirements.
Involves executing a system to identify gaps, errors, or missing requirements.
According to ANSI/IEEE 1059, it is analyzing software to detect differences between existing and
required conditions (defects/errors/bugs).
Testing, Quality Assurance (QA), and Quality Control (QC):
QA: Ensures implementation of processes and standards for software verification.
QC: Verifies developed software against documented requirements.
Testing: Identifies bugs/errors/defects in software.
QA is process-oriented, QC is product-oriented, and Testing is a subset of QC.
Importance of Learning Software Testing:
Large companies have dedicated teams for software evaluation.
Developers conduct Unit Testing.
Roles include Software Tester, Software Developer, Project Lead/Manager, and End User.
Applications of Software Testing:
Cost-effective development: Early testing saves time and costs.
Product improvement: Identifying and fixing errors during SDLC phases.
Test automation: Reduces testing time but requires stable software.
Quality check: Determines functionality, reliability, usability, efficiency, maintainability, and
portability.
Who Does Testing?:
Involves various professionals: Software Tester, Developer, Project Lead/Manager, End User.
Different designations based on experience and knowledge.
When to Start Testing?:
Early testing reduces costs and time.
Can start from the Requirements Gathering phase and continue until deployment.
Varies by development model (e.g., Waterfall vs. Incremental).
When to Stop Testing?:
Difficult to determine; testing is never-ending.
Considerations include deadlines, completion of test case execution, bug rates, and management
decisions.
Roles and Responsibilities:
Test Lead/Manager: Defines testing activities, checks resources, prepares status reports, interacts
with customers.
Test Engineers/QA Testers: Understand requirements, develop test cases, execute tests, report
defects, perform regression testing.
Overview of Software Engineering Team:
Organization and planning are crucial for quality software development.
Project managers must resolve obstacles and ensure timely delivery.
Overview of Software Testing Team:
Team structure and roles must be clearly defined.
Performance tracking is essential to assess effectiveness.
Software Tester Role:
Design test suites, understand usability issues, communicate with development teams, create
documentation.
Software Test Manager Role:
Manage testing team, interact with customers, schedule activities, select tools, ensure quality of
requirements.
Software Test Automator Role:
Design automated test scripts, ensure compliance with standards, understand requirements.
Interactions Between Teams:
Communication between testing, development, release management, and project management
teams is vital for project success.
Verification & Validation:
Verification: Ensures building the software right (static activities).
Validation: Ensures building the right software (dynamic activities).

Testing Principles:
Testing shows the presence of defects, not their absence.
Exhaustive testing is not possible.
Early testing is beneficial.
Defect clustering occurs in specific modules.
Pesticide paradox: Repeated tests may not find new bugs.
Testing is context-dependent.
Absence of errors fallacy: No bugs found does not mean the software meets requirements.

Lecture 2
Software Test Case Design Strategy:
Provides a roadmap for testing steps, planning, execution, and resource allocation.
Incorporates test planning, case design, execution, and data evaluation.
Testing is systematic and can be planned in advance.
A template for software testing should be defined for the software process.
Effective testing requires technical reviews to eliminate errors before testing.
Testing starts at the component level and expands to the entire system.
Different techniques are suitable for various engineering approaches and timelines.
Testing is performed by developers and independent test groups for large projects.
Testing and debugging are distinct activities, but debugging must be included in the strategy.
Strategies must accommodate both low-level and high-level tests.
Guidance for practitioners and milestones for managers are essential.
Progress must be measurable, and issues should be identified early.
Software Requirements:
Clearly defined requirements indicate project success.
Establish a formal agreement between clients and providers.
High-quality requirements mitigate financial risks and keep projects on schedule.
Types of Requirements:

Business Requirements: High-level goals and objectives (e.g., increased revenue, reduced errors).
User/Stakeholder Requirements: Needs of specific stakeholder groups, bridging business and
solution requirements.
Other Requirements:
Functional Requirements: Define what the product must do (features and functions).
Non-Functional Requirements: Describe system properties (quality attributes).
Functional vs Non-Functional Requirements:
Functional requirements detail features for user tasks and system behavior.
Non-functional requirements define performance, usability, and reliability.
Test Cases:
Designed to verify application functionality.
Consist of steps to check specific operational logic.
Describe user input, system response, and preconditions.
Linked to business functions/requirements and require specific test data.
Governed by preconditions necessary for execution.
Test Case Design Methods:
Based on functional specifications to check product compliance.
Example: Login process requirements for user ID and password validation.
Test cases include boundary value analysis (BVA) and equivalence class partitioning (ECP).
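A sketch of BVA and ECP for the login example above, assuming a hypothetical rule that passwords must be 6-12 characters long:

```python
def valid_password_length(pw: str, lo: int = 6, hi: int = 12) -> bool:
    """Hypothetical rule: password length must be between lo and hi inclusive."""
    return lo <= len(pw) <= hi

# Boundary value analysis: test at, just below, and just above each boundary.
bva_cases = {
    "a" * 5: False,   # lo - 1
    "a" * 6: True,    # lo
    "a" * 7: True,    # lo + 1
    "a" * 11: True,   # hi - 1
    "a" * 12: True,   # hi
    "a" * 13: False,  # hi + 1
}

# Equivalence class partitioning: one representative input per class.
ecp_cases = {
    "abc": False,       # class: too short
    "abcdefgh": True,   # class: valid length
    "a" * 20: False,    # class: too long
}

for pw, expected in {**bva_cases, **ecp_cases}.items():
    assert valid_password_length(pw) == expected, f"len {len(pw)}"
```

BVA concentrates cases where off-by-one defects cluster; ECP keeps the total case count small by testing one value per class.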
Linking Test Cases and Requirements:
TestLink is a web-based tool for test management and QA testing.
Allows quick linking of test cases to requirements and user roles.
Linking can be challenging but is facilitated by predefined features in TestLink.

Lecture 3
Aim of the Lecture: Review the Capability Maturity Model (CMM) developed by the Software
Engineering Institute (SEI) at Carnegie Mellon University, focusing on its structure, key practices,
and assessment methods.
What is CMM?:
Developed to assess the capability of software development contractors.
Relies on the concept of process maturity.
Defines software process as a set of activities, methods, and practices for software development
and maintenance.
Maturity indicates the richness and consistency of an organization's software process.
CMM Levels:
Level 1: Initial - Ad-hoc processes, success depends on individual efforts.
Level 2: Repeatable - Basic project management controls established; processes can be repeated.
Level 3: Defined - Processes are documented and standardized across projects.
Level 4: Managed - Emphasis on quantitative management of processes and quality.
Level 5: Optimizing - Focus on continuous process improvement through quantitative feedback.
Key Process Areas (KPAs):
Level 2 KPAs: Requirements Management, Software Project Planning, Tracking and Oversight,
Subcontract Management, Quality Assurance, Configuration Management.
Level 3 KPAs: Organization Process Focus, Organization Process Definition, Training Program,
Integrated Software Management, Software Product Engineering, Inter-group Coordination, Peer Reviews.
Level 4 KPAs: Quantitative Process Management, Software Quality Management.
Level 5 KPAs: Defect Prevention, Technology Change Management, Process Change Management.
Common Features of KPAs:
Commitment to Perform (CO)
Ability to Perform (AB)
Activities Performed (AC)
Measurement and Analysis (ME)
Verifying Implementation (VE)
Applying the CMM:
Methods: Self-assessment, formal assessment, software capability evaluation.
Self-assessment is cost-effective but may lack independence.
Formal assessment provides independence and identifies improvement priorities.
Assessment Steps:
Team selection, maturity questionnaire, site visit, presentation of findings.
Comparison with ISO:
CMM focuses on continuous process improvement; ISO9001 sets minimum quality management
requirements.
Both aim for quality and process management but differ in approach and certification.
Critical Evaluation:
Questions about the justification of CMM claims and the concept of evolutionary plateaus.
Concerns about the adequacy of measurement and scoring systems.
The need for evidence supporting the relationship between maturity scores and actual
performance.
Final Thoughts:
CMM has influenced software development by emphasizing the importance of processes in
achieving quality.
Critiques suggest that resources might be better spent directly improving products and services
rather than focusing solely on process improvement.

Lecture 4 & 5
Levels of Testing
Four main levels of software testing:
Unit Testing:
Tests individual software components for functionality.
Ensures each unit operates correctly (e.g., functions in a calculator).
Typically performed by developers.
Integration Testing:
Combines components and tests them as a whole.
Identifies defects in interactions between modules.
Approaches include:
Top-Down Integration
Bottom-Up Integration
Big-Bang Integration
Mixed Integration
Performed by testers.
System Testing:
Verifies software operations and compatibility with operating systems.
Includes functional and non-functional testing.
Conducted by external testers.
Acceptance Testing:
Conducted by selected end-users to determine readiness for launch.
Types include Alpha Testing (internal) and Beta Testing (external).
Focuses on user requirements and feedback.
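Unit testing at the first level above might look like this minimal sketch; the `add` function stands in for one calculator operation:

```python
import unittest

def add(a, b):
    """Unit under test: a single calculator function."""
    return a + b

class TestAdd(unittest.TestCase):
    """Each test exercises one behaviour of the unit in isolation."""

    def test_adds_positive_numbers(self):
        self.assertEqual(add(2, 3), 5)

    def test_adds_negative_numbers(self):
        self.assertEqual(add(-2, -3), -5)

    def test_zero_is_identity(self):
        self.assertEqual(add(7, 0), 7)
```

Run with `python -m unittest` against the file containing the test case.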
Functional vs Non-Functional Testing
Functional Testing:
Verifies operational execution according to requirements.
Focuses on what the system does.
Conducted manually, checking specific inputs and outputs.
Types include:
Smoke Testing
Sanity Testing
Unit Testing
Integration Testing
User Acceptance Testing
Regression Testing
Non-Functional Testing:
Checks aspects not covered in functional tests (e.g., performance, usability).
Focuses on how the system behaves.
Often automated, using tools to simulate real-life conditions.
Types include:
Performance Testing
Usability Testing
Security Testing
Load Testing
Compliance Testing
Differences Between Functional and Non-Functional Testing:
Functional: Tests features/functions, evaluated as present/absent, usually manual.
Non-Functional: Tests non-functional aspects, evaluated on a scale, usually automated.
Conclusion:
Both functional and non-functional testing are essential for quality assurance.
Professional QA agencies can provide expertise in various testing strategies.

Lecture 6 & 7
Agile Testing:
Iterative development methodology that evolves through collaboration between customers and
self-organizing teams.
Focuses on continuous testing, feedback, and involvement of the whole team in testing.
Key principles include continuous testing, continuous feedback, team involvement, reduced
feedback response time, simplified code, and less documentation.
Advantages of Agile Testing:
Saves time and money.
Reduces documentation.
Flexible and adaptable to changes.
Provides regular feedback from end users.
Better issue determination through daily meetings.
Disadvantages of Agile Testing:
Potential for less documentation can lead to misunderstandings.
Requires a high level of collaboration and communication.
May lead to scope creep if not managed properly.
Agile Testing Methods:
Behavior Driven Development (BDD):
Enhances communication among stakeholders through example-based communication.
Acceptance Test Driven Development (ATDD):
Involves team members from different perspectives to formulate acceptance tests.
Exploratory Testing:
Combines learning, test design, and test execution in one activity; testers explore the
application rather than following pre-scripted cases.
Life Cycle of Agile Testing:
Impact Assessment: Gather inputs from stakeholders for feedback.
Agile Testing Planning: Plan testing schedules and deliverables.
Release Readiness: Review features for readiness to go live.
Daily Scrums: Daily meetings to check testing status and set goals.
Test Agility Review: Weekly meetings to assess progress against milestones.
Agile Test Plan & Quadrants:
Test plan includes scope, functionalities, types of testing, performance testing, risk planning, and
deliverables.
Quadrants help in understanding the agile testing process.
V-Model:
Development and QA activities occur simultaneously, with testing starting from the requirement
phase.
Verification and validation activities are performed concurrently.
Manual Testing:
Checking application functionality without automation tools.
Types include White Box, Black Box, and Gray Box testing.
Automation Testing:
Converting manual test cases into test scripts using automation tools.
Enhances speed of test execution.
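Converting a manual test case into a script can be sketched as a data-driven loop; `check_login` is a hypothetical stand-in for the application under test:

```python
def check_login(user_id: str, password: str) -> str:
    """Hypothetical application behaviour being tested."""
    credentials = {"alice": "s3cret"}
    if credentials.get(user_id) == password:
        return "Login successful"
    return "Invalid credentials"

# Each tuple is one manual test case: (inputs, expected result).
test_cases = [
    (("alice", "s3cret"), "Login successful"),
    (("alice", "wrong"), "Invalid credentials"),
    (("bob", "s3cret"), "Invalid credentials"),
]

for (user, pw), expected in test_cases:
    actual = check_login(user, pw)
    assert actual == expected, f"{user}: expected {expected!r}, got {actual!r}"
```

Once scripted, the same cases can be re-run unattended after every build, which is where the speed gain comes from.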
Black Box Testing:
Tests functionalities without knowledge of internal code structure.
Focuses on input and output based on requirements.
Steps include examining requirements, choosing valid/invalid inputs, determining expected
outputs, constructing test cases, executing them, and comparing actual vs expected outputs.
Types include Functional, Non-functional, and Regression testing.
Techniques include Equivalence Class Testing, Boundary Value Testing, and Decision Table Testing.
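Decision table testing, one of the techniques listed above, can be sketched as follows; the membership-discount rule is invented for illustration:

```python
from itertools import product

def discount(is_member: bool, order_over_100: bool) -> int:
    """Hypothetical rule: members get 10%, plus 5% for orders over 100."""
    pct = 0
    if is_member:
        pct += 10
    if order_over_100:
        pct += 5
    return pct

# One column of the decision table per combination of conditions.
expected_action = {
    (False, False): 0,
    (False, True): 5,
    (True, False): 10,
    (True, True): 15,
}

# Black-box check: every condition combination against its expected action.
for combo in product([False, True], repeat=2):
    assert discount(*combo) == expected_action[combo], combo
```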
Comparison of Black Box and White Box Testing:
Black Box focuses on functional requirements; White Box focuses on internal structure.
Black Box does not require programming knowledge; White Box does.
Black Box suits testing behavior at higher levels (system, acceptance); White Box targets paths within modules.
White Box Testing:
Tests internal coding and infrastructure of software.
Requires programming skills to design test cases.
Techniques include Basis Path Testing, Loop Testing, Condition Testing, and Performance Testing.
Steps include designing test scenarios, examining resource utilization, testing internal
subroutines, and security testing.
Advantages of White Box Testing:
Optimizes code and identifies hidden errors.
Test cases can be automated.
More thorough than other testing approaches.
Disadvantages of White Box Testing:
Time-consuming for large applications.
Expensive and complex.
Requires professional programmers with detailed knowledge.

Lecture 8
Introduction to Automation Testing
Automation testing uses specific tools to execute test scripts without human interference.
Enhances efficiency, productivity, and test coverage in software testing.
Test automation engineers write test scripts or use automation tools to execute applications.
Focuses on replacing manual human activity with systems or devices.
Why Perform Automation Testing?
Offers better application quality with less effort and time.
Organizations are increasingly adopting automation testing.
Requires significant investment in resources and money.
Advantages include:
Reusability of test scripts.
Consistency in testing.
Ability to run tests anytime (24/7).
Early bug detection.
Reduced need for human resources.
Automation Testing Methodologies
GUI Testing: Tests applications with graphical user interfaces.
Code-Driven: Focuses on executing test cases to verify code functionality.
Test Automation Framework: A set of rules for generating valuable results in automated testing.
Automation Testing Process
Step 1: Decision to Automate
Assess potential benefits and manage expectations.
Step 2: Test Tool Selection
Evaluate and select appropriate testing tools based on requirements.
Step 3: Scope Introduction
Define the testing area and determine the scope of automation.
Step 4: Test Planning and Development
Define testing strategies, standards, and guidelines.
Step 5: Test Case Execution
Execute test cases using automated tools.
Step 6: Review and Assessment
Continuous quality improvement through evaluation and feedback.
Challenges in Automation Testing
Lack of skilled test automation experts.
Difficulty in selecting the right test automation tool.
Challenges with scaling test environments.
High initial investment costs.
Miscommunication between development and testing teams.
Automation Testing Tools
Functional Testing Tools:
Commercial Tools: QTP, RFT, TestComplete, SoapUI.
Open-Source Tools: Selenium, Sikuli, AutoIt.
Non-Functional Testing Tools:
Commercial Tools: LoadRunner, Silk Performer.
Open-Source Tools: JMeter, NeoLoad.
Advantages of Automation Testing
Faster than manual testing.
Reusability of test cases.
Reliable and comprehensive testing.
Requires fewer human resources.
Cost-effective in the long run.
Disadvantages of Automation Testing
Requires highly skilled testers.
High-quality testing tools are necessary.
Complicated analysis of unsuccessful test cases.
Expensive test maintenance.
Debugging is mandatory for unresolved errors.
Conclusion
Automation testing is a software testing technique that enhances test coverage, efficiency, and
speed.
Successful automation depends on tool selection, testing processes, and team collaboration.
Manual and automation techniques should complement each other to reduce testing time.

Lecture 9
Aim of Lecture: Introduce concepts related to quality, including fitness for purpose, quality
attributes, quality assurance, quality management systems, reviews, and audits.
Software Quality Management: Encompasses management of software products, including code,
documentation, and data.
Definition of Quality:
Pirsig's perspective: Quality is neither objective nor subjective; it is simple and immediate.
Webster's Dictionary: Quality as characteristics, attributes, and degree of excellence.
Juran's view: Quality has multiple meanings and should be qualified.
Generally accepted definition: A quality product is fit for purpose.
ISO definition: Quality is the degree to which inherent characteristics fulfill requirements.
Quality Assurance (QA):
Establishes confidence that quality requirements will be fulfilled, covering both process and
product.
Procedures and tools ensure products meet or exceed standards during development.
QA is a function of the quality management system (QMS).
Independent QA teams should report to management, not tied to specific development groups.
Software Quality Activities:
Part of any software development project, focusing on documentation as evidence of QMS
implementation.
Cost of Quality:
Prevention costs (planning, reviews, training) vs. appraisal costs (inspection, testing) vs. failure
costs (repair, support).
Early fault detection reduces overall costs.
Development of Quality Management:
Influenced by quality gurus like Deming (PDCA cycle) and Juran (process emphasis).
Total Quality Management (TQM) involves continuous improvement and customer perspective.
Process vs. Product Quality:
Quality of development process affects product quality; however, this relationship is not always
direct in software.
QA involves defining process standards, monitoring adherence, and reporting to management.
Reviews in QA:
Major activity in QA, confirming that products meet established criteria.
Effective review teams should have clear roles and time limits.
Launching a Software Quality Assurance Program:
Eight steps: Initiate program, identify issues, write plan, establish standards, staff function, train
staff, implement plan, evaluate program.
Pitfalls in Software Quality Assurance:
Misconceptions about SQA's role and effectiveness without management support.
Types of Quality Management System Assessment:
First-party (self-assessment), second-party (purchaser assessment), third-party (independent
assessment).
Third-party assessments lead to certification, which holds commercial value.
Fitness for Purpose:
Implies a defined purpose for an object, with quality revolving around requirements
specifications.
Quality attributes can be functional (specified), cultural (user experience), or developer-focused
(not in specification).
Quality Attributes Table:
Includes economy, correctness, resilience, integrity, reliability, usability, documentation,
modifiability, clarity, understandability, validity, maintainability, flexibility, generality, portability,
interoperability, testability, efficiency, modularity, reusability.
Further Reading:
Ian Sommerville’s "Software Engineering" for non-functional requirements.
Pressman’s "Software Engineering" for analysis concepts.

Lecture 10
Aim: Overview of Software Quality Management Systems (QMS) and Quality System Standards.
Quality Management System Elements:
Organisation:
Structure and relationships are crucial for quality.
Quality issues often arise at interfaces between individuals and departments.
Responsibilities, authorities, and interfaces must be clearly defined and documented.
The ‘process approach’ is a key tool in ISO QMS standards.
Resources:
Insufficient resources and unqualified personnel lead to quality problems.
QMS standards require identification of necessary personnel and adequate resources.
Verification activities include design review, inspection, testing, and quality audits.
Procedures:
Procedures dictate how tasks are performed.
Effective systems identify critical tasks and provide documentation and controls.
Quality Management System Standards:
Various standards have emerged since the 1960s, driven by software-intensive areas.
Examples include DOD 2167, AQAP series, DEF-STAN 00-16, FAA-STD-018, BS 5760 pt 8, ANSI,
and IEEE standards.
ISO 9000 series has gained widespread adoption for internal and external quality system
development.
Standards and Guidelines:
ISO 9000:2000: Fundamentals and vocabulary, defines terms and principles.
ISO 9001:2000: Requirements for assessing ability to meet customer and regulatory
requirements.
ISO 9004:2000: Guidelines for performance improvements.
ISO 9000-3:1997: Application of 9001 to software development.

Quality Principles (ISO 9000:2000):
Customer focus
Leadership
Involvement of people
Process approach
System approach to management
Continual improvement
Factual approach to decision making
Mutually beneficial supplier relationships
Auditor Roles:
First-party audit: Self-assessment by the organization.
Second-party audit: Assessment by a purchaser against standards.
Third-party audit: External audit for certification.
Licensed auditors confirm the effectiveness of the QMS.
ISO 9001:2000 Certification Process:
Steps include preparation, audit, corrective actions, verification, and certification.
Emphasizes quality principles and a plan-do-check-act approach.
Terms Used by ISO 9000 Series:
Definitions for quality, system, management system, QMS, quality policy, quality objective,
management, top management, quality management, quality planning, quality control, quality
assurance, and quality improvement.
ISO 9001:2000 Structure:
Focuses on effectiveness in meeting customer requirements.
Main sections: Product Realization, Management Responsibility, Resource Management, Quality
Management System, Measurement, Analysis, and Improvement.
Implementation Steps:
Awareness of ISO 9001 standard.
Define and communicate quality policy.
Establish a team for implementation.
Determine current ISO 9000 status.
Prepare an action plan for readiness.
Implement the plan and track progress.
Conduct ISO readiness assessment.
Review of Lecture 2:
Discussed QMS elements, standards, auditor roles, terminology, and ISO 9001 structure.
Emphasized the importance of a process approach and quality principles in effective QMS.

Lecture 11
Aim of Lecture: Discusses implementation and management of ISO 9001 compliant Quality
Management Systems (QMS).
Implementation of ISO 9001 compliant QMS.
Process, system, and documentation approach of ISO 9001.
Effective quality management systems approach.
Steps to certification against ISO 9001.
Implementation of QMS:
Fundamental to ISO 9001 certification.
Three approaches to implementation:
Let the standard determine the system.
Implement applicable ISO 9001 requirements.
Let processes determine the system (most effective).
Effective QMS redesigns business processes to incorporate quality assurance.
Auditor's Role:
Check that the scope of the system is clearly defined.
Individual projects may have customized QMS.
Steps to ISO 9001:2000 Certification:
Awareness: Management familiarizes with ISO 9001 standard.
Establish a Team: Set up a team under the quality manager to champion the standard.
Determine Existing Quality Management: Review current quality systems and effectiveness.
Prepare an Action Plan: Include actions, resources, timelines, and indicators for success.
Implement the Plan: Manage implementation, track progress, and address obstacles.
ISO Readiness Assessment: Conduct independent assessment before certification audit.
Process, System, and Documentation Approach:
Process approach: Management as a process (Plan-Do-Check-Act cycle).
System approach: Interlinked processes within the QMS.
Documentation: Various types of documents and records are necessary for QMS effectiveness.
QMS Effectiveness:
Effectiveness assessed at boundaries between functions and groups.
Regular feedback and corrective actions are essential for continuous improvement.
ISO 9001:2000 Certification Process:
Steps include implementation, readiness audit, corrective actions, official assessment, and follow-up audits.
Auditors look for defined processes, documentation, effectiveness, and customer focus.
Summary:
Discussed implementation of ISO 9001 compliant QMS, process/system/documentation
approach, effective QMS, and certification steps.
Emphasized clear responsibilities, documentation, and effectiveness of QMS.
Supplemented by three appendices:
Appendix 1: Watts Humphrey's guidelines for Software Quality Assurance.
Appendix 2: Typical activities for developing an effective QMS.
Appendix 3: Guidance on the Process Approach to quality management systems.

Lecture 12
Chapter 2: Software Review and Inspection
Introduction
Quality control activities ensure product compliance with requirements and standards.
Nonconformance leads to defects.
Quality control during development is more efficient than post-completion error detection.
Software review is a verification method to improve work products before completion.
Essential for uncovering defects missed by other techniques (e.g., testing, static analysis).
Includes compliance with standards, logic errors, ambiguous/missing requirements, and code
clarity.
Types of Software Reviews
Various review types based on formality:
Buddy Checking: Informal review without checklists.
Walkthrough: Author presents work to peers for feedback, limited documentation.
Review by Circulation: Artifact circulated for comments without a meeting.
Inspection: Formal peer review with defined roles, criteria, data collection, and quantitative
goals.
Software Inspection
Introduced by Michael Fagan in the 1970s for systematic review of software artifacts.
Main Issues:
Can be expensive due to team size and time.
Outcomes depend on inspectors' experience and training.
Effective when applied throughout a project.
Benefits:
Productivity increase of 30-100%.
Project time savings of 10-30%.
5-10 times reduction in testing costs.
Maintenance cost reduction by an order of magnitude.
Improved product quality and minimal defect correction at integration.
Inspection Participants and Procedure
Participants:
Author: Presents work, does not defend it.
Moderator: Enforces inspection purpose.
Recorder: Logs deficiencies and prepares inspection report.
Inspectors: Raise questions and suggest problems.
Reader: Paraphrases document during meetings.
Procedure:
Planning: Identify work product, team composition, schedule.
Overview: Optional orientation for unfamiliar team members.
Preparation: Individual inspection using checklists, recording issues.
Inspection Meeting: Discuss defects, log issues.
Rework: Revise work product based on findings.
Follow-up: Verify rework, collect metrics, close inspection.
Inspection Checks
Driven by checklists of common programming errors, updated regularly.
Possible checks include error conditions, storage management, interface faults, input/output
faults, control faults, and data faults.
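The fault classes above are typically captured as a reusable checklist. The sketch below shows one hypothetical way to structure such a checklist; the category names follow the list above, but the individual questions are illustrative examples, not taken from the lecture:

```python
# Illustrative inspection checklist grouped by fault class.
# Real checklists are organization-specific and are updated
# regularly as new common error patterns are discovered.
CODE_INSPECTION_CHECKLIST = {
    "error conditions": [
        "Are all error return values checked?",
        "Can any exception escape unhandled?",
    ],
    "storage management": [
        "Is every allocation released on all paths?",
        "Are buffers sized for worst-case input?",
    ],
    "interface faults": [
        "Do argument counts and types match declarations?",
    ],
    "input/output faults": [
        "Are all input values validated?",
        "Are files closed on every exit path?",
    ],
    "control faults": [
        "Can every loop terminate?",
        "Are all branch cases covered?",
    ],
    "data faults": [
        "Are variables initialized before use?",
        "Can any index exceed array bounds?",
    ],
}
```

During individual preparation, each inspector works through the relevant categories and records the location of every suspected defect.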
Inspection Metrics
Metrics support evaluation, improvement, planning, and tracking quality.
Common metrics include:
Number of major and minor defects found.
Size of the artifact.
Rate of review (size/time).
Defect detection rate (defects/hour).
Estimated Total Number of Defects
Defects found = A + B - C, where A and B are the defects found by each of two reviewers (or reviewer groups) and C is the number of defects found by both.
Estimated total defects = (A × B) / C (a capture-recapture estimate).
Defect density = Defects found / Size of artifact.
Inspection Yield
Yield (defect removal efficiency, DRE) = Defects found / Estimated total defects.
Inspection Rate and Defect Detection Rate
Inspection rate = size / total inspection time.
Defect detection rate = total defects found / total inspection time.
Calculating Metrics with Multiple Reviewers
With more than two reviewers, partition them into two groups and apply the two-group formulas, counting each group's unique defects and the defects common to both groups.
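The metric definitions above can be combined into a small calculator. The sketch below is illustrative (the function and parameter names are assumptions); it applies the two-reviewer capture-recapture formulas:

```python
def inspection_metrics(a, b, c, size_kloc, hours):
    """Compute the inspection metrics described above.

    a, b      -- defects found by reviewer (group) A and B
    c         -- defects found by both (common defects)
    size_kloc -- artifact size in thousands of lines of code
    hours     -- total inspection time in hours
    """
    found = a + b - c               # unique defects found
    estimated_total = (a * b) / c   # capture-recapture estimate
    return {
        "defects_found": found,
        "estimated_total": estimated_total,
        "yield_pct": 100.0 * found / estimated_total,
        "defect_density": found / size_kloc,   # defects per KLOC
        "inspection_rate": size_kloc / hours,  # KLOC per hour
        "detection_rate": found / hours,       # defects per hour
    }

m = inspection_metrics(a=20, b=15, c=10, size_kloc=2.0, hours=4.0)
# defects_found = 25, estimated_total = 30, yield ≈ 83.3%
```

Note that the estimate assumes the two groups find defects independently; a low yield suggests many defects remain and a re-inspection may be warranted.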
Overview
What is Software Quality?
Quality implies that software meets its specifications.
This definition is simplistic due to:
Incomplete or ambiguous specifications.
Difficulty in specifying some quality attributes.
Tension between quality attributes (e.g., efficiency vs. reliability).

Software Quality Attributes:


Safety
Security
Reliability
Resilience
Robustness
Understandability
Testability
Adaptability
Modularity
Complexity
Portability
Usability
Reusability
Efficiency
Learnability
Software Quality Assurance (SQA):
Ensures quality through a three-pronged approach:
Organization-wide policies, procedures, and standards.
Project-specific policies tailored from organization-wide templates.
Quality control to ensure adherence to procedures.
Standards for drafting SQA plans include ISO 9000-3 and ANSI/IEEE standards.
External entities can verify compliance with standards.
SQA Activities:
Applying technical methods for high-quality specifications and designs.
Conducting formal technical reviews to uncover quality problems.
Testing software with effective error detection methods.
Enforcing standards and controlling changes during development and maintenance.
Measurement to track software quality and assess improvements.
Record keeping and reporting for SQA information dissemination.
Advantages of SQA:
Fewer latent defects, reducing testing and maintenance efforts.
Higher reliability leading to greater customer satisfaction.
Reduced maintenance costs.
Lower overall life cycle costs of software.
Disadvantages of SQA:
Difficult to implement in small organizations due to resource constraints.
Represents a cultural change that can be challenging.
Requires budget allocation that may not have been planned.
Quality Reviews:
Validate product or process quality during life cycle phases.
Identify needed improvements and confirm areas of acceptable quality.
Different intents include defect removal, progress assessment, and consistency checks.
Cost Impact of Software Defects:
Errors passed through various stages amplify costs significantly.
Defect Amplification and Removal:
The cost of defects increases as they progress through the development stages.
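The amplification idea can be made concrete with a simple phase-by-phase model: each phase inherits defects from the previous phase, amplifies some of them, introduces new ones, and removes a fraction through reviews and tests. The sketch below is illustrative only; the function name, phase names, and all numeric factors are assumed for the example, not taken from the lecture:

```python
def amplify(passed, amplification, new_errors, detection_eff):
    """One phase of a defect-amplification model.

    passed        -- defects inherited from the previous phase
    amplification -- factor by which inherited defects multiply
    new_errors    -- defects newly introduced in this phase
    detection_eff -- fraction of defects this phase's QC catches
    """
    total = passed * amplification + new_errors
    return total * (1.0 - detection_eff)

# Trace defects through three phases (all numbers hypothetical).
passed = 10  # defects leaving requirements
for phase, (amp, new, eff) in {
    "design": (1.5, 25, 0.5),
    "code":   (3.0, 20, 0.6),
    "test":   (1.0, 0, 0.5),
}.items():
    passed = amplify(passed, amp, new, eff)
    print(f"{phase}: {passed:.1f} defects pass through")
```

Raising the detection efficiency of the early phases shrinks every later term, which is the quantitative argument for investing in reviews before testing.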
Review Checklists:
Various checklists for systems engineering, project planning, requirements analysis, design,
coding, testing, and maintenance to ensure thorough reviews.
Formal Technical Review (FTR):
A quality assurance activity to uncover errors and verify requirements compliance.
Involves various types of reviews (walkthroughs, inspections, etc.).
Review Meeting Components:
Involves a small group of reviewers, a producer, a review leader, and a recorder.
Preparation and duration guidelines for effective meetings.
Review Reporting and Recordkeeping:
Summary reports and issues lists to document findings and guide corrections.
Guidelines for FTR:
Focus on the product, set agendas, limit debates, and maintain written records.
Reviewer’s Preparation:
Understand context, skim materials, annotate, and pose questions.
Results of the Review Meeting:
Decisions include acceptance, rejection, or provisional acceptance of the product.
Software Reliability:
Defined as the probability of failure-free operation in a specified environment.
Reliability is subjective and varies by user needs.
Software Faults and Failures:
A failure is observed erroneous behavior; a fault is a static characteristic that can cause a failure.
Reliability Improvements:
Focus on removing faults in frequently used software portions.
Fault removal does not directly correlate to reliability improvement.
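One common way to quantify "probability of failure-free operation" (not specified in these notes) is the exponential reliability model, which assumes a constant failure rate. A minimal sketch under that assumption:

```python
import math

def reliability(failure_rate, t):
    """Probability of failure-free operation for duration t,
    assuming a constant failure rate (exponential model).
    This is one illustrative model, not the only one."""
    return math.exp(-failure_rate * t)

# e.g. 0.01 failures/hour over a 10-hour mission
r = reliability(0.01, 10)  # ≈ 0.905
```

The model also illustrates why fault removal and reliability are not directly correlated: reliability depends on the failure rate users actually experience, so removing faults in rarely executed code barely moves it.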
