HETT207 Lecture Notes - Unit 4
Unit 4: Software Quality
Unit Outline
• Software quality concepts
• Software review techniques
• Software Quality Assurance (SQA)
• Software Testing
• Software Configuration Management
Informal Review
• Usually involves “desk checks” and casual meetings
• Can be applied to any work product; team members simply sit together to review or walk through the work product without a strict prior meeting agenda or format
• Errors are noted and then corrected
• Another example is pair programming, as advocated by Extreme Programming (XP)
FTR (Formal Technical Review) Meeting
• Has the following constraints
– 3-5 people should be involved
– Advance preparation (i.e., reading) should occur for each participant, but should require no more than two hours apiece and involve only a small subset of components
– The duration of the meeting should be less than two hours
• Focuses on a specific work product (a software requirements specification, a detailed design, a source code listing)
• Activities before the meeting
– The producer informs the project manager that a work product is complete and ready for review
– The project manager contacts a review leader, who evaluates the product for readiness, generates copies of product materials, and distributes them to the reviewers for advance preparation
– Each reviewer spends one to two hours reviewing the product and making notes before the actual review meeting
– The review leader establishes an agenda for the review meeting and schedules the time and location
SQA Group
• An SQA group should be formed, composed of stakeholders from different departments
• It serves as the customer's in-house representative
• Assists the software team in achieving a high-quality product
• Views the software from the customer's point of view
– Does the software adequately meet quality factors?
– Has software development been conducted according to pre-established standards?
– Have technical disciplines properly performed their roles as part of the SQA activity?
• Performs a set of activities that address quality assurance planning, oversight, record keeping, analysis, and reporting
SQA Goals
• The ultimate goals for SQA activities are:
i. Requirements quality
ii. Design quality
iii. Code quality
iv. Quality control effectiveness
Software Reliability
• Software reliability is the probability of failure-free operation of a software application in a specified environment for a specified time
• It is estimated using historical and development data
• Failure is defined as “non-conformance to software requirements”
• Given a set of valid requirements, all software failures can be traced to design or implementation problems, because software does not wear out like hardware
Software Reliability:
Measuring reliability (MTBF)
• Reliability is measured by mean-time-between-failure (MTBF):

    MTBF = MTTF + MTTR

  where MTTF is the mean time to failure and MTTR is the mean time to repair
Software Reliability:
Measuring availability
• Availability is the probability that a program is operating according to requirements at a given point in time:

    Availability = (MTTF / (MTTF + MTTR)) x 100%

• E.g., with MTTF = 68 and MTTR = 3: availability = (68 / (68 + 3)) x 100% ≈ 96%
• This is also referred to in terms of “nines”, i.e. 90% is “1 nine”, 99% is “2 nines”, and 99.9% is “3 nines”
• It basically measures the “uptime rate”
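The two formulas above can be mechanized directly. Below is a minimal Python sketch (not part of the original notes; function names are illustrative) that computes MTBF and availability from MTTF/MTTR figures:

```python
def mtbf(mttf: float, mttr: float) -> float:
    """Mean time between failures = mean time to failure + mean time to repair."""
    return mttf + mttr

def availability(mttf: float, mttr: float) -> float:
    """Uptime rate as a percentage: MTTF / (MTTF + MTTR) x 100."""
    return mttf / (mttf + mttr) * 100.0

# Figures from the example above: MTTF = 68, MTTR = 3
print(mtbf(68, 3))                 # 71
print(round(availability(68, 3)))  # 96, i.e. at least "1 nine"
```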
Software Reliability:
Measuring reliability (FIT)
• One of the major criticisms of MTBF is that it does not provide a projected failure rate
• This can be done using failures-in-time (FIT): a statistical measure of how many failures a program will have over a billion hours of operation, i.e. 1 FIT = 1 failure per billion hours of program operation
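A FIT figure is just a rescaled failure rate. The following minimal sketch (illustrative only; the observed counts are invented) converts a failure count over cumulative operating hours into FIT:

```python
# 1 FIT = 1 failure per 10**9 hours of operation.

def fit(failures: int, hours_of_operation: float) -> float:
    """Failures-in-time: projected failures per billion operating hours."""
    return failures / hours_of_operation * 1e9

# E.g., 2 failures observed over 500,000 cumulative hours of operation:
print(fit(2, 5e5))  # 4000.0 FIT
```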
Software Reliability:
Software Safety
• Software safety focuses on the identification and assessment of potential hazards that may affect software negatively and cause an entire system to fail
• It differs from software reliability
– Software reliability uses statistical analysis to determine the likelihood that a software failure will occur; however, the failure may not necessarily result in a hazard or mishap
– Software safety examines the ways in which failures result in conditions that can lead to a hazard or mishap; it identifies faults that may lead to failures
• Software failures are evaluated in the context of an entire computer-based system and its environment through the process of fault tree analysis or hazard analysis (a toy fault-tree evaluation is sketched below)
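To make fault tree analysis concrete, here is a toy sketch (entirely invented events, probabilities, and gate structure; not from the notes) that evaluates the probability of a top-level hazard from AND/OR gates over independent basic events:

```python
def p_or(*probs: float) -> float:
    """OR gate: the output event occurs if any independent input event occurs."""
    q = 1.0
    for p in probs:
        q *= (1.0 - p)
    return 1.0 - q

def p_and(*probs: float) -> float:
    """AND gate: the output event occurs only if all independent input events occur."""
    result = 1.0
    for p in probs:
        result *= p
    return result

# Hypothetical tree: hazard = (sensor fault AND watchdog fault) OR software fault
p_hazard = p_or(p_and(1e-3, 1e-2), 1e-4)
print(f"P(hazard) = {p_hazard:.2e}")  # ~1.10e-04
```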
Software Testing:
When is it completed?
• Every time a user executes the software, the program is being tested
• Testing usually stops when a project is running out of time, money, or both
• One approach is to divide the test results into various severity levels
– Then consider testing to be complete when defects at certain severity levels no longer occur or have been repaired or eliminated (see the sketch below)
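As a concrete (and entirely hypothetical) illustration of severity-based completion criteria, the sketch below checks open defect counts per severity level against assumed thresholds:

```python
# Hypothetical counts and thresholds; not from the notes.
OPEN_DEFECTS = {"critical": 0, "major": 1, "minor": 7}
EXIT_CRITERIA = {"critical": 0, "major": 0, "minor": 10}  # max open defects allowed

def testing_complete(open_defects: dict, exit_criteria: dict) -> bool:
    """Testing is 'complete' when every severity level is within its threshold."""
    return all(open_defects.get(level, 0) <= limit
               for level, limit in exit_criteria.items())

print(testing_complete(OPEN_DEFECTS, EXIT_CRITERIA))  # False: one major defect open
```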
Testing Strategies for Conventional
Software: Integration Testing
(Non-Incremental)
• Also called the “big bang” approach: all components are combined at once and the entire program is tested as a whole
• When errors are encountered, isolating their causes is difficult because of the size of the program under test
Testing Strategies for Conventional
Software: Integration Testing
(Incremental)
• A more modular and more planned approach than non-incremental integration
• Three kinds
i. Top-down integration
ii. Bottom-up integration
iii. Sandwich integration
• The program is constructed and tested in small increments
• Errors are easier to isolate and correct
• Interfaces are more likely to be tested completely
• A systematic test approach is applied
Testing Strategies for Conventional
Software: Integration Testing
(Incremental)
1. Top-down
• Modules are integrated by moving downward through the control hierarchy, beginning with the main module
• Subordinate modules are incorporated in either a depth-first or breadth-first fashion
– Depth-first: all modules on a major control path are integrated
– Breadth-first: all modules directly subordinate at each level are integrated
• Advantages
– This approach verifies major control or decision points early in the test process
• Disadvantages
– Stubs need to be created to substitute for modules that have not been built or tested yet; this code is later discarded (see the stub sketch below)
– Because stubs are used to replace lower-level modules, no significant data flow can occur until much later in the integration/testing process
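To illustrate what a stub looks like in practice, here is a minimal sketch (hypothetical module names; not from the notes) in which a throwaway stub stands in for a lower-level module so the upper-level control flow can be exercised:

```python
def compute_tax_stub(amount: float) -> float:
    """Stub for the not-yet-built tax module: returns a canned value so the
    higher-level control flow can be exercised."""
    return 0.0  # placeholder result; discarded once the real module exists

def checkout_total(subtotal: float, tax_fn=compute_tax_stub) -> float:
    """Upper-level module under test; the real tax function is injected later."""
    return subtotal + tax_fn(subtotal)

print(checkout_total(100.0))  # 100.0: control path verified, but data flow is fake
```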
Testing Strategies for Conventional
Software: Integration Testing
(Incremental)
2. Bottom-up
• Integration and testing starts with the most atomic modules in the control hierarchy
• Advantages
– This approach verifies low-level data processing early in the testing process
– The need for stubs is eliminated
• Disadvantages
– Driver modules need to be built to test the lower-level modules; this code is later discarded or expanded into a full-featured version (see the driver sketch below)
– Drivers inherently do not contain the complete algorithms that will eventually use the services of the lower-level modules; consequently, testing may be incomplete, or more testing may be needed later when the upper-level modules are available
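Correspondingly, a driver is a throwaway harness that exercises a low-level module before its real callers exist. A minimal sketch (hypothetical names and checks; not from the notes):

```python
def compute_tax(amount: float) -> float:
    """The real low-level module under test (15% flat rate assumed for illustration)."""
    return round(amount * 0.15, 2)

def driver() -> None:
    """Driver: feeds inputs to the low-level module and checks outputs; it lacks
    the real upper-level algorithms that will eventually call compute_tax."""
    assert compute_tax(100.0) == 15.0
    assert compute_tax(0.0) == 0.0
    print("low-level module passed the driver's checks")

driver()
```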
Testing Strategies for Conventional
Software: Integration Testing
(Incremental)
3. Sandwich
• Consists of a combination of both top-down and bottom-up integration
• Occurs both at the highest-level modules and at the lowest-level modules
• Proceeds using functional groups of modules, with each group completed before the next
– High- and low-level modules are grouped based on the control and data processing they provide for a specific program feature
– Integration within the group progresses in alternating steps between the high- and low-level modules of the group
– When integration for a certain functional group is complete, integration and testing moves on to the next group
• Reaps the advantages of both types of integration while minimizing the need for drivers and stubs
• Requires a disciplined approach so that integration doesn’t tend towards the “big bang” scenario
Validation Testing
• Validation testing follows integration testing
• The distinction between conventional and object-oriented software disappears
• Focuses on user-visible actions and user-recognizable output from the system
• Demonstrates conformity with requirements
• Designed to ensure that
– All functional requirements are satisfied
– All behavioral characteristics are achieved
– All performance requirements are attained
– Documentation is correct
– Usability and other requirements are met (e.g., transportability, compatibility, error recovery, maintainability)
• After each validation test, one of two conditions exists
– The function or performance characteristic conforms to specification and is accepted
– A deviation from specification is uncovered and a deficiency list is created
• A configuration review or audit ensures that all elements of the software configuration have been properly developed, cataloged, and have the necessary detail for entering the support phase of the software life cycle
Validation Testing:
Alpha & Beta Testing
• Alpha testing
– Conducted at the developer’s site by end users
– Software is used in a natural setting with developers watching intently
– Testing is conducted in a controlled environment
• Beta testing
– Conducted at end-user sites
– Developer is generally not present
– It serves as a live application of the software in an environment that cannot be controlled by the developer
– The end user records all problems that are encountered and reports these to the developers at regular intervals
• After beta testing is complete, software engineers make software modifications and prepare for release of the software product to the entire customer base
System Testing
• Recovery testing
– Tests for recovery from system faults
– Forces the software to fail in a variety of ways and verifies that recovery is properly performed
– Tests reinitialization, checkpointing mechanisms, data recovery, and restart for correctness
• Security testing
– Verifies that protection mechanisms built into a system will, in fact, protect it from improper access
• Stress testing
– Executes a system in a manner that demands resources in abnormal quantity, frequency, or volume
• Performance testing
– Tests the run-time performance of software within the context of an integrated system (see the sketch below)
– Often coupled with stress testing and usually requires both hardware and software instrumentation
– Can uncover situations that lead to degradation and possible system failure
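As a rough illustration of software-only stress/performance instrumentation, the sketch below (target function, call volume, and time budget are all assumptions) hammers a unit of work with an abnormal volume of calls and measures throughput:

```python
import time

def parse_record(line: str) -> list:
    """Hypothetical unit of work under test."""
    return line.split(",")

N = 1_000_000  # abnormal volume for stress testing
start = time.perf_counter()
for i in range(N):
    parse_record(f"{i},name-{i},active")
elapsed = time.perf_counter() - start

print(f"{N} calls in {elapsed:.2f}s ({N / elapsed:,.0f} calls/s)")
assert elapsed < 10.0, "performance degraded beyond the assumed budget"
```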
Debugging
• Debugging occurs as a consequence of successful testing
• It is still very much an art rather than a science
• Good debugging ability may be an innate human trait
• Large variances in debugging ability exist
• The debugging process begins with the execution of a test case
• Results are assessed, and a difference between expected and actual performance is observed
• This difference is a symptom of an underlying cause that lies hidden
• The debugging process attempts to match symptom with cause, thereby leading to error correction
Challenges of Debugging
• The symptom and the cause may be geographically remote
• The symptom may disappear (temporarily) when another error is corrected
• The symptom may actually be caused by non-errors (e.g., round-off inaccuracies)
• The symptom may be caused by human error that is not easily traced
• The symptom may be a result of timing problems, rather than processing problems
• It may be difficult to accurately reproduce input conditions, such as asynchronous real-time information
• The symptom may be intermittent, as in embedded systems involving both hardware and software
• The symptom may be due to causes that are distributed across a number of tasks running on different processors
Debugging: Strategies
• Objective of debugging is to find and correct the cause of a
software error
• Bugs are found by a combination of systematic evaluation,
intuition, and luck
• Debugging methods and tools are not a substitute for careful
evaluation based on a complete design model and clear source
code
• There are three main debugging strategies
i. Brute force
ii. Backtracking
iii. Cause elimination
Debugging: Strategies
1. Brute force
• Most commonly used and least efficient method
• Used when all else fails
• Involves the use of memory dumps, run-time traces, and output statements (see the trace sketch below)
• Often leads to wasted effort and time
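As one concrete brute-force tactic, the sketch below (illustrative only) uses Python's sys.settrace to print a crude run-time trace of every line executed in a suspect function:

```python
import sys

def suspect(x):
    # Hypothetical function under suspicion.
    y = x * 2
    z = y - 1
    return z // 3

def tracer(frame, event, arg):
    """Print each executed line; returning tracer keeps per-line tracing on."""
    if event == "line":
        print(f"executing {frame.f_code.co_name}:{frame.f_lineno}")
    return tracer

sys.settrace(tracer)
suspect(10)
sys.settrace(None)  # stop tracing
```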
Debugging: Strategies
2. Backtracking
• Can be used successfully in small programs
• The method starts at the location where a symptom has been
uncovered
• The source code is then traced backward (manually) until the
location of the cause is found
• In large programs, the number of potential backward paths may
become unmanageably large
Debugging: Strategies
3. Cause elimination
• Involves the use of induction or deduction and introduces the concept of binary partitioning (see the bisection sketch below)
– Induction (specific to general): reason from specific data about the error occurrence toward a general hypothesis about its cause
– Deduction (general to specific): show that a specific conclusion follows from a set of general premises
• Data related to the error occurrence are organized to isolate potential causes
• A cause hypothesis is devised, and the aforementioned data are used to prove or disprove the hypothesis
• Alternatively, a list of all possible causes is developed, and tests are conducted to eliminate each cause
• If initial tests indicate that a particular cause hypothesis shows promise, the data are refined in an attempt to isolate the bug
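Binary partitioning can be illustrated by bisecting an ordered list of changes to find the first bad one, in the spirit of git bisect. In this sketch the is_bad() oracle and the change IDs are invented:

```python
def first_bad_change(changes, is_bad):
    """Binary search for the earliest change at which the test starts failing,
    assuming all changes before it are good and all after it are bad."""
    lo, hi = 0, len(changes) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if is_bad(changes[mid]):
            hi = mid          # the culprit is at mid or earlier
        else:
            lo = mid + 1      # the culprit is after mid
    return changes[lo]

changes = list(range(1, 101))                        # hypothetical change IDs 1..100
print(first_bad_change(changes, lambda c: c >= 73))  # 73
```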
Software Testing Techniques:
Unit Testing Techniques
2. Black-box Testing: Categories
• Black-box testing attempts to find errors in the following categories (a sample test sketch follows the list)
– Incorrect or missing functions
– Interface errors
– Errors in data structures or external database access
– Behavior or performance errors
– Initialization and termination errors
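As a small illustration of deriving black-box tests purely from a specification, the sketch below tests a hypothetical grade_label() function using equivalence classes and boundary values; the function and its spec are invented for the example, and no knowledge of its internals is used:

```python
def grade_label(score: int) -> str:
    """Hypothetical unit under test: spec says valid scores are 0-100, pass mark is 50."""
    if not 0 <= score <= 100:
        raise ValueError("score out of range")
    return "pass" if score >= 50 else "fail"

# Equivalence classes and boundary values chosen from the spec alone:
assert grade_label(0) == "fail"      # lower boundary of the valid range
assert grade_label(49) == "fail"     # just below the pass mark
assert grade_label(50) == "pass"     # boundary of the passing class
assert grade_label(100) == "pass"    # upper boundary of the valid range
for bad in (-1, 101):                # invalid classes: behavior on bad input
    try:
        grade_label(bad)
        raise AssertionError("expected ValueError")
    except ValueError:
        pass
print("all black-box checks passed")
```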
SCM: Viewpoints
• There are different viewpoints of SCM from various roles, for example:
– Project manager: an auditing mechanism
[Figure: the SCM process as nested layers (identification, change control, version control, configuration auditing, and status reporting) surrounding the software configuration items (CSCIs)]
END