Chapter 2 Reviews
Purpose of Reviews
⚫ Serve as a filter for the software process
⚫ Are applied at various points during the software process
⚫ Uncover errors that can then be removed
⚫ Purify the software analysis, design, coding, and testing activities
⚫ Catch large classes of errors that escape the originator but are caught by
other practitioners
⚫ Save time by reducing the amount of rework
⚫ Technical reviews are also called peer reviews
⚫ Include the formal technical review (also called a walkthrough or
inspection)
◦ Acts as the most effective SQA filter
◦ Conducted by software engineers for software engineers
◦ Effectively uncovers errors and improves software quality
◦ Has been shown to be up to 75% effective in uncovering design
flaws (which constitute 50-65% of all errors in software)
What Reviews Are Not

What Do We Look For?
Defect Amplification
[Figure: the defect amplification model. Each development step receives errors from the previous step; some are passed through and some are amplified (1:x), and the step also generates new errors. Defect detection catches a percentage of the total (the detection efficiency), and the remaining errors are passed to the next step.]
Defect Amplification: No Reviews

Defect Amplification: With Reviews
Defect Amplification
⚫ In the example
◦ a software process that does NOT include reviews
● yields 94 errors at the beginning of testing and
● releases 12 latent defects to the field
◦ a software process that does include reviews
● yields 24 errors at the beginning of testing and
● releases 3 latent defects to the field
◦ a cost analysis indicates that, once the cost of correcting the
latent defects is taken into account, the process with NO reviews
costs approximately 3 times more than the process with reviews
(see the simulation sketch below)
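The mechanics behind these numbers can be illustrated with a short simulation. This is a minimal sketch, not the slide deck's own calculation: the step parameters below are hypothetical, and only the structure (pass-through, 1:x amplification, newly generated errors, detection efficiency) comes from the model above.

# Minimal sketch of the defect amplification model (hypothetical numbers).

def run_step(errors_in, passed_through, amplification, new_errors, detect_eff):
    """One development step of the amplification model.

    errors_in      -- errors arriving from the previous step
    passed_through -- fraction of incoming errors passed through untouched
    amplification  -- multiplier on the remaining incoming errors (the 1:x factor)
    new_errors     -- errors newly generated in this step
    detect_eff     -- fraction of total errors caught by review/testing here
    """
    amplified = errors_in * (1 - passed_through) * amplification
    total = errors_in * passed_through + amplified + new_errors
    return total * (1 - detect_eff)  # errors passed on to the next step

# Hypothetical three-step pipeline (e.g., design -> code -> test).
# Raising detect_eff in the early steps models adding reviews.
errors = 10.0
for passed, amp, new, eff in [(0.7, 1.5, 25, 0.0),   # no review at this step
                              (0.7, 1.5, 25, 0.6),   # 60%-effective review
                              (1.0, 1.0, 0, 0.5)]:   # testing step
    errors = run_step(errors, passed, amp, new, eff)
print(f"latent defects released: {errors:.1f}")

Running it with the early detection efficiencies set to zero, versus realistic review efficiencies, reproduces the qualitative effect above: errors compound when no filter is applied early.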
Review Metrics
⚫ Preparation effort, Ep: the effort (in person-hours) required to review a
work product prior to the actual review meeting
⚫ Assessment effort, Ea: the effort (in person-hours) expended during the
actual review
⚫ Rework effort, Er: the effort (in person-hours) dedicated to the correction
of those errors uncovered during the review
⚫ Work product size, WPS: a measure of the size of the work product that
has been reviewed (e.g., the number of UML models, the number of
document pages, or the number of lines of code)
⚫ Minor errors found, Errminor: the number of errors found that can be
categorized as minor (requiring less than some pre-specified effort to
correct)
⚫ Major errors found, Errmajor: the number of errors found that can be
categorized as major (requiring more than some pre-specified effort to
correct)
Metrics
⚫ The total review effort and the total number of
errors discovered are defined as:
● Ereview = Ep + Ea + Er
● Errtot = Errminor + Errmajor
⚫ The error density is the number of errors found per
unit of work product size; it drives the estimates in
the examples that follow (see the sketch below):
● Error density = Errtot / WPS
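These metrics translate directly into code. A minimal sketch, assuming a simple record per review; the example values at the bottom are hypothetical, not from the slides.

from dataclasses import dataclass

@dataclass
class ReviewRecord:
    """Metrics collected for one technical review."""
    e_prep: float    # Ep: preparation effort (person-hours)
    e_assess: float  # Ea: effort expended during the review meeting
    e_rework: float  # Er: effort to correct the errors uncovered
    wps: float       # work product size (e.g., document pages)
    err_minor: int   # minor errors found
    err_major: int   # major errors found

    @property
    def e_review(self) -> float:        # Ereview = Ep + Ea + Er
        return self.e_prep + self.e_assess + self.e_rework

    @property
    def err_tot(self) -> int:           # Errtot = Errminor + Errmajor
        return self.err_minor + self.err_major

    @property
    def error_density(self) -> float:   # Errtot / WPS
        return self.err_tot / self.wps

# Hypothetical review of a 32-page requirements model
r = ReviewRecord(e_prep=6, e_assess=4, e_rework=10, wps=32, err_minor=16, err_major=3)
print(r.e_review, r.err_tot, round(r.error_density, 2))  # 20 19 0.59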
An Example: I
⚫ If past history indicates that
◦ the average error density for a requirements
model is 0.6 errors per page, and a new
requirements model is 32 pages long,
◦ a rough estimate suggests that your software
team will find about 19 or 20 errors
(0.6 × 32 = 19.2) during the review of the document.
◦ If you find only 6 errors, either you've done an
extremely good job in developing the
requirements model or your review approach
was not thorough enough.
An Example: II
⚫ The effort required to correct a minor requirements error (immediately
after the review) was found to be 4 person-hours.
⚫ The effort required for a major requirements error was found to be 18
person-hours.
⚫ Examining the review data collected, you find that minor errors
occur about 6 times more frequently than major errors. The average
effort to find and correct a requirements error during review is therefore
the frequency-weighted mean: (6 × 4 + 1 × 18) / 7 = 6 person-hours.
⚫ Requirements-related errors uncovered during testing require an
average of 45 person-hours to find and correct. Using the averages
noted, we get:
⚫ Effort saved per error = Etesting − Ereviews
⚫ 45 − 6 = 39 person-hours/error
⚫ Since 22 errors were found during the review of the requirements
model, a saving of about 858 person-hours of testing effort would be
achieved. And that's just for requirements-related errors. (The
arithmetic is reproduced in the sketch below.)
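The worked example condenses to a few lines of arithmetic. A minimal sketch; all inputs come from the example above.

# Worked example: expected errors, review cost per error, and testing effort saved.
density = 0.6                 # historical error density (errors per page)
pages = 32
expected_errors = density * pages          # 19.2, i.e., about 19 or 20

effort_minor, effort_major = 4, 18         # person-hours to correct after review
minor_per_major = 6                        # minor errors ~6x as frequent as major
avg_review_effort = (minor_per_major * effort_minor + effort_major) / (minor_per_major + 1)  # 6.0

effort_testing = 45                        # person-hours to find/fix in testing
saved_per_error = effort_testing - avg_review_effort   # 39 person-hours/error
errors_found = 22
print(f"expected errors: {expected_errors:.1f}")
print(f"saving: {saved_per_error * errors_found:.0f} person-hours")  # 858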
Overall
Reference Model for Technical Reviews
1. Distinct roles are explicitly defined for the reviewers
2. There is a sufficient amount of planning and preparation for the review
3. A distinct structure for the review (including tasks and internal work products) is defined
4. Follow-up by the reviewers occurs for any corrections that are made
Informal Reviews
⚫ Informal reviews include:
◦ a simple desk check of a software engineering work
product with a colleague
◦ a casual meeting (involving more than two people) for
the purpose of reviewing a work product, or
◦ the review-oriented aspects of pair programming
⚫ An informal review involves no advance planning or preparation,
no agenda or meeting structure, and no follow-up on the
errors that are uncovered
⚫ Pair programming (two people working at one computer
workstation to create code) encourages continuous
review as a work product (design or code) is created
Informal Reviews
⚫ One way to improve the efficacy of a desk check review is
◦ to develop a set of simple review checklists for each major work
product produced (see the sketch after this list)
◦ checklists may include generic questions, but they still serve to
guide the reviewers as they check the work product
⚫ For an interface review, the checklist might include:
◦ Is the layout designed using standard conventions? Left to right?
Top to bottom?
◦ Does the presentation need to be scrolled?
◦ Are colour and placement, typeface, and size used effectively?
◦ Are all navigation options or functions represented at the same
level of abstraction?
◦ Are all navigation choices clearly labelled?
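A checklist like the one above can be kept as simple structured data so that desk checks stay lightweight but consistent. A minimal sketch; the helper below is hypothetical, not a prescribed tool.

# Hypothetical per-work-product review checklist for a desk check.
INTERFACE_CHECKLIST = [
    "Is the layout designed using standard conventions (left to right, top to bottom)?",
    "Does the presentation need to be scrolled?",
    "Are colour and placement, typeface, and size used effectively?",
    "Are all navigation options or functions at the same level of abstraction?",
    "Are all navigation choices clearly labelled?",
]

def run_desk_check(checklist):
    """Walk the reviewer through each item, returning the ones that failed."""
    findings = []
    for item in checklist:
        answer = input(f"{item} [y/n] ")
        if answer.strip().lower() != "y":
            findings.append(item)  # flag for rework
    return findings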
Formal Technical Reviews
⚫ The objectives of an FTR are:
◦ to uncover errors in function, logic, or implementation
for any representation of the software
◦ to verify that the software under review meets its
requirements
◦ to ensure that the software has been represented
according to predefined standards
◦ to achieve software that is developed in a uniform
manner
◦ to make projects more manageable
⚫ The FTR is actually a class of reviews that includes
walkthroughs and inspections.
The Review Meeting
⚫ Between three and five people (typically)
should be involved in the review.
⚫ Advance preparation should occur but
should require no more than two hours of
work for each person.
⚫ The duration of the review meeting should
be less than two hours.
⚫ Focus is on a work product (e.g., a portion of a
requirements model, a detailed component
design, source code for a component)
The Players
review
leader standards bearer (SQA)
producer
maintenance
oracle
recorde reviewer
r user rep
19
The Players
⚫ Producer: the individual who has developed the work product
◦ informs the project leader that the work product is complete and
that a review is required
⚫ Review leader: evaluates the product for readiness, generates copies
of product materials, and distributes them to two or three reviewers
for advance preparation
⚫ Reviewer(s): expected to spend between one and two hours reviewing
the product, making notes, and otherwise becoming familiar with the work
⚫ Recorder: the reviewer who records (in writing) all important issues
raised during the review
Conducting the Review
⚫ Review the product, not the producer.
⚫ Set an agenda and maintain it.
⚫ Limit debate and rebuttal.
⚫ Enunciate problem areas, but don't attempt to solve every problem
noted.
⚫ Take written notes.
⚫ Limit the number of participants and insist upon advance
preparation.
⚫ Develop a checklist for each product that is likely to be reviewed.
⚫ Allocate resources and schedule time for FTRs.
⚫ Conduct meaningful training for all reviewers.
⚫ Review your early reviews.
Formal Technical Review (FTR)
⚫ Objectives: the same as listed under "Formal Technical Reviews" above
(uncovering errors; verifying requirements; ensuring predefined standards;
achieving uniform development; making projects more manageable)
⚫ Serves as a training ground for junior software engineers to observe
different approaches to software analysis, design, and construction
⚫ Promotes backup and continuity because a number of people become
familiar with other parts of the software
⚫ May sometimes be a sample-driven review
◦ Project managers must quantify those work products that are the primary
targets for formal technical reviews
◦ The sample of products that are reviewed must be representative of the
products as a whole
The FTR Meeting
⚫ Has the following constraints
◦ Three to five people (typically) should be involved
◦ Advance preparation (i.e., reading) should occur for each participant but should
require no more than two hours apiece and involve only a small subset of
components
◦ The duration of the meeting should be less than two hours
⚫ Focuses on a specific work product (a software requirements specification, a
detailed design, a source code listing)
⚫ Activities before the meeting
◦ The producer informs the project manager that a work product is complete and
ready for review
◦ The project manager contacts a review leader, who evaluates the product for
readiness, generates copies of product materials, and distributes them to the
reviewers for advance preparation
◦ Each reviewer spends one to two hours reviewing the product and making notes
before the actual review meeting
◦ The review leader establishes an agenda for the review meeting and schedules the
time and location
The FTR Meeting (continued)
⚫ Activities during the meeting
◦ The meeting is attended by the review leader, all reviewers, and the producer
◦ One of the reviewers also serves as the recorder for all issues and decisions
concerning the product
◦ After a brief introduction by the review leader, the producer proceeds to "walk
through" the work product while reviewers ask questions and raise issues
◦ The recorder notes any valid problems or errors that are discovered; no time or
effort is spent in this meeting to solve any of these problems or errors
⚫ Activities at the conclusion of the meeting
◦ All attendees must decide whether to
● Accept the product without further modification
● Reject the product due to severe errors (After these errors are corrected, another
review will then occur)
● Accept the product provisionally (Minor errors need to be corrected but no additional
review is required)
◦ All attendees then complete a sign-off in which they indicate that they took part in
the review and that they concur with the findings
The FTR Meeting (continued)
⚫ Activities following the meeting
◦ The recorder produces a list of review issues that
● Identifies problem areas within the product
● Serves as an action item checklist to guide the producer in making corrections
◦ The recorder includes the list in an FTR summary report
● This one- to two-page report describes what was reviewed, who reviewed it, and
what the findings and conclusions were
◦ The review leader follows up on the findings to ensure that the producer
makes the requested corrections
FTR Guidelines
1) Review the product, not the producer
2) Set an agenda and maintain it
3) Limit debate and rebuttal; conduct in-depth discussions off-line
4) Enunciate problem areas, but don't attempt to solve the problem
noted
5) Take written notes; utilize a wall board to capture comments
6) Limit the number of participants and insist upon advance
preparation
7) Develop a checklist for each product in order to structure and
focus the review
8) Allocate resources and schedule time for FTRs
9) Conduct meaningful training for all reviewers
10) Review your earlier reviews to improve the overall review process
Sample-Driven Reviews (SDRs)
⚫ SDRs attempt to quantify those work products that are
primary targets for full FTRs.
To accomplish this …
⚫ Inspect a fraction ai of each software work product i. Record
the number of faults fi found within ai.
⚫ Develop a gross estimate of the number of faults within work
product i by multiplying fi by 1/ai (see the sketch after these steps).
⚫ Sort the work products in descending order according to the
gross estimate of the number of faults in each.
⚫ Focus available review resources on those work products that
have the highest estimated number of faults.
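This procedure is easy to express as code. A minimal sketch of the prioritization step; the work products and counts below are hypothetical.

# Sample-driven review prioritization: estimate total faults per work
# product from a partial inspection, then review the worst first.
work_products = [
    # (name, fraction inspected a_i, faults found f_i)
    ("requirements model", 0.20, 4),
    ("architecture design", 0.25, 2),
    ("component X source", 0.10, 3),
]

# Gross estimate of faults in each product: f_i * (1 / a_i)
estimates = [(name, f / a) for name, a, f in work_products]

# Sort descending; focus full FTRs on the highest estimates
for name, est in sorted(estimates, key=lambda t: t[1], reverse=True):
    print(f"{name}: ~{est:.0f} estimated faults")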
Post-Mortem Evaluations
⚫ A mechanism to determine what went right and what went
wrong when software engineering process and practice are
applied in a specific project
⚫ Unlike an FTR, which focuses on a specific work product, a PME
examines the entire software project, focusing on both
"excellences" (that is, achievements and positive experiences)
and "challenges" (problems and negative experiences)
⚫ Often conducted in a workshop format, a PME is attended by
members of the software team and stakeholders.
⚫ The intent is to identify excellences and challenges and to
extract lessons learned from both.
⚫ The objective is to suggest improvements to both process and
practice going forward.