Course Name: CMPE287/187 - Software Quality Testing
A. (20%) Figure 1 shows the control structure of modules in a program written in C. Each node (say M1) stands
for a module. Each link between two nodes represents a control dependency between them. For example, the
link from M1 to M5 indicates a program call between the two nodes. Assume Module 9 is changed. Please
follow Lee White’s firewall concept to identify a minimum firewall to enclose all affected modules and re-integration links after changing Module 9. Please present this module firewall by answering the following
questions:
Figure 1. Control structure of modules M1–M11 (M9 is the changed module).
b) (5%) What are the re-integration links inside Module 9’s firewall?
The re-integration links inside Module 9’s firewall are:
M1 → M5
M5 → M7
M7 → M9
c) (5%) Which modules are not directly affected, but must be re-integrated?
Modules M1 and M5 are not directly affected, but they must be re-integrated.
f) (5%) What are the re-integration links inside the class firewall?
The re-integration links inside the class firewall are:
C1 → C5
C5 → C7
C6 → C7
C5 → C6
C8 → C6
g) (5%) Which classes are not directly affected, but must be re-integrated?
Classes C1 and C8 are not directly affected, but they must be re-integrated.
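Both the module firewall and the class firewall above fall out of the same backward traversal: start at the changed node and repeatedly collect everything that calls (or otherwise depends on) what is already inside the firewall. Below is a minimal Java sketch of that traversal; the class name and the map-based graph encoding are illustrative, and only the call links recoverable from the answers above are encoded.

import java.util.*;

public class FirewallSketch {

    // Walks caller links backwards from the changed node, collecting every
    // module that directly or transitively depends on it, plus the links
    // that must be re-tested during re-integration.
    static void firewall(Map<String, List<String>> calls, String changed) {
        Set<String> affected = new LinkedHashSet<>();
        List<String> links = new ArrayList<>();
        Deque<String> work = new ArrayDeque<>();
        work.push(changed);
        while (!work.isEmpty()) {
            String target = work.pop();
            for (Map.Entry<String, List<String>> e : calls.entrySet()) {
                if (e.getValue().contains(target)) {      // e.getKey() calls target
                    links.add(e.getKey() + " -> " + target);
                    if (affected.add(e.getKey())) {
                        work.push(e.getKey());
                    }
                }
            }
        }
        System.out.println("Modules to re-integrate: " + affected);
        System.out.println("Re-integration links:    " + links);
    }

    public static void main(String[] args) {
        // Only the call links recoverable from the answers above are encoded.
        Map<String, List<String>> calls = Map.of(
                "M1", List.of("M5"),
                "M5", List.of("M7"),
                "M7", List.of("M9"));
        firewall(calls, "M9");
    }
}

Running main prints M7, M5, and M1 with the three re-integration links, matching parts b) and c); feeding in the class links of Figure 2 with C7 as the changed node reproduces parts f) and g) in the same way.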
You may need to read the published papers listed in the Software-test Yahoo group. Three papers are listed
below:
Paper 1: Class Firewall, Test Order, and Regression Testing of Object-Oriented Programs
Paper 2: A Test Strategy for Object-Oriented Programs
Paper 3: Change Impact Identification in Object-Oriented Software Maintenance
Figure 2. Class diagram of classes C0–C10 with inheritance (I), aggregation (AG), and association (AS) links (C7 is the changed class).
Question #3: Performance Testing (20%)
Assume you are assigned to conduct performance testing for your project, including component performance evaluation and system performance evaluation. Please read Chapter 11 and related published papers on the Internet to find your answers to the following questions:
1. What component evaluation metrics can be used? Please define two of them in detail. If you take them from other published papers, please provide the citation and references. (6%)
2. What kinds of system performance evaluation models can be used for component-based systems? Please define your performance evaluation model(s). In case you get the information from other published papers, please provide the citation and references. (6%)
System Performance Evaluation Models – a system performance evaluation model is a well-defined model that presents a system’s performance attributes, parameters, and behaviors. Such models provide the foundation for selecting a performance evaluation strategy and metrics, and they help engineers construct techniques and use tools to support performance tracking, data collection, monitoring, and performance evaluation.
1. Component-Based Scenario Model – an event-based scenario diagram that represents the
interaction sequences among components in a component-based system. It consists of a set of
component nodes, interaction sequences, and events. This model helps engineers measure
function-oriented component and system performance based on event-based function scenarios.
It can be used to validate functional processing speed, event or transaction latency, component
or system throughput, and system availability and reliability.
2. Component-Based Transaction Model – a model based on transactions. It consists of a set of
component nodes, a set of transactions, and the data or messages associated with those
transactions. It helps engineers evaluate transaction speed, throughput, reliability, and
availability; a minimal data-structure sketch of this model follows.
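As a concrete illustration of the component-based transaction model, here is a minimal Java sketch. The Transaction record, the component names on the path (borrowed from the elevator project’s vocabulary), and the throughput helper are hypothetical illustrations under the definitions above, not APIs from the cited papers.

import java.util.*;

public class TransactionModelSketch {

    // A transaction: an id, the component nodes it traverses, and timing data.
    record Transaction(String id, List<String> componentPath,
                       long startMillis, long endMillis) {
        long latencyMillis() { return endMillis - startMillis; }
    }

    // Throughput = completed transactions per second over the observation window.
    static double throughput(List<Transaction> completed, long windowMillis) {
        return completed.size() / (windowMillis / 1000.0);
    }

    public static void main(String[] args) {
        List<Transaction> log = List.of(
                new Transaction("t1", List.of("FloorPanel", "Scheduler", "Car"), 0, 180),
                new Transaction("t2", List.of("UserPanel", "Scheduler", "Car"), 50, 300));
        for (Transaction t : log) {
            System.out.println(t.id() + " latency: " + t.latencyMillis() + " ms");
        }
        System.out.println("throughput: " + throughput(log, 1_000) + " tx/s");
    }
}

Because each transaction carries its component path and timing data, the same transaction log can drive latency, throughput, and (by additionally marking failed transactions) reliability and availability evaluation.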
3. What system performance evaluation metrics are useful for measuring component-based systems? Please define at least two
system performance evaluation (or measurement) metrics. In case you get the information from other published papers,
please provide the citation and references. (8%)
Performance evaluation metrics can be classified into five classes: processing speed (such as functional
processing speed and response time to users), throughput, reliability, availability, and scalability.
1. Speed-Related Metrics – a common speed metric is user response time, which is used to measure the
maximum, minimum, and average response times for different user groups in a system. For components
with message communication functions, engineers need to check the processing time of different types of
messages. For a database, they need to check the processing time of different database queries, as well as
the connection time; this is useful for performance tuning of the database, improving data retrieval speed
by optimizing queries. To measure domain-specific functional speed, engineers need to define special
performance metrics for each domain-specific function, such as call processing in a customer relationship
management system, where we need to evaluate the call processing time for each type of call based on the
number of agents available. Another type of processing-speed metric is the latency metric, which accounts
for delay time, such as Java applet download time.
2. Availability Metrics – system availability is defined as the ratio of the system’s available time to the
evaluation time (available + unavailable time); based on this, system unavailability can be computed as
1 – availability. This metric can be applied when software components are considered as black boxes. It
has two limitations: first, it establishes no relation between component availability and the functions a
component supports; second, it is not applicable to components with fault-tolerant features and high-
availability capability. However, today’s web-based systems, such as e-commerce and defense systems,
require high availability, so we have to propose ways to measure their availability.
The first is function-based component availability, which refers to the ratio of the total time a component
is available to support its system functions to the performance test period during performance evaluation.
Since this metric establishes a direct relationship between component availability and the supported
functions, component availability can be measured by exercising all component functions during
component evaluation.
The second is the metric measurement for high availability, where a highly available component is a
cluster consisting of N redundant components that are actively running at the same time to support the
same set of functional features as a black box. This metric is computed as the ratio of the available time
of the highly available component over the sum of the available and unavailable time for this highly
available component.
3. Reliability Metrics – reliability can be evaluated based on the reliability of a service, in which a function
R(t) is used to present the probability that a service survives until time t. Reliability of service is usually
characterized by specifying the MTTF (Mean Time To Failure) or MTBF (Mean Time Between Failures);
an exponential distribution describes the possible occurrence of failures. Component reliability here is
defined based on the uptime and downtime of services during the performance evaluation period. Hence,
component reliability can be formulated as the ratio of the total uptime over the sum of uptime and
downtime. The reliability of a highly available component (C-HA) can be evaluated the same way, as
uptime over uptime plus downtime, where downtime also includes the recovery time. This uses the
single-failure criterion, which says that when one component is down, the whole system is down. The
other method is based on functional services: for N redundant components in a cluster, if at least one
component is available to provide the service, then the component is considered to be in uptime. A small
computation sketch for these metric classes follows.
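To make these metric classes concrete, here is a minimal computation sketch in Java. The class and method names are hypothetical, but the formulas are the ones defined above: min/max/average response time, availability and reliability as time ratios, and reliability of service R(t) = e^(-t/MTTF) under the exponential failure model.

public class PerformanceMetrics {

    // Min, max, and average user response time over a set of samples (ms).
    static double[] responseTimeStats(double[] samplesMs) {
        double min = Double.MAX_VALUE, max = 0, sum = 0;
        for (double t : samplesMs) {
            min = Math.min(min, t);
            max = Math.max(max, t);
            sum += t;
        }
        return new double[] { min, max, sum / samplesMs.length };
    }

    // Availability = available time / (available + unavailable time);
    // unavailability is then 1 - availability.
    static double availability(double availableTime, double unavailableTime) {
        return availableTime / (availableTime + unavailableTime);
    }

    // Component reliability = total uptime / (uptime + downtime); for a
    // highly available component (C-HA), downtime must include recovery time.
    static double reliability(double uptime, double downtime) {
        return uptime / (uptime + downtime);
    }

    // Reliability of service under the exponential failure model:
    // R(t) = e^(-t / MTTF), the probability a service survives until time t.
    static double survivalProbability(double t, double mttf) {
        return Math.exp(-t / mttf);
    }

    public static void main(String[] args) {
        double[] stats = responseTimeStats(new double[] { 120, 250, 180 });
        System.out.printf("min=%.0f max=%.0f avg=%.1f ms%n", stats[0], stats[1], stats[2]);
        System.out.println("availability = " + availability(980, 20));   // 0.98
        System.out.println("reliability  = " + reliability(950, 50));    // 0.95
        System.out.println("R(100h) with MTTF=1000h = " + survivalProbability(100, 1000));
    }
}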
Hint: your answer could address the following perspectives for performance evaluation:
a) reliability, b) processing speed, and c) throughput.
Please check other published papers by Jerry Gao and others on this topic.
Question #4: Change Analysis and Impact Analysis for your Component-Based Software Testing
Project (40%)
You are asked to conduct change analysis and impact analysis after you have changed your component-based elevator simulation
system.
Please identify and list the changes and impacts based on the requested changes to your elevator system.
(a) What are the changes and impacts of adding an “Indicator” to the “Floor Panel” component? Please list the component
change firewalls based on Figure 3 and Figure 4. (10%)
(c) What is the component unit test firewall for your black-box test suite for the Floor Panel component? (10%)
Please identify the following unit test sets in its black-box test set:
No.  Conditions                     TD1  TD2  TD3  TD4  TD5  TD6  TD7
C1   No Buttons Pressed              T
C2   Up Button On                         T    T    F    F    T    T
C3   Up Button Off                        F    F    T    T    F    F
C4   Down Button On                       F    F    T    T    T    T
C5   Down Button Off                      T    T    F    F    F    F
C6   ActiveButtonColor Null          T    T    F    T    F    T    F
C7   ActiveButtonColor Configured         F    T    F    T    F    T
C8   FloorPanel Status Idle          T    T    F    T    F    T    F
C9   FloorPanel Status Active             F    T    F    T    F    T
C10  FloorPanelIndicator Idle        T    T    F    T    F    T    F
C11  FloorPanelIndicator Active           F    T    F    T    F    T

No.  Actions                        TD1  TD2  TD3  TD4  TD5  TD6  TD7
A1   Nothing Happens                 X
A2   SetActiveButtonColor                 X         X         X
A3   putFloorRequest in Queue             X    X    X    X    X    X
A5   UpdateIndicatorFloorNumber           X    X    X    X    X    X
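As an illustration of how one column of this decision table can be exercised, here is a minimal, self-contained Java sketch for TD2 (up button pressed, ActiveButtonColor null, panel and indicator idle). The FloorPanel class below is a toy stand-in, not the project’s actual component; only the condition/action structure comes from the table.

import java.util.ArrayDeque;
import java.util.Deque;

public class FloorPanelDecisionTableSketch {

    static class FloorPanel {
        String activeButtonColor;                         // C6: null until configured
        final Deque<Integer> requestQueue = new ArrayDeque<>();
        int indicatorFloor = -1;
        final int currentFloor = 3;

        void pressUpButton() {                            // condition C2: Up Button On
            if (activeButtonColor == null) {
                activeButtonColor = "GREEN";              // action A2 SetActiveButtonColor
            }
            requestQueue.add(currentFloor);               // action A3 putFloorRequest in Queue
            indicatorFloor = currentFloor;                // action A5 UpdateIndicatorFloorNumber
        }
    }

    public static void main(String[] args) {              // run with: java -ea
        FloorPanel panel = new FloorPanel();               // TD2 preconditions hold
        panel.pressUpButton();
        // Verify the three expected actions of the TD2 column:
        assert panel.activeButtonColor != null;            // A2
        assert panel.requestQueue.size() == 1;             // A3
        assert panel.indicatorFloor == panel.currentFloor; // A5
        System.out.println("TD2 behaves as the decision table specifies");
    }
}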
1) Reusable black-box test cases, such as reusable state-based (or decision-table-based) test cases.
The reusable black-box test cases are:
T2 – Indicator updated, FloorPanelIndicatorStatus: Idle
T3 – Indicator updated, FloorPanelIndicatorStatus: Active
T4 – Indicator updated, FloorPanelIndicatorStatus: Idle
T5 – Indicator updated, FloorPanelIndicatorStatus: Active
T6 – Indicator updated, FloorPanelIndicatorStatus: Idle
T7 – Indicator updated, FloorPanelIndicatorStatus: Active
2) New black-box test cases, such as new state-based (or decision-table-based) test cases.
The new black-box test cases are:
New test case – all FloorPanelIndicators display the same floor number for a particular car, whether Active or
Idle.
3) Deleted black-box test cases, such as deleted state-based (or decision-table-based) test cases.
We can delete the following test case, since it was already covered in the previous manual test report:
T1 – No change, FloorPanelIndicatorStatus: Idle
d) What is your system test firewall due to the change of the Floor Panel component? (10%) Please identify the
following system-level function test sets in the system test set:
No.  Conditions                   TT1  TT2  TT3  TT4  TT5  TT6  TT7  TT8  TT9  TT10 TT11 TT12 TT13
C1   No Buttons Pressed            T    F    F    F    F    F    F    F    F    F    F    F    F
C4   UserPanel Button On           F    F    F    F    F    F    F    F    F    T    T    T    T
C6   DoorPanel OpenButton On       F    F    F    T    F    F    F    T    F    F    F    F    T
C7   DoorPanel CloseButton On      F    F    F    F    T    F    F    F    T    F    F    F    F
C10  Car Status Idle               T    T    T    T    T    F    F    F    F    F    T    F    F
C18  FloorPanelIndicator Idle      T    T    F    T    T    F    T    T    T    F    T    T    T
C19  FloorPanelIndicator Active    F    F    T    F    F    T    F    F    F    T    F    F    F

No.  Actions
A1   Nothing Happens                           X
A2   FloorPanel SetActiveButtonColor           X X X X X X X X X X X X
A4   DoorPanel SetActiveButtonColor            X X X X X
A6   UserPanel SetActiveButtonColor            X X X X
A8   PutMessage                                X X X X X X X X X X X X
A9   ChangeFloorNumber                         X X X X
A10  ChangeDoorStatus                          X X X X X X X X
A11  ChangeCarStatus (after ChangeDoorStatus)  X X X X X X X X
A12  ChangeCarStatus (immediate)               X X
A13  UpdateIndicatorFloorNumber                X X X X
1) Reusable black-box test cases, such as reusable state-based (or decision-table-based) test cases.
T3, T6, T10 – Indicator updated, FloorPanelIndicatorStatus: Active
T2, T4, T5, T7, T8, T9, T11, T12, T13 – Indicator updated, FloorPanelIndicatorStatus: Idle
2) New black-box test cases, such as new state-based (or decision-table-based) test cases.
3) Deleted black-box test cases, such as deleted state-based (or decision-table-based) test cases.
T1 – No change, FloorPanelIndicatorStatus: Idle