System Analysis & Design Last Note


10
DESIGNING INPUT, OUTPUT & USER
INTERFACE

Unit Structure
10.1 Introduction
10.2 Output Design
10.3 Input Design
10.4 User Interface
10.5 Golden rules of Interface Design
10.6 Summary
10.1 INTRODUCTION :

Output is what the customer is buying when he or she pays for the development of a project. Inputs, databases, and processes exist to provide output.

A data input specification is a detailed description of the individual fields (data elements) on an input document together with their characteristics. In this chapter we will learn about Input design, Output design and the User Interface.
10.2 OUTPUT DESIGN :

Producing output is the most important task of any system. These guidelines apply for the most part to both paper and screen outputs. Output design is often discussed before other features of design because, from the customer's point of view, the output is the system. Output is what the customer is buying when he or she pays for the development of a project. Inputs, databases, and processes exist to provide output. Problems often associated with business information output are delayed information, information (data) overload, paper domination, excessive distribution, and a lack of tailoring.

For example:
Mainframe printers: high volume, high speed, located in the data centre
Remote site printers: medium speed, close to end user.

COM is Computer Output Microfilm. It is more compact than traditional output and may be produced as fast as non-impact printer output.
• Turnaround documents trim down the cost of internal information processing by reducing both data entry and the associated errors.
• Periodic reports have set frequencies such as daily or weekly; ad hoc reports are produced at irregular intervals.
• Detail and summary reports differ in that the former support the day-to-day operation of the business while the latter include statistics and ratios used by managers to assess the health of operations.
• Page breaks and control breaks allow for summary totals on key fields. Report requirements documents include general report information and field specifications; print layout sheets present a picture of what the report will actually look like.
• Page decoupling is the separation of pages into cohesive groups.

Two ways to create output for strategic purposes are:

(1) Make it compatible with processes outside the immediate scope of the system.
(2) Turn action documents into turnaround documents.

People often receive reports they do not require because the number
of reports received is perceived as a measure of power. Fields on a
report should be selected carefully to provide organized reports,
facilitate 80-column remote printing, and reduce information (data)
overload.

The types of fields which should be considered for business output are:
key fields for access to information, fields for control breaks, fields that
change, and exception fields.

Output may be designed to aid future change by stressing formless reports, defining field sizes for future growth, making field constants into variables, and leaving room on review reports for added ratios and statistics.

Output can now be more easily tailored to the needs of individual users because inquiry-based systems allow users themselves to generate ad hoc reports. An output intermediary can restrict access to key information and prevent unauthorized access. An information clearinghouse (or information centre) is a service centre that provides consultation, assistance, and documentation to encourage end-user development and use of applications. The specifications essential to describe the output of a system are: data flow diagrams, data flow specifications, data structure specifications, and data element specifications.

• Output Documents
• Printed Reports
• External Reports: for use or distribution outside the organization; often on pre-printed forms.
• Internal Reports: for use within the organization; not as "pretty"; stock paper, greenbar, etc.
• Periodic Reports: produced with a set frequency (daily, weekly, monthly, every fifth Tuesday, etc.).
• Ad-Hoc (On Demand) Reports: produced at irregular intervals, upon user demand.
• Detail Reports: one line per transaction.
• Review (Summary) Reports: an overview.
• Exception Reports: show only errors, problems, out-of-range values, or unexpected conditions or events.
10.3 INPUT DESIGN

A source document differs from a turnaround document in that the former holds data that change the status of a resource, while the latter is a machine-readable document. Transaction throughput is the number of error-free transactions entered during a specified time period. A document should be concise because longer documents contain more data, take longer to enter, and have a greater chance of data entry errors.

Numeric coding substitutes numbers for character data (e.g., 1=male, 2=female); mnemonic coding represents data in a form that is easier for the user to understand and remember (e.g., M=male, F=female). The more quickly an error is detected, the nearer the error is to the person who generated it, and so the error is more easily corrected. An example of an illogical combination in a payroll system would be an option to eliminate federal tax withholding.

By "multiple levels" of messages, I mean allowing the user to obtain


more detailed explanations of an error by using a help option, but not
forcing a long-lasting message on a user who does not want it. An error
suspense record would include the following fields: data entry operator
identification, transaction entry date, transaction entry time, transaction
type, transaction image, fields in error, error codes, date transaction
re-entered successfully.
• A data input specification is a detailed description of the individual fields (data elements) on an input document together with their characteristics (i.e., type and length).
• Be specific and precise, not general, ambiguous, or vague. (BAD: Syntax error, Invalid entry, General Failure)
• Don't just say what's wrong; be constructive and propose what needs to be done to correct the error condition.
• Be positive; avoid condemnation, possibly even to the point of avoiding pejorative terms such as "invalid", "illegal", or "bad".
• Be user-centric and attempt to convey to the user that he or she is in control by replacing imperatives such as "Enter date" with wording such as "Ready for date."
• Consider multiple message levels: the initial or default error message can be brief, but allow the user some mechanism to request additional information (a small sketch of this idea follows the list).
• Be consistent in terminology and wording:
i. Place error messages in the same place on the screen.
ii. Use consistent display characteristics (blinking, colour, beeping, etc.).
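As a minimal illustration of the last two guidelines, the following sketch (hypothetical; the field name, format, and message text are not from the original notes) returns a brief, constructive default message plus a detailed second level that is shown only when the user asks for help.

import datetime

def validate_date(value):
    """Return None if the date is valid, otherwise a two-level error message."""
    try:
        datetime.datetime.strptime(value, "%d/%m/%Y")
        return None
    except ValueError:
        return {
            # Brief default message: specific and constructive, not "Invalid entry".
            "brief": "Ready for date: please enter it as DD/MM/YYYY, e.g. 05/03/2024.",
            # Detailed level, displayed only when the user requests more information.
            "detail": "The date could not be read. Use two digits for the day and month "
                      "and four digits for the year, separated by '/'.",
        }

error = validate_date("2024-03-05")
if error:
    print(error["brief"])      # default: the short message
    # print(error["detail"])   # shown only on a help request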

10.4 USER INTERFACE

i. The primary differences between an interactive and a batch environment are:

• interactive processing is done during the organization's prime work hours;
• interactive systems usually have multiple, simultaneous users;
• the experience level of users runs from novice to highly experienced;
• developers must be good communicators because of the need to design systems with error messages, help text, and requests for user responses.

ii. The seven-step path that describes the structure of an interactive system is:

a. Greeting screen (e.g., company logo)
b. Password screen -- to prevent unauthorized use
c. Main menu -- allows a choice among the available applications
d. Intermediate menus -- further delineate the choice of functions
e. Function screens -- updating or deleting records
f. Help screens -- how to perform a task
g. Escape options -- from a particular screen or from the application

iii. An intermediate menu and a function screen differ in that the former provides choices from a set of related operations while the latter provides the ability to perform tasks such as updates or deletes.

iv. The difference between inquiry and command-language dialogue modes is that the former asks the user to provide a response to a simple question (e.g., "Do you really want to delete this file?") while the latter requires that the user know what he or she wants to do next (e.g., the MS-DOS C:> prompt, the VAX/VMS $ prompt, or a Unix shell prompt). GUI interfaces (Windows, Macintosh) provide dialog boxes to prompt the user for required information and parameters.

v. Directions for designing form-filling screens:

a. Fields on the screen should be in the same sequence as on the source document.
b. Use cuing to provide the user with information such as field formats (e.g., dates).
c. Provide default values.
d. Edit all entered fields for transaction errors.
e. Move the cursor automatically to the next entry field.
f. Allow entry to be free-form (e.g., do not make the user enter leading zeroes).

Consider having all entries made at the same position on the screen.

vi. A default value is a value automatically supplied by the application when the user leaves a field blank. For example, at SXU the screen on which student names and addresses are entered has a default value of "IL" for State, since the majority of students have addresses in Illinois. At one time "312" was a default value for Area Code, but with the additional area codes now in use (312, 773, 708, 630, 847) providing a default value for this field is no longer as useful. A small sketch combining cuing, defaults, and free-form entry follows.
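The sketch below is hypothetical (the read_field helper and its prompts are invented); it simply shows how points (b), (c), and (f) above can be applied: the prompt cues the expected format, a blank entry takes the default, and the user may omit leading zeroes.

def read_field(label, fmt_hint="", default=""):
    """Prompt with cuing (a format hint) and apply a default when the field is left blank."""
    prompt = label
    if fmt_hint:
        prompt += " (" + fmt_hint + ")"   # cuing: show the expected format
    if default:
        prompt += " [" + default + "]"    # show the default a blank entry will take
    value = input(prompt + ": ").strip()
    return value if value else default    # blank field -> default value

state = read_field("State", default="IL")             # default value, as in the SXU example
day = read_field("Day of month", fmt_hint="1-31")
day = day.zfill(2)                                     # free-form: leading zero supplied for the user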

vii. The eight parts of an interactive screen menu (laid out in the sketch below) are:

0. Locator -- what application the user is currently in
1. Menu ID -- allows the more experienced user access without going through the entire menu tree
2. Title
3. User instructions
4. Menu list
5. Escape option
6. User response area
7. System messages (e.g., error messages)
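A minimal, hypothetical text-mode sketch containing the eight parts listed above (the application, menu ID, and options are invented for illustration):

def show_menu():
    print("Payroll System")                                    # 0. Locator
    print("Menu ID: PAY-03")                                   # 1. Menu ID (for direct access)
    print("EMPLOYEE MAINTENANCE")                              # 2. Title
    print("Type the number of your choice and press Enter.")   # 3. User instructions
    print(" 1. Add employee")                                  # 4. Menu list
    print(" 2. Update employee")
    print(" 3. Delete employee")
    print(" X. Return to main menu")                           # 5. Escape option
    choice = input("Choice: ").strip().upper()                 # 6. User response area
    if choice not in {"1", "2", "3", "X"}:
        print("Please choose 1, 2, 3, or X.")                  # 7. System messages
    return choice

show_menu()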

viii. Highlighting should be used for gaining attention and so should be limited to critical information, unusual values, high-priority messages, or items that must be changed.

ix. Potential problems associated with the overuse of color are:

• Colors have different meanings to different people and in different cultures.
• A certain percentage of the population is known to have color vision deficiency.
• Some color combinations may be disruptive.

x. Information density is important because density that is too high makes it more difficult to discern the information presented on a screen, especially for novice users.

xi. Rules for defining message content include:

• Use active voice.
• Use short, simple sentences.
• Use affirmative statements.
• Avoid hyphenation and unnecessary punctuation.
• Separate text paragraphs with at least one blank line.
• Keep field width within 40 characters for easy reading.
• Avoid word contractions and abbreviations.
• Use non-threatening language.
• Avoid godlike language.
• Do not patronize.
• Use mixed-case (upper and lower case) letters.
• Use humour carefully.

xii. Symmetry is important to screen design because it is aesthetically pleasing and thus more comforting.

xiii. Input verification is asking the user to confirm his or her most recent input (e.g., "Are you sure you want to delete this file?").

xiv. Adaptive models are useful because they adapt to the user's experience level as he or she moves from novice to experienced over time.
xv. "Within User" sources of variation include: warm up, fatigue,
boredom, environmental conditions, and extraneous events.

xvi. The elements of the adaptive model are:

• Triggering question to determine user experience level
• Differentiation among user experience levels
• Alternative processing paths based on user level
• Transition of the casual user to the experienced processing path
• Transition of the novice user to the experienced processing path
• Allowing the user to move to an easier processing path

xvii. Interactive tasks can be designed for closure by providing the user with feedback indicating that a task has been completed.

xviii. Internal locus of control is making users feel that they are in control of the system, rather than that the system is in control of them.

xix. Examples of distracting use of surprise are:

• Highlighting
• Input verification
• Flashing messages
• Auditory messages

xx. Losing the interactive user can be avoided by using short menu paths and "You are here" prompts.

xxi. Some common user shortcuts are: direct menu access, function keys, and shortened response time.

10.5 GOLDEN RULES OF INTERFACE DESIGN:

1. Strive for consistency.
2. Enable frequent users to use shortcuts.
3. Offer informative feedback.
4. Design dialogs to yield closure.
5. Offer error prevention and simple error handling.
6. Permit easy reversal of actions.
7. Support internal locus of control.
8. Reduce short-term memory load.
10.6 SUMMARY :

In the above chapter, we learned the concepts of Output Design (output is what the customer is buying when he or she pays for the development of a project), Input Design, the User Interface, and the rules of interface design.


11
SOFTWARE TESTING STRATEGY

Unit Structure
11.1 Introduction
11.2 Strategic approach to software testing
11.3 Organizing for Software Testing
11.4 A Software Testing Strategy
11.5 Unit Testing
11.6 Integration testing
11.6.1 Top down Integration
11.6.2 Bottom-up Integration
11.7 Regression Testing
11.8 Comments on Integration Testing
11.9 The art of debugging
11.9.1 The Debugging Process
11.10 Summary
11.1 INTRODUCTION :

A strategy for software testing must accommodate low-level tests that are necessary to verify that a small source code segment has been correctly implemented as well as high-level tests that validate major system functions against customer requirements. A strategy must provide guidance for the practitioner and a set of milestones for the manager. Because the steps of the test strategy occur at a time when deadline pressure begins to rise, progress must be measurable and problems must surface as early as possible.

11.2 STRATEGIC APPROACH TO SOFTWARE TESTING

Testing is a set of activities that can be planned in advance and conducted systematically. For this reason a template for software testing -- a set of steps into which we can place specific test case design techniques and testing methods -- should be defined for the software process.

A number of software testing strategies have been proposed in the literature. All provide the software developer with a template for testing, and all have the following generic characteristics:
• Testing begins at the component level and works 'outward' toward the integration of the entire computer-based system.
• Different testing techniques are appropriate at different points in time.
• Testing is conducted by the developer of the software and (for large projects) an independent test group.
• Testing and debugging are different activities, but debugging must be accommodated in any testing strategy.


11.3 ORGANIZING FOR SOFTWARE TESTING

For every software project, there is an inherent conflict of interest that occurs as testing begins. The people who have built the software are now asked to test the software. This seems harmless in itself; after all, who knows the program better than its developers? Unfortunately, these same developers have a vested interest in demonstrating that the program is error free, that it works according to customer requirements, and that it will be completed on schedule and within budget. Each of these interests militates against thorough testing.

From a psychological point of view, software analysis and design (along with coding) are constructive tasks. The software engineer creates a computer program, its documentation, and related data structures. Like any builder, the software engineer is proud of the edifice that has been built and looks askance at anyone who attempts to tear it down. When testing commences, there is a subtle, yet definite, attempt to 'break' the thing that the software engineer has built. From the point of view of the builder, testing can be considered to be (psychologically) destructive.
There are often a number of misconceptions that can be erroneously inferred from the preceding discussion:
• that the developer of software should do no testing at all;
• that the software should be 'tossed over the wall' to strangers who will test it mercilessly;
• that testers get involved with the project only when the testing steps are about to begin.

Each of these statements is incorrect.


The software developer is always responsible for testing the individual
units (components) of the program, ensuring that each performs the
function for which it was designed. In many cases, the developer also
conducts integration testing -- a testing step that leads to the
construction (and test) of the complete program structure. Only after
the software architecture is complete does an independent test group
become involved.

The role of an independent test group (ITG) is to remove the inherent problems associated with letting the builder test the thing that has been built. Independent testing removes the conflict of interest that may otherwise be present. After all, personnel in the independent test group are paid to find errors.

However, the software engineer doesn’t turn the program over to ITG
and walk away. The developer and the ITG work closely throughout a
software project to ensure that thorough tests will be conducted: While
testing is conducted, the developer must be available to correct errors
that are uncovered.

The ITG is part of the software development project team in the sense
that it becomes involved during the specification activity and stays
involved (planning and specifying test procedures) throughout a large
project. However, in many cases the ITG reports to the software quality
assurance organization, thereby achieving a degree of independence
that might not be possible if it were a part of the software engineering
organization.

11.4 A SOFTWARE TESTING STRATEGY

The software engineering process may be viewed as a spiral. Initially, system engineering defines the role of software and leads to software requirements analysis, where the information domain, function, behaviour, performance, constraints, and validation criteria for software are established. Moving inward along the spiral we come to design and finally to coding. To develop computer software, we spiral inward along streamlines that decrease the level of abstraction on each turn.

A strategy for software testing may also be viewed in the context of the same spiral. Unit testing begins at the vortex of the spiral and concentrates on each unit (i.e., component) of the software as implemented in source code. Testing progresses by moving outward along the spiral to integration testing, where the focus is on design and the construction of the software architecture. Taking another turn outward on the spiral, we encounter validation testing, where requirements established as part of software requirements analysis are validated against the software that has been constructed. Finally, we arrive at system testing, where the software and other system elements are tested as a whole. To test computer software, we spiral out along streamlines that broaden the scope of testing with each turn.

Initially, tests focus on each component individually, ensuring that it functions properly as a unit; hence the name unit testing. Unit testing makes heavy use of white-box testing techniques, exercising specific paths in a module's control structure to ensure complete coverage and maximum error detection. Next, components must be assembled or integrated to form the complete software package. Integration testing addresses the issues associated with the dual problems of verification and program construction. Black-box test case design techniques are the most prevalent during integration, although a limited amount of white-box testing may be used to ensure coverage of major control paths. After the software has been integrated (constructed), a set of high-order tests is conducted. Validation criteria (established during requirements analysis) must be tested. Validation testing provides final assurance that software meets all functional, behavioural, and performance requirements. Black-box testing techniques are used exclusively during validation.

The last high-order testing step falls outside the boundary of software
engineering and into the broader context of computer system
engineering. Software, once validated, must be combined with other
system elements (e.g., hardware, people, and databases). System
testing verifies that all elements mesh properly and that overall system
function/performance is achieved.
11.5 UNIT TESTING

Unit testing focuses verification effort on the smallest unit of software design -- the software component or module. Using the component-level design description as a guide, important control paths are tested to uncover errors within the boundary of the module. The relative complexity of tests and uncovered errors is limited by the constrained scope established for unit testing. The unit test is white-box oriented, and the step can be conducted in parallel for multiple components.
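As a simple illustration (not part of the original notes), here is a minimal unit test for a hypothetical discount-calculation module, written with Python's standard unittest framework; it exercises a typical path, the boundary values, and the error path, all within the boundary of the single module.

import unittest

def discount(amount, rate):
    """Hypothetical unit under test: apply a percentage discount to an amount."""
    if not 0 <= rate <= 100:
        raise ValueError("rate must be between 0 and 100")
    return round(amount * (1 - rate / 100), 2)

class DiscountTest(unittest.TestCase):
    def test_typical_value(self):
        self.assertEqual(discount(200.0, 10), 180.0)

    def test_boundary_rates(self):
        self.assertEqual(discount(50.0, 0), 50.0)     # no discount
        self.assertEqual(discount(50.0, 100), 0.0)    # full discount

    def test_invalid_rate_rejected(self):
        with self.assertRaises(ValueError):
            discount(50.0, 120)

if __name__ == "__main__":
    unittest.main()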
11.6 INTEGRATION TESTING

A neophyte in the software world might ask a seemingly legitimate question once all modules have been unit tested: "If they all work individually, why do you doubt that they'll work when we put them together?" The problem, of course, is putting them together -- interfacing. Data can be lost across an interface; one module can have an inadvertent, adverse effect on another; sub-functions, when combined, may not produce the desired major function; individually acceptable imprecision may be magnified to unacceptable levels; global data structures can present problems. Sadly, the list goes on and on.

Integration testing is a systematic technique for constructing the program structure while at the same time conducting tests to uncover errors associated with interfacing. The objective is to take unit-tested components and build a program structure that has been dictated by design.

There is often a tendency to attempt non-incremental integration; that is, to construct the program using a 'big bang' approach. All components are combined in advance and the entire program is tested as a whole. And chaos usually results! A set of errors is encountered. Correction is difficult because the isolation of causes is complicated by the vast expanse of the entire program. Once these errors are corrected, new ones appear and the process continues in a seemingly endless loop.

Incremental integration is the antithesis of the big bang approach. The program is constructed and tested in small increments, where errors are easier to isolate and correct; interfaces are more likely to be tested completely; and a systematic test approach may be applied. In the sections that follow, a number of different incremental integration strategies are discussed.

11.6.1 Top down Integration

Top-down integration testing is an incremental approach to construction of the program structure. Modules are integrated by moving downward through the control hierarchy, beginning with the main control module (main program). Modules subordinate (and ultimately subordinate) to the main control module are incorporated into the structure in either a depth-first or breadth-first manner. Referring to the figure below, depth-first integration would integrate all components on a major control path of the structure. Selection of a major path is somewhat arbitrary and depends on application-specific characteristics. For example, selecting the left-hand path, components M1, M2, and M5 would be integrated first. Next, M8 or (if necessary for the proper functioning of M2) M6 would be integrated. Then the central and right-hand control paths are built. Breadth-first integration incorporates all components directly subordinate at each level, moving across the structure horizontally.

[Figure: sample program structure -- M1 at the top; M2, M3, and M4 on the next level; M5, M6, and M7 below them; M8 at the lowest level.]

The integration process is performed in a series of five steps:

1. The main control module is used as a test driver and stubs are substituted for all components directly subordinate to the main control module.
2. Depending on the integration approach selected (i.e., depth first or breadth first), subordinate stubs are replaced one at a time with actual components.
3. Tests are conducted as each component is integrated.
4. On completion of each set of tests, another stub is replaced with the real component.
5. Regression testing may be conducted to ensure that new errors have not been introduced.

The process continues from step 2 until the entire program structure is built. A small sketch of a stub being replaced by a real component follows.
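To make the idea of a stub concrete, here is a hypothetical Python sketch (the module names echo the M1/M2 figure, but the behaviour is invented): the main control module is exercised while its subordinate is still a stub, and the stub is later swapped for the real component without changing the test.

def m2_stub(order_total):
    """Stub for subordinate module M2: returns a fixed, known value."""
    return 0.0                       # just enough behaviour for M1 to run

def m2_real(order_total):
    """Real subordinate module M2, integrated later."""
    return order_total * 0.10 if order_total > 100 else 0.0

def m1_main(order_total, discount_fn):
    """Main control module M1, exercised first by the test driver."""
    return round(order_total - discount_fn(order_total), 2)

# Step 1: test M1 with the stub in place.
assert m1_main(150.0, m2_stub) == 150.0
# Steps 2-4: replace the stub with the actual component and re-run the tests.
assert m1_main(150.0, m2_real) == 135.0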

The top-down integration strategy verifies major control or decision points early in the test process. In a well-factored program structure, decision making occurs at upper levels in the hierarchy and is therefore encountered first. If major control problems do exist, early recognition is essential. If depth-first integration is selected, a complete function of the software may be implemented and demonstrated.

The top-down strategy sounds relatively uncomplicated, but in practice logistical problems can arise. The most common of these problems occurs when processing at low levels in the hierarchy is required to adequately test upper levels. Stubs replace low-level modules at the beginning of top-down testing; therefore, no significant data can flow upward in the program structure. The tester is left with three choices:

• delay many tests until stubs are replaced with actual modules;
• develop stubs that perform limited functions that simulate the actual module; or
• integrate the software from the bottom of the hierarchy upward.

The first approach (delay tests until stubs are replaced by actual modules) causes us to lose some control over the correspondence between specific tests and the incorporation of specific modules. This can lead to difficulty in determining the cause of errors and tends to violate the highly constrained nature of the top-down approach. The second approach is workable but can lead to significant overhead, as stubs become more and more complex. The third approach is called bottom-up testing.

11.6.2 Bottom-up Integration

Bottom-up integration testing, as its name implies, begins construction and testing with atomic modules (i.e., components at the lowest levels in the program structure). Because components are integrated from the bottom up, processing required for components subordinate to a given level is always available and the need for stubs is eliminated.

A bottom-up integration strategy may be implemented with the following steps:
• Low-level components are combined into clusters (sometimes called builds) that perform a specific software sub-function.
• A driver (a control program for testing) is written to coordinate test case input and output.
• The cluster is tested.
• Drivers are removed and clusters are combined moving upward in the program structure.

Integration follows the pattern illustrated in the figure below. Components are combined to form clusters 1, 2, and 3.
[Figure: bottom-up integration -- clusters 1 and 2, tested with drivers D1 and D2, are subordinate to Ma; cluster 3, tested with driver D3, is subordinate to Mb; Ma and Mb are in turn subordinate to Mc.]

Each of the clusters is tested using a driver (shown as a dashed block in the figure). Components in clusters 1 and 2 are subordinate to Ma. Drivers D1 and D2 are removed and the clusters are interfaced directly to Ma. Similarly, driver D3 for cluster 3 is removed prior to integration with module Mb. Both Ma and Mb will ultimately be integrated with component Mc, and so forth.

As integration moves upward, the need for separate test drivers lessens. In fact, if the top two levels of the program structure are integrated top down, the number of drivers can be reduced substantially and the integration of clusters is greatly simplified.
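As an illustration of what a driver is, the hypothetical sketch below (the cluster's components and test data are invented) shows a small control program that feeds test cases to a low-level cluster and checks the outputs; once the cluster is attached to its real parent module, the driver is discarded.

def parse_amount(text):
    """Low-level component: convert '1,250.50'-style input to a number."""
    return float(text.replace(",", ""))

def apply_tax(amount, rate=0.08):
    """Low-level component in the same cluster."""
    return round(amount * (1 + rate), 2)

def cluster_driver():
    """Driver D1: coordinates test case input and output for the cluster."""
    cases = [("1,000", 1080.0), ("250.50", 270.54)]
    for raw, expected in cases:
        result = apply_tax(parse_amount(raw))
        assert result == expected, raw
    print("cluster OK")

cluster_driver()   # removed once the cluster is integrated with its parent module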

11.7 REGRESSION TESTING

Each time a new module is added as part of integration testing, the


software changes. New data flow paths are established, new I/O may
occur, and new control logic is invoked. These changes may cause
problems with functions that previously worked flawlessly. In the
context of an integration test strategy, regression testing is the re-
execution of some subset of tests that have already been conducted
to ensure that changes have not propagated unintended side effects.

In a broader context, successful tests (of any kind) result in the discovery of errors, and errors must be corrected. Whenever software is corrected, some aspect of the software configuration (the program, its documentation, or the data that support it) is changed. Regression testing is the activity that helps to ensure that changes (due to testing or for other reasons) do not introduce unintended behaviour or additional errors.

For instance, suppose you are going to add new functionality to your software, or you are going to modify a module to improve its response time. The changes, of course, may introduce errors into software that was previously correct. For example, suppose the program fragment

x := c + 1;
proc(z);
c := x + 2;
x := 3;

works properly. Now suppose that in a subsequent redesign it is transformed into

proc(z);
c := c + 3;
x := 3;

in an attempt at program optimization. This may result in an error if procedure proc accesses variable x.
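The same situation can be expressed in Python together with a regression test that would catch the behavioural change. Everything here (the global variables, the body of proc, the harness) is a hypothetical reconstruction of the fragment above, used only to show how a repeatable test detects the regression.

# proc reads the global x, which is exactly what makes the "optimization" unsafe.
x = 0
c = 10
log = []

def proc(z):
    log.append(x)             # proc accesses variable x

def original(z):
    global x, c
    x = c + 1
    proc(z)
    c = x + 2
    x = 3

def optimized(z):             # the rewritten, "optimized" version
    global x, c
    proc(z)
    c = c + 3
    x = 3

def run(version):
    """Repeatable harness: reset the state, run one version, record observable results."""
    global x, c, log
    x, c, log = 0, 10, []
    version(None)
    return (x, c, list(log))

baseline = run(original)      # expected behaviour, recorded before the change
changed = run(optimized)
if changed != baseline:
    print("Regression detected: proc observed x =", changed[2], "instead of", baseline[2])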

Thus, we need to organize testing also with the purpose of verifying possible regressions of software during its life, i.e., degradations of correctness or other qualities due to later modifications. Properly designing and documenting test cases with the purpose of making tests repeatable, and using test generators, will help regression testing. Conversely, the use of interactive human input reduces repeatability and thus hampers regression testing.

Finally, we must treat test cases in much the same way as software. It
is clear that such factors as resolvability, reusability, and verifiability
are just as important in test cases as they are in software. We must
apply formality and rigor and all of our other principles in the
development and management of test cases.

11.8 COMMENTS ON INTEGRATION TESTING

There has been much discussion of the relative advantages and


disadvantages of top-down versus bottom-up integration testing. In
general, the advantages of one strategy tend to result in disadvantages
for the other strategy. The major disadvantage of the top-down
approach is the need for stubs and the attendant testing difficulties that
can be associated with them. Problems associated with stubs may be
offset by the advantage of testing major control functions early. The
major disadvantage of bottom-up integration is that the program as an
entity does not exist until the last module is added. This drawback is
tempered by easier test case design and a lack of stubs.
Selection of an integration strategy depends upon software characteristics and, sometimes, the project schedule. In general, a combined approach (sometimes called sandwich testing) that uses top-down tests for upper levels of the program structure, coupled with bottom-up tests for subordinate levels, may be the best compromise.

As integration testing is conducted, the tester should identify critical modules. A critical module has one or more of the following characteristics:
• addresses several software requirements,
• has a high level of control (resides relatively high in the program structure),
• is complex or error prone (cyclomatic complexity may be used as an indicator), or
• has definite performance requirements.

Critical modules should be tested as early as possible. In addition, regression tests should focus on critical module function.

11.9 THE ART OF DEBUGGING

Software testing is a process that can be systematically planned and specified. Test case design can be conducted, a strategy can be defined, and results can be evaluated against prescribed expectations.

Debugging occurs as a consequence of successful testing. That is, when a test case uncovers an error, debugging is the process that results in the removal of the error. Although debugging can and should be an orderly process, it is still very much an art. A software engineer, evaluating the results of a test, is often confronted with a "symptomatic" indication of a software problem. That is, the external manifestation of the error and the internal cause of the error may have no obvious relationship to one another. The poorly understood mental process that connects a symptom to a cause is debugging.

11.9.1 The Debugging Process

Debugging is not testing but always occurs as a consequence of testing. Referring to the figure below, the debugging process begins with the execution of a test case. Results are assessed and a lack of correspondence between expected and actual performance is encountered. In many cases, the non-corresponding data are a symptom of an underlying cause as yet hidden. The debugging process attempts to match symptom with cause, thereby leading to error correction.
The debugging process will always have one of two outcomes:
1. The cause will be found and corrected, or
2. The cause will not be found.

In the latter case, the person performing debugging may suspect a cause, design a test case to help validate that suspicion, and work toward error correction in an iterative fashion.

[Figure: the debugging process -- test cases are executed, results are examined, suspected causes are identified, additional tests and regression tests are run, and corrections are applied.]

Why is debugging so difficult? In all likelihood, human psychology has more to do with the answer than software technology. However, a few characteristics of bugs provide some clues:

• The symptom and the cause may be geographically remote. That is, the symptom may appear in one part of a program, while the cause may actually be located at a site that is far removed. Highly coupled program structures exacerbate this situation.
• The symptom may disappear (temporarily) when another error is corrected.
• The symptom may actually be caused by non-errors (e.g., round-off inaccuracies).
• The symptom may be caused by human error that is not easily traced.
• The symptom may be a result of timing problems, rather than processing problems.
• It may be difficult to accurately reproduce input conditions (e.g., a real-time application in which input ordering is indeterminate).
• The symptom may be intermittent. This is particularly common in embedded systems that couple hardware and software inextricably.
• The symptom may be due to causes that are distributed across a number of tasks running on different processors.

During debugging, we encounter errors that range from mildly annoying (e.g., an incorrect output format) to catastrophic (e.g., the system fails, causing serious economic or physical damage). As the consequences of an error increase, the amount of pressure to find the cause also increases. Pressure sometimes forces a software developer to fix one error and at the same time introduce two more.
11.10 SUMMARY :

In this chapter, we learned about the strategic approach to software testing, unit testing, integration testing, and the debugging process in detail.

Questions:
1. Explain Unit Testing.
Ans: Refer 11.5
2. Explain Integration Testing.
Ans: Refer 11.6
3. Explain Regression Testing.
Ans: Refer 11.7
4. Explain the Debugging Process in detail.
Ans: Refer 11.9.1














12
CATEGORIES OF TESTING

Unit Structure
12.1 Introduction
12.2 The Testing process and the Software Testing Life Cycle
12.3 Types of Testing
12.4 Testing Techniques
12.5 Black Box and White Box testing
12.6 Black box testing
12.6.1 Black box testing Methods
12.6.2 Advantages of Black Box Testing
12.6.3 Disadvantages of Black Box Testing
12.7 White Box Testing
12.7.1 Code Coverage Analysis
12.7.2 Control Structure testing
12.7.3 Advantages of White Box Testing
12.7.4 Disadvantages of White Box Testing
12.8 Difference between Black Box Testing and White Box
Testing
12.9 Summary
12.1 INTRODUCTION :

In this chapter, we will learn the testing life cycle of software and testing methods such as white box testing and black box testing. We also cover the types of testing performed with these methods, such as unit testing, integration testing, regression testing, system testing, and more.

12.2 THE TESTING PROCESS AND THE SOFTWARE TESTING LIFE CYCLE:

Every testing project has to follow the waterfall model of the testing process. The waterfall model is as given below:

1. Test Strategy & Planning
2. Test Design
3. Test Environment Setup
4. Test Execution
5. Defect Analysis & Tracking
6. Final Reporting

According to the respective project, the scope of testing can be tailored, but the process mentioned above is common to any testing activity.

Software testing has been accepted as a separate discipline to the extent that there is a separate life cycle for the testing activity.

Involving software testing in all phases of the software development life cycle has become a necessity as part of the software quality assurance process. Right from the requirements study till the implementation, testing needs to be done in every phase. The V-Model of the Software Testing Life Cycle, given below alongside the Software Development Life Cycle, indicates the various phases or levels of testing.

[Figure: the V-Model (SDLC - STLC) -- Requirement Study, High-Level Design, and Low-Level Design on the development arm are paired with Production Verification Testing, User Acceptance Testing, System Testing, and Integration Testing on the testing arm, with Unit Testing at the base of the V.]

There are two categories of testing activities that can be done on software, namely:
• Static Testing
• Dynamic Testing

The kind of verification we do on the software work products before compilation and the creation of an executable -- requirement reviews, design reviews, code reviews, walkthroughs, and audits -- is called Static Testing. When we test the software by executing it and comparing the actual and expected results, it is called Dynamic Testing.

12.3 TYPES OF TESTING

From the V-model, we see that there are various levels or phases of testing, namely unit testing, integration testing, system testing, user acceptance testing, etc.

Let us see brief definitions of the widely employed types of testing.

Unit Testing: Testing done on a unit, or the smallest piece of software, to verify that it satisfies its functional specification or its intended design structure.

Integration Testing: Testing which takes place as sub-elements are combined (i.e., integrated) to form higher-level elements.

Regression Testing: Selective re-testing of a system to verify that modifications (bug fixes) have not caused unintended effects and that the system still complies with its specified requirements.

System Testing: Testing the software against the required specifications on the intended hardware.

Acceptance Testing: Formal testing conducted to determine whether or not a system satisfies its acceptance criteria, which enables a customer to determine whether to accept the system or not.

Performance Testing: Evaluating the time taken, or response time, of the system to perform its required functions.

Stress Testing: Evaluating a system beyond the limits of the specified requirements or system resources (such as disk space, memory, or processor utilization) to ensure that the system does not break unexpectedly.

Load Testing: A subset of stress testing that verifies that a web site can handle a particular number of concurrent users while maintaining acceptable response times.

Alpha Testing: Testing of a software product or system conducted at the developer's site by the customer.

Beta Testing: Testing conducted at one or more customer sites by the end user of a delivered software product or system.

12.4 THE TESTING TECHNIQUES

To perform these types of testing, two testing techniques are widely used. The testing types above are performed based on the following techniques.

Black-box testing technique:
This technique is used for testing based solely on analysis of the requirements (specification, user documentation, etc.). It is also known as functional testing.

White-box testing technique:
This technique is used for testing based on analysis of internal logic (design, code, etc.), although expected results still come from the requirements. It is also known as structural testing.

12.5 BLACK BOX AND WHITE BOX TESTING:

Test design refers to understanding the sources of test cases, test coverage, how to develop and document test cases, and how to build and maintain test data. There are two primary methods by which tests can be designed:
- BLACK BOX
- WHITE BOX

Black-box test design treats the system as a literal "black box", so it doesn't explicitly use knowledge of the internal structure. It is usually described as focusing on testing functional requirements. Synonyms for black-box include: behavioural, functional, opaque-box, and closed-box.

White-box test design allows one to peek inside the "box", and it focuses specifically on using internal knowledge of the software to guide the selection of test data. It is used to detect errors by means of execution-oriented test cases. Synonyms for white-box include: structural, glass-box, and clear-box.

While black-box and white-box are terms that are still in popular use, many people prefer the terms "behavioural" and "structural". Behavioural test design is slightly different from black-box test design because the use of internal knowledge isn't strictly forbidden, but it's still discouraged. In practice, it hasn't proven useful to use a single test design method. One has to use a mixture of different methods so that they aren't hindered by the limitations of a particular one. Some call this "gray-box" or "translucent-box" test design, but others wish we'd stop talking about boxes altogether!

12.6 BLACK BOX TESTING

Black box testing is testing without knowledge of the internal workings of the item being tested. For example, when black box testing is applied to software engineering, the tester would only know the "legal" inputs and what the expected outputs should be, but not how the program actually arrives at those outputs. It is because of this that black box testing can be considered testing with respect to the specifications; no other knowledge of the program is necessary. For this reason, the tester and the programmer can be independent of one another, avoiding programmer bias toward his own work. Test groups are often used for this kind of testing. Though centered around knowledge of user requirements, black box tests do not necessarily involve the participation of users. Among the most important black box tests that do not involve users are functionality testing, volume tests, stress tests, recovery testing, and benchmarks. Additionally, there are two types of black box test that involve users, i.e. field and laboratory tests. In the following, the most important aspects of these black box tests are described briefly.

Black box testing - without user involvement

The so-called "functionality testing" is central to most testing exercises. Its primary objective is to assess whether the program does what it is supposed to do, i.e. what is specified in the requirements. There are different approaches to functionality testing. One is the testing of each program feature or function in sequence. The other is to test module by module, i.e. each function where it is called first.

The objective of volume tests is to find the limitations of the software by processing a huge amount of data. A volume test can uncover problems that are related to the efficiency of a system, e.g. incorrect buffer sizes or the consumption of too much memory space, or may simply show that an error message is needed telling the user that the system cannot process the given amount of data.

During a stress test, the system has to process a huge amount of data
or perform many function calls within a short period of time. A typical
example could be to perform the same function from all workstations
connected in a LAN within a short period of time (e.g. sending e-mails,
or, in the NLP area, to modify a term bank via different terminals
simultaneously).

The aim of recovery testing is to determine to what extent data can be recovered after a system breakdown. Does the system provide possibilities to recover all of the data or part of it? How much can be recovered, and how? Is the recovered data still correct and consistent? Particularly for software that needs high reliability standards, recovery testing is very important.

The notion of benchmark tests involves the testing of program efficiency. The efficiency of a piece of software strongly depends on the hardware environment, and therefore benchmark tests always consider the software/hardware combination. Whereas for most software engineers benchmark tests are concerned with the quantitative measurement of specific operations, some also consider user tests that compare the efficiency of different software systems as benchmark tests. In the context of this document, however, benchmark tests only denote operations that are independent of personal variables.

• Black box testing - with user involvement

For tests involving users, methodological considerations are rare in SE literature. Rather, one may find practical test reports that distinguish roughly between field and laboratory tests. In the following, only a rough description of field and laboratory tests will be given.

E.g. scenario tests. The term "scenario" entered software evaluation in the early 1990s. A scenario test is a test case which aims at a realistic user background for the evaluation of software as it was defined and performed. It is an instance of black box testing where the major objective is to assess the suitability of a software product for every-day routines. In short, it involves putting the system into its intended use by its envisaged type of user, performing a standardised task.

In field tests, users are observed while using the software system at their normal working place. Apart from general usability-related aspects, field tests are particularly useful for assessing the interoperability of the software system, i.e. how the technical integration of the system works. Moreover, field tests are the only real means to elucidate problems of the organisational integration of the software system into existing procedures. Particularly in the NLP environment this problem has frequently been underestimated. A typical example of the organisational problem of implementing a translation memory is the language service of a big automobile manufacturer, where the major implementation problem is not the technical environment, but the fact that many clients still submit their orders as print-outs, that neither source texts nor target texts are properly organised and stored and, last but not least, that individual translators are not too motivated to change their working habits.
Laboratory tests are mostly performed to assess the general usability of the system. Due to the high cost of laboratory equipment, laboratory tests are mostly performed only at big software houses such as IBM or Microsoft. Since laboratory tests provide testers with many technical possibilities, data collection and analysis are easier than for field tests.

12.6.1 Black box testing Methods

• Graph-based Testing Methods
Black-box methods based on the nature of the relationships (links) among the program objects (nodes); test cases are designed to traverse the entire graph.
• Transaction flow testing (nodes represent steps in some transaction and links represent logical connections between steps that need to be validated)
• Finite state modelling (nodes represent user-observable states of the software and links represent transitions between states)
• Data flow modelling (nodes are data objects and links are transformations from one data object to another)
• Timing modelling (nodes are program objects and links are sequential connections between these objects; link weights are required execution times)

• Equivalence Partitioning
A black-box technique that divides the input domain into classes of data from which test cases can be derived. An ideal test case uncovers a class of errors that might otherwise require many arbitrary test cases to be executed before a general error is observed.

Equivalence class guidelines (illustrated in the sketch below):
1. If an input condition specifies a range, one valid and two invalid equivalence classes are defined.
2. If an input condition requires a specific value, one valid and two invalid equivalence classes are defined.
3. If an input condition specifies a member of a set, one valid and one invalid equivalence class is defined.
4. If an input condition is Boolean, one valid and one invalid equivalence class is defined.
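A small, hypothetical sketch of guideline 1: for an input field defined as an age in the range 18-65, equivalence partitioning yields one valid class and two invalid classes, and one representative test case is drawn from each class.

def accept_age(age):
    """Unit under test: accept ages in the valid range 18-65 inclusive."""
    return 18 <= age <= 65

# One representative value per equivalence class.
test_cases = {
    "valid (18-65)":  (30, True),     # the valid class
    "invalid (< 18)": (10, False),    # first invalid class
    "invalid (> 65)": (70, False),    # second invalid class
}

for name, (value, expected) in test_cases.items():
    assert accept_age(value) == expected, name
print("all equivalence classes covered")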

• Comparison Testing
Black-box testing for safety-critical systems in which independently developed implementations of redundant systems are tested for conformance to specifications. Often equivalence class partitioning is used to develop a common set of test cases for each implementation.

• Orthogonal Array Testing
A black-box technique that enables the design of a reasonably small set of test cases that provide maximum test coverage. The focus is on categories of faulty logic likely to be present in the software component (without examining the code). Priorities for assessing tests using an orthogonal array:
1. Detect and isolate all single-mode faults.
2. Detect all double-mode faults.
3. Detect multimode faults.

12.6.2 Advantages of Black Box Testing

· More effective on larger units of code than glass box testing
· Tester needs no knowledge of implementation, including specific programming languages
· Tester and programmer are independent of each other
· Tests are done from a user's point of view
· Will help to expose any ambiguities or inconsistencies in the specifications
· Test cases can be designed as soon as the specifications are complete

12.6.3 Disadvantages of Black Box Testing

· Only a small number of possible inputs can actually be tested; to test every possible input stream would take nearly forever
· Without clear and concise specifications, test cases are hard to design
· There may be unnecessary repetition of test inputs if the tester is not informed of test cases the programmer has already tried
· May leave many program paths untested
· Cannot be directed toward specific segments of code which may be very complex (and therefore more error prone)
· Most testing-related research has been directed toward glass box testing

12.7 WHITE BOX TESTING

White box testing covers software testing approaches that examine the program structure and derive test data from the program logic. Structural testing is sometimes referred to as clear-box testing, since "white boxes" are considered opaque and do not really permit visibility into the code.

• Synonyms for white box testing
· Glass box testing
· Structural testing
· Clear box testing
· Open box testing

• The purpose of white box testing
· Initiate a strategic initiative to build quality throughout the life cycle of a software product or service.
· Provide a complementary function to black box testing.
· Perform complete coverage at the component level.
· Improve quality by optimizing performance.

12.7.1 Code Coverage Analysis

• Basis Path Testing
A testing mechanism proposed by McCabe whose aim is to derive a logical complexity measure of a procedural design and use this as a guide for defining a basis set of execution paths. Test cases that exercise the basis set will execute every statement at least once.

• Flow Graph Notation
A notation for representing control flow, similar to flow charts and UML activity diagrams.

• Cyclomatic Complexity
The cyclomatic complexity gives a quantitative measure of the logical complexity. This value gives the number of independent paths in the basis set, and an upper bound for the number of tests needed to ensure that each statement is executed at least once. An independent path is any path through the program that introduces at least one new set of processing statements or a new condition (i.e., a new edge). Cyclomatic complexity provides an upper bound for the number of tests required to guarantee coverage of all program statements. A short worked sketch follows.

12.7.2 Control Structure testing

• Conditions Testing
Condition testing aims to exercise all logical conditions in a program module. We may define:
• Relational expression: (E1 op E2), where E1 and E2 are arithmetic expressions.
• Simple condition: a Boolean variable or a relational expression, possibly preceded by a NOT operator.
• Compound condition: composed of two or more simple conditions, Boolean operators, and parentheses.
• Boolean expression: a condition without relational expressions.

A small sketch of exercising a compound condition follows.
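As a hypothetical illustration, the compound condition below is exercised so that each simple condition takes both truth values, not merely so that the overall expression is true once and false once.

def free_shipping(total, is_member):
    # Compound condition: a relational expression AND a Boolean variable.
    return total >= 50 and is_member

assert free_shipping(60.0, True) is True      # A true,  B true
assert free_shipping(60.0, False) is False    # A true,  B false
assert free_shipping(40.0, True) is False     # A false, B true
assert free_shipping(40.0, False) is False    # A false, B false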

• Data Flow Testing
Selects test paths according to the locations of definitions and uses of variables.

• Loop Testing
Loops are fundamental to many algorithms. Loops can be defined as simple, concatenated, nested, or unstructured. Note that unstructured loops are not to be tested; rather, they are redesigned. A small sketch of test cases for a simple loop follows.
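For a simple loop that runs at most n times, a common set of loop-testing cases is: skip the loop entirely, one pass, a typical number of passes, and n-1, n, and n+1 passes (the last checks the loop guard). A hypothetical sketch, with n = 5:

def sum_first(values, n=5):
    """Sum at most the first n values of the list (the loop runs at most n times)."""
    total = 0
    for i, v in enumerate(values):
        if i >= n:       # loop guard: never more than n passes
            break
        total += v
    return total

assert sum_first([]) == 0                         # zero passes (loop skipped)
assert sum_first([2]) == 2                        # one pass
assert sum_first([1, 2, 3]) == 6                  # typical number of passes
assert sum_first([1, 1, 1, 1]) == 4               # n - 1 passes
assert sum_first([1, 1, 1, 1, 1]) == 5            # exactly n passes
assert sum_first([1, 1, 1, 1, 1, 9]) == 5         # n + 1 values: the extra pass is excluded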

• Design by Contract (DbC)
DbC is a formal way of using comments to incorporate specification information into the code itself. Basically, the code specification is expressed unambiguously using a formal language that describes the code's implicit contracts. These contracts specify such requirements as:
• conditions that the client must meet before a method is invoked;
• conditions that a method must meet after it executes;
• assertions that a method must satisfy at specific points of its execution.

A minimal sketch using assertions appears below.
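Python has no built-in contract syntax, so the hypothetical sketch below approximates the three kinds of contract with plain assert statements (a precondition, an intermediate assertion, and a postcondition).

def allocate(stock, requested):
    # Precondition: conditions the client must meet before the call.
    assert stock >= 0 and requested > 0, "precondition violated"

    allocated = min(stock, requested)
    # Intermediate assertion: must hold at this specific point of execution.
    assert 0 <= allocated <= requested

    remaining = stock - allocated
    # Postcondition: conditions the method must meet after it executes.
    assert remaining >= 0 and allocated + remaining == stock, "postcondition violated"
    return allocated

allocate(10, 4)    # satisfies the contract
# allocate(10, 0)  # would violate the precondition and raise AssertionError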

• Profiling
Profiling provides a framework for analyzing Java code performance
for speed and heap memory use. It identifies routines that are
consuming the majority of the CPU time so that problems may be
tracked down to improve performance.

• Error Handling
Exception and error handling is checked thoroughly by simulating partial and complete fail-over while operating on error-causing test vectors. Proper error recovery, notification, and logging are checked against references to validate the program design.

• Transactions
Systems that employ transactions, local or distributed, may be validated to ensure that the ACID properties (Atomicity, Consistency, Isolation, Durability) hold. Each of the individual properties is tested individually against a reference data set.

Transactions are checked thoroughly for partial/complete commits and rollbacks encompassing databases and other XA-compliant transaction processors.

12.7.3 Advantages of White Box Testing


· Forces the test developer to reason carefully about the implementation
· Approximates the partitioning done by execution equivalence
· Reveals errors in "hidden" code
· Beneficent side-effects

12.7.4 Disadvantages of White Box Testing


· Expensive
· Cases omitted in the code could be missed out.

12.8 DIFFERENCE BETWEEN BLACK BOX TESTING AND WHITE BOX TESTING

An easy way to start up a debate in a software testing forum is to ask the difference between black box and white box testing. These terms are commonly used, yet everyone seems to have a different idea of what they mean.

Black box testing begins with a metaphor. Imagine you’re testing an electronics system. It’s housed in a black box with lights, switches, and dials on the outside. You must test it without opening it up, and you can’t see beyond its surface. You have to see if it works just by flipping switches (inputs) and seeing what happens to the lights and dials (outputs). This is black box testing. Black box software testing is doing the same thing, but with software. The actual meaning of the metaphor, however, depends on how you define the boundary of the box and what kind of access the “blackness” is blocking.
An opposite test approach would be to open up the electronics system,
see how the circuits are wired, apply probes internally and maybe even
disassemble parts of it. By analogy, this is called white box testing. To help understand the different ways that software testing can be divided between black box and white box techniques, consider the Five-Fold Testing System. It lays out five dimensions that can be used for examining testing:
1. People (who does the testing)
2. Coverage (what gets tested)
3. Risks (why you are testing)
4. Activities (how you are testing)
5. Evaluation (how you know you’ve found a bug)

Let’s use this system to understand and clarify the characteristics of black box and white box testing.

People: Who does the testing?


Some people know how software works (developers) and others just
use it (users).

Accordingly, any testing by users or other non-developers is sometimes called “black box” testing. Developer testing is called “white box” testing. The distinction here is based on what the person knows or can understand.

Coverage: What is tested?


If we draw the box around the system as a whole, “black box” testing
becomes another name for system testing. And testing the units inside
the box becomes white box testing.

This is one way to think about coverage. Another is to contrast testing that aims to cover all the requirements with testing that aims to cover all the code. These are the two most commonly used coverage criteria.
Both are supported by extensive literature and commercial tools.
Requirements-based testing could be called “black box” because it
makes sure that all the customer requirements have been verified.
Code-based testing is often called “white box” because it makes sure
that all the code (the statements, paths, or decisions) is exercised.

Risks: Why are you testing?


Sometimes testing is targeted at particular risks. Boundary testing and
other attack-based techniques are targeted at common coding errors.
Effective security testing also requires a detailed understanding of the
code and the system architecture. Thus, these techniques might be
classified as “white box”. Another set of risks concerns whether the
software will actually provide value to users. Usability testing focuses
on this risk, and could be termed “black box.”
Activities: How do you test?
A common distinction is made between behavioural test design, which
defines tests based on functional requirements, and structural test
design, which defines tests based on the code itself. These are two
design approaches. Since behavioural testing is based on external
functional definition, it is often called “black box,” while structural
testing—based on the code internals—is called “white box.” Indeed,
this is probably the most commonly cited definition for black box and
white box testing. Another activitybased distinction contrasts dynamic
test execution with formal code inspection. In this case, the metaphor
maps test execution (dynamic testing) with black box testing, and maps
code inspection (static testing) with white box testing. We could also
focus on the tools used. Some tool vendors refer to code-coverage
tools as white box tools, and tools that facilitate applying inputs and
capturing inputs—most notably GUI capture replay tools—as black box
tools. Testing is then categorized based on the types of tools used.

Evaluation: How do you know if you’ve found a bug?


There are certain kinds of software faults that don’t always lead to
obvious failures. They may be masked by fault tolerance or simply luck.
Memory leaks and wild pointers are examples. Certain test techniques
seek to make these kinds of problems more visible. Related techniques
capture code history and stack information when faults occur, helping
with diagnosis. Assertions are another technique for helping to make
problems more visible. All of these techniques could be considered
white box test techniques, since they use code instrumentation to make
the internal workings of the software more visible.

These contrast with black box techniques that simply look at the official
outputs of a program.

Because white box testing is concerned only with testing the software product, it cannot guarantee that the complete specification has been implemented. Because black box testing is concerned only with testing the specification, it cannot guarantee that all parts of the implementation have been tested. Thus black box testing is testing against the specification and will discover faults of omission, indicating that part of the specification has not been fulfilled.

White box testing is testing against the implementation and will discover faults of commission, indicating that part of the implementation is faulty. In order to fully test a software product, both black and white box testing are required.

White box testing is much more expensive than black box testing. It
requires the source code to be produced before the tests can be
planned and is much more laborious in the determination of suitable
input data and the determination if the software is or is not correct. The
advice given is to start test planning with a black box test approach as
soon as the specification is available. White box planning should
commence as soon as all black box tests have been successfully
passed, with the production of flow graphs and determination of paths.
The paths should then be checked against the black box test plan and
any additional required test runs determined and applied.

The consequences of test failure at this stage may be very expensive.


A failure of a white box test may result in a change which requires all black box testing to be repeated and the redetermination of the white box paths.
12.9 SUMMARY:

To conclude, apart from the above-described analytical methods of both white and black box testing, there are further constructive means to guarantee high-quality software end products. Among the most important constructive means are the usage of object-oriented programming tools, the integration of CASE tools, rapid prototyping, and last but not least the involvement of users in both software development and testing procedures.

Questions:
1. Explain Software Testing Life Cycle in detail?
Ans: Refer 12.2

2. Explain Types of Testing in detail?
Ans: Refer 12.3

3. Explain Black Box and White Box testing?
Ans: Refer 12.5














13
SOFTWARE TESTING

Unit Structure
13.1 Introduction
13.2 Scope Of Software Testing
13.3 Software Testing Key Concepts
13.4 Software Testing Types
13.5 Software Testing Methodologies
13.6 Software Testing Artifacts
13.7 Available tools, techniques, and metrics
13.8 Summary

13.1 INTRODUCTION:

Software testing is an art. Most of the testing methods and practices are not very different from 20 years ago. It is nowhere near maturity, although there are many tools and techniques available to use. Good testing also requires a tester's creativity, experience and intuition, together with proper techniques.

Before moving further towards an introduction to software testing, we need to know a few concepts that will simplify the definition of software testing.
• Error: Error or mistake is a human action that produces
wrong or incorrect result.
• Defect (Bug, Fault): A flaw in the system or a product that
can cause the component to fail.
• Failure: It is the variance between the actual and expected
result.
• Risk: Risk is a factor that could result in negativity or a
chance of loss or damage.

Thus software testing is the process of finding defects/bugs in the system, which occur due to an error in the application and could lead to failure of the resultant product and an increase in the probability of high risk. In short, software testing has different goals and objectives, which often include:

1. finding defects;
2. gaining confidence in and providing information about the
level of quality;
3. Preventing defects.

13.2 SCOPE OF SOFTWARE TESTING

The primary function of software testing is to detect bugs in order to uncover and correct them. The scope of software testing includes execution of that code in various environments and also examination of aspects of the code - does the software do what it is supposed to do and function according to the specifications? As we move further we come across some questions such as "When to start testing?" and "When to stop testing?" It is recommended to start testing from the initial stages of software development. This not only helps in rectifying numerous errors before the last stage, but also reduces the rework of finding bugs repeatedly in later stages and lowers the cost of finding and fixing each defect. Software testing is an ongoing process, which is potentially endless but has to be stopped somewhere, due to the lack of time and budget. It is required to achieve maximum profit with a good quality product, within the limitations of time and money. The tester has to follow some procedural way through which he can judge whether he has covered all the points required for testing or missed any.

13.3 SOFTWARE TESTING KEY CONCEPTS

• Defects and Failures: As we discussed earlier, defects are not caused only by coding errors, but most commonly by requirement gaps in the non-functional requirements, such as usability, testability, scalability, maintainability, performance and security. A failure is caused by the deviation between an actual and an expected result. But not all defects result in failures. A defect can turn into a failure due to a change in the environment or a change in the configuration of the system requirements.

• Input Combination and Preconditions: Testing all combinations of inputs and initial states (preconditions) is not feasible. This means that finding a large number of infrequent defects is difficult.

• Static and Dynamic Analysis: Static testing does not require execution of the code for finding defects, whereas in dynamic testing, software code is executed to demonstrate the results of running tests.

• Verification and Validation: Software testing is done considering these two factors.
1. Verification: This verifies whether the product is built according to the specification.
2. Validation: This checks whether the product meets the customer requirements.

• Software Quality Assurance: Software testing is an important part of software quality assurance. Quality assurance is an activity which proves the suitability of the product by taking care of the quality of a product and ensuring that the customer requirements are met.

13.4 SOFTWARE TESTING TYPES

Software test type is a group of test activities that are aimed at testing
a component or system focused on a specific test objective; a non-
functional requirement such as usability, testability or reliability.
Various types of software testing are used with the common objective
of finding defects in that particular component.

Software testing is classified according to two basic types of software testing: Manual Scripted Testing and Automated Testing.

Manual Scripted Testing:


• Black Box Testing
• White Box Testing
• Gray Box Testing

The levels of the software testing life cycle include:
• Unit Testing
• Integration Testing
• System Testing
• Acceptance Testing
1. Alpha Testing
2. Beta Testing

Other types of software testing are:


• Functional Testing
• Performance Testing
1. Load Testing
2. Stress Testing
• Smoke Testing
• Sanity Testing
• Regression Testing
• Recovery Testing
• Usability Testing
• Compatibility Testing
• Configuration Testing
• Exploratory Testing

For further explanation of these concepts, read more on types of software testing.

Automated Testing: Manual testing is a time-consuming process. Automation testing involves automating a manual process. Test automation is the process of writing a computer program in the form of scripts to do testing which would otherwise need to be done manually. Some of the popular automation tools are WinRunner, Quick Test Professional (QTP), Load Runner, Silk Test, Rational Robot, etc. The automation tools category also includes maintenance tools such as Test Director and many others.

13.5 SOFTWARE TESTING METHODOLOGIES

The software testing methodologies or process includes various models that build up the process of working for a particular product. These models are as follows:
• Waterfall Model
• V Model
• Spiral Model
• Rational Unified Process(RUP)
• Agile Model
• Rapid Application Development(RAD)

These models are elaborated briefly in software testing methodologies.


13.6 SOFTWARE TESTING ARTIFACTS

Software testing process can produce various artifacts such as:


• Test Plan: A test specification is called a test
plan. A test plan is documented so that it can be
used to verify and ensure that a product
or system meets its design
specification.
• Traceability matrix: This is a table that correlates requirements or design documents to test documents. It verifies that the test results are correct and is also used to change tests when the source documents are changed.
• Test Case: Test cases and software testing
strategies are used to check the functionality of
individual component that is integrated to give
the resultant product. These test cases are
developed with the objective of judging the
application for its capabilities or features.
• Test Data: When multiple sets of values or data
are used to test the same functionality of a
particular feature in the test case, the test
values and changeable environmental
components are collected in separate files and
stored as test data.
• Test Scripts: The test script is the combination
of a test case, test procedure and test data.
• Test Suite: Test suite is a collection of test
cases.
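A small, invented example using Python's unittest module may make these artifacts concrete: each test method is a test case, the value sets act as test data, and the cases are collected into a test suite that can be run as a group.

# Hedged sketch: test case, test data, and test suite with unittest.
import unittest

def apply_discount(price, percent):
    return round(price * (1 - percent / 100), 2)

class DiscountTestCase(unittest.TestCase):
    # Test data: several value sets exercising the same functionality.
    test_data = [(100.0, 10, 90.0), (59.99, 0, 59.99), (20.0, 50, 10.0)]

    def test_discount_values(self):
        for price, percent, expected in self.test_data:
            self.assertEqual(apply_discount(price, percent), expected)

suite = unittest.TestSuite()                           # the test suite
suite.addTest(DiscountTestCase("test_discount_values"))
unittest.TextTestRunner().run(suite)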

Software Testing Process

The software testing process is carried out in the following sequence, in order to find faults in the software system:
1. Create Test Plan
2. Design Test Case
3. Write Test Case
4. Review Test Case
5. Execute Test Case
6. Examine Test Results
7. Perform Post-mortem Reviews
8. Budget after Experience
13.7 AVAILABLE TOOLS, TECHNIQUES, AND
METRICS

• There is an abundance of software testing tools in existence. The correctness testing tools are often specialized to certain systems and have limited ability and generality. Robustness and stress testing tools are more likely to be made generic.

• Mothora [DeMillo91] is an automated mutation testing tool-set developed at Purdue University.
Using Mothora, the tester can create and
execute test cases, measure test case
adequacy, determine input-output correctness,
locate and remove faults or bugs, and control
and document the test.

• NuMega's BoundsChecker [NuMega99] and Rational's Purify [Rational99] are run-time checking and debugging aids. They can both check and protect against memory leaks and pointer problems.

• Ballista COTS Software Robustness Testing Harness [Ballista99]. The Ballista testing harness is a full-scale automated robustness testing tool. The first version supports testing up to 233 POSIX function calls in UNIX operating systems. The second version also supports testing of user functions provided that the data types are recognized by the testing server. The Ballista testing harness gives quantitative measures of robustness comparisons across operating systems. The goal is to automatically test and harden Commercial Off-The-Shelf (COTS) software against robustness failures.

13.8 SUMMARY:
Software testing is an art. Most of the testing methods and practices
are not very different from 20 years ago. It is nowhere near maturity,
although there are many tools and techniques available to use. Good
testing also requires a tester's creativity, experience and intuition,
together with proper techniques.
Questions:
1. Explain Software Testing Key Concepts?
Ans: refer 13.3

2. Explain Software Testing Methodologies?
Ans: refer 13.5

3. Explain Available tools, techniques in detail?
Ans: refer 13.7



14
IMPLEMENTATION & MAINTENANCE

Unit Structure
14.1 Introduction
14.2 Data Entry and Data Storage
14.3 Date Formats
14.4 Data Entry Methods
14.5 System Implementation
14.6 System Maintenance
14.7 System Evaluation
14.8 Summary
14.1 INTRODUCTION :

The quality of data input determines the quality of information output. Systems analysts can support accurate data entry through the achievement of three broad objectives: effective coding, effective and efficient data capture and entry, and assuring quality through validation. In this chapter, we will learn about data entry and data formats.

14.2 DATA ENTRY AND DATA STORAGE

The quality of data input determines the quality of information output. Systems analysts can support accurate data entry through achievement of three broad objectives: effective coding, effective and efficient data capture and entry, and assuring quality through validation. Coding aids in reaching the objective of efficiency, since data that are coded require less time to enter and reduce the number of items entered. Coding can also help in appropriate sorting of data during the data transformation process. Additionally, coded data can save valuable memory and storage space.

In establishing a coding system, systems analysts should follow these guidelines:
Keep codes concise.
Keep codes stable.
Make codes unique.
Allow codes to be sorted.
Avoid confusing codes.
Keep codes uniform.
Allow for modification of codes.
Make codes meaningful.

The simple sequence code is a number that is assigned to something if it needs to be numbered. It therefore has no relation to the data itself.
Classification codes are used to distinguish one group of data, with
special characteristics, from another. Classification codes can consist
of either a single letter or number. The block sequence code is an
extension of the sequence code. The advantage of the block sequence
code is that the data are grouped according to common characteristics,
while still taking advantage of the simplicity of assigning the next
available number within the block to the next item needing
identification.

A mnemonic is a memory aid. Any code that helps the data-entry person remember how to enter the data, or the end-user remember how to use the information, can be considered a mnemonic. Mnemonic coding can be less arbitrary, and therefore easier to remember, than numeric coding schemes. Compare, for example, a gender coding system that uses "F" for Female and "M" for Male with an arbitrary numeric coding of gender where perhaps "1" means Female and "2" means Male. Or, perhaps it should be "1" for Male and "2" for Female? Or, why not "7" for Male and "4" for Female? The arbitrary nature of numeric coding makes it more difficult for the user.
14.3 DATE FORMATS

An effective format for the storage of date values is the eight-digit YYYYMMDD format, as it allows for easy sorting by date. Note the importance of using four digits for the year. This eliminates any ambiguity in whether a value such as 01 means the year 1901 or the year 2001. Using four digits also ensures that the correct sort sequence will be maintained in a group of records that include year values both before and after the turn of the century (e.g., 1999, 2000, 2001).
Remember, however, that the date format you use for storage of a date
value need not be the same date format that you present to the user
via the user interface or require of the user for data entry. While
YYYYMMDD may be useful for the storage of date values it is not how
human beings commonly write or read dates. A person is more likely
to be familiar with using dates that are in MMDDYY format. That is, a
person is much more likely to be comfortable writing the date
December 25, 2001 as "12/25/01" than "20011225."

Fortunately, it is a simple matter to code a routine that can be inserted between the user interface or data entry routines and the data storage
routines that read from or write to magnetic disk. Thus, date values can
be saved on disk in whatever format is deemed convenient for storage
and sorting while at the same time being presented in the user
interface, data entry routines, and printed reports in whatever format is
deemed convenient and familiar for human users.
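A minimal sketch of such a conversion routine (the function names are invented; Python's standard datetime module is assumed) is shown below:

# Hedged sketch: store dates as YYYYMMDD strings, present them as MM/DD/YY.
from datetime import datetime

def to_storage(display_date):
    """Convert a user-entered MM/DD/YY date to the YYYYMMDD storage format."""
    return datetime.strptime(display_date, "%m/%d/%y").strftime("%Y%m%d")

def to_display(stored_date):
    """Convert a stored YYYYMMDD date back to MM/DD/YY for the user interface."""
    return datetime.strptime(stored_date, "%Y%m%d").strftime("%m/%d/%y")

print(to_storage("12/25/01"))  # 20011225 -- sorts correctly as text
print(to_display("20011225"))  # 12/25/01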

14.4 DATA ENTRY METHODS

keyboards
optical character recognition (OCR)
magnetic ink character recognition (MICR)
mark-sense forms
punch-out forms
bar codes
intelligent terminals

Tests for validating input data include: test for missing data, test for
correct field length, test for class or composition, test for range or
reasonableness, test for invalid values, test for comparison with stored
data, setting up self-validating codes, and using check digits. Tests for
class or composition are used to check whether data fields are
correctly filled in with either numbers or letters. Tests for range or
reasonableness do not permit a user to input a date such as October
32.

This is sometimes called a sanity check.
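The following sketch (the field names and rules are invented for the example) illustrates a few of these validation tests - missing data, field length and composition, and range or reasonableness:

# Hedged sketch of input validation tests.
import re

def validate_order(record):
    """Return a list of validation error messages for one input record."""
    errors = []
    # Test for missing data
    for field in ("customer_id", "order_day"):
        if record.get(field) in ("", None):
            errors.append("missing field: " + field)
    if errors:
        return errors
    # Test for correct field length and class/composition (six digits)
    if not re.fullmatch(r"\d{6}", str(record["customer_id"])):
        errors.append("customer_id must be exactly 6 digits")
    # Test for range or reasonableness (no 'October 32')
    if not 1 <= int(record["order_day"]) <= 31:
        errors.append("order_day must be between 1 and 31")
    return errors

print(validate_order({"customer_id": "123456", "order_day": 32}))
# ['order_day must be between 1 and 31']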

Database

A database is a group of related files. This collection is usually organized to facilitate efficient and accurate inquiry and update. A database management system (DBMS) is a software package that is used to organize and maintain a database.

Usually when we use the word "file" we mean traditional or conventional files. Sometimes we call them "flat files." With these traditional, flat files each file is a single, recognizable, distinct entity on
your hard disk. These are the kind of files that you can see cataloged
in your directory. Commonly, these days, when we use the word
"database" we are not talking about a collection of this kind of file;
rather we would usually be understood to be talking about a database
management system. And, commonly, people who work in a DBMS
environment speak in terms of "tables" rather than "files." DBMS
software allows data and file relationships to be created, maintained,
and reported. A DBMS offers a number of advantages over file-
oriented systems including reduced data duplication, easier reporting,
improved security, and more rapid development of new applications.
The DBMS may or may not store a table as an individual, distinct disk
file. The software may choose to store more than one table in a single
disk file. Or it may choose to store one table across several distinct
disk files, or even spread it across multiple hard disks. The details of
physical storage of the data are not important to the end user who only
is concerned about the logical tables, not physical disk files.

In a hierarchical database the data is organized in a tree structure. Each parent record may have multiple child records, but any child may
only have one parent. The parent-child relationships are established
when the database is first generated, which makes later modification
more difficult.

A network database is similar to a hierarchical database except that a child record (called a "member") may have more than one parent
(called an "owner"). Like in a hierarchical database, the parent-child
relationships must be defined before the database is put into use, and
the addition or modification of fields requires the relationships to be
redefined.

In a relational database the data is organized in tables that are called "relations." Tables are usually depicted as a grid of rows ("tuples") and
columns ("attributes"). Each row is a record; each column is a field.
With a relational database links between tables can be established at
any time provided the tables have a field in common. This allows for a
great amount of flexibility.
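As a small sketch (the tables, columns, and data are invented; Python's built-in sqlite3 module is used), two tables that share a common customer_id field can be linked with a join whenever needed:

# Hedged sketch: linking two relational tables through a common field.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE customer (customer_id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders   (order_id INTEGER PRIMARY KEY,
                           customer_id INTEGER, amount REAL);
    INSERT INTO customer VALUES (1, 'Asha'), (2, 'Ravi');
    INSERT INTO orders   VALUES (10, 1, 250.0), (11, 1, 99.5), (12, 2, 40.0);
""")

# The link is established through the shared customer_id column.
rows = con.execute("""
    SELECT c.name, SUM(o.amount)
    FROM customer c JOIN orders o ON c.customer_id = o.customer_id
    GROUP BY c.name
""").fetchall()
print(rows)  # e.g. [('Asha', 349.5), ('Ravi', 40.0)]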

14.5 SYSTEM IMPLEMENTATION

Systems implementation is the construction of the new system and its delivery into ‘production’ or day-to-day operation.

The key to understanding the implementation phase is to realize that there is a lot more to be done than programming. During
implementation you bring your process, data, and network models to
life with technology. This requires programming, but it also requires
database creation and population, and network installation and testing.
You also need to make sure the people are taken care of with effective
training and documentation. Finally, if you expect your development
skills to improve over time, you need to conduct a review of the lessons
learned.

During both design and implementation, you ought to be looking ahead to the support phase. Over the long run, this is where most of the costs of an application reside.

Systems implementation involves installation and changeover from the previous system to the new one, including training users and making
adjustments to the system. Many problems can arise at this stage. You
have to be extremely careful in implementing new systems. First, users
are probably nervous about the change already. If something goes
wrong they may never trust the new system. Second, if major errors
occur, you could lose important business data.

A crucial stage in implementation is final testing. Testing and quality control must be performed at every stage of development, but a final systems test is needed before staff entrust the company's data to the new system. Occasionally, small problems will be noted, but their resolution will be left for later.

In any large system, errors and changes will occur, the key is to identify
them and determine which ones must be fixed immediately. Smaller
problems are often left to the software maintenance staff. Change is
an important part of MIS. Designing and implementing new systems
often causes changes in the business operations. Yet, many people
do not like changes. Changes require learning new methods, forging
new relationships with people and managers, or perhaps even loss of
jobs. Changes exist on many levels: in society, in business, and in
information systems. Changes can occur because of shifts in the
environment, or they can be introduced by internal change agents. Left
to themselves, most organizations will resist even small changes.
Change agents are objects or people who cause or facilitate changes.
Sometimes it might be a new employee who brings fresh ideas; other
times changes can be mandated by top-level management.
Sometimes an outside event such as arrival of a new competitor or a
natural disaster forces an organization to change. Whatever the cause,
people tend to resist change.

However, if organizations do not change, they cannot survive. The goal is to implement systems in a manner that recognizes resistance to change but encourages people to accept the new system. Effective implementation involves finding ways to reduce this resistance. Sometimes, implementation involves the cooperation of outsiders such as suppliers.

Because implementation is so important, several techniques have been developed to help implement new systems. Direct cutover is an obvious technique, where the old system is simply dropped and the
new one started. If at all possible, it is best to avoid this technique,
because it is the most dangerous to data. If anything goes wrong with
the new system, you run the risk of losing valuable information because
the old system is not available.

In many ways, the safest choice is to use parallel implementation. In this case, the new system is introduced alongside the old one. Both systems are operated at the same time until you determine that the new system is acceptable. The main drawback to this method is that it can be expensive because data has to be entered twice. Additionally, if users are nervous about the new system, they might avoid the change and stick with the old method. In this case, the new system may never get a fair trial.

If you design a system for a chain of retail stores, you could pilot test
the first implementation in one store. By working with one store at a
time, there are likely to be fewer problems. But if problems do arise,
you will have more staff members around to overcome the obstacles.
When the system is working well in one store, you can move to the
next location. Similarly, even if there is only one store, you might be
able to split the implementation into sections based on the area of
business. You might install a set of computer cash registers first. When
they work correctly, you can connect them to a central computer and
produce daily reports. Next, you can move on to annual summaries
and payroll.
Eventually the entire system will be installed.
Let us now see the process of implementation, which involves the following steps:
Internal or outsourcing (the trend is outsourcing)
Acquisition: purchasing software, hardware, etc.
Training: employee (end-user) training and technical staff training. For example, SQL training over 5 days costs around $2000, plus airplane, hotel, meals, and rental car ($3000 to $5000); evaluation.
Testing: a bigger system requires more testing time; testing is also a good career opportunity for non-technical people who wish to get in the door in IT jobs.
Documentation: backup; knowledge management system.
Actual installation.
Conversion: migration from the old system to the new system.
Maintenance: very important; if you don't maintain the new system properly, it is useless to have developed it. This includes monitoring the system, upgrades, trouble-shooting, and continuous improvement.

14.6 SYSTEM MAINTENANCE

Once the system is installed, the MIS job has just begun. Computer
systems are constantly changing. Hardware upgrades occur
continually, and commercial software tools may change every year.
Users change jobs. Errors may exist in the system. The business
changes, and management and users demand new information and
expansions. All of these actions mean the system needs to be
modified. The job of overseeing and making these modifications is
called software maintenance.

The pressures for change are so great that in most organizations today
as much as 80 per cent of the MIS staff is devoted to modifying existing
programs. These changes can be time consuming and difficult. Most
major systems were created by teams of programmers and analysts
over a long period. In order to make a change to a program, the
programmer has to understand how the current program works.

Because the program was written by many different people with varying styles, it can be hard to understand. Finally, when a
programmer makes a minor change in one location, it can affect
another area of the program, which can cause additional errors or
necessitate more changes.

One difficulty with software maintenance is that every time part of an application is modified, there is a risk of adding defects (bugs). Also, over time the application becomes less structured and more complex, making it harder to understand. These are some of the main reasons
making it harder to understand. These are some of the main reasons
why the year 2000 alterations were so expensive and time consuming.
At some point, a company may decide to replace or improve the heavily
modified system. There are several techniques for improving an
existing system, ranging from rewriting individual sections to restructuring the entire application. The difference lies in scope - how
much of the application needs to be modified. Older applications that
were subject to modifications over several years tend to contain code
that is no longer used, poorly documented changes, and inconsistent
naming conventions. These applications are prime candidates for
restructuring, during which the entire code is analyzed and reorganized
to make it more efficient. More important, the code is organized,
standardized, and documented to make it easier to make changes in
the future.

14.7 SYSTEM EVALUATION


An important phase in any project is evaluating the resulting system.
As part of this evaluation, it is also important to assess the
effectiveness of the particular development process. There are several
questions to ask. Were the initial cost estimates accurate? Was the
project completed on time? Did users have sufficient input? Are
maintenance costs higher than expected?

Evaluation is a difficult issue. How can you as a manager tell the difference between a good system and a poor one? In some way, the
system should decrease costs, increase revenue, or provide a
competitive advantage. Although these effects are important, they are
often subtle and difficult to measure. The system should also be easy
to use and flexible enough to adapt to changes in the business. If
employees or customers continue to complain about a system, it
should be re-examined.

A system also needs to be reliable. It should be available when needed and should produce accurate output. Error detection can be provided
in the system to recognize and avoid common problems. Similarly,
some systems can be built to tolerate errors, so that when errors arise,
the system recognizes the problem and works around it. For example,
some computers exist today that automatically switch to backup
components when one section fails, thereby exhibiting fault tolerance.

What managers need to remember when dealing with new systems is that the evaluation mechanism should be determined at the start. Too often, the
the finished product. It is a good design practice to ask what would
make this system a good system when it is finished or how we can tell
a good system from a bad one in this application. Even though these
questions may be difficult to answer, they need to be asked. The
answers, however incomplete, will provide valuable guidance during
the design stage.

Recall that every system needs a goal, a way of measuring progress toward that goal, and a feedback mechanism.

Traditionally, control of systems has been the task of the computer
programming staff. Their primary goal was to create error-free code,
and they used various testing techniques to find and correct errors in
the code. Today, creating error-free code is not a sufficient goal. We
have all heard the phrase, "The customer is always right." The meaning
behind this phrase is that sometimes people have different opinions on
whether a system is behaving correctly. When there is a conflict, the
opinion that is most important is that of the customer. In the final
analysis, customers are in control because they can always take their
business elsewhere. With information systems, the users are the
customers and the users should be the ones in control. Users
determine whether a system is good. If the users are not convinced
that the system performs useful tasks, it is not a good system.
Feasibility comparison:
Cost and budget: Compare actual costs to budget estimates.
Time estimates: Was the project completed on time?
Revenue effects: Does the system produce additional revenue?
Maintenance costs: How much money and time are spent on changes?
Project goals: Does the system meet the initial goals of the project?
User satisfaction: How do users (and management) evaluate the system?

System performance:
System reliability: Are the results accurate and on time?
System availability: Is the system available continually?
System security: Does the system provide access only to authorized users?

Summary: In this chapter, we learned about Data Entry and Data Storage, Date Formats, Data Entry Methods, System Implementation, System Maintenance, and System Evaluation.

Questions:
1. Explain System Implementation in detail?
Ans: refer 14.5

2. Explain System Maintenance in detail?
Ans: refer 14.6

3. Explain System Evaluation?
Ans: refer 14.7















15
DOCUMENTATION

Unit Structure
15.1 Introduction
15.2 Requirements documentation
15.3 Architecture/Design documentation
15.4 Technical documentation
15.5 User documentation
15.6 Marketing documentation
15.7 CASE Tools and their importance
15.8 Summary
15.1 INTRODUCTION :

Documentation is an important part of software engineering. Types of documentation include:
1. Requirements - Statements that identify attributes, capabilities, characteristics, or qualities of a system. This is the foundation for what shall be or has been implemented.
2. Architecture/Design - Overview of software. Includes relations to an environment and construction principles to be used in design of software components.
3. Technical - Documentation of code, algorithms, interfaces, and APIs.
4. End User - Manuals for the end-user, system administrators and support staff.
5. Marketing - How to market the product and analysis of the market demand.

15.2 REQUIREMENTS DOCUMENTATION

Requirements documentation is the description of what particular software does or shall do. It is used throughout development to communicate what the software does or shall do. It is also used as an agreement or as the foundation for agreement on what the software shall do. Requirements are produced and consumed by everyone
involved in the production of software: end users, customers, product
managers, project managers, sales, marketing, software architects,
usability experts, interaction designers, developers, and testers, to
name a few. Thus, requirements documentation has many different
purposes.

Requirements come in a variety of styles, notations and formality. Requirements can be goal-like (e.g., distributed work environment), close to design (e.g., builds can be started by right-clicking a configuration file and selecting the 'build' function), and anything in
between. They can be specified as statements in natural language, as
drawn figures, as detailed mathematical formulas, and as a
combination of them all.

The variation and complexity of requirements documentation makes it a proven challenge. Requirements may be implicit and hard to uncover.
It is difficult to know exactly how much and what kind of documentation
is needed and how much can be left to the architecture and design
documentation, and it is difficult to know how to document
requirements considering the variety of people that shall read and use
the documentation. Thus, requirements documentation is often
incomplete (or non-existent). Without proper requirements
documentation, software changes become more difficult—and
therefore more error prone (decreased software quality) and time-
consuming (expensive).

The need for requirements documentation is typically related to the complexity of the product, the impact of the product, and the life
expectancy of the software. If the software is very complex or
developed by many people (e.g., mobile phone software),
requirements can help to better communicate what to achieve. If the
software is safety-critical and can have negative impact on human life
(e.g., nuclear power systems, medical equipment), more formal
requirements documentation is often required. If the software is
expected to live for only a month or two (e.g., very small mobile phone
applications developed specifically for a certain campaign) very little
requirements documentation may be needed. If the software is a first
release that is later built upon, requirements documentation is very
helpful when managing the change of the software and verifying that
nothing has been broken in the software when it is modified.

Traditionally, requirements are specified in requirements documents (e.g. using word processing applications and spreadsheet applications). To manage the increased complexity and changing nature of requirements documentation (and software documentation in general), database-centric systems and special-purpose requirements management tools are advocated.
15.3 ARCHITECTURE/DESIGN DOCUMENTATION

Architecture documentation is a special breed of design document. In a way, architecture documents are third derivative from the code
(design document being second derivative, and code documents being
first). Very little in the architecture documents is specific to the code
itself. These documents do not describe how to program a particular
routine, or even why that particular routine exists in the form that it
does, but instead merely lays out the general requirements that would
motivate the existence of such a routine. A good architecture document
is short on details but thick on explanation. It may suggest approaches
for lower level design, but leave the actual exploration trade studies to
other documents.

Another breed of design docs is the comparison document, or trade study. This would often take the form of a whitepaper. It focuses on
one specific aspect of the system and suggests alternate approaches.
It could be at the user interface, code, design, or even architectural
level. It will outline what the situation is, describe one or more
alternatives, and enumerate the pros and cons of each. A good trade
study document is heavy on research, expresses its idea clearly
(without relying heavily on obtuse jargon to dazzle the reader), and
most importantly is impartial. It should honestly and clearly explain the
costs of whatever solution it offers as best. The objective of a trade
study is to devise the best solution, rather than to push a particular
point of view. It is perfectly acceptable to state no conclusion, or to
conclude that none of the alternatives are sufficiently better than the
baseline to warrant a change. It should be approached as a scientific
endeavour, not as a marketing technique.

A very important part of the design document in enterprise software development is the Database Design Document (DDD). It contains
Conceptual, Logical, and Physical Design Elements. The DDD
includes the formal information that the people who interact with the
database need. The purpose of preparing it is to create a common
source to be used by all players within the scene. The potential users
are:
• Database Designer
• Database Developer
• Database Administrator
• Application Designer
• Application Developer

When talking about Relational Database Systems, the document should include the following parts:

• Entity-Relationship Schema, including the following information and their clear definitions:
o Entity sets and their attributes
o Relationships and their attributes
o Candidate keys for each entity set
o Attribute- and tuple-based constraints
• Relational Schema, including the following information:
o Tables, attributes, and their properties
o Views
o Constraints such as primary keys, foreign keys
o Cardinality of referential constraints
o Cascading policy for referential constraints
o Primary keys
It is very important to include all information that is to be used by all
actors in the scene. It is also very important to update the documents
as any change occurs in the database as well.

15.4 TECHNICAL DOCUMENTATION

This is what most programmers mean when using the term software
documentation. When creating software, code alone is insufficient.
There must be some text along with it to describe various aspects of
its intended operation. It is important for the code documents to be
thorough, but not so verbose that it becomes difficult to maintain them.
Several How-to and overview documentation are found specific to the
software application or software product being documented by API
Writers. This documentation may be used by developers, testers and
also the end customers or clients using this software application.
Today, we see lot of high end applications in the field of power, energy,
transportation, networks, aerospace, safety, security, industry
automation and a variety of other domains. Technical documentation
has become important within such organizations as the basic and
advanced level of information may change over a period of time with
architecture changes. Hence, technical documentation has gained lot
of importance in recent times, especially in the software field.

Often, tools such as Doxygen, NDoc, javadoc, EiffelStudio, Sandcastle, ROBODoc, POD, TwinText, or Universal Report can be
used to auto-generate the code documents—that is, they extract the
comments and software contracts, where available, from the source
code and create reference manuals in such forms as text or HTML files.
Code documents are often organized into a reference guide style,
allowing a programmer to quickly look up an arbitrary function or class.
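As an analogous illustration in Python (the text names javadoc and Doxygen; the function and its docstring below are invented), documentation kept next to the code can be extracted automatically, for example with the standard pydoc module:

# Hedged sketch: a docstring that a documentation generator can extract.

def monthly_payment(principal, annual_rate, months):
    """Return the fixed monthly payment for a simple amortized loan.

    principal   -- amount borrowed
    annual_rate -- nominal yearly interest rate, e.g. 0.06 for 6%
    months      -- number of monthly payments
    """
    r = annual_rate / 12
    return principal * r / (1 - (1 + r) ** -months)

# Running `python -m pydoc this_module` renders the docstring as a
# reference page, keeping the documentation next to the code it describes.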

Many programmers really like the idea of auto-generating documentation for various reasons. For example, because it is
extracted from the source code itself (for example, through comments),
the programmer can write it while referring to the code, and use the
same tools used to create the source code to make the documentation.
This makes it much easier to keep the documentation up-to-date.

Of course, a downside is that only programmers can edit this kind of documentation, and it depends on them to refresh the output (for
example, by running a job to update the documents nightly). Some
would characterize this as a pro rather than a con. Donald Knuth has
insisted on the fact that documentation can be a very difficult
afterthought process and has advocated literate programming, writing
at the same time and location as the source code and extracted by
automatic means.

Elucidative Programming is the result of practical applications of Literate Programming in real programming contexts. The Elucidative
paradigm proposes that source code and documentation be stored
separately. This paradigm was inspired by the same experimental
findings that produced Kelp. Often, software developers need to be
able to create and access information that is not going to be part of the
source file itself. Such annotations are usually part of several software
development activities, such as code walks and porting, where third
party source code is analysed in a functional way. Annotations can
therefore help the developer during any stage of software development
where a formal documentation system would hinder progress. Kelp
stores annotations in separate files, linking the information to the
source code dynamically.

15.5 USER DOCUMENTATION

Unlike code documents, user documents are usually far more diverse
with respect to the source code of the program, and instead simply
describe how it is used.

In the case of a software library, the code documents and user documents could be effectively equivalent and are worth conjoining,
but for a general application this is not often true. On the other hand,
the Lisp machine grew out of a tradition in which every piece of code
had an attached documentation string. In combination with strong
search capabilities (based on a Unix-like apropos command), and
online sources, Lisp users could look up documentation prepared by
these API Writers and paste the associated function directly into their
own code. This level of ease of use is unheard of in putatively more
modern systems.

Typically, the user documentation describes each feature of the program, and assists the user in realizing these features. A good user document can also go so far as to provide thorough troubleshooting assistance. It is very important for user documents not to be confusing, and for them to be up to date. User documents need not be organized
in any particular way, but it is very important for them to have a
thorough index. Consistency and simplicity are also very valuable.
User documentation is considered to constitute a contract specifying
what the software will do. API Writers are very well accomplished
towards writing good user documents as they would be well aware of
the software architecture and programming techniques used. See also
Technical Writing.

There are three broad ways in which user documentation can be organized.

1. Tutorial: A tutorial approach is considered the most useful for a new user, in which they are guided through each step of accomplishing particular tasks.
2. Thematic: A thematic approach, where chapters or sections
concentrate on one particular area of interest, is of more general
use to an intermediate user. Some authors prefer to convey their
ideas through a knowledge based article to facilitating the user
needs. This approach is usually practiced by a dynamic industry,
such as Information technology, where the user population is
largely correlated with the troubleshooting demands.
3. List or Reference: The final type of organizing principle is one in
which commands or tasks are simply listed alphabetically or
logically grouped, often via cross-referenced indexes. This latter
approach is of greater use to advanced users who know exactly
what sort of information they are looking for.

A common complaint among users regarding software documentation is that only one of these three approaches was taken to the near-
exclusion of the other two. It is common to limit provided software
documentation for personal computers to online help that give only
reference information on commands or menu items. The job of tutoring
new users or helping more experienced users get the most out of a
program is left to private publishers, who are often given significant
assistance by the software developer.

15.6 MARKETING DOCUMENTATION

For many applications it is necessary to have some promotional materials to encourage casual observers to spend more time learning about the product. This form of documentation has three purposes:

1. To excite the potential user about the product and instil in them
a desire for becoming more involved with it.
2. To inform them about what exactly the product does, so that
their expectations are in line with what they will be receiving.
3. To explain the position of this product with respect to other
alternatives.
One good marketing technique is to provide clear and memorable
catch phrases that exemplify the point we wish to convey, and also
emphasize the interoperability of the program with anything else
provided by the manufacturer.

15.7 CASE TOOLS AND THEIR IMPORTANCE

CASE tools stand for Computer Aided Software Engineering tools. As the name implies, they are computer-based programs that increase the productivity of analysts. They permit effective communication with users as well as other members of the development team. They integrate the development done during each phase of a system life cycle and also assist in correctly assessing the effects and cost of changes so that maintenance cost can be estimated.

Available CASE tools


Commercially available systems provide tools (i.e. computer program
packages) for each phase of the system development life cycle. A
typical package is Visual Analyst which has several tools integrated
together. Tools are also in the open domain which can be downloaded
and used. However, they do not usually have very good user
interfaces.

Following types of tools are available:
· System requirements specification documentation tool
· Data flow diagramming tool
· System flow chart generation tool
· Data dictionary creation
· Formatting and checking structured English process logic
· Decision table checking
· Screen design for data inputting
· Form design for outputs
· E-R diagramming
· Database normalization given the dependency information

• When are tools used

Tools are used throughout the system design phase. CASE tools are sometimes classified as upper CASE tools and lower CASE tools. The tools we have described so far are upper CASE tools. Tools which generate computer screen code from higher level descriptions such as structured English and decision tables are called lower CASE tools.

• Object Oriented System Design Tools


Unified Modelling Language (UML) is currently the standard. A UML tool set is marketed as Rational Rose by Rational, a company whose tools are widely used. This is an expensive tool and not in the scope of this course.

• How to use the tools

Most tools have a user’s guide which is given as help files along with the tool.
Many have FAQs and search capabilities.
Details on several open domain tools and what they do is given below.

• System Flowchart and ER-Diagram generation Tool

Name of the tool: SMARTDRAW

URL: This software can be downloaded from http://www.smartdraw.com. This is paid software, but a 30-day free trial for learning can be downloaded.

Requirements to use the tool: PC running Windows 95, 98 or NT.


The latest versions of Internet Explorer or Netscape Navigator,
and about 20MB of free space.

What the tool does: Smartdraw is a perfect suite for drawing all kinds
of diagrams and charts: Flowcharts, Organizational charts, Gantt
charts, Network diagrams, ERdiagrams etc.

The drag-and-drop ready-made graphics of thousands of templates from built-in libraries make drawing easier. It has a large drawing area and
drawings from this tool can be embedded into Word, Excel and
PowerPoint by simply copy-pasting. It has an extensive collection of
symbols for all kinds of drawings.

How to use: The built-in tips guides as the drawing is being created.
Tool tips automatically label buttons on the tool bar. There is online
tutorial provided in:
http://www.smartdraw.com/tutorials/flowcharts/tutorials1.htm
http://www.ttp.co.uk/abtsd.html

Data Flow Diagram Tool

Name of the tool: IBMS/DFD


URL: This is free software that can be downloaded from: http://viu.eng.rpi.edu

Requirements to use the tool: The following installation instructions assume that the user uses a PC running Windows 95, 98 or NT.
Additionally, the instructions assume the use of the latest versions of
Internet Explorer or Netscape Navigator. To download the zip files &
extract them you will need WinZip or similar software. If needed
download at http://www.winzip.com.

What the tool does: The tool helps the users draw a standard data
flow diagram (a process-oriented model of information systems) for
systems analysis.

How to use: Double click on the IBMS icon to see the welcome screen.
Click anywhere inside the welcome screen to bring up the first screen.
Under "Tools" menu, select DFD Modelling. The IBMS will pop up the
Data Flow Diagram window. Its menu bar has the File, Edit, Insert,
Font, Tool, Window and Help options. Its tool box on the right contains
10 icons, representing (from left to right and top to bottom) pointer, cut,
data flow, process, external entity, data store, zoom-out, zoom-in,
decompose, and compose operations, respectively.
Left click on the DFD component to be used in the toolbox, key in the
information pertaining to it in the input dialogue box that prompts for
information.
To move the DFD components: Left click on the Pointer icon in the tool
box, point to the component, and hold Left Button to move to the new
location desired in the work area.
To edit information of the DFD components: Right click on the DFD
component. The input dialogue box will prompt you to edit information
of that component.

Levelling of DFD: Use the Decompose icon in the tool box for levelling.
To save the DFD: Under File menu, choose Save or SaveAs. Input the
name and extension of the DFD (the default extension is DFD) and
specify folder for the DFD to be saved. Click OK.

System requirement specification documentation tool

Name of the tool: ARM


URL: The tool can be downloaded without cost at
http://sw-assurance.gsfc.nasa.gov/disciplines/quality/index.php

What the tool does: ARM, or the Automated Requirement Measurement tool, aids in writing the System Requirements Specifications right. The user writes the SRS in a text file; the ARM tool scans this file that contains the requirement specifications and gives a report file with the same prefix name as the user’s source file and an extension of “.arm”. This report file contains a category called INCOMPLETE that indicates the words and phrases that are not fully developed.

Requirements to use the tool: PC running Windows 95, 98 or NT.


The latest versions of Internet Explorer or Netscape Navigator, and
about 8MB of free space.

How to use the tool: On clicking the option Analyze under the File menu and selecting the file that contains the System Requirements Specifications, the tool processes the document to check if the specifications are right and generates an ARM report.

The WALKTHROUGH option in the ARM tool assists a user by guiding him as to how to use the tool, apart from the HELP menu. The
README.doc file downloaded during installation also contains
description of the usage of this tool.

A Tool for designing and Manipulating Decision tables


Name of the tool: Prologa V.5
URL: http://www.econ.kuleuven.ac.be/prologa
Note: This tool can be downloaded from the above given URL, after
obtaining the password.

What the tool does: The purpose of the tool is to allow the decision
maker to construct and manipulate (systems of) decision tables. In this
construction process, the features available are automatic table
contraction, automatic table optimization, (automatic) decomposition
and composition of tables, verification and validation of tables and
between tables, visual development, and rule based specification.
15.8 SUMMARY :

In this Chapter, we learned the concept of Documentation and types of documentation such as Requirements documentation, Architecture/Design documentation, Technical documentation, User documentation, and Marketing documentation, as well as CASE Tools and their importance.

Questions:
1. Explain Requirements documentation?
Ans: refer 15.2
2. Explain Architecture/Design documentation?
Ans: refer 15.3
3. Explain Technical documentation?
Ans: refer 15.4
4. Explain User documentation?
Ans: refer 15.5
5. Explain Marketing documentation?
Ans: refer 15.6


