STM Unit-1 (A)

The document outlines the purpose and goals of software testing, emphasizing the importance of bug prevention and discovery. It details the phases of a tester's mental life, the necessity of test design, and various testing methods, including unit, component, integration, and system testing. Additionally, it discusses the complexities of software bugs and the significance of understanding the testing environment and models.


UNIT-I (A)

INTRODUCTION

1. PURPOSE OF TESTING

1.1. What we do

 Testing is a systematic approach used to determine whether the given software behaves correctly or not.


 In testing, each and every module is tested.
 Testing finds out the bugs in the given software.
 Test design and testing take longer than program design and coding.
 Bugs are due to imperfect communication among programmers.

1.2. Productivity and Quality in Software

 Once in production, each and every stage is subjected to quality control and testing, from
component source inspection to final testing before shipping.
 If any defects are found at any stage, the affected part is sent back for rework and correction.
 Productivity is measured by the sum of the costs of the resources, the rework and the failed
components, and the cost of quality assurance and testing.
 Testing gives us a measure of the quality of the software.
 If we give a guarantee for the quality of a product, it is called Quality Assurance.

1.3. Goals for Testing

 Testing and test design, as parts of quality assurance, should also focus on bug prevention.
 To the extent that testing and test design do not prevent bugs, they should be able to discover
symptoms caused by bugs.
 The two major goals of testing
1. Bug Prevention
2. Bug Discovery

1. Bug Prevention:

 Bug prevention is considered the primary goal of testing.


 When a bug is detected, an appropriate method should be used to remove it.
 If bugs are not prevented, then testing should at least discover the symptoms caused by those bugs.
 When a particular bug is prevented, there is no need to perform testing for it again.
 Designing tests is considered the best bug preventer:
 it means that if a test is properly designed, it is easy to detect and prevent bugs before the coding
phase.
 If testing is performed at every stage of software development (initialization, design, and
coding), little testing would be needed afterwards, as most bugs would have been discovered
and prevented during the design phase itself.

2. Bug discovery:
 Bug discovery is considered the secondary goal of testing.
 It is performed when the primary goal fails to prevent the bugs.
 A single bug can have many reasons for its occurrence.
 Merely determining that the program is incorrect does not reveal the bugs themselves.
 Each symptom can be revealed only by performing many small detailed tests on each of the
individual components.

1.4. Phases in a Tester’s Mental Life


Why testing? – What’s the purpose of testing?
The attitudinal progression of a tester is characterized by the following five phases.

Phase 0: There’s no difference between testing and debugging.


Here there is no effective testing, no quality assurance, and no quality.

Phase 1: The purpose of testing is to show that the software works.
 There is a difference between testing and debugging.
 The probability of showing that 'the software works' decreases as testing increases.
 A single failed test shows the software does not work, even if many tests pass.

Phase 2: The purpose of testing is to show that the software doesn't work.
 One failed test proves that; tests then have to be redesigned to test the corrected software.
 A test reveals a bug, the programmer corrects it, and the test designer designs and executes another
test intended to demonstrate another bug. It is a never-ending sequence.

Phase 3: The purpose of testing is not to prove anything, but to reduce the perceived risk of not
working to an acceptable value.
 Here testing implements the quality control. To the extent that testing catches bugs and to the
extent that those bugs are fixed, testing does improve the product.
 If a test is passed, then the product’s quality does not change, but our perception of that quality
does.

Phase 4: A state of mind about what testing can and cannot do; testability becomes the goal.


Testability is the goal for two reasons:
1. Reduce the labor of testing.
2. Testable code has fewer bugs than code that’s hard to test.

1.5. Test Design

We know that software code must be designed and tested, but many appear to be unaware
that tests themselves must be designed and tested. Tests should be properly designed and tested before
being applied to the actual code.

In the test design phase, the given system is tested to determine whether bugs are present or not. If tests
are not formally designed, no one can be sure whether there was a bug or not. Test design is therefore
essential to getting a system without bugs.

1.6. Testing Isn’t Everything

We must first review, inspect, read, do walkthroughs and then test.

The major methods in decreasing order of effectiveness as follows:

Inspection methods: It includes walkthroughs, desk checking, formal inspection and code reading.
These methods appear to be as effective as testing, but the sets of bugs caught do not completely overlap.

Design style: It includes testability, openness and clarity to prevent bugs.

Static Analysis Methods: These include strong typing and type checking, which eliminate an entire category
of bugs.

Languages: The source language can help reduce certain kinds of bugs. However, programmers find new kinds of
bugs in new languages, so the bug rate seems to be independent of the language used.

Design methodology and Development Environment: Design methodology can prevent many kinds of
bugs. The development process used, and the environment in which the methodology is embedded, also matter.

1.7. The pesticide paradox and the complexity Barrier

1. Pesticide Paradox: Every method you use to prevent or find bugs leaves a residue of subtler bugs
against which those methods are ineffectual.

2. Complexity Barrier: Software complexity grows to the limits of our ability to manage that
complexity.

2. MODEL FOR TESTING


2.1. The Project
The real-world context is characterized by the following model project.

Application: It is a real-time system that must provide timely responses to user requests for services. It is
an online system connected to remote terminals.

Staff: The programming staff consists of twenty to thirty programmers, depending upon the project, but is not
too big to manage. Specialists are used for the system's design.

Schedule: The project will take 24 months from the start of design to formal acceptance by the
customer. Acceptance will be followed by a 6-month cutover period.

Specification: The specification means the requirements. Functionally detailed user requirements are documented here.

Acceptance Test: The system will be accepted only after a formal acceptance test. At first the customer
will intend to design the acceptance test, but later it will become the software design team’s
responsibility.

Personnel (Programmers): The staff is professional and experienced in programming and in the
application. At least half of the staff knows the source language beforehand. Maybe one-third are junior
programmers.

Standards: Programming and test standards exist and are usually followed.

Objectives: The system is the first of many similar systems that will be implemented in the future. No
two will be identical, but they will have 75% of the code in common.

Source: One-third of the code is new, one-third extracted from a previous, reliable, but poorly
documented system, and one-third is being rehosted (from other language or computer).

2.2. Overview

The process starts with a program embedded in an environment, such as a computer, an operating
system, or a calling program. This understanding leads us to create three models:

 A model of the environment,


 A model of the program,
 A model of the expected bugs.

From these models we create a set of tests, which are then executed.
 The result of each test is either expected or unexpected.
 If unexpected, it may lead us to revise the test, our model or concept of how the program
behaves, our concept of what bugs are possible, or the program itself.
 Only rarely would we attempt to modify the environment.

2.3 The Environment

A program’s environment is the hardware and software required to make it run.

For online systems the environment may include communications lines, other systems, terminals,
and operators. The environment also includes all programs that interact with and are used to create the
program under test, such as operating system, loader, linkage editor, compiler, utility routines.

If testing reveals an unexpected result, we may have to change our beliefs (our model of the
environment) to find out what went wrong. But sometimes the environment could be wrong: the bug
could be in the hardware or firmware after all.

2.4. The Program

Programs are too complicated to understand in detail. Our concept of the program must be
simplified in order to test it.

If a simple model of the program doesn't explain the unexpected behavior, we may have to
modify that model to include more facts and details. And if that fails, we may have to modify the
program.

2.5. Bugs

 Bugs are more insidious (cunning/harmful) than we ever expect them to be.
 An unexpected test result may lead us to change our notion of what a bug is and our model
of bugs.
 Some optimistic notions that many programmers or testers have about bugs leave them
unable to test effectively and unable to justify the dirty tests most programs need.

There are 9 Hypotheses regarding Bugs.

1) Benign Bug Hypothesis:


 The belief that bugs are nice, tame (mild) & logical.
 Such bugs are not dangerous.
2) Bug locality hypothesis:
 The belief that bugs are localized.
 The belief that a bug discovered within a component affects only that component's
behavior.
 In reality, subtle (narrow) bugs affect both that component and areas external to it.
3) Control Dominance hypothesis:
 The belief that errors in the control structures of programs dominate the bugs.
 In reality, data-flow and data-structure errors
are common too.
 Subtle bugs are not detectable through the control structure alone.
4) Code/data Separation hypothesis:
 Belief that the bugs respect the separation of code & data.
5) Lingua Salvator Est hypothesis:
 Belief that the language syntax & semantics eliminate most bugs.
 But, such features may not eliminate Subtle Bugs.
6) Corrections Abide hypothesis:
 Belief that a corrected bug remains corrected.
 Subtle bugs may not.
7) Silver Bullets hypothesis:
 Belief that - language, design method, representation, environment etc. grant
immunity(resistance) from bugs.
8) Sadism Suffices hypothesis:
 The belief that a sadistic streak and low cunning are enough to eliminate most bugs.
 In reality, tough bugs need methodology & techniques.
9) Angelic Testers hypothesis:
 Belief that testers are better at test design than programmers at code design.

2.6. Tests

 Tests are Formal procedures.


 Input preparation, outcome prediction and observation, test documentation, and command
execution are all subject to error.
 An unexpected test result may lead us to revise the test and test models.

2.7. Testing and Levels


We do three distinct kinds of testing on a typical software system: unit/ component testing,
integration testing, and system testing.

The objectives of each kind are different and therefore, we can expect the mix of test methods
used to differ. They are:

1) Unit, Unit Testing:

 A unit is the smallest testable part of an application. Units are tested individually and
independently.
 A unit is usually the work of one programmer, and it consists of several hundred or fewer lines
of source code.
 The goal of unit testing is to segregate each part of the program and test that the individual parts
are working correctly.
 Unit Testing is done before integration.
 When our tests reveal such faults, we say that there is a unit bug.
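To make the idea concrete, here is a minimal sketch of a unit test using Python's built-in unittest framework. The `withdraw` function and its behavior are invented for illustration; they are not from the source.

```python
import unittest

# Hypothetical unit under test: a small function owned by one programmer.
def withdraw(balance, amount):
    """Return the new balance, rejecting negative amounts and overdrafts."""
    if amount < 0:
        raise ValueError("amount must be non-negative")
    if amount > balance:
        raise ValueError("insufficient funds")
    return balance - amount

class WithdrawTest(unittest.TestCase):
    # Each test exercises the unit individually and independently,
    # without involving any other module.
    def test_normal_withdrawal(self):
        self.assertEqual(withdraw(100, 30), 70)

    def test_overdraft_rejected(self):
        with self.assertRaises(ValueError):
            withdraw(100, 200)
```

Such tests are typically run with `python -m unittest` before any integration takes place.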

2) Component, Component Testing:

 A component is an integrated aggregate of one or more units.


 Component testing means testing all related modules that form a component as a group to make sure
they work together.
 Component testing is a method in which each component of an application is tested
separately.
 Suppose an application has 5 components. Testing each of the 5 components separately and
efficiently is called component testing.
 Component testing is done by the tester.
 When our tests reveal such problems, we say that there is a component bug.

3) Integration, Integration Testing:

 Integration is a process by which components are aggregated to create larger components.


 In integration testing, individual software modules are integrated logically and tested as a group.
 Integration testing is necessary to verify that the software modules work in unity.
 After integrating two different modules, say 'Module A' and 'Module B', we perform
integration testing on the combined pair.
 Integration testing is done by a specific integration tester or developer.
 Integration testing follows two approaches, known as the 'Top Down' approach and the 'Bottom Up'
approach.
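As a sketch of the top-down approach (the module names and behavior below are invented for illustration), a stub can stand in for a lower-level module that has not yet been integrated:

```python
# Top-down integration: the higher-level 'Module A' is tested first,
# with a stub standing in for the not-yet-integrated 'Module B'.
def module_b_stub(order_id):
    # Canned response instead of the real lower-level lookup.
    return {"order_id": order_id, "status": "SHIPPED"}

def module_a(order_id, lookup=module_b_stub):
    # Module A formats whatever the lower-level module returns.
    record = lookup(order_id)
    return "Order {}: {}".format(record["order_id"], record["status"])

# Once the real Module B exists, it is passed in place of the stub and
# the same test is rerun to verify the two modules work in unity.
assert module_a(42) == "Order 42: SHIPPED"
```

In the bottom-up approach the roles are reversed: the lower-level module is tested first, exercised by a throwaway driver instead of a stub.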

4) System, System Testing:

 System Testing is a level of the software testing where a complete and integrated software is
tested.
 In system testing the behavior of whole system/product is tested as defined by the scope of the
development project or product.
 System testing is most often the final test, verifying that the system to be delivered meets the
specification and serves its purpose.
 System testing is carried out by specialist testers or developers.

Difference between Unit Testing and Integration Testing


1. Unit testing does not occur after or before any other test level. Integration testing occurs after
unit testing and before system testing.
2. Unit testing has no common abbreviation. Integration testing is abbreviated "I&T", which is why it
is sometimes also called Integration and Testing.
3. Unit testing is not further subdivided. Integration testing is further divided into top-down
integration, bottom-up integration, and so on.
4. Unit testing may not catch integration errors or other system-wide issues, because it only tests
the functionality of the units themselves. Integration testing uncovers errors that arise when
modules are integrated to build the overall system.
5. The goal of unit testing is to isolate each part of the program and show that the individual parts
are correct. The goal of integration testing is to combine the modules of the application and test
them as a group to see that they work together.
6. Unit testing starts from the module specification. Integration testing starts from the interface
specification.
7. Unit testing tests the visibility of the code in detail. Integration testing tests the visibility of
the integration structure.
8. Unit testing pays attention to the behavior of single modules. Integration testing pays attention
to the integration among modules.
9. Unit testing is only a kind of white-box testing. Integration testing is a kind of both black-box
and white-box testing.

Difference between System Testing and Integration Testing


1. In system testing we test the complete system as a whole, to check whether it works properly as per
the requirements. In integration testing we test the modules, combining them and testing them as a
group, to see whether they integrate properly.
2. In system testing, testers concentrate on both functional and non-functional testing (performance,
load, stress, security, recovery testing, and so on). In integration testing, testers concentrate on
functional testing: the main focus is on how two modules behave when combined and tested as a group.
3. Before performing system testing, the system must be integration tested. Before performing
integration testing, the system must be unit tested.
4. System testing starts from the requirements specification. Integration testing starts from the
interface specification.
5. System testing does not test the visibility of code. Integration testing tests the visibility of
the integration structure.
6. System testing does not require any frame (scaffolding). Integration testing requires some frame
(scaffolding).
7. In system testing, the tester pays attention to the system functionality. In integration testing,
the tester pays attention to the integration among modules.
8. System testing is always only a kind of black-box testing. Integration testing is a kind of both
white-box and black-box testing.

2.8. The Role of Models

 Models are used for the testing process until the system behavior is correct or until the model is
insufficient (for testing).
 Unexpected results may force a revision of the model.
 Art of testing consists of creating, selecting, exploring and revising models.
 The model should be able to express the program.

3. CONSEQUENCES OF BUGS

3.1. Importance of Bugs:

The importance of a bug depends on metrics like its frequency, correction cost, installation cost, and
consequential cost.

Frequency:

 The frequency of a bug refers to the rate at which it occurs.


 The more frequently a bug type occurs, the higher its frequency metric.
 Pay more attention to the more frequent bug types.

Correction Cost:

 Once a bug has been detected, it needs to be corrected.


 Correction cost is the cost incurred during the error correction process.
 The cost is the sum of 2 factors: (1) the cost of discovery and
(2) the cost of correction.
 The cost of these bugs increases sharply the later in the development cycle the
bug is discovered.
 The size of the system also affects the correction cost, i.e., as the size of a
system increases, the correction cost also increases.

Installation Cost:

 Installation cost depends on the number of installations, i.e., this cost relies on the different
applications that are used in the system.
 As the number of applications increases, the associated cost also increases.

Consequences:
 It depends upon the consequences (effects) of bugs.
 The consequences of bugs can make the system anything from mildly flawed to
infectious.

A metric for the importance of a bug is:


Importance of bug ($) = Frequency × (Correction cost + Installation cost + Consequential cost)
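The metric can be evaluated directly. In the sketch below, the function and all the dollar figures are invented purely for illustration.

```python
def bug_importance(frequency, correction_cost, installation_cost, consequential_cost):
    # Importance of bug ($) = frequency * (correction + installation + consequential)
    return frequency * (correction_cost + installation_cost + consequential_cost)

# Invented figures: a bug type occurring 12 times, each occurrence costing
# $500 to correct, $200 to install the fix, and $1000 in consequences.
assert bug_importance(12, 500, 200, 1000) == 20400
```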

3.2. Consequences of Bugs :

 The consequences of bugs can range from mild to infectious.


 The consequences of a bug are measured in human terms rather than machine terms.

The various bug consequences are as follows

1) Mild
• Appearance of bug such as misspelled output or misaligned print-out.
2) Moderate
• This consequence affects the performance of the system; for example, it results in misleading or
duplicate output.
3) Annoying (Irritating)
• The system's behavior is dehumanizing; e.g., names are truncated (shortened, cut
short) or modified arbitrarily (at random).
• Because of the presence of bugs in the system the performance of the system degrades.
• For example: The names are shortened or changed.
4) Disturbing
• The system refuses to handle legitimate (legal/authorized) transactions.
• For e.g. ATM machine refuses to process the withdrawal transaction.
5) Serious
• Information about transactions gets lost, such as:
• losing track of transactions and transaction events;
• accountability (responsibility) is lost;
• the occurrence of a transaction is lost.
• When such information is lost, the resulting bug is called a serious bug.
6) Very serious
• The system performs a different transaction than the one requested.
• For example, a deposit transaction is converted into a withdrawal transaction.
7) Extreme
• The consequence occurs frequently and is not limited to a small number of users or
transactions.
8) Intolerable
• Long-term, unrecoverable corruption of the database (not easily discovered, and may lead
to system downtime).
9) Catastrophic
• System fails and shuts down.
10) Infectious
• Corrupts other systems, even when it may not fail.

3.3. Flexible Severity Rather Than Absolutes

Many programmers, testers, and quality assurance workers have an absolutist attitude towards
bugs. “Everybody knows that a program must be perfect if it’s to work: if there’s a bug, it must be
fixed.”

Metrics:
Correction Cost
 The cost of correcting a bug has almost nothing to do with symptom severity.
 Catastrophic, life-threatening bugs can be trivial to fix, whereas minor annoyances may
require major rewrites to correct.

Context and Application Dependency


 Severity depends on the context and the application in which it is used.
Creating Culture Dependency
 Severity depends on the creators of the software and their cultural aspirations.

User Culture Dependency


 Severity depends on the user culture.
 Naïve users of software go crazy over bugs that experts may just ignore.

The Software Development Phase


 Severity depends on the development phase.
 Any bug becomes more severe the closer it gets to field use, and the longer it has been
around.
3.4. The Nightmare List and When to Stop Testing

1. List all nightmares in terms of the symptoms & reactions of the user to their consequences.

2. Convert the consequences of each nightmare into a cost; there could be rework cost. Order these
from the costliest to the cheapest. Discard those you can live with.

3. Based on experience, measured data, insight, and published statistics, assume the kinds of bugs
causing each symptom. This is called the 'bug design process'. A bug type can cause multiple
symptoms.

4. Order the causative bugs by decreasing probability. Calculate the importance of a bug type as:

Importance of bug type j = ∑k Cjk Pjk (summed over all nightmares k)

where

Cjk = cost due to bug type j causing nightmare k

Pjk = probability of bug type j causing nightmare k

Cost due to all bug types = ∑j ∑k Cjk Pjk

5. Rank the bug types in order of decreasing importance.

6. Design tests & design QA inspection process by using most effective methods against the most
important bugs.

7. If a test is passed or when correction is done for a failed test, some nightmares disappear. As
testing progresses, revise the probabilities & nightmares list as well as the test strategy.

8. Stop testing when probability (importance & cost) proves to be inconsequential.
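Steps 4 and 5 above can be sketched numerically. The bug types, nightmares, costs C[j][k], and probabilities P[j][k] below are all invented for illustration.

```python
# Invented costs C[j][k] and probabilities P[j][k] for two bug types (j)
# causing two nightmares (k), following the formula in step 4.
C = {"data": {"lost_txn": 10000, "bad_report": 500},
     "control": {"lost_txn": 8000, "bad_report": 300}}
P = {"data": {"lost_txn": 0.25, "bad_report": 0.5},
     "control": {"lost_txn": 0.125, "bad_report": 0.25}}

def importance(j):
    # Importance of bug type j = sum over all k of C[j][k] * P[j][k]
    return sum(C[j][k] * P[j][k] for k in C[j])

# Step 5: rank the bug types in order of decreasing importance.
ranked = sorted(C, key=importance, reverse=True)
assert importance("data") == 2750.0      # 10000*0.25 + 500*0.5
assert importance("control") == 1075.0   # 8000*0.125 + 300*0.25
assert ranked == ["data", "control"]
```

As testing progresses and nightmares disappear, re-running the ranking with revised probabilities implements the revision called for in step 7.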

4. TAXONOMY OF BUGS (Classification)

There is no universally correct way to categorize bugs, and this taxonomy is not rigid. Bugs are difficult to
categorize: a given bug can be put into one category or another depending on its history and the
programmer's state of mind.

There are 6 main categories, with sub-categories (sample bug statistics):

 Requirements, Features, Functionality Bugs: 24.3%

 Structural Bugs: 25.2%

 Data Bugs: 22.4%

 Coding Bugs: 9.9%

 Interface, Integration and System Bugs: 10.7%

 Testing & Test Design Bugs: 2.8%

4.1. Requirements, Features, Functionality Bugs

Requirements & Specifications:


 Requirements are expressed in the form of specifications.
 Specifications give detailed description about the requirements of the software.
 If specifications are not clearly defined, we may get bugs.
 Incompleteness and ambiguity (confusion) bugs occur in this phase.
 The analyst's assumptions may not be known to the designer.
 These bugs are expensive: they are introduced early in the SDLC and removed last.
Feature Bugs:
 The difficulties that arise with feature bugs are due to specification problems.
 A feature can be either incorrect or missing.
 A missing feature can be detected & corrected easily.
 Removing features may complicate the software and cause more bugs.
Functionality Bugs
 Features that are similar are combined to form groups.
 Thorough testing is performed on the features within a group and on the interactions between
features.
 The difficulty arises when there is an unexpected interaction between the features.
Remedies (Testing Techniques):
 Functional test techniques such as transaction flow testing, syntax testing, domain testing,
logic testing, and state testing can eliminate requirements & specification bugs.
4.2. Structural Bugs
In this, we have 5 types of structural bugs, their causes and remedies.
1) Control & Sequence bugs
2) Logic Bugs
3) Processing bugs
4) Initialization bugs
5) Data flow bugs
1) Control & Sequence Bugs
 Control and sequence bugs occur due to missing process steps, paths left out, and unreachable code.
 Improper nesting of loops, incorrect loop termination, missing process steps, duplicated or
unnecessary processing, and violent GOTOs.
 Usage of old code (assembly language & COBOL).
(Remedies)
 Detected by Unit, structural, path, & functional testing.
2) Logic Bugs
 Logic bugs show up in the behavior of statements & operations.
 Logic bugs include incorrect design of cases, incorrect evaluation and combination of
cases, and complicated operators.
 Misunderstanding of the semantics of the control structures & logic operators.
 Improper layout of cases (including impossible cases & ignoring necessary cases).
 Deeply nested conditional statements & using many logical operations in one statement.
Prevention and Control(Remedies)
 Logic testing, careful checks, functional testing
3) Processing Bugs
 Processing bugs include arithmetic, algebraic, and mathematical function evaluation errors.
 These bugs occur if wrong data conversion methods are used for converting data from one
format to another.
 Improper use of relational operators.
Prevention and Control(Remedies)
 These frequent bugs are caught in Unit Testing & have only localized effect.
 Domain testing methods
4) Initialization Bugs
 Initialization bugs consist of wrong data types, wrong initial values, wrong registers, etc.
 Initialization bugs are detected by both experienced programmers & testers.
 Initializing to a wrong data type or format.
 These are very common.
Prevention and Control:
 Programming tools, explicit declaration & type checking in source language, preprocessors.
 Data flow test methods help design of tests and debugging.
5) Dataflow Bugs & Anomalies
 Dataflow anomalies arise when data is used for an improper purpose,
such as using an uninitialized variable or initializing a variable twice.
 Re-initialization without an intermediate use.
 Data flow anomalies can be detected by a compiler at both
compile time and execution time.
Prevention and Control:
 Data flow testing methods
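The idea behind data-flow anomaly detection can be sketched as a small checker. The event encoding below ('d' = define/initialize, 'u' = use) is a simplification assumed for illustration, not a tool from the source.

```python
# Data-flow anomaly check over a straight-line sequence of (action, variable)
# events. Two classic anomalies: use before define (uninitialized variable)
# and define-define (re-initialization without an intermediate use).
def find_anomalies(events):
    last = {}  # variable -> last action seen
    anomalies = []
    for action, var in events:
        prev = last.get(var)
        if action == "u" and prev is None:
            anomalies.append(("use-before-define", var))
        elif action == "d" and prev == "d":
            anomalies.append(("define-define", var))
        last[var] = action
    return anomalies

# 'x' is initialized twice with no use in between; 'y' is used uninitialized.
trace = [("d", "x"), ("d", "x"), ("u", "y")]
assert find_anomalies(trace) == [("define-define", "x"), ("use-before-define", "y")]
```

A real compiler or data-flow testing tool performs the same kind of bookkeeping over all paths of the program's control-flow graph rather than a single straight-line trace.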
4.3. Data Bugs

Data bugs depend on the types of data or the representation of data. There are 4 sub-categories.
1) General Data Bugs
2) Dynamic Data Vs Static Data
3) Information, Parameter, and Control Bugs
4) Contents, Structure & Attributes related Bugs
1) General Data Bugs:
 Due to data object specifications, formats, the number of objects & their initial values.
 These bugs are as common as bugs in code, especially as the code migrates to data.
2) Dynamic Data Vs Static Data:
Dynamic Data:
 Dynamic data are transitory (temporary).
 Whatever their purpose their lifetime is relatively short (the processing time of a transaction).
 A storage object may be used to hold dynamic data of different types, with different formats
and attributes.
 Dynamic data bugs are due to leftover garbage in a shared resource.
 This can be handled in one of the three ways:
(1) Clean up after the use by the user
(2) Common Cleanup by the resource manager
(3) No clean up (this is what we usually do).
Static Data:
 Static Data are fixed in form and content.
 They appear in the source code or database directly or indirectly.
 Compile time processing will solve the bugs caused by static data.
3) Information, parameter, and control:
 Static or dynamic data can serve in one of three roles, or in combination of roles: as a
parameter, for control, or for information.
Information: dynamic, local to a single transaction or task.
Parameter: parameters passed to a call.
Control: data used in a control structure for a decision.
Bugs:
 Usually simple bugs and easy to catch.
4) Contents, Structure & Attributes related Bugs:
Data specifications consist of three parts.
Contents: are pure bit patterns.
Structure: Size, shape & alignment of data object in memory. A structure may have substructures.
Attributes: Semantics associated with the contents of data object (e.g. integer, string and
subroutine).
Bugs:
 Content bugs are due to misinterpretation or corruption of the contents.
 Structural bugs may be due to wrong declarations.
 Attribute bugs are due to misinterpretation of the data type, probably at an interface.

4.4. Coding Bugs


 Coding errors of all kinds can create any of the other kind of bugs.
 Syntax errors are generally not important in the scheme of things if the source language
translator has adequate syntax checking.
 If a program has many syntax errors, then we should expect many logic and coding bugs.
 Documentation bugs are also considered coding bugs, as they may mislead
maintenance programmers.

4.5. Interface, Integration and Systems Bugs:


There are 9 types of bugs of this type.
1) External Interfaces
2) Internal Interfaces
3) Hardware Architecture Bugs
4) Operating System Bugs
5) Software architecture bugs
6) Control & Sequence bugs
7) Resource management bugs
8) Integration bugs
9) System bugs
1) External Interfaces:
 The external interfaces are the means used to communicate with the world.
 These include devices, sensors, input terminals, printers, and communication lines.
 Other external interface bugs are: invalid timing or sequence assumptions related to external
signals
 Misunderstanding external input or output formats.
 Insufficient tolerance to bad input data.
2) Internal Interfaces:
 Internal interfaces are not different from external interfaces but they are more controlled.
 A best example for internal interfaces is communicating routines.
 The external environment is fixed and the system must adapt to it, but the internal environment,
which consists of interfaces with other components, can be negotiated (mutually agreed).
 Internal interfaces have the same problems as external interfaces.
3) Hardware Architecture:
 Bugs related to hardware architecture originate mostly from misunderstanding how the
hardware works.
 Examples of hardware architecture bugs: address generation error, i/o device operation /
instruction error, waiting too long for a response, incorrect interrupt handling etc.
 The remedy for hardware architecture and interface problems is
(1) Good Programming and Testing
(2) Centralization of hardware interface software in programs written by hardware interface
specialists.
4) Operating System Bugs:
 Program bugs related to the operating system are a combination of hardware architecture and
interface bugs, mostly caused by a misunderstanding of what the operating system does.
 Use operating system interface specialists, and use explicit interface modules for all operating
system calls.
5) Software Architecture:
 Software architecture bugs are the kind often called "interactive".
 Routines can pass unit and integration testing without revealing such bugs.
 Careful integration of modules and thorough testing of the final system are effective methods against
these bugs.
6) Control and Sequence Bugs (Systems Level):
 These bugs include: ignored timing; assuming that events occur in a specified sequence;
working on data before all the data have arrived from disk; waiting for an impossible
combination of prerequisites; and missing, wrong, redundant, or superfluous process steps.
 The remedy for these bugs is highly structured sequence control.
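A minimal sketch of "highly structured sequence control": the routine below refuses to work on data before all of it has arrived, turning a silent sequence bug into an explicit, testable error (the batch format and counts are hypothetical):

```python
# Sketch of structured sequence control: an explicit precondition check
# prevents processing a batch before the whole batch has arrived, a
# classic system-level sequence bug. The record format is hypothetical.

def process_batch(records, expected_count):
    """Refuse to run until the complete batch has arrived."""
    if len(records) < expected_count:
        raise RuntimeError(
            f"sequence error: only {len(records)} of "
            f"{expected_count} records have arrived")
    return sum(records)

try:
    process_batch([1, 2], expected_count=3)   # data still arriving
except RuntimeError as e:
    print("blocked:", e)

print(process_batch([1, 2, 3], expected_count=3))   # 6
```

Without the guard, the partial batch would be processed and produce a quietly wrong total; with it, the sequence violation is caught at the point of the mistake.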
7) Resource Management Problems:
 Memory is subdivided into dynamically allocated resources such as buffer blocks, queue
blocks, task control blocks, and overlay buffers.
 Some resource management and usage bugs: required resource not obtained, wrong resource
used, resource already in use, resource deadlock, etc.
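Disciplined resource management can be sketched with a fixed pool and a construct that guarantees every acquired block is returned, even when an error occurs (the pool contents and sizes below are illustrative):

```python
from contextlib import contextmanager

# Sketch of disciplined resource management: a fixed pool of buffer
# blocks, with a context manager guaranteeing that every block obtained
# is returned, even on error. The pool size is illustrative.

free_buffers = ["buf0", "buf1", "buf2"]

@contextmanager
def acquire_buffer():
    if not free_buffers:
        raise RuntimeError("required resource not obtained: pool empty")
    buf = free_buffers.pop()
    try:
        yield buf
    finally:
        free_buffers.append(buf)     # always returned, never leaked

with acquire_buffer() as b:
    print("using", b)
print(len(free_buffers))             # 3: the buffer came back
```

The `finally` clause is what rules out the "resource obtained but never returned" family of bugs; the explicit empty-pool check makes "required resource not obtained" an immediate, visible failure rather than a hang.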
8) Integration Bugs:
 Integration bugs are bugs having to do with the integration of, and the interfaces between,
working and tested components.
 These bugs result from inconsistencies or incompatibilities between components.
9) System Bugs:
 System bugs cover bugs that involve every part of the system: programs, data, hardware, and the
operating system.
 There can be no meaningful system testing until there has been thorough component and
integration testing.

4.6. Test and Test Design Bugs


Testing:
 Testers have no immunity to bugs.
 Tests, especially at the system level, require code, complicated scenarios, and databases in order to be executed.
 Bugs in testing (scripts or procedures) are not bugs in the software under test.
 It is difficult and time-consuming to determine whether a bug comes from the software or from the test
script/procedure.
Test Criteria: (Test Design)
 Sometimes the testing process is correct, but the criterion for judging the software's response to the tests is
incorrect or impossible to apply; so a proper test criterion has to be designed.
 The more complicated the criteria, the likelier they are to have bugs.

Remedies:
 Test Debugging: The first remedy for test bugs is testing and debugging the tests. Test
debugging, when compared to program debugging, is easier because tests, when properly
designed are simpler than programs and don’t have to make concessions to efficiency.
 Test Quality Assurance: Programmers have the right to ask how quality in independent testing
is monitored.
 Test Execution Automation: Assemblers, loaders, and compilers were developed to reduce the
incidence of programming and operation errors; similarly, test execution bugs are virtually eliminated by
various test execution automation tools.
 Test Design Automation: Just as much of software development has been automated, much
test design can be and has been automated.
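Test execution automation, at its simplest, encodes expected outcomes as machine-checkable cases so a tool rather than a human judges pass/fail, removing a whole class of test execution bugs. The triangle classifier below is a hypothetical program under test:

```python
# A minimal sketch of test-execution automation: expected outcomes are
# encoded as data so a runner, not a human, judges pass/fail.
# The triangle classifier is a hypothetical program under test.

def triangle_kind(a, b, c):
    if a == b == c:
        return "equilateral"
    if a == b or b == c or a == c:
        return "isosceles"
    return "scalene"

# Each (inputs, expected outcome) pair is one tiny, automatable test.
TEST_CASES = [
    ((3, 3, 3), "equilateral"),
    ((3, 3, 4), "isosceles"),
    ((3, 4, 5), "scalene"),
]

def run_tests():
    """Run every case; return the list of failures (empty = all pass)."""
    failures = []
    for args, expected in TEST_CASES:
        actual = triangle_kind(*args)
        if actual != expected:
            failures.append((args, expected, actual))
    return failures

print("failures:", run_tests())   # failures: []
```

Because the expected outcomes live in one table, debugging the tests themselves (the "test debugging" remedy above) reduces to reviewing that table rather than re-running a manual procedure.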
Overview:
 At the end of this long study of the bug taxonomy, we can say:
 Good design controls bugs and is easy to test.
 The difference between good and bad design results in large productivity differences.
 Good tests work best on good code and good designs.
 Good tests cannot work magic on badly designed software.
 The biggest part of software cost is the cost of bugs: the cost of detecting them, the cost of
correcting them, the cost of designing tests that discover them, and the cost of running those tests.
 The test techniques you use must be matched to the kind of bugs you have.

5. IMPLEMENTATION & APPLICATION OF PATH TESTING


5.1. Integration, Coverage, and Paths in Called Components:
• Path testing is mainly used in unit testing, especially for new software.
• In an idealized bottom-up integration test process, one component is integrated at a time: stubs
stand in for lower-level components (subroutines), the interfaces are tested, and then the stubs are
replaced by the real subroutines.
• In reality, integration proceeds in associated blocks of components, and stubs may be avoided;
we then need to think about the paths inside the called subroutines.
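Stub-based bottom-up integration can be sketched as follows: a stub with a fixed, predictable answer stands in for the not-yet-integrated subroutine while the caller's paths are tested, and is later replaced by the real routine (the routine names and the discount rule are illustrative):

```python
# Sketch of stub-based bottom-up integration: a stub stands in for a
# not-yet-integrated subroutine so the caller's paths can be exercised,
# then the stub is replaced by the real routine. Names are illustrative.

def discount_stub(customer_id):
    """Stub for the real discount lookup: fixed, predictable answer."""
    return 0.0

def real_discount(customer_id):
    """The real subroutine, integrated later."""
    return 0.25 if customer_id.startswith("VIP") else 0.0

def invoice_total(amount, customer_id, discount_fn):
    """Caller under test; the callee is injected so a stub can be used."""
    return amount * (1.0 - discount_fn(customer_id))

print(invoice_total(100.0, "VIP-7", discount_stub))   # 100.0 with the stub
print(invoice_total(100.0, "VIP-7", real_discount))   # 75.0 once integrated
```

With the stub, the caller's own paths are testable in isolation; once the real subroutine is plugged in, the same selected path may become unachievable if the callee's processing blocks it, which is exactly the sensitization difficulty noted below.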
To achieve C1 or C2 coverage:
 Sensitization becomes more difficult.
 A selected path may be unachievable because the called component's processing blocks it.
Weaknesses of Path testing:
• It assumes that effective testing can be done one level at a time, without worrying about what happens
at lower levels.
• Predicate coverage problems & blinding.

5.2. Applications:
1) Application of path testing to New Code:
1) Do path tests to achieve C1 + C2 coverage.
2) A path that is blocked or not achievable could mean a bug.
3) Conversely, when a bug occurs, the path may become blocked.
2) Application of path testing to Maintenance:
1) Path testing is applied first to the modified component.
2) Select paths to achieve C2 over the changed code.
3) Newer and more effective strategies could emerge to provide coverage in maintenance phase.
3) Application of path testing to Rehosting:
1) Path testing with C1 + C2 coverage is a powerful tool for rehosting old software.
2) Software is rehosted when it is no longer cost-effective to support its old application environment.
Process of path testing during rehosting:
• A translator from the old to the new environment is created and tested; path testing during
rehosting is aimed at catching bugs in this translator software.
• A complete C1 + C2 coverage path test suite is created for the old software. The tests are run
in the old environment, and their outcomes become the specifications for the rehosted
software.
• Another translator may be needed to adapt the tests & outcomes to the new environment.
• The cost of the process is high, but it avoids risks associated with rewriting the code.
• Once the software runs in the new environment, it can be optimized or enhanced with new functionality
(which was not possible in the old environment).
