Regulation: 2017
UNIT III
LEVELS OF TESTING
The need for Levels of Testing – Unit Test – Unit Test Planning – Designing the Unit
Tests – The Test Harness – Running the Unit tests and Recording results – Integration
tests – Designing Integration Tests – Integration Test Planning – Scenario testing – Defect bash
elimination System Testing – Acceptance testing – Performance testing – Regression Testing –
Internationalization testing – Ad-hoc testing – Alpha, Beta Tests – Testing OO systems –
Usability and Accessibility testing – Configuration testing – Compatibility testing – Testing
the documentation –Website testing
Levels of Testing
Unit Test: In unit test a single component is tested
Goal: To detect functional and structural defects in the unit
Integration Test: In the integration level several components are tested as a group
Goal: To investigate component interactions
System Test: In the system level the system as a whole is tested
Goal: To evaluate attributes such as usability, reliability, and performance
Acceptance Test: In acceptance test the development organization must
show that the software meets all of the client’s requirements.
Goal: To enable the client to decide whether to accept the software.
IT6004 SOFTWARE TESTING
There are two major approaches to system development: bottom-up and top-down.
These approaches are supported by two major types of programming languages: procedure-
oriented and object-oriented. The different nature of the code produced requires testers to use
different strategies to identify and test components and component groups.
• Systems developed with procedural languages are generally viewed as being composed
of passive data and active procedures. When test cases are developed the focus is on
generating input data to pass to the procedures (or functions) in order to reveal defects.
• Object oriented systems are viewed as being composed of active data along with
allowed operations on that data, all encapsulated within a unit similar to an abstract
data type. The operations on the data may not be called upon in any specific order.
Testing this type of software means designing an order of calls to the operations, using various
parameter values, in order to reveal defects. Issues related to inheritance of operations also
impact testing.
Levels of abstraction
Procedural Systems:
In traditional procedural systems, the lowest level of abstraction is described as a function
or a procedure that performs some simple task. The next higher level of abstraction is a group
of procedures (or functions) that call one another and implement a major system requirement.
These are called subsystems. Combining subsystems finally produces the system as a whole,
which is the highest level of abstraction.
Object-oriented systems:
In object-oriented systems the lowest level is viewed by some researchers as the method
or member function. The next higher level is the class that encapsulates data and the methods
that operate on the data. To move up one more level in an object-oriented system, some
researchers use the concept of the cluster, which is a group of cooperating or related classes.
Finally, there is the system level, which is a combination of all the clusters and any
auxiliary code needed to run the system. Not all researchers in object-oriented development
share the same view of the abstraction levels; for example, Jorgensen describes the thread
as the highest level of abstraction.
Unit Test
A unit is the smallest possible testable software component. It can be characterized in
several ways. For example, a unit in a typical procedure-oriented software system:
• performs a single cohesive function;
• can be compiled separately;
• is a task in a work breakdown structure (from the manager’s point of view);
• contains code that can fit on a single page or screen.
A unit is traditionally viewed as a function or procedure implemented in a
procedural (imperative) programming language. In object-oriented systems both the method
and the class/object have been suggested by researchers as the choice for a unit. A unit may
also be a small-sized COTS component purchased from an outside vendor that is
undergoing evaluation by the purchaser, or a simple module retrieved from an in-house
reuse library.
Finally, the planner estimates resources needed for unit test, such as hardware,
software, and staff, and develops a tentative schedule under the constraints identified at that time.
Phase 2: Identify Unit Features to be tested
This phase requires information from the unit specification and detailed design
description. The planner determines which features of each unit will be tested, for example:
functions, performance requirements, states, and state transitions, control structures,
messages, and data flow patterns.
If some features will not be covered by the tests, they should be mentioned and the risks of
not testing them assessed. Input/output characteristics associated with each unit should also be
identified, such as variables with allowed ranges of values and performance at a certain level.
Phase 3: Add Levels of Detail to the Plan
In this phase the planner refines the plan as produced in the previous two phases. The
planner adds new details to the approach, resource, and scheduling portions of the unit test
plan. Unit availability and integration scheduling information should be included in the revised
version of the test plan.
The planner must be sure to include a description of how test results will be
recorded. Test-related documents that will be required for this task, for example, test logs, and test
incident reports, should be described, and references to standards for these documents
provided. Any special tools required for the tests are also described.
Designing the Unit Test:
Part of the preparation work for unit test involves unit test design. It is important to specify
(i) the test cases, and (ii) the test procedures.
• Test case data should be tabularized for ease of use, and reuse.
• To specifically support object-oriented test design and the organization of test data,
Berard has described a test case specification notation. He arranges the components of
a test case into a semantic network with parts Object_ID, Test_Case_ID, Purpose, and
List_of_Test_Case_Steps. Each of these items has component parts. In the test design
specification Berard also includes lists of relevant states, messages (calls to methods), exceptions,
and interrupts.
As part of the unit test design process, developers/testers should also describe the
relationships between the tests. Test suites can be defined that bind related tests together as a
group. All of this test design information is attached to the unit test plan.
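As an illustration, tabulated test case data might look like the following Python sketch; the function under test, the case IDs, and the values are hypothetical, not from the text:

```python
# Hypothetical unit under test: a simple absolute-value function.
def absolute(x):
    return -x if x < 0 else x

# Test case data kept in a table (list of tuples) for ease of use and reuse.
# Columns: test case ID, input, expected output.
TEST_CASES = [
    ("TC1", 5, 5),
    ("TC2", -5, 5),
    ("TC3", 0, 0),
]

def run_test_suite(cases):
    """Run every tabulated case and return a list of (id, passed) results."""
    results = []
    for case_id, value, expected in cases:
        results.append((case_id, absolute(value) == expected))
    return results
```

Keeping the data in a table, separate from the test logic, is what makes the cases easy to reuse and to bind together into suites.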
Test case design at the unit level can be based on use of the black box and white box test
design strategies. Both of these approaches are useful for designing test cases for functions
and procedures. They are also useful for designing tests for the individual methods (member
functions) contained in a class.
Many developers/testers consider the class to be the component of choice for unit testing.
The process of testing classes as units is sometimes called component test. A class encapsulates
multiple interacting methods operating on common data, so what we are testing is the intraclass
interaction of the methods.
When testing at the class level we are able to detect not only traditional types of
defects (control- or data-flow errors), but also defects due to the nature of object-oriented
systems (encapsulation, inheritance, and polymorphism errors).
Issues related to the testing and retesting of class as a component
Issue 1: Adequately Testing Classes
Testers must decide if they are able to adequately cover all necessary features of each
method in class testing. Coverage objectives and test data need to be developed for each of the
methods. A class can be adequately tested as a whole by observation of method interactions,
using a sequence of calls to the member functions with appropriate parameters.
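A minimal sketch of testing a class as a unit through a sequence of method calls; the Counter class and its limit are invented for illustration:

```python
# Hypothetical class under test: a bounded counter.
class Counter:
    def __init__(self, limit):
        self.limit = limit
        self.value = 0

    def increment(self):
        # The method interacts with state set by __init__ and reset.
        if self.value < self.limit:
            self.value += 1

    def reset(self):
        self.value = 0

def test_counter_interaction():
    """Exercise intraclass interaction with a sequence of calls."""
    c = Counter(limit=2)
    c.increment()
    c.increment()
    c.increment()          # third call must be ignored: limit reached
    assert c.value == 2
    c.reset()
    assert c.value == 0
    return True
```

The point is that no single method call reveals the limit defect class; only the chosen *sequence* of calls does.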
Issue 2: Observation of Object States and State Changes
Methods often modify the state of an object, and the tester must ensure that each
state transition is proper. The test designer can prepare a state table that specifies states
the object can assume, and then in the table indicate sequence of messages and parameters
that will cause the object to enter each state.
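A state table like the one described can drive a test directly; this Python sketch assumes a hypothetical two-state Lamp object:

```python
# Hypothetical object under test: a two-state lamp.
class Lamp:
    def __init__(self):
        self.state = "OFF"

    def press(self):
        # Each message toggles the object's state.
        self.state = "ON" if self.state == "OFF" else "OFF"

# State table: (starting state, message) -> expected resulting state.
STATE_TABLE = [
    ("OFF", "press", "ON"),
    ("ON", "press", "OFF"),
]

def check_state_transitions(table):
    """Return the (state, message) pairs whose transition was improper."""
    failures = []
    for start, message, expected in table:
        lamp = Lamp()
        lamp.state = start          # drive the object into the start state
        getattr(lamp, message)()    # send the message
        if lamp.state != expected:
            failures.append((start, message))
    return failures
```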
Issue 3: The Retesting of Classes—I
A tester of object-oriented code might conclude that only a class with implementation
changes to its methods needs to be retested, and that client classes using unchanged interfaces
need not be retested. This is not necessarily correct, as explained by Perry and Kaiser in their
work on adequate testing for object-oriented systems. In an object-oriented system, if a developer changes a class
implementation that class needs to be retested as well as all the classes that depend on it. If a
superclass, for example, is changed, then it is necessary to retest all of its subclasses. In addition,
when a new subclass is added (or modified), we must also retest the methods inherited from each
of its ancestor super classes.
Issue 4: The Retesting of Classes—II
Very often a tester may assume that once a method in a superclass has been tested, it does
not need to be retested in a subclass that inherits it. However, in some cases the method is used in a
different context by the subclass and will need to be retested. In addition, there may be an
overriding of methods where a subclass may replace an inherited method with a locally defined
method. Not only will the new locally defined method have to be retested, but a new set of
test cases may need to be designed.
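The retesting of an overridden method can be sketched as follows; Shape and Square are hypothetical classes, and the point is that the same test must be rerun against the subclass with its own expected values:

```python
# Hypothetical superclass with a method that a subclass overrides.
class Shape:
    def area(self):
        return 0

class Square(Shape):
    def __init__(self, side):
        self.side = side

    def area(self):                 # overrides the inherited method
        return self.side * self.side

def area_test(shape, expected):
    """The same test must be rerun against every class in the hierarchy."""
    return shape.area() == expected

# Passing the superclass test says nothing about the override:
# Square.area must be retested with its own expected values.
```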
The Test Harness
The auxiliary code developed to support testing of units and components is called a test
harness. The harness consists of drivers that call the target code and stubs that represent
modules it calls.
Drivers and stubs can be developed at several levels of functionality
Functionality of a driver:
1. call the target unit;
2. do 1, and pass input parameters from a table;
3. do 1, 2, and display parameters;
4. do 1, 2, 3, and display results (output parameters).
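A driver at the highest level of functionality above might be sketched like this; the target unit (a temperature conversion) and its input table are invented for illustration:

```python
# Hypothetical target unit: converts Celsius to Fahrenheit.
def c_to_f(celsius):
    return celsius * 9 / 5 + 32

# Input table for the driver (level 2 functionality: pass inputs from a table).
INPUT_TABLE = [0, 100, -40]

def driver(target, inputs):
    """A level-4 driver: call the unit, pass table inputs, and record
    both the parameters and the results."""
    log = []
    for value in inputs:
        result = target(value)
        log.append((value, result))   # record parameters and results
    return log
```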
Running the Unit Tests and Recording Results
It is very important for the tester at any level of testing to carefully record, review,
and check test results. The tester must determine from the results whether the unit has passed or
failed the test. If the test is failed, the nature of the problem should be recorded in what
is sometimes called a test incident report.
Differences from expected behavior should be described in detail. This gives clues to
the developers to help them locate any faults. During testing the tester may determine that
additional tests are required. For example, a tester may observe that a particular coverage goal
has not been achieved. The test set will have to be augmented and the test plan documents should
reflect these changes.
Reasons for the failure of a unit:
• a fault in the unit implementation (code);
• a fault in the test case specification;
• a fault in test procedure execution;
• a fault in the test environment.
Integration Test
The main goals of integration test for procedural code are:
• to detect defects that occur on the interfaces of units;
• to assemble the individual units into working subsystems and finally a complete
system that is ready for system test
In unit test the testers attempt to detect defects that are related to the functionality and
structure of the unit. There is some simple testing of unit interfaces when the units interact with
drivers and stubs. However, the interfaces are more adequately tested during integration
test when each unit is finally connected to a full and working implementation of those
units it calls, and those that call it. As a consequence of this assembly or integration
process, software subsystems and finally a completed system is put together during the
integration test. The completed system is then ready for system testing.
Integration in Procedure-Oriented Systems
Integration testing works best as an iterative process in procedure-oriented systems. One
unit at a time is integrated into a set of previously integrated modules which have passed a set of
integration tests. The interfaces and functionality of the new unit in combination with the
previously integrated units are tested. For conventional procedural/functional-oriented systems
there are two major integration strategies: top-down and bottom-up.
Bottom-up integration of the modules begins with testing the lowest level modules, those
at the bottom of the structure chart. These are modules that do not call other modules. Drivers are
needed to test these modules. The next step is to integrate modules on the next upper level of the
structure chart whose subordinate modules have already been tested. After a module has been
tested, its driver can be replaced by an actual module
Bottom Up Approach
Top-down integration starts at the top of the module hierarchy. The rule of thumb for
selecting candidates for the integration sequence says that when choosing a candidate module to
be integrated next, at least one of the module’s superordinate (calling) modules must have been
previously tested.
M1 is the highest-level module, so the sequence starts by developing stubs to test it. In order
to get a good upward flow of data into the system, the stubs may have to be fairly complex. The
next modules to be integrated are those whose superordinate modules have been tested. The way
to proceed is to replace, one by one, each of the stubs of the superordinate module with a
subordinate module.
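A stub for top-down integration can be sketched as follows; the module names, the tax rate, and the prices are all hypothetical:

```python
# Hypothetical superordinate module M1 calls a subordinate module
# that is not yet integrated, so a stub stands in for it.
def get_tax_rate_stub(region):
    """Stub for the unintegrated subordinate module: returns canned,
    representative data so the superordinate module can be exercised."""
    return 0.25

def compute_price(net, tax_rate_provider):
    """M1: the superordinate module under test."""
    return net + net * tax_rate_provider("default")

# During top-down integration, compute_price is first tested with the stub;
# the stub is later replaced by the real subordinate module.
```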
Integration in Object-Oriented Systems
Scenario Testing
Scenario testing is a software testing technique that makes use of scenarios. Scenarios
help test a complex system better; good scenarios are credible and easy to evaluate.
Methods in Scenario Testing:
System scenarios
Use-case and role-based scenarios
Strategies to Create Good Scenarios:
Enumerate possible users, their actions, and their objectives.
Evaluate users with a hacker’s mindset and list possible scenarios of system abuse.
List the system events and how the system handles such requests.
List benefits and create end-to-end tasks to check them.
Read about similar systems and their behaviour.
Study complaints about competitors’ products and their predecessors.
Scenario Testing Risks:
When the product is unstable, scenario testing becomes complicated.
Scenario tests are not designed for test coverage.
Scenario tests are often heavily documented and used time and again.
System Testing
• The goal is to ensure that the system performs according to its requirements.
• System test evaluates both functional behaviour and quality requirements such as reliability,
usability, performance and security.
Types of System Test:
The types of system tests are as follows:
Functional testing
Performance testing
Stress testing
Configuration testing
Security testing
Recovery testing
Types of Testing
Functional Testing:
• Functional tests at the system level are used to ensure that the behaviour of the system
adheres to the requirements specification.
• All functional requirements for the system must be achievable by the system.
Functional tests are black box in nature. The focus is on the inputs and proper outputs for
each function. Improper and illegal inputs must also be handled by the system, and system
behavior under these circumstances must be observed. All functions must be tested.
Functional tests must focus on the following goals:
All types or classes of legal inputs must be accepted by the software.
All classes of illegal inputs must be rejected.
All possible classes of system output must be exercised and examined.
All effective system states and state transitions must be exercised and examined.
All functions must be exercised.
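These goals suggest choosing one representative from each legal and illegal input class; a Python sketch, in which the validation rule and the input classes are invented:

```python
# Hypothetical system function: validates and registers a user's age.
def register_age(age):
    if not isinstance(age, int) or age < 0 or age > 130:
        return "REJECTED"            # illegal input class
    return "ACCEPTED"                # legal input class

# One representative per input class, legal and illegal alike.
FUNCTIONAL_CASES = [
    (25, "ACCEPTED"),      # typical legal value
    (0, "ACCEPTED"),       # legal boundary
    (-1, "REJECTED"),      # illegal: below range
    (200, "REJECTED"),     # illegal: above range
    ("ten", "REJECTED"),   # illegal: wrong type
]

def run_functional_tests(cases):
    return all(register_age(age) == expected for age, expected in cases)
```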
Performance Testing:
The requirements document shows that there are two major types of requirements:
1. Functional requirements: Users describe what functions the software should perform. Testers
test for compliance of these requirements at the system level with the functional-based system
tests.
2. Quality requirements: They are non-functional in nature but describe quality levels expected
for the software. One example of a quality requirement is performance level. The users may have
objectives for the software system in terms of memory use, response time, throughput, and delays.
The goal of system performance tests is to see if the software meets the performance
requirements. Testers also learn from performance tests whether there are any hardware or software
factors that impact the system’s performance. Performance testing allows testers to tune the
system, i.e., to optimize the allocation of system resources.
Performance objectives must be articulated clearly by the users/clients in the requirements
documents, and be stated clearly in the system test plan.
The objectives must be quantified.
For example, a requirement that the system return a response to a query in “a reasonable
amount of time” is not an acceptable requirement; the time requirement must be specified in
a quantitative way.
Resources for performance testing must be allocated in the system test plan
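A quantified objective can then be checked mechanically; in this sketch the 0.5-second limit and the query operation are assumed values, not requirements from any real plan:

```python
import time

# Hypothetical quantified objective: the query must return in under
# 0.5 seconds (not merely "a reasonable amount of time").
RESPONSE_TIME_LIMIT = 0.5   # seconds, taken from the (assumed) test plan

def query(database, key):
    return database.get(key)

def measure_response_time(operation, *args):
    """Time a single call to the operation under test."""
    start = time.perf_counter()
    operation(*args)
    return time.perf_counter() - start

db = {"user:1": "alice"}
elapsed = measure_response_time(query, db, "user:1")
meets_objective = elapsed < RESPONSE_TIME_LIMIT
```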
Configuration Testing:
Configuration testing allows developers/testers to evaluate system performance and
availability when hardware exchanges and reconfigurations occur.
Objectives of Configuration Testing:
Show that all the configuration changing commands and menus work properly.
Show that all interchangeable devices are really interchangeable, and that they each enter
the proper states for the specified conditions.
Show that the system’s performance level is maintained when devices are interchanged, or
when they fail.
Operations performed during configuration test:
Rotate and permute the positions of devices to ensure physical/logical device
permutations work for each device (e.g., if there are two printers A and B, exchange their
positions);
Induce malfunctions in each device, to see if the system properly handles the malfunction;
Induce multiple device malfunctions to see how the system reacts.
Security Testing:
Security testing evaluates system characteristics that relate to the availability, integrity, and
confidentiality of system data and services. Users/clients should make sure their security needs
are clearly known at requirements time, so that security issues can be addressed by designers and
testers.
Computer software and data can be compromised by:
(i) Criminals intent on doing damage, stealing data and information, causing denial of service,
invading privacy;
(ii) Errors on the part of honest developers/maintainers who modify, destroy, or compromise
data because of misinformation, misunderstandings, and/or lack of knowledge.
Sources of Damages:
(i) Viruses;
(ii) Trojan horses;
(iii) Trap doors;
(iv) Illicit channels
Effects of security breaches could be extensive and can cause:
• loss of information;
• corruption of information;
• misinformation;
• privacy violations;
• denial of service.
Developers try to ensure the security of their systems through use of protection mechanisms such
as passwords, encryption, virus checkers, and the detection and elimination of trap doors.
Queries related to passwords:
• What is the minimum and maximum allowed length for the password?
• Can it be pure alphabetical or must it be a mixture of alphabetical and other characters?
• Can it be a dictionary word?
• Is the password permanent, or does it expire periodically?
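Answers to these queries translate directly into checks; this sketch assumes one hypothetical policy (length 8 to 64, letters mixed with digits or punctuation):

```python
import string

# A hypothetical password policy reflecting the queries above.
MIN_LEN, MAX_LEN = 8, 64

def password_ok(password):
    """Return True only if the password satisfies the assumed policy."""
    if not MIN_LEN <= len(password) <= MAX_LEN:
        return False
    has_alpha = any(ch.isalpha() for ch in password)
    # Must not be purely alphabetical: require a digit or punctuation.
    has_other = any(ch in string.digits + string.punctuation for ch in password)
    return has_alpha and has_other
```

A dictionary-word check or expiration policy would need extra data (a word list, a timestamp) and is omitted here.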
Areas to focus on during security testing
• Password Checking
• Legal and Illegal Entry with Passwords
• Password Expiration
• Encryption
• Browsing
• Trap Doors
• Viruses
The best approach to ensuring security, if resources permit, is to hire a so-called “tiger team,”
an outside group of penetration experts who attempt to breach the system security.
Although a testing group in the organization can be involved in testing for security breaches, the
tiger team can attack the problem from a different point of view. Before the tiger team starts its
work the system should be thoroughly tested at all levels.
Recovery Testing:
Recovery testing subjects a system to losses of resources in order to determine if it can
recover properly from these losses. This type of testing is especially important for transaction
systems, for example, on-line banking software.
A test scenario might be to emulate loss of a device during a transaction. Tests would
determine if the system could return to a well-known state, and that no transactions have been
compromised. Systems with automated recovery are designed for this purpose. They usually have
multiple CPUs and/or multiple instances of devices, and mechanisms to detect the failure of a
device. They also have a so-called “checkpoint” system that meticulously records transactions
and system states periodically so that these are preserved in case of failure. This information
allows the system to return to a known state after the failure. The recovery testers must ensure
that the device monitoring system and the checkpoint software are working properly. The areas
to focus on during recovery testing are restart and switchover.
In these testing situations all transactions and processes must be carefully examined to detect:
• loss of transactions;
• merging of transactions;
• incorrect transactions;
• an unnecessary duplication of a transaction.
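A recovery audit along these lines can compare the checkpoint log taken before the failure with the log after recovery; the transaction IDs here are illustrative:

```python
from collections import Counter

def audit_recovery(checkpoint_log, recovered_log):
    """Compare the pre-failure checkpoint log with the post-recovery log
    to detect lost and unnecessarily duplicated transactions."""
    before = Counter(checkpoint_log)
    after = Counter(recovered_log)
    lost = list((before - after).elements())         # in checkpoint, missing after
    duplicated = list((after - before).elements())   # extra copies after recovery
    return {"lost": lost, "duplicated": duplicated}
```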
Regression Testing
Regression testing is a black box testing technique that consists of re-executing those tests
that are impacted by code changes. These tests should be executed as often as possible
throughout the software development life cycle.
Types of Regression Tests:
• Final Regression Tests: A “final regression test” is performed to validate a build that
hasn’t changed for a period of time. This build is then deployed or shipped to customers.
• Regression Tests: A normal regression test is performed to verify that the build has NOT
broken any other parts of the application through recent code changes for defect fixing or
enhancement.
Selecting Regression Tests:
• Requires knowledge about the system and how changes affect existing functionality.
• Tests are selected based on the areas of frequent defects.
• Tests are selected to include the areas which have undergone code changes many times.
• Tests are selected based on the criticality of the features.
Regression Testing Steps:
Regression tests are ideal candidates for automation, which results in better Return On
Investment (ROI).
• Select the Tests for Regression.
• Choose the apt tool and automate the Regression Tests
• Verify applications with Checkpoints
• Manage Regression Tests/update when required
• Schedule the tests
• Integrate with the builds
• Analyze the results
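The first step, selecting the tests for regression, can be sketched as a simple mapping from tests to the features they cover; the test names and feature tags are invented:

```python
# Hypothetical regression suite: each test is tagged with the features
# it covers, so tests can be selected by the area a change touched.
REGRESSION_SUITE = {
    "test_login": {"auth"},
    "test_checkout": {"cart", "payment"},
    "test_search": {"search"},
}

def select_regression_tests(suite, changed_features):
    """Pick every test whose covered features intersect the change."""
    return sorted(name for name, features in suite.items()
                  if features & changed_features)
```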
Internationalization Testing
Internationalization testing is the process of verifying that the application under test works
uniformly across multiple regions and cultures.
Internationalization Checklists:
1. Testing to check if the product works across settings.
2. Verifying the installation using various settings.
3. Verify if the product works across language settings and currency settings.
Internationalization typically entails:
1. Designing and developing the application such that it simplifies the deployment of localization
and internationalization of the application. This includes taking care of proper rendering of
characters in various languages, string concatenation etc. which can be done by using Unicode
during development
2. Taking care of the big picture while developing the application: in order to support
bidirectional text or to identify languages, we need to add markup in our DTD.
3. Code should be able to support local and regional languages as well as other cultural
preferences. This involves using predefined localization data and features from existing
libraries: date/time formats, local calendar holidays, numeric formats, data presentation,
sorting, data alignment, name and address display formats, etc.
4. Making localizable elements separate from the source code so that the code is
locale-independent. Localized content can then be loaded based on the user’s preferences.
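Point 4 can be sketched by keeping message templates outside the code and loading them by language preference; the languages and strings here are illustrative:

```python
# Localizable elements kept separate from the code, keyed by language.
MESSAGES = {
    "en": {"greeting": "Hello, {name}!"},
    "fr": {"greeting": "Bonjour, {name} !"},
}

def render(lang, key, **params):
    """Look up a localized template and fill it in; fall back to English."""
    table = MESSAGES.get(lang, MESSAGES["en"])
    return table[key].format(**params)
```

In a real product the tables would live in external resource files, so translators can change them without touching the code.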
Globalization Process
Adhoc Testing
When software testing is performed without proper planning and documentation, it is called
Adhoc Testing. Such tests are executed only once, unless defects are uncovered. Adhoc tests
are done after formal testing is performed on the application. Adhoc methods are the least
formal type of testing, as they do not follow a structured approach.
The success of Adhoc testing depends upon the capability of the tester who carries out the
test. The tester has to find defects without any proper planning and documentation, relying
solely on intuition.
Adhoc testing can be performed when there is limited time to do exhaustive testing and
usually performed after the formal test execution. Adhoc testing will be effective only if the tester
has in-depth understanding about the System under Test.
Forms of Adhoc Testing:
Buddy Testing: Two buddies, one from the development team and one from the test team,
mutually work on identifying defects in the same module. Buddy testing helps the testers
develop better test cases, while the development team can also make design changes early.
This kind of testing usually happens after unit testing is complete.
Pair Testing:
Two testers are assigned the same modules and they share ideas and work on the same systems
to find defects. One tester executes the tests while another tester records the notes on their
findings.
Monkey Testing:
In monkey testing, testing is performed randomly without any test cases in order to break the
system.
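Monkey testing can be sketched as feeding random, unplanned inputs to a unit and treating any uncaught exception as a defect; the parser and the input alphabet are invented:

```python
import random

# Hypothetical unit under monkey test: a lenient quantity parser.
def parse_quantity(text):
    try:
        return int(text)
    except ValueError:
        return None                  # malformed input is tolerated

def monkey_test(target, rounds=1000, seed=42):
    """Throw random strings at the target; any uncaught exception
    counts as a defect and fails the monkey test."""
    rng = random.Random(seed)        # seeded, so failures are reproducible
    alphabet = "0123456789-abc \t"
    for _ in range(rounds):
        blob = "".join(rng.choice(alphabet)
                       for _ in range(rng.randint(0, 10)))
        try:
            target(blob)
        except Exception:
            return False             # the random input broke the unit
    return True
```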
Adhoc Testing can be made more effective by
Preparation
Acceptance Testing
Acceptance tests are a very important milestone for the developers. At this time the clients
will determine if the software meets their requirements. Contractual obligations can be satisfied
if the client is satisfied with the software. Development organizations will often receive their final
payment when acceptance tests have been passed.
Acceptance tests must be rehearsed by the developers/testers. There should be no signs of
unprofessional behaviour or lack of preparation. Clients do not appreciate surprises. Clients
should be received in the development organization as respected guests. They should be provided
with documents and other material to help them participate in the acceptance testing process, and
to evaluate the results. After acceptance testing, the client will point out to the developers which
requirements have not been satisfied. Some requirements may be deleted, modified, or added due to changing
needs.
If the client is satisfied that the software is usable and reliable, and they give their
approval, then the next step is to install the system at the client’s site. If the client’s site conditions
are different from that of the developers, the developers must set up the system so that it can
interface with client software and hardware. Retesting may have to be done to ensure that the
software works as required in the client’s environment. This is called installation test.
If the software has been developed for the mass market (shrink-wrapped software), then
testing it for individual clients/users is not practical or even possible in most cases. Very often
this type of software undergoes two stages of acceptance test.
Stages of Acceptance Test:
The two stages of acceptance testing are,
1. Alpha test
2. Beta test
Alpha test:
This test takes place at the developer’s site. A cross-section of potential users and members
of the developer’s organization are invited to use the software. Developers observe the users and
note problems.
Beta test:
The software is sent to a cross-section of users who install it and use it under real world
working conditions. The users send records of problems with the software to the development
organization, where the defects are repaired, sometimes in time for the current release. In many
cases the repairs are delayed until the next release.
Testing OO System
Testing is a continuous activity during software development. In object-oriented systems,
testing encompasses three levels, namely, unit testing, subsystem testing, and system testing
System Testing:
System testing involves testing the system as a whole and is the responsibility of the quality-
assurance team. The team often uses system tests as regression tests when assembling new
releases.
Object-Oriented Testing Techniques:
Grey Box Testing:
The different types of test cases that can be designed for testing object-oriented programs are
called grey box test cases. Some of the important types of grey box testing are:
State model based testing: This encompasses state coverage, state transition coverage, and
state transition path coverage.
Use case based testing: Each scenario in each use case is tested.
Class diagram based testing: Each class, derived class, associations, and aggregations are
tested.
Sequence diagram based testing: The methods in the messages in the sequence diagrams
are tested.
Techniques for Subsystem Testing:
The two main approaches of subsystem testing are:
Thread based testing: All classes that are needed to realize a single use case in a subsystem
are integrated and tested.
Use based testing: The interfaces and services of the modules at each level of the hierarchy
are tested. Testing starts from the individual classes, moves to the small modules comprising
classes, gradually to larger modules, and finally to all the major subsystems.
Usability Testing
Can users back up or back out? Are users’ inputs acknowledged? Do the menus or windows go
too deep?
Is there excessive functionality? Does the software attempt to do too much, either as a
whole or in part? Do too many features complicate users’ work? Do users feel like they’re
getting information overload?
If all else fails, does the help system really help the user?
Consistent
Consistency within the software and with other software is a key attribute. Users develop
habits and expect that if they do something a certain way in one program, another will do the
same operation the same way.
Examples:
Shortcut keys and menu selections
Terminology and naming
Audience
Placement and keyboard equivalents for buttons
Flexible
Users like choices—not too many, but enough to allow them to select what they want to do
and how they want to do it. The Windows Calculator has two views: Standard and Scientific.
Users can decide which one they need for their task or the one they’re most comfortable using.
Comfortable
Software should be comfortable to use. It shouldn’t get in the way or make it difficult for a
user to do his work. Software comfort is a pretty touchy-feely concept.
Features to consider when identifying good and bad software comfort:
Appropriateness
Error handling
Performance
Correct
When testing for correctness, testers are testing whether the UI does what it’s supposed to do.
To ensure correctness, make sure to pay attention to:
Marketing differences
Language and spelling
Bad media
WYSIWYG
Useful
The final trait of a good user interface is whether it’s useful. The tester is not concerned with
whether the software itself is useful, just whether the particular feature is.
When reviewing the product specification, preparing to test, or actually performing testing,
testers should:
Ask if the features they see actually contribute to the software’s value.
Ask whether the features help users do what the software is intended to do.
If testers don’t think a feature is necessary, do some research to find out why it’s in the
software.
Accessibility Testing:
Developing software with a user interface that can be used by the disabled isn't just a good
idea, a guideline, or a standard: it's the law.
Configuration Testing
Configuration testing is the process of checking the operation of the software under test with
the various types of hardware it may run on.
The different configuration possibilities for a standard Windows-based PC
• The PC - Compaq, Dell, Gateway, Hewlett Packard, IBM
• Components - system boards, component cards, and other internal devices such as disk
drives, CD-ROM drives, video, sound, modem, and network cards
• Peripherals - printers, scanners, mice, keyboards, monitors, cameras, joysticks
• Interfaces - ISA, PCI, USB, PS/2, RS-232, and FireWire
• Options and memory - hardware options and memory sizes
• Device Drivers
All components and peripherals communicate with the operating system and the software
applications through low-level software called device drivers. These drivers are often provided
by the hardware device manufacturer and are installed when you set up the hardware. Although
technically they are software, for testing purposes they are considered part of the hardware
configuration.
To start configuration testing on a piece of software, the tester needs to consider which of
these configuration areas would be most closely tied to the program.
Examples:
• A highly graphical computer game will require lots of attention to the video and sound
areas.
• A greeting card program will be especially vulnerable to printer issues.
• A fax or communications program will need to be tested with numerous modems and
network configurations.
Finding Configuration Bugs:
The sure way to tell if a bug is a configuration problem and not just an ordinary bug is to perform
the exact same operation that caused the problem, step by step, on another computer with a
completely different configuration.
If the bug doesn’t occur, it’s very likely a configuration problem.
If the bug happens on more than one configuration, it’s probably just a regular bug.
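This rule of thumb can be sketched as a small helper that takes per-configuration pass/fail results for the same test steps (the function name and configuration labels here are hypothetical):

```python
# Hypothetical helper: given pass/fail results of the same test steps run on
# several configurations, suggest whether a failure looks like a
# configuration problem or an ordinary bug.

def classify_failure(results):
    """results maps a configuration name to True (passed) or False (failed).

    Returns "configuration problem" when the failure is confined to a
    single configuration, "regular bug" when it reproduces on several,
    and "no failure" when every configuration passed.
    """
    failing = [cfg for cfg, passed in results.items() if not passed]
    if not failing:
        return "no failure"
    if len(failing) == 1:
        return "configuration problem"
    return "regular bug"

# A failure seen on only one configuration points at that configuration.
print(classify_failure({"Dell/PCI": False, "IBM/USB": True, "HP/USB": True}))
```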
The general process that the tester should use when planning configuration testing is:
• Decide the types of hardware needed
• Decide what hardware brands, models, and device drivers are available
• Decide which hardware features, modes, and options are possible
• Pare down the identified hardware configurations to a manageable set
• Identify the software’s unique features that work with the hardware configurations
• Design the test cases to run on each configuration
• Execute the tests on each configuration
• Rerun the tests until the results satisfy the test team
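The "pare down" step above can be sketched with the standard library: build the full cross-product of hardware choices, then reduce it to a manageable subset. The hardware lists and the paring rule below are illustrative assumptions, not a recommended set:

```python
# A minimal sketch of building a configuration matrix and paring it down.
# The hardware lists here are invented for illustration.
from itertools import product

pcs      = ["Compaq", "Dell", "Gateway"]
printers = ["LaserJet", "DeskJet"]
video    = ["800x600", "1024x768"]

full_matrix = list(product(pcs, printers, video))   # 3 * 2 * 2 = 12 configurations
print(len(full_matrix))

# One simple paring rule: keep every configuration of the most popular PC,
# plus one sample configuration for each of the other PCs.
pared = [c for c in full_matrix if c[0] == "Dell"]
pared += [next(c for c in full_matrix if c[0] == pc) for pc in ("Compaq", "Gateway")]
print(len(pared))   # 4 + 2 = 6 configurations to actually run
```

The same pattern scales to more axes (drivers, options, memory sizes); only the paring rule needs to reflect what the test team judges most important.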
Compatibility Testing
Software compatibility testing means checking that your software interacts with and shares
information correctly with other software. This interaction could occur between two programs
simultaneously running on the same computer, or even on different computers connected through
the Internet, thousands of miles apart.
Example of compatible software:
Cutting text from a Web page and pasting it into a document opened in your word processor
In performing software compatibility testing on a new piece of software, the tester needs to
concentrate on:
Platform and Application Versions (backward and forward compatibility, the impact of
testing multiple versions)
Standards and Guidelines (high-level standards and guidelines, low-level standards and
guidelines)
Data Sharing Compatibility (file save and file load, file export and file import)
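Data-sharing compatibility lends itself to a round-trip test: save (or export) data in the shared format, load (or import) it back, and compare. In this sketch CSV stands in for whatever format two programs share, and an in-memory buffer stands in for the file:

```python
# Round-trip sketch for data-sharing compatibility: write data out in a
# shared format, read it back, and verify nothing was lost or altered.
# CSV is used here only as a stand-in for any shared format.
import csv
import io

original = [["name", "qty"], ["widget", "3"], ["gadget", "7"]]

buffer = io.StringIO()
csv.writer(buffer).writerows(original)    # "file save" / "export"
buffer.seek(0)
reloaded = list(csv.reader(buffer))       # "file load" / "import"

assert reloaded == original, "round-trip lost or altered data"
print("round-trip OK")
```

In a real compatibility test the save and load sides would be two different programs; the comparison step stays the same.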
Platform and Application Versions
Selecting the target platforms or the compatible applications is really a program
management or a marketing task. A person who's very familiar with the customer base will
decide whether the software is to be designed for a specific operating system, Web browser, or
some other platform. They'll also identify the version or versions that the software needs to be
compatible with.
Backward compatible:
If something is backward compatible, it will work with previous versions of the software.
Forward compatible:
If something is forward compatible, it will work with future versions of the software.
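Backward compatibility often shows up concretely in file formats: a new version of a reader must still accept files written by older versions. A sketch, using an invented versioned record format:

```python
# Backward compatibility sketch: version 2 of a hypothetical record format
# adds a "color" field, but the reader still accepts version 1 records.
def read_record(line):
    parts = line.strip().split("|")
    version = int(parts[0])
    if version == 1:                    # old format: version|name
        return {"name": parts[1], "color": "black"}   # default for the new field
    if version == 2:                    # new format: version|name|color
        return {"name": parts[1], "color": parts[2]}
    raise ValueError("unsupported version %d" % version)

print(read_record("1|widget"))          # old files still load
print(read_record("2|widget|red"))      # new files use the added field
```

A backward-compatibility test suite would keep sample files from every supported old version and assert that the current reader still loads them correctly.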
Software Interaction
To begin the task of compatibility testing, the tester needs to equivalence partition all the
possible software combinations into the smallest effective set that verifies that the software
interacts properly with other software.
Factors to be considered in partitioning
• Popularity
• Age
• Type
• Manufacturer
When compatibility testing a new application, the tester may need to test it on multiple
platforms and with multiple applications.
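The partitioning factors listed above can drive a simple selection rule: group the candidate applications by type and keep the most popular representative of each group. The application catalogue below is invented for illustration:

```python
# Equivalence-partitioning sketch: group candidate applications by type and
# keep one representative per partition, chosen by popularity.
# The catalogue below is invented.
apps = [
    {"name": "WordA",  "type": "word processor", "popularity": 9},
    {"name": "WordB",  "type": "word processor", "popularity": 4},
    {"name": "SheetA", "type": "spreadsheet",    "popularity": 7},
    {"name": "SheetB", "type": "spreadsheet",    "popularity": 8},
]

partitions = {}
for app in apps:
    best = partitions.get(app["type"])
    if best is None or app["popularity"] > best["popularity"]:
        partitions[app["type"]] = app

chosen = sorted(a["name"] for a in partitions.values())
print(chosen)   # one representative per application type
```

The other factors (age, manufacturer) would refine the rule, for example by also keeping one older release per type to cover version differences.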
Website Testing
Forms
Test forms just as you would if they were fields in a regular software program. Are the fields
the correct size? Do they accept the correct data and reject the wrong data? Is there proper
confirmation when you finally press Enter? Are optional fields truly optional and the required
ones truly required?
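The form questions above can be phrased as checks against a field specification. The specification shape, field name, and rejection messages below are hypothetical:

```python
# Form-field checks as code: size limit, required flag, and data validation,
# mirroring the questions asked when testing a Web form.
def check_field(spec, value):
    if spec["required"] and value == "":
        return "rejected: required"
    if len(value) > spec["max_len"]:
        return "rejected: too long"
    if value and not spec["validator"](value):
        return "rejected: bad data"
    return "accepted"

# A hypothetical "quantity" field: required, at most 3 characters, digits only.
qty_field = {"required": True, "max_len": 3, "validator": str.isdigit}

print(check_field(qty_field, "42"))     # accepted
print(check_field(qty_field, ""))       # rejected: required
print(check_field(qty_field, "12345"))  # rejected: too long
print(check_field(qty_field, "abc"))    # rejected: bad data
```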
Objects and Other Functionality
Take care to identify all the features present on each page. Treat each unique feature as a
feature in a regular program and test it individually with the standard testing techniques. Does it
have its own states? Does it handle data? Could it have ranges or boundaries? What test cases
apply, and how should they be equivalence classed?
Grey-Box Testing:
Grey-box testing is a mixture of black-box and white-box testing. Test the software as a black
box, but supplement the work by taking a peek (not a full look, as in white-box testing) at what
makes the software work. HTML and Web pages can be tested as a grey box.
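The "peek" part of grey-box Web testing can mean scanning the HTML source for problems a purely black-box pass would miss, such as images with no alt text. A sketch with the standard-library parser; the page markup is invented:

```python
# Grey-box sketch: exercise the page as a user would, but also peek at the
# HTML underneath. Here the peek flags <img> tags missing alt text, which
# matters for accessibility but is invisible in a normal browser pass.
from html.parser import HTMLParser

class AltChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.missing_alt = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "img" and "alt" not in attrs:
            self.missing_alt.append(attrs.get("src", "?"))

page = '<html><body><img src="logo.gif" alt="logo"><img src="ad.gif"></body></html>'
checker = AltChecker()
checker.feed(page)
print(checker.missing_alt)   # images a black-box pass alone would not flag
```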
White-Box Testing:
Features of a website tested with a white-box approach are
Dynamic Content
Database-Driven Web Pages
Programmatically Created Web Pages
Server Performance and Loading
Security
Configuration and Compatibility Testing:
Configuration testing is the process of checking the operation of the software with various
types of hardware and software platforms and their different settings.
Compatibility testing is checking the software's operation with other software.
The possible hardware and software configurations that could affect the operation or
appearance of a Web site are:
• Hardware Platform
• Browser Software and Version
• Browser Plug-Ins
• Browser Options
• Video Resolution and Color Depth
• Text Size
• Modem Speeds
Usability Testing:
Following and testing a few basic rules can help make Web sites more usable.
Jakob Nielsen, a respected expert on Web site design and usability, has performed extensive
research on Web site usability.
The Top Ten Mistakes in Web Design
1. Gratuitous Use of Bleeding-Edge Technology
2. Scrolling Text, Marquees, and Constantly Running Animations
3. Long Scrolling Pages
4. Non-Standard Link Colors
5. Outdated Information
6. Overly Long Download Times
7. Lack of Navigation Support
8. Orphan Pages
9. Complex Web Site Addresses
10. Using Frames
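Mistake 6, overly long download times, is one item on this list that can be checked with simple arithmetic: sum the sizes of a page's resources and estimate the download time at a slow connection speed. The resource sizes and the 10-second threshold below are illustrative assumptions:

```python
# Back-of-the-envelope check for overly long download times: estimate how
# long a page takes over a 56k modem from its resource sizes.
# The sizes and threshold here are invented for illustration.
resources_bytes = {"index.html": 12_000, "banner.gif": 90_000, "style.css": 4_000}

modem_bps = 56_000 // 8              # 56 kbit/s modem, in bytes per second
total = sum(resources_bytes.values())
seconds = total / modem_bps

print(round(seconds, 1))
if seconds > 10:
    print("page likely too slow on a 56k modem")
```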
Correctness