
DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING

NAME OF THE SUBJECT : Software Testing

Subject code : IT6004

Regulation : 2017

UNIT III – LEVELS OF TESTING


IT6004 SOFTWARE TESTING

UNIT III
LEVELS OF TESTING

The need for Levels of Testing – Unit Test – Unit Test Planning – Designing the Unit
Tests – The Test Harness – Running the Unit tests and Recording results – Integration
tests – Designing Integration Tests – Integration Test Planning – Scenario testing – Defect bash
elimination – System Testing – Acceptance testing – Performance testing – Regression Testing –
Internationalization testing – Ad-hoc testing – Alpha, Beta Tests – Testing OO systems –
Usability and Accessibility testing – Configuration testing – Compatibility testing – Testing
the documentation –Website testing

The Need for Levels of Testing


Execution-based software testing, especially for large systems, is usually carried out at
different levels. In most cases there will be three to four levels, or major phases, of testing: unit
test, integration test, system test, and some type of acceptance test.

Levels of Testing
Unit Test: In unit test a single component is tested
Goal: To detect functional and structural defects in the unit
Integration Test: In the integration level several components are tested as a group
Goal: To investigate component interactions
System Test: In the system level the system as a whole is tested
Goal: To evaluate attributes such as usability, reliability, and performance
Acceptance test: In acceptance test the development organization must
show that the software meets all of the client’s requirements.
Goal: To demonstrate that the software satisfies the client’s requirements so that the client
accepts the system.

Levels of Testing and Software Development Paradigms


There are two major approaches to system development: bottom-up and top-down.
These approaches are supported by two major types of programming languages: procedure-
oriented and object-oriented. The different nature of the code produced requires testers to use
different strategies to identify and test components and component groups.
• Systems developed with procedural languages are generally viewed as being composed
of passive data and active procedures. When test cases are developed the focus is on
generating input data to pass to the procedures (or functions) in order to reveal defects.
• Object oriented systems are viewed as being composed of active data along with
allowed operations on that data, all encapsulated within a unit similar to an abstract
data type. The operations on the data may not be called upon in any specific order.
Testing this type of software means designing an order of calls to the operations using various
parameter values in order to reveal defects. Issues related to the inheritance of operations also
impact testing.
Levels of abstraction
Procedural Systems:
In traditional procedural systems, the lowest level of abstraction is described as a function
or a procedure that performs some simple task. The next higher level of abstraction is a group
of procedures (or functions) that call one another and implement a major system requirement.
These are called subsystems. Combining subsystems finally produces the system as a whole,
which is the highest level of abstraction.
Object-oriented systems:
In object-oriented systems the lowest level is viewed by some researchers as the method
or member function. The next highest level is viewed as the class, which encapsulates data
and the methods that operate on that data. To move up one more level in an object-oriented
system, some researchers use the concept of the cluster, which is a group of cooperating or
related classes. Finally, there is the system level, which is a combination of all the clusters and
any auxiliary code needed to run the system. Not all researchers in object-oriented development
share the same view of the abstraction levels; for example, Jorgensen describes the thread
as the highest level of abstraction.

Unit Test
A unit is the smallest possible testable software component. It can be characterized in
several ways. For example, a unit in a typical procedure-oriented software system:
• performs a single cohesive function;
• can be compiled separately;
• is a task in a work breakdown structure (from the manager’s point of view);
• contains code that can fit on a single page or screen.
A unit is traditionally viewed as a function or procedure implemented in a
procedural (imperative) programming language. In object-oriented systems both the method
and the class/object have been suggested by researchers as the choice for a unit. A unit may
also be a small-sized COTS component purchased from an outside vendor that is
undergoing evaluation by the purchaser, or a simple module retrieved from an in-house
reuse library.


Components suitable for Unit Test


Advantages of Unit Test:
• It is easier to design, execute, record, and analyze test results.
• If a defect is revealed by the tests it is easier to locate and repair, since only one unit is
under consideration.

Unit Test: The need for preparation


To prepare for unit test the developer/tester must perform several tasks. They are:
 plan the general approach to unit testing;
 design the test cases, and test procedures (these will be attached to the test plan);
 define relationships between the tests;
 prepare the auxiliary code necessary for unit test.
Unit Test Planning:
A general unit test plan should be prepared. It may be prepared as a component of the
master test plan or as a stand-alone plan. It should be developed in conjunction with the master
test plan and the project plan for each project. Documents that provide inputs for the unit test
plan are the project plan, as well as the requirements, specification, and design documents
that describe the target units.
Development phases for unit test planning:
Phase 1: Describe Unit Test Approach and Risks
Phase 2: Identify Unit Features to be tested
Phase 3: Add Levels of Detail to the Plan
Phase 1: Describe Unit Test Approach and Risks
In this phase of unit testing planning the general approach to unit testing is outlined.
The test planner:
 identifies test risks;
 describes techniques to be used for designing the test cases for the units;
 describes techniques to be used for data validation and recording of test results;
 describes the requirements for test harnesses and other software that interfaces with the
units to be tested, for example, any special objects needed for testing object- oriented units.
During this phase the planner also identifies:
 completeness requirements: what will be covered by the unit test and to what degree
(states, functionality, control, and data flow patterns);
 termination conditions for the unit tests, including coverage requirements and special cases.
Special cases may result in abnormal termination of unit test (e.g., a major design flaw);
strategies for handling these special cases need to be documented.

Finally, the planner estimates resources needed for unit test, such as hardware,
software, and staff, and develops a tentative schedule under the constraints identified at that time.
Phase 2: Identify Unit Features to be tested
This phase requires information from the unit specification and detailed design
description. The planner determines which features of each unit will be tested, for example:
functions, performance requirements, states, and state transitions, control structures,
messages, and data flow patterns.


If some features will not be covered by the tests, they should be listed and the risks of
not testing them assessed. Input/output characteristics associated with each unit should also be
identified, such as variables with allowed ranges of values and required performance levels.
Phase 3: Add Levels of Detail to the Plan
In this phase the planner refines the plan as produced in the previous two phases. The
planner adds new details to the approach, resource, and scheduling portions of the unit test
plan. Unit availability and integration scheduling information should be included in the revised
version of the test plan.
The planner must be sure to include a description of how test results will be
recorded. Test-related documents that will be required for this task, for example, test logs, and test
incident reports, should be described, and references to standards for these documents
provided. Any special tools required for the tests are also described.
Designing the Unit Test:
Part of the preparation work for unit test involves unit test design. It is important to specify
(i) the test cases and (ii) the test procedures.
• Test case data should be tabularized for ease of use and reuse.
• To specifically support object-oriented test design and the organization of test data,
Berard has described a test case specification notation. He arranges the components of
a test case into a semantic network with parts Object_ID, Test_Case_ID, Purpose, and
List_of_Test_Case_Steps. Each of these items has component parts. In the test design
specification Berard also includes lists of relevant states, messages (calls to methods), exceptions,
and interrupts.
As part of the unit test design process, developers/testers should also describe the
relationships between the tests. Test suites can be defined that bind related tests together as a
group. All of this test design information is attached to the unit test plan.
Test case design at the unit level can be based on use of the black and white box test
design strategies. Both of these approaches are useful for designing test cases for functions
and procedures. They are also useful for designing tests for the individual methods (member
functions) contained in a class
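As an illustration, the following sketch applies black box design (equivalence classes and boundary values) to a small hypothetical unit; the letter_grade function and its grading rules are invented for the example, not taken from any particular system.

```python
# Hypothetical unit under test: computes a letter grade from a numeric score.
def letter_grade(score):
    if not 0 <= score <= 100:
        raise ValueError("score out of range")
    if score >= 90:
        return "A"
    if score >= 80:
        return "B"
    if score >= 70:
        return "C"
    return "F"

# Black box test cases derived from equivalence classes and boundary values.
test_cases = [
    (90, "A"),   # lower boundary of the A class
    (89, "B"),   # just below the A boundary
    (80, "B"),   # lower boundary of the B class
    (70, "C"),   # lower boundary of the C class
    (0, "F"),    # lowest legal input
    (100, "A"),  # highest legal input
]

def run_unit_tests():
    results = []
    for score, expected in test_cases:
        actual = letter_grade(score)
        results.append((score, expected, actual, actual == expected))
    return results
```

White box techniques would supplement these cases by checking that every branch of the unit (including the out-of-range exception) is exercised.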

The Class as a Testable Unit:


If an organization is using the object-oriented paradigm to develop software systems it
will need to select the component to be considered for unit test.
The choices are (a) either the individual method as a unit or (b) the class as a whole

Many developers/testers consider the class to be the component of choice for unit testing.
The process of testing classes as units is sometimes called component test. A class encapsulates
multiple interacting methods operating on common data, so what we are testing is the intraclass
interaction of the methods.
When testing on the class level we are able to detect not only traditional types of
defects (control or data flow errors), but also defects due to the nature of object-oriented
systems (encapsulation, inheritance, and polymorphism errors).
Issues related to the testing and retesting of class as a component
Issue 1: Adequately Testing Classes
Testers must decide if they are able to adequately cover all necessary features of each
method in class testing. Coverage objectives and test data need to be developed for each of the


methods. A class can be adequately tested as a whole by observation of method interactions,
using a sequence of calls to the member functions with appropriate parameters.
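A minimal sketch of this idea, using an invented BoundedCounter class: the class is tested as a whole through a sequence of calls to its methods, so the intraclass interactions are exercised rather than each method in isolation.

```python
# Hypothetical class under test: a simple bounded counter.
class BoundedCounter:
    def __init__(self, limit):
        self.limit = limit
        self.value = 0

    def increment(self):
        if self.value < self.limit:
            self.value += 1
        return self.value

    def reset(self):
        self.value = 0

# Test the class as a whole: a sequence of method calls exercises the
# intraclass interactions among increment, reset, and the shared state.
def test_counter_as_unit():
    c = BoundedCounter(limit=2)
    assert c.increment() == 1   # first call changes state
    assert c.increment() == 2   # reaches the limit
    assert c.increment() == 2   # limit enforced across repeated calls
    c.reset()
    assert c.increment() == 1   # reset interacts correctly with increment
    return True
```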
Issue 2: Observation of Object States and State Changes
Methods often modify the state of an object, and the tester must ensure that each
state transition is proper. The test designer can prepare a state table that specifies the states
the object can assume, and then in the table indicate the sequences of messages and parameters
that will cause the object to enter each state.
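The state-table approach can be sketched as follows; the FileLikeResource class and its CLOSED/OPEN states are hypothetical stand-ins for a real object under test.

```python
# Hypothetical object with explicit states: CLOSED <-> OPEN.
class FileLikeResource:
    def __init__(self):
        self.state = "CLOSED"

    def open(self):
        if self.state == "CLOSED":
            self.state = "OPEN"

    def close(self):
        if self.state == "OPEN":
            self.state = "CLOSED"

# State table: (starting state, message) -> expected resulting state.
state_table = [
    ("CLOSED", "open",  "OPEN"),
    ("OPEN",   "close", "CLOSED"),
    ("OPEN",   "open",  "OPEN"),     # opening twice must not corrupt state
    ("CLOSED", "close", "CLOSED"),   # closing a closed resource is a no-op
]

def check_state_transitions():
    for start, message, expected in state_table:
        obj = FileLikeResource()
        obj.state = start         # drive the object into the starting state
        getattr(obj, message)()   # send the message (call the method)
        if obj.state != expected:
            return False
    return True
```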
Issue 3: The Retesting of Classes—I
A tester of object-oriented code might conclude that only the class with implementation
changes to its methods needs to be retested, and that client classes using unchanged interfaces
need not be retested. This is not necessarily correct, as explained by Perry and Kaiser in their
work on adequate testing for object-oriented systems. In an object-oriented system, if a developer changes a class
implementation that class needs to be retested as well as all the classes that depend on it. If a
superclass, for example, is changed, then it is necessary to retest all of its subclasses. In addition,
when a new subclass is added (or modified), we must also retest the methods inherited from each
of its ancestor super classes.
Issue 4: The Retesting of Classes—II
Very often a tester may assume that once a method in a superclass has been tested, it does
not need to be retested in a subclass that inherits it. However, in some cases the method is used in a
different context by the subclass and will need to be retested. In addition, there may be an
overriding of methods where a subclass may replace an inherited method with a locally defined
method. Not only will the new locally defined method have to be retested, but designing a new
set of test cases may be necessary
The Test Harness
The auxiliary code developed to support testing of units and components is called a test
harness. The harness consists of drivers that call the target code and stubs that represent
modules it calls.
Drivers and stubs can be developed at several levels of functionality
Functionality of a driver
 call the target unit;
 do 1, and pass input parameters from a table;
 do 1, 2, and display the parameters;
 do 1, 2, 3, and display the results (output parameters)
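A driver at the highest level of functionality listed above might look like the following sketch; the target unit and its input table are invented for illustration.

```python
# Hypothetical target unit; in practice this is the developer's code.
def target_unit(x, y):
    return x + y

# Input table: each row holds the input parameters and the expected result.
input_table = [
    (1, 2, 3),
    (0, 0, 0),
    (-1, 1, 0),
]

def driver():
    """Minimal driver: calls the target unit, passes inputs from a
    table, and displays both parameters and results (levels 1-4 above)."""
    outcomes = []
    for x, y, expected in input_table:
        result = target_unit(x, y)                      # 1. call the target unit
        print(f"inputs=({x}, {y})")                     # 3. display the parameters
        print(f"result={result} expected={expected}")   # 4. display the results
        outcomes.append(result == expected)
    return outcomes
```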


Role of the test harness
Functionality of a stub
 display a message that it has been called by the target unit;
 do 1, and display any input parameters passed from the target unit;
 do 1, 2, and pass back a result from a table;
 do 1, 2, 3, and display the result from the table
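A corresponding stub sketch, again with invented names: the stub announces that it has been called, displays its inputs, and passes back a canned result from a table so the target unit can be tested before its subordinate module exists.

```python
# The unit under test calls a subordinate module that is not yet
# available; this stub stands in for it (all names are illustrative).
result_table = {"user42": "ACTIVE"}   # canned results the stub passes back

def lookup_account_stub(user_id):
    """Stub for the real account-lookup module (levels 1-4 above)."""
    print(f"stub called by target unit with input: {user_id}")  # levels 1, 2
    result = result_table.get(user_id, "UNKNOWN")  # 3. pass back a table result
    print(f"stub returning: {result}")             # 4. display the result
    return result

# Target unit under test; it depends on the (stubbed) lookup module.
def can_log_in(user_id, lookup=lookup_account_stub):
    return lookup(user_id) == "ACTIVE"
```

During integration the stub argument would be replaced by the real lookup module.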
Drivers and stubs are developed as procedures and functions for traditional
imperative-language based systems. For object-oriented systems, developing drivers and stubs
often means the design and implementation of special classes to perform the required
testing tasks.
The higher the degree of functionality of the harness, the more resources it will require
to design, implement, and test. Developers/testers will have to decide, depending on the nature
of the code under test, just how complex the test harness needs to be.

Running the Unit tests and Recording results:


Unit tests can begin when
 the units become available from the developers,
 the test cases have been designed and reviewed, and
 the test harness, and any other supplemental supporting tools, are available.
The testers then proceed to run the tests and record results. Documents called test
logs can be used to record the results of specific tests. The status of the test efforts for a unit,
and a summary of the test results, could be recorded in a simple format. These forms can be
included in the test summary report, and are of value at the weekly status meetings that are often
used to monitor test progress.
Summary worksheet for Unit test result

It is very important for the tester at any level of testing to carefully record, review,
and check test results. The tester must determine from the results whether the unit has passed or
failed the test. If the test is failed, the nature of the problem should be recorded in what
is sometimes called a test incident report.
Differences from expected behavior should be described in detail. This gives clues to
the developers to help them locate any faults. During testing the tester may determine that
additional tests are required. For example, a tester may observe that a particular coverage goal
has not been achieved. The test set will have to be augmented and the test plan documents should
reflect these changes.
Reasons for the failure of a Unit
• a fault in the unit implementation (code)
• a fault in the test case specification
• a fault in the test procedure execution
• a fault in the test environment


• a fault in the unit design


The causes of the failure should be recorded in a test summary report, which is a summary
of testing activities for all the units covered by the unit test plan.
Ideally, when a unit has been completely tested and finally passes all of the
required tests it is ready for integration. Under some circumstances a unit may be given a
conditional acceptance for integration test. This is a risky procedure, and testers should evaluate
the risks involved. Units with a conditional pass must eventually be repaired.
When testing of the units is complete, a test summary report should be prepared. This is
a valuable document for the groups responsible for integration and system tests. It is also a
valuable component of the project history. Its value lies in the useful data it provides for test
process improvement and defect prevention

Integration Test
The main goals of Integration test (Procedural code)
• to detect defects that occur on the interfaces of units;
• to assemble the individual units into working subsystems and finally a complete
system that is ready for system test
In unit test the testers attempt to detect defects that are related to the functionality and
structure of the unit. There is some simple testing of unit interfaces when the units interact with
drivers and stubs. However, the interfaces are more adequately tested during integration
test when each unit is finally connected to a full and working implementation of those
units it calls, and those that call it. As a consequence of this assembly or integration
process, software subsystems and finally a completed system is put together during the
integration test. The completed system is then ready for system testing.
Integration in procedural oriented system
Integration testing works best as an iterative process in procedure-oriented systems. One
unit at a time is integrated into a set of previously integrated modules which have passed a set of
integration tests. The interfaces and functionality of the new unit in combination with the
previously integrated units are tested. For conventional procedural/functional-oriented systems
there are two major integration strategies: top-down and bottom-up.

Bottom-up integration of the modules begins with testing the lowest level modules, those
at the bottom of the structure chart. These are modules that do not call other modules. Drivers are
needed to test these modules. The next step is to integrate modules on the next upper level of the
structure chart whose subordinate modules have already been tested. After a module has been
tested, its driver can be replaced by an actual module
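A tiny sketch of the bottom-up sequence, with invented modules: the lowest-level units are first exercised by drivers, then the actual superordinate module takes the drivers' place and is tested in combination with the already-tested units.

```python
# Lowest-level modules (they call no other modules); names are illustrative.
def parse_amount(text):
    return float(text)

def apply_tax(amount, rate=0.1):
    return round(amount * (1 + rate), 2)

# Step 1: test the lowest-level modules individually, using drivers.
def driver_for_parse():
    return parse_amount("10.00") == 10.0

def driver_for_tax():
    return apply_tax(10.0) == 11.0

# Step 2: integrate one level up. The superordinate module replaces the
# drivers and calls the already-tested subordinate units directly.
def compute_total(text):
    return apply_tax(parse_amount(text))

def integration_test():
    return compute_total("10.00") == 11.0
```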

Bottom Up Approach


Top-down integration starts at the top of the module hierarchy. The rule of thumb for
selecting candidates for the integration sequence says that when choosing a candidate module to
be integrated next, at least one of the module’s superordinate (calling) modules must have been
previously tested.
M1 is the highest-level module, so the sequence starts by developing stubs to test it. In order
to get a good upward flow of data into the system, the stubs may have to be fairly complex. The
next modules to be integrated are those whose superordinate modules have been
tested. The way to proceed is to replace, one by one, each of the stubs of the superordinate
module with a subordinate module.

Top Down Approach


Integration in object oriented system
The integration process in object-oriented systems is driven by assembly of the
classes into cooperating groups. The cooperating groups of classes are tested as a whole and then
combined into higher-level groups. A good approach to integration of an object-oriented system
is to make use of the concept of object clusters. A cluster consists of classes that are related, for
example, they may work together (cooperate) to support a required functionality for the complete
system

Integration in OO system


Designing Integration Test:


Integration tests for procedural software can be designed using a black or white box
approach. Since many errors occur at module interfaces, test designers need to focus on
exercising all input/output parameter pairs and all calling relationships. The tester needs to
ensure that the parameters are of the correct type and in the correct order.
Data flow–based (def-use paths) and control flow (branch coverage) test data generation
methods are useful to ensure that the input parameters are used properly.
Integration testing of clusters of classes also involves building test harnesses, which in this
case are special classes of objects built especially for testing. Whereas in class testing we
evaluated intraclass method interactions, at the cluster level we test interclass method
interactions as well.

Integration Test Planning:


Integration testing exercises all the units together, and the plan for performing this activity is
called the integration test plan.
Documents relevant to integration test planning
• system architecture,
• requirements document,
• the user manual, and
• usage scenarios
Contents of the integration test plan document
 structure charts,
 state charts,
 data dictionaries,
 cross-reference tables,
 module interface descriptions,
 data flow descriptions,
 messages and event descriptions
For procedure-oriented systems the order of integration of the units should be defined. This
depends on the strategy selected.
For object-oriented systems a working definition of a cluster or similar construct must be
described, and relevant test cases must be specified. In addition, testing resources and schedules
for integration should be included in the test plan.
Murphy et al. give a detailed description of a Cluster Test Plan. It includes the following
items:
(i) clusters on which this cluster is dependent;
(ii) a natural language description of the functionality of the cluster to be tested;
(iii) list of classes in the cluster;
(iv) a set of cluster test cases.

Scenario Testing
Scenario testing is a software testing technique that makes use of scenarios: realistic stories
of how the system is used. Scenarios help in testing a complex system; good scenarios are
credible, motivating, and easy to evaluate.
Methods in Scenario Testing:


 System scenarios
 Use-case and role-based scenarios
Strategies to Create Good Scenarios:
 Enumerate possible users, their actions, and objectives.
 Evaluate users with a hacker's mindset and list possible scenarios of system abuse.
 List the system events and how the system handles such requests.
 List benefits and create end-to-end tasks to check them.
 Read about similar systems and their behaviour.
 Study complaints about competitors' products and their predecessors.
Scenario Testing Risks:
 When the product is unstable, scenario testing becomes complicated.
 Scenario tests are not designed for test coverage.
 Scenario tests are often heavily documented and used time and again

Defect Bash Elimination


 Defect bash or bug bash is ad hoc testing in which people performing different roles in an
organization test the product together at the same time.
 The testing by all the participants during defect bashing is not based on written test cases.
What is to be tested is left to an individual’s decision and creativity.
 There are two types of defects that will emerge during a defect bash. The defects that are
in the product, as reported by the users, can be classified as functional defects. Defects that
are unearthed while monitoring the system resources, such as memory leak, long
turnaround time, missed requests, high impact and utilization of system resources and so
on are called non-functional.
 Defect bash is a unique testing method which can bring out both functional and non-
functional defects.

System Testing
• The goal is to ensure that the system performs according to its requirements.
• System test evaluates both functional behaviour and quality requirements such as reliability,
usability, performance and security.
Types of System Test:
The types of system tests are as follows:
 Functional testing
 Performance testing
 Stress testing
 Configuration testing
 Security testing
 Recovery testing


Types of Testing
Functional Testing:
• Functional tests at the system level are used to ensure that the behaviour of the system
adheres to the requirements specification.
• All functional requirements for the system must be achievable by the system.
Functional tests are black box in nature. The focus is on the inputs and proper outputs for
each function. Improper and illegal inputs must also be handled by the system, and system
behavior under these circumstances must be observed. All functions must be tested.
Functional Test must focus on the following goals
 All types or classes of legal inputs must be accepted by the software
 All classes of illegal inputs must be rejected
 All possible classes of system output must be exercised and examined
 All effective system states and state transitions must be exercised and examined
 All functions must be exercised
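These goals can be illustrated with a small sketch; the register function and its validation rules below are assumptions made for the example, with one representative test case per class of legal and illegal input.

```python
# Hypothetical system function under functional test: register a user
# name (3-12 characters, letters only -- rules assumed for illustration).
def register(name):
    if not (3 <= len(name) <= 12) or not name.isalpha():
        return "REJECTED"
    return "ACCEPTED"

# One representative input per legal and illegal equivalence class.
functional_cases = [
    ("alice", "ACCEPTED"),   # legal class: valid name
    ("abc", "ACCEPTED"),     # legal boundary: shortest valid name
    ("ab", "REJECTED"),      # illegal class: too short
    ("a" * 13, "REJECTED"),  # illegal class: too long
    ("al1ce", "REJECTED"),   # illegal class: non-letter character
]

def run_functional_tests():
    # Both output classes (ACCEPTED, REJECTED) are exercised and examined.
    return all(register(name) == expected for name, expected in functional_cases)
```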

Performance Testing:
The requirements document shows that there are two major types of requirements:
1. Functional requirements: Users describe what functions the software should perform. Testers
test for compliance of these requirements at the system level with the functional-based system
tests.
2. Quality requirements: They are non-functional in nature but describe quality levels expected
for the software. One example of a quality requirement is performance level. The users may have
objectives for the software system in terms of memory use, response time, throughput, and delays.
The goal of system performance tests is to see whether the software meets the performance
requirements. Testers also learn from performance tests whether there are any hardware or
software factors that impact the system’s performance. Performance testing allows testers to
tune the system, i.e., to optimize the allocation of system resources.
Performance objectives must be articulated clearly by the users/clients in the requirements
documents, and be stated clearly in the system test plan.
The objectives must be quantified.
For example, a requirement that the system return a response to a query in “a reasonable
amount of time” is not an acceptable requirement; the time requirement must be specified in a
quantitative way.
Resources for performance testing must be allocated in the system test plan
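For instance, a quantified response-time objective can be checked directly in a test; the 0.5-second bound and the handle_query stand-in below are illustrative, not taken from any real requirements document.

```python
import time

# Quantified objective from the (hypothetical) requirements document --
# not "a reasonable amount of time".
MAX_RESPONSE_SECONDS = 0.5

def handle_query():
    # stand-in for the real query-handling code under test
    return sum(range(10_000))

def performance_test(runs=50):
    """Measure the response time over several runs and compare the
    worst case observed against the quantified objective."""
    worst = 0.0
    for _ in range(runs):
        start = time.perf_counter()
        handle_query()
        elapsed = time.perf_counter() - start
        worst = max(worst, elapsed)
    return worst <= MAX_RESPONSE_SECONDS, worst
```

Real performance tests would run against the deployed system under a representative load, but the pass/fail criterion stays the same: a measured value against a quantified objective.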


Special resources needed for a performance test


Stress Testing:
When a system is tested with a load that causes it to allocate its resources in maximum
amounts, it is called stress testing.
Example:
If an operating system is required to handle 10 interrupts/second and the load causes 20
interrupts/second, the system is being stressed
• The goal of a stress test is to try to break the system, that is, to find the circumstances under
which it will crash. This is sometimes called “breaking the system.”
• Stress testing often uncovers race conditions, deadlocks, depletion of resources in unusual
or unplanned patterns, and upsets in normal operation of the software system.
• Stress testing is supported by many of the resources used for performance test

Configuration Testing:
Configuration testing allows developers/testers to evaluate system performance and
availability when hardware exchanges and reconfigurations occur.
Objectives of Configuration Testing:
 Show that all the configuration changing commands and menus work properly.
 Show that all interchangeable devices are really interchangeable, and that they each enter
the proper states for the specified conditions.
 Show that the system’s performance level is maintained when devices are interchanged, or
when they fail.
Operations performed during configuration test
 Rotate and permute the positions of devices to ensure that physical/logical device
permutations work for each device (e.g., if there are two printers A and B, exchange their
positions);
 Induce malfunctions in each device, to see if the system properly handles the malfunction;
 Induce multiple device malfunctions to see how the system reacts.

Security Testing:


Security testing evaluates system characteristics that relate to the availability, integrity, and
confidentiality of system data and services. Users/clients should make sure their security needs
are clearly known at requirements time, so that security issues can be addressed by designers and
testers.
Computer software and data can be compromised by:
(i) Criminals intent on doing damage, stealing data and information, causing denial of service,
invading privacy;
(ii) Errors on the part of honest developers/maintainers who modify, destroy, or compromise
data because of misinformation, misunderstandings, and/or lack of knowledge.
Sources of Damages:
(i) Viruses;
(ii) Trojan horses;
(iii) Trap doors;
(iv) Illicit channels
Effects of security breaches could be extensive and can cause:
• loss of information;
• corruption of information;
• misinformation;
• privacy violations;
• denial of service.
Developers try to ensure the security of their systems through use of protection mechanisms such
as passwords, encryption, virus checkers, and the detection and elimination of trap doors.
Queries related to passwords
• What is the minimum and maximum allowed length for the password?
• Can it be pure alphabetical or must it be a mixture of alphabetical and other characters?
• Can it be a dictionary word?
• Is the password permanent, or does it expire periodically?
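A security test can encode answers to these queries as executable checks; the policy below (length 8 to 16 characters, a mixture of letters and other characters, no dictionary words) is an assumed example policy, and the tiny dictionary is illustrative.

```python
import string

# Assumed password policy for illustration: 8-16 characters, must mix
# letters with non-letters, and must not be a dictionary word.
DICTIONARY = {"password", "letmein", "welcome"}

def password_ok(pw):
    if not (8 <= len(pw) <= 16):
        return False                       # outside allowed length range
    has_alpha = any(c in string.ascii_letters for c in pw)
    has_other = any(c not in string.ascii_letters for c in pw)
    if not (has_alpha and has_other):
        return False                       # purely alphabetical (or no letters)
    return pw.lower() not in DICTIONARY    # reject dictionary words

# Security test cases answering the queries above.
password_cases = [
    ("short1!", False),     # below the minimum length
    ("abcdefgh", False),    # purely alphabetical
    ("password", False),    # dictionary word (and purely alphabetical)
    ("s3cret-key", True),   # legal: mixed characters, not a dictionary word
]
```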
Areas to focus on during security testing
• Password Checking
• Legal and Illegal Entry with Passwords
• Password Expiration
• Encryption
• Browsing
• Trap Doors
• Viruses
The best approach to ensure security, if resources permit, is to hire a so-called “tiger team,”
an outside group of penetration experts who attempt to breach the system security.
Although a testing group in the organization can be involved in testing for security breaches, the
tiger team can attack the problem from a different point of view. Before the tiger team starts its
work the system should be thoroughly tested at all levels.
Recovery Testing:
Recovery testing subjects a system to losses of resources in order to determine if it can
recover properly from these losses. This type of testing is especially important for transaction
systems, for example, on-line banking software.
A test scenario might be to emulate loss of a device during a transaction. Tests would
determine if the system could return to a well-known state, and that no transactions have been


compromised. Systems with automated recovery are designed for this purpose. They usually have
multiple CPUs and/or multiple instances of devices, and mechanisms to detect the failure of a
device. They also have a so-called “checkpoint” system that meticulously records transactions
and system states periodically so that these are preserved in case of failure. This information
allows the system to return to a known state after the failure. The recovery testers must ensure
that the device-monitoring system and the checkpoint software are working properly. The areas
to focus on during recovery testing are restart and switchover.
In these testing situations all transactions and processes must be carefully examined to detect:
• loss of transactions;
• merging of transactions;
• incorrect transactions;
• an unnecessary duplication of a transaction.
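A sketch of such an examination: after an induced failure and recovery, the recovered transaction log is audited against the transactions that were submitted, flagging losses and duplications. All names and data here are illustrative.

```python
from collections import Counter

def audit_transactions(submitted_ids, recovered_log):
    """Compare the transactions submitted before the induced failure
    with the log after recovery, flagging losses and duplicates."""
    counts = Counter(recovered_log)
    lost = [t for t in submitted_ids if counts[t] == 0]
    duplicated = [t for t, n in counts.items() if n > 1]
    return {"lost": lost, "duplicated": duplicated}

# Example: transaction "t2" was lost and "t3" was applied twice on restart.
report = audit_transactions(
    submitted_ids=["t1", "t2", "t3"],
    recovered_log=["t1", "t3", "t3"],
)
```

A passing recovery test would produce an empty report for both categories.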

Regression Testing
Regression testing is a black box testing technique that consists of re-executing those tests
that are impacted by code changes. These tests should be executed as often as possible
throughout the software development life cycle.
Types of Regression Tests:
• Final Regression Tests: - A "final regression testing" is performed to validate the build that
hasn't changed for a period of time. This build is deployed or shipped to customers.
• Regression Tests: - A normal regression testing is performed to verify that recent code changes
for defect fixing or for enhancement have NOT broken any other parts of the application.
Selecting Regression Tests:
• Requires knowledge about the system and how changes affect its existing functionality.
• Tests are selected based on the area of frequent defects.
• Tests are selected to include the areas that have undergone code changes many times.
• Tests are selected based on the criticality of the features.
Regression Testing Steps:
Regression tests are ideal candidates for automation, which results in a better Return On Investment
(ROI).
• Select the Tests for Regression.
• Choose the apt tool and automate the Regression Tests
• Verify applications with Checkpoints
• Manage Regression Tests/update when required
• Schedule the tests
• Integrate with the builds
• Analyze the results
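The first step, selecting the tests impacted by a change, can be sketched as a mapping from changed modules to the tests that exercise them. The coverage table and all names below are invented for illustration.

```python
# Hypothetical mapping of test names to the modules they exercise.
TEST_COVERAGE = {
    "test_login":    {"auth", "session"},
    "test_checkout": {"cart", "payment"},
    "test_search":   {"catalog"},
    "test_profile":  {"auth", "profile"},
}

def select_regression_tests(changed_modules):
    """Pick the tests impacted by the changed modules."""
    changed = set(changed_modules)
    return sorted(name for name, modules in TEST_COVERAGE.items()
                  if modules & changed)

# A fix in the 'auth' module re-runs every test that touches it.
assert select_regression_tests(["auth"]) == ["test_login", "test_profile"]
assert select_regression_tests(["catalog"]) == ["test_search"]
```

Real selection also weighs defect history and feature criticality, as the checklist above notes; the mapping is only the mechanical core.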
Internationalization testing and Localization testing
Internationalization is a process of designing a software application so that it can be adapted
to various languages and regions without any changes.
Localization is a process of adapting internationalized software for a specific region or
language by adding local specific components and translating text.
The main purpose of internationalization testing is to check whether the code can handle all
international support without breaking functionality, which might cause data loss or data-integrity issues.
Internationalization testing is the process of verifying the application under test to work uniformly
across multiple regions and cultures.
Internationalization Checklists:
1. Testing to check if the product works across different regional settings.
2. Verifying the installation using various settings.
3. Verify if the product works across language settings and currency settings.
Internationalization typically entails:
1. Designing and developing the application such that it simplifies the deployment of localization
and internationalization of the application. This includes taking care of proper rendering of
characters in various languages, string concatenation etc. which can be done by using Unicode
during development
2. Taking care of the big picture while developing the application; to support bidirectional
text or to identify languages, we need to add markup in our DTD.
3. Code should be able to support local and regional language and also other cultural preferences.
This involves using predefined localization data and features from existing libraries. Date time
formats, local calendar holidays, numeric formats, data presentation, sorting, data alignment,
name and address displaying format etc.
4. Making localizable elements separate from the source code so that code is independent. And
then as per user’s requirement, localized content can be loaded based on their preferences.
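Point 4 above (keeping localizable elements separate from the source code) can be sketched with a simple message catalog. This is not a full gettext setup; the catalogs, formats, and function names are assumptions made for the example.

```python
# Localizable elements live in catalogs, not in the code.
CATALOGS = {
    "en": {"greeting": "Hello, {name}!", "date_fmt": "{m:02d}/{d:02d}/{y}"},
    "de": {"greeting": "Hallo, {name}!", "date_fmt": "{d:02d}.{m:02d}.{y}"},
}

def render_greeting(lang, name):
    return CATALOGS[lang]["greeting"].format(name=name)

def render_date(lang, y, m, d):
    return CATALOGS[lang]["date_fmt"].format(y=y, m=m, d=d)

# Internationalization checks: the same code path serves both locales,
# and Unicode text (e.g. accented names) survives intact.
assert render_greeting("en", "José") == "Hello, José!"
assert render_greeting("de", "José") == "Hallo, José!"
assert render_date("en", 2024, 7, 4) == "07/04/2024"
assert render_date("de", 2024, 7, 4) == "04.07.2024"
```

Adding a new locale then means loading a new catalog, with no change to the rendering code, which is exactly the property internationalization testing verifies.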
Testing across multiple regions
Internationalization basically consists of designing and developing an application to make
it ready for localization. Language-, culture-, and region-related translation need not actually
take place at this stage; the aim is to make the application ready for such migration later,
if localization is to happen.
[Figure: Globalization process]
Adhoc Testing
When software testing is performed without proper planning and documentation, it is said to
be Adhoc Testing. Such tests are executed only once unless the testers uncover defects.
Adhoc tests are done after formal testing is performed on the application. Adhoc methods are
the least formal type of testing, as it is NOT a structured approach.
The success of Adhoc testing depends upon the capability of the tester, who carries out the
test. The tester has to find defects without any proper planning and documentation, solely based
on intuition.
Adhoc testing can be performed when there is limited time to do exhaustive testing and is
usually performed after the formal test execution. Adhoc testing will be effective only if the tester
has in-depth understanding about the System under Test.
Forms of Adhoc Testing:
Buddy Testing: Two buddies, one from the development team and one from the test team, mutually work
on identifying defects in the same module. Buddy testing helps the testers develop better test
cases while development team can also make design changes early. This kind of testing happens
usually after completing the unit testing.
Pair Testing:
Two testers are assigned the same modules and they share ideas and work on the same systems
to find defects. One tester executes the tests while another tester records the notes on their
findings.
Monkey Testing:
In monkey testing, testing is performed randomly without any test cases in order to break the
system.
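A monkey test can be as simple as feeding random input to a routine and counting unexpected failures. The `parse_quantity` function below is a toy stand-in for the system under test; the idea is that rejecting junk cleanly is fine, but any undocumented failure mode counts as a defect.

```python
import random
import string

def parse_quantity(text):
    """Toy function under test: parse a positive integer quantity."""
    value = int(text)
    if value <= 0:
        raise ValueError("quantity must be positive")
    return value

# Monkey test: hammer the function with random input; any exception other
# than the documented ValueError counts as a crash (a defect).
random.seed(42)                      # reproducible run
crashes = 0
for _ in range(1000):
    junk = "".join(random.choice(string.printable) for _ in range(8))
    try:
        parse_quantity(junk)
    except ValueError:
        pass                         # rejecting junk cleanly is fine
    except Exception:
        crashes += 1                 # unexpected failure mode
assert crashes == 0
```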
Adhoc Testing can be made more effective by
• Preparation
• Creating a Rough Idea
• Divide and Rule
• Targeting Critical Functionalities
• Using Tools
• Documenting the findings
Alpha, Beta and Acceptance Test
The software is being developed to satisfy the user's requirements, and no matter how
elegant its design, it will not be accepted by the users unless it helps them to achieve their goals
as specified in the requirements. Alpha, beta, and acceptance tests allow users to evaluate the
software in terms of their expectations and goals.
When software is being developed for a specific client, acceptance tests are carried out after
system testing. The acceptance tests must be planned carefully with input from the client/users.
The software must run under real-world conditions on operational hardware and software. The
software-under-test should be stressed.
For continuous systems the software should be run at least through a 25-hour test cycle.
Conditions should be typical for a working day.
Typical inputs and illegal inputs should be used and all major functions should be exercised.
If the entire suite of tests cannot be run for any reason, then the full set of tests needs to be rerun
from the start.
Acceptance tests are a very important milestone for the developers. At this time the clients
will determine if the software meets their requirements. Contractual obligations can be satisfied
if the client is satisfied with the software. Development organizations will often receive their final
payment when acceptance tests have been passed.
Acceptance tests must be rehearsed by the developers/testers. There should be no signs of
unprofessional behaviour or lack of preparation. Clients do not appreciate surprises. Clients
should be received in the development organization as respected guests. They should be provided
with documents and other material to help them participate in the acceptance testing process, and
to evaluate the results. After acceptance testing the client will point out to the developers which
are not been satisfied. Some requirements may be deleted, modified, or added due to changing
needs.
If the client is satisfied that the software is usable and reliable, and they give their
approval, then the next step is to install the system at the client’s site. If the client’s site conditions
are different from that of the developers, the developers must set up the system so that it can
interface with client software and hardware. Retesting may have to be done to ensure that the
software works as required in the client’s environment. This is called installation test.
If the software has been developed for the mass market (shrink-wrapped software), then
testing it for individual clients/users is not practical or even possible in most cases. Very often
this type of software undergoes two stages of acceptance test.
Stages of Acceptance Test:
The two stages of acceptance testing are,
1. Alpha test
2. Beta test
Alpha test:
This test takes place at the developer’s site. A cross-section of potential users and members
of the developer’s organization are invited to use the software. Developers observe the users and
note problems.
Beta test:
The software is sent to a cross-section of users who install it and use it under real world
working conditions. The users send records of problems with the software to the development
organization where the defects are repaired sometimes in time for the current release. In many
cases the repairs are delayed until the next release.
Testing OO System
Testing is a continuous activity during software development. In object-oriented systems,
testing encompasses three levels, namely, unit testing, subsystem testing, and system testing.
[Figure: Levels of testing in OO systems]
Unit Testing:
In unit testing, the individual classes are tested. It is seen whether the class attributes are
implemented as per design and whether the methods and the interfaces are error-free. Unit testing
is the responsibility of the application engineer who implements the structure.
Subsystem Testing:
This involves testing a particular module or a subsystem and is the responsibility of the subsystem
lead. It involves testing the associations within the subsystem as well as the interaction of the
subsystem with the outside. Subsystem tests can be used as regression tests for each newly
released version of the subsystem.
System Testing:
System testing involves testing the system as a whole and is the responsibility of the quality-
assurance team. The team often uses system tests as regression tests when assembling new
releases.
Object-Oriented Testing Techniques:
Grey Box Testing:
The different types of test cases that can be designed for testing object-oriented programs are
called grey box test cases. Some of the important types of grey box testing are:
• State model based testing: This encompasses state coverage, state transition coverage, and
state transition path coverage.
• Use case based testing: Each scenario in each use case is tested.
• Class diagram based testing: Each class, derived class, associations, and aggregations are
tested.
• Sequence diagram based testing: The methods in the messages in the sequence diagrams
are tested.
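State model based testing can be illustrated with a small class. The `Document` class and its draft/review/published states are invented for the example; the test achieves state and state-transition coverage by exercising every legal transition once, then confirms an illegal transition is refused.

```python
class Document:
    """Toy class with a small state model: draft -> review -> published."""
    TRANSITIONS = {("draft", "submit"): "review",
                   ("review", "approve"): "published",
                   ("review", "reject"): "draft"}
    def __init__(self):
        self.state = "draft"
    def fire(self, event):
        key = (self.state, event)
        if key not in self.TRANSITIONS:
            raise ValueError(f"illegal event {event!r} in state {self.state!r}")
        self.state = self.TRANSITIONS[key]

# State-transition coverage: exercise every legal transition once...
doc = Document()
doc.fire("submit");  assert doc.state == "review"
doc.fire("reject");  assert doc.state == "draft"
doc.fire("submit"); doc.fire("approve")
assert doc.state == "published"

# ...and confirm an illegal transition is refused.
try:
    doc.fire("submit")               # cannot submit a published document
    assert False, "illegal transition was accepted"
except ValueError:
    pass
```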
Techniques for Subsystem Testing:
The two main approaches of subsystem testing are:
• Thread based testing: All classes that are needed to realize a single use case in a subsystem
are integrated and tested.
• Use based testing: The interfaces and services of the modules at each level of hierarchy are
tested. Testing starts from the individual classes, moves to small modules comprising classes,
then gradually to larger modules, and finally to all the major subsystems.
Usability and Accessibility Testing
Usability Testing:
Usability is how appropriate, functional, and effective the interaction between the user and
the software is.
User Interface Testing
The means that allows the users to interact with a software program is called its user interface, or
UI.
List of seven important traits common to a good UI
1. Follows standards and guidelines
2. Intuitive
3. Consistent
4. Flexible
5. Comfortable
6. Correct
7. Useful
Follows Standards or Guidelines
• In testing software that runs on a specific platform, the tester needs to treat the standards
and guidelines for that platform as an addendum to the product's specification.
• Create test cases based on them.
Intuitive
• Is the user interface clean, unobtrusive, not busy?
• Is the UI organized and laid out well? Does it allow users to easily get from one function
to another? Is what to do next obvious? At any point can the user decide to do nothing, or even
back up or back out? Are users' inputs acknowledged? Do the menus or windows go too
deep?
• Is there excessive functionality? Does the software attempt to do too much, either as a
whole or in part? Do too many features complicate users' work? Do users feel like they're
getting information overload?
• If all else fails, does the help system really help the user?
Consistent
Consistency within the software and with other software is a key attribute. Users develop
habits and expect that if they do something a certain way in one program, another will do the
same operation the same way. Examples:
• Shortcut keys and menu selections
• Terminology and naming
• Audience
• Placement and keyboard equivalents for buttons
Flexible
Users like choices—not too many, but enough to allow them to select what they want to do
and how they want to do it. The Windows Calculator has two views: Standard and Scientific.
Users can decide which one they need for their task or the one they’re most comfortable using.
Comfortable
Software should be comfortable to use. It shouldn’t get in the way or make it difficult for a
user to do their work. Software comfort is a pretty touchy-feely concept.
Features to consider in identifying good and bad software comfort:
• Appropriateness
• Error handling
• Performance
Correct
When testing for correctness, the tester is testing whether the UI does what it's supposed to do.
To ensure correctness make sure to pay attention to
• Marketing differences
• Language and spelling
• Bad media
• WYSIWYG
Useful
The final trait of a good user interface is whether it’s useful. The tester is not concerned with
whether the software itself is useful, just whether the particular feature is.
When testers are reviewing the product specification, preparing to test, or actually performing
testing,
• Ask if the features they see actually contribute to the software's value.
• Do they help users do what the software is intended to do?
• If testers don't think they're necessary, do some research to find out why they're in the
software.
Accessibility Testing:
Developing software with a user interface that can be used by the disabled isn't just a good
idea, a guideline, or a standard; it's the law.
Accessibility Features in Software:
Software can be made accessible in one of two ways:
1. to take advantage of support built into its platform or operating system
2. to have its own accessibility features specified, programmed, and tested
In testing usability for a product, be sure to create test cases specifically for accessibility.
The capabilities provided by Windows for applications to be accessibility enabled are:
• StickyKeys
• FilterKeys
• ToggleKeys
• SoundSentry
• ShowSounds
• High Contrast
• MouseKeys
• SerialKey
Configuration Testing
Configuration testing is the process of checking the operation of the software under testing
with all the various types of hardware.
The different configuration possibilities for a standard Windows-based PC
• The PC - Compaq, Dell, Gateway, Hewlett Packard, IBM
• Components - system boards, component cards, and other internal devices such as disk
drives, CD-ROM drives, video, sound, modem, and network cards
• Peripherals - printers, scanners, mice, keyboards, monitors, cameras, joysticks
• Interfaces - ISA, PCI, USB, PS/2, RS/232, and Firewire
• Options and memory - hardware options and memory sizes
• Device Drivers
All components and peripherals communicate with the operating system and the software
applications through low-level software called device drivers. These drivers are often provided
by the hardware device manufacturer and are installed when you set up the hardware. Although
technically they are software, for testing purposes they are considered part of the hardware
configuration.
To start configuration testing on a piece of software, the tester needs to consider which of
these configuration areas would be most closely tied to the program.
Examples:
• A highly graphical computer game will require lots of attention to the video and sound
areas.
• A greeting card program will be especially vulnerable to printer issues.
• A fax or communications program will need to be tested with numerous modems and
network configurations.
Finding Configuration Bugs:
The sure way to tell if a bug is a configuration problem and not just an ordinary bug is to perform
the exact same operation that caused the problem, step by step, on another computer with a
completely different configuration.
• If the bug doesn't occur, it's very likely a configuration problem.
• If the bug happens on more than one configuration, it's probably just a regular bug.
The general process that the tester should use when planning the configuration testing are:
• Decide the types of hardware needed
• Decide what hardware brands, models, and device drivers are available
• Decide which hardware features, modes, and options are possible
• Pare down the identified hardware configurations to a manageable set
• Identify the software’s unique features that work with the hardware configurations
• Design the test cases to run on each configuration
• Execute the tests on each configuration
• Rerun the tests until the results satisfy the test team
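The "pare down" step above can be sketched with a simple each-choice strategy: instead of running the full cross product of options, pick just enough configurations that every option appears at least once. The dimensions and brand names below are illustrative, and real projects often use stronger pairwise coverage instead.

```python
import itertools

# Illustrative configuration dimensions (real projects would list far more).
DIMENSIONS = {
    "pc":      ["Compaq", "Dell", "IBM"],
    "printer": ["HP", "Canon"],
    "os":      ["Win10", "Win11"],
}

full_matrix = list(itertools.product(*DIMENSIONS.values()))
assert len(full_matrix) == 3 * 2 * 2          # 12 combinations in total

def each_choice_configs(dims):
    """Pare the matrix down so every option appears at least once."""
    options = list(dims.values())
    rows = max(len(o) for o in options)
    return [tuple(o[i % len(o)] for o in options) for i in range(rows)]

subset = each_choice_configs(DIMENSIONS)
assert len(subset) == 3                        # a manageable set
# Every individual option is still covered somewhere in the subset.
for pos, opts in enumerate(DIMENSIONS.values()):
    assert {row[pos] for row in subset} == set(opts)
```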
Compatibility Testing
Software compatibility testing means checking that your software interacts with and shares
information correctly with other software. This interaction could occur between two programs
simultaneously running on the same computer or even on different computers connected through
the Internet thousands of miles apart
Example of compatible software: cutting text from a Web page and pasting it into a document
opened in your word processor.
In performing software compatibility testing on a new piece of software, the tester needs to
concentrate on
• Platform and Application Versions (Backward and Forward Compatibility, The Impact of
Testing Multiple Versions)
• Standards and Guidelines (High-Level Standards and Guidelines, Low-Level Standards and
Guidelines)
• Data Sharing Compatibility (File save and file load, File export and file import)
[Figure: Compatibility testing]
Platform and Application Versions
Selecting the target platforms or the compatible applications is really a program
management or a marketing task. A person who's very familiar with the customer base will
decide whether the software is to be designed for a specific operating system, Web browser, or
some other platform. They'll also identify the version or versions that the software needs to be
compatible with.
Backward compatible:
If something is backward compatible, it will work with previous versions of the software.
Forward compatible:
If something is forward compatible, it will work with future versions of the software.
[Figure: Forward and backward compatibility]
In compatibility testing a new platform, the tester must check that existing software
applications work correctly with it.
Software Interaction
To begin the task of compatibility testing, tester needs to equivalence partition all the
possible software combinations into the smallest, effective set that verifies that the software
interacts properly with other software.
Factors to be considered in partitioning
• Popularity
• Age
• Type
• Manufacturer
In compatibility testing a new application, the tester may be required to test it on multiple platforms
and with multiple applications.
[Figure: Testing on multiple platforms and with multiple applications]
Standards and Guidelines
There are two types of standards:
• High level
• Low level
High-level standards are the ones that guide the product’s general compliance, its look and feel,
its supported features, and so on.
Low-level standards are the nitty-gritty details, such as the file formats and the network
communications protocols.
Data Sharing Compatibility
The sharing of data among applications is what really gives software its power. To be a great
compatible product, a well-written program must support and adhere to published standards and
allow users to easily transfer data to and from other software.
Familiar means of transferring data:
1. File save and file load
2. File export and file import
3. Cut, copy, and paste
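The first of these can be checked with a round-trip test: whatever one program writes, another must read back unchanged. The JSON format and the function names here are assumptions made for the sketch, not a prescribed design.

```python
import json
import os
import tempfile

def save_doc(doc, path):
    with open(path, "w", encoding="utf-8") as f:
        json.dump(doc, f)

def load_doc(path):
    with open(path, encoding="utf-8") as f:
        return json.load(f)

# Data-sharing compatibility check: a save/load round trip on a common
# format must return the document unchanged, including non-ASCII text.
doc = {"title": "Résumé", "pages": 2, "tags": ["draft", "v1"]}
path = os.path.join(tempfile.mkdtemp(), "doc.json")
save_doc(doc, path)
assert load_doc(path) == doc
```

Export/import between two different programs is tested the same way, except the writer and reader come from different products.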
Web site Testing
Web site testing encompasses many areas, including
• configuration testing,
• compatibility testing,
• usability testing,
• documentation testing,
• localization testing
Web page features
The different features of web pages are as follows,
• Text of different sizes, fonts, and colors
• Graphics and photos
• Hyperlinked text and graphics
• Varying advertisements
• Drop-down selection boxes
• Fields in which the users can enter data
Features that make the Web site much more complex:
• Customizable layout that allows users to change where information is positioned on screen
• Customizable content that allows users to select what news and information they want to see
• Dynamic drop-down selection boxes
• Dynamically changing text
• Dynamic layout and optional information based on screen resolution
• Compatibility with different Web browsers, browser versions, and hardware and software platforms
• Lots of hidden formatting, tagging, and embedded information that enhances the Web page's usability
Black-Box Testing
• Treat the Web page or the entire Web site as a black box
• Take some time and explore
• Think about how to approach testing a website
• What would the tester test?
• What would the equivalence partitions be?
• What would the tester choose not to test?
When testing a Web site, the tester first creates a state table, treating each page as a different
state with the hyperlinks as the lines connecting them. A completed state map will give a better
view of the overall task.
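The state table can be represented directly in code as a page-to-links map and walked mechanically. The site map below is hypothetical; the sketch shows two checks a completed state map enables: which pages are reachable from the home page, and which pages are dead ends worth reviewing.

```python
# Hypothetical site map: each page (state) and the hyperlinks leaving it.
SITE = {
    "home":     ["products", "about"],
    "products": ["home", "checkout"],
    "about":    ["home"],
    "checkout": [],                  # suspicious: no way back
}

def reachable(site, start):
    """Walk the state table and collect every page reachable from `start`."""
    seen, stack = set(), [start]
    while stack:
        page = stack.pop()
        if page not in seen:
            seen.add(page)
            stack.extend(site.get(page, []))
    return seen

def dead_ends(site):
    """Pages with no outgoing links: candidates for 'orphan page' review."""
    return {page for page, links in site.items() if not links}

assert reachable(SITE, "home") == {"home", "products", "about", "checkout"}
assert dead_ends(SITE) == {"checkout"}
```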
The tester should look for the following
Text
Web page text should be treated just like documentation and tested accordingly. Check the
audience level, the terminology, the content and subject matter, the accuracy—especially of
information that can become outdated—and always, always check spelling.
Hyperlinks
Links can be tied to text or graphics. Each link should be checked to make sure that it jumps
to the correct destination and opens in the correct window. Make sure that hyperlinks are obvious.
Text links are usually underlined, and the mouse pointer should change to a hand pointer when
it's over any kind of hyperlink—text or graphic. If the link opens up an email message, fill out
the message, send it, and make sure you get a response.
Graphics
Do all graphics load and display properly? If a graphic is missing or is incorrectly named, it
won’t load and the Web page will display an error where the graphic was to be placed. If text and
graphics are intermixed on the page, make sure that the text wraps properly around the graphics.
Try resizing the browser’s window to see if strange wrapping occurs around the graphic. How’s
the performance of loading the page? Are there so many graphics on the page, resulting in a large
amount of data to be transferred and displayed, that the Web site’s performance is too slow? What
if it’s displayed over a slow dial-up modem connection on a poor-quality phone line?
Forms
Test forms just as you would if they were fields in a regular software program. Are the fields
the correct size? Do they accept the correct data and reject the wrong data? Is there proper
confirmation when you finally press Enter? Are optional fields truly optional and the required
ones truly required?
Objects and Other functionality
Take care to identify all the features present on each page. Treat each unique feature as a
feature in a regular program and test it individually with the standard testing techniques. Does it
have its own states? Does it handle data? Could it have ranges or boundaries? What test cases
apply and how should they be equivalence classed?
Grey-Box Testing:
Grey-box testing is a mixture of black-box and white-box testing: test the software as a black
box, but supplement the work by taking a peek (not a full look, as in white-box testing) at what
makes the software work. HTML and Web pages can be tested as a grey box.
White-Box Testing:
Features of a website tested with a white-box approach are
• Dynamic Content
• Database-Driven Web Pages
• Programmatically Created Web Pages
• Server Performance and Loading
• Security
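Server performance and loading can be sketched by firing many concurrent requests and checking that none are dropped. To stay self-contained, the block below uses a toy in-process server rather than real HTTP; in practice a load-testing tool would drive the actual web server.

```python
import threading
from concurrent.futures import ThreadPoolExecutor

class ToyServer:
    """Stand-in for a web server so the sketch stays self-contained."""
    def __init__(self):
        self.hits = 0
        self._lock = threading.Lock()
    def handle(self, request_id):
        with self._lock:             # count each request exactly once
            self.hits += 1
        return 200                   # every request answered OK

# Load test: fire many concurrent requests and check none are dropped.
server = ToyServer()
with ThreadPoolExecutor(max_workers=20) as pool:
    statuses = list(pool.map(server.handle, range(500)))

assert len(statuses) == 500
assert all(s == 200 for s in statuses)
assert server.hits == 500
```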
Configuration and Compatibility Testing:
• Configuration testing is the process of checking the operation of the software with various
types of hardware and software platforms and their different settings.
• Compatibility testing is checking the software's operation with other software.
The possible hardware and software configurations that could affect the operation or
appearance of a Web site are:
• Hardware Platform
• Browser Software and Version
• Browser Plug-Ins
• Browser Options
• Video Resolution and Color Depth
• Text Size
• Modem Speeds
Usability Testing:
Following and testing a few basic rules can help make Web sites more usable.
Jakob Nielsen, a respected expert on Web site design and usability, has performed extensive
research on Web site usability.
The Top Ten Mistakes in Web Design
1. Gratuitous Use of Bleeding-Edge Technology
2. Scrolling Text, Marquees, and Constantly Running Animations
3. Long Scrolling Pages
4. Non-Standard Link Colors
5. Outdated Information
6. Overly Long Download Times
7. Lack of Navigation Support
8. Orphan Pages
9. Complex Web Site Addresses
10. Using Frames
Testing the documentation
If the software's documentation consists of nothing but a simple readme file, testing it
would not be a big deal. The tester should make sure that it includes all the material it is
supposed to, that everything is technically accurate, and should run a spell check and a virus scan
on the disk.
Types of Documentation: (components classified as documentation)
• Packaging text and graphics
• Marketing material, ads, and other inserts
• Warranty/registration
• EULA
• Labels and stickers
• Installation and setup instructions
• User’s manual
• Online help
• Tutorials, wizards, and CBT
• Samples, examples, and templates
• Error messages
[Figure: Documentation on a disk label]
[Figure: Documentation testing checklist: general areas]
[Figure: Documentation testing checklist: correctness]
The Importance of Documentation Testing
Good software documentation contributes to the product’s overall quality in three ways
1. It improves usability
2. It improves reliability
3. It lowers support costs
The effective approach to documentation testing is to work through the documentation as a user
would: read it carefully, follow every step, examine every figure, and try every example. With this
approach, the tester will find bugs both in the software and in the documentation.