Chapter-2
1. This section discusses the relationships between software development activities and test activities in the
software development lifecycle.
Software development lifecycle model: describes the types of activity performed at each stage in a
software development project.
Sequential development models: A sequential development model describes the software development
process as a linear, sequential flow of activities. This means that any phase in the development process
should begin when the previous phase is complete.
In any software development lifecycle model, there are several characteristics of good testing:
● For every development activity, there is a corresponding test activity
● Each test level has test objectives specific to that level
● Test analysis and design for a given test level begin during the corresponding development activity
● Testers participate in discussions to define and refine requirements and design, and are involved in reviewing work products as soon as drafts are available
2. Identify reasons why software development lifecycle models must be adapted to the context of project and
product characteristics
Software development lifecycle models must be selected and adapted to the context of project and product
characteristics. An appropriate software development lifecycle model should be selected and adapted
based on the project goal, the type of product being developed, and business priorities.
2.2 Test Levels
Test levels are groups of test activities that are organized and managed together. The test levels include:
● Component testing
● Integration testing
● System testing
● Acceptance testing
Component testing
Component testing (also known as unit or module testing) focuses on components that are
separately testable. Objectives of component testing include:
● Reducing risk
● Verifying whether the functional and non-functional behaviors of the component are as designed
and specified
● Building confidence in the component’s quality
● Finding defects in the component
● Preventing defects from escaping to higher test levels
Note: Component testing is often done in isolation from the rest of the system, which may require mock
objects, service virtualization, harnesses, stubs, and drivers.
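As a minimal sketch (in Python, using the standard unittest.mock module; the checkout component and its payment gateway dependency are hypothetical), a stub can stand in for a real dependency so the component is tested in isolation:

```python
from unittest.mock import Mock

# Hypothetical component under test: totals an order (in whole cents)
# and charges it through an external payment gateway dependency.
def checkout(order_lines, gateway):
    total = sum(qty * price_cents for qty, price_cents in order_lines)
    gateway.charge(total)  # the external call we want to isolate
    return total

def test_checkout_charges_correct_total():
    stub_gateway = Mock()  # stub replaces the real payment gateway
    total = checkout([(2, 999), (1, 500)], stub_gateway)
    assert total == 2498
    stub_gateway.charge.assert_called_once_with(2498)

if __name__ == "__main__":
    test_checkout_charges_correct_total()
    print("component test passed in isolation")
```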
Note: Defects are typically fixed as soon as they are found, often with no formal defect management.
However, when developers do report defects, this provides important information for root cause analysis
and process improvement.
Note: This type of testing is usually done by the developer who wrote the code (the author).
Test driven development (TDD): Test driven development is highly iterative and is based on cycles of
developing automated test cases, then building and integrating small pieces of code, then executing the
component tests, correcting any issues, and refactoring the code.
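As an illustration, here is a minimal sketch of one TDD cycle in Python (the slugify helper and its expected behavior are hypothetical): the automated test case is written first and fails, then just enough code is written to make it pass, after which the code can be refactored with the test as a safety net:

```python
import unittest

# Step 2: just enough implementation to make the test below pass.
def slugify(title):
    return title.strip().lower().replace(" ", "-")

# Step 1: the automated test case, written before the implementation
# (it fails until slugify is implemented).
class SlugifyTest(unittest.TestCase):
    def test_lowercases_and_hyphenates(self):
        self.assertEqual(slugify("  Hello World "), "hello-world")

if __name__ == "__main__":
    unittest.main()  # Step 3: execute, correct any issues, then refactor
```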
Integration Testing
Integration testing focuses on interactions between components or systems. Objectives of integration
testing include:
● Reducing risk
● Verifying whether the functional and non-functional behaviors of the interfaces are as designed
and specified
● Building confidence in the quality of the interfaces
● Finding defects in the interfaces themselves or within the components or systems
● Preventing defects from escaping to higher test levels
Test basis:
● Software and system design
● Sequence diagrams
● Workflows
Test objects:
● Subsystems
● Databases
● Microservices
Typical defects (component integration):
● Incorrect data, missing data, or incorrect data encoding
● Incorrect sequencing or timing of interface calls
● Incorrect assumptions about the meaning, units, or boundaries of the data being passed between components
Typical defects (system integration):
● Inconsistent message structures between systems
● Incorrect data, missing data, or incorrect data encoding
● Incorrect assumptions about the meaning, units, or boundaries of the data being passed between systems
Note: In order to simplify defect isolation and detect defects early, integration should normally be
incremental rather than “big bang”. This is why systematic integration strategies are used; these are
based on the system architecture, such as top-down and bottom-up integration.
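For illustration, here is a minimal sketch of one top-down integration step in Python (all names are hypothetical): the higher-level reporting component is integrated and tested first, while the lower-level pricing component it calls is not yet integrated and is replaced by a stub at the interface under test:

```python
from unittest.mock import Mock

# Hypothetical higher-level component, integrated first in a top-down
# strategy; it depends on a lower-level pricing component.
def order_report(order_id, pricing):
    price_cents = pricing.lookup(order_id)  # interface being tested
    return f"order {order_id}: {price_cents} cents"

def test_order_report_with_pricing_stub():
    pricing_stub = Mock()  # stands in for the not-yet-integrated component
    pricing_stub.lookup.return_value = 4200
    assert order_report(7, pricing_stub) == "order 7: 4200 cents"
    pricing_stub.lookup.assert_called_once_with(7)

if __name__ == "__main__":
    test_order_report_with_pricing_stub()
    print("top-down integration step passed")
```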
System Testing
System testing focuses on the behavior and capabilities of a whole system or product, often
considering the end-to-end tasks the system can perform and the non-functional behaviors it
exhibits while performing those tasks. Objectives of system testing include:
● Reducing risk
● Verifying whether the functional and non-functional behaviors of the system are as designed and
specified
● Building confidence in the quality of the system as a whole
● Finding defects
● Preventing defects from escaping to higher test levels or production
Note: System testing often produces information that is used by stakeholders to make release decisions.
System testing may also satisfy legal or regulatory requirements or standards.
Test basis:
● Use cases
● Epics and user stories
● Models of system behavior
Test objects:
● Operating systems
● System under test (SUT)
● System configuration and configuration data
Typical defects:
● Incorrect control and/or data flows within the system
● Failure to properly and completely carry out end-to-end functional tasks
● Failure of the system to work properly in the production environment
Note: System testing should focus on the overall, end-to-end behavior of the system as a whole, both
functional and non-functional.
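As a minimal sketch (a toy, hypothetical “system” consisting of a cart component and an invoicing function), a system-level test drives a complete end-to-end task through the assembled parts rather than testing any component in isolation:

```python
# Toy system: a cart component plus an invoicing function, exercised
# together as a whole for an end-to-end task.
class Cart:
    def __init__(self):
        self.items = []

    def add(self, name, price_cents):
        self.items.append((name, price_cents))

def invoice(cart):
    total = sum(price for _, price in cart.items)
    return f"{len(cart.items)} items, total {total} cents"

def test_end_to_end_purchase():
    # Drive the complete user task: add items, then produce the invoice.
    cart = Cart()
    cart.add("book", 1500)
    cart.add("pen", 300)
    assert invoice(cart) == "2 items, total 1800 cents"

if __name__ == "__main__":
    test_end_to_end_purchase()
    print("end-to-end system test passed")
```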
Acceptance Testing
Acceptance testing, like system testing, typically focuses on the behavior and capabilities of a
whole system or product. Objectives of acceptance testing include:
● Establishing confidence in the quality of the system as a whole
● Validating that the system is complete and will work as expected
● Verifying that functional and non-functional behaviors of the system are as specified
Note: Acceptance testing may produce information to assess the system’s readiness for deployment and
use by the customer (end-user). Defects may be found during acceptance testing, but finding defects is
often not an objective, and finding a significant number of defects during acceptance testing may in some
cases be considered a major project risk. Acceptance testing may also satisfy legal or regulatory
requirements or standards.
Test basis:
● Business processes
● User or business requirements
● Use cases
● Regulations, legal contracts and standards
Test objects:
● Business processes for a fully integrated system
● Recovery systems and hot sites (for business continuity and disaster recovery testing)
● Operational and maintenance processes
● System under test (SUT)
Typical defects:
● System workflows do not meet business or user requirements
● Business rules are not implemented correctly
● System does not satisfy contractual or regulatory requirements
● Non-functional failures
● Functional Testing:
○ Functional testing of a system involves tests that evaluate functions that the system should perform.
Functional testing is concerned with “what” the system should do.
○ Functional testing considers the behavior of the software, so black-box techniques may be used to
derive test conditions and test cases for the functionality of the component or system.
○ The thoroughness of functional testing can be measured through functional coverage. Functional
coverage is the extent to which some type of functional element has been exercised by tests (see the first sketch after this list).
● Non-functional Testing:
○ Non-functional testing of a system evaluates characteristics of systems and software such as
usability, performance efficiency or security.
○ Black-box techniques may be used to derive test conditions and test cases for non-functional testing.
● White-box Testing:
○ White-box testing derives tests based on the system’s internal structure or implementation.
○ The thoroughness of white-box testing can be measured through structural coverage. Structural
coverage is the extent to which some type of structural element has been exercised by tests.
● Change-related Testing:
○ When changes are made to a system, either to correct a defect or because of new or changing
functionality, testing should be done to confirm that the changes have corrected the defect or
implemented the functionality correctly, and have not caused any new defects:
■ Confirmation testing: After a defect is fixed, the software may be tested with all test cases that
failed due to the defect.
■ Regression testing: It is possible that a change made in one part of the code may accidentally
affect the behavior of other parts of the code (see the second sketch after this list).
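First sketch: measuring functional coverage, as mentioned under functional testing above. This minimal Python sketch (the requirement IDs and the test-to-requirement traceability are hypothetical) computes the fraction of specified functional elements exercised by at least one test:

```python
# Specified functional elements, here as hypothetical requirement IDs.
requirements = {"REQ-1", "REQ-2", "REQ-3", "REQ-4"}

# Hypothetical traceability: which requirements each test exercises.
tests = {
    "test_login":    {"REQ-1"},
    "test_checkout": {"REQ-2", "REQ-3"},
}

covered = set().union(*tests.values())
coverage = len(covered & requirements) / len(requirements)
print(f"functional coverage: {coverage:.0%}")  # -> 75%
```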
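Second sketch: confirmation and regression testing after a fix. Assuming a hypothetical discount function whose defect was that orders of exactly 100 received no discount, the confirmation test re-runs the case that failed due to the defect, while the regression tests re-execute existing cases to check for unintended side effects:

```python
import unittest

# Hypothetical fixed function: the defective version used "> 100",
# so an order of exactly 100 wrongly received no discount.
def discounted_total(amount):
    return amount * 0.9 if amount >= 100 else amount

class ConfirmationTest(unittest.TestCase):
    # Re-runs the test case that failed because of the defect.
    def test_boundary_order_gets_discount(self):
        self.assertEqual(discounted_total(100), 90)

class RegressionTest(unittest.TestCase):
    # Existing cases re-executed to detect unintended side effects.
    def test_small_order_unchanged(self):
        self.assertEqual(discounted_total(50), 50)

    def test_large_order_still_discounted(self):
        self.assertEqual(discounted_total(200), 180)

if __name__ == "__main__":
    unittest.main()
```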
When any changes are made as part of maintenance, maintenance testing should be performed, both to
evaluate the success with which the changes were made and to check for possible side effects.
Impact analysis evaluates the changes that were made for a maintenance release to identify the intended
consequences as well as expected and possible side effects of a change, and to identify the areas in the
system that will be affected by the change. Impact analysis can also help to identify the impact of a change
on existing tests. The side effects and affected areas in the system need to be tested for regressions,
possibly after updating any existing tests affected by the change.
Impact analysis may be done before a change is made, to help decide if the change should be made,
based on the potential consequences in other areas of the system.
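As an illustration of how impact analysis can identify the tests affected by a change, here is a minimal Python sketch (the module names and the test-to-module traceability are hypothetical):

```python
# Hypothetical traceability: which modules each existing test covers.
test_coverage = {
    "test_login":    {"auth"},
    "test_checkout": {"cart", "payment"},
    "test_report":   {"reporting"},
}

changed_modules = {"payment"}  # modules touched by the maintenance change

# Tests covering any changed module should be re-run (and possibly
# updated) as part of regression testing.
affected = [name for name, mods in test_coverage.items()
            if mods & changed_modules]
print(affected)  # -> ['test_checkout']
```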