Prep Material
Disadvantages:
Requirement changes are not allowed
If there is any defect in the requirement itself, it is carried forward to the later stages
Total project investment is higher, as rework on defects is time consuming
Testing starts only once the total development is completed
Spiral model:
Especially suitable for product-based companies, as their products keep evolving
It has many cycles, each bringing in new features, and the product is delivered in every cycle, so we also call it the Iterative model
Advantages:
The Spiral model is an iterative model
This model overcomes the drawbacks of the Waterfall model---requirement changes are not allowed in Waterfall, but here they are allowed
We follow the Spiral model when there is dependency between the modules---in Gmail, Compose Mail is built in the first module, and Sent Mail (which should show the email that was sent) is developed on top of Compose Mail...so the second module depends on the first
New SW is released to the customer at the end of each cycle
SW is released in multiple versions, so it is also called the Version control model
Testing is done in every cycle before moving to the next cycle
The customer can use the software after every module/cycle
Disadvantages:
Requirement changes are not allowed in the middle of a cycle
There is no testing in the Requirement & Design phases
V model
Phases:
1) Requirement analysis---Verification
2) System Design
3) Architecture design
4) Module design
5) Coding
6) Unit testing---Validation
7) Integration testing
8) System testing
9) Business/User Acceptance testing (BAT/UAT)
The BRS doc is the base for carrying out UAT (testers from the client side and a few testing team members are involved)
The SRS doc is the base for carrying out System testing (purely the testing team is involved)
The High Level & Low Level design docs are the base for carrying out Integration testing (developers are involved)
The code/a small piece of code is the base for carrying out Unit testing (developers do this testing)
Advantages:
• This is a highly disciplined model and phases are completed one at a time.
• Works well for smaller projects where requirements are very well
understood.
• Simple & easy to understand and use.
• Easy to manage due to the rigidity of the model. Each phase has specific
deliverables and a review process.
Disadvantages:
• Not a good model for complex projects.
• Poor model for long and ongoing projects.
• Not suitable for the projects where requirements are at a moderate to high
risk of changing.
• Once an application is in the testing phase, it is difficult to go back and
change a functionality.
• No working software is produced until late during the life cycle.
Static testing: Testing the project-related documents is called Static testing or Document-level testing
Some techniques to perform Static testing are
1) Review---Conducted on the docs to ensure correctness & completeness---requirement reviews, design reviews, code reviews, test plan reviews, test case reviews, etc.
2) Walkthrough---It's an informal review which doesn't have any plan...it can be done at any point of time
The author reads the doc or code and discusses it with peers
Walkthroughs don't have any Minutes of Meeting (MOM)
3) Inspection---It's the most formal review type, where the author, project managers and team members are required
An inspection has a proper schedule, which is communicated well in advance to the concerned Dev/Testing teams
Dynamic testing: Testing the actual SW is the Dynamic testing.
Unit
Integration
System
BAT/UAT testing types will come under this Dynamic testing
Verification:
Verification checks whether we are building the product right
Focus on documentation
Verification typically involves Reviews, Walkthroughs & Inspection
BRS, SRS, High level document & Low level document comes under verification
So, Static testing belongs to Verification
Validation:
Validation checks whether we have built the right product
Takes place after the Verifications are completed
Focus on actual software which is developed
Validation typically involves actual testing on the software
Unit testing, Integration, System & UAT testing come under this
So, Dynamic testing belongs to Validation
QA, QC & QE
QA is the Quality Assurance
QC is the Quality Control
QE is the Quality Engineering
QA vs QC
1) QA is a process-related activity: higher management defines the process and makes sure the respective teams follow it
QC is the actual testing of the SW; QC people are involved during the testing phase
2) QA focuses on building Quality
QC focuses on delivering Quality
3) QA is for preventing the defects
QC is for detecting the defects
4) QA is process oriented
QC is product oriented
5) QA is followed for the entire life cycle
QC is involved only during the Testing phase of the SDLC process
Black box testing: Testing carried out without knowing the code or the logic behind it is called Black box testing. The testing team performs this testing. Ex: System testing & UAT
Levels of testing:
1) Unit testing
2) Integration testing
3) System testing
4) User Acceptance testing (UAT)
Unit testing:
1) A Unit is a single component or module of a software
2) Unit testing is conducted on a single program or a single module
3) Unit testing is a white box testing technique
4) Unit testing is conducted by the Developers
Unit testing techniques:
1) Basis path testing: Testing every independent path through the code
2) Control structure testing: Under this we have two sub-techniques---Conditional coverage & Loops coverage
3) Mutation testing: Small changes (mutations) are made to the code to check whether the existing tests detect them
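The control structure (conditional coverage) technique can be sketched as follows; `classify_age` is a hypothetical function, and the test exercises every branch outcome at least once:

```python
# Hypothetical function under unit test; each branch is labelled.
def classify_age(age):
    """Return a category for the given age; reject invalid input."""
    if age < 0:
        raise ValueError("age cannot be negative")   # error branch
    if age < 18:
        return "minor"                               # true branch
    return "adult"                                   # fall-through branch

def test_all_branches():
    # Conditional coverage: drive every branch of classify_age.
    try:
        classify_age(-1)
        raise AssertionError("expected ValueError")
    except ValueError:
        pass                                         # error branch covered
    assert classify_age(17) == "minor"               # age < 18 covered
    assert classify_age(18) == "adult"               # fall-through covered

test_all_branches()
```

In a real project the developer would write such tests with a framework like `unittest` or `pytest`; the plain asserts here keep the sketch self-contained.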
System testing:
A high-level idea about System testing:
Testing the overall functionality of the application with respect to the client requirements
It is a black box testing technique
This testing is conducted by the testing team
We start System testing after completion of Unit and Integration testing
Before conducting System testing, the tester should know the customer requirements very clearly
Usability testing: Checks how easily the end user can understand and operate the application
Also covers testing the help menu documents
-----------------------------
Functional testing:
1) Object properties testing
2) Database testing
3) Error handling
4) Calculation/Manipulations testing
5) Links existence & Links execution
6) Cookies & sessions
Functionality is nothing but the behaviour of the application/software. Functional testing checks whether the application behaves as per the customer requirements or not
Sessions are created on the server side. They are time slots created by the server, and a session expires after some time if you are idle
The main reason to use them is as a security mechanism
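The session idea above can be sketched as a server-side object with an idle timeout; the 30-minute value and the `Session` class are assumptions for illustration:

```python
import time

SESSION_TIMEOUT = 30 * 60  # assumed idle timeout in seconds (30 minutes)

class Session:
    """Minimal server-side session that expires after idle time."""
    def __init__(self, user):
        self.user = user
        self.last_active = time.time()

    def touch(self):
        """Any user activity resets the idle clock."""
        self.last_active = time.time()

    def is_expired(self, now=None):
        now = time.time() if now is None else now
        return (now - self.last_active) > SESSION_TIMEOUT

s = Session("alice")
assert not s.is_expired()                          # fresh session is live
assert s.is_expired(now=s.last_active + 31 * 60)   # 31 idle minutes: expired
```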
-----------------------------------
Functional testing is based on the customer requirements
Non-functional testing: This testing is purely based on customer expectations, and is taken up once the functionality is stable
1) Performance testing: Checks the speed of the application, i.e. how well the application responds when a large number of users are using it. Performance is usually tested on web-based applications
a) Load: Gradually increase the load on the application by increasing the no. of users and verify the response time
b) Stress: Suddenly increase/decrease the load on the application and check its speed
c) Volume: Check how much data our application is able to handle
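A toy load-test harness can illustrate the idea; `fake_endpoint` is a made-up stand-in for a real HTTP call, and the numbers are illustrative only (real load testing would use a tool like JMeter):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fake_endpoint():
    """Stand-in for a real request; sleeps to simulate server work."""
    time.sleep(0.01)
    return 200

def run_load(users):
    """Fire one request per simulated user; return slowest response time."""
    def timed_call(_):
        start = time.time()
        status = fake_endpoint()
        return status, time.time() - start
    with ThreadPoolExecutor(max_workers=users) as pool:
        results = list(pool.map(timed_call, range(users)))
    assert all(status == 200 for status, _ in results)
    return max(elapsed for _, elapsed in results)

# Load testing: gradually increase the no. of users, watch the response time
for users in (1, 5, 10):
    print(users, "users -> slowest response:", round(run_load(users), 3), "s")
```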
2) Security testing: Checks how well our application provides security, i.e. how secure it is. The main focus is on Authentication and Authorisation
Authentication: Testing whether the user is valid or not
Authorisation/Access control: When a valid user logs in, only certain access is granted; this checks the permissions of the valid user
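The two checks can be sketched as below; the user store and permission grants are made-up stand-ins for a real identity system:

```python
USERS = {"alice": "s3cret"}                   # assumed credential store
PERMISSIONS = {"alice": {"read", "comment"}}  # assumed access grants

def authenticate(user, password):
    """Authentication: is this a valid user?"""
    return USERS.get(user) == password

def authorise(user, action):
    """Authorisation: what is the valid user allowed to do?"""
    return action in PERMISSIONS.get(user, set())

assert authenticate("alice", "s3cret")    # valid user accepted
assert not authenticate("mallory", "x")   # invalid user rejected
assert authorise("alice", "read")         # granted permission works
assert not authorise("alice", "delete")   # ungranted permission refused
```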
3) Recovery testing: Recovering lost data is Recovery testing. It checks how the system changes from an abnormal state back to normal (last known good configuration)
--------------------------------------------------------------------------
Differences between Functional & Non-functional testing:
--Functional testing validates what the system does (based on customer requirements); Non-functional testing validates how well it does it (based on customer expectations)
--Functional testing is performed first; Non-functional testing is taken up once the functionality is stable
Software testing terminology
1) Regression testing: Testing conducted on the modified build to make
sure there won’t be any impact on existing functionalities because of the
changes like adding/deleting/modifying features.
A) Unit regression testing: Testing only the changes/modifications done
by the developer
B) Regional regression testing: --Testing the modified module along with
the impacted modules
-- Impact analysis meetings are conducted with Dev & QA to identify the impacted modules
C) Full regression: --Testing the main feature & the remaining parts of the application
--Ex: Dev has made changes in many modules; instead of identifying the impacted modules, we perform one round of Full Regression
2) Re-testing: Testing the bug fixed by the developer to make sure the fix is working as expected is called Re-testing
--The tester will close the bug if it is working fine; otherwise, he will reopen it
--It makes sure the bugs reported in the earlier build are fixed properly in the current build
3) Exploratory testing:
a) We have to explore the application, understand it completely and then test
b) Understand the application, identify all possible scenarios and then start the testing
c) We perform Exploratory testing when the application has no proper requirement documents
Drawbacks:
a) Time consuming
b) The tester may never know whether a bug exists in the application
c) The tester might mistake a feature for a bug, or a bug for a feature, as we don't have any requirements
4) Adhoc testing:
a) Testing the application randomly without any TCs or requirement documents
b) It's an informal testing type with an aim to break the system
c) Testers should have knowledge of the application even though there are no requirements/TCs
d) This is an unplanned activity, hence we call it Adhoc testing
5) Monkey/Gorilla testing:
a) Testing the application randomly without any TCs or requirement documents
b) The tester does not have knowledge of the application
c) Suitable for gaming applications
6) Positive testing:
Testing the application with valid inputs is called Positive testing
We check whether the application behaves as expected with valid inputs
7) Negative testing:
Testing the application with invalid inputs is called Negative testing
We check whether the application behaves as expected with invalid inputs
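Positive and negative testing can be sketched against a hypothetical age field that accepts whole numbers from 1 to 120 (an assumed rule, not from the notes):

```python
import re

def is_valid_age(text):
    """Assumed field rule: 1-3 digits, value between 1 and 120."""
    return bool(re.fullmatch(r"\d{1,3}", text)) and 1 <= int(text) <= 120

# Positive TCs: valid inputs, the application should accept them
for value in ("1", "35", "120"):
    assert is_valid_age(value), value

# Negative TCs: invalid inputs, the application should reject them
for value in ("", "0", "121", "abc", "3.5", "-4"):
    assert not is_valid_age(value), value
```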
Positive vs Negative TCs:
8) End-to-end testing:
Testing the overall functionality of the system, including the data integration among all the modules, is called E2E testing
Ex: Register--->Login--->Add payee---a) Delete payee b) Edit payee--->Transfer funds--->Logout
9) Globalization/Internationalization/I18N testing:
Performed to ensure the system or application can run globally
Different aspects of the application are tested to make sure it supports every language
It tests that the different currency formats, mobile number formats and address formats are supported by the application
Ex: Amazon supports many languages, address formats, mobile number formats and currencies
One more ex: There is a text box which will allow only alphabet
A--Z--->Valid XGK
a---z--->Valid tdj
Special characters-->Invalid !@#
Numbers-->Invalid 123
Spaces-->Invalid xY Z
Note: We use the Equivalence Class Partitioning (ECP) & Boundary Value Analysis (BVA) techniques for Input Domain testing
Input domain testing: The values in text boxes/input fields are verified; hence we use ECP & BVA to prepare the test data
TC1 is a valid TC
TC2, TC3, TC4 are negative TCs
TC5 is an Invalid TC
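The alphabet-only text box example above can be expressed as code: one representative value per equivalence class is enough to cover the whole class. The `accepts` rule is assumed from the example (alphabets only are valid):

```python
import re

def accepts(text):
    """Assumed text-box rule: only alphabets A-Z / a-z are valid."""
    return bool(re.fullmatch(r"[A-Za-z]+", text))

assert accepts("XGK")        # class: upper-case letters -> valid
assert accepts("tdj")        # class: lower-case letters -> valid
assert not accepts("!@#")    # class: special characters -> invalid
assert not accepts("123")    # class: numbers -> invalid
assert not accepts("xY Z")   # class: contains a space -> invalid
```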
State transition:
--In the State Transition technique, a change in the input conditions changes the state of the application
--This technique allows the tester to test the behaviour of an application
--The tester performs this by entering various input conditions in a sequence
--In this technique, the tester provides positive as well as negative input values to evaluate the system behaviour
Ex: Take the example of a login page of an application which locks the user after 3 wrong password attempts
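The login-lock example can be modelled as a small state machine; the class and state names here are made up for illustration:

```python
class LoginPage:
    """States: active -> logged_in (right password) or locked (3 failures)."""
    MAX_ATTEMPTS = 3

    def __init__(self, password):
        self._password = password
        self.failures = 0
        self.state = "active"

    def login(self, attempt):
        if self.state == "locked":
            return "locked"                  # locked is a terminal state
        if attempt == self._password:
            self.state = "logged_in"
        else:
            self.failures += 1
            if self.failures >= self.MAX_ATTEMPTS:
                self.state = "locked"
        return self.state

page = LoginPage("secret")
assert page.login("wrong1") == "active"   # 1st wrong attempt
assert page.login("wrong2") == "active"   # 2nd wrong attempt
assert page.login("wrong3") == "locked"   # 3rd wrong attempt locks the user
assert page.login("secret") == "locked"   # even the right password is refused
```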
Error guessing:
Error guessing is a testing technique used to find bugs in a SW application based on the tester's prior experience
In Error guessing we don't follow any rules
It depends on the tester's analytical skills and experience
Ex: Submitting a form without filling in all the values
Entering invalid values, e.g. alphabets in an age field
STLC: Software Testing Life Cycle
STLC describes the testing process completely
Phases in STLC:
a) Requirement analysis
b) Test planning
c) Test Design
d) Test Execution
e) Bug/Defect Reporting & Tracking
f) Test Closure
Flowchart describing the STLC phases:
Test plan: A Test Plan refers to a detailed document that catalogues the test
strategy, objectives, schedule, estimations, deadlines, and the resources
required for completing a particular project.
Traceability matrix: Traceability matrix or software testing traceability matrix
is a document that traces and maps the relationship between two baseline
documents. This includes one with the requirement specifications and another
one with the test cases.
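A traceability matrix can be sketched as a mapping from requirement IDs to the test cases that cover them; the IDs below are made up for illustration:

```python
# Requirement IDs (from BRS/SRS) mapped to the test cases covering them.
traceability = {
    "REQ-001": ["TC-101", "TC-102"],
    "REQ-002": ["TC-103"],
    "REQ-003": [],                 # no test case maps back: a coverage gap
}

def uncovered(matrix):
    """Return the requirements that no test case covers."""
    return [req for req, tcs in matrix.items() if not tcs]

assert uncovered(traceability) == ["REQ-003"]
```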
Test plan template contents:
a) Overview
b) Scope----Inclusions, Test Environments, Exclusions
c) Test Strategy
d) Defect Reporting procedure
e) Roles/Responsibilities
f) Test schedules
g) Test deliverables
h) Pricing
i) Entry/Exit criteria
j) Suspension & Resumption criteria
k) Tools
l) Risks & Mitigations
m) Approvals
Test case: A Test Case is a set of actions executed to validate a particular feature or functionality of a software application
Contents of Test case:
a) Test Case ID
b) Test Case Title
c) Description
d) Pre-condition/pre-requisite
e) Requirement ID
f) Steps
g) Expected Result
h) Actual Result
i) Test Data
Test case template example:
Test Environment:
Test environment is a platform specifically built for test case execution on the
software product
It is created by integrating the required software and hardware along with proper
network configurations
Test environment simulates the Production environment
Another name for the Test Environment is Test Bed
Ex: www.gmail.com--Production environment
www.qa.gmail.com--Test/QA Environment
www.dev.gmail.com--Dev Environment
Test Execution:
During this phase Testing team will carry out the testing based on the Test plan
& the Test cases prepared
Entry criteria: Test cases, Test data & Test plan
Activities:
Test cases are executed based on the Test planning
Status of TCs are marked like Pass, Fail, No Run, Blocked
Documentation of Test Results and log defects for failed cases is done
All the blocked and failed TCs are assigned Bug IDs
Retesting once the defects are fixed
Defects are tracked till closure
Deliverables: Provide defect & Test case execution report with completed
results
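The execution report deliverable can be sketched as a status summary; the test case IDs and statuses below are a hypothetical execution log:

```python
from collections import Counter

# Hypothetical execution log: test case ID -> status
results = {
    "TC-101": "Pass",
    "TC-102": "Fail",
    "TC-103": "Pass",
    "TC-104": "Blocked",
    "TC-105": "No Run",
}

summary = Counter(results.values())   # counts per status for the report
failed_or_blocked = [tc for tc, s in results.items()
                     if s in ("Fail", "Blocked")]   # these get Bug IDs

assert summary["Pass"] == 2
assert failed_or_blocked == ["TC-102", "TC-104"]
```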
Guidelines for Test execution:
The build being deployed to the QA environment is the most important part of
Test Execution cycle
Test execution is done in QA environment
Test execution happens in multiple cycles
Test execution phase consists of Executing the Test cases + Test
scripts(Automation)
Defects/Bugs:
Any mismatched functionality/deviation in the functionality of an application is called a Bug/Defect
During Test Execution, Test Engineers report deviations as defects to Dev through templates or tools
Defect Reporting tools:
a) HP ALM
b) Rally
c) JIRA
d) Bugzilla, etc.
Defect report contents:
Defect_ID: Unique identification number for the defects
Defect Description: Detailed information of the defect including information
about the module in which defect was found
Version: Version of the software in which defect was found
Steps: Detailed steps along with screen shots with which the developer can
reproduce the defects
Date raised: Date when the defect is raised
Reference: Where you provide the reference to the documents like
requirements, design, architecture, or any screen shots of the error to help in
understanding the defect
Detected by: name/ID of the tester who raised the defect
Status: Status of the defect (We shall speak more on this in Defect life cycle)
Fixed by: Name/ID of the developer who fixed the defect
Date closed: when the defect is closed
Severity: Describes the impact of the defect on the application
Priority: Related to defect-fixing urgency; it could be High/Medium/Low based on the urgency with which the defect should be fixed
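The template fields above can be sketched as a record type; the field values and defaults here are made up for illustration:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class DefectReport:
    """Fields mirroring the defect-report template (values are made up)."""
    defect_id: str
    description: str
    version: str
    steps: list = field(default_factory=list)
    date_raised: date = None
    detected_by: str = ""
    status: str = "New"
    severity: str = "Major"      # Blocker / Critical / Major / Minor
    priority: str = "P1"         # P0 / P1 / P2

bug = DefectReport(
    defect_id="DEF-042",
    description="No confirmation shown after sending an email",
    version="2.3.1",
    steps=["Log in", "Compose an email", "Click Send"],
    date_raised=date(2024, 1, 15),
    detected_by="tester-07",
    severity="Major",
    priority="P1",
)
assert bug.status == "New"   # a freshly raised defect starts as New
```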
Defect Severity:
--Severity describes the seriousness of the defect and how much impact it has on the business flow
Defect severity can be categorized into 4 classes:
--Blocker(Show stopper): This defect indicates nothing can proceed further
--Critical: The main functionality is not working. Customer business workflow is
broken. They cannot proceed further.
Ex: Fund transfer is not working in Net banking
Ordering a product in Ecommerce application is not working
--Major: It causes an undesirable behavior but the feature or the application is
still functional.
Ex: After sending an email, there is no confirmation
After booking a cab, there is no confirmation
--Minor: It won’t cause any major breakdown of the system
Ex: Look & feel issues, spellings, alignments
Defect Priority:
Priority describes the importance of the defect
Defect priority states the order in which a defect should be fixed
Defect priority can be classified into 3 classes:
--P0(High): The defect must be resolved immediately, as it affects the system severely and the system cannot be used until the defect is fixed
--P1(Medium): It can wait until a new version/build is created
--P2(Low): Developer can fix it in the later releases
Examples of Severity & priority combinations:
Low priority & Low severity: A spelling mistake in a page not frequently
navigated by the user
Low priority & High severity: Application crashing in some corner cases
High priority & Low severity: Slight color change in logo or spelling mistake in
company name
High priority & High severity: Issue with Login functionality(user is not able to
login to the application)
High Severity & Low priority: Web page not found when user clicks on a link(user
doesn’t visit that page generally)
Defect Resolution:
After receiving the defect report from the testing team, the Dev team conducts a review meeting to fix the defects. Then they send the resolution type to the testing team for further communication.
Resolution types:
Accept
Reject
Duplicate
Enhancement---Until we get a confirmation from the client, we mark the defect status as Deferred
Need more information
Not reproducible
Fixed
Backlogged
Defect Life cycle flow chart:
QA/Testing Activities:
1) Understanding the requirements and functional specifications of the
application
2) Identifying required Test scenarios
3) Designing TCs to validate/test the application
4) Setting up Test environment (Test Bed)
5) Execute TCs on the application
6) Log Test results (No .of TCs pass/fail)
7) Defect reporting & tracking
8) Retest fixed defects of previous builds
9) Perform various types of testing on the application (Sanity, Smoke, Regression, E2E, Exploratory, Monkey testing, etc.)
10) Report to the Test Lead about the status of assigned tasks
11) Participate in regular team meetings
12) Create automation scripts
13) Provide recommendations on whether the build is ready for production deployment