Prep Material

The document provides an overview of software, its types, and the software development life cycle (SDLC), including various models like Waterfall, Spiral, and V model. It explains software testing, quality assurance, and the differences between errors, bugs, and failures, along with testing methodologies such as static and dynamic testing. Additionally, it covers various testing levels, including unit, integration, system, and user acceptance testing, as well as non-functional testing aspects like performance and security.

1) What is software?
Software is a collection of computer programs that helps us perform tasks on electronic devices.
Types:
System software: operating systems, device drivers, servers
Programming software: compilers, debuggers, interpreters, etc.
Application software: web applications, mobile applications, etc.
2) What is software testing?
Software testing is a part of the software development process; it is the activity of detecting and identifying defects in the software. The objective of testing is to deliver a quality product to the client.
3) What is software quality?
A few parameters indicate that a product is a good product:
Bug/defect free
Meets the requirements
User friendly
Within budget
Delivered on time
4) Product & Project
If the software is developed for multiple customers based on market requirements, it is a Product.
If the software is developed for a specific customer with their own requirements, it is a Project.
5) Error, Bug & Failure
Any human mistake (e.g., a developer's coding mistake) is an Error.
A Defect/Bug is a deviation from the expected result.
A deviation identified by the end user while working with the software is a Failure.
6) Why do we have bugs/defects in the software?
Programming errors
Software complexity
Miscommunication or no communication between teams
Frequently changing requirements
Lack of knowledge among testers
Objectives of testing:
Finding the defects introduced by the programmer while developing the software
Gaining confidence in, and providing information about, the level of quality
Preventing defects from reaching production
Making sure the product meets the business and end-user requirements
SDLC: Software Development Life Cycle: the process the software industry follows to design, develop, and test software in order to deliver a quality product.
Phases: 1) Requirement Analysis
2) Design
3) Development
4) Testing
5) Maintenance
Different models in SDLC:
1) Waterfall model
2) Spiral model
3) V model
4) Agile model
Waterfall model: Also called the Linear model; the first, oldest, and most traditional model the software industry adopted.
Phases: 1) Requirement analysis
2) System Design
3) Implementation/Development
4) Testing
5) Deployment
6) Maintenance
Advantages:
Quality of the product will be good
Since requirements do not change frequently, the chances of introducing bugs are lower
Initial investment is lower because work proceeds stage by stage and testers are involved only after certain phases are completed
Preferred for smaller projects where the requirements are frozen

Disadvantages:
Requirement changes are not allowed
If there is a defect in the requirement itself, it is carried into the later stages
Total project investment is higher because rework on defects is time consuming
Testing starts only after the entire development is completed

Spiral or Iterative model:


Phases:
1) Planning
2) Risk Analysis
3) Engineering & Execution
4) Evaluation

Especially suitable for product-based companies, since their products are ongoing
Because it has many cycles, each bringing in new features and delivering the product, it is also called the Iterative model
Advantages:
The Spiral model is an iterative model
It overcomes a drawback of the Waterfall model: requirement changes are not allowed in Waterfall, but here they are allowed between cycles
We follow the Spiral model when there are dependencies between modules. For example, in Gmail, Compose Mail is built in the first cycle, and Sent Mail, which depends on the composed email, is built on top of it, so there is a dependency on the first module
New software is released to the customer at the end of each cycle
The software is released in multiple versions, so it is also called the version control model
Testing is done in every cycle before moving on to the next cycle
The customer can use the software after every module/cycle

Disadvantages:
Requirement changes are not allowed in the middle of a cycle
There is no testing in the Requirement & Design phases

V model

Phases:
1) Requirement analysis---Verification
2) System Design
3) Architecture design
4) Module design
5) Coding
6) Unit testing---Validation
7) Integration testing
8) System testing
9) Business/User Acceptance testing (BAT/UAT)
The BRS document is the base for UAT (testers from the client side and a few testing team members are involved)
The SRS document is the base for System testing (purely the testing team is involved)
High-level and low-level design documents are the base for Integration testing (developers are involved)
The code itself (small pieces of code) is the base for Unit testing (developers perform this testing)
Advantages:
• This is a highly-disciplined model and Phases are completed one at a time.
• Works well for smaller projects where requirements are very well
understood.
• Simple & easy to understand and use.
• Easy to manage due to the rigidity of the model. Each phase has specific
deliverables and a review process.
Disadvantages:
• Not a good model for complex projects.
• Poor model for long and ongoing projects.
• Not suitable for the projects where requirements are at a moderate to high
risk of changing.
• Once an application is in the testing phase, it is difficult to go back and
change a functionality.
• No working software is produced until late during the life cycle.
Static testing: Testing the project-related documents is called Static testing or document-level testing.
Some techniques to perform Static testing are:
1) Review---conducted on documents to ensure correctness & completeness---requirement reviews, design reviews, code reviews, test plan reviews, test case reviews, etc.
2) Walkthrough---an informal review without a plan; it can be done at any point of time.
The author reads the document or code and discusses it with peers.
Walkthroughs do not have any Minutes of Meeting (MoM).
3) Inspection---the most formal review type, where the author, project managers, and team members are required.
An inspection has a proper schedule, communicated well in advance to the concerned Dev/Testing teams.
Dynamic testing: Testing the actual software is Dynamic testing.
Unit, Integration, System, and BAT/UAT testing types come under Dynamic testing.
Verification:
Verification checks whether we are building the product right (does the work conform to the specifications?)
Focuses on documentation
Verification typically involves Reviews, Walkthroughs & Inspections
BRS, SRS, high-level and low-level design documents come under verification
So, Static testing belongs to Verification

Validation:
Validation checks whether we have built the right product (does it meet the user's needs?)
Takes place after verification is completed
Focuses on the actual software that has been developed
Validation typically involves actual testing of the software
Unit, Integration, System & UAT testing come under this
So, Dynamic testing belongs to Validation

QA, QC & QE
QA is Quality Assurance
QC is Quality Control
QE is Quality Engineering
QA vs QC
1) QA is process related: higher management defines the process and makes sure the respective teams follow it
QC is the actual testing of the software; QC people are involved during the testing phase
2) QA focuses on building quality
QC focuses on delivering quality
3) QA is for preventing defects
QC is for detecting defects
4) QA is process oriented
QC is product oriented
5) QA is followed for the entire life cycle
QC is involved only during the testing phase of the SDLC process

QE: Quality Engineering
It is the advanced form of QC.
Automation testers who write code to automate test scripts are called QEs.

Whitebox testing: Testing carried out with knowledge of the code is Whitebox testing. It is done by the developers. Ex: Unit testing & Integration testing

Black box testing: Testing carried out without knowing the code or the logic behind it is Black box testing. The testing team performs this testing. Ex: System testing & UAT
Levels of testing:
1) Unit testing
2) Integration testing
3) System testing
4) User Acceptance testing (UAT)
Unit testing:
1) A Unit is a single component or module of a software
2) Unit testing is conducted on a single program or a single module
3) Unit testing is a white box testing technique
4) Unit testing is conducted by the Developers
Unit testing techniques:
1) Basis path testing: ensuring every line (every independent path) of the code is executed at least once
2) Control structure testing: under this we have two sub-techniques, conditional coverage & loop coverage
3) Mutation testing: making a small change (a mutation) to the code and checking whether the existing tests detect it
Ex for conditional coverage (as runnable Python):
a = 10
b = 20
if a > b:
    print("a is largest")
else:
    print("b is largest")
Ex for loop coverage: printing the numbers from 1 to 100
i = 1
max_value = 100
while i <= max_value:
    print(i)
    i = i + 1
Ex for mutation testing:
# assuming user and password hold the submitted credentials
if user == 'sam' and password == 'abc':
    print("allow login")
else:
    print("do not allow login")
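To make the idea concrete, here is a minimal runnable sketch (the function names and credentials are illustrative, not from the original): we deliberately create a "mutant" of the login check and verify that at least one existing test fails on it, i.e., the mutant is "killed".

# Original logic under test
def can_login(user, password):
    return user == 'sam' and password == 'abc'

# A "mutant": the same logic with one small change (== flipped to !=)
def can_login_mutant(user, password):
    return user != 'sam' and password == 'abc'

# A good test suite should kill the mutant: at least one test that
# passes on the original must fail on the mutant
def run_tests(login_fn):
    return (login_fn('sam', 'abc') is True        # valid credentials
            and login_fn('bob', 'abc') is False)  # invalid user

print(run_tests(can_login))         # True  -> original passes
print(run_tests(can_login_mutant))  # False -> mutant is killed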
Integration testing:
1) Integration testing is performed on 2 or more modules
2) Integration testing focuses on checking the data communication between
the modules/systems
3) Integration testing is a white box technique
Types of Integration testing:
1) Incremental Integration testing
2) Non-incremental integration testing
Incremental integration testing:
a) Top-down approach: incrementally adding modules and testing the data flow between them; each module added is a child of the previous module
b) Bottom-up approach: incrementally adding modules and testing the data flow between them; each module added is a parent of the previous module
c) Sandwich/Hybrid approach: a combination of the top-down & bottom-up approaches
2) Non-incremental integration testing---adding all the modules at once and then testing the data flow between them
We don't use this much because:
a) We might miss the data flow between some of the modules
b) If we find a defect, we can't easily identify its root cause
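As a rough illustration of incremental integration (all module and function names here are hypothetical), a stub can stand in for a child module that is not yet developed while we test the data flow from the parent module:

# Parent module under test: composes a mail and hands it to the child module
def compose_and_send(message, send_fn):
    mail = {"body": message.strip(), "status": "composed"}
    return send_fn(mail)  # data flows from Compose to Send

# Stub standing in for the real "Sent Mail" module (not yet developed)
def send_stub(mail):
    mail["status"] = "sent"
    return mail

# Integration check: verify the data communication between the two modules
result = compose_and_send("  Hello  ", send_stub)
assert result == {"body": "Hello", "status": "sent"}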

System testing:
High-level idea about system testing:
Testing the overall functionality of the application with respect to the client requirements
It is a black box testing technique
This testing is conducted by the testing team
System testing starts after completion of unit and integration testing
Before conducting system testing, the tester should know the customer requirements very clearly

System testing focuses on the aspects below:
a) User Interface testing (GUI)
b) Functional testing
c) Non-functional testing---load testing, performance testing, compatibility testing, security testing, etc.
d) Usability testing---verifying the user manual, which documents every step of the software's functionality

User Acceptance Testing (UAT)

After completion of system testing, the UAT team conducts acceptance testing at two levels:
a) Alpha testing --- users test in a development, testing, or demo environment (non-production environments)
b) Beta testing --- users do some basic testing in the production/customer environment
System testing: testing the overall application/software with respect to the customer requirements

We have different types under system testing:
1) GUI testing
2) Usability testing
3) Functional testing
4) Non-functional testing

GUI testing: the process of testing the user interface of an application.
A graphical user interface includes all the elements such as menus, checkboxes, buttons, colors, fonts, sizes, icons, images, etc.
GUI testing checklist:
1) Testing the size, position, width, height of the elements
2) Testing of the error messages that are getting displayed
3) Testing the different sections of the screen
4) Testing of the font whether it is readable or not
5) Testing of the screen in different resolutions with the help of zooming in &
zooming out
6) Testing the colors of the font
7) Testing whether the image has good clarity
8) Testing the alignments
9) Testing the spellings
10) Testing whether the interface is attractive
11) Testing of the scrollbars according to the size
12) Testing of the disabled fields if applicable
13) Testing the size of the images
14) Testing the headers if they are properly aligned
15) Testing the color of the hyperlink
16) Testing UI elements like button, textbox, text area, check box, radio buttons,
drop downs, links etc
-----------------------------

Usability testing: Checks how easily the end user can understand and operate
the application
Testing the help menu documents
-----------------------------
Functional testing:
1) Object properties testing
2) Database testing
3) Error handling
4) Calculation/Manipulations testing
5) Links existence & Links execution
6) Cookies & sessions
Functionality is nothing but the behaviour of the application/software. Testing
whether the application is behaving as per the customer requirement or not

Object properties testing: checking the properties of the objects present in the application.
An object is an element in the application; every element on a web page has properties.
Ex: radio buttons, dropdown menus, message boxes (enabled or disabled), multiple-choice fields

Database testing: checking the DB operations with respect to user operations.
We do DB testing by running SQL queries and fetching the data from the backend.
Data is stored in the form of tables in the DB.
Ex: employee info is stored in the Employees table.
Data Manipulation Language (DML) operations are the base for this testing: select, insert, update, delete.
DB testing includes both black box and white box testing, which together are called grey box testing.
Other things that we test in DB testing:
1) Table & column level validations (column type, column length, number of columns in a table)
2) Relations between the tables (normalization)
3) Functions
4) Procedures
5) Views, etc.
All of these come under white box testing.
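A minimal, self-contained sketch of this kind of backend check using Python's built-in sqlite3 module (the table and data are invented for illustration): perform the user-level operation, then run a SQL query to confirm the backend reflects it.

import sqlite3

conn = sqlite3.connect(":memory:")  # throwaway in-memory DB
conn.execute("CREATE TABLE Employees (id INTEGER, name TEXT)")

# Simulate the application inserting a record (the user operation)
conn.execute("INSERT INTO Employees VALUES (?, ?)", (1, "Sam"))
conn.commit()

# DB testing: query the backend and compare with the expected result
row = conn.execute("SELECT name FROM Employees WHERE id = 1").fetchone()
assert row == ("Sam",), "Employee record was not stored correctly"
conn.close()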
---------------------------------
Error handling: mainly focuses on error messages.
The tester verifies the error messages shown while performing incorrect actions on the application.
Error messages should be readable.
Errors should be in simple language that the user can understand.
All these error messages will be clearly specified in the requirements.
Continuation of System testing
Calculations/Manipulations testing: the tester should check the calculations by manipulating the input values.
----------------------------------
Links existence & links execution: where exactly the links are placed is links existence.
Whether clicking a link navigates to the correct page is links execution.

Three types of links:
Internal links: navigate to another section of the same page
External links: navigate to another page or another website
Broken links: the link exists but does not take you anywhere when clicked
----------------------------------
Cookies & Sessions: this testing is performed only on web applications.
When you browse an online application, the browser acts as an interpreter; the actual data lives on the server. While doing this, the browser is intelligent enough to remember data.
Ex: if you view some products on a website, they are still shown after you move to another web page, because of cookies.
Cookies are temporary files created by the browser that contain data about your use of an online application.
Testing these cookies is called cookies testing.

Sessions are created on the server side; they are time slots allocated by the server. Sessions expire after some idle time.
The main reason to use them is as a security mechanism.
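As a hedged sketch using the third-party requests library (the URLs are placeholders), cookie testing usually inspects what the server sets and confirms it is sent back on subsequent requests:

import requests

session = requests.Session()  # persists cookies across requests
session.get("https://example.com/login")  # placeholder URL

# Inspect the cookies the server set for this session
for cookie in session.cookies:
    print(cookie.name, cookie.value, cookie.expires)

# A follow-up request automatically sends the stored cookies back,
# which is what keeps the server-side session alive
session.get("https://example.com/account")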
-----------------------------------
While Functional testing is based on customer requirements,
Non-functional testing is based purely on customer expectations once the functionality is stable.
1) Performance testing: the speed of the application, i.e., how well the application responds when many users are using it. Performance is usually tested on web-based applications. (A toy load-test sketch follows this list.)
a) Load: gradually increase the load on the application by increasing the number of users and verify the response time
b) Stress: suddenly increase/decrease the load on the application and check its speed
c) Volume: how much data the application is able to handle
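A toy sketch of the load idea in plain Python (the URL is a placeholder; real load tests normally use dedicated tools): step up the number of concurrent users and record the response times.

import time
from concurrent.futures import ThreadPoolExecutor
import urllib.request

URL = "https://example.com"  # placeholder application under test

def one_user():
    start = time.time()
    urllib.request.urlopen(URL).read()
    return time.time() - start

# Load test: step the user count up and watch the average response time
for users in (1, 5, 10):
    with ThreadPoolExecutor(max_workers=users) as pool:
        times = list(pool.map(lambda _: one_user(), range(users)))
    print(users, "users -> avg response:", sum(times) / len(times), "s")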

2) Security testing: how well the application provides security, i.e., how secure it is. The main focus is on authentication and authorisation.
Authentication: testing whether a user is valid or not
Authorisation/Access control: once a valid user logs in, they have only the access they were granted; this tests the permissions of a valid user

3) Recovery testing: recovering lost data is recovery testing. It checks the system's change from an abnormal state back to normal (last known good configuration).

4) Compatibility testing: verifying that the application is compatible with different environments. There are 3 types:
a) Forward compatibility: checking whether the software is able to upgrade from an older version to a newer version
b) Backward compatibility: checking whether the new version is compatible with older versions of the OS
c) Hardware compatibility: checking whether the application can be installed on different hardware configurations; this is also called configuration testing

5) Installation testing: testing the installation process, including whether uninstallation happens properly.
6) Sanitation/Garbage testing: if some extra functionality beyond the SRS requirements appears, it is a bug. We ask the developer to remove the extra functionality, as it is not specified in the SRS document.

--------------------------------------------------------------------------
Summary of the differences between Functional & Non-functional testing:
1) Functional testing validates the functionality of the software;
Non-functional testing verifies the performance, reliability, and security of the software
2) Functional testing describes what the software does;
Non-functional testing describes how the software works
3) Functional testing concentrates on user requirements;
Non-functional testing concentrates on user expectations
4) Functional testing is performed first;
Non-functional testing is performed after the functional testing is finished
----------------------------------------------------------------------------

Software testing terminology
1) Regression testing: testing conducted on a modified build to make sure there is no impact on existing functionality from changes such as adding/deleting/modifying features.
A) Unit regression testing: testing only the changes/modifications done by the developer
B) Regional regression testing: testing the modified module along with the impacted modules
-- Impact analysis meetings are conducted with Dev & QA to identify the impacted modules
C) Full regression: testing the main feature & the remaining parts of the application
-- Ex: when the developer has made changes in many modules, instead of identifying each impacted module we perform one round of full regression

In simple terms, Regression testing is conducted by the tester to make sure the existing functionalities work as expected.

2) Re-testing: testing the bug fixed by the developer to make sure the fix works as expected is called Re-testing.
-- The tester will close the bug if it works fine; otherwise he will reopen it
-- It confirms that the bugs reported in an earlier build are fixed properly in the current build

3) Smoke & Sanity testing:
Smoke & Sanity testing come into the picture after a build is released to the QA environment.
a) Smoke testing is done to make sure the build we received is testable/stable;
Sanity testing is done during the release phase to check the main functionalities without going deep
b) Smoke testing is performed by both the Dev & Testing teams;
Sanity testing is performed by testers
c) In smoke testing, the build might be stable or unstable;
In sanity testing, the build should be relatively stable
d) Smoke testing is done at the start of a new build;
Sanity testing is done on stable builds
e) Smoke testing is a part of basic testing;
Sanity testing is a part of regression testing
f) Smoke testing is done whenever we get a new build for testing;
Sanity testing is planned when there is not enough time for in-depth testing

4) Exploratory testing:
a) Explore the application, understand it completely, and test it
b) Understand the application, identify all possible scenarios, and then start testing
c) We perform exploratory testing when the application has no proper requirement documentation
Drawbacks:
a) Time consuming
b) Bugs can be missed, since there is no requirement to compare against
c) The tester might mistake a feature for a bug, or a bug for a feature, as there is no requirement

5) Adhoc testing:
a) Testing the application randomly, without any test cases or requirement documents
b) An informal testing type whose aim is to break the system
c) Testers should have knowledge of the application even though there are no requirements/TCs
d) It is an unplanned activity, hence the name Adhoc testing

6) Monkey/Gorilla testing:
a) Testing the application randomly, without any test cases or requirement documents
b) The tester does not have knowledge of the application
c) Suitable for gaming applications

Differences between Adhoc, Monkey & Exploratory testing:
a) Documentation: none for Adhoc, Monkey, or Exploratory testing
b) Plan: none for any of the three
c) Application knowledge: in Adhoc testing the tester should know the application functionality; in Monkey and Exploratory testing the tester does not know it
d) All three are random testing
e) Intention: Adhoc aims to break the application and find corner defects; Monkey aims to break the application and find corner scenarios; Exploratory aims to learn or explore the functionality of the application
f) Applicability: Adhoc suits any application; Monkey suits gaming applications; Exploratory suits any application that is new to the testers

7) Positive testing:
Testing the application with valid inputs is called Positive testing.
We check whether the application behaves as expected with valid inputs.

8) Negative testing:
Testing the application with invalid inputs is called Negative testing.
We check whether the application behaves as expected with invalid inputs.
Positive vs Negative TCs:

Requirement: a text box accepts 6-20 characters, alphabets only

Positive TCs:
--Text box accepts 6 characters
--Text box accepts 20 characters
--Text box accepts any length between 6 and 20 characters
--Text box accepts all alphabets
Negative TCs:
--Text box should not accept fewer than 6 characters
--Text box should not accept more than 20 characters
--Text box should not accept special characters
--Text box should not accept numerals/digits
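These cases translate directly into automated checks. A minimal sketch follows; the validation function is hypothetical and exists only to make the test cases concrete:

def is_valid(text):
    # Hypothetical implementation of the requirement:
    # 6-20 characters, alphabets only
    return 6 <= len(text) <= 20 and text.isalpha()

# Positive test cases (valid inputs should be accepted)
assert is_valid("abcdef")       # exactly 6 characters
assert is_valid("a" * 20)       # exactly 20 characters
assert is_valid("HelloWorld")   # length between 6 and 20

# Negative test cases (invalid inputs should be rejected)
assert not is_valid("abc")      # fewer than 6 characters
assert not is_valid("a" * 21)   # more than 20 characters
assert not is_valid("abc!def")  # special characters
assert not is_valid("abc123")   # digits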

End-to-end testing:
Testing the overall functionality of the system, including the data integration among all modules, is called E2E testing.
Ex: Register → Login → Add payee → (a) Delete payee (b) Edit payee → Transfer funds → Logout

9) Globalization/Internationalization/I18N testing:
Performed to ensure the system or application can run globally.
Different aspects of the application are tested to make sure it supports every language.
It tests that the different currency formats, mobile number formats, and address formats are supported by the application.
Ex: Amazon supports many languages, address formats, mobile number formats, and currencies.

10) Localization testing:
Performed to check the application for a specific region.
It supports only the specific language usable in that region.
It tests that the specific currency format, mobile number format, and address format are supported by the application.
Ex: qq.com

Test Design Techniques

Test design techniques help us design better test cases.
They reduce the number of TCs to be executed.
Test design techniques are used to prepare test data, with two goals:
1) Reduce the data
2) Increase coverage
We use 5 techniques under test design/test data design:
a) Equivalence class partitioning
b) Boundary value analysis
c) Decision table
d) State transition
e) Error guessing

a) Equivalence Class Partitioning (ECP)

Checking values by classifying the data into multiple classes.
We partition the data into classes, select one representative value per class, and test with it. This reduces the number of test cases and saves testing time.
Ex: the requirement says the user should be able to enter only the numbers 1-500 in a text box.
Naive test data would mean entering every number from 1 to 500, which would create 500 TCs.
By dividing the values into classes with equivalence class partitioning, we can reduce the number of TCs:
-100 to 0 ---> -60 (Invalid)
1-100 ---> 75 (Valid)
101-200 ---> 164 (Valid)
201-300 ---> 241 (Valid)
301-400 ---> 379 (Valid)
401-500 ---> 499 (Valid)
501-600 ---> 566 (Invalid)

One more ex: a text box that allows only alphabets
A-Z ---> Valid (XGK)
a-z ---> Valid (tdj)
Special characters ---> Invalid (!@#)
Numbers ---> Invalid (123)
Spaces ---> Invalid (xY Z)
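A minimal sketch of how the 1-500 classes become a small test set (the validator below is a hypothetical implementation of the rule): pick one representative value per partition instead of all 500 inputs.

def accepts(number):
    # Hypothetical rule from the requirement: only 1-500 is allowed
    return 1 <= number <= 500

# One representative value per equivalence class
partitions = [
    (-60, False),   # -100 to 0 -> invalid
    (75,  True),    # 1-100     -> valid
    (164, True),    # 101-200   -> valid
    (241, True),    # 201-300   -> valid
    (379, True),    # 301-400   -> valid
    (499, True),    # 401-500   -> valid
    (566, False),   # 501-600   -> invalid
]
for value, expected in partitions:
    assert accepts(value) == expected, value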

b) Boundary Value Analysis (BVA): we focus on the boundaries of the value range.

Ex: a text box to enter an age between 18 and 50.
Here we test just the boundaries.

18 is the Min value & 50 is the Max value.

We test with only 6 values:
Min=18 (Pass)    Max=50 (Pass)
Min-1=17 (Fail)  Max-1=49 (Pass)
Min+1=19 (Pass)  Max+1=51 (Fail)
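The same six values expressed as a small check in code (the age validator is a hypothetical implementation of the requirement):

def valid_age(age):
    return 18 <= age <= 50  # hypothetical: accepts 18-50

# Boundary value analysis: Min, Min-1, Min+1, Max, Max-1, Max+1
boundary_cases = {17: False, 18: True, 19: True,
                  49: True, 50: True, 51: False}
for age, expected in boundary_cases.items():
    assert valid_age(age) == expected, age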

Note: we use the Equivalence Class Partitioning & Boundary Value Analysis techniques for input domain testing.

Input domain testing: the values are verified in text boxes/input fields, so we use ECP & BVA to prepare the test data.

c) Decision Table: used when we have many conditions and their corresponding actions.
The decision table is also called a cause-effect table.
In this technique we deal with combinations of inputs.
To identify the TCs with a decision table, we consider conditions and actions.

Ex: transferring money online to an account that has already been added.

Conditions for transferring money:
--Account already approved/added
--OTP matched
--Sufficient amount in the account

Actions performed:
--Transfer the money
--Show an "insufficient amount" message
--Block the transaction in case of a suspicious transaction

                                            TC1      TC2      TC3      TC4      TC5
Condition1: Account already approved        TRUE     TRUE     TRUE     TRUE     FALSE
Condition2: OTP matched                     TRUE     TRUE     FALSE    FALSE    X
Condition3: Sufficient money in account     TRUE     FALSE    TRUE     FALSE    X
Action1: Transfer money                     Execute
Action2: Show "Insufficient Amount" msg              Execute
Action3: Block suspicious transaction                         Execute  Execute  X

TC1 is a valid (positive) TC
TC2, TC3, TC4 are negative TCs
TC5 is an invalid TC
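The table maps directly onto conditional logic. A rough sketch (the function and its message strings are illustrative):

def transfer(approved, otp_matched, sufficient_funds):
    if not approved:
        return "invalid transaction"              # TC5
    if not otp_matched:
        return "blocked: suspicious transaction"  # TC3, TC4
    if not sufficient_funds:
        return "insufficient amount"              # TC2
    return "money transferred"                    # TC1

assert transfer(True,  True,  True)  == "money transferred"
assert transfer(True,  True,  False) == "insufficient amount"
assert transfer(True,  False, True)  == "blocked: suspicious transaction"
assert transfer(False, None,  None)  == "invalid transaction"  # None = don't care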

d) State Transition:
--In the state transition technique, a change in the input conditions changes the state of the application
--This technique allows the tester to test the behavior of an application
--The tester performs this by entering various input conditions in a sequence
--The tester provides both positive and negative input values to evaluate the system behavior

Ex: a login page of an application that locks the user after 3 wrong password attempts

State   Login attempt      Correct PW   Incorrect PW
S1      First attempt      S4           S2
S2      Second attempt     S4           S3
S3      Third attempt      S4           S5
S4      Home page
S5      Display message "Account locked. Please contact Admin"
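A compact sketch of the same state machine (the state names follow the table above; the dictionary encoding is just one possible representation):

# (current state, input) -> next state, mirroring the table above
transitions = {
    ("S1", "correct"): "S4", ("S1", "incorrect"): "S2",
    ("S2", "correct"): "S4", ("S2", "incorrect"): "S3",
    ("S3", "correct"): "S4", ("S3", "incorrect"): "S5",
}

def login_attempts(inputs):
    state = "S1"
    for pw in inputs:                  # e.g. ["incorrect", "correct"]
        state = transitions[(state, pw)]
        if state in ("S4", "S5"):      # home page or account locked
            break
    return state

assert login_attempts(["incorrect", "incorrect", "incorrect"]) == "S5"
assert login_attempts(["incorrect", "correct"]) == "S4"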

e) Error guessing:
Error guessing is a testing technique used to find bugs in a software application based on the tester's prior experience.
In error guessing we do not follow any rules; it depends on the tester's analytical skills and experience.
Ex: submitting a form without completing all the fields
Entering invalid values, such as alphabets in an age field
STLC: Software Testing Life Cycle
STLC describes the complete testing process.
Phases in STLC:
a) Requirement analysis
b) Test planning
c) Test design
d) Test execution
e) Bug/defect reporting & tracking
f) Test closure
Flow of the STLC phases: Requirement analysis → Test planning → Test design → Test execution → Defect reporting & tracking → Test closure

Test plan: a test plan is a detailed document that catalogues the test strategy, objectives, schedule, estimations, deadlines, and the resources required for completing a particular project.
Traceability matrix: a traceability matrix (software testing traceability matrix) is a document that traces and maps the relationship between two baseline documents: one containing the requirement specifications and the other containing the test cases.

STLC phases at a glance (Phase / Input / Activities / Responsibility / Outcome):

Phase: Test Planning
  Input: Project plan, Functional requirements
  Activities: Identify the resources; decide what to test, how to test, and when to test; team formation; test estimation; preparation of the test plan; reviews on the test plan; test plan sign-off
  Responsibility: Test Lead/Team Lead (70%), Test Manager (30%)
  Outcome: Test plan document

Phase: Test Designing
  Input: Project plan, Functional requirements, Test plan, Design docs, Use cases
  Activities: Preparation of test scenarios; preparation of test cases; reviews on test cases; traceability matrix; test case sign-off
  Responsibility: Test Lead/Team Lead (30%), Test Manager (10%), Test Engineers (60%)
  Outcome: Test cases document, Traceability matrix

Phase: Test Execution
  Input: Functional requirements, Test plan, Test cases, Build from the development team
  Activities: Executing test cases; preparation of the test report; identifying defects/bugs
  Responsibility: Test Lead/Team Lead (30%), Test Engineers (70%)
  Outcome: Status/Test reports

Phase: Defect Reporting & Tracking
  Input: Test cases, Test reports
  Activities: Preparation of the defect report; reporting defects to Dev
  Responsibility: Test Lead/Team Lead (20%), Test Engineers (80%)
  Outcome: Defect report

Phase: Test Closure/Sign-off
  Input: Test reports, Defect reports
  Activities: Analysing test reports; analysing bug reports; evaluating exit criteria
  Responsibility: Test Lead/Test Manager (70%), Test Engineers (30%)
  Outcome: Test summary reports

Test Plan contents: A Test Plan is a document that describes the Test scope,
Test strategy, Objectives, Schedule, Deliverables and Resources to perform
testing for a software product.
Test plan template contents:
a) Overview
b) Scope----Inclusions, Test Environments, Exclusions
c) Test Strategy
d) Defect Reporting procedure
e) Roles/Responsibilities
f) Test schedules
g) Test deliverables
h) Pricing
i) Entry/Exit criteria
j) Suspension & Resumption criteria
k) Tools
l) Risks & Mitigations
m) Approvals

Use Case, Test Scenario & Test Case

Use Case:
--A use case describes the requirement
--A use case contains three items:
Actor: the user (a single person or a group of people) interacting with a process
Action: what is done to reach the final outcome
Outcome: the successful user outcome
Test scenario:
A possible area to be tested (What to test)
Test case: (How to test)
Step-by-step actions to be performed to validate the functionality of the application
A test case contains test steps, the expected result & the actual result

Difference between Use Case & Test Case

Use case: describes a functional requirement; prepared by the BA
Test case: describes the test steps/procedure; prepared by the Test Engineer based on the use case/SRS document

Test Scenario vs Test Case

--A test scenario is "what to be tested" and a test case is "how to be tested"
--Ex: Test scenario: testing the functionality of the Login button
Test case 1: click the Login button without entering a user name & password
Test case 2: click the Login button entering only a user name
Test case 3: click the button entering a wrong password, etc.

Test Suite: a group of test cases that belong to the same category
Ex: grouping the TCs related to regression testing, smoke testing, E2E testing, etc.

Test case: a test case is a set of actions executed to validate a particular feature or functionality of a software application.
Contents of Test case:
a) Test Case ID
b) Test Case Title
c) Description
d) Pre-condition/pre-requisite
e) Requirement ID
f) Steps
g) Expected Result
h) Actual Result
i) Test Data
Test case template example:
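(The original template image is not reproduced here; as an illustration only, with a made-up feature, IDs, and data, one filled-in row could look like:)
Test Case ID: TC_Login_001
Test Case Title: Verify login with valid credentials
Description: Checks that a registered user can log in
Pre-condition: User account exists
Requirement ID: REQ_Login_01
Steps: 1. Open the login page  2. Enter a valid user name & password  3. Click Login
Expected Result: User lands on the home page
Actual Result: (filled in after execution)
Test Data: user = sam, password = ***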

Requirement Traceability Matrix:

--The RTM describes the mapping of requirements to TCs
--The main purpose of the RTM is to verify that all requirements are covered by TCs, so that no functionality is missed during software testing
RTM parameters include:
--Requirement ID
--Requirement description
--Test case IDs
Sample RTM template:
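(The original sample image is not included; an illustrative layout, with made-up requirement IDs and descriptions, could be:)
Requirement ID   Requirement Description                 Test Case IDs
REQ_001          User should be able to log in           TC_001, TC_002, TC_003
REQ_002          User should be able to reset password   TC_004, TC_005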

Test Environment:
A test environment is a platform specifically built for test case execution on the software product.
It is created by integrating the required software and hardware along with proper network configurations.
The test environment simulates the production environment.
Another name for the test environment is the Test Bed.
Ex: www.gmail.com -- Production environment
www.qa.gmail.com -- Test/QA environment
www.dev.gmail.com -- Dev environment

Test Execution:
During this phase the testing team carries out the testing based on the test plan & the test cases prepared.
Entry criteria: test cases, test data & test plan

Activities:
Test cases are executed based on the test planning
The status of TCs is marked as Pass, Fail, No Run, or Blocked
Test results are documented and defects are logged for failed cases
All blocked and failed TCs are assigned bug IDs
Retesting happens once the defects are fixed
Defects are tracked until closure
Deliverables: defect & test case execution reports with completed results
Guidelines for test execution:
The build being deployed to the QA environment is the most important part of the test execution cycle
Test execution is done in the QA environment
Test execution happens in multiple cycles
The test execution phase consists of executing the test cases plus the test scripts (automation)

Defects/Bugs:
Any mismatch or deviation in the functionality of an application is called a bug/defect.
During test execution, test engineers report deviations as defects to Dev through templates or tools.
Defect reporting tools:
a) HP ALM
b) Rally
c) JIRA
d) Bugzilla, etc.
Defect report contents:
Defect ID: unique identification number for the defect
Defect description: detailed information about the defect, including the module in which it was found
Version: version of the software in which the defect was found
Steps: detailed steps, with screenshots, so the developer can reproduce the defect
Date raised: date when the defect was raised
Reference: references to documents such as requirements, design, architecture, or screenshots of the error that help in understanding the defect
Detected by: name/ID of the tester who raised the defect
Status: status of the defect (more on this in the defect life cycle)
Fixed by: name/ID of the developer who fixed the defect
Date closed: date when the defect was closed
Severity: describes the impact of the defect on the application (High/Medium/Low)
Priority: describes the urgency with which the defect should be fixed
Defect Severity:
--Severity describes the seriousness of the defect and its impact on the business flow
Defect severity can be categorized into 4 classes:
--Blocker (show stopper): nothing can proceed further
--Critical: the main functionality is not working; the customer's business workflow is broken and they cannot proceed further.
Ex: fund transfer is not working in net banking
Ordering a product in an e-commerce application is not working
--Major: causes undesirable behavior, but the feature or the application is still functional.
Ex: after sending an email, there is no confirmation
After booking a cab, there is no confirmation
--Minor: does not cause any major breakdown of the system
Ex: look & feel issues, spellings, alignments
Defect Priority:
Priority describes the importance of the defect and the order in which defects should be fixed.
Defect priority can be classified into 3 classes:
--P0 (High): the defect must be resolved immediately, as it affects the system severely; the system cannot be used until it is fixed
--P1 (Medium): it can wait until a new version/build is created
--P2 (Low): the developer can fix it in a later release
Examples of severity & priority combinations:
Low priority & low severity: a spelling mistake on a page not frequently visited by users
Low priority & high severity: the application crashes in some corner cases
High priority & low severity: a slight color change in the logo or a spelling mistake in the company name
High priority & high severity: an issue with the login functionality (the user is not able to log in to the application)
High severity & low priority: "web page not found" when the user clicks a link (users generally do not visit that page)

Defect Resolution:
After receiving the defect report from the testing team, the Dev team conducts a review meeting to fix the defects, then sends the resolution type back to the testing team for further communication.
Resolution types:
Accept
Reject
Duplicate
Enhancement---until we get confirmation from the client, we mark the defect status as Deferred
Need more information
Not reproducible
Fixed
Backlogged
Defect life cycle (typical flow): New → Assigned → Open → Fixed → Retest → Verified → Closed, with Reopened when the fix fails retesting.
QA/Testing Activities:
1) Understanding the requirements and functional specifications of the application
2) Identifying the required test scenarios
3) Designing TCs to validate/test the application
4) Setting up the test environment (Test Bed)
5) Executing TCs on the application
6) Logging test results (no. of TCs passed/failed)
7) Defect reporting & tracking
8) Retesting fixed defects from previous builds
9) Performing various types of testing on the application (sanity, regression, E2E, smoke, exploratory, monkey testing, etc.)
10) Reporting the status of assigned tasks to the test lead
11) Participating in regular team meetings
12) Creating automation scripts
13) Providing recommendations on whether the build is ready for production deployment

7 principles of software testing:

1) Start software testing at the early stages, i.e., from the moment you get the requirements
2) Test the software in order to find defects
3) It is practically impossible to deliver bug-free software to the customer
4) Exhaustive testing is not possible: we cannot test every combination of inputs, so we select test data wisely instead of testing everything
5) Testing is context dependent: we decide which types of testing to conduct based on the type of application
6) Pesticide paradox: we have to modify the TCs in every cycle/release in order to find more defects; executing the same TCs every time will not find new defects
7) Defect clustering: some modules contain most of the defects. With experience, we can identify such risky modules; roughly 80% of the defects are found in only 20% of the modules
