
Manual Testing Notes

The goal of a software tester is to find bugs, find them as early as possible, and make
sure they get fixed.

WHY IS TESTING NECESSARY?


If no level of testing can declare that there are no defects in the product, then why is testing required at all?

Development people assume that whatever they have developed is as per the customer requirements and will always work. But it is imperative to create real-life scenarios and actually execute the product at each level of software building (including the system level) to assess whether it really works or not.

Developers may have excellent coding skills, but integration issues can still be present when different units do not work together, even though each works independently.

One must bring the individual units together to make the final product; defects are possible when the sources are developed by people sitting at different places.

The primary role of software testing is not to demonstrate the correctness of the software product, but to expose hidden defects so that they can be fixed. Testing is done to protect common users from any failure of the system during usage.

Testing as a process of demonstrating that errors are not present in the product is the approach used in acceptance testing: if the application meets the acceptance criteria, then it must be accepted by the customer.

Testing gives the number of defects present, which indirectly gives a measurement of software quality. A higher number of defects indicates bad software and bad development processes.

A software bug occurs when:

1. The software doesn't do something that the product specification says it should do.
2. The software does something that the product specification says it shouldn't do.
3. The software does something that the product specification doesn't mention.

The number one cause of software bugs is the specification.

Bugs are caused for many reasons, but the main one is the specification. Specifications are the largest bug producer: they change constantly, or they are not communicated well to the entire development team.

Planning software is vitally important. If it is not done correctly, bugs will be created.

The next largest source of bugs is the design. If the software is not designed well, bugs occur: the worse the design, the more the bugs.

The Cost of Bugs

The costs are logarithmic: they increase tenfold as time increases. A bug found and fixed during the early stages, when the specification is being written, might cost next to nothing, or 10 cents in our example. The same bug, if not found until the software is coded and tested, might cost $1 to $10. But if a customer finds it, the cost could easily top $100.

Example: the root cause of the problem was that the software wouldn't work on a very popular PC platform. If, in the early specification stage, someone had researched which PCs were popular and specified that the software needed to be designed and tested to work on those configurations, the cost of that effort would have been almost nothing.

If that didn't occur, a backup would have been for the software testers to collect samples of the popular PCs and verify the software on them. They would have found the bug, but it would have been more expensive to fix, because the software would have to be debugged, fixed, and retested.

The goal of a software tester is to find bugs, find them as early as possible, and make
sure they get fixed.

Software testing: the process of testing an application to find errors in it.

Checking that the software is OK.
The goal of a software tester is to find bugs.
Verifying and validating that a software application is free of defects.

Testing Types
1. Manual Testing: testing a software application manually, i.e., without using any automated tool or script.

2. Automation Testing: also known as test automation, this is when the tester writes scripts and uses other software to test the product. This process involves automating a manual process. Automation testing is used to re-run, quickly and repeatedly, test scenarios that were originally performed manually.
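As a small illustration (not part of the original notes), the sketch below shows how a manual check can be turned into a repeatable automated script. The calculate_discount function, the threshold, and the values are assumptions, and pytest is assumed as the test runner.

```python
# A hypothetical function under test (assumed for illustration only).
def calculate_discount(order_total):
    """Return the discount for an order: 10% above 1000, otherwise none."""
    return order_total * 0.10 if order_total > 1000 else 0.0

import pytest

# Each case that a manual tester would verify by hand is encoded once
# and can be re-run on every new build.
@pytest.mark.parametrize("order_total, expected", [
    (500, 0.0),      # below the threshold: no discount
    (1000, 0.0),     # exactly at the threshold: still no discount
    (2000, 200.0),   # above the threshold: 10% discount
])
def test_calculate_discount(order_total, expected):
    assert calculate_discount(order_total) == expected
```

Running `pytest` re-executes all of these scenarios quickly and repeatedly, which is the point of test automation.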

Static Testing Techniques

Analysis of a program carried out without executing the program.

Done during the verification process, i.e. before development.

About 85% of errors are found in the design phase.

Is code tested in static testing? No; the documentation is tested.

Software development starts, continues, and ends with documentation. Early documentation is used to define the software to be built. Later documentation covers software training, installation, and operation (user guides).

Static = not while running.

The primary goal of static testing is to reduce defects by reducing defects in the documentation from which the software is developed.

Review: a type of static testing, done before execution.

A review is a process or meeting during which a work product, or set of work products, is presented to managers, users, customers, or other interested parties for comment or approval.

Walkthrough Review:

 It is not a formal process.

 It is led by the author.

 The author guides the participants through the document according to his or her thought process, to achieve a common understanding and to gather feedback.

 Useful for people who are not from the software discipline and cannot easily understand the software development process.

Inspection Review:

It is the most formal review type.

It is led by trained moderators.

During an inspection, the documents are prepared and checked thoroughly by the reviewers before the meeting.

Informal Review:

Unplanned and undocumented.

Technical Review:
Documented || defined fault-detection process || includes peers and technical experts.

No management participation.
Static testing (before execution, finding faults):
Review | Walkthrough (not formal) | Inspection (formal) | Informal review (unplanned and undocumented) | Technical review (defined fault-detection process)

Dynamic Testing Techniques

The process of evaluating a system or component based upon its behaviour during execution.

Low level (programmer) = Unit testing + Integration testing → white box (structural) testing
High level (main testing, by the tester) = System testing + UAT → black box (functional) testing

Levels of Testing
1. Unit Testing: individual components of the software are tested. The purpose of this testing is to verify that each module works properly.
It focuses on the smallest unit of software design.
(Done by the developer, by feeding sample input and observing the sample output; see the example sketch below.)
E.g. in a program we check whether a loop, method, or function is working fine.

2. Integration Testing: individual units are combined and tested as a group (developer).
(i) Top-down
(ii) Bottom-up
(iii) Sandwich
(iv) Big-Bang
Main purpose of integration testing: to check that the modules communicate with each other as per the Data Flow Diagram (DFD) specified in the Technical Design Document (TDD).

3. System Testing: the whole application is tested (complete / integrated software is tested); done by testers.

4. Acceptance Testing: a level of software testing in which the software is tested for user acceptance.
UAT is done at the client location where the software is actually used.
→ Alpha Testing: done by testers in the company, in the presence of the customer.
→ Beta Testing: done by the customer to check that the software is OK and satisfies the requirements.

Unit testing + integration testing → white box testing | internal logic | developer
System testing + user acceptance testing (UAT) → black box testing | no need for code | functional | tester (the actual testing, done by testers)
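As an illustration (not from the original notes), here is what a developer-level unit test might look like, checking a single function in isolation with sample input and expected output. The add_item_price function and the values are hypothetical; the standard-library unittest module is used.

```python
import unittest

# Hypothetical unit under test: the smallest piece of the design checked alone.
def add_item_price(total, price):
    """Add a non-negative item price to a running cart total."""
    if price < 0:
        raise ValueError("price cannot be negative")
    return total + price

class AddItemPriceTest(unittest.TestCase):
    def test_adds_price_to_total(self):
        self.assertEqual(add_item_price(100.0, 25.0), 125.0)

    def test_rejects_negative_price(self):
        with self.assertRaises(ValueError):
            add_item_price(100.0, -5.0)

if __name__ == "__main__":
    unittest.main()
```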

Testing Types

Functional testing

is the process through which QAs determine whether a piece of software is acting in accordance with pre-determined requirements. It uses black-box testing techniques, in which the tester has no knowledge of the internal system logic. Functional testing is only concerned with validating whether a system works as intended.

Functional testing: testing what the system does.

Functional testing is a type of black-box testing: "does this actually work?"

The ultimate goal of functional testing is to ensure that the software works according to specifications and user expectations:
take input values
run test cases
compare actual and expected output

E.g. login functionality, registration functionality.
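As a small sketch (not part of the notes) of "compare actual and expected output", the login function below is a hypothetical stand-in for the login functionality; the account data and messages are made up.

```python
# Hypothetical login check used only to illustrate expected-vs-actual comparison.
def login(username, password):
    """Return 'Welcome' for the one valid account, otherwise an error message."""
    valid = {"sameer": "secret123"}
    if valid.get(username) == password:
        return "Welcome"
    return "Invalid username or password"

# Functional test data: (input values, expected output)
cases = [
    (("sameer", "secret123"), "Welcome"),
    (("sameer", "wrongpass"), "Invalid username or password"),
    (("unknown", "secret123"), "Invalid username or password"),
]

for (username, password), expected in cases:
    actual = login(username, password)
    status = "PASS" if actual == expected else "FAIL"
    print(f"{status}: login({username!r}, {password!r}) -> {actual!r} (expected {expected!r})")
```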

1. Non-Functional Testing:

 Load testing
 Reliability
 The readiness of a system
 Usability testing

E.g. a practical example would be checking how many people can simultaneously check out a shopping basket.

2. Black Box Testing: (without code) high level

Black box testing is the kind of software testing you do when you do not have the source code, just the executable code in hand.

The testing is done without internal knowledge of the product.

3. White Box Testing: (with code) low level

Monitoring the internal structure; checking the internal logic; done by developers.

4. Smoke Testing: (testing on a newly released build → compulsory requirements only)

It is the first testing on a newly released build (Build Verification Testing).

Check → whether the deployed software build is stable or not.

E.g.
QC → build software → QA → testing

Compulsory testing → smoke testing


[Figure: a sign-in form with Name and Password as compulsory fields, a "Remember Me" checkbox as an optional field, and a Sign In button.]

Sanity Testing: (testing on a newly released build → check compulsory + optional features)

In the figure above, testing all the fields, compulsory as well as optional, is sanity testing.
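To make the smoke/sanity distinction concrete, here is a hypothetical sketch (not from the notes): a smoke suite touches only the compulsory sign-in fields, while a sanity suite also exercises the optional "Remember Me" behaviour. The check_sign_in helper and its behaviour are assumptions.

```python
# Hypothetical build-verification checks for the sign-in form shown above.
def check_sign_in(name, password, remember_me=False):
    """Pretend sign-in: succeeds when both compulsory fields are filled."""
    return bool(name) and bool(password)

def smoke_suite():
    # Smoke: only the compulsory path; is the new build stable enough to test?
    assert check_sign_in("sameer", "secret123") is True
    assert check_sign_in("", "secret123") is False

def sanity_suite():
    # Sanity: compulsory + optional behaviour, run on an already-stable build.
    smoke_suite()
    assert check_sign_in("sameer", "secret123", remember_me=True) is True

if __name__ == "__main__":
    smoke_suite()
    sanity_suite()
    print("smoke and sanity checks passed")
```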

Retesting: testing a functionality once again after the defect is fixed.

[Figure: a login form (Name, Password) where a defect in the "enable keyboard" feature is reported and sent back for fixing; after the fix, QA tests the same functionality again.]

Regression Testing
(Re-running tests whenever code changes):
It is overall testing whenever a new change occurs.

Re-running functional + non-functional tests.

Checks that a code change does not impact existing functionality.

After the code is changed, does the software still work OK?

Every time a new module is added, it leads to changes in the program. This type of testing makes sure that the whole component works properly even after adding components to the complete program.

(Recap: smoke testing = only compulsory features tested on a new build; sanity testing = compulsory + optional features checked, because the build is already stable.)
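As an illustration (not from the notes), a regression pack is often just the existing automated tests re-run after every change. The sketch below assumes pytest, a custom "regression" marker, and a hypothetical apply_coupon function.

```python
import pytest

# Existing behaviour that must keep working after every code change.
def apply_coupon(total, code):
    """Hypothetical rule: 'SAVE10' gives a 10% discount, anything else gives none."""
    return round(total * 0.9, 2) if code == "SAVE10" else total

@pytest.mark.regression
def test_valid_coupon_still_discounts():
    assert apply_coupon(200.0, "SAVE10") == 180.0

@pytest.mark.regression
def test_unknown_coupon_still_ignored():
    assert apply_coupon(200.0, "XYZ") == 200.0

# Re-run the whole regression pack on each new build, e.g.:
#   pytest -m regression
```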

Static Testing:
Review
Inspection
Walkthrough

Test the software without executing it (just test the documents).

This testing is done to avoid bugs at an early stage (review testing); look and feel.

Why static testing?

→ Early defect detection and correction
→ To get fewer defects at later stages of testing

The static technique has 3 types:

Review: review before development, i.e. of simple documents.

Read the document and check whether it is correct or not.

The document should be correct and complete.
Requirement review, design review, test plan review.
Review testing can be done by anybody: manager, developer, tester, coworker, etc.

Walkthrough:

It is informal, can happen anytime, is not planned, and is done whenever required.

The author of the document explains it to their team.

Inspection:

Most formal; 3-8 people in a meeting.

A proper meeting, scheduled and intimated by mail.

Dynamic Testing: testing which is done after code development.

The main purpose of dynamic testing is to test software behaviour with dynamic variables.

Dynamic testing requires the code to be executed.

Static testing → just analyse the code and documents, no execution needed.

Alpha Testing: the final testing on the development side.

Advantage: an immediate solution is possible.

Beta Testing: the first testing on the client side. It is also called user acceptance testing (UAT).

Disadvantage: no immediate solution if a defect is found.

Installation Testing: providing the required resources at the client location.

It is the type of testing in which the test engineer checks whether the deployment process is successful as per the user guideline.

Deployment document / user manual: a document prepared by the project manager.

Usability Testing: checking the application for user-friendliness.

Monkey Testing: used for game testing; uses random input to check whether the application or system will crash.

Portability Testing: the developed application should support multiple environments.

Forced Error Testing: to check that a valid error message is displayed.


Exploratory Testing: when the test engineer does not have an idea of the functionality, he learns it by exploring the application.

End-to-End Testing: we check that all internal components give a successful response; internal components like client, network, server, database, etc. are working fine. It means testing the internal components together.

Security Testing: checking the security of the application.

Reliability Testing: the developed application should work for a longer duration, i.e. stability.

Audit: an independent evaluation of the software.

Inspection: a formal evaluation of the software.

Concurrency Testing: multi-user testing.

Debugging: executing the program line by line to find errors.

Some of the most popular SDLC models are:

→ No matter which model is used, each has the same phases.

* Waterfall Model
* V-Shaped Model
* Incremental Life Cycle Model
* Spiral Model

SDLC: software development life cycle.

It is the process used by a software company to design, develop, and test software.

1. Requirement analysis
2. Design (blueprint)
3. Coding or development
4. Testing
5. Maintenance

Waterfall Model: the old and traditional model.

It is a linear model, i.e. steps follow one after another.

Each phase must complete before a new phase starts (that is why it is called one-after-another).

→ In the waterfall model, the quality of the product is good because every phase has clear documentation.

→ The SRS (software requirement specification) is not changed, hence no bugs arise from changing requirements.

→ The initial investment is less because no testers are involved early.

→ No changes in the middle.

→ Testing starts only after coding.

V-Shaped Model: verification and validation.

Verification → done before development → checks "are we building the product right, as per the documents?"

Verifying documents >> because there is no software yet.

Verification = before the software is ready = static = review | walkthrough | inspection

Static testing = verifying documents.

Validation → actually testing the software → done after the software is ready → checks "did we build the right product?"

The product is ready; just check whether it is OK or not.

Validation = after the software is ready = dynamic
Dynamic testing = unit | integration | system | user acceptance

System design phases = before development = verification = static testing = verify documents.

System integration phases = after development = validation = dynamic testing = unit | integration | system | UAT.

In the V model, testing is involved in every phase.

Disadvantage: more documents.

Spiral Model

Incremental Model: the requirements are divided into multiple modules, and each module goes through the SDLC phases, i.e. analysis, design, coding, testing, maintenance.

Requirements → module 1 + module 2 + … + module n

When to use the incremental model:

→ A project has a lengthy development schedule.

→ When the requirements are clearly defined up front.

Disadvantages:

 Need for good planning.
 Total cost is high.

QA | QC | QE

QA = quality assurance | process related | high-level management. The process is designed by QA. Responsible for the highest possible quality.

QC = quality control | product related | the actual testers.

QA is process oriented | QC is product oriented (QC works on the product, as the actual testers).

QA = responsible for preventing defects (involved in all phases, i.e. design, coding, ...).

QC = responsible for finding defects (involved only in testing).

QE = quality engineer. Responsible for writing code for testing (automation engineer).

System Testing

(The actual testing.)

System testing: GUI testing | usability testing | functional testing | non-functional testing.

1. GUI Testing:
:- Testing the GUI of the application; user interface testing.
:- Such as menus, check boxes, icons, images.
:- Not functional, just look and feel.
:- Check the size and position of elements.
:- Image quality, spell check, alignment.
:- Whether the fonts are readable or not.

2. Usability Testing:
:- Check the easiness of the application.
:- Helping messages are displayed if the user is confused.
:- Check whether the application is user friendly or not.

3. Functional Testing:
:- Check the behaviour of the application.
:- Check database testing (does it work with the database OK?).
:- Error handling: are error messages displayed OK?
:- Calculations and manipulation.
E.g. if the user requirement says 5+5 = 15, we follow the requirement, not the math calculation 5+5 = 10.
:- Check that text boxes are disabled or enabled as per the user requirement.
:- Check database operations: DML, tables, columns, records, etc.

A user form has two sides: the UI (tested as black box) and the database operations behind it (tested as white box).

Black box testing + white box testing = gray box testing.

4. Non-Functional Testing:

:- Once functional testing is done, i.e. the software works as per the user requirements, then do non-functional testing.

:- Performance testing:
Load testing - gradually increase the load.
Stress testing - suddenly increase the load (e.g. online form filling).
Volume testing - how much data can it handle.

:- Security of the software.

:- Recovery of the application.

:- Compatibility testing - works on all platforms.

End-to-End Testing:

:- Testing the overall application after including all modules.
E.g.: login → add customer → delete and edit customer → logout.
Testing all functions, i.e. add, delete, edit, and logout.

Test Case Design Techniques:

They help design better test cases and reduce the number of test cases to be executed: less data and more coverage.

Black Box Test Case Design Techniques
(just design the test cases and run them; all of these belong to black box testing, the main testing):

1. ECP - Equivalence Class Partitioning
2. BVA - Boundary Value Analysis
3. Decision Table (Design Table)
4. State Transition
5. Error Guessing

ECP → Equivalence Class Partitioning

:- Value check.

:- Classify / divide / partition the data into multiple classes to save testing time.

E.g. Enter a number: the field allows digits from 1 to 500.

The normal test data for the text box (1, 2, 3, 4, 5, ... up to 500) is divided into equivalence classes, and one representative value is picked from each class as the final test data:

-100 to 0   → -50 (invalid) → test with -50
1 to 100    → 30 (valid)    → test with 30
101 to 200  → 160 (valid)   → test with 160
201 to 300  → 250 (valid)   → test with 250
301 to 400  → 350 (valid)   → test with 350
401 to 500  → 420 (valid)   → test with 420
501 to 600  → 550 (invalid) → test with 550
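A small sketch (not from the notes) of how these ECP picks could drive an automated check. The validate_number function is a hypothetical implementation of the 1-500 rule, and pytest is assumed.

```python
import pytest

# Hypothetical implementation of the rule "allow digits from 1 to 500".
def validate_number(value):
    return 1 <= value <= 500

# One representative value per equivalence class, as derived above.
@pytest.mark.parametrize("value, expected", [
    (-50, False),   # -100 to 0   : invalid class
    (30, True),     # 1 to 100    : valid class
    (160, True),    # 101 to 200  : valid class
    (250, True),    # 201 to 300  : valid class
    (350, True),    # 301 to 400  : valid class
    (420, True),    # 401 to 500  : valid class
    (550, False),   # 501 to 600  : invalid class
])
def test_number_field_equivalence_classes(value, expected):
    assert validate_number(value) is expected
```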
BVA: Boundary Value Analysis:

Test at the extreme boundary values of each partition, e.g. the minimum and maximum (a worked example with a field accepting 18-56 appears later in these notes).

Decision Table (Design Table) Technique:

This technique is used when we have multiple conditions and, based on the conditions, we have to perform an action, e.g. transferring money from one account to another.

Conditions:

1. Account number has to be approved 🗸
2. OTP matched 🗸
3. Sufficient money in the account 🗸

If the conditions are OK, then do the action.

Actions:

1. Transfer money.
2. Show "insufficient money" message (if applicable).
3. Block if any suspicious activity.
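A minimal sketch (not from the notes) of the same decision table in code: each rule maps a combination of the three conditions to an action. The decide function, the rule-to-action assignments, and the default action are assumptions for illustration.

```python
# Decision table for the money-transfer example: conditions -> action.
# Keys are (account_approved, otp_matched, sufficient_money).
decision_table = {
    (True,  True,  True):  "transfer money",
    (True,  True,  False): "show 'insufficient money' message",
    (True,  False, True):  "block: suspicious activity",   # assumed rule
    (False, True,  True):  "block: suspicious activity",   # assumed rule
}

def decide(account_approved, otp_matched, sufficient_money):
    """Look up the action for a condition combination; default to rejecting."""
    key = (account_approved, otp_matched, sufficient_money)
    return decision_table.get(key, "reject transfer")

# One test case per rule in the table.
assert decide(True, True, True) == "transfer money"
assert decide(True, True, False) == "show 'insufficient money' message"
assert decide(True, False, True) == "block: suspicious activity"
assert decide(False, False, False) == "reject transfer"
```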
State Transition Technique:

Take an action depending on the current state; test how the system moves from one state to another for a given event.
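As a sketch (not from the notes), a classic state-transition example is a login screen that locks an account after repeated failures. The LoginAccount class, the three-attempt limit, and the state names are assumptions.

```python
# Hypothetical state machine: after 3 consecutive wrong passwords the account
# moves from the 'active' state to the 'locked' state.
class LoginAccount:
    def __init__(self):
        self.state = "active"
        self.failed_attempts = 0

    def attempt(self, password_correct):
        if self.state == "locked":
            return self.state              # no transitions out of 'locked'
        if password_correct:
            self.failed_attempts = 0
            self.state = "logged_in"
        else:
            self.failed_attempts += 1
            if self.failed_attempts >= 3:
                self.state = "locked"
        return self.state

# State-transition test cases: one per transition we care about.
account = LoginAccount()
assert account.attempt(False) == "active"   # 1st wrong password: still active
assert account.attempt(False) == "active"   # 2nd wrong password: still active
assert account.attempt(False) == "locked"   # 3rd wrong password: locked
assert account.attempt(True) == "locked"    # correct password ignored once locked

fresh = LoginAccount()
assert fresh.attempt(True) == "logged_in"   # valid login from the active state
```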

Error Guessing Technique:
:- No specific rule.
:- This testing is based on the tester's skill, e.g. submit a form empty and guess the error.

Test Scenario → simply the name of the test: what to test (the name of the test).

Test Case → how to test, i.e. the steps: a group of steps that is executed to check a functionality.

E.g. Test scenario = check the functionality of the login button.
Test cases = TC01, TC02, TC03, ... etc.

Test Suite → a group of test cases.

Test plan
  Test suite 1 → TC01, TC02, TC03 (test case group)
  Test suite 2 → TC01, TC02, TC03 (test case group)

Test Case Document fields:

:- Test case ID

:- Test case title

:- Description

:- Precondition

:- Priority

:- Requirement/request ID

:- Steps / actions

:- Expected result

:- Actual result

:- Test data
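As an illustration (not part of the notes), the same fields could be captured in code as a simple record. The field names mirror the list above; the TestCase class and the sample values are made up.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class TestCase:
    """One row of a test case document, mirroring the fields listed above."""
    test_case_id: str
    title: str
    description: str
    precondition: str
    priority: str
    requirement_id: str
    steps: List[str] = field(default_factory=list)
    expected_result: str = ""
    actual_result: str = ""
    test_data: str = ""

tc01 = TestCase(
    test_case_id="TC01",
    title="Login with valid credentials",
    description="Verify that a registered user can log in",
    precondition="User account exists",
    priority="High",
    requirement_id="REQ-LOGIN-01",
    steps=["Open the login page", "Enter username and password", "Click Sign In"],
    expected_result="User lands on the home page",
    test_data="sameer / secret123",
)
print(tc01.test_case_id, "-", tc01.title)
```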

Requirement Traceability Matrix (RTM):

:- Traces how many test cases are executed or covered for each requirement.

:- In simple terms, it keeps track of the test cases.
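A tiny sketch (not from the notes) of an RTM as a mapping from requirements to the test cases that cover them; the requirement and test case IDs are invented.

```python
# Requirement Traceability Matrix: which test cases cover which requirement.
rtm = {
    "REQ-LOGIN-01":  ["TC01", "TC02"],
    "REQ-SEARCH-01": ["TC03"],
    "REQ-CART-01":   [],          # no test case yet -> a coverage gap
}

for requirement, test_cases in rtm.items():
    status = ", ".join(test_cases) if test_cases else "NOT COVERED"
    print(f"{requirement}: {status}")
```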

Test Case Execution:

Execute the test cases based on the test plan.
Mark the status: Pass | Fail | Blocked.
Report defects in a bug report.

Defect Reporting Tools:

ClearQuest :- bug reporting only
DevTrack :- bug reporting only
Jira → test management tool (tracks each activity)
Bugzilla → test management tool (tracks each activity)

Defect Report Details:

:- Defect ID

:- Defect version

:- Steps: details of the steps, so the developer knows what to do

:- Date

:- Detected by

:- Status

:- Fixed by (in process | fixed)

:- Severity - impact

:- Priority - high | medium | low

Severity of a defect: Blocker | Critical | Major | Minor

Severity reflects the seriousness of the impact on the application. The test engineer decides the severity level of the defect.

Blocker: the defect means the application cannot proceed at all.

Critical: a main function is not working.

Major: some undesirable behaviour, e.g. an email is sent but the message is not displayed.

Minor: look and feel.

Priority of a defect: High | Medium | Low

Priority is the importance of the defect: the order in which defects will be solved or fixed.

P0 - High: fixed immediately, in the same version.

P1 - Medium: fixed in the next release.

P2 - Low: fixed in a later version.

Manual Testing Project: E-commerce

:- Project introduction

:- Understanding and exploring the functionality

:- Test plan

:- Writing test scenarios

:- Writing test cases

:- Environment setup, build and development

:- Test execution

:- Bug reporting and tracking

:- Sanity testing, smoke testing, regression testing

:- Test sign-off

E commerce project

- Login
- Search for product and item
- Add them to cart
- Do payment
- Product will be delivered
- Return the product
- Etc.

SRS document

A software requirements specification (SRS) is a document that describes

what the software will do and how it will be expected to perform.

Refer SRS Document


E-commerce project (walkthrough):

:- Project information

:- Understanding the functionality of the project

:- Test plan → a detailed document of the testing activity

:- Writing test scenarios

:- Test cases and review

:- Environment setup and deployment of the application for testing

:- Test execution

:- Bug reporting and tracking

:- Sanity and regression testing

:- Test sign-off

Version page

Opencart.com project

1. FRS Document → how the software works.

An FRS, or functional requirement specification, is the document that describes all the functions that the software or product has to perform.

2. Test Plan Document → a detailed document of the testing activity.

3. Test Scenario Document → anything that can be tested is a test scenario; simply the name of the test case.

4. Test Case Design:

While writing test cases, refer to the Test Scenario document and the FRS document.

[Figure: test case template with sample test cases.]

After executing the steps of each test case, update the fields:
Actual result
Result → pass or fail
Priority

After executing the test cases, maintain/update the RTM sheet, e.g. number of tests executed, blocked, etc.

If a bug is found during testing, report it in the bug tracking file.

[Figure: test cases with results.]

Big Bang Testing Approach

The 'big bang' approach involves testing the software system after development work is completed. This is also termed 'system testing', or the final testing done before releasing the software to the customer for acceptance testing.
Big Bang == System Testing == Final Testing == Before Release
This is the last part of software development as per the waterfall methodology.

• Black box testing: mainly performed by testers.

• White box testing: mainly performed by developers.

• Unit testing: part of white box testing.

• Acceptance testing: the final testing done by the customer, based on the agreements.

• Load / stress / performance testing: testing an application's load capacity.

• Usability testing: testing to determine the user-friendliness of the application.

• Install / uninstall testing: testing of full, partial, or upgrade install/uninstall processes.

• Recovery / failover testing: testing to determine how well a system recovers from crashes, failures, or other major problems.

• Incremental integration testing: continuous testing of an application as new functionality is added.

• Ad-hoc testing: conducting testing without requirements.

• Comparison testing: comparing software weaknesses and strengths to competing products.

• Alpha testing: part of UAT.

• Beta testing: part of UAT.

• Integration testing: validating the combined modules of an application.

• Functional testing: part of black box testing.

• System testing: part of black box testing, validating the system requirements.

• End-to-end testing: similar to system testing.

• Sanity testing or smoke testing: an initial validation of a new build or release.

• Regression testing: validating the existing functionality of the application once new fixes are added.

• Compatibility testing: testing an application in different environments.

A bug is an issue or error in the code, or an environmental issue.

What is a Test Case?

A test case is a detailed explanation of a scenario. A test case is a document which describes the pre-condition, post-condition, test data, actors, and navigation of the particular functionality. Each and every test case should have a unique test name and test ID.

What is a Test Plan?

A test plan is a document that explains what to test, when to test, where to test, and when testing is complete.

Retesting is only done for failed test cases, while regression is done for passed test cases.

Glossary of common testing terms:

Acceptance testing: Formal testing with respect to user needs, requirements, and business processes, conducted to determine whether or not a system satisfies the acceptance criteria and to enable the user, customers, or other authorized entity to determine whether or not to accept the system.

Ad hoc testing: Testing carried out informally; no formal test preparation takes place, no recognized test design technique is used, there are no expectations for results, and arbitrariness guides the test execution activity.

Agile testing: Testing practice for a project using agile methodologies, such as extreme programming (XP), treating development as the customer of testing and emphasizing the test-first design paradigm.

Alpha testing: Simulated or actual operational testing by potential users/customers or an independent test team at the developers' site, but outside the development organization. Alpha testing is often employed for off-the-shelf software as a form of internal acceptance testing.

Back-to-back testing: Testing in which two or more variants of a component or system are executed with the same inputs, the outputs compared, and analyzed in cases of discrepancies.

Beta testing: Operational testing by potential and/or existing users/customers at an external site not otherwise involved with the developers, to determine whether or not a component or system satisfies the user/customer needs and fits within the business processes. Beta testing is often employed as a form of external acceptance testing for off-the-shelf software in order to acquire feedback from the market.

Big-bang testing: A type of integration testing in which software elements, hardware elements, or both are combined all at once into a component or an overall system, rather than in stages.

Black box testing: Testing, either functional or non-functional, without reference to the internal structure of the component or system.

Black box test design technique: Procedure to derive and/or select test cases based on an analysis of the specification, either functional or non-functional, of a component or system without reference to its internal structure.

Blocked test case: A test case that cannot be executed because the preconditions for its execution are not fulfilled.

Bottom-up testing: An incremental approach to integration testing where the lowest-level components are tested first and then used to facilitate the testing of higher-level components. This process is repeated until the component at the top of the hierarchy is tested.

Boundary value: An input value or output value which is on the edge of an equivalence partition, or at the smallest incremental distance on either side of an edge, for example the minimum or maximum value of a range.

Boundary value analysis: A black box test design technique in which test cases are designed based on boundary values.

Branch testing: A white box test design technique in which test cases are designed to execute branches.

Business process-based testing: An approach to testing in which test cases are designed based on descriptions and/or knowledge of business processes.

Capture/playback tool: A type of test execution tool where inputs are recorded during manual testing in order to generate automated test scripts that can be executed later (i.e. replayed). These tools are often used to support automated regression testing.

Certification: The process of confirming that a component, system, or person complies with its specified requirements, e.g. by passing an exam.

Code coverage: An analysis method that determines which parts of the software have been executed (covered) by the test suite and which parts have not been executed, e.g. statement coverage, decision coverage, or condition coverage.

Compliance testing: The process of testing to determine the compliance of the component or system.

Component integration testing: Testing performed to expose defects in the interfaces and interaction between integrated components.

Condition testing: A white box test design technique in which test cases are designed to execute condition outcomes.

Conversion testing: Testing of software used to convert data from existing systems for use in replacement systems.

Data driven testing: A scripting technique that stores test input and expected results in a table or spreadsheet, so that a single control script can execute all of the tests in the table. Data driven testing is often used to support the application of test execution tools such as capture/playback tools.

Database integrity testing: Testing the methods and processes used to access and manage the data(base), to ensure access methods, processes, and data rules function as expected, and that during access to the database, data is not corrupted or unexpectedly deleted, updated, or created.

Defect: A flaw in a component or system that can cause the component or system to fail to perform its required function, e.g. an incorrect statement or data definition. A defect, if encountered during execution, may cause a failure of the component or system.

Defect masking: An occurrence in which one defect prevents the detection of another.

[Infographic: "Elements of Software Testing" (QualiTest). It summarizes software testing types (unit, smoke, functional, integration, user acceptance, live, regression, usability, security, compatibility, mobile, accessibility, localization, black box, white box, static), testing tools (bug tracker, test management system, application lifecycle management, automation tools, performance testing tools), testing strategies (risk-based testing, exploratory testing, automated testing, continuous integration, requirements traceability), and testing documentation (test plan, test case, test procedure specification, test script, defect report, test summary report).]

There are two main categories of testing:

1) Static Testing
2) Dynamic Testing

Static Testing vs Dynamic Testing:

• Static testing is completed without executing the program; dynamic testing is completed with the execution of the program.

• Static testing is executed in the verification stage; dynamic testing is executed in the validation stage.

• Static testing is executed before compilation; dynamic testing is executed after compilation.

• Static testing prevents defects; dynamic testing finds and fixes defects.

• The cost of finding and fixing defects is lower with static testing and higher with dynamic testing.

• Static testing consists of walkthroughs, inspections, reviews, etc.; dynamic testing consists of specification-based, structure-based, and experience-based techniques, unit testing, integration testing, etc.
[Figure: Mistake → made in coding (by the programmer); Error → found by the tester; Error/Failure → found by the end user.]

There are 4 main levels of testing, but they are categorized into 2 groups: high level and low level.

Unit and integration testing → low-level testing (programmer).

System testing || UAT → high-level testing (the main testing, done by the tester).

Test Planning: a document that describes how to perform testing on the entire application.

The quality lead or QA manager prepares the test plan:
What to test?
When to test?
How to test?

The main use of BVA is to reduce the number of test cases.
It is black box testing because it checks functionality.
(AUT = Application Under Test.)

Verification is the process of confirming that something (the software) meets its specification.

Validation is the process of confirming that it meets the user's requirements.

Verification = meets the specification as documented.
Validation = meets the user requirements (because the software is ready, it is mainly tested for the user).

Verification: a static process of analysing the documents, not the actual product.
Validation: it involves dynamic testing (unit, integration, system testing).

Black Box Testing vs White Box Testing:

• In black box testing, the internal structure, program, or code is hidden and nothing is known about it; in white box testing, the tester has knowledge about the internal structure, code, or program of the software.

• Black box testing is mostly done by software testers; white box testing is mostly done by software developers.

• Black box testing needs no knowledge of the implementation; white box testing requires knowledge of the implementation.

• Black box can be referred to as outer or external software testing; white box is the inner or internal software testing.

• Black box is functional testing of the software; white box is structural testing of the software.

• Black box testing can be initiated on the basis of the requirement specification document; white box testing is started after the detailed design document.

• Black box requires no knowledge of programming; for white box it is mandatory to have knowledge of programming.

• Black box is behaviour testing of the software; white box is logic testing of the software.

• Black box is applicable to the higher levels of testing; white box is generally applicable to the lower levels of software testing.

• Black box is also called closed testing; white box is also called clear box testing.

• Black box is the least time consuming; white box is the most time consuming.

• Black box is not suitable or preferred for algorithm testing; white box is suitable for algorithm testing.

• Black box can be done by trial-and-error ways and methods; in white box, data domains along with inner or internal boundaries can be better tested.

• Black box example: searching something on Google using keywords; white box example: checking and verifying loops by input.

Types of Black Box Testing:
 A. Functional testing
 B. Non-functional testing
 C. Regression testing

Types of White Box Testing:
 A. Path testing
 B. Loop testing
 C. Condition testing

Black box testing == data-driven testing == input/output testing.

White box testing == logic-driven testing == logic-coverage testing.

BVA = Boundary value analysis: based on verification of only the extreme boundary values, e.g. maximum, minimum, and typical values (e.g. for a field accepting 18-56, the lower boundary is 18 and the higher boundary is 56).

ECP = Equivalence class testing: based on checking one value from each partition (e.g. 4 partitions → 4 representative values).
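A small sketch (not from the notes) of boundary value analysis for the 18-56 field mentioned above; the is_valid_age function is a hypothetical implementation of that rule, and pytest is assumed.

```python
import pytest

# Hypothetical rule: the field accepts values from 18 to 56 inclusive.
def is_valid_age(age):
    return 18 <= age <= 56

# Boundary values: just below, on, and just above each boundary.
@pytest.mark.parametrize("age, expected", [
    (17, False),  # just below the lower boundary
    (18, True),   # lower boundary
    (19, True),   # just above the lower boundary
    (55, True),   # just below the upper boundary
    (56, True),   # upper boundary
    (57, False),  # just above the upper boundary
])
def test_age_boundaries(age, expected):
    assert is_valid_age(age) is expected
```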
Error guessing = based on the previous experience of a QA engineer; also called experience-based testing.

Decision table testing: based on a tabular representation of combinations of inputs and the corresponding system behaviour (e.g. depending on the conditions, a decision such as granting a loan is made).

Graph-based testing: a test case is written for each graph that represents the objects and their relationships.

Comparison between the three testing types (black box, grey box, white box):

1. Internals: in black box testing, the internal workings of the application are not required to be known; in grey box testing, the tester has partial knowledge of the internal workings; in white box testing, the tester has full knowledge of the internal workings of the application.

2. Other names: black box testing is also known as closed-box testing, data-driven testing, and functional testing; grey box testing is also called translucent testing, as the tester has limited knowledge of the insides of the application; white box testing is also known as clear-box testing, structural testing, or code-based testing.

3. Performed by: black box testing is performed by end users and also by testers and developers; grey box testing is performed by end users and also by testers and developers; white box testing is normally done by testers and developers.

4. Test basis: in black box testing, tests are based on external expectations and the internal behaviour of the application is unknown; in grey box testing, testing is done on the basis of high-level database diagrams and data flow diagrams; in white box testing, the internal workings are fully known and the tester can design test data accordingly.

5. Time: black box testing is the least time consuming and least exhaustive; grey box testing is partly time consuming and exhaustive; white box testing is the most exhaustive and time consuming.

6. Algorithm testing: black box and grey box testing are not suited to algorithm testing; white box testing is well suited to algorithm testing.

7. Data domains: in black box testing, this can only be done by trial-and-error methods; in grey box testing, data domains and internal boundaries can be tested if known; in white box testing, data domains and internal boundaries can be better tested.

White box testing


is an approach that allows testers to inspect and verify the inner workings of a software system—its
code, infrastructure, and integrations with external systems

White box testing techniques

There are many ways you can analyze software with white box testing. Most testers will use a process called
code coverage analysis to eliminate gaps in the testing of the code. There are a variety of techniques you can
use to accomplish this, including:

Statement coverage:

This technique ensures that each line in the code is tested at least once to find faulty code more easily.

Branch coverage:

Using this technique, each possible path or decision point of a software application is checked for
accuracy.

Condition coverage:

All individual conditions are checked.

Multiple condition coverage:

All imaginable combinations of all the conceivable condition outcomes are tested at least once.

Basis path testing:

Control graphs are created from either flowcharts or code. Cyclomatic complexity is then calculated to
define the number of independent paths so that the minimum number of test cases can be designed for
each path.

Flow chart notation:

This technique uses a directed graph made up of nodes and edges, where each node represents a decision
point or sequence of statements.

Cyclomatic complexity:

This is the measure of a software's logical and cyclomatic complexity. It is used to define how many
independent paths are present.

Loop testing:

Loops are commonly used in white box testing and are fundamental to many algorithms. Problems
are often found at the beginning or the end of a loop. Loop testing can be divided into simple loops,
nested loops and concatenated loops.
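As a short sketch (not part of the original text), the hypothetical grade function below shows the difference between statement coverage and branch coverage: a single test can execute every statement yet still miss the "condition false" branch, which branch coverage requires.

```python
def grade(score):
    """Return 'pass' for scores of 40 or more, otherwise 'fail'."""
    result = "fail"
    if score >= 40:
        result = "pass"
    return result

# Statement coverage: grade(75) alone executes every line of the function
# (the initial assignment, the if check, the body of the if, and the return).
assert grade(75) == "pass"

# Branch coverage: the if has two outcomes (taken / not taken), so we also
# need a case where the condition is false.
assert grade(10) == "fail"
```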
