ISTQB Interview Questions

1- What are the differences between QA & QC?

Quality Assurance (QA) vs. Quality Control (QC)

1. QA helps us build processes; QC helps us implement the built processes.

2. QA is the duty of the whole team; QC is the duty of the testing team only.

3. QA comes under the category of verification; QC comes under the category of validation.

4. QA is considered a process-oriented exercise; QC is considered a product-oriented exercise.

5. QA prevents the occurrence of issues, bugs, or defects in the application; QC detects, corrects, and reports bugs or defects in the application.

6. QA does not involve executing the program or code; QC always involves executing the program or code.

7. QA is done before QC; QC is done only after the QA activity is completed.

8. QA can catch errors and mistakes that QC cannot, which is why it is considered a low-level activity; QC can catch errors that QA cannot, which is why it is considered a high-level activity.

9. QA is human-based checking of documents or files; QC is computer-based execution of the program or code.

10. QA means planning done for a process; QC means action taken on the process by executing it.

11. QA focuses mainly on preventing defects or bugs in the system; QC focuses mainly on identifying defects or bugs in the system.

12. QA is not considered a time-consuming activity; QC is always considered a time-consuming activity.

13. QA makes sure you are doing the right things in the right way, which is why it comes under the category of verification; QC makes sure that what has been done meets the requirements (i.e., matches what was expected), which is why it comes under the category of validation.

14. QA is proactive: it identifies weaknesses in the processes; QC is reactive: it identifies defects and also corrects them.

2- What are testing objectives?

Finding defects, gaining confidence about the level of quality, providing information for decision-making, and preventing defects.

• In development testing (e.g., component, integration, and system testing), the main objective may be to cause as many failures as possible so that the defects in the software are identified and can be fixed.

• In acceptance testing, the main objective may be to confirm that the system works as expected and to gain confidence that it has met the requirements.

• Maintenance testing includes testing that no new defects have been introduced during development of the changes.

• During operational testing, the main objective may be to assess system characteristics such as reliability or availability.

3- What is the difference between testing and debugging?

- Dynamic testing can show failures that are caused by defects.

- Debugging is the development activity that finds, analyzes, and removes the cause of the failure.

4- What is the difference between regression testing and confirmation testing (retesting)?

Confirmation testing: after a defect is detected and fixed, the software should be re-tested to confirm that the original defect has been successfully removed.

Regression testing: the repeated testing of an already tested program, after modification, to discover any defects introduced or uncovered as a result of the change.

- These defects may be either in the software being tested, or in another related or unrelated software component.

- It is performed when the software, or its environment, is changed. (A small sketch of both activities follows below.)
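
A minimal pytest sketch of the two activities; the function, discount rules, and bug ID are hypothetical, assumed only for illustration:

# Sketch of confirmation vs. regression testing with pytest.
# All names and the bug scenario are hypothetical.
import pytest

def calculate_discount(price, quantity):
    # Stand-in for the fixed code under test.
    if quantity == 0:
        return 0.0          # the fix: used to crash on zero quantity
    rate = 0.10 if quantity < 10 else 0.20
    return price * (1 - rate)

def test_bug_1234_zero_quantity_no_longer_crashes():
    # Confirmation test (retest): re-runs the exact scenario that
    # exposed the original defect, to confirm the fix.
    assert calculate_discount(price=100.0, quantity=0) == 0.0

@pytest.mark.parametrize("price,quantity,expected", [
    (100.0, 1, 90.0),
    (100.0, 10, 80.0),
])
def test_existing_discounts_unchanged(price, quantity, expected):
    # Regression tests: previously passing tests re-executed after the
    # change, to discover any defects introduced or uncovered by it.
    assert calculate_discount(price=price, quantity=quantity) == expected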

5- What are Test Levels?

Component testing:

- Also known as unit, module, or program testing.

- Objectives: reducing risk; finding defects in the component; preventing defects from escaping to higher test levels; building confidence in the component's quality; verifying whether the functional and non-functional behaviors of the component are as designed and specified.

- Component testing occurs with access to the code being tested and with the support of a development environment, such as a unit test framework or debugging tool.

- Component testing usually involves the programmer who wrote the code.

- Test basis: detailed design, code, data model, component specifications.
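
A minimal component (unit) test sketch, assuming the pytest framework and a hypothetical leap-year function as the component under test:

# Component (unit) test sketch using pytest. The component would
# normally be imported from the code base; it is inlined here.

def is_leap_year(year: int) -> bool:
    # The component under test.
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

def test_divisible_by_four_is_leap():
    assert is_leap_year(2024)

def test_century_is_not_leap():
    assert not is_leap_year(1900)

def test_divisible_by_400_is_leap():
    assert is_leap_year(2000)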

Integration testing:

Tests interfaces between components, interactions with different parts of a system (such as the operating system, file system, and hardware), and interfaces between systems.

Objectives of integration testing: reducing risk; finding defects (which may be in the interfaces themselves or within the components or systems); preventing defects from escaping to higher test levels; verifying whether the functional and non-functional behaviors of the interfaces are as designed and specified; building confidence in the quality of the interfaces.

There may be more than one level of integration testing, and it may be carried out on test objects of varying sizes as follows:

1. Component integration testing: tests the interactions between software components and is done after component testing.

2. System integration testing: tests the interactions between different systems or between hardware and software, and may be done after system testing. In this case, the developing organization may control only one side of the interface.

- Component integration testing is often the responsibility of developers. System integration testing is generally the responsibility of testers.

- Test basis: software and system design, sequence diagrams, interface and communication protocol specifications, use cases, workflows, external interface definitions.
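
A component integration sketch with hypothetical service and repository classes; the test exercises the interface between the two components rather than either one in isolation:

# Component integration test sketch: the defect space targeted here is
# the interface between the components. All names are hypothetical.

class InMemoryOrderRepository:
    def __init__(self):
        self._orders = {}

    def save(self, order_id, order):
        self._orders[order_id] = order

    def load(self, order_id):
        return self._orders[order_id]

class OrderService:
    def __init__(self, repository):
        self._repository = repository

    def place_order(self, order_id, item):
        self._repository.save(order_id, {"item": item, "status": "placed"})
        return self._repository.load(order_id)

def test_service_and_repository_integration():
    service = OrderService(InMemoryOrderRepository())
    order = service.place_order("o-1", "book")
    # Typical interface defects: wrong keys, lost data, mismatched types.
    assert order == {"item": "book", "status": "placed"}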

System testing:

Is concerned with the behavior of a whole system/product. The testing scope shall be clearly addressed in the Master and/or Level Test Plan for that test level.

• In system testing, the test environment should correspond to the final target or production environment as much as possible, in order to minimize the risk of environment-specific failures not being found in testing.

- System testing should investigate functional and non-functional requirements of the system, and data quality characteristics.

Acceptance testing:

- Acceptance testing, like system testing, typically focuses on the behavior and capabilities of a whole system or product. Objectives of acceptance testing include:

Establishing confidence in the quality of the system as a whole

Validating that the system is complete and will work as expected

Verifying that functional and non-functional behaviors of the system are as specified

- Defects may be found during acceptance testing, but finding defects is often not an objective.

- It is often the responsibility of the customers or users of a system; other stakeholders may be involved as well.

6- What are the objectives of acceptance testing?

The goal of acceptance testing is to establish confidence in the system, parts of the system, or specific non-functional characteristics of the system. Finding defects is not the main focus in acceptance testing. Acceptance testing may assess the system's readiness for deployment and use.

7- What is the difference between alpha testing and beta testing?

Alpha testing: performed at the developing organization's site, not by the development team, but by potential or existing customers, and/or operators, or an independent test team.

Beta testing: performed by potential or existing customers and/or operators at their own locations. Beta testing may come after alpha testing, or may occur without any preceding alpha testing having occurred.

One objective of alpha and beta testing is building confidence among potential or existing customers.

Organizations may use other terms as well, such as factory acceptance testing and site acceptance testing for systems that are tested before and after being moved to a customer's site.

8- What are the types of acceptance testing?

- User Acceptance Testing: typically verifies the fitness for use of the system by business users.

- Operational Acceptance Testing: the acceptance of the system by the system administrators.

- Contract & Regulation Acceptance Testing: contract acceptance testing is performed against a contract's acceptance criteria for producing custom-developed software. Acceptance criteria should be defined when the parties agree to the contract. Regulation acceptance testing is performed against any regulations that must be adhered to, such as government, legal, or safety regulations.
What are Test types?

Functional Testing:

The functions that a system, subsystem, or component is to perform may be described in work products such as a requirements specification, use cases, or a functional specification, or they may be undocumented. The functions are "what" the system does.

- Functional tests are based on functions and features (described in documents or understood by the tester).

- Functional testing considers the external behavior of the software.

Non-Functional Testing:

- Non-functional testing includes, but is not limited to, performance testing, load testing, stress testing, usability testing, maintainability testing, reliability testing, and portability testing. It is the testing of "how" the system works.

- The term non-functional testing describes the tests required to measure the characteristics of systems and software.

- Non-functional testing considers the external behavior of the software.

Structural testing:

- Structural (white-box) testing may be performed at all test levels.

- Structural techniques are best used after specification-based techniques, in order to help measure the thoroughness of testing through assessment of coverage of a type of structure.

- At all test levels, but especially in component testing and component integration testing, tools can be used to measure the code coverage of elements such as statements or decisions.

Testing related to change:

Confirmation testing and regression testing (see question 4).

Maintenance Testing:

- The scope of maintenance testing is related to the risk of the change, the size of the existing system, and the size of the change. Depending on the changes, maintenance testing may be done at any or all test levels and for any or all test types.

- Maintenance testing is done on an existing operational system, and is triggered by modifications, migration, or retirement of the software or system.

- Maintenance testing can be difficult if specifications are out of date or missing, or if testers with domain knowledge are not available.

What is impact analysis?

Determining how the existing system may be affected by changes is called impact analysis, and it is used to help decide how much regression testing to do. Impact analysis may be used to determine the regression test suite, as in the toy sketch below.
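
A toy sketch of how impact analysis can feed regression test selection; the module-to-test mapping is hypothetical and would normally come from dependency analysis or traceability data:

# Toy impact-analysis sketch: map changed modules to the regression
# tests that cover them. The mapping below is illustrative data only.

TEST_COVERAGE_MAP = {
    "pricing": ["test_discounts.py", "test_invoices.py"],
    "auth": ["test_login.py"],
    "reports": ["test_reports.py", "test_invoices.py"],
}

def select_regression_tests(changed_modules):
    """Return the de-duplicated, sorted set of affected tests."""
    selected = set()
    for module in changed_modules:
        selected.update(TEST_COVERAGE_MAP.get(module, []))
    return sorted(selected)

print(select_regression_tests(["pricing", "reports"]))
# ['test_discounts.py', 'test_invoices.py', 'test_reports.py']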

- What is a test condition?

An item or event that could be verified by one or more test cases.

- What is a test case?

A test case consists of a set of input values, execution preconditions, expected results, and execution postconditions, defined to cover a certain test objective(s) or test condition(s).
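
The same four elements can be seen in an automated test; a minimal pytest sketch with a hypothetical bank-account class:

# A test case expressed in code. All names are hypothetical.

class BankAccount:
    def __init__(self, balance):
        self.balance = balance
        self.open = True

    def withdraw(self, amount):
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount

def test_withdraw_within_balance():
    account = BankAccount(balance=100)   # execution precondition
    account.withdraw(30)                 # input values
    assert account.balance == 70         # expected result
    assert account.open                  # execution postcondition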

- What is a test procedure?

The test procedure specifies the sequence of actions for the execution of a test. If tests are run using a test execution tool, the sequence of actions is specified in a test script (which is an automated test procedure).

What are Test design techniques?

Black-box test design techniques:

(Also called specification-based techniques.) A way to derive and select test conditions, test cases, or test data based on an analysis of the test basis documentation. This includes both functional and non-functional testing.

• Black-box testing, by definition, does not use any information regarding the internal structure of the component or system to be tested.
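
A sketch of one specification-based technique, boundary value analysis, for a hypothetical rule "applicants aged 18-65 (inclusive) are accepted"; the tests are derived from the specification, not from the code:

# Black-box sketch: boundary value analysis for a hypothetical rule.
import pytest

def is_eligible(age: int) -> bool:
    # Stand-in implementation; a black-box tester would not look inside.
    return 18 <= age <= 65

@pytest.mark.parametrize("age,expected", [
    (17, False),  # just below the lower boundary
    (18, True),   # lower boundary
    (65, True),   # upper boundary
    (66, False),  # just above the upper boundary
])
def test_age_boundaries(age, expected):
    assert is_eligible(age) == expected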
White-box test design techniques:

(Also called structure-based techniques.) Based on an analysis of the structure of the component or system.

• A common characteristic of structure-based test design techniques is that information about how the software is constructed is used to derive the test cases.
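
A white-box sketch: knowing the code contains a decision, tests are derived so that both outcomes are exercised. The function is hypothetical; a coverage tool such as coverage.py could then confirm the branch coverage achieved:

# White-box sketch: tests derived from the code's decision structure.

def apply_fee(amount: float) -> float:
    if amount < 100:          # decision with two outcomes
        return amount + 5.0   # branch 1: fee applied
    return amount             # branch 2: no fee

def test_fee_branch():
    assert apply_fee(50.0) == 55.0    # covers the True outcome

def test_no_fee_branch():
    assert apply_fee(200.0) == 200.0  # covers the False outcome

# Branch coverage could be measured with, e.g.:
#   coverage run --branch -m pytest && coverage report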
Experience-based techniques:

The test cases are derived from the tester's skill and intuition, and their experience with similar applications and technologies. These techniques can be helpful in identifying tests that were not easily identified by other, more systematic techniques; their effectiveness and efficiency depend heavily on the tester's approach and experience.

Error guessing: create a list of possible mistakes, defects, and failures, and design tests that will expose those failures and the defects that caused them. These mistake, defect, and failure lists can be built based on experience, defect and failure data, or common knowledge about why software fails, for example: how the application has worked in the past, the types of mistakes the developers tend to make, and failures that have occurred in other applications. (A small sketch follows below.)
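
An error-guessing sketch: the tests probe mistakes developers commonly make (empty input, None, negative values) against a hypothetical parsing function:

# Error-guessing sketch: probe common developer mistakes.
# `parse_quantity` is a hypothetical function under test.
import pytest

def parse_quantity(text):
    # Stand-in implementation for illustration.
    if text is None or text.strip() == "":
        raise ValueError("empty quantity")
    value = int(text)
    if value < 0:
        raise ValueError("negative quantity")
    return value

@pytest.mark.parametrize("bad_input", [None, "", "   ", "-1"])
def test_guessed_failure_inputs_are_rejected(bad_input):
    # Each input comes from a guessed-mistakes list, not a specification.
    with pytest.raises(ValueError):
        parse_quantity(bad_input)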

Exploratory testing: informal (not pre-defined) tests are designed, executed, logged, and evaluated dynamically during test execution. The test results are used to learn more about the component or system, and to create tests for the areas that may need more testing.

Exploratory testing is most useful when there are few or inadequate specifications or significant time pressure on testing. It is also useful to complement other, more formal testing techniques.

Checklist-based testing: testers design, implement, and execute tests to cover test conditions found in a checklist. As part of analysis, testers create a new checklist or expand an existing checklist, but testers may also use an existing checklist without modification. Such checklists can be built based on experience, knowledge about what is important for the user, or an understanding of why and how software fails.

What is Smoke Testing?

Performed after a software build to ascertain that the critical functionalities of the program are working fine. It is executed before any detailed functional or regression tests are executed on the software build. The purpose is to reject a badly broken application, so that the QA team does not waste time installing and testing the software application.
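
A minimal smoke-suite sketch: a handful of fast checks on critical functionality, run on every new build before deeper testing. The application class is a hypothetical stand-in:

# Smoke test sketch. A badly broken build fails here immediately,
# so the team does not waste time running the full suite against it.

class App:
    # Stand-in for the application under test.
    def start(self):
        return True

    def login(self, user, password):
        return user == "admin"

    def home_page(self):
        return "<html>home</html>"

def test_smoke_app_starts():
    assert App().start()

def test_smoke_login_works():
    assert App().login("admin", "secret")

def test_smoke_home_page_renders():
    assert "home" in App().home_page()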
What is Sanity Testing?

Performed after receiving a software build with minor changes in code or functionality, to ascertain that the bugs have been fixed and no further issues have been introduced by these changes. The goal is to determine that the proposed functionality works roughly as expected. If the sanity test fails, the build is rejected to save the time and cost involved in more rigorous testing.

What is Ad hoc Testing?

It aims to break the system and is usually an unplanned activity.

It does not follow any test design techniques to create test cases; it is primarily performed when the testers' knowledge of the system under test is very high. Testers randomly test the application without any test cases or any business requirement document.

It is done randomly on any part of the application. The main aim of this testing is to find defects by random checking.

Ad hoc testing can be performed when there is limited time to do elaborate testing. Usually ad hoc testing is performed after the formal test execution.

Ad hoc testing will be effective only if the tester is knowledgeable about the system under test.
What is exploratory testing?

It is a simultaneous process of test design and test execution, all done at the same time.
What is Performance Testing?

It ensures software applications will perform well under their expected workload.

The features and functionality supported by a software system are not the only concern. A software application's performance, such as its response time, reliability, resource usage, and scalability, matters too. The goal of performance testing is not to find bugs but to eliminate performance bottlenecks.

What is Load Testing?

Load testing is a kind of performance testing which determines a system's performance under real-life load conditions. This testing helps determine how the application behaves when multiple users access it at the same time, as in the toy sketch below.
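
A toy load-test sketch using only the Python standard library: many concurrent simulated users call an operation and response times are summarized. A real load test would use a dedicated tool (e.g., JMeter or Locust) against the deployed system; the operation here is a stand-in:

# Toy load-test sketch: simulate concurrent users and measure
# response times with the standard library.
import time
from concurrent.futures import ThreadPoolExecutor

def user_action():
    start = time.perf_counter()
    time.sleep(0.01)  # stand-in for a real request to the system
    return time.perf_counter() - start

with ThreadPoolExecutor(max_workers=50) as pool:   # 50 concurrent "users"
    timings = list(pool.map(lambda _: user_action(), range(500)))

print(f"requests: {len(timings)}")
print(f"avg response: {sum(timings) / len(timings):.4f}s")
print(f"worst response: {max(timings):.4f}s")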

What is Stress Testing?

Stress testing is used to test the stability and reliability of the system. It mainly determines the system's robustness and error handling under extremely heavy load conditions.

What is severity and priority of a bug? Give some examples.

Priority: concerns the application from the business point of view. It answers: how quickly do we need to fix the bug? Or: how soon should the bug get fixed?

Severity: concerns the functionality of the application: how much the bug affects the functionality of the application.

Examples:

1. High priority and low severity: a company logo is not properly displayed on the website.

2. High priority and high severity: suppose you are doing online shopping and have filled in the payment information, but after submitting the form you get a message like "Order has been cancelled."

3. Low priority and high severity: a scenario in which the application crashes, but that scenario occurs only rarely.

4. Low priority and low severity: a typo such as "You have registered success" instead of "You have registered successfully."
What are dynamic testing and static testing?

Dynamic testing: requires the execution of the software.

Static testing techniques: rely on the manual examination (reviews) and automated analysis (static analysis) of the code or other project documentation, without execution of the code.

Almost any work product can be examined using static testing (reviews and/or static analysis), for example: specifications, including business requirements, functional requirements, and security requirements; epics, user stories, and acceptance criteria; architecture and design specifications; code; testware, including test plans, test cases, test procedures, and automated test scripts; user guides; web pages; contracts, project plans, schedules, and budgets; models, such as activity diagrams, which may be used for model-based testing.
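
A tiny static-analysis sketch using Python's ast module: the code under review is examined without being executed, which is the defining property of static testing. A real project would typically use established tools (e.g., pylint, flake8, or mypy):

# Tiny static-analysis sketch: inspect code WITHOUT executing it.
import ast

SOURCE = """
def divide(a, b):
    return a / b   # no guard against b == 0
"""

tree = ast.parse(SOURCE)
for node in ast.walk(tree):
    # Flag every division as a potential divide-by-zero risk to review.
    if isinstance(node, ast.BinOp) and isinstance(node.op, ast.Div):
        print(f"line {node.lineno}: division found - check divisor for zero")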

- Why is static testing important?

• Early defect detection and correction

• Development productivity improvements

• Reduced development timescales

• Reduced testing cost and time

• Lifetime cost reductions

• Fewer defects and improved communication

• Reviews can find omissions (for example, in requirements) which are unlikely to be found in dynamic testing.

What is the Work Product Review Process?

1- Planning:

- Defining the scope, which includes the purpose of the review and what documents or parts of documents to review

- Estimating effort and timeframe

- Identifying the review type, with roles, activities, and checklists

- Selecting the people to participate in the review and allocating roles

- Defining the entry and exit criteria for more formal review types (e.g., inspections)

- Checking that entry criteria are met (for more formal review types)

2- Initiate review:

- Distributing the work product (physically or by electronic means)

- Explaining the scope, objectives, process, roles, and work products to the participants

- Answering any questions that participants may have about the review

3- Individual review (i.e., individual preparation):

- Reviewing all or part of the work product

- Noting potential defects, recommendations, and questions

4- Issue communication and analysis:

- Communicating identified potential defects (e.g., in a review meeting)

- Analyzing potential defects, assigning ownership and status to them

- Evaluating and documenting quality characteristics

- Evaluating the review findings against the exit criteria to make a review decision (reject; major changes needed; accept, possibly with minor changes)

5- Fixing and reporting:

- Creating defect reports for those findings that require changes

- Fixing defects found (typically done by the author) in the work product reviewed

- Communicating defects to the appropriate person or team (when found in a work product related to the work product reviewed)

- Recording the updated status of defects (in formal reviews), potentially including the agreement of the comment originator

- Checking that exit criteria are met (for more formal review types)

- Accepting the work product when the exit criteria are reached
What are the roles and responsibilities in a formal review?

Author:

- Creates the work product under review

- Fixes defects in the work product under review (if necessary)

Management:

- Is responsible for review planning

- Decides on the execution of reviews

- Assigns staff, budget, and time

- Monitors ongoing cost-effectiveness

- Executes control decisions in the event of inadequate outcomes

Facilitator (often called moderator):

- Ensures effective running of review meetings (when held)

- Mediates, if necessary, between the various points of view

- Is often the person upon whom the success of the review depends

Review leader:

- Takes overall responsibility for the review

- Decides who will be involved and organizes when and where it will take place

Reviewers:

- May be subject matter experts, persons working on the project, stakeholders with an interest in the work product, and/or individuals with specific technical or business backgrounds

- Identify potential defects in the work product under review

- May represent different perspectives (e.g., tester, programmer, user, operator, business analyst, usability expert, etc.)

Scribe (or recorder):

- Collates potential defects found during the individual review activity

- Records new potential defects, open points, and decisions from the review meeting (when held)

What are Review Types?

Informal review (e.g., buddy check, pairing, pair review):

- Main purpose: detecting potential defects

- Possible additional purposes: generating new ideas or solutions, quickly solving minor problems

- Not based on a formal (documented) process

- May not involve a review meeting

- May be performed by a colleague of the author (buddy check) or by more people

- Results may be documented

- Varies in usefulness depending on the reviewers

- Use of checklists is optional

- Very commonly used in Agile development

Walkthrough:

- Main purposes: find defects, improve the software product, consider alternative implementations, evaluate conformance to standards and specifications

- Possible additional purposes: exchanging ideas about techniques or style variations, training of participants, achieving consensus

- Individual preparation before the review meeting is optional

- Review meeting is typically led by the author of the work product

- Scribe is mandatory

- Use of checklists is optional

- May take the form of scenarios, dry runs, or simulations

- Potential defect logs and review reports may be produced

- May vary in practice from quite informal to very formal

Technical review:

- Main purposes: gaining consensus, detecting potential defects

- Possible further purposes: evaluating quality and building confidence in the work product, generating new ideas, motivating and enabling authors to improve future work products, considering alternative implementations

- Reviewers should be technical peers of the author, and technical experts in the same or other disciplines

- Individual preparation before the review meeting is required

- Review meeting is optional, ideally led by a trained facilitator (typically not the author)

- Scribe is mandatory, ideally not the author

- Use of checklists is optional

- Potential defect logs and review reports are typically produced

Inspection:

- Main purposes: detecting potential defects, evaluating quality and building confidence in the work product, preventing future similar defects through author learning and root cause analysis

- Possible further purposes: motivating and enabling authors to improve future work products and the software development process, achieving consensus

- Follows a defined process with formal documented outputs, based on rules and checklists

- Uses clearly defined roles, which are mandatory, and may include a dedicated reader (who reads the work product aloud during the review meeting)

- Individual preparation before the review meeting is required

- Reviewers are either peers of the author or experts in other disciplines that are relevant to the work product

- Specified entry and exit criteria are used

- Scribe is mandatory

- Review meeting is led by a trained facilitator (not the author)

- Author cannot act as the review leader, reader, or scribe

- Potential defect logs and review reports are produced

- Metrics are collected and used to improve the entire software development process, including the inspection process

What are the Purpose and Content of a Test Plan?

A test plan outlines test activities for development and maintenance projects. Planning is influenced by the test policy and test strategy of the organization, the development lifecycles and methods being used, the scope of testing, objectives, risks, constraints, criticality, testability, and the availability of resources. It covers:

- Determining the scope, objectives, and risks of testing

- Defining the overall approach of testing

- Integrating and coordinating the test activities into the software lifecycle activities

- Making decisions about what to test, the people and other resources required to perform the various test activities, and how test activities will be carried out

- Scheduling test analysis, design, implementation, execution, and evaluation activities, either on particular dates (e.g., in sequential development) or in the context of each iteration (e.g., in iterative development)

- Selecting metrics for test monitoring and control

- Budgeting for the test activities

- Determining the level of detail and structure for test documentation (e.g., by providing templates or example documents)

Test Strategy:
