
Important questions for software testing.

Q.1) Explain the fundamental principles of software testing?

Ans:- The fundamental principles of testing are as follows:
1) The goal of testing is to find defects before customers find them.
2) Exhaustive testing is not possible; program testing can only show the presence of defects, never their absence.
3) Testing applies all through the software life cycle and is not an end-of-cycle activity.
4) Understand the reason behind the test.
5) Test the tests first.
6) Tests develop immunity and have to be revised constantly.
7) Defects occur in clusters, and testing should focus on those clusters.
8) Testing encompasses defect prevention.
9) Testing is a fine balance of defect prevention and defect detection.
10) Intelligent and well-planned automation is key to realizing the benefits of testing.
11) Testing requires talented, committed people who believe in themselves and work in teams.

Q.2) Explain in detail what is White Box testing?

Ans:- 1) White box testing examines the internal structure and working of a product rather than just its external functionality. It is also called glass box testing, open box testing and clear box testing.
2) White box testing takes into account the program code, code structure and internal design.
3) White box testing is classified into two types, namely static testing and structural testing.
4) Static testing requires only the source code of the product, not the binaries or executables.
5) Static testing does not involve executing the programs on a computer but involves select people going through the code to find out whether
● the code works according to the functional requirements,
● the code has been written in accordance with the design developed earlier in the project life cycle,
● the code for any functionality has been missed out, and
● the code handles errors properly.
There are multiple methods of static testing by humans. They are as follows:
1) Desk checking of the code
2) Code walkthrough
3) Code review
4) Code inspection
1) Desk checking:- i) Normally done manually by the author of the code, desk checking is a method to verify portions of the code for correctness.
ii) Such verification is done by comparing the code with the design or specifications to make sure that the code does what it is supposed to do, and does it effectively. This is the desk checking that most programmers do before compiling and executing the code. (A small illustration of the kind of defect such a check catches appears at the end of this answer.)
iii) Whenever errors are found, the author applies the corrections on the spot.
2) Code walkthrough:- i) This method and formal inspection are group-oriented methods. Walkthroughs are less formal than inspections.
ii) A walkthrough brings in multiple perspectives: a set of people look at the program code and raise questions for the author. The author explains the logic of the code and answers the questions.
iii) If the author is unable to answer some questions, he or she notes those questions and finds their answers later.

3) Formal inspection:- i) It is also called Fagan inspection. There are four roles in an inspection. First is the author of the code. Second is a moderator, who is expected to formally run the inspection according to the process. Third are the inspectors; these are the people who actually provide review comments on the code.
ii) Finally, there is a scribe, who takes detailed notes during the inspection meeting and circulates them to the inspection team after the meeting.
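To make the idea concrete, here is a minimal, hypothetical sketch (not from the original notes) of the kind of defect a desk check or review catches: code that compiles and runs but does not match its specification. The function and its specification are assumptions chosen only for illustration.

```python
# Hypothetical specification: return the average of a list of numbers and
# report an error for an empty list.
def average(values):
    # A desk check against the specification would catch the original defect:
    # dividing by len(values) raised ZeroDivisionError for an empty list
    # instead of handling the error as the specification requires.
    if not values:  # correction the author applies "on the spot"
        raise ValueError("values must not be empty")
    return sum(values) / len(values)
```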

Q.3) What is Black Box testing? Explain the basic needs of black box testing?
Ans:- i) Black box testing involves looking at the specifications and does not require examining the code of the program.
ii) Black box testing is done from the customer's viewpoint.
iii) The test engineer engaged in black box testing only knows the set of inputs and expected outputs and is unaware of how those inputs are transformed into outputs by the software.
Example: a lock and key.
We do not know how the levers in the lock work, but we only know the set of inputs and the expected outcome, that is, locking and unlocking.
iv) Black box testing does, however, require functional knowledge of the product to be tested.

Basic needs of black box testing


1) Black box testing is done based on requirements: it helps in identifying any incomplete or inconsistent requirements, as well as any issues that arise when the system is tested as a complete entity.
2) Black box testing addresses stated as well as implied requirements: not all requirements are stated explicitly; some are deemed implicit. For example, inclusion of the date, page number and footer may not be explicitly stated in a report generation requirement specification.
3) Black box testing encompasses the end-user perspective: since we want to test the behavior of the product from an external perspective, end-user perspectives are an integral part of black box testing.
4) Black box testing handles valid and invalid inputs: it is natural for users to make errors while using a product, hence it is not sufficient for black box testing to cover only valid inputs. Testing from the end-user perspective includes testing for these error or invalid conditions, as the sketch below shows.
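Below is a minimal sketch of black box test cases written with pytest. The `withdraw` function and its rules are assumptions made up for illustration; the tests exercise it purely through inputs and expected outputs, covering both valid and invalid conditions, exactly as a black box tester would.

```python
import pytest

# Hypothetical function under test; its rules are assumptions for illustration.
# A black box tester would not read this body; it is included only to make the
# example self-contained and runnable.
def withdraw(balance, amount):
    if amount <= 0 or amount > balance:
        raise ValueError("invalid withdrawal amount")
    return balance - amount

@pytest.mark.parametrize("balance, amount, expected", [
    (100, 40, 60),    # valid input: normal withdrawal
    (100, 100, 0),    # valid boundary value: withdraw the full balance
])
def test_valid_inputs(balance, amount, expected):
    assert withdraw(balance, amount) == expected

@pytest.mark.parametrize("balance, amount", [
    (100, 150),       # invalid input: amount exceeds the balance
    (100, -10),       # invalid input: negative amount
])
def test_invalid_inputs_are_rejected(balance, amount):
    with pytest.raises(ValueError):
        withdraw(balance, amount)
```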

Q.4) What is acceptance testing? Explain the different criteria for acceptance testing?
Ans:- 1) Acceptance testing is a phase after system testing that is normally done by the customers or representatives of the customer.
2) The customer defines a set of test cases that will be executed to qualify and accept the product.
3) These test cases are executed by the customers themselves to quickly judge the quality of the product before deciding to buy it.
4) Acceptance test cases are normally small in number and are not written with the intention of finding defects.
5) Sometimes acceptance test cases are developed jointly by the customer and the product organization. In this case the product organization has a complete understanding of what will be tested by the customer during acceptance testing.
Acceptance criteria
1) Acceptance criteria - product acceptance: during the requirements phase, each requirement is associated with acceptance criteria. It is possible that one or more requirements must be met to satisfy the acceptance criteria. Whenever there are changes to requirements, the acceptance criteria are modified and maintained accordingly.
Acceptance testing is not meant for executing test cases that have not been executed before. Thus the existing test cases are looked at, and certain categories of tests can be grouped to form the acceptance criteria.
2) Acceptance criteria - procedure acceptance: acceptance criteria can be defined based on the procedures followed for delivery. Examples of procedure acceptance could be documentation and release media. Some examples of acceptance criteria of this nature are as follows:
1) User, administration and troubleshooting documentation should be part of the release.
2) Along with the binary code, the source code of the product with build scripts is to be delivered on the CD.
3) A minimum of 20 employees are trained on product usage prior to deployment.
These procedural acceptance criteria are verified and tested as part of acceptance testing.

3) Acceptance criteria - service level agreements: service level agreements (SLAs) can become part of the acceptance criteria. Service level agreements are generally part of a contract signed by the customer and the product organization. The important contract items are taken and verified as part of acceptance testing. For example:
* All major defects that come up during the first three months of deployment need to be fixed free of cost.
* Downtime of the implemented system should be less than 0.1%.
* All major defects are to be fixed within 48 hours of reporting.
A small illustrative check of such criteria follows.
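The following is a small illustrative sketch, not part of the original notes, of how SLA-style acceptance criteria such as the ones above could be checked automatically. The observation window, measured downtime, and defect fix times are assumed values used only for demonstration.

```python
# Hypothetical SLA checks using assumed monitoring and defect-tracking data.
from datetime import timedelta

total_hours = 24 * 90          # assumed 90-day observation window
downtime_hours = 1.5           # assumed measured downtime in that window

downtime_percent = downtime_hours / total_hours * 100
assert downtime_percent < 0.1, f"SLA breached: downtime {downtime_percent:.3f}%"

# Assumed time taken to fix each major defect reported so far.
major_defect_fix_times = [timedelta(hours=h) for h in (10, 30, 47)]
assert all(t <= timedelta(hours=48) for t in major_defect_fix_times), \
    "SLA breached: a major defect took longer than 48 hours to fix"
```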

Q.5) Explain the different methodology for performance testing?

Ans:- Performance testing is complex and expensive due to the large resource requirements and the time it takes. Hence it requires careful planning and a robust methodology. A methodology for performance testing involves the following steps:
1) Collecting requirements
2) Writing test cases
3) Automating performance test cases
4) Executing performance test cases
5) Analyzing performance test results
6) Performance tuning
7) Performance benchmarking
8) Recommending the right configuration for the customer, that is, capacity planning
1) Collecting Requirements:-
The sources for deriving performance requirements are:
a) Performance compared to the previous release of the same product.
b) Performance compared to competitive products.
c) Performance compared to absolute numbers derived from actual needs.
d) Performance numbers derived from architecture and design.
There are two types of requirements for performance testing:
i) Generic requirements
ii) Specific requirements
Performance values that stay within acceptable limits as the load increases are denoted by the term "graceful performance degradation".
2) Writing Test Cases:- The next step involved in performance testing is writing test cases. As we briefly discussed earlier, a test case for performance testing should have the following details defined (a structured sketch follows this list).
a) List of operations or business transactions to be tested.
b) Steps for executing those operations/transactions.
c) List of product and OS parameters that impact performance testing, and their values.
d) Loading pattern.
e) Resources and their configuration.
f) The expected results.
g) The product versions/competitive products to be compared with, and related information such as their corresponding fields.
Performance test cases are repetitive in nature.
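As a sketch only (the field names are assumptions, not a prescribed format), the details listed above can be captured in a structured record so that every performance test case documents the same information:

```python
from dataclasses import dataclass, field

@dataclass
class PerformanceTestCase:
    """Illustrative structure mirroring the details listed above."""
    operations: list                # business transactions to be tested
    steps: list                     # steps for executing those operations
    parameters: dict                # product/OS parameters that impact performance
    loading_pattern: str            # e.g. "ramp up to 500 concurrent users"
    resources: dict                 # hardware/software resources and configuration
    expected_results: dict          # e.g. {"response_time_ms": 200}
    compared_against: list = field(default_factory=list)  # prior/competitive versions

# Example instance with assumed values.
tc = PerformanceTestCase(
    operations=["login", "search"],
    steps=["start 100 virtual users", "run the mix for 30 minutes"],
    parameters={"db_cache_mb": 512},
    loading_pattern="constant 100 concurrent users",
    resources={"cpu_cores": 8, "ram_gb": 16},
    expected_results={"response_time_ms": 200, "throughput_tps": 50},
)
```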
3) Automating Performance Test Cases:- Performance testing naturally lends itself to automation due to the following characteristics.
a) Performance testing is repetitive.
b) Performance test cases cannot be effective without automation; in most cases it is, in fact, almost impossible to do performance testing without automation.
c) The results of performance testing need to be accurate, and manually calculating the response time, throughput, and so on can introduce inaccuracy.
d) Performance testing takes into account several factors. There are far too many permutations and combinations of those factors, and it would be difficult to remember all of them and use them if the tests were done manually.
e) The analysis of performance results and failures needs to take into account related information such as resource utilization, log files, trace files, and so on that are collected at regular intervals. It is impossible to do this testing and perform the book-keeping of all related information and analysis manually.
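A minimal sketch of such automation is shown below: response time and throughput are computed from timed runs instead of being calculated manually. The `run_transaction` function is a placeholder assumption standing in for the real operation under test.

```python
import statistics
import time

def run_transaction():
    """Placeholder for the operation under test (assumed, for illustration)."""
    time.sleep(0.01)

def measure(iterations=100):
    # Automation removes the inaccuracy of manual timing and book-keeping.
    response_times = []
    start = time.perf_counter()
    for _ in range(iterations):
        t0 = time.perf_counter()
        run_transaction()
        response_times.append(time.perf_counter() - t0)
    elapsed = time.perf_counter() - start
    return {
        "avg_response_time_s": statistics.mean(response_times),
        "throughput_per_s": iterations / elapsed,
    }

print(measure())
```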
4) Executing Performance Test Cases:- Data corresponding to the following points needs to be collected while executing performance tests.
a) Start and end time of test case execution.
b) Log and trace/audit files of the product and operating system.
c) Utilization of resources on a periodic basis.
d) Configuration of all environmental factors.
e) The response time, throughput, latency, and so on, as specified in the test case documentation, at regular intervals.
5) Analyzing Performance Test Results:- This requires multi-dimensional thinking. It is the most complex part of performance testing, where product knowledge, analytical thinking, and statistical background are all absolutely essential.
The process of removing some unwanted values from a set is called noise removal (a small sketch follows below).
Storing the results of a request so that they can be presented quickly when the same request is made again is called caching.
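The noise-removal idea can be sketched as follows; the two-standard-deviation rule and the sample values are assumptions chosen only to illustrate discarding unwanted values (for example, a response distorted by an unrelated background job) before summarizing the results.

```python
import statistics

def remove_noise(samples, k=2.0):
    """Drop values more than k standard deviations from the mean (illustrative rule)."""
    mean = statistics.mean(samples)
    stdev = statistics.stdev(samples)
    return [s for s in samples if abs(s - mean) <= k * stdev]

response_times = [0.21, 0.22, 0.20, 0.23, 2.9, 0.22]  # 2.9 s is an outlier (noise)
clean = remove_noise(response_times)
print(statistics.mean(clean), max(clean))
```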
6) Performance Tuning:-
There are two steps involved in getting the optimum mileage from performance tuning. They are as follows:
a) Tuning the product parameters
b) Tuning the operating system parameters
The parameters in the operating system are grouped under different categories to explain their impact, as given below:
i) File system related parameters.
ii) Disk management parameters.
iii) Memory management parameters.
iv) Processor management parameters.
v) Network parameters.
7) Performance Benchmarking:- Performance benchmarking is about comparing the performance of product transactions with that of competitors. No two products have the same architecture, design, functionality, and code, and the customers and types of deployment differ on those aspects. End-user transactions/scenarios could therefore be one approach for comparison.
8) Capacity Planning:-
The configurations corresponding to short, medium, and long-term capacity requirements are called:
a) Minimum required configuration.
b) Typical configuration.
c) Special configuration.
There are two techniques that play a major role in capacity planning:
i) Load balancing.
ii) High availability.

Q.6) Explain the different phases of software development?

Ans:- The software development phases are as follows:
1) Requirements gathering and analysis
2) Planning
3) Design
4) Development or coding
5) Testing
6) Deployment and maintenance
1) Requirements gathering and analysis:- The requirements get documented in the form of a System Requirements Specification (SRS) document.
This document acts as a bridge between the customer and the designers.
There are two types of software:
a) Bespoke software.
b) General purpose software.
2) Planning:- A plan explains how the requirements will be met and by which time. The planning phase is applicable for both development and testing activities. At the end of this phase both the project plan and the test plan documents are delivered.
3) Design:- The main purpose of the design phase is to figure out how to satisfy the requirements enumerated in the System Requirements Specification (SRS) document.
There are two levels of design:
a) High-level design
b) Low-level design
The design step produces the System Design Description (SDD) document.
4) Development or coding:- The design is a blueprint for the actual coding to proceed. In addition to programming, this phase also involves the creation of product documentation.
5) Testing:- As the programs are coded, they are also tested.
6) Deployment and maintenance:-
Maintenance is made up of three types:
a) Corrective maintenance
b) Adaptive maintenance
c) Preventive maintenance

Q.7) Explain Bi-directional Integration and System Integration?

Ans:-
i) Bi-directional Integration:-
Bi-directional integration is a combination of the top-down and bottom-up integration approaches used together to derive the integration steps.
The individual components 1, 2, 3, 4 and 5 are tested separately, and bi-directional integration is performed initially with the use of stubs and drivers. Drivers are used to provide upstream connectivity: a driver is a function that redirects a request to some other component, while stubs simulate the behavior of a missing component. After the functionality of these integrated components is tested, the drivers and stubs are discarded. Once components 6, 7 and 8 become available, the integration methodology then focuses only on those components, as they are the ones that are new and need focus. This approach is also called "sandwich integration". A small sketch of a stub and a driver follows.
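Here is a minimal, hypothetical sketch of the stub and driver idea: a driver stands in for the not-yet-integrated upper layer and redirects a request to the component under test, while a stub simulates the behavior of a missing lower-layer component. The component names are assumptions for illustration only.

```python
# Hypothetical middle-layer component under integration test: "order_service".

def payment_stub(amount):
    """Stub: simulates the behavior of the missing payment component."""
    return {"status": "approved", "amount": amount}

def order_service(amount, payment=payment_stub):
    # Component under test; its downstream dependency is stubbed here.
    result = payment(amount)
    return "confirmed" if result["status"] == "approved" else "rejected"

def driver():
    """Driver: provides upstream connectivity by redirecting a request to the component under test."""
    assert order_service(100) == "confirmed"
    print("integration check passed")

driver()
```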
ii) System Integration:-
System integration means that all components of the system are integrated and tested as a single unit.
Integration testing here can be divided into two types:
a) Component or subsystem integration
b) Final integration testing or system integration
Instead of integrating component by component and testing, this approach waits till all components arrive, and one round of integration testing is done. This approach is also called big-bang integration. It reduces the effort and removes duplication in testing.
Big-bang integration is ideal for a product where the interfaces are stable and have few defects.
Advantages:-
1) It is a very convenient approach if the system is small; since the time taken by this approach is high, big systems can lead to more consumption of time.
2) Fault detection is very easy with this approach, considering a small system.
Disadvantages:-
1) Since all modules are coupled, if some fault arises in the system, it is difficult to spot.
2) Time taken for the entire software system is much more than with other integration testing approaches.
3) Time taken by this approach is more, as many modules are coupled together, and testing each module will take up more time.

Q.8) Explain Functional System Testing and Non-Functional System Testing?


Ans:- The differences between functional and non-functional system testing are as follows:
Testing Aspect           | Functional Testing                              | Non-Functional Testing
Involves                 | Product features and functionality              | Quality factors
Tests                    | Product behavior                                | Behavior and experience
Result conditions        | Simple steps written to check expected results  | Huge data collected and analyzed
Result varies due to     | Product implementation                          | Product implementation, resources, and configuration
Testing focus            | Defect detection                                | Qualification of product
Knowledge required       | Product and domain                              | Product, domain, design, architecture, statistical skills
Failures normally due to | Code                                            | Architecture, design, and code
Testing phase            | Unit, component, integration, system            | System
Test case repeatability  | Repeated many times                             | Repeated only in case of failures and for different configurations
Configuration            | One-time setup for a set of test cases          | Configuration changes for each test case

Q.9) Explain what is test reporting?

Ans:- Testing requires constant communication between the test team and other teams. There are three types of reports or communication that are required:
1) Test Incident Report
2) Test Cycle Report
3) Test Summary Report
1) Test Incident Report:- A test incident report is a communication that happens throughout the testing cycle as and when defects are encountered. Earlier, we described the defect repository. A test incident report is nothing but an entry made in the defect repository. Each defect has a unique ID, and this is used to identify the incident. The high-impact test incidents are highlighted in the test summary report. A sketch of such an entry follows.
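As an illustration only (the field names are assumptions, not a mandated schema), an entry in the defect repository might carry information such as:

```python
# Illustrative defect-repository entry; each incident is identified by a unique ID.
incident = {
    "id": "DEF-1042",            # unique defect ID used to identify the incident
    "summary": "Login fails when the user name contains a space",
    "severity": "High",          # high-impact incidents are highlighted in the test summary report
    "found_in_build": "2.3.0-rc1",
    "test_case": "TC-LOGIN-007",
    "status": "Open",
}
```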
2) Test Cycle Report:- Test projects take place in units of test cycles. A test cycle entails planning and running certain tests in cycles, each cycle using a different build of the product. As the product progresses through the various cycles, it is expected to stabilize. A test cycle report, produced at the end of the cycle, gives
a) A summary of the activities carried out during that cycle.
b) Defects that were uncovered during that cycle, based on their severity and impact.
c) Progress from the previous cycle to the current one in terms of defects fixed.
d) Outstanding defects that are yet to be fixed in this cycle.
e) Any variations observed in effort or schedule.
3) Test Summary Report:- The final step in a test cycle is to recommend the suitability of a product for release. A report that summarizes the results of a test cycle is the test summary report.
There are two types of test summary reports:
a) Phase-wise test summary, which is produced at the end of every phase.
b) Final test summary report.
A summary report should present
i) A summary of the activities carried out during the test cycle or phase.
ii) Variance of the activities carried out from the activities planned. This includes:
1) The tests that were planned to be run but could not be run.
2) Modifications to tests from what was in the original test specification.
3) Additional tests that were run.
4) Differences in effort and time taken between what was planned and what was executed.
5) Any other deviations from the plan.
iii) Summary of results, which should include:
1) Tests that failed, with any root cause descriptions.
2) Severity of impact of the defects uncovered by the tests.
iv) Comprehensive assessment and recommendation for release, which should include:
1) "Fit for release" assessment.
2) Recommendation of release.

Q.10) Explain Test Case Specification in the test process?

Ans:- Using the test plan as the basis, the testing team designs test case specifications, which then become the basis for preparing individual test cases. We have been using the term test cases freely throughout this book. Formally, a test case is nothing but a series of steps executed on a product, using a pre-defined set of input data, expected to produce a pre-defined set of outputs, in a given environment. Hence, a test case specification should clearly identify the following (a structured sketch follows the list):
1) The purpose of the test: This lists what feature or part the test is intended for. The test case should follow naming conventions that are consistent with the feature/module being tested.
2)Items being tested, along with their version/release numbers as appropriate.
3)Environment that needs to be set up for running the test case: This can include the
hardware environment setup, supporting software environment setup, setup of the product under
test.
4) Input data to be used for the test case: The choice of input data will depend on the test case itself and the technique followed in the test case. The actual values to be used for the various fields should be specified unambiguously. If automated testing is to be used, these values should be captured in a file and used, rather than having to enter the data manually every time.
5)Steps to be followed to execute the test: If automated testing is used, then these steps
are translated to the scripting language of the tool. If the testing is manual, then the steps are
detailed instructions that can be used by a tester to execute the test. It is important to ensure that
the level of detail in documenting the steps is consistent with the skill and expertise level of the
person who will execute the tests.
6) The expected results that are considered to be "correct results": These expected results can be what the user may see in the form of a GUI, report, and so on, and can be in the form of updates to persistent storage in a database or in files.
7) A step to compare the actual results produced with the expected results: This step should do an "intelligent" comparison of the expected and actual results to highlight any discrepancies. By "intelligent" we mean that the comparison should take care of "acceptable differences" between the expected results and the actual results, like terminal ID, user ID, system date, and so on.
8)Any relationship between this test and other tests: This can be in the form of
dependencies among the tests or the possibility of reuse across the tests.
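As a closing sketch (field names and values are assumptions for illustration), the eight items above can be gathered into one structured test case specification:

```python
test_case_spec = {
    "purpose": "Verify monthly report generation for the reporting feature",
    "items_under_test": {"reporting_module": "1.4.2"},              # with version numbers
    "environment": {"os": "Linux", "db": "PostgreSQL 15", "product_setup": "default"},
    "input_data": {"month": "2024-01", "region": "EMEA"},           # unambiguous values
    "steps": ["log in as analyst", "open Reports", "generate the monthly report"],
    "expected_results": {"row_count": 120, "footer_contains": "Page"},
    "comparison": "ignore acceptable differences such as terminal ID and system date",
    "related_tests": ["TC-REPORT-001"],                             # dependency or reuse
}
```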
