
Testing

Functional Testing: It checks if the software's features work as expected by verifying inputs, outputs, and system behavior against requirements. Example: checking login functionality.
I check login functionality by testing:
1. Logging in with correct and incorrect credentials.
2. Checking what happens if fields are left blank.
3. Ensuring the password is hidden while typing.
4. Testing the 'Forgot Password' option.
5. Making sure the logout works properly.
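The checks above can be sketched as simple automated assertions. This is a minimal illustration, assuming a hypothetical `authenticate()` function and rules invented for the example, not a real API:

```python
# Hypothetical login checker used to illustrate the functional checks above.
# The user store, function name, and messages are assumptions for illustration.
VALID_USERS = {"preeti": "S3cret!pass"}

def authenticate(username, password):
    """Return (ok, message) for a login attempt."""
    if not username or not password:
        return False, "Fields must not be blank"
    if VALID_USERS.get(username) == password:
        return True, "Login successful"
    return False, "Invalid credentials"

# 1. Correct and incorrect credentials
assert authenticate("preeti", "S3cret!pass") == (True, "Login successful")
assert authenticate("preeti", "wrong")[0] is False
# 2. Blank fields
assert authenticate("", "")[0] is False
```

Each assertion mirrors one checklist item: expected output for a given input, which is exactly what a functional test verifies.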

Non-Functional Testing: It checks aspects like performance, scalability, security, and usability, focusing on how the system performs rather than what it does. Example: testing how fast the login page loads under high traffic.
1) Performance Testing: Check how the website handles 1000 users logging in at the same time.
2) Usability Testing: Verify the login page is user-friendly and easy to navigate.
3) Compatibility Testing: Check the login functionality on different browsers and devices.
4) Reliability Testing: Ensure the login works consistently without crashes.

"Functional testing ensures the software features work as expected. Non-functional testing checks performance, security, and other quality aspects."

Types of Testing
1)Unit Testing: Testing individual components or functions of the software for correctness.
Example: Testing the function that validates the username and password format to ensure it
correctly identifies invalid input like empty fields or invalid characters.
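A unit test for that validator might look like this. The format rules (3-20 alphanumeric characters for the username, 8+ characters for the password) are assumptions made up for the example:

```python
import re

# Hypothetical format validators; the rules (alphanumeric username of
# 3-20 chars, password of 8+ chars) are assumptions for illustration.
def is_valid_username(username):
    return bool(re.fullmatch(r"[A-Za-z0-9_]{3,20}", username or ""))

def is_valid_password(password):
    return bool(password) and len(password) >= 8

# Unit tests exercise the validator in isolation, with no UI or database
assert is_valid_username("preeti_27")
assert not is_valid_username("")            # empty field rejected
assert not is_valid_username("bad name!")   # invalid characters rejected
assert is_valid_password("S3cret!pass")
assert not is_valid_password("short")       # too short
```

Because the function is tested alone, failures point directly at the validation logic rather than at the surrounding system.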

2)Integration Testing: Testing the interaction between different modules or components.


Eg. Testing whether the login form communicates correctly with the backend database to verify user credentials.
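An integration test can be sketched by wiring the login logic to a real (in-memory) database and exercising both layers together. The schema, function, and data are assumptions for illustration; real systems would store hashed passwords, not plaintext:

```python
import sqlite3

# Integration sketch: the "form" layer calls the database layer.
# Schema, data, and function names are illustrative assumptions;
# plaintext passwords are used only to keep the example short.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT PRIMARY KEY, password TEXT)")
conn.execute("INSERT INTO users VALUES (?, ?)", ("preeti", "S3cret!pass"))

def login(conn, username, password):
    row = conn.execute(
        "SELECT 1 FROM users WHERE username = ? AND password = ?",
        (username, password),
    ).fetchone()
    return row is not None

# The two modules (login logic + database) are verified together
assert login(conn, "preeti", "S3cret!pass") is True
assert login(conn, "preeti", "wrong") is False
```

Unlike the unit test above, a failure here could come from either layer or from the contract between them, which is precisely what integration testing is meant to expose.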

3)System Testing: Testing the complete system to verify that it meets the specified requirements. Eg. Testing the entire login flow, from entering the credentials to successfully logging in and being redirected to the homepage.

4)Acceptance Testing: Verifying the system meets the user's needs and requirements (e.g.,
User Acceptance Testing or UAT). Eg. Testing the login form to ensure it meets the user's
requirement, like allowing login only with correct credentials and displaying appropriate
error messages.

5)Functional Testing: Testing the functionality of the application, ensuring it works as per the
requirements. Eg. Verifying that the "Submit" button works when valid login details are
entered and logs the user in successfully.

6)Non-Functional Testing: Testing aspects like performance, security, usability, and compatibility. Eg. Checking if the login form loads within 2 seconds (Performance Testing) and ensuring it works across different devices (Compatibility Testing).

7)Regression Testing: Testing after changes (like updates or bug fixes) to ensure no new
issues are introduced. Eg. After fixing a bug in the password recovery process, testing the
login functionality again to make sure it still works without issues.

8)Smoke Testing: To verify that the basic, critical functions of the software are working after a new build or update. Eg. Quickly checking that the login page loads correctly, the username and password fields accept input, and the "Login" button works, allowing a user with valid credentials to log in successfully.

9)Sanity Testing: Checking if a specific functionality works as expected after a change. Eg. After a minor update, testing whether the login form still accepts correct credentials and logs the user in.
 Smoke Testing is performed to check if the system is stable enough for further testing, while Sanity Testing verifies that a specific functionality or change works as expected (it focuses on a specific area).

10)Performance Testing: Testing how the system performs under load (e.g., Load Testing,
Stress Testing).Eg. Testing the login form under heavy load to ensure it can handle 1000
users trying to log in at the same time.
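A toy load-test sketch of this idea: fire many concurrent "logins" at a stub function and measure the elapsed time. The stub, worker count, and timings are assumptions; real load testing would use a dedicated tool such as JMeter or Locust against the actual server:

```python
from concurrent.futures import ThreadPoolExecutor
import time

# Toy load-test sketch. fake_login() simulates server work; in a real
# performance test this would be an HTTP call to the login endpoint.
def fake_login(user_id):
    time.sleep(0.001)  # pretend the server takes ~1 ms per login
    return True

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=50) as pool:
    results = list(pool.map(fake_login, range(1000)))
elapsed = time.perf_counter() - start

assert all(results)          # every simulated login succeeded
assert len(results) == 1000  # all 1000 users were served
print(f"1000 logins completed in {elapsed:.2f}s")
```

The pattern scales: replace `fake_login` with a real request, then assert on error rate and response time against the performance requirement.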

11)Security Testing: Testing to identify vulnerabilities and ensure data protection. Eg. Attempting a SQL injection attack through the username and password fields to ensure the form is protected.

12)Compatibility Testing: Ensuring the software works across different browsers, devices, and operating systems. Eg. Checking if the login form displays and functions correctly across
multiple browsers like Chrome, Firefox, and Safari.

13)Exploratory Testing: Testing without predefined test cases, exploring the software to find
defects. Instead of following a set of scripted steps, testers use their knowledge, intuition,
and experience to find issues in the software Eg. Trying various inputs like special characters
or extremely long passwords in the username and password fields to discover any hidden
bugs or errors in the login form.

Black Box Testing and White Box Testing
1. Black Box Testing:
o In Black Box Testing, the tester focuses on testing the software’s functionality
without knowing the internal workings or code of the application.
o It tests the system from the user's perspective (input and output).
o Example: Testing a login form by entering a valid and invalid username and
password to see if it works correctly without knowing how the login process is
coded.
2. White Box Testing:
o In White Box Testing, the tester has knowledge of the internal code and
structure of the application. The testing is done by examining the code logic,
pathways, and system behavior.
o It involves testing individual functions, loops, or internal structures to ensure
they work as expected.
o Example: Checking the logic behind a login function to ensure it handles
correct passwords, error messages, and redirects properly, based on the
actual code.

Difference:
 Black Box Testing tests what the software does (functionality), while White Box
Testing tests how it works (internal code and structure).
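The distinction can be shown on one small function. `check_login()` and its rules are assumptions for illustration; the black-box tests use only inputs and observable outputs, while the white-box tests are chosen by reading the code to cover each branch:

```python
# Illustrative function; the name, rules, and messages are assumptions.
def check_login(username, password):
    if not username:
        return "error: username required"
    if password == "S3cret!pass":
        return "welcome"
    return "error: bad password"

# Black-box: behavior only, no knowledge of the implementation
assert check_login("preeti", "S3cret!pass") == "welcome"
assert check_login("preeti", "nope").startswith("error")

# White-box: one test per branch found by reading the code
assert check_login("", "anything") == "error: username required"  # branch 1
assert check_login("u", "S3cret!pass") == "welcome"               # branch 2
assert check_login("u", "x") == "error: bad password"             # branch 3
```

Notice the black-box tests would survive a complete rewrite of the function body, while the white-box tests are tied to its internal structure.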

Purpose of Regression Testing:


To ensure that changes, such as bug fixes or new features, do not break existing functionality
of the software. It helps maintain system stability and reliability after updates.
In an interview, you can say:
"The purpose of regression testing is to verify that new changes haven't affected the existing
functionality of the software."

Test Cases: Test cases are step-by-step instructions to check if a part of the software works correctly. They include what to test, how to test, and the expected result. For example, a test case for a login form might include steps like entering a valid username and password, with the expected result of a successful login. Another test case could cover invalid login attempts, expecting an error message to appear.
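Those test cases can be written as data and executed in a loop, a common table-driven pattern. `authenticate()` here is an illustrative stub standing in for the real login logic:

```python
# Table-driven sketch of the login test cases described above;
# authenticate() is an illustrative stub, not a real API.
def authenticate(username, password):
    return username == "preeti" and password == "S3cret!pass"

test_cases = [
    # (test_id, username,  password,       expected_result)
    ("TC01",   "preeti",   "S3cret!pass",  True),   # valid login
    ("TC02",   "preeti",   "wrongpass",    False),  # invalid password
    ("TC03",   "",         "S3cret!pass",  False),  # blank username
]

for test_id, user, pwd, expected in test_cases:
    actual = authenticate(user, pwd)
    assert actual == expected, f"{test_id} failed: got {actual}"
print("All test cases passed")
```

Keeping the cases in a table makes it cheap to add new inputs and keeps each row readable as "input, expected result", mirroring a written test case.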

Test Plan: A test plan is a document that describes the testing strategy, scope, objectives, resources, schedule, and activities for a testing process. It outlines what to test, how to test, who will test, and when testing will happen.
In simple words, you can say:
"A test plan is a document that explains how testing will be done. It covers what will be tested, how it will be tested, who will test it, and when testing will happen. For example, a test plan for a login form might include checking valid and invalid logins, the tools needed, the team doing the testing, and the schedule."

Steps to Write Test Cases:


1. Understand Requirements: Analyze the requirements or user stories to identify what
needs testing.
2. Define Test Case ID: Assign a unique identifier to each test case.
3. Write Test Case Title: Provide a clear and concise title describing the test.
4. Specify Preconditions: Mention any setup needed before starting the test.
5. Steps to Execute: List detailed steps to perform the test.
6. Expected Result: Clearly state the expected outcome after execution.
7. Postconditions: Describe the state of the system after the test is done (if needed).
8. Review and Update: Validate test cases for accuracy and update as necessary.

Eg. Test Case Title: Verify successful login with valid credentials
Preconditions:
 The user must have a valid account with a username and password.
 The application should be up and running.
Steps to Execute:
1. Open the application or website.
2. Navigate to the login page.
3. Enter a valid username in the "Username" field.
4. Enter a valid password in the "Password" field.
5. Click on the "Login" button.
Expected Result:
 The user should be successfully logged in and redirected to the
homepage/dashboard.
Postconditions:
 The user should be logged into the system and able to access their dashboard.
Prioritizing Test Cases:
1. Risk and Impact: Test high-risk areas or critical features first.
2. Frequency of Use: Test frequently used features or modules.
3. Complexity: Prioritize complex or newly implemented features.
4. Dependencies: Test cases that have dependencies on other features should be prioritized.
5. Business Value: Focus on test cases that align with the application's most important
functions.
6. Test Coverage: Ensure all areas of the application are covered by test cases.

A test scenario is a high-level description of a feature or functionality to be tested. It outlines the specific situation or condition under which the system is tested. Unlike test cases, which are more detailed, test scenarios focus on the general flow of testing.
For example, a test scenario for a login form could be 'Test successful login with valid credentials.' This scenario ensures the login feature works as expected, but it doesn't go into the detailed steps or expected results like a test case would.

Test Case vs Test Scenario:


Test Case: A detailed set of steps, inputs, and expected results to verify a specific feature or
functionality. It focuses on a particular aspect of the system.
Test Scenario: A high-level description of what to test. It represents a situation or flow that
needs to be tested, but it doesn’t go into detailed steps or expected results.
"A test case is a detailed document with steps and expected results, while a test scenario is a
broad situation or feature that needs testing."

Bug and Defect:


 Bug: A bug is a mistake or error in the code that causes the software to behave
incorrectly.
o Example: When you click the "Submit" button on a form, nothing happens
because of a coding error in the button's function.
 Defect: A defect is any problem where the software doesn’t work as expected or
doesn't meet the requirements.
o Example: A login form requires a minimum of 8 characters for a password, but
it only accepts 6 characters. This is a defect because the software is not
following the requirement.
 "A bug is a coding mistake that causes the software to act unexpectedly, like a button
not working. A defect is any issue where the software doesn’t meet the expected
behavior or requirements, like a password field that doesn’t accept the correct length
of characters."

 Error
An error is a human mistake made during the development or design of software. It
occurs when a developer writes incorrect code or implements wrong logic, leading to
unexpected results or failures in the software.
"An error is a mistake made by a developer while coding or designing the software,
which can later cause bugs or defects in the application."

Explain how to handle changes in project requirements during testing


During testing, if there are changes in the project requirements, I handle it by first reviewing
the updated requirements with the team to fully understand the changes. Then, I update the
test cases, test scenarios, and test plans accordingly to align with the new requirements. I
also communicate with developers and stakeholders to ensure everyone is on the same
page. If the changes are significant, I might re-prioritize the testing effort to focus on the
areas most affected by the changes, and perform regression testing to ensure nothing else is
impacted.

Explain how you handle invalid bug reports


"When I receive an invalid bug report, I first review the details provided to understand the
issue. I try to reproduce the bug in the environment and check the steps outlined in the
report. If I can't reproduce the issue or the issue is due to user error (e.g., incorrect input), I
communicate with the reporter to clarify the steps and understand their environment. If it's
determined to be invalid, I mark the bug as 'invalid' and provide clear feedback, explaining
why the report was not valid, so the reporter understands and can avoid similar issues in the
future."
Manual Testing is the process of testing software manually without using any
automation tools. Testers execute test cases step by step, check the software for defects,
and report any issues they find.
Advantages of Manual Testing:
1. No Need for Automation Skills: Testers don't need to know scripting or
programming.
2. Flexible: Can be used for exploratory testing, ad-hoc testing, and testing for complex
features that are difficult to automate.
3. Human Judgment: Testers can use their intuition and experience to find issues that
might not be caught by automated tests.
4. Better for Short-Term Projects: In cases where automation isn't cost-effective or
necessary.
Disadvantages of Manual Testing:
1. Time-Consuming: Manual testing takes more time as each test case must be
executed individually.
2. Prone to Human Error: Testers might miss issues or repeat tests incorrectly.
3. Not Scalable: As the application grows, it becomes harder and more time-consuming
to manually test all features.
4. Repetitive: Re-running the same tests multiple times (especially for regression
testing) can be tiring and inefficient.

Automation Testing
Advantages of Automation Testing:
1. Faster Execution: Automates repetitive tests, saving time compared to manual
testing.
Example: Running a regression suite after every build.
2. Improved Accuracy: Reduces human errors during testing.
3. Reusability: Test scripts can be reused across different versions of the software.
4. Supports Continuous Testing: Useful in CI/CD pipelines for frequent and quick
deployments.
5. Increased Coverage: Allows testing large datasets and multiple scenarios.

Disadvantages of Automation Testing:


1. High Initial Cost: Requires investment in tools and skilled resources.
2. Limited to Predefined Scenarios: Cannot handle exploratory or ad-hoc testing well.
3. Maintenance: Test scripts need regular updates when the application changes.
4. Not Suitable for Short-term Projects: High setup cost isn’t justified for small projects.
5. Tool Limitations: Some tools may not support all technologies or platforms.

Manual Testing vs Automated Testing


Manual Testing is when testers manually execute test cases without using any automation
tools. The tester checks the application by interacting with it just like an end user to find
bugs or issues.
Automated Testing is when tests are executed using automation tools or scripts. It allows
tests to be run automatically, without human intervention, and is used to check repetitive
tasks or large test suites efficiently.
Key Differences:
 Manual Testing: Done by testers, more time-consuming, and prone to human error.
 Automated Testing: Done using tools/scripts, faster, and reduces human error but
requires initial setup time and coding skills.
In an interview, you can explain:
"Manual testing is where testers check the software manually, while automated testing uses
tools or scripts to run tests automatically. Automated testing is faster for repetitive tasks, but
manual testing is useful for exploratory testing or when automation isn't feasible."

Software Testing Life Cycle (STLC)


"The Software Testing Life Cycle (STLC) is a series of steps or phases that testers follow to
ensure the quality of the software. It typically includes the following stages:
1. Requirement Analysis: Understanding the requirements and deciding what needs to
be tested.
2. Test Planning: Creating a plan, defining the scope, and deciding which resources and
tools will be needed for testing.
3. Test Design: Writing test cases and preparing the test data based on requirements.
4. Test Execution: Running the tests and checking whether the software behaves as
expected.
5. Defect Reporting: Logging any issues or bugs found during testing.
6. Test Closure: After testing, the results are reviewed, reports are generated, and the
testing process is concluded.
These steps help ensure that the software is thoroughly tested and free from critical issues
before release."

SDLC
Stage 1: Requirement Analysis
This stage involves gathering and understanding the customer's needs. Senior team
members, business analysts, and stakeholders define the product's objectives, users, and
potential risks. A Software Requirement Specification (SRS) document is created for clear
communication.
Stage 2: Defining Requirements
Once the analysis is complete, the requirements are documented in the SRS, reviewed, and
approved by stakeholders to guide the development process.
Stage 3: Designing the Software
This phase focuses on creating the software design based on the collected requirements. It
transforms the gathered knowledge into a blueprint for the product.
Stage 4: Developing the Project
The actual coding begins. Developers follow guidelines to implement the design using
programming languages and tools.
Stage 5: Testing
Once the code is developed, it's tested for functionality and quality. This includes various
tests like unit, integration, system, and acceptance testing to ensure it meets the
requirements.
Stage 6: Deployment
After testing, the software is deployed for use. It may be released as-is or with
improvements based on feedback.
Stage 7: Maintenance
Post-deployment, the software enters the maintenance phase where issues and updates are
addressed to ensure it continues to meet users' needs.
Waterfall Model
"The Waterfall Model is a traditional, linear approach to software development where each
phase must be completed before moving on to the next. It follows a clear, step-by-step
sequence, like a waterfall flowing from top to bottom.
The main stages are:
1. Requirements Gathering – All requirements are collected and documented.
2. System Design – A detailed design is created based on the requirements.
3. Implementation – The actual coding or development begins.
4. Testing – After development, the software is tested for bugs and issues.
5. Deployment – Once testing is complete, the software is deployed for use.
6. Maintenance – After deployment, the software is maintained and updated as
needed.
The key idea is that you don’t go back to a previous stage once it’s completed. It's most
suitable for projects with well-defined, stable requirements that won’t change much during
the development process."

V-Model
"The V-Model is a software development approach where each step of development is
paired with a corresponding testing step.
For example, as soon as the design phase starts, testers begin planning integration testing
for that design. When coding happens, testers immediately start doing unit testing.
The key point is that testing isn’t left until the end but happens at each stage of
development. This helps identify problems early, making the development process smoother
and less costly.
So, in the V-Model, development and testing go hand in hand, ensuring the software is
tested throughout, not just at the end."

Agile Model
"The Agile Model is a flexible and iterative approach to software development. Instead of
following a strict, step-by-step process like Waterfall, Agile breaks the project into smaller
parts called sprints (short cycles, usually lasting 1-4 weeks).
In each sprint:
1. A small piece of the software is developed.
2. It is tested and reviewed by the customer.
3. Feedback is gathered, and improvements are made in the next sprint.
The main benefit of Agile is that it allows for quick changes and continuous feedback, so the
software can evolve based on the customer's needs throughout the project. It’s ideal for
projects where requirements may change or evolve over time."

Incremental Model
In simple words, the Incremental Model is a way of developing software in small pieces,
called increments.
 First, a basic version of the software is built with essential features.
 Then, in each step or increment, new features are added and tested.
 After each increment, the software is updated, and users can give feedback on what
they need or want to change.
This model allows the software to be used and improved bit by bit, rather than waiting until
everything is complete. It’s helpful because it lets you deliver parts of the software early and
make adjustments along the way.

Spiral Model is a combination of iterative development and the systematic approach of the Waterfall model. It focuses on risk management and involves repeating cycles, or spirals, of development.
Here’s how it works:
1. Planning: The project’s goals and requirements are gathered.
2. Risk Analysis: Identify and analyze potential risks at each phase.
3. Engineering: Develop the product in smaller, iterative parts.
4. Evaluation: Review progress, get feedback, and decide if the product should be
released or if more development is needed.
Each cycle or spiral goes through these steps, allowing for continuous refinement and risk
management. The project evolves with feedback at each spiral, making it flexible to changes
and minimizing risks throughout the process. It’s especially useful for large and complex
projects where there are high risks.
RAID Model is a project management tool used to track and manage project risks, assumptions, issues, and dependencies. RAID stands for:
1. Risks: Identifying potential problems or challenges that could affect the project’s
success.
2. Assumptions: Documenting assumptions made during the project, like availability of
resources or client feedback.
3. Issues: Listing current problems that need to be addressed immediately.
4. Dependencies: Noting any external factors or teams the project depends on for
successful completion.
The RAID model helps project managers monitor and track the health of a project, ensuring
all potential risks, issues, and dependencies are managed efficiently throughout the project’s
lifecycle.

How JIRA Works (In Simple Words):


JIRA is a tool used to track and manage tasks, bugs, and projects. Here's how it works:
1. Create Issues:
o You create an "issue" to represent a task, bug, or feature.
o Example: A developer reports a login bug and assigns it to the testing team.
2. Assign Tasks:
o Issues can be assigned to specific team members.
o Example: A tester is assigned to verify the bug and retest after it's fixed.
3. Track Progress:
o Issues move through different stages like To Do, In Progress, and Done.
o Example: The bug moves from "Open" to "In Progress" when the developer
starts fixing it.
4. Add Details:
o You can add descriptions, screenshots, comments, and deadlines to the issue.
o Example: The tester adds error details and a screenshot of the issue.
5. Monitor Workflow:
o JIRA provides dashboards and reports to monitor project progress.
o Example: The manager checks the bug status and ensures deadlines are met.
For the Interview:
"JIRA helps teams create, assign, and track tasks or bugs. It organizes work into stages, adds
details for clarity, and provides dashboards to monitor progress."

Selenium:
1. What is Selenium?
o Selenium is an open-source automation testing tool used for automating web
browsers. It supports multiple programming languages like Java, Python, C#,
and Ruby for writing test scripts.
2. What are the different components of Selenium?
o Selenium WebDriver: It is the core component that interacts with web
browsers and executes commands.
o Selenium IDE: A record-and-playback tool for creating tests without writing
code (mostly for beginners).
o Selenium Grid: It allows you to run tests on multiple machines and browsers
simultaneously.
3. What is the difference between Selenium and QTP?
o Selenium is an open-source tool for automating web applications, while QTP
(Quick Test Professional) is a commercial tool for automating both desktop
and web applications. Selenium supports multiple browsers and platforms,
while QTP is primarily for Windows-based applications.
4. What are the advantages of using Selenium?
o Open-source and free.
o Supports multiple programming languages (Java, Python, C#, etc.).
o Compatible with multiple browsers (Chrome, Firefox, IE, Safari).
o Can be integrated with testing frameworks like TestNG, JUnit.
o Supports parallel test execution with Selenium Grid.
5. What is WebDriver in Selenium?
o WebDriver is an interface in Selenium that provides a programming interface
to interact with browsers. It controls the browser by simulating user actions
like clicks, text input, etc., and retrieves results.
6. How do you handle pop-ups in Selenium?
o Selenium handles pop-ups using the Alert interface. You can accept, dismiss,
retrieve the text, and handle input in pop-ups (alert boxes) using methods like
alert.accept(), alert.dismiss(), and alert.getText().
7. What is the difference between findElement() and findElements() in Selenium?
o findElement() returns a single WebElement that matches the locator.
o findElements() returns a list of WebElements that match the locator, or an
empty list if no element is found.
8. What are locators in Selenium?
o Locators are used to find elements on a web page. Common locators include:
 ID
 Name
 Class Name
 Tag Name
 Link Text
 Partial Link Text
 CSS Selector
 XPath
9. How do you handle dynamic elements in Selenium?
o You can handle dynamic elements using:
 Explicit Waits (e.g., WebDriverWait)
 Fluent Waits
 XPath with dynamic attributes (e.g., using contains() for dynamic ID)
10. What is the difference between driver.close() and driver.quit()?
o driver.close() closes the current browser window.
o driver.quit() closes all the browser windows and ends the WebDriver session.
These questions will help assess your understanding of the basic concepts and capabilities of
Selenium.
1. Basics of QA
 What is Quality Assurance (QA)?
 Explain the difference between QA and Testing.
 What is the software testing life cycle (STLC)?
 What is the difference between functional and non-functional testing?
2. Testing Techniques
 Explain black-box, white-box, and gray-box testing.
 How do you write test cases? Can you provide an example?
 What are the components of a good test case?
 How do you prioritize test cases?
3. Bug Tracking
 What is the difference between a bug, defect, and error?
 How do you handle invalid bug reports?
 What tools have you used for bug tracking?
4. Testing Types
 What are smoke and sanity testing?
 Explain regression testing and when you perform it.
 What is exploratory testing?
 What is system testing?
5. Agile Testing
 What is your experience with Agile methodology?
 How does QA fit into an Agile environment?
 What is the difference between Agile and Waterfall testing approaches?
6. Automation Testing (if applicable)
 Have you worked on automation testing? If yes, which tools?
 What are the advantages of automation testing?
 Explain the difference between manual and automation testing.
7. Scenario-Based Questions
 How would you test a login functionality?
 How do you test an e-commerce website?
 What would you do if you find a critical bug just before release?
8. Tool Knowledge
 What testing tools have you worked with?
 Explain your experience with tools like Selenium, JIRA, or TestNG.
9. Problem-Solving
 How do you handle changes in requirements during testing?
 How do you ensure that your test coverage is adequate?
10. Banking Domain Knowledge (specific to SBI SO IT)
 What challenges do you foresee in testing banking applications?
 What are the critical areas to test in a banking app (e.g., security, transactions, etc.)?
 How do you test payment gateways?
11. Behavioral Questions
 Describe a situation where you found a critical defect. How did you handle it?
 How do you manage deadlines while ensuring quality?
 Have you ever disagreed with a developer or team member? How did you resolve it?
Tips:
 Be prepared with examples from your experience at Deloitte or other projects.
 Practice scenario-based and technical questions.
 Brush up on banking domain knowledge for relevance.
