STA Lab Record
INDIRA INSTITUTE OF ENGINEERING & TECHNOLOGY
Pandur, Thiruvallur – 631 203
V SEMESTER / III YEAR
2024-2025
EX. NO: 1 DEVELOP THE TEST PLAN FOR TESTING AN E-COMMERCE WEB/MOBILE APPLICATION (www.amazon.com)
DATE:
AIM:
To develop the test plan for testing an e-commerce web/mobile application (www.amazon.com).
PROCEDURE:
1. Objective
The objective is to validate that the Amazon e-commerce website functions as
expected across all features, including product search, shopping cart, order
processing, payment systems, and user accounts.
The goal is to identify any bugs or usability issues that could affect the user
experience or business operations.
2. Scope
In Scope: Product browsing, filtering, adding items to the cart, checkout process,
payment gateways, order confirmation, user account creation, and customer service
interactions.
Out of Scope: Backend inventory management, internal Amazon warehouse
systems, and third-party logistics systems.
3. Test Methodology
Functional Testing: Verify that the features work as expected.
Usability Testing: Ensure the site is user-friendly and intuitive.
Performance Testing: Check that the website performs well under different loads.
Security Testing: Protect customer data and transactions from vulnerabilities.
Cross-browser Testing: Ensure the website works consistently across different
browsers and devices.
4. Approach
Test scenarios will be created based on user stories and requirements.
Test cases will be written for each scenario, and manual and automated tests will
be executed.
Regression testing will be performed whenever there are changes or new features
added to the system.
5. Assumptions
Users will have stable internet connections.
Payment gateways will have real-time connectivity with banks.
Browsers used will be up to date with JavaScript enabled.
6. Risks
Unavailability of test environments.
Delays in receiving updates from third-party payment services.
High traffic during major sales events could impact performance testing.
7. Mitigation Plan or Contingency Plan
If test environments are unavailable, prioritize testing on the production replica.
Work with payment service providers to set up a mock testing environment.
Schedule performance testing during non-peak hours to reduce server load.
8. Roles and Responsibilities
Test Lead: Oversees the testing activities, assigns tasks, and manages timelines.
Test Engineers: Create and execute test cases, report defects.
Automation Engineers: Develop and maintain automated test scripts.
Product Owners: Clarify requirements and validate acceptance criteria.
9. Schedule
Week 1-2: Requirement analysis and test planning.
Week 3-4: Test case creation and review.
Week 5-6: Test execution (manual and automated).
Week 7: Bug fixes and retesting.
Week 8: Final regression testing and sign-off.
10. Defect Tracking
Defects will be logged and tracked using a tool like JIRA.
Each defect will be assigned a severity level and status, including Open, In
Progress, Resolved, and Closed.
Weekly review meetings will be held to discuss and prioritize the defects.
11. Test Environment
Test environments will replicate the production setup with real-time product
catalogs.
Environments include devices like desktops, smartphones, and tablets to test
responsiveness.
12. Entry and Exit Criteria
Entry Criteria: All test cases are created, test environments are set up, and test
data is ready.
Exit Criteria: All critical defects are resolved, and all planned test cases have been
executed successfully.
13. Test Automation
Automate regression test cases for critical features like login, search, checkout, and
order tracking.
Use tools like Selenium or TestNG for automated testing on multiple browsers.
14. Effort Estimation
Manual Testing: 300 hours (Functional + Regression).
Automation Testing: 200 hours (Script Development + Execution).
Performance Testing: 100 hours.
15. Test Deliverables
Test Plan document.
Test Cases and Test Scripts.
Test Summary Report.
Defect Report and Root Cause Analysis.
Final Test Sign-off document.
16. Template Summary
Objective: To ensure that Amazon's e-commerce platform is user-friendly, secure,
and meets business requirements.
Scope: Includes all front-end functionalities and excludes back-end logistics.
Methodology: Functional, usability, performance, and security testing.
Approach: Combination of manual and automated testing.
Risks and Mitigation: Address test environment and third-party integration issues.
Roles and Responsibilities: Clearly defined tasks for the testing team.
Defect Tracking: Organized defect tracking using tools like JIRA.
Automation: Focus on automating regression and repetitive tests.
RESULT:
Thus the test plan for testing an e-commerce web/mobile application
(www.amazon.com) is created successfully.
EX. NO: 2 DESIGN THE TEST CASES FOR TESTING THE E-COMMERCE APPLICATION
DATE:
AIM:
To design the test cases for testing the e-commerce application.
PROCEDURE:
The user should be able to navigate to all the pages in the website
There should be a fallback page for any page load errors
Verify that all the links and banners work properly
Search results should be displayed with the most relevant item being shown
first
All data related to the product – title, price, images, and description are all
visible clearly
Maintain a session for each user and verify that the session times out after a
period of inactivity
TEST CASES
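Sample test cases derived from the checks above (IDs, inputs, and expected results are illustrative):

TC-01: Page Navigation
  Steps: Click every navigation link and menu item on the site.
  Expected Result: Each page loads without errors.

TC-02: Page Load Error Fallback
  Steps: Request a non-existent or failing page.
  Expected Result: A fallback/error page is shown instead of a blank page.

TC-03: Links and Banners
  Steps: Click all links and banners on the home page.
  Expected Result: Each link/banner opens the correct target page.

TC-04: Search Relevance
  Steps: Search for a known product keyword.
  Expected Result: Results are displayed with the most relevant item first.

TC-05: Product Details
  Steps: Open a product page.
  Expected Result: Title, price, images, and description are all clearly visible.

TC-06: Session Timeout
  Steps: Log in and leave the session idle beyond the timeout period.
  Expected Result: The session expires and the user is asked to log in again.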
RESULT:
Test cases for testing the e-commerce application have been created successfully.
EX. NO: 3 TEST THE E-COMMERCE APPLICATION AND REPORT
THE DEFECTS IN IT
DATE:
AIM:
To test the e-commerce application and report the defects in it.
PROCEDURE:
Testing Scenarios:
1. User Registration:
Create a new account with valid and invalid email formats.
Test the password strength requirements.
Check for proper error messages when registration fails.
2. Product Browsing:
Browse different product categories.
Verify that product details (price, description, images) are displayed
correctly.
Check for any broken or missing images.
3. Adding Items to Cart:
Add items to the cart and verify that the cart updates accurately.
Test adding items with different quantities.
4. Cart Functionality:
Update item quantities in the cart and ensure the cart total updates
accordingly.
Remove items from the cart and verify that the cart updates properly.
5. Checkout Process:
Proceed through the checkout process as both a registered and guest
user.
Test different payment options (credit card, PayPal, etc.).
Ensure that tax, shipping, and discounts are calculated correctly.
6. User Account Management:
Test updating account information (name, email, password).
Check for proper validation when changing account information.
Verify that users can log in and log out successfully.
7. Search Functionality:
Search for products using different keywords.
Check that the search results are relevant and displayed correctly.
8. Responsive Design:
Test the application on various devices and screen sizes to ensure
responsive design.
9. Security Testing:
Test for SQL injection and other common security vulnerabilities.
Defect Report
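A sample defect report entry, with illustrative values based on the cart scenario above:

Defect ID: DEF-001
Summary: Cart total does not update after changing item quantity
Module: Cart Functionality
Severity: Major
Priority: High
Steps to Reproduce:
1. Add an item to the cart.
2. Change the item quantity in the cart.
3. Observe the cart total.
Expected Result: The cart total updates to reflect the new quantity.
Actual Result: The cart total still shows the old amount.
Status: Open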
RESULT:
The defects in the e-commerce application are reported successfully.
EX. NO: 4 DEVELOP THE TEST PLAN AND DESIGN THE TEST CASES
FOR AN INVENTORY CONTROL SYSTEM
DATE:
AIM:
To develop the test plan and design the test cases for an inventory control
system.
Test Plan for an Inventory Control System:
1. Introduction
The Inventory Control System is designed to efficiently manage inventory,
track stock levels, and ensure accurate stock information. This test plan outlines the
testing approach, scope, objectives, and methodologies to verify the functionality,
performance, and usability of the system.
2. Objectives
The primary objectives of this test plan are to:
Validate that the Inventory Control System meets functional requirements.
Verify the accuracy and consistency of inventory tracking.
Assess system performance under varying loads.
Ensure the system's security mechanisms are effective.
Evaluate the system's usability and user-friendliness.
3. Test Scope
The testing will cover the following aspects of the Inventory Control System:
User authentication and authorization.
Adding, updating, and deleting products in the inventory.
Monitoring stock levels and receiving alerts for low stock.
Generating various inventory reports.
User interfaces for different user roles (admin, manager, employee).
4. Test Environment
Hardware: Servers, workstations, mobile devices for testing on different
platforms.
Software: Inventory Control System build, web browsers, databases.
Network: LAN/WAN with various network conditions (latency, bandwidth).
Test Data: Realistic inventory data and user scenarios for testing.
5. Test Data
Test data will include a range of products, quantities, prices, and suppliers.
Both normal and boundary test cases will be created to cover various scenarios.
Data will be structured to validate different calculations and reports.
6. Types of Testing
Unit Testing: Testing individual modules and functions.
Integration Testing: Testing interactions between different system components.
Functional Testing: Validating functional requirements.
Performance Testing: Evaluating system response times and resource usage.
Security Testing: Checking for vulnerabilities and unauthorized access.
Usability Testing: Assessing user-friendliness and navigation.
7. Defect Management
Defects will be logged using a defect tracking tool.
Defects will be classified by severity (critical, major, minor) and priority.
Defects will be retested after resolution.
8. Test Schedule
Testing Phases and Estimated Timeline:
Unit Testing: 1 week
Integration Testing: 2 weeks
Functional Testing: 2 weeks
Performance Testing: 1 week
Security Testing: 1 week
Usability Testing: 1 week
9. Risk Assessment
Identified Risks: Data loss, system downtime, security breaches.
Mitigation Strategies: Regular data backups, redundancy, security protocols.
10. Test Deliverables
The following deliverables will be produced during the testing process:
Test cases document.
Test execution results and defect reports.
Performance testing results and analysis.
Usability testing observations and feedback.
11. Exit Criteria
All high-priority defects are resolved.
Key performance indicators meet predefined targets.
All test cases are executed and passed.
RESULT:
Thus the test plan and test cases for an inventory control system have been
created successfully.
EX. NO: 5 EXECUTE THE TEST CASES AGAINST A CLIENT-SERVER OR DESKTOP APPLICATION AND IDENTIFY THE DEFECTS
DATE:
AIM:
To execute the test cases against a client-server or desktop application and
identify the defects.
TEST CASES:

ID: TC-1
Action: Login Validation
Steps: Enter valid username/password
Input Data: Username: admin; Password: pass123
Expected Result: Login should succeed with valid credentials
Actual Result: Login succeeded
Status: Pass
Defect Details: -

ID: TC-2
Action: Login Validation
Steps: Enter invalid username/password
Input Data: Username: invalid; Password: wrong123
Expected Result: Login should fail with an error message
Actual Result: Login failed with error: "Invalid Credentials"
Status: Pass
Defect Details: -

ID: TC-3
Action: Invoice Creation
Steps: Enter invoice details
Input Data: Invoice No: 1234; Date: 25/11/2024
Expected Result: Invoice should be saved successfully
Actual Result: Save button is unresponsive
Status: Fail
Defect Details: Defect ID: DEF-001; Severity: Major; Save button not functioning

ID: TC-4
Action: Stock Check
Steps: Enter item details
Input Data: Item: Laptop
Expected Result: Stock details should be displayed automatically
Actual Result: Stock details not displayed
Status: Fail
Defect Details: Defect ID: DEF-002; Severity: Critical; Stock check functionality broken

ID: TC-5
Action: Quantity Input
Steps: Enter quantity for an item
Input Data: Quantity: 5000
Expected Result: Quantity should be accepted
Actual Result: Quantity is accepted
Status: Pass
Defect Details: -

ID: TC-6
Action: Price Calculation
Steps: Select item and quantity
Input Data: Item: Laptop; Quantity: 5
Expected Result: Total price should be calculated automatically
Actual Result: Price calculation incorrect
Status: Fail
Defect Details: Defect ID: DEF-003; Severity: Major; Incorrect price calculation logic

ID: TC-7
Action: Add Item to Cart
Steps: Add an item to the cart
Input Data: Item: Laptop; Quantity: 2
Expected Result: Item should be added to the cart successfully
Actual Result: Item is added successfully
Status: Pass
Defect Details: -
RESULT:
Thus the test cases were executed against a client-server/desktop application
and the defects were identified successfully.
EX. NO: 6 TEST THE PERFORMANCE OF THE E-COMMERCE
APPLICATION
DATE:
AIM:
To test the performance of the e-commerce application.
Testing the Performance of an E-Commerce Application
1. Define Performance Requirements
Clearly outline acceptable performance criteria:
Response Times: Define acceptable time limits for key actions (e.g., homepage
load, search results, checkout).
Concurrent Users: Specify the maximum number of users the system should
handle simultaneously without degradation.
Other Metrics: Include throughput, error rate, CPU/memory usage, and
database response times.
2. Identify Performance Testing Scenarios
Determine critical areas for testing, such as:
Load Testing: Evaluate the system’s ability to handle expected user loads under
normal conditions.
Stress Testing: Identify breaking points by testing with loads beyond expected
capacity.
Scalability Testing: Assess how the system performs when resources (e.g.,
servers, memory) are added.
Endurance Testing: Test system stability under a sustained load for an extended
period.
Peak Load Testing: Simulate maximum user load during sales or high traffic
events.
Response Time Testing: Measure the time taken for various transactions (e.g.,
login, search, checkout).
Concurrency Testing: Analyze the system’s ability to handle multiple
simultaneous user sessions.
Database Performance Testing: Evaluate the performance of database queries,
updates, and transactions.
Checkout Process Testing: Test the speed and reliability of the checkout
process.
3. Select Performance Testing Tools
Use suitable performance testing tools, such as:
Apache JMeter: For load and stress testing (a command-line example follows this list).
LoadRunner: Comprehensive performance testing with scalability insights.
Gatling: Lightweight and efficient tool for concurrency testing.
Google Lighthouse: To measure page load times and other frontend metrics.
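As an illustration, a JMeter test plan can be executed in non-GUI mode from the command line; the .jmx and output file names below are placeholders:

# -n: non-GUI mode, -t: test plan file, -l: results log file,
# -e/-o: generate an HTML dashboard report into the given folder
jmeter -n -t ecommerce_load_test.jmx -l results.jtl -e -o report/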
4. Prepare Test Data
Simulate realistic test data, such as:
User profiles (regular users, admins).
Products and categories.
Transactions, orders, and payments.
5. Create Test Scripts
Develop automated scripts to simulate user activities (a Gatling sketch follows this list):
Browsing products.
Searching for items.
Adding items to the cart.
Completing purchases.
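A minimal load-test script sketch using the Gatling Java DSL (one of the tools named in step 3); the base URL, request paths, and load profile are assumptions for illustration:

import static io.gatling.javaapi.core.CoreDsl.*;
import static io.gatling.javaapi.http.HttpDsl.*;

import java.time.Duration;
import io.gatling.javaapi.core.ScenarioBuilder;
import io.gatling.javaapi.core.Simulation;
import io.gatling.javaapi.http.HttpProtocolBuilder;

public class BrowseAndSearchSimulation extends Simulation {

    // Base URL of the application under test (placeholder)
    HttpProtocolBuilder httpProtocol = http.baseUrl("https://your-ecommerce-app.example");

    // Simulated user journey: open the home page, then run a product search
    ScenarioBuilder scn = scenario("Browse and Search")
            .exec(http("Home Page").get("/"))
            .pause(2)
            .exec(http("Search").get("/search?q=laptop"));

    {
        // Ramp up 100 virtual users over 60 seconds
        setUp(scn.injectOpen(rampUsers(100).during(Duration.ofSeconds(60))))
                .protocols(httpProtocol);
    }
}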
6. Execute Performance Tests
Run the test cases defined in your scenarios using the selected tools.
Monitor key metrics:
Response times.
Resource utilization (CPU, memory, disk I/O).
Error rates.
7. Analyze Test Results
Identify bottlenecks and performance issues by analyzing test results.
Focus on:
Response times exceeding acceptable limits.
High error rates or failed transactions.
System resource consumption during peak loads.
8. Optimize and Retest
Work with developers and system administrators to address bottlenecks.
Optimize the application and repeat tests to verify improvements.
9. Reporting
Generate detailed reports, including:
Test metrics (response times, error rates).
System bottlenecks.
Recommendations for performance improvements.
10. Monitoring in Production
Use monitoring tools like New Relic, Datadog, or Dynatrace to continuously
assess application performance post-deployment.
Set alerts for key metrics (e.g., load times, error rates) to catch issues in real-
time.
RESULT:
Thus the performance of the e-commerce application has been tested and the
results analyzed successfully.
EX. NO: 7 AUTOMATE THE TESTING OF E-COMMERCE
APPLICATIONS USING SELENIUM
DATE:
AIM:
To automate the testing of e-commerce applications using Selenium.
PROCEDURE:
Set up the WebDriver with the correct browser driver (ChromeDriver in this
case) and launch the browser.
Open the e-commerce website by navigating to its URL.
Locate elements such as the username field, password field, and login button by their IDs.
Use WebDriverWait to ensure that the username, password input fields, and
login button are visible and interactable.
Verify that the fields accept input and the login flow works correctly.
PROGRAM:
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;
import java.time.Duration;

public class ECommerceTest {
    public static void main(String[] args) {
        // Set the path to your ChromeDriver executable
        // (preferably set the path in a system environment variable)
        System.setProperty("webdriver.chrome.driver", "path/to/chromedriver");

        // Initialize WebDriver
        WebDriver driver = new ChromeDriver();
        try {
            // Navigate to the login page
            driver.get("url_of_your_ecommerce_application");

            // Define the locators
            WebElement usernameInput = driver.findElement(By.id("username"));
            WebElement passwordInput = driver.findElement(By.id("password"));
            WebElement loginButton = driver.findElement(By.id("loginButton"));

            // Ensure elements are visible and interactable
            WebDriverWait wait = new WebDriverWait(driver, Duration.ofSeconds(10));
            wait.until(ExpectedConditions.visibilityOf(usernameInput));
            wait.until(ExpectedConditions.visibilityOf(passwordInput));
            wait.until(ExpectedConditions.elementToBeClickable(loginButton));

            // Test Case 1: Verify the username field is displayed
            if (usernameInput.isDisplayed()) {
                System.out.println("Test Case 1 Passed: Username field is displayed");
            } else {
                System.out.println("Test Case 1 Failed: Username field is not displayed");
            }

            // Test Case 2: Verify the password field is displayed
            if (passwordInput.isDisplayed()) {
                System.out.println("Test Case 2 Passed: Password field is displayed");
            } else {
                System.out.println("Test Case 2 Failed: Password field is not displayed");
            }

            // Test Case 3: Enter valid credentials and click login
            usernameInput.sendKeys("valid_username");
            passwordInput.sendKeys("valid_password");
            loginButton.click();

            // Check if login was successful
            // (assuming a successful login redirects to a dashboard)
            WebElement dashboard = wait.until(
                ExpectedConditions.presenceOfElementLocated(By.id("dashboard")));
            if (dashboard.isDisplayed()) {
                System.out.println("Test Case 3 Passed: Login Successful, Dashboard is displayed.");
            } else {
                System.out.println("Test Case 3 Failed: Login was unsuccessful.");
            }

            // Add more test cases and actions here...
        } catch (Exception e) {
            System.out.println("An error occurred: " + e.getMessage());
        } finally {
            // Close the browser
            driver.quit();
        }
    }
}
OUTPUT:
Console Output (Success Case):
Test Case 1 Passed: Username field is displayed
Test Case 2 Passed: Password field is displayed
Test Case 3 Passed: Login Successful, Dashboard is displayed.
RESULT:
Thus automating the testing of e-commerce applications using Selenium has been
executed successfully.
EX. NO: 8 INTEGRATE TestNG WITH THE ABOVE TEST
AUTOMATION
DATE:
AIM:
To integrate TestNG with the above test automation.
PROCEDURE:
Initialize WebDriver and TestNG Setup
Use driver.get(url) to navigate to the e-commerce login page.
Create test cases to check if the module is working.
Close the browser using driver.quit() to clean up after the test.
The TestNG framework will manage the execution order of the test methods
based on the priorities defined with @Test(priority).
PROGRAM:
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;
import org.testng.Assert;
import org.testng.annotations.AfterMethod;
import org.testng.annotations.BeforeMethod;
import org.testng.annotations.Test;
import java.time.Duration;

public class ECommerceLoginTest {
    WebDriver driver;
    WebDriverWait wait;

    // Set the path to your ChromeDriver executable
    // (preferably set the path in the system environment variable)
    String driverPath = "path/to/chromedriver";
    String url = "url_of_your_ecommerce_application";

    @BeforeMethod
    public void setUp() {
        // Set up ChromeDriver
        System.setProperty("webdriver.chrome.driver", driverPath);
        driver = new ChromeDriver();
        wait = new WebDriverWait(driver, Duration.ofSeconds(10));
        // Open the website
        driver.get(url);
    }

    @Test(priority = 1)
    public void verifyUsernameField() {
        WebElement usernameInput = driver.findElement(By.id("username"));
        Assert.assertTrue(usernameInput.isDisplayed(), "Username field is not displayed.");
    }

    @Test(priority = 2)
    public void verifyPasswordField() {
        WebElement passwordInput = driver.findElement(By.id("password"));
        Assert.assertTrue(passwordInput.isDisplayed(), "Password field is not displayed.");
    }

    @Test(priority = 3)
    public void performLogin() {
        WebElement usernameInput = driver.findElement(By.id("username"));
        WebElement passwordInput = driver.findElement(By.id("password"));
        WebElement loginButton = driver.findElement(By.id("loginButton"));

        usernameInput.sendKeys("valid_username");
        passwordInput.sendKeys("valid_password");
        loginButton.click();

        WebElement dashboard = wait.until(
            ExpectedConditions.presenceOfElementLocated(By.id("dashboard")));
        Assert.assertTrue(dashboard.isDisplayed(), "Login failed. Dashboard not displayed.");
    }

    @AfterMethod
    public void tearDown() {
        // Close the browser
        if (driver != null) {
            driver.quit();
        }
    }
}
OUTPUT:
ECommerceLoginTest
Tests run: 3, Failures: 1, Skips: 0, Time elapsed: 5.014 sec
===============================================
PASSED: verifyUsernameField
PASSED: verifyPasswordField
FAILED: performLogin
java.lang.AssertionError: Login failed. Dashboard not displayed. expected [true] but found [false]
RESULT:
Thus the program to integrate TestNG with the above test automation has been
executed successfully.
EX. NO: 9 A. BUILD A DATA-DRIVEN FRAMEWORK USING
SELENIUM AND TestNG
DATE:
AIM:
To build a data-driven framework using Selenium and TestNG.
PROCEDURE:
Initialize WebDriver and TestNG Setup
Define a @DataProvider method to supply test data (login credentials) from a
CSV file.
Write Test Cases for login valid and invalid credentials.
In the @AfterMethod, call driver.quit() to close the browser after each test case.
Execute the test suite defined in the testng.xml file.
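The testng.xml referenced above can be minimal; a sketch for the test class shown below (the suite and test names are arbitrary):

<!DOCTYPE suite SYSTEM "https://testng.org/testng-1.0.dtd">
<suite name="DataDrivenSuite">
  <test name="LoginTests">
    <classes>
      <class name="LoginTest"/>
    </classes>
  </test>
</suite>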
PROGRAM:
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.chrome.ChromeDriver;
import org.testng.Assert;
import org.testng.annotations.AfterMethod;
import org.testng.annotations.BeforeMethod;
import org.testng.annotations.DataProvider;
import org.testng.annotations.Test;
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

public class LoginTest {
    WebDriver driver;

    @BeforeMethod
    public void setUp() {
        // Initialize WebDriver before each test
        System.setProperty("webdriver.chrome.driver", "path/to/chromedriver");
        driver = new ChromeDriver();
        driver.get("url_of_your_application_login_page");
    }

    @Test(dataProvider = "loginData")
    public void testLoginWithValidCredentials(String username, String password) {
        // Locate username and password fields
        WebElement usernameField = driver.findElement(By.id("username"));
        WebElement passwordField = driver.findElement(By.id("password"));
        WebElement loginButton = driver.findElement(By.id("loginButton"));

        // Enter valid username and password
        usernameField.sendKeys(username);
        passwordField.sendKeys(password);

        // Click on the login button
        loginButton.click();

        // Verify successful login
        WebElement welcomeMessage = driver.findElement(By.id("welcomeMessage"));
        Assert.assertTrue(welcomeMessage.isDisplayed(), "Login failed for user: " + username);

        // Optionally, verify the welcome message content
        Assert.assertEquals(welcomeMessage.getText(), "Welcome, " + username);
    }

    @Test(dataProvider = "loginData")
    public void testLoginWithInvalidCredentials(String username, String password) {
        // Locate username and password fields
        WebElement usernameField = driver.findElement(By.id("username"));
        WebElement passwordField = driver.findElement(By.id("password"));
        WebElement loginButton = driver.findElement(By.id("loginButton"));

        // Enter invalid username and password
        usernameField.sendKeys(username);
        passwordField.sendKeys(password);

        // Click on the login button
        loginButton.click();

        // Verify error message
        WebElement errorMessage = driver.findElement(By.id("errorMessage"));
        Assert.assertTrue(errorMessage.isDisplayed(), "Error message not displayed for user: " + username);
    }

    @DataProvider(name = "loginData")
    public Object[][] readData() {
        List<Object[]> testData = new ArrayList<>();
        String csvFile = "path/to/testdata.csv";
        try (BufferedReader br = new BufferedReader(new FileReader(csvFile))) {
            String line;
            while ((line = br.readLine()) != null) {
                String[] data = line.split(",");
                testData.add(data);
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
        Object[][] result = new Object[testData.size()][2];
        for (int i = 0; i < testData.size(); i++) {
            result[i] = testData.get(i);
        }
        return result;
    }

    @AfterMethod
    public void tearDown() {
        // Close the browser after each test
        if (driver != null) {
            driver.quit();
        }
    }
}
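The @DataProvider above reads plain comma-separated username,password rows. A testdata.csv consistent with the sample output below would contain lines such as:

testUser2,wrongpassword
testUser3,password456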
OUTPUT:
Test Case 2:
- Username: testUser2
- Password: wrongpassword
- Result: Login Failed
- Passed
Test Case 3:
- Username: testUser3
- Password: password456
- Result: Login Successful
- Passed
RESULT:
Thus the program to build a data-driven framework using Selenium and TestNG
was executed successfully.
EX. NO: 9 B. BUILD PAGE OBJECT MODEL USING SELENIUM
AND TestNG
DATE:
AIM:
To build a Page Object Model using Selenium and TestNG.
PROCEDURE:
Start by initializing the WebDriver. This is typically done in the Test class.
Create a Page Object Class for Each Web Page
Inside the Page Object class, define the locators for all the page elements (e.g.,
buttons, text fields, checkboxes).
Define methods for actions the user can perform on the page (e.g., login, form
submission).
The TestNG test methods should use the Page Object methods to perform the
tests.
PROGRAM:
LoginPage.java (Page Object)

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;

public class LoginPage {
    private final WebDriver driver;

    // Constructor to initialize WebDriver
    public LoginPage(WebDriver driver) {
        this.driver = driver;
    }

    // Method to open the login page with a given URL
    public void openLoginPage(String url) {
        driver.get(url);
    }

    // Method to enter username in the username field
    public void enterUsername(String username) {
        WebElement usernameField = driver.findElement(By.id("username"));
        usernameField.sendKeys(username);
    }

    // Method to enter password in the password field
    public void enterPassword(String password) {
        WebElement passwordField = driver.findElement(By.id("password"));
        passwordField.sendKeys(password);
    }

    // Method to click the login button
    public void clickLoginButton() {
        WebElement loginButton = driver.findElement(By.id("loginButton"));
        loginButton.click();
    }

    // Method to check whether the login error message is visible
    public boolean isErrorMessageDisplayed() {
        WebElement errorMessage = driver.findElement(By.id("errorMessage"));
        return errorMessage.isDisplayed();
    }
}
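The page object above is driven by a TestNG test class. A minimal sketch consistent with the output below; the driver path, URL, credentials, and the "dashboard" URL check are placeholders/assumptions:

LoginTest.java (TestNG test using the Page Object)

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.testng.Assert;
import org.testng.annotations.AfterMethod;
import org.testng.annotations.BeforeMethod;
import org.testng.annotations.Test;

public class LoginTest {
    private WebDriver driver;
    private LoginPage loginPage;

    @BeforeMethod
    public void setUp() {
        // Initialize WebDriver and the page object before each test
        System.setProperty("webdriver.chrome.driver", "path/to/chromedriver");
        driver = new ChromeDriver();
        loginPage = new LoginPage(driver);
    }

    @Test
    public void testLoginWithValidCredentials() {
        loginPage.openLoginPage("url_of_your_ecommerce_application");
        loginPage.enterUsername("valid_username");
        loginPage.enterPassword("valid_password");
        loginPage.clickLoginButton();
        // Assumption: a successful login redirects to a URL containing "dashboard"
        Assert.assertTrue(driver.getCurrentUrl().contains("dashboard"),
                "Login failed: dashboard was not reached.");
    }

    @AfterMethod
    public void tearDown() {
        // Close the browser after each test
        if (driver != null) {
            driver.quit();
        }
    }
}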
OUTPUT:
[INFO] Starting ChromeDriver...
[INFO] WebDriver initialized successfully.
[INFO] Opening the login page...
[INFO] Entering username: valid_username
[INFO] Entering password: valid_password
[INFO] Clicking login button...
[INFO] Asserted: URL contains "dashboard" (this indicates successful login).
PASSED: testLoginWithValidCredentials
RESULT:
Thus the program to build a Page Object Model using Selenium and TestNG has
been executed successfully.
EX. NO: 9 C. BUILD BDD FRAMEWORK WITH SELENIUM,
TestNG AND CUCUMBER
DATE:
AIM:
To build a BDD framework with Selenium, TestNG, and Cucumber.
PROCEDURE:
Set up the Maven project with dependencies for Selenium, TestNG, and
Cucumber
Write the test scenarios in Gherkin syntax (Feature, Scenario, Given, When,
Then).
Create page classes with locators and reusable methods for interacting with
UI elements.
Run the test runner and display results for each scenario and step in the
console, indicating pass or fail status.
PROGRAM:
pom.xml
<dependencies>
<!-- Selenium WebDriver -->
<dependency>
<groupId>org.seleniumhq.selenium</groupId>
<artifactId>selenium-java</artifactId>
<version>4.12.0</version>
</dependency>
<!-- TestNG -->
<dependency>
<groupId>org.testng</groupId>
<artifactId>testng</artifactId>
<version>7.8.0</version>
</dependency>
<!-- Cucumber Java -->
<dependency>
<groupId>io.cucumber</groupId>
<artifactId>cucumber-java</artifactId>
<version>7.14.0</version>
</dependency>
<!-- Cucumber TestNG -->
<dependency>
<groupId>io.cucumber</groupId>
<artifactId>cucumber-testng</artifactId>
<version>7.14.0</version>
</dependency>
<!-- Cucumber JVM -->
<dependency>
<groupId>io.cucumber</groupId>
<artifactId>cucumber-jvm-deps</artifactId>
<version>1.0.6</version>
</dependency>
</dependencies>
Login.feature
Feature: Login to the Application
Scenario: Valid Login
Given the user is on the login page
When the user enters valid credentials
And clicks the login button
Then the user should be redirected to the dashboard
Scenario: Invalid Login
Given the user is on the login page
When the user enters invalid credentials
And clicks the login button
Then an error message should be displayed
LoginPage.java (the page object from Ex. 9B, placed in the com.example.pages package)

package com.example.pages;

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;

// ... same locators and methods as the LoginPage class in Ex. 9B ...

TestRunner.java

import io.cucumber.testng.AbstractTestNGCucumberTests;
import io.cucumber.testng.CucumberOptions;

@CucumberOptions(
    features = "src/test/java/features",
    glue = "com.example.stepdefinitions",
    plugin = {"pretty", "html:target/cucumber-reports.html"},
    monochrome = true
)
public class TestRunner extends AbstractTestNGCucumberTests {
}
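The glue package com.example.stepdefinitions named in the runner must contain step definitions that bind the Gherkin steps in Login.feature. A minimal sketch, reusing the LoginPage object; the URL, credentials, and the "dashboard" check are assumptions:

LoginSteps.java (Step Definitions)

package com.example.stepdefinitions;

import com.example.pages.LoginPage;
import io.cucumber.java.After;
import io.cucumber.java.Before;
import io.cucumber.java.en.And;
import io.cucumber.java.en.Given;
import io.cucumber.java.en.Then;
import io.cucumber.java.en.When;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.testng.Assert;

public class LoginSteps {
    private WebDriver driver;
    private LoginPage loginPage;

    @Before
    public void setUp() {
        // Create a fresh browser session for each scenario
        driver = new ChromeDriver();
        loginPage = new LoginPage(driver);
    }

    @Given("the user is on the login page")
    public void theUserIsOnTheLoginPage() {
        loginPage.openLoginPage("url_of_your_ecommerce_application");
    }

    @When("the user enters valid credentials")
    public void theUserEntersValidCredentials() {
        loginPage.enterUsername("valid_username");
        loginPage.enterPassword("valid_password");
    }

    @When("the user enters invalid credentials")
    public void theUserEntersInvalidCredentials() {
        loginPage.enterUsername("invalid_username");
        loginPage.enterPassword("wrong_password");
    }

    @And("clicks the login button")
    public void clicksTheLoginButton() {
        loginPage.clickLoginButton();
    }

    @Then("the user should be redirected to the dashboard")
    public void theUserShouldBeRedirectedToTheDashboard() {
        // Assumption: a successful login redirects to a URL containing "dashboard"
        Assert.assertTrue(driver.getCurrentUrl().contains("dashboard"),
                "Dashboard was not reached after login.");
    }

    @Then("an error message should be displayed")
    public void anErrorMessageShouldBeDisplayed() {
        Assert.assertTrue(loginPage.isErrorMessageDisplayed(),
                "Error message was not displayed for invalid login.");
    }

    @After
    public void tearDown() {
        // Close the browser after each scenario
        if (driver != null) {
            driver.quit();
        }
    }
}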
OUTPUT:
Successful Test Scenario: Login with valid credentials
RESULT:
Thus the BDD framework with Selenium, TestNG, and Cucumber has been built
and executed successfully.