The document outlines individual assignment instructions for a Software Engineering course at VIT, focusing on various testing scenarios including API testing, performance testing, user acceptance testing, stress testing, and mutation testing. Each scenario details objectives, scope, test strategies, expected outcomes, and criteria for evaluation. Students are required to answer five scenarios based on provided mappings, with a marking scheme that penalizes for plagiarism.

VELLORE INSTITUTE OF TECHNOLOGY

Vellore

SCHOOL OF COMPUTER SCIENCE AND ENGINEERING

Winter Semester 2023 - 2024

BCSE301L - Software Engineering

Digital Assignment - II

Instructions:

• Assignments are not teamwork; each assignment is individual work.


• Each student must answer 5 scenarios based on the question mapping done at the
end of this document.
Marking Scheme: (5 × 2) marks, with a penalty of 1 mark if Plag% > 25, 2 marks if Plag% > 35, 4 marks if Plag% > 45, or 6 marks if Plag% > 50.

Reg. No: 21BCE0442


Name: KISHORE P

3. **API Testing for Functional and Structural Verification:**

- Scenario: A company is developing an API that will be used by multiple internal and external applications.

White-box Testing Scenario:

White-box testing of an API used by multiple internal and external applications involves examining the internal structure and logic of the API to ensure both functional correctness and structural integrity. A scenario for this context:

API Functional Testing:

Test every API endpoint with valid inputs to verify that it produces the expected output. Test edge cases and boundary conditions to ensure the API handles unusual inputs gracefully. Verify error handling by testing with invalid inputs and checking that the API responds with appropriate error messages and status codes.

API Structural Testing:

Examine the API's internal structure, including code paths, branches, and conditions. Use techniques such as code coverage analysis (e.g., statement coverage, branch coverage) to ensure that all parts of the API code are exercised during testing. Conduct path testing to validate the different execution paths within the API, ensuring that every feasible route is tested.
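The functional and structural checks above can be sketched with a toy endpoint handler. The function name (`create_user`) and its validation rules are assumptions for illustration, not part of any real API; the tests are chosen so that each branch of the handler is executed, which is the branch-coverage idea described above.

```python
# Hypothetical endpoint handler used to illustrate white-box API testing.
def create_user(payload) -> tuple:
    """Return an (HTTP status, body) pair, mimicking an API endpoint."""
    if not isinstance(payload, dict):            # branch 1: malformed request
        return 400, {"error": "body must be a JSON object"}
    name = payload.get("name")
    if not name:                                 # branch 2: missing field
        return 422, {"error": "name is required"}
    if len(name) > 64:                           # branch 3: boundary condition
        return 422, {"error": "name too long"}
    return 201, {"id": 1, "name": name}          # branch 4: happy path

# Functional tests: valid input, invalid inputs, and a boundary case.
# Together they execute every branch of the handler (full branch coverage).
assert create_user({"name": "Ada"}) == (201, {"id": 1, "name": "Ada"})
assert create_user({})[0] == 422
assert create_user("not json")[0] == 400
assert create_user({"name": "x" * 65})[0] == 422
```

A coverage tool such as coverage.py could confirm that these four tests reach every statement and branch of the handler.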

6. **Performance Testing for Scalability Assessment:**

- Scenario: A social media platform is preparing for a surge in user traffic due to a marketing campaign.

White-box Testing Scenario: Scalability Assessment of Backend API Endpoints

Scenario Title:
Internal Load and Throughput Analysis of the "Post Submission" Microservice

Objective:
To ensure the backend components, especially the "Post Submission" service, can handle a high
volume of concurrent requests without performance degradation or failure.

Scope:
Focus on the internal logic of the "Post Submission" API, particularly:

• Input validation logic

• Database write operations

• Queue/message broker handling

• Logging and audit trail subsystems

Test Strategy:

• Use white-box techniques (e.g., code instrumentation, internal metrics logging) to monitor
memory consumption, CPU utilization, and response times.

• Inject synthetic load (e.g., simulate 10,000 concurrent users posting updates) and trace the
function calls and their performance.

• Check for:

o Bottlenecks in database write operations

o Deadlocks or race conditions in concurrent access

o Exception handling during peak loads

o Caching behavior under stress

Tools:

• JMeter/Gatling for simulating load

• JProfiler/VisualVM for code profiling


• Internal logging with trace-level logging enabled
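A minimal sketch of the code-instrumentation idea in the test strategy, using only the standard library. The `handle_post_submission` function is a stand-in assumption for the real microservice, and the sleep simulates its internal work; real profiling would use JProfiler/VisualVM as listed above.

```python
import time
from concurrent.futures import ThreadPoolExecutor

# Hypothetical service function; in the real system this would be the
# "Post Submission" handler (an assumption for this sketch).
def handle_post_submission(user_id: int) -> str:
    time.sleep(0.001)  # stand-in for validation + DB write + queue publish
    return f"post-{user_id}"

def timed(fn, *args):
    """Instrument a call and return (result, elapsed_ms) — the kind of
    internal metric a white-box test would log."""
    start = time.perf_counter()
    result = fn(*args)
    return result, (time.perf_counter() - start) * 1000

# Inject synthetic concurrent load and collect per-call latencies.
with ThreadPoolExecutor(max_workers=50) as pool:
    latencies = [ms for _, ms in
                 pool.map(lambda u: timed(handle_post_submission, u), range(500))]

# Compare the 95th-percentile latency against the <300 ms threshold below.
p95 = sorted(latencies)[int(0.95 * len(latencies))]
print(f"p95 latency: {p95:.1f} ms")
```

The same decorator-style timing can be attached to the validation, database, and queue layers individually to locate which stage is the bottleneck.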

Expected Outcome:

• The system should maintain response time within acceptable thresholds (<300ms for post
requests)

• No memory leaks or crashes should occur

• The service should auto-scale appropriately if microservices are containerized (e.g., on Kubernetes)

8) **User Acceptance Testing (UAT):**

- Scenario: A software company is developing a new e-commerce platform.

Black-box Testing Scenario: User Acceptance Testing for E-Commerce Platform

Scenario Title:
End-to-End Purchase Flow Validation for Registered Users

Objective:
To verify that a registered user can search for a product, add it to the cart, and complete the
purchase without encountering any errors or usability issues.

Scope:

• Homepage access

• Product search

• Product detail viewing

• Adding product to cart

• Checkout process (address, payment, confirmation)

Test Steps:

1. Log in with a valid registered user account.

2. Search for a product (e.g., "Wireless Mouse").

3. Select a product from the search results.

4. View product details.

5. Add the item to the shopping cart.

6. Proceed to checkout.

7. Enter shipping details.

8. Choose a payment method (e.g., credit card).

9. Complete the transaction.


10. Verify the order confirmation page and email.

Test Data:

• Valid user credentials

• Predefined product for consistent testing

• Dummy payment information for testing environment
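Steps 1–10 can be expressed as a black-box harness. The `Storefront` class, its method names, and the dummy credentials below are assumptions invented for this sketch; a real UAT run would drive the live UI or API (e.g., via Selenium) rather than a stub.

```python
# Stubbed storefront standing in for the e-commerce platform under test.
class Storefront:
    def __init__(self):
        self.catalog = {"SKU-101": ("Wireless Mouse", 799)}
        self.cart, self.orders, self.logged_in = [], [], False

    def login(self, user, password):
        # Dummy credentials for the testing environment (an assumption).
        self.logged_in = (user, password) == ("testuser", "Secret#1")
        return self.logged_in

    def search(self, query):
        return [sku for sku, (name, _) in self.catalog.items()
                if query.lower() in name.lower()]

    def add_to_cart(self, sku):
        self.cart.append(sku)

    def checkout(self, address, payment):
        assert self.logged_in and self.cart and address and payment
        order_id = f"ORD-{len(self.orders) + 1}"
        self.orders.append(order_id)
        self.cart = []  # cart is emptied once the order is placed
        return order_id

shop = Storefront()
assert shop.login("testuser", "Secret#1")            # step 1
results = shop.search("Wireless Mouse")              # step 2
assert results == ["SKU-101"]                        # steps 3-4
shop.add_to_cart(results[0])                         # step 5
order = shop.checkout("221B Baker St", "test-card")  # steps 6-9
assert order == "ORD-1" and shop.cart == []          # step 10: confirmation
```

Each assertion corresponds to one numbered step above, so a failure pinpoints where the purchase flow breaks.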

Expected Outcome:

• Each step should complete successfully without any crashes or errors.

• The user should receive a confirmation page and order summary.

• An order confirmation email should be sent to the registered email address.

Acceptance Criteria:

• 100% functionality coverage for critical user flows

• Intuitive and error-free user experience

• No UI misalignments or broken links

13) **Stress Testing for Performance Evaluation:**

- Scenario: A banking application is subjected to heavy loads during peak transaction periods.

Black-box Testing Scenario: Stress Testing of Banking Transactions Under Load

Scenario Title:
Transaction Handling Under Peak Load Conditions

Objective:
To evaluate how the banking application behaves under extreme transaction loads, simulating peak
hours such as month-end salary credits or festive season transfers.

Scope:

• Fund transfers between accounts

• Balance inquiries

• Login/logout operations

• Bill payments and transaction confirmations

Test Steps:

1. Simulate 10,000+ concurrent users performing various actions like logging in, transferring
funds, and checking balances.

2. Execute bulk fund transfers repeatedly (e.g., 1000 transactions per minute).

3. Submit simultaneous requests to check balances and pay bills.


4. Observe system response, errors, and system behavior under resource exhaustion.
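The "no data corruption under concurrent load" criterion from the steps above can be sketched with a stubbed bank. The account setup, amounts, and thread counts are assumptions for illustration; a real stress run would use JMeter or Locust against the deployed system, as listed in the tools below.

```python
import threading
from concurrent.futures import ThreadPoolExecutor

# Stubbed bank standing in for the application under stress (an assumption).
class Bank:
    def __init__(self):
        self.balances = {"A": 100_000, "B": 100_000}
        # Without this lock, concurrent transfers can corrupt balances —
        # exactly the race condition the stress test is meant to surface.
        self._lock = threading.Lock()

    def transfer(self, src, dst, amount):
        with self._lock:
            if self.balances[src] >= amount:
                self.balances[src] -= amount
                self.balances[dst] += amount
                return True
            return False  # graceful refusal, not a crash

bank = Bank()
with ThreadPoolExecutor(max_workers=32) as pool:
    # 1,000 concurrent transfers in each direction, mimicking peak load.
    list(pool.map(lambda _: bank.transfer("A", "B", 10), range(1000)))
    list(pool.map(lambda _: bank.transfer("B", "A", 10), range(1000)))

# Pass criterion: no money created or destroyed during or after stress.
total = sum(bank.balances.values())
print("total after stress:", total)
assert total == 200_000
```

Removing the lock and rerunning is a quick way to demonstrate the kind of corruption the pass criteria below are designed to catch.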

Tools:

• Apache JMeter or Locust for load simulation

• Monitoring tools for tracking server/resource metrics (e.g., CPU, memory, DB connection
pool)

Expected Outcome:

• The application should not crash or produce incorrect results.

• Users should receive meaningful error messages if limits are reached.

• Core services like login and transaction verification should maintain availability.

• Response times should degrade gracefully, not abruptly.

Pass Criteria:

• No data corruption or loss during or after stress

• Application handles at least X concurrent users (based on SLA)

• Clear logs and metrics are generated for further analysis

18) **Variable Mutations:**

- Scenario: An application uses variables to store data.

Mutation Testing Scenario: Variable Mutations in Data Storage

Scenario Title:
Mutation Testing on Variable Assignments and Usage

Objective:
To assess the strength of the test suite by introducing faults in how variables are assigned and used in
the code.

Scenario:
The application uses variables to store and manipulate user data such as age, balance, and login
attempts.

Mutations Introduced (Variable Mutations):

| Mutation Type        | Original                | Mutated Version       | Intent                                 |
|----------------------|-------------------------|-----------------------|----------------------------------------|
| Assignment Mutation  | age = 30                | age = 18              | Test boundary condition handling       |
| Arithmetic Mutation  | balance - 200           | balance + 200         | Reverse logic check                    |
| Variable Replacement | login_attempts += 1     | login_attempts -= 1   | Test correctness of increment logic    |
| Conditional Logic    | age > 18                | age >= 18             | Check boundary evaluation in condition |
| Variable Swap        | balance = balance - 200 | balance = age - 200   | Check wrong variable assignment        |

Test Expectations:

• Unit tests should detect incorrect outcomes resulting from mutated variables (e.g., incorrect
balance or wrong age classification).

• Each mutation should ideally be killed by a specific test that depends on the correct value
and logic of the variables.

• Surviving mutations imply that certain logic or variable-dependent behaviors aren't well-
tested.

Pass Criteria:

• High mutation score (e.g., > 80% of mutations killed).

• No critical mutation should survive untested.

• Variable interactions and boundary conditions are well-covered.
