Lesson 06 - Utilize Software Methodology
Objectives:
1. Identify the key points in testing.
2. Document test objectives.
3. Obtain feedback from users and incorporate relevant changes.
4. Understand the test environment and code testing.
5. Integrate code into the production environment.
6. Administer a full system test.
Content:
Testing requirements and objectives are determined
What is Website Testing?
Source: Google.com
Website testing refers to exercising end-user scenarios on a website to verify its behavior. These end-user
scenarios are scripted by QAs using an automation framework to mimic user interactions on a website’s
UI. QAs can also follow a written test plan that describes a set of unique test scenarios under manual
website testing.
For example, a test script can be written to test a website’s Login page. This script will verify if the
Username and Password fields accept appropriate inputs and check whether the Login was successful.
With people having shorter attention spans, a single site anomaly in the user journey might lead to a
user bouncing or loss of possible revenue. Hence thorough website quality assurance testing should be
mandatory for every online business.
To back this up, let's look at some statistics that show why QA testing of a website is critical:
One in three customers will stop interacting with a specific website if they encounter a bad user
experience.
57% of users do not recommend a business without a good mobile website design.
88% of online customers said they are less likely to return to a website if they’ve had a negative
experience in the first place.
Types of Website Testing
Several types of website testing serve different purposes in providing a high-quality user experience.
Here are some common types of website testing:
1. Functionality Testing: This type of testing focuses on checking if all the features and
functionalities of the website are working as intended.
2. Usability Testing: Assesses how user-friendly and intuitive the website's layout, design, and overall UX are.
3. Compatibility Testing: Ensures the website functions correctly across different device-browser-OS combinations, without inconsistencies in rendering or behavior.
4. Cross-Browser Testing: Cross-browser testing is crucial as it ensures the website looks and
functions correctly across browsers like Chrome, Firefox, Safari, and Edge. It’s important to test web
pages in multiple browsers to guarantee consistent performance and user experience.
5. Responsive Testing: This testing ensures that the website’s layout and design adapt
appropriately to different screen sizes and devices, providing a consistent user experience across
desktops, tablets, and smartphones.
6. Accessibility Testing: Evaluates whether the website is usable by individuals with disabilities. QAs check whether the website adheres to accessibility standards such as the WCAG.
7. Performance Testing: Evaluates the website's speed, responsiveness, and overall performance.
8. User Acceptance Testing (UAT): UAT involves having end-users test the website to validate that
it meets their requirements and expectations.
9. Regression Testing: Regression testing involves retesting the website after making changes or
updates to ensure that new features or fixes don’t introduce new issues or break existing functionalities.
10. Localization Testing: If the website is designed to be used in multiple languages or regions,
localization testing checks if the content, formatting, and functionality work correctly for each specific
locale.
The following checklist covers key areas of website QA testing:
1. Cross-Browser Compatibility
Each browser has its own rendering engine, and rendering engines may also differ between versions of the same browser. As a result, there is a high probability of a website rendering differently across browsers: its appearance may become inconsistent across distinct browsers or browser versions.
To avoid these inconsistencies in the viewing experience, QAs must thoroughly perform cross-browser
testing for their websites.
This will help teams optimize the website’s viewing experience for all the leading browsers and fix
rendering issues appearing for specific browsers.
A tool like BrowserStack can be convenient in such a case, as it lets QAs instantly run cross-browser tests across 3000+ real device-browser combinations.
One simply needs to sign up for free, choose the desired browser-OS combination, and start testing.
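As a rough illustration of the idea (not tied to any particular vendor), a cross-browser smoke check can be parameterized over locally installed browsers with Selenium WebDriver. The URL and the expected title fragment below are placeholders.

```python
# A minimal sketch of a cross-browser smoke test using Selenium WebDriver.
# Assumes the selenium package plus local Chrome and Firefox installs;
# the URL and expected title are hypothetical placeholders.
from selenium import webdriver

URL = "https://www.example.com/"          # placeholder site under test
EXPECTED_TITLE_FRAGMENT = "Example"       # placeholder expectation

def make_driver(browser_name):
    """Create a WebDriver for the requested browser."""
    if browser_name == "chrome":
        return webdriver.Chrome()
    if browser_name == "firefox":
        return webdriver.Firefox()
    raise ValueError(f"Unsupported browser: {browser_name}")

def check_homepage(browser_name):
    driver = make_driver(browser_name)
    try:
        driver.get(URL)
        assert EXPECTED_TITLE_FRAGMENT in driver.title, (
            f"{browser_name}: unexpected title {driver.title!r}"
        )
        print(f"{browser_name}: OK")
    finally:
        driver.quit()

if __name__ == "__main__":
    for name in ("chrome", "firefox"):
        check_homepage(name)
```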
2. Responsive Design
A responsive layout enables a website to resize itself dynamically according to the screen size on which it is viewed. Nearly 60% of incoming website traffic comes from mobile devices.
However, it is critical to remember that responsive design can also introduce issues, such as misaligned
buttons or links that are difficult to tap.
To fix those issues in advance, it is ideal to use an online responsive design checker tool that helps you
view your websites across distinct device types (mobiles, tablets, desktops).
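As a simple sketch of this (assuming Selenium and a hypothetical target URL), the same page can be loaded at several common viewport sizes to spot layout problems such as content overflowing the visible width:

```python
# A minimal sketch of a responsive layout check: load one page at several
# viewport sizes and flag horizontal overflow. The URL is a placeholder.
from selenium import webdriver

URL = "https://www.example.com/"
VIEWPORTS = {
    "mobile": (375, 667),
    "tablet": (768, 1024),
    "desktop": (1366, 768),
}

def check_no_horizontal_overflow():
    driver = webdriver.Chrome()
    try:
        for name, (width, height) in VIEWPORTS.items():
            driver.set_window_size(width, height)
            driver.get(URL)
            # Pages that overflow horizontally usually need layout fixes.
            scroll_width = driver.execute_script(
                "return document.documentElement.scrollWidth"
            )
            inner_width = driver.execute_script("return window.innerWidth")
            status = "OK" if scroll_width <= inner_width else "OVERFLOW"
            print(f"{name} ({width}x{height}): {status}")
    finally:
        driver.quit()

if __name__ == "__main__":
    check_no_horizontal_overflow()
```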
3. Functionality Testing
This is the most fundamental yet critical website QA testing phase, where the QA team must thoroughly
test all the UI elements considering maximum use case scenarios.
BrowserStack empowers QAs to run manual and automated UI tests using tools like Selenium on its real device cloud for testing at scale in real user conditions. Leveraging such a platform can help teams achieve their test goals faster and release a fully functional website sooner.
4. Check for Broken Links
Broken links create an extremely frustrating experience for website visitors, particularly when searching
for crucial information. Moreover, broken links also adversely affect the SEO of a website. Naturally, QAs
must pay close attention to ensuring all links are directed toward the intended landing pages or
documents.
Teams can use online broken-link checker tools to identify and fix broken links.
Once a broken link is traced, it is imperative to add the appropriate link or a redirect to send the visitors
to the intended page.
Beyond this, QAs must verify that the key links are directed to the intended page (even if they aren’t
broken). For example, testing all the header navigation links to ensure they go to the intended landing
pages.
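As a rough sketch of how such a check can be automated (using only the requests library and a simple regular expression rather than a full HTML parser), the script below fetches a page, extracts its links, and reports any that return an error status. The starting URL is a placeholder.

```python
# A rough sketch of a broken-link check: fetch a page, pull out href values
# with a simple regex, and flag links that return HTTP 4xx/5xx responses.
# Assumes the requests package; the start URL is a placeholder.
import re
from urllib.parse import urljoin

import requests

START_URL = "https://www.example.com/"   # placeholder page to scan
HREF_RE = re.compile(r'href="([^"#]+)"')

def find_broken_links(page_url):
    html = requests.get(page_url, timeout=10).text
    broken = []
    for href in set(HREF_RE.findall(html)):
        link = urljoin(page_url, href)
        if not link.startswith("http"):
            continue  # skip mailto:, javascript:, etc.
        try:
            response = requests.head(link, allow_redirects=True, timeout=10)
            if response.status_code >= 400:
                broken.append((link, response.status_code))
        except requests.RequestException as exc:
            broken.append((link, str(exc)))
    return broken

if __name__ == "__main__":
    for link, status in find_broken_links(START_URL):
        print(f"BROKEN: {link} -> {status}")
```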
5. Ensure Security
In online business, websites often request personal information; this is especially true of e-commerce websites. As part of your QA checklist, security testing is paramount.
Ensure that your website’s SSL certificate is in place to protect sensitive user information.
This helps establish secure connections as the data is encrypted to prevent hacker attacks.
Leading credit card companies and payment gateway integrations will require this as a mandate for
checkout pages.
One must also ensure that all the HTTP traffic is redirected to the HTTPS version of your site.
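A quick automated check for this (a minimal sketch using the requests library; the domain is a placeholder) is to request the plain-HTTP URL and confirm it redirects to HTTPS:

```python
# A minimal sketch: verify that plain-HTTP requests are redirected to HTTPS.
# Assumes the requests package; the domain below is a placeholder.
import requests

def test_http_redirects_to_https(domain="www.example.com"):
    response = requests.get(f"http://{domain}/", allow_redirects=True, timeout=10)
    # After following redirects, the final URL should use HTTPS.
    assert response.url.startswith("https://"), (
        f"Expected an HTTPS redirect, ended up at {response.url}"
    )
    print("HTTP -> HTTPS redirect OK:", response.url)

if __name__ == "__main__":
    test_http_redirects_to_https()
```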
6. Test Payment Gateways
An ideal way of testing payment gateways is to make dummy payments in a sandbox environment, covering all payment modes, with all the necessary test cases created beforehand.
7. Cookie Testing
Cookies are small text files stored by the user's browser. These files contain specific end-user information, such
as login information, cart details, visited pages, IP address, etc. For example, if you log in to a site, that
site will add a cookie for your login session. This cookie is later used for various purposes like
personalizing content for a returning web user, sending personalized ads, etc.
One must test the website across multiple user scenarios to evaluate its behavior with
cookies enabled or disabled.
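A minimal sketch of one such scenario is shown below (assuming Selenium; the URL and the expectation that a protected page redirects to a login form are hypothetical):

```python
# A minimal sketch of a cookie-related scenario: after deleting all cookies,
# a page that requires a logged-in session should send the user back to login.
# Assumes Selenium; the URL is a hypothetical placeholder.
from selenium import webdriver

ACCOUNT_URL = "https://www.example.com/account"   # placeholder protected page

def test_protected_page_requires_session_cookie():
    driver = webdriver.Chrome()
    try:
        driver.get(ACCOUNT_URL)
        driver.delete_all_cookies()          # simulate a user with cookies cleared
        driver.get(ACCOUNT_URL)
        assert "login" in driver.current_url.lower(), (
            "Expected redirect to the login page when no session cookie is present"
        )
    finally:
        driver.quit()

if __name__ == "__main__":
    test_protected_page_requires_session_cookie()
```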
Testing the website across leading browsers in real user conditions using a real device cloud is the best
way QAs can ensure that everything works as intended.
To balance and complement manual testing, it is crucial to start automating testing procedures as soon
as feasible. Without the use of automation and comprehensive website testing tools, the job of a QA
becomes challenging.
The people in charge of testing have written down what they want to achieve and when they want to do
it. They've given this information to the people involved in the testing so everyone knows what to do.
The objectives of testing are the specific goals and outcomes that you want to achieve from testing.
They are derived from the scope of testing, and they guide your testing strategy, methods, techniques,
and criteria. The objectives of testing should be SMART: specific, measurable, achievable, relevant, and
time-bound.
Testing is an integral part of product launches. It doesn’t matter if you’re producing machinery, media,
or software—one way or another, you should probably test whatever you’re selling before you present
it to the client.
Furthermore, to ensure that the testing is as thorough, detailed, and effective as possible, it’s a good
idea to have a software documentation base.
Software testing documentation describes artifacts created before and during software testing. In other
words, it’s a record of the testing team’s strategy, objectives, processes, metrics, and results.
Testing documentation examples occur at different points in the software testing life cycle. These are only some types of testing documentation – an exhaustive list is provided later – but they give a general overview.
Software testing is a formal component of software development and shouldn’t be just briefly
documented.
After all, the documentation facilitates and authenticates test planning, reviewing, and deployment.
However, the formality level depends on your company’s regular practices, development maturity level,
and the software type being tested.
It’s worth documenting your testing processes throughout, as doing so brings many benefits. However,
the main advantage of testing documentation is the detailed analysis it entails.
By chronicling software testing, you’ll have a clear overview of the entire process and can pinpoint both
efficiency blockers and boosters.
After recognizing the pain points, new approaches can be implemented to increase productivity. The
constant monitoring helps continuously improve testing.
Source: Archbee.com
By measuring testing practices, teams can better manage those practices. For example, if KPIs aren’t
met, it will be easier to uncover why with testing documentation.
Testing documentation is also invaluable financially. Imagine if your server wasn’t correctly rendering or
routing pages and your website/app wasn’t loading the right displays.
Without testing documentation, hours would be spent resolving the issue, and Marketing would have to
perform damage control, causing costs to skyrocket.
However, a quality testing record should detail the reason for the error and offer possible solutions.
This way, testing documentation will help resolve future issues, acting as a reference point.
The cost of defects grows throughout software development: a flaw that surfaces at the testing stage, or later in production, carries huge costs.
Source: C# Corner
Costs rise as the project advances. As such, it’s imperative to record testing processes correctly and
minimize later risks.
Besides economic fallout, testing documentation also helps avoid the occurrence of information silos. It
isn’t unusual for developers and QA teams not to know what the other is working on.
However, with testing documentation, visibility is 20/20. Everyone can gain insight into other people’s
work as needed, easing collaboration.
Furthermore, having one source of information reduces the chance of miscommunication, as all teams
continuously refer to documented processes.
Since all possible specifications are recorded, keeping track of information is easy.
This is especially useful when new hires enter the company, as the documentation can serve as training
material.
With so much information recorded, the employee onboarding process is accelerated; you won’t need
to assign mentors to new recruits. Instead, just point them to the documented intelligence.
A significant share of development time is typically spent on testing.
Source: Archbee.com
When all that testing is properly documented, you have a gold mine of information to pass on to new
hires.
Software testing documentation caters to two different audiences. First of all, it provides QA teams with
definitive data, so they can more easily strategize and perform testing.
Secondly, it communicates testing progress to relevant outside parties such as the Marketing team, the
Development team, and, of course, product owners.
Of course, there are various standards and document types; this all depends on the company, product,
and customer.
Source: Archbee.com
Test policy stipulates any testing rules that need to be followed; for example, if testers can use
private equipment or must only use corporate devices, etc.
Test strategy is a high-level document outlining at which project levels testing will be performed. As the project advances, managers use this to check if everything's on track.
Test plan is the most comprehensive document, containing all essential information, such as the
testing scope, approach, members, resources, and limitations. This is distributed to all team
members.
Test scenarios classify the product’s interface and performance into modules. They ensure that
all possible process flows are tested from start to finish.
Test cases describe the how of testing. They detail a set of conditions, inputs, and step-by-step guides. By comparing the actual result with the desired outcome, they determine if everything is in order (a minimal example follows this list).
Test data lists the data testers implement when executing test cases, e.g., media content,
generated users, statistics, and similar.
Test log lists different test cases and records test results, providing a detailed summary report.
Traceability matrix is a table of various test cases with their requirements. With this, testers can
track progress from design to coding and vice versa.
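As a minimal sketch of how a single test case might be captured in a structured, machine-readable form (the field names here are illustrative only, not a standard):

```python
# A minimal sketch of a structured test-case record, as might appear in
# testing documentation. Field names are illustrative only.
from dataclasses import dataclass, field

@dataclass
class TestCase:
    case_id: str
    title: str
    preconditions: list[str] = field(default_factory=list)
    steps: list[str] = field(default_factory=list)
    expected_result: str = ""
    requirement_id: str = ""   # link used by the traceability matrix

login_case = TestCase(
    case_id="TC-001",
    title="Login succeeds with valid credentials",
    preconditions=["A registered user account exists"],
    steps=[
        "Open the login page",
        "Enter a valid username and password",
        "Click the Login button",
    ],
    expected_result="The user is taken to the dashboard",
    requirement_id="REQ-AUTH-01",
)

if __name__ == "__main__":
    print(login_case)
```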
Now that you know what kinds of internal testing documentation there are, let’s take a look at those
that are more user-oriented.
Source: Archbee.com
Bug reports communicate all information about bugs in the software, including a short description of the issue, severity, and priority classification. A bug report should also explain the steps for recreating the bug, as well as suggest a possible solution.
Test summary reports summarize a test cycle's findings. They often disclose the cost of locating errors, general testing efficiency, test suite efficiency, the amount of rework and authentication effort, and similar.
User acceptance report outlines the results of the software testing. This document is then
handed over to stakeholders to ascertain that the developers’ and clients’ objectives are the
same; to ensure they share an identical vision.
In other words, the report verifies that the technical specifications comply with the customers’ wishes.
Now let’s take a look at some ways to ensure that your test documentation brings value to the reader.
There's no point in doing anything by halves; if your testing documentation isn't well executed, you may as well not have written it.
To make sure your testing documentation is high-quality, there are a few practices to stick to. They will ensure your documents are the best they could be.
Follow the tips below, and you’ll provide your co-workers with valuable and comprehensive software
testing resources.
Incorporating user feedback into the testing process is not just a best practice; it's a must-have for any
successful product. By listening to the voice of the customer, we can identify pain points, make informed
improvements, and ultimately create a product that exceeds expectations.
User feedback is a crucial part of the software development process as it helps developers understand
how users interact with and perceive their products.
The feedback and insights users provide can enable developers to improve their software's functionality,
usability, and overall user experience.
User feedback refers to the comments, suggestions, and opinions that users of a software product
provide to the developers. It can be collected through various methods such as surveys, focus groups,
usability testing, and user testing. User feedback can be qualitative (based on opinions and perceptions)
and quantitative (based on numerical data and metrics).
Incorporating user feedback into the development process is crucial for several reasons:
1. It helps developers understand the needs and preferences of their target audience, which can
lead to the creation of more user-friendly and effective software.
2. User feedback can help identify issues or problems with the software, enabling developers to fix
and improve them.
3. Gathering user feedback can increase user satisfaction and loyalty, since users feel that their
opinions and needs are being considered and addressed.
In this section, we will take a closer look at the importance of user feedback and discuss the benefits of incorporating it into the testing process. We will also look at how to gather and incorporate user feedback effectively, and provide some best practices.
Incorporating user feedback into the testing process can bring numerous benefits to the development
and success of a software product.
One of the primary benefits of incorporating user feedback into the testing process is the improved user
satisfaction and loyalty it can bring. By actively seeking out and addressing user feedback, developers
can ensure that their software meets the needs and expectations of their target audience.
This can lead to increased user satisfaction and a sense of ownership and engagement among users, as
they feel that their opinions and suggestions are being considered and implemented.
As a result, users are more likely to continue using the software and recommend it to others, leading to
increased user retention and loyalty.
Incorporating user feedback into the testing process can also increase product quality and usability. By
gathering feedback from actual users, developers can identify and fix real-world issues or problems with
the software, leading to a more stable and reliable product.
In addition, user feedback can provide valuable insights into the usability and user experience of the
software, enabling developers to make necessary improvements and enhancements. This can lead to
more user-friendly and intuitive software, increasing its adoption and usage.
Incorporating user feedback into the testing process can also enhance customer experience. By actively
engaging with users and considering their feedback, developers can create a software product that
better meets the requirements and expectations of its users. This can lead to a more enjoyable and
seamless user experience, fostering customer satisfaction and loyalty.
Incorporating user feedback into the testing process can also improve efficiency and cost-effectiveness
in the development process. By gathering user feedback early in the development process, developers
can identify and address any issues or problems before they become significant, saving time and
resources.
Moreover, by regularly incorporating user feedback throughout the development process, developers
can avoid the need for costly and time-consuming redesigns or changes later on.
Finally, incorporating user feedback into the testing process can lead to greater market success for a
software product. By creating a product that meets the needs and preferences of its target audience,
developers can increase its adoption and usage.
User feedback can provide valuable insights into the competitive landscape and market trends, enabling
developers to stay ahead of the curve and differentiate their products from competitors.
This can contribute greatly towards increased market success and a competitive advantage for the
software product.
There are several steps that can help organizations effectively incorporate user feedback into the testing
process:
There are many methods to gather user feedback, including surveys, focus groups, usability testing, and
online reviews. It’s critical to use a combination of different methods to get a well-rounded
understanding of user needs and preferences.
Surveys can be useful for gathering large amounts of data quickly, while focus groups and usability
testing provide more in-depth and qualitative insights.
Online reviews can also be a valuable source of feedback, as they provide a glimpse into how users
interact with the product in the real world.
Once user feedback has been gathered, it’s necessary to analyze and prioritize it to determine which
feedback is most important to address.
Organizations can do this by categorizing feedback into themes and prioritizing based on the number of
users who have provided similar feedback, or the impact that addressing the feedback would have on
the product.
It’s also pivotal to consider the feasibility of implementing the feedback and the resources required.
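As a toy sketch of the categorize-and-prioritize step (the feedback items and theme keywords below are made up purely for illustration):

```python
# A toy sketch of categorizing user feedback into themes and ranking themes
# by how many users raised them. Feedback items and keywords are made up.
from collections import Counter

FEEDBACK = [
    "The checkout page is slow on my phone",
    "I can't find the search bar",
    "Checkout keeps timing out",
    "Search results are irrelevant",
    "App is slow when uploading photos",
]

THEME_KEYWORDS = {
    "performance": ["slow", "timing out", "lag"],
    "search": ["search"],
    "checkout": ["checkout"],
}

def categorize(feedback_items):
    counts = Counter()
    for item in feedback_items:
        text = item.lower()
        for theme, keywords in THEME_KEYWORDS.items():
            if any(keyword in text for keyword in keywords):
                counts[theme] += 1
    return counts

if __name__ == "__main__":
    for theme, count in categorize(FEEDBACK).most_common():
        print(f"{theme}: {count} mention(s)")
```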
Once the most important user feedback has been identified, organizations should incorporate it into the
testing plan.
This might involve creating new test cases, modifying existing test cases, or adding new features or
functionality based on user feedback. It’s crucial to involve the development team in this process to
ensure that the feedback can be effectively implemented.
After the changes have been made based on user feedback, it’s essential to test and validate them to
ensure that they meet the needs and expectations of the user base.
This might involve conducting additional usability testing or gathering further feedback through surveys
or focus groups. It’s also important to track and measure the impact of the changes to ensure that they
are effective and to identify further areas for improvement.
By following the steps described above, you can effectively incorporate user feedback into your testing
process and improve the overall success of your product.
Certain best practices can help to make the process of incorporating user feedback into testing more
effective and efficient.
One of the most effective ways to incorporate user feedback into the testing process is to involve users
in the testing process itself directly.
This might involve conducting usability testing with a representative group of users or even recruiting a
group of users to act as beta testers for the product. Involving users in the testing process allows them
to provide direct feedback on the product as it’s being developed, which can be invaluable in identifying
and addressing potential issues before the product is released.
Gathering user feedback is an ongoing process. It’s fundamental to regularly gather and review feedback
to identify areas for improvement and keep the product up to date with user needs and expectations.
This can be accomplished by conducting regular surveys, monitoring online reviews, or conducting focus
groups regularly. By regularly gathering and reviewing user feedback, you can stay attuned to the
changing needs and preferences of your user base and make timely updates and improvements to the
product.
As mentioned before, there are many different methods for gathering user feedback, including surveys,
focus groups, usability testing, and online reviews. Using multiple methods allows you to get a well-
rounded understanding of user needs and preferences and ensures that you get feedback from a diverse
group of users.
This can be particularly useful when trying to identify trends or common issues among different
segments of the user base.
By following these best practices, you can judiciously incorporate user feedback into your testing
process and improve the overall success of your product.
Code-based testing is a crucial aspect of software development that ensures the integrity and quality of
the code. It involves systematically testing the code to identify bugs, defects, and vulnerabilities before
deploying the software.
A test environment (or QA environment) is set up as a combination of elements such as hardware, software, network configuration, and test data. Several types of test environments serve different purposes:
1. Integration Testing Environment
Used to integrate individual software components and test the performance of the integrated system. Integration tests check that the system acts as it is meant to – according to requirements documents.
In a DevOps setup, integration occurs multiple times a day, which means that the integration environment will be in near-constant use. Naturally, it has to be modulated to replicate the production environment as far as possible.
2. Performance Testing Environment
Used to verify how the software performs against previously determined standards. These goals can range from response time, stability, and compatibility to throughput and concurrency, depending on what the app seeks to offer its users.
Performance testing is a broad term and includes various test categories – load, stress, volume, breakpoint, and the like. Essentially, performance tests exercise every feature and identify bottlenecks or restrictions in the user journey.
Generally, performance tests require significant time and funds. Thus, it is best to set up the QA environment and run multiple tests simultaneously, usually when a major change has been made to the software. It also makes sense to run performance tests before a software release cycle.
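As a very small illustration of the idea (real performance testing uses dedicated tools such as JMeter; the URL below is a placeholder), the snippet measures response times for repeated requests and reports an average and worst case:

```python
# A very small response-time measurement sketch (not a substitute for a real
# load-testing tool). Assumes the requests package; the URL is a placeholder.
import time

import requests

URL = "https://www.example.com/"
SAMPLES = 20

def measure_response_times(url, samples):
    timings = []
    for _ in range(samples):
        start = time.perf_counter()
        requests.get(url, timeout=30)
        timings.append(time.perf_counter() - start)
    return timings

if __name__ == "__main__":
    times = measure_response_times(URL, SAMPLES)
    print(f"average: {sum(times) / len(times):.3f}s, worst: {max(times):.3f}s")
```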
3. Security Testing Environment
Used to check that the software does not have security gaps, flaws, or vulnerabilities concerning
authentication, confidentiality, authorization, and compliance.
Security testing QA environments are set up by internal and external security experts, who study the
software to determine which parts would likely be targeted and by which means such threats can come.
4. Chaos Testing Environment
Used to introduce stressors that can cause failures in the software. Chaos testing intends to test the resilience of the systems in the real world. Successful chaos tests identify areas of instability and help ensure that the team does not become complacent. They also help testers and devs recognize the systems' critical dependencies and the main points where failures could occur.
Chaos testing environments must be configured for scale and high availability. Testers often run chaos
tests alongside performance tests, so it may be possible to perform both in the same interface.
Closing Notes
Source: Google.com
Code-based testing involves a multitude of methodologies and techniques aimed at ensuring the
reliability and quality of software. From unit testing to integration testing and beyond, developers have
a range of strategies to choose from when validating their code.
A fundamental aspect of code testing is constructing a robust foundation of tests that cover diverse
scenarios and edge cases. These tests act as a safety net, providing continuous feedback on the
functionality and correctness of the code. By diligently testing their codebase, developers can identify
and address issues early on, minimizing the time and effort spent on debugging and maintenance in the
long run.
Furthermore, code testing plays a pivotal role in guaranteeing the overall quality and reliability of
software. Thorough testing not only enhances customer satisfaction but also prevents potential issues
that could lead to revenue loss or negative user
experiences. By implementing a comprehensive testing plan, the software development process
becomes more efficient, instilling confidence in both the development team and end users.
Code testing embraces various methodologies that go beyond any single approach.
Manual Testing: Manual testing involves human interaction with the system under test. Developers or
end users manually test the code by performing various tasks, providing inputs, and verifying the
outputs. This can be done by developers testing their own code or involving a sample of end users to
test different functionalities and report any issues they encounter.
While manual testing is quick to start with, it has some drawbacks. Human testers are prone to errors,
and for large-scale projects, it can be expensive to conduct extensive manual testing. However, manual
testing provides the flexibility to thoroughly examine the software, and it can be effective in discovering
usability issues and obtaining user feedback.
Automated Testing: To reduce costs and increase efficiency, automated testing uses scripts or tools to
automate the testing process. Test scripts are created with predefined test cases and expected
outcomes. These scripts simulate user interactions and verify the correctness of the software’s
responses. In the event of a response deviating from the anticipated outcome, an error message or
warning is triggered.
While creating automated test scripts requires more upfront time and resources, once established, they
can be run multiple times throughout the software’s lifecycle. As the software evolves, the test scripts
can be updated to accommodate new functionalities without the need for extensive manual retesting.
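As a small sketch of this idea, a data-driven automated test can pair predefined inputs with expected outcomes; any deviation is reported as a failure. The validation function here is a stand-in for real application logic, not taken from the source.

```python
# A small sketch of a data-driven automated test: predefined inputs are paired
# with expected outcomes. The validate_username function is a toy stand-in.
import pytest

def validate_username(name):
    """Toy rule: 3-12 characters, letters and digits only."""
    return name.isalnum() and 3 <= len(name) <= 12

@pytest.mark.parametrize(
    "username, expected",
    [
        ("alice", True),
        ("ab", False),                 # too short
        ("this_is_way_too_long", False),
        ("bob123", True),
        ("bad name!", False),          # invalid characters
    ],
)
def test_validate_username(username, expected):
    assert validate_username(username) == expected
```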
Repeat Testing and Code Coverage: Even if automated test suites pass all tests, it is important to
account for potential regressions caused by changes in the code. Repeating the test script whenever a
new feature is ready for deployment helps ensure that existing functionality is not inadvertently
affected.
The terms code coverage and test coverage are relevant in testing. Code coverage refers to the
percentage of code that is executed during testing, while test coverage measures the percentage of
required features or specifications that are tested. Achieving 100% code coverage ensures that all code
paths have been tested, reducing the chances of untested scenarios causing issues.
Code testing techniques play a crucial role in ensuring the success of any software development project. These techniques involve a variety of testing approaches, spanning from evaluating small code components to assessing the overall functionality of the application. Let's dive into the five core components of code testing techniques:
Unit Tests: Unit testing is an integral part of the software development process. It involves testing small
units or components of code individually to ensure their proper operation. This testing can be
performed manually, but it is often automated in Agile and DevOps projects. Unit tests help identify
issues in specific code units and ensure they function as intended.
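For example, here is a minimal unit test using Python's built-in unittest module; the function under test is a toy example, not part of any real product.

```python
# A minimal unit test example using the standard library's unittest module.
# The function under test is a toy example.
import unittest

def apply_discount(price, percent):
    """Return the price after applying a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTests(unittest.TestCase):
    def test_regular_discount(self):
        self.assertEqual(apply_discount(200.0, 25), 150.0)

    def test_zero_discount_keeps_price(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)

    def test_invalid_percent_raises(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

if __name__ == "__main__":
    unittest.main()
```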
Integration/System Tests: Integration testing focuses on combining individual software modules and
testing them as a group. It occurs after unit testing and before functional testing. This testing stage
verifies the interactions and compatibility between different modules, ensuring they work together
seamlessly.
Functional Tests: Functional testing is conducted after integration testing. It involves testing the
software to ensure that it meets all specified business requirements and functions correctly. The goal is
to validate that the software has all the necessary features and capabilities for end users to utilize it
without encountering any issues.
Regression Tests: Regression testing is performed to verify that software, which may have undergone
changes such as enhancements, bug fixes, or compliance updates, still performs correctly. It ensures
that the modifications made to the software do not introduce new issues or break existing functionality.
Regression tests help maintain the overall quality and stability of the software across different releases.
Acceptance Tests: Acceptance testing is carried out to evaluate the system’s acceptability from the end
user’s perspective. The primary objective is to assess whether the software complies with the business
requirements and is suitable for production deployment. Acceptance tests validate that the software
meets the expectations and needs of the end users.
By incorporating these coding testing techniques into the software development lifecycle, teams can
identify and rectify issues at different levels, ensuring a higher quality and more reliable software
product.
An environment, in the context of creating and deploying software, is the subset of infrastructure
resources used to execute a program under specific constraints. Throughout the various stages of
development, different environments are used to handle the requirements of the Development and
Operations team members. Each environment allows developers to test their code under the
environment’s specific set of resources and constraints.
Though the names and number of environments can vary from organization to organization, the five environments we will cover here are: local development, integration, QA/testing, staging, and production.
To give context to these environments, we'll use the example of a company that is making an email client, such as Gmail.
Local Development Environment
In our email client example, the local development environment is where developers would be
programming all the features and functionalities of the client. Individual developers may each be
assigned to locally develop – and test in isolation – a single feature, such as fetching the user’s emails,
displaying them, navigating between emails, drafting emails, etc.
Integration Environment
The integration environment is where developers attempt to merge their changes into a unified
codebase, often using source-control software like Git. The application is likely to have tests fail during
this integration step as multiple developers, who had previously been working in isolation,
simultaneously attempt to merge their code. If this happens, developers can work on fixes in their local
development environment and attempt to merge again. Integration tests may need to be updated in
this environment as well.
In our email client example, as developers complete their individual features locally, they may
simultaneously attempt to integrate their changes into a unified codebase.
QA / Testing
The quality assurance (QA) environment (a.k.a. the testing environment) is where tests are executed to
ensure the functionality and usability of each new feature as it is added to a project. These tests include
unit tests of individual units of code, integration tests of interactions between internal services, and end-
to-end tests which include all internal and external services running. When these tests are written and
performed depends on the organization, but new and existing features are typically run against a test
environment throughout the development process. The testing environment typically requires less
infrastructure than is used in production.
In our email client example, tests run automatically when there is a change to the main branch to verify
the functionality of units in isolation, such as testing displaying an email with mocked data. They’ll also
have integration tests executed that exclusively test the application’s internal services (the client as a
whole) with mocks for any external services needed (actual email data). End-to-end tests would also be
conducted that use real networking and external services, with the client working as an actual email
client.
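To make the mocking idea concrete, here is a minimal sketch of a test that checks email display logic against mocked email data, so no real mail server is needed. The function name and data shape are hypothetical, not from a real client.

```python
# A minimal sketch of testing "display an email" logic with mocked data.
# The function and data shape are hypothetical.
from unittest.mock import Mock

def render_subject_line(mail_client, message_id):
    """Hypothetical client logic: fetch a message and build a subject line."""
    message = mail_client.fetch_message(message_id)
    sender = message["from"]
    subject = message["subject"] or "(no subject)"
    return f"{sender}: {subject}"

def test_render_subject_line_with_mocked_email():
    mock_client = Mock()
    mock_client.fetch_message.return_value = {
        "from": "alice@example.com",
        "subject": "Meeting notes",
    }
    line = render_subject_line(mock_client, message_id=42)
    assert line == "alice@example.com: Meeting notes"
    mock_client.fetch_message.assert_called_once_with(42)

if __name__ == "__main__":
    test_render_subject_line_with_mocked_email()
    print("mocked email display test passed")
```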
Staging
The staging environment is an environment that attempts to match production as closely as possible in
terms of resources used, including computational load, hardware, and architecture. This means that
when an application is in staging, it should be able to handle the amount of work it is expected to be
doing in production. In some cases, an organization may choose to employ a period when the project is
used internally (often referred to as “dogfooding”) before moving to production.
In our email client example, the email client will be fully functional at this stage and will be tested,
simulated and in use internally within the organization. The architecture and hardware used for our
client is the same as it will be once our project reaches the production environment.
Production
The production environment refers to the infrastructure resources that support the application accessed
by clients. This infrastructure consists of hardware and software components including databases,
servers, APIs, and external services scaled for real-world usage. The infrastructure required in the
production environment must be able to handle large amounts of traffic, cyber-attacks, hardware
failures, etc.
Depending on how a company wants to release their project, deployment strategies can differ greatly. Common examples include rolling deployments, blue-green deployments, and canary (phased) releases.
These various approaches allow the development team to test their application in a full production
environment, including when the application is released to 100% of users.
For our email client example, the organization will use a phased approach – at first, only 10% of users
will be able to use the feature, gradually increasing to 100%.
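A minimal sketch of how such a phased (percentage-based) rollout is often implemented with a feature flag is shown below; the hashing scheme and percentages are illustrative, not a specific product's mechanism.

```python
# A minimal sketch of a percentage-based (phased) feature rollout: each user is
# deterministically bucketed, and the feature is enabled for a configurable
# share of users. The scheme is illustrative only.
import hashlib

ROLLOUT_PERCENT = 10   # start at 10%, later raise toward 100

def is_feature_enabled(user_id, feature_name, rollout_percent=ROLLOUT_PERCENT):
    """Deterministically place a user into a bucket from 0 to 99."""
    key = f"{feature_name}:{user_id}".encode("utf-8")
    bucket = int(hashlib.sha256(key).hexdigest(), 16) % 100
    return bucket < rollout_percent

if __name__ == "__main__":
    enabled = sum(is_feature_enabled(uid, "new_inbox") for uid in range(10_000))
    print(f"enabled for roughly {enabled / 100:.1f}% of 10,000 simulated users")
```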
System testing, also referred to as system-level testing or system integration testing, is the process in
which a quality assurance (QA) team evaluates how the various components of an application interact
together in the full, integrated system or application.
The purpose of system testing is to ensure that a system meets its specification and any non-functional
requirements (such as stability and throughput) that have been agreed with its users.
System testing abides by certain key principles to maintain integrity and reliability.
Let's consider a real-world example: imagine you're building a new social media platform.
Example:
In this situation, you would approach system testing by first testing each functionality separately (profile
creation, message functionality, photo upload, etc.). Once these tests are successful, you then test the
whole system in unison, evaluating everything from data handling and security measures to response times
and friend request functionality.
System testing is paramount in the field of computer science for numerous reasons:
Assurance of Quality – ensures the finished product meets the user's needs and the specified requirements.
Preventing Business Losses – prevents possible business losses due to app failure, as it ensures system stability.
Without system testing, you'd run the risk of deploying a software or app full of bugs, which could result
in poor user experience, tarnishing the reputation of your product and your brand. It's not just about
delivering functional software, but software that provides a seamless and engaging user experience.
System testing isn't a monolithic process but rather a multifaceted one that comprises various types. Each
type is unique and involves a distinct approach designed to target specific aspects and requirements of the
software.
System Integration Testing (SIT) is a crucial type of system testing. This process involves testing the
connectivity and interaction between different subsystems to ensure they work harmoniously.
Definition
Integration testing is a testing phase where individual software modules are integrated logically and tested
as a group. The main aim of this testing is to expose faults in the interaction between the integrated
modules.
Think of it like putting together pieces of a jigsaw puzzle. Each piece, or subsystem, may look great on its
own, but the ultimate test lies in how well they fit together.
There are three common approaches to integration testing:
Top-down approach
Bottom-up approach
Sandwich (a combination of top-down and bottom-up)
While the Top-Down approach tests the main module first and then moves towards testing subsidiary
modules, the Bottom-Up approach starts with subsidiary modules and gradually moves up to the main
module. The Sandwich approach is a mix of both and is named so because of the layered testing levels
representing a sandwich. SIT plays a crucial role in revealing discrepancies, communication gaps, and
inconsistencies in the data shared between different modules. Therefore, conducting SIT is of paramount
importance before deploying any software.
An Accessory Test System (ATS) in system testing is a set of tools designed to carry out system testing
effectively and efficiently. An ATS includes several components:
Fixtures and probing – the interfaces which align your product with the test equipment.
An ATS is often automated to increase accuracy and repeatability. The automated system can be coded to
perform predetermined tasks without human supervision.
Example:
For instance, in testing a music streaming app, the ATS would simulate various user actions such as song
selection, play, pause, skip, add to playlist, etc. and evaluate the system's response to each action. This
automation dramatically speeds up the testing process and reduces potential for human error.
In summary, the Accessory Test System plays an essential role in making system testing more efficient,
accurate, and reliable. Ensuring that we use a well-equipped and well-programmed ATS is just as important
as ensuring thorough system testing itself.
Practical Approach to System Testing
The world of theoretical knowledge can often feel worlds apart from practical application. However,
bridging this gap is essential for truly understanding any concept. This is especially true for System Testing,
where theory meets real-world use. It's not enough to just know what System Testing is; you need to
understand how to implement it.
Diving headfirst into System Testing without a plan will lead to confusion and chaos. The most effective way
to navigate this process is to have a step-by-step plan:
Step 1: Define the Requirements Your testing needs to align with the specifications. Hence, having a
crystal-clear understanding of the requirements is essential.
For example, if you're testing a weather application, the requirements might involve:
Accurate real-time weather updates
An included forecast for the next 7 days
An alert system for serious weather changes
Step 2: Create a Test Plan A test plan outlines the strategy that will guide your testing efforts. It includes
scope, approach, resources, and schedule of intended activities.
Step 3: Design Test Cases Next, design the test cases according to the requirements. Remember, a great
test case not only checks functionality but also considers possible 'edge cases' where users might not follow
the expected path.
Definition
A test case in system testing is a set of conditions or variables that a tester will use to determine if a system
under test satisfies requirements and works correctly.
Step 4: Execute Test Cases Now, it's time to conduct your tests. Instead of undertaking them manually, use
tools for automated testing. Some popular tools include Selenium, JMeter, and Appium.
Step 5: Analyze the Results and Report Once your tests are done, compile the results and analyze them
against the expected outcomes. It's important to document everything well, as these records provide vital
information for future testing cycles and root cause analysis.
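To tie Steps 3 and 4 together, here is a minimal sketch of an automated test case for the 7-day forecast requirement from Step 1; the API endpoint and response shape are hypothetical placeholders.

```python
# A minimal sketch of a test case for the "7-day forecast" requirement.
# The endpoint URL and JSON shape are hypothetical placeholders.
import requests

FORECAST_URL = "https://api.example-weather.com/v1/forecast?city=London"

def test_forecast_returns_seven_days():
    response = requests.get(FORECAST_URL, timeout=10)
    assert response.status_code == 200, "Forecast endpoint should be reachable"
    payload = response.json()
    days = payload.get("daily", [])
    # Requirement: an included forecast for the next 7 days.
    assert len(days) == 7, f"Expected 7 daily entries, got {len(days)}"

if __name__ == "__main__":
    test_forecast_returns_seven_days()
    print("forecast test passed")
```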
To further illustrate the complexities and execution of System Testing, let's explore some real-world
examples.
Example 1: System Testing an E-commerce Site Imagine you're testing an e-commerce website like
Amazon or eBay. Key components to test might include:
Payment processing
Product reviews
Each of these modules needs to undergo individual and integrated testing. Always factor in how different
modules interact with each other.
Example 2: System Testing a Mobile Application Mobile applications tend to face more diverse scenarios
due to factors such as varying operating systems, screen sizes, and network conditions. Consider system
testing an app like Uber; several major elements of such an app would need to be tested.
In each example, you can observe that system testing requires meticulous planning and execution. Each
individual component, as well as the system as a whole, needs to be rigorously tested to ensure a flawless
user experience. These examples serve to illuminate the complexity and rigor that goes into effective
system testing. System testing is an iterative process that helps developers spot issues, make
improvements, and ensure the final product can handle real-world usage scenarios adequately.
The realm of problem solving is broad and requires numerous strategies. One such approach is system
testing. System testing is not just about making sure software functions as expected—it's also a powerful
problem-solving tool. By systematically inspecting and interacting with a system, you can anticipate
problems, improve the system's reliability, and ultimately ensure a satisfactory user experience.
System testing in problem solving is essentially an investigation to clarify where and how a system might
fail. You can view a problem as a system fault or failure—a point where the system does not behave as
expected. By using systematic testing techniques, you can locate these points of failure and resolve them.
Definition
System Testing is essentially a series of investigative procedures where a complete, integrated system is
tested to evaluate its compliance with certain criteria. It is conducted on a complete system in order to
expose potential issues arising from interaction among system components.
Documentation helps ensure that the agreed processes are followed. It also helps communicate the current status of the project to stakeholders, helps the QA team maintain consistency in the testing process, and provides a record of the testing activities performed.
A lot of artifacts and processes are involved during the Software Testing Life Cycle. As part of the
traditional way, we write Test Strategies, Test Plans, Test Scenarios, Test Cases, Test Results, Traceability
Matrix, Status Reports, and Test Reports. During the testing life cycle, besides the artifacts mentioned above, testers also dig into other areas that are worth documenting and storing for future reference.
Given the amount of information used during the testing process, it is worth having a clear approach to documenting and storing it.
To document anything, you should first understand what you are documenting. Gain a solid
understanding of what you are writing about. Gather all the information that you need to write about.
Within a project, when you are looking to document the test process, you should consider the format and the location where you will store the documents. Whether you prefer a Google Doc, a wiki page, or a Microsoft Word document depends on the type of information and how easily it needs to be accessed for future reference.
Once you have curated all the necessary details, you can share them with the intended audience. Also,
ensure to update the documents whenever necessary. It’s not a one-time activity. It should be a
continuous process.
“We are what we repeatedly do. Excellence, then, is not an act, but a habit.” - Aristotle, Greek
philosopher.
1. Use Clear and Concise Language
While documenting, employ language that is clear, succinct, and easy to comprehend. Avoid using jargon or technical phrases that may confuse readers. To communicate information successfully, use brief sentences and bullet points.
2. Include Relevant Details
Include the necessary information in the documents you create. This comprises details such as the test's objective, the procedure to recreate a problem, expected and actual findings, and any supporting resources such as screenshots or log files. By providing detailed and precise information, you ensure that others can reproduce and comprehend the testing procedure. Make sure to add or link all the associated documents and external links.
3. Organize Documentation
Organize your documentation into logical segments to make it easy to read and explore. Divide the
document into sections for distinct test cases or areas of emphasis. To highlight essential ideas or critical
discoveries, use formatting choices such as headings, subheadings, bold, or italics.
4. Use Visuals
Visuals may improve the clarity and impact of your material significantly. Include images, diagrams,
flowcharts, and mind maps to show complicated ideas or problems encountered during testing. Visuals
may significantly help readers comprehend the context and increase the overall efficacy of your content.
Screen capturing and video recording can be helpful for documenting complex stuff.
5. Create Templates
To ensure everyone is documenting all important information, create templates that everyone can use
to provide consistency.
6. Review Documentation
Always get the documents reviewed by the SMEs and other stakeholders to make sure they are accurate and up to date.
7. Update Documentation Regularly
Quality assurance documentation is not a one-time task. It should be updated on a regular basis to reflect project changes and evolving requirements. Maintain a well-organized system for documentation, version control, and access. This guarantees that everyone has access to the most recent and up-to-date information.
Teacher's Activity:
Ask Question
Show Presentation
Demonstration
Show video:
https://www.youtube.com/watch?v=l5cAeQ3BhjI
Reference:
Site:
https://www.google.com/
https://www.youtube.com/
https://www.browserstack.com/guide/how-to-perform-website-qa-testing
https://www.archbee.com/blog/software-testing-documentation
https://www.functionize.com/blog/incorporating-user-feedback-into-testing
https://www.browserstack.com/guide/what-is-test-environment
https://www.browserstack.com/guide/code-based-testing
https://www.codecademy.com/article/environments
https://www.studysmarter.co.uk/explanations/computer-science/problem-solving-techniques/system-testing/
https://muuktest.com/blog/software-testing-documentation#steps_in_the_documentation_process
eBook:
Web Application Testing by Giuseppe A. Di Lucca and Anna Rita Fasolino
Assessment 6-1:
Written Test
Test I: Multiple Choice: Write your chosen letter on the space provided.
__________ 1.
To make sure your testing documentation is _____________, there are a few
practices to stick to. They will ensure your documents are the best they
could be.
a. high-value b. high-intensity c. high-quality
__________ 2. User acceptance report outlines the ___________ of the software testing.
a. results b. procedure c. beginning
__________ 3. Software testing documentation describes artifacts created before and
___________ software testing.
a. after b. during c. further
__________ 4. Software testing is a __________ component of software development and
shouldn’t be just briefly documented.
a. informal b. formal c. normal
__________ 5. The _____________ monitoring helps continuously improve testing.
a. constant b. few c. one-time
__________ 6. By ___________ testing practices, teams can better manage those practices.
a. identifying b. documenting c. measuring
__________ 7. _____________ stipulates any testing rules that need to be followed; for
example, if testers can use private equipment or must only use corporate
devices, etc.
a. Test strategies b. Test plan c. Test policy
__________ 8. _______________ is a high-level document outlining at which project levels
testing will be performed. As the project advances, managers use this to check if
everything's on track.
a. Test strategy b. Test plan c. Test policy
__________ 9. _____________ is the most comprehensive document, containing all
essential information, such as the testing scope, approach, members,
resources, and limitations.
a. Test strategy b. Test plan c. Test policy
__________ 10. _____________ communicate all information about bugs in the software,
including a short description of the issue, severity, and priority
classification.
a. Summary report b. Bug reports c. Acceptance report
Test II: True or False: Write the letter T if the statement is true and F if the statement is false on the
space provided.
_____________ 1. For our email client example, the organization will use a phased approach
– at first, only 10% of users will be able to use the feature, gradually
increasing to 100%.
_____________ 2. The production environment refers to the infrastructure resources that
support the application accessed by clients.
1. ______________________________________
2. ______________________________________
3. ______________________________________
4. ______________________________________
5. ______________________________________
1. ______________________________________
2. ______________________________________
3. ______________________________________
4. ______________________________________
5. ______________________________________
1. __________________________________
2. __________________________________
3. __________________________________
4. __________________________________
1. __________________________________
2. __________________________________
3. __________________________________
4. __________________________________
1. __________________________________
2. __________________________________
3. __________________________________
4. __________________________________
5. __________________________________
Activity
Steps/Procedure: