UX Metrics and ROI

Kate Moran, Feifei Liu

5th Edition

48105 Warm Springs Blvd., Fremont, CA 94539-7498 USA

Copyright © Nielsen Norman Group; All Rights Reserved.


Copyright Notice
Please do not post this document to the internet or to publicly available
file-sharing services.

This report required hundreds of hours of planning, recruiting, testing, analyzing, writing and
production. We sell these reports to fund independent, unbiased usability research; we do not
have investors, government funding or research grants that pay for this work.

We kindly request that you not post this document to the internet or to publicly available file-
sharing services. Even when people post documents with a private URL to share only with a
few colleagues or clients, search engines often index the copy anyway. Indexing means that
thousands of people will find the secret copy through searches.

If someone gave you a copy of this report, you can easily remedy the situation by going to
www.nngroup.com/reports and paying for a license.

We charge lower fees for our research reports than most other analyst firms do, because we
want usability research to be widely available and used.

Thank you!

Report Authors: Kate Moran, Feifei Liu


Table of Contents
Executive Summary 6
Average UX Improvements. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
UX Improvements Are Shrinking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
What’s New in the 5th Edition. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9

UX Metrics 10
Quantitative vs. Qualitative Data. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
Why Quantitative Data Helps UX teams. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
Using Quantitative Data for UX . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
Methods and Metrics. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
Combining Quantitative and Qualitative Data. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21

Getting Started with UX Benchmarking 23


Benchmarking Steps. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24

ROI: Demonstrating the Value of UX 36


Why Bother Calculating ROI?. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
How to Calculate ROI. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
Calculating ROI in Different Contexts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
Connecting UX Metrics to KPIs with Correlations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45

The Magnitude of UX Improvements 49


Estimating the Magnitude of Gains from Design. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
Computing Improvement Scores. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
Expected UX Improvements. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
How Have UX Metric Improvements Changed Over Time? . . . . . . . . . . . . . . . . . . . . . . . 56
Future Predictions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64

CASE STUDIES 65

About the Case Studies 68


By Edition. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
By Metric Category. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
By Industry. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74

Case Studies — 5th Edition 77


Acumbamail (MailUp Group). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
AIR MILES Reward Program (LoyaltyOne). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
Alchemy Cloud. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
American Kennel Club (ExpandTheRoom). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
Anonymous American Bank. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
Anonymous Car Insurance Company . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
Anonymous HR Tool. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100
Anonymous Real Estate Company (Marketade) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
Arizona State University Online. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106
Asiacell. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
Baileigh Industrial (Marketade). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126
CrossCountry (McCann Manchester). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130
Deep Sentinel. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133
Healio CME . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139
HeliNY (ExpandTheRoom). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 148
HelloFresh. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 160
Jira (Atlassian). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 167
Kasasa . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174
myAir (ResMed) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 181
Oakley (Luxottica). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .185
PetSmart Charities (Marketade). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 188
Philip Morris International HR Portal. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 194

Ray-Ban (Luxottica). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 199


Shopify. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 203
Starbucks College Achievement Plan (Arizona State University). . . . . . . . . . . . . . . . . 210
Syneto CENTRAL. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 215
The Deal (ExpandTheRoom). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 217
Top 10 Online Casinoer. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 224
User Interviews. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 230
WordFinder (LoveToKnow Media). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 233

Case Studies — 4th Edition 237


Harrisburg Area Community College . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 237
University of Edinburgh. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 244

Case Studies — 3rd Edition 248


Adobe kuler (kuler.adobe.com). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 248
Capital One. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 251
Direct Marketing Association. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 253
Eurostar (Etre). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 255
Health Care Without Harm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 261
Media News Group Interactive. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 264
Microsoft Office Help Pages. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 266
North Carolina State University. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 271
Scandinavian Airlines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 278
Shelter.org.uk (England and Scotland). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 281
Simply Business Insurance. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 284
Sarah Hopkins (artist). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 288

About the Authors 291

Acknowledgements 292

About NN/g 294


Executive Summary
A UX metric is a piece of numerical data that tells us about some aspect of the user experience of
a product or service. These metrics can be invaluable, helping teams to assess the quality of their
designs and track improvements over time. In some cases, they can be used to demonstrate how
UX work can impact business goals.

This report includes 44 real-life case studies of how teams have used UX metrics in their work.
Many are success stories, demonstrating how well-founded and researched design decisions
have a huge impact. Some are not success stories — the impact on the design was neutral or
even negative. This is the reality of design work: we can’t always predict the impact on our users.
This fact underscores the continued need for iterative design work based on solid research.

AVERAGE UX IMPROVEMENTS
From the case studies we accepted for the 5th edition, we collected a total of 80 pairs of metrics.
We included 76 of these pairs in our quantitative analysis. (The four metric pairs excluded were
outliers, with improvement scores of over 2,000%.)

Averaged across all organizations that reported metrics for our 5th edition, the average UX
improvement score was 75%, with a 95% confidence interval from 8% to 104%. In other words,
across all these organizations, the average redesign was 75% better than the original, for a
variety of metrics.

Does this mean that you should expect around a 75% improvement in your metrics when
redesigning your product? Possibly, but as the wide confidence interval shows, there is an
immense amount of variability in the data. In our data, half of the values were between 13%
and 157% (the interquartile range).

This doesn’t mean that we can expect 50% of all design projects to have an improvement score
within that range. It’s possible your own impact could be a 500% or 5,000% improvement, but
our data suggests that an improvement score that high is unusual.

Why Does UX Impact Vary?


We weren’t surprised to find that our data set had immense amounts of variability, because the
impact of any given UX project can be very different from another. The outcome of UX projects
depends heavily on:
• The existing quality of the experience being improved
• The expertise and talent of the team working on the project
• The quantity and quality of the changes implemented

Improvements Are Smaller Because Designs Are Better


This decrease in average improvement scores doesn’t mean we’re doing a bad job; it shows that, as
an industry, we’ve done an excellent job over the past 10+ years. At the beginning of the human
factors and usability movement, just about every product had substantial room for improvement.

We’ve addressed many of the biggest problems. In some cases, those were individual fixes for
specific problems. But as an industry, our collective knowledge has grown as well — we now
have a rich set of best practices and design patterns. Each individual designer can build on the
existing work of the designers that have gone before.

So, are all the world’s UX problems now fixed, and are all designs perfect? Certainly not. There’s
still substantial room for improvement for the majority of experiences. (Lucky for us, that means
we have great job security.) This finding simply shows that we’ve done a good job improving
experiences overall and addressing the most glaring problems.

Expectations Rise Alongside UX Improvements


Does this mean that UX is less important or impactful today? Also no. We believe that even
though the magnitudes of these design changes have decreased, they are no less important.
Because experiences overall are getting better, user expectations have gotten much,
much higher. If you put a website considered adequate in 2006 in front of a user today, she
would refuse to use it.

We believe that those expectations will continue to rise in lockstep with the average quality of
experiences. (Again, this is good for UX professionals’ job security!) As a consequence, even small
improvements in the UX may be worth an organization’s time. This is even more true when you
consider that your competitors are likely improving their experiences as well (a UX arms race).

If these trends continue into the future, designs will continue to get better on average, and as a
result, UX interventions will continue to have small effects. However, we believe that those small
changes will continue to be more valuable because user expectations and standards will grow
ever higher as well.

WHAT’S NEW IN THE 5TH EDITION


The 5th edition is a radical transformation of this report. The report’s title for the first four editions
was Return on Investment (ROI) for Usability. Those previous editions focused on a big concern
from the early 2000s — how to argue that companies should bother to make products that
people actually want to use.

The need to make that argument still exists, and quantitative data can still help teams to show
that UX design is valuable. But it can also help to ensure that teams are doing UX design correctly
— making the right choices and having the right impact. In many cases, it can help teams fight for
a bigger budget or can help them change the way that UX is done.

Explanations and Instructions for Applying These Approaches


In this edition, we’ve added three chapters with definitions of important concepts, as well as
advice about how to apply these concepts on your own:
• UX Metrics (page 10) gives foundational explanations of how quantitative data is often
used in UX, as well as definitions of methods and metrics referenced in the case studies.
• Getting Started with UX Benchmarking (page 23) explains how benchmarking works,
including step-by-step instructions for how to benchmark your own product.
• ROI: Demonstrating the Value of UX (page 36) gives examples and instructions for
determining the impact of your UX work on business goals.

Thirty New Case Studies of Real-Life UX Projects and Metrics


Additionally, this report includes the details of 30 new case studies, from a huge variety of
industries and products. In addition to screenshots and metrics, many of the case studies also
include design artifacts like ideation sketches.

These case studies come from around the world, including:


• North America
• Europe
• Oceania
• South America
• The Middle East
• Asia

While collecting the case studies for this edition, we invited nine teams for in-depth interviews
to discuss their project, challenges, and advice for others. The quotes, stories, and tips from those
teams are included throughout the report.

UX Metrics
In this chapter, we’ll define many of the terms, practices, methodologies, and techniques used in
the case studies.

QUANTITATIVE VS. QUALITATIVE DATA


Even highly educated and experienced professionals are sometimes unfamiliar with the
differences between quantitative and qualitative data. Let’s use a physical example to highlight
the differences between these two types of data.

Consider the photo of the cappuccino below. How can we describe it?

Photo by Tirachard Kumtanom from Pexels.

We might say that the cappuccino has a leaf pattern in the froth, or that it’s served in a sage-
colored antique cup. We might quote what the person receiving the cappuccino says about it: “I
love the leaf, but this costs too much.”

Those descriptions of the item and the quote from the consumer are pieces of qualitative data.

In describing the cappuccino, we can also use measurements. Maybe its size is 12 ounces, its temperature is 120 degrees Fahrenheit, and its cost is US$7.35.

Those numbers that describe the item are pieces of quantitative data. Each one is a
measurement, with a number (120) and a unit (degrees Fahrenheit).

A UX metric is a piece of numerical data that tells us about some aspect of the user experience of
a product or service. When conducting quantitative research for UX, we focus on collecting these
UX metrics.

UX metrics often describe specific aspects of the experience, such as:


• How much effort or time is required to complete a task or process
• How difficult a task or process seems to users
• How many users can successfully complete a task or process
• How satisfied users are with a product, service, task, or process
• How frequently users return to a feature, product, or service
• How many users a product or service has
• What percentage of users moves to the next step of a key task or process

WHY QUANTITATIVE DATA HELPS UX TEAMS


Qualitative research is by far the most popular type of research for UX, and for good reason. In a
qualitative study, we focus on collecting descriptions, stories, context, and quotes. Those are all
things that can help us understand the current problems with the experience and how to improve it.

However, quantitative data can be extremely valuable for UX work as well. It can help us understand the scale of a UX problem — in other words, how many people are impacted by it or how severe it is. Quantitative data can help us prioritize the UX issues we want to solve. It can also
provide opportunities for design experimentation, particularly through A/B testing.

But perhaps the most valuable aspect of quantitative data is its ability to impress stakeholders
and to demonstrate the value of UX.

Many stakeholders — particularly those in leadership positions — tend to respond positively to quantitative data. This tendency may have to do with their backgrounds (people in leadership positions often hold business degrees). It may also be due to the nature of their role; these leaders have to make difficult decisions, and so concrete quantitative data may feel more tangible and reliable to them.

In large organizations, many leaders are themselves evaluated in a very quantitative way to
assess the impact of their leadership choices. Nora Fiore, UX Writer at Marketade, said she
felt like the numbers were a “corporate security blanket” for executives. Metrics and data
visualizations are very shareable, compelling elements to send around an organization and, in
particular, to share with superiors.

“I’ve gone into client meetings where I state the goals of a project
in a very qualitative way (‘I want to improve this thing,’ or, ‘Make
this thing easier’), and the executive says, ‘Well, I want to see
the numbers. Tell me, how is this going to track to our annual
goal of 20% fewer call-ins? How are you going to test this?’

It’s funny how just showing a before and after lift can make
a difference to people. I think there’s a psychological
component to it, like a corporate security blanket. That
executive thinks, ‘Ok, now I’ve got something to show my
superior. I can say that this is tracking to our goal.’”

Nora Fiore, UX Writer at Marketade

In this way, UX metrics can be impressive on their own, simply by being a numerical
representation of an improvement. But when those numbers can be connected to a business
KPI (like revenue or cost savings), that can take the metrics a step further by showing how UX
changes can impact the bottom line. Clearly linking UX to business success is an extremely
effective way to argue for more UX work, or a larger UX team. (These calculations are discussed in
the chapter on ROI.)

“We made a minor, minor change to fix a problem we found during qualitative usability testing, but it led to a 12% increase
in conversion rate. And that was absolutely massive for our
company, especially at our size.
So, these wins really help to build up support for UX. People
say, ‘Hey, you know, we should really invest more in this.’ And
as you proceed to do more of those projects, your investment
grows. You just need to keep on adapting to the direction of the
business and try to get into more of those strategy meetings
where they set the direction.
I think quantitative data is easier for the business to
understand. What’s the business language? How do they
speak? It helps to blur the line between business and UX
research. If you have the ability to get quantitative research
done at the beginning, I definitely advise that you do it.”

James Villacci, UX Research Lead at HelloFresh

USING QUANTITATIVE DATA FOR UX


There are two different opportunities for UX teams to use quantitative data during a project. To explain those differences, let’s first define two different types of evaluations: formative and summative.

A formative evaluation of an experience happens during the ideation or prototyping phase of a project. It usually involves gathering information about what works and what doesn’t to help a
team decide what form a design should take.

Qualitative methods are often used for this type of evaluation, but quantitative data can be used
here as well. For example, A/B testing is commonly used as a formative evaluation: two or more
alternative versions of a design are tested on the live site to see which version has the best
impact on desirable UX metrics.

A summative evaluation of an experience often happens at the end of one design project cycle,
before the beginning of the next one. It is often quantitative and usually involves collecting
UX metrics to evaluate the success of the project. In other words, it is used to summarize the
impact of the design changes made during the project. It describes how well a design performs
compared to what it used to be or, sometimes, compared to a competitor.

The case studies presented in this report are primarily summative, although we do include seven
case studies featuring A/B testing.

When quantitative data is used in a summative evaluation to assess the quality of an experience,
that practice is often referred to as UX benchmarking.

UX benchmarking involves evaluating a product or service’s experience by using UX metrics to gauge its relative performance against a meaningful standard. These benchmarks can be compared against an industry average, a competitor, or an earlier version of the study. The majority of the case studies included in this report are examples of UX benchmarking. (For more details on benchmarking and how it works, see Getting Started with UX Benchmarking on page 23.)

METHODS AND METRICS


The most common sources of metrics for UX benchmarking are surveys, quantitative usability
testing, and analytics. This report also contains case studies from other methodologies often
used in formative studies: A/B testing and tree testing. In addition, some case studies also
include metrics gathered through means other than user research, such as customer support
data, app store ratings, or revenue.

Surveys
A quantitative survey simply involves asking a large number of users one or more questions.
Rating scale questions are often used in quantitative surveys. (For example, “How easy or difficult
was this task to complete?” on a scale from 1 to 7.)

Quantitative surveys are sometimes combined with quantitative usability testing (for example, to
gather user ratings for individual tasks) and sometimes used independently (usually distributed
online or via email to respondents).

The number of possible survey metrics is virtually unlimited since you can ask your
respondents anything.

Survey Metric Examples

Satisfaction rating: Respondents choose an option from a scale (usually 1–5 or 1–7) to indicate how satisfied they are with the product, and then those responses are averaged.

Ease-of-use rating: Respondents choose an option from a scale to indicate how easy or difficult a task or product was, and then those responses are averaged.

Questionnaire scores: Standardized sets of questions are used to determine a score to measure something like loyalty (NPS) or general usability (SUS).

Survey Case Studies

Case Study Page

Acumbamail (MailUp Group) 77


Anonymous HR Tool 100
Arizona State University Online 106
HeliNY (ExpandTheRoom) 148
HelloFresh 160
MyAir (ResMed) 181
Shopify 203

Quantitative Usability Testing


In quantitative usability testing, participants perform key tasks in a product while researchers
collect metrics that describe their experience and performance with those tasks. This method
is the best way to obtain clear measurements of usability factors, particularly time-on-task and
success rates. This method is also often combined with surveys and questionnaires to collect self-
reported data in addition to performance data.

Quantitative Usability Testing Metric Examples

Time on task: Participants attempt to complete realistic tasks. Researchers record the amount of time that passes between the start and stop times of the task and then average those times across participants.

Success rate: As participants attempt to perform each task, researchers record whether or not they completed the task successfully and then take the proportion of successful participants out of all participants who attempted the task.

Ease-of-use rating (post-task question): After each task, the participants choose an option from a scale to indicate how easy or difficult the task was, and then those responses are averaged.
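As an illustration of the metrics above, here is a small Python sketch (using made-up participant records) of how success rate and average time on task are usually derived from raw task data.

from statistics import mean

# One (task_completed, time_on_task_seconds) record per participant; hypothetical data.
# A real quantitative study would include many more participants per task.
results = [(True, 42), (True, 35), (False, 90), (True, 51), (False, 77), (True, 38)]

success_rate = sum(1 for completed, _ in results if completed) / len(results)
average_time = mean(seconds for _, seconds in results)   # some teams average only successful attempts

print(f"Success rate: {success_rate:.0%}")                # 67%
print(f"Average time on task: {average_time:.0f} seconds")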

Quantitative usability testing is often perceived as the most expensive and time-consuming
method for gathering UX metrics. It requires recruiting and testing a large number of participants.
(We often conduct these tests with at least 39 participants, although some experts give
recommendations of between 20–50 participants.) Beyond the amount of resources required
for such a large study, some teams struggle to find that many participants in the first place —
particularly if their user population is very rare or specialized.

Quantitative Usability Testing Case Studies

Case Study Page

Acumbamail (MailUp Group) 77


AIR MILES Reward Program (LoyaltyOne) 80
Alchemy Cloud 87
Anonymous Car Insurance Company 98
HelloFresh 160
Shopify 203
Syneto Central 215

Analytics
Tools like Google Analytics, Adobe Analytics, or Pendo are used to gather analytics data — data
that describes what people are doing with the product in real life.

Analytics tools are good at capturing what people do — for example, where they click/tap, how
far down a page they scroll, how long they stay in an app, how often they return to a website. But
analytics data often lacks context; in other words, it isn’t a method that can usually tell you why
someone tapped on that portion of the screen and what they expected to happen.

Analytics Metric Examples

Conversions: A count of the number of times a user completes a goal during a session; these are usually unique to the product and context (for example, making a purchase or submitting a lead form).

Conversion rate: The percentage of users who convert (perform the desired action).

Returning users: People who come back to the product after their first interaction with it.

Churn rate: The rate at which people leave a group (for example, customers cancelling a service).

Bounce rate: The percentage of sessions with a single pageview; in other words, the percentage of people who end their session on the same page they arrived on.

Revenue per user: Total revenue divided by the number of users to calculate the average amount of revenue generated per person.

Compared to quantitative usability testing, analytics is very uncontrolled. Unlike usability testing,
which is conducted using tasks given to the participant, analytics collects data about what
happens in the real world. We have no way to control other variables that might influence user
behavior, so it is sometimes difficult to attribute those metric changes to design changes.

For example, imagine that you’re working on an ecommerce site. You launch a new version of your
site just as your competitor unveils a huge sale. Your conversion rate might go down sharply, but
is that a result of your design change or your competitor’s sale or both? With analytics data alone,
you may not be able to tell.

The case studies for the 5th edition of this report were collected through the spring and summer of 2020 — coincidentally the same time that the coronavirus known as COVID-19 emerged from
China and spread throughout the world. The highly infectious disease caused massive global
quarantines in an effort to slow the spread. This is an extreme example of an uncontrollable
influence. It certainly impacted user preferences and behaviors in almost every industry and
context, and almost certainly influenced the analytics metrics collected in this report.

Despite the substantially uncontrollable nature of analytics data, it was by far the most commonly used
method in these case studies. Roughly two-thirds of the case studies we collected used analytics.

In most cases, the popularity of analytics is explained by the fact that, compared to other methods, it is cheap and fast. Analytics does not require compensating or recruiting any participants. It doesn’t require a facilitator’s time to plan and conduct a study.

As long as an analytics tool is already implemented in the product, it’s actively collecting massive
amounts of data all of the time. The UX team simply has to decide which metrics to look at and to
pull the right reports (though those activities can sometimes be time-consuming).

Analytics Case Studies

Case Study Page

Acumbamail (MailUp Group) 77


Alchemy Cloud 87
Anonymous Car Insurance Company 98
Anonymous Real Estate Company (Marketade) 103
Arizona State University Online 106
Asiacell 115
CrossCountry (McCann Manchester) 130
Deep Sentinel 133
Healio CME 139
Jira (Atlassian) 167
Kasasa 174
Starbucks College Achievement Plan (Arizona State University) 210
The Deal (ExpandTheRoom) 217
Top 10 Online Casinoer 224

A/B Testing
A/B testing (sometimes called split testing) is a form of design experimentation using analytics
metrics as the dependent variables. In an A/B test, one or more design alternatives are shown
to live users on the site (usually without them being aware that they’re participating in an
experiment). An A/B testing tool then tracks each group’s behavior to see whether or not each
design change has an impact on user behaviors and choices. For example, an ecommerce site
might A/B test two alternative product page designs and see that version B is more persuasive
and leads to 12% more add-to-cart actions.

Conversions are usually the metrics tracked in A/B tests, and these tests are often conducted as a formative evaluation to test out different design ideas. As a consequence, A/B testing does not fall into the category of
benchmarking methodologies, even though it does involve collecting UX metrics.
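To illustrate the arithmetic behind a result like the 12% add-to-cart example above, here is a minimal Python sketch with hypothetical visitor and conversion counts. Whether such a difference is reliable rather than random noise is a separate statistical question, discussed under Statistical Significance in the benchmarking chapter.

# Hypothetical counts from an A/B testing tool: visitors and add-to-cart conversions per variant
visitors = {"A": 10_000, "B": 10_000}
add_to_cart = {"A": 800, "B": 896}

rate_a = add_to_cart["A"] / visitors["A"]              # 8.0% add-to-cart rate for the control
rate_b = add_to_cart["B"] / visitors["B"]              # ~9.0% for the variant
relative_lift = (rate_b - rate_a) / rate_a             # ~0.12, i.e., 12% more add-to-cart actions

print(f"Variant A: {rate_a:.1%}   Variant B: {rate_b:.1%}   Lift: {relative_lift:.0%}")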

A/B Testing Case Studies

Case Study Page

Anonymous American Bank 96


Anonymous Real Estate Company (Marketade) 103
Oakley (Luxottica) 185
PetSmart Charities (Marketade) 188
Ray-Ban 199
User Interviews 230
WordFinder (LoveToKnow Media) 233

Tree Testing
Tree testing is used to evaluate or compare information architectures. In a tree test, participants
are shown only the labels and structure of a hierarchy. They are given tasks and asked to try to
find the area of the hierarchy where they would expect to be able to complete the task.

Other Sources of UX Metrics


UX metrics often come from user research, but they can be found outside of research as well.

Customer support data are a useful source of UX metrics, particularly when they are numerical
representations of the struggles that customers have with the product. Teams will often look at
the number of help tickets for a specific task before and after a redesign.

Customer Support Case Studies

Case Study Page

Alchemy Cloud 87
Anonymous American Bank 96
Asiacell 115

Marketing departments can also be a source of metrics for estimating UX impact, as long as
the metrics of interest can be impacted by UX activities. For example, many marketing teams
calculate and track customer lifetime value: the average revenue collected over a long-term
relationship with a customer. When UX activities improve the quality of a product or service,
customers may stick around longer and spend more money, thus increasing the customer lifetime
value over time.
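As a rough, hypothetical sketch of the kind of calculation a marketing team might use, customer lifetime value is often simplified to average revenue per customer per period multiplied by the expected number of periods a customer stays, which is itself commonly estimated as one divided by the churn rate. The numbers below are invented for illustration.

# Hypothetical inputs for a simplified customer-lifetime-value (CLV) estimate
average_monthly_revenue_per_customer = 45.00
monthly_churn_rate = 0.04                                  # 4% of customers leave each month

expected_lifetime_months = 1 / monthly_churn_rate          # 25 months on average
clv = average_monthly_revenue_per_customer * expected_lifetime_months

print(f"Estimated customer lifetime value: ${clv:,.2f}")   # $1,125.00
# If UX improvements helped cut monthly churn to 3%, the same calculation would give $1,500.00,
# which is the kind of before/after comparison this paragraph describes.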

For mobile apps, app store ratings are a free source of satisfaction data. Major app stores allow
filtering of app store ratings by version so teams can easily see whether new versions have a
positive impact on user perceptions. These ratings are also extremely important, as higher ratings
may lead to more downloads and user acquisitions.

App Store Ratings Case Studies

Case Study Page

Asiacell 115
Deep Sentinel 133

COMBINING QUANTITATIVE AND QUALITATIVE DATA


While quantitative data can be useful for UX, it should not — and cannot — replace qualitative
data. Many of the case studies in this report are showcases of the potential for combining the two
types of research.

Often quantitative and qualitative data can be used in a cyclical way to uncover, understand,
assess, and fix UX problems.

For example, Marketade, a UX research agency, worked on the website for PetSmart Charities, a
nonprofit. They started with a qualitative usability test where they found problems around a key
task, signing up for the nonprofit’s newsletter.

To better understand how many people were being impacted by this problem, the Marketade
team turned to Google Analytics data. They found that 73% of users who began the process of
signing up for a newsletter abandoned the task before completing the process. This was a major
lost opportunity for engaging people, and it was impacting a huge number of their users.

PetSmart Charities case study: Page 188

In this example, the Marketade team started by finding a problem in qualitative research, and
then turned to quantitative data to find out how many people were impacted by the
problem. This is a common way that quantitative and qualitative data can be used together.

This can also happen in the opposite direction: sometimes teams discover a problem by using
their quantitative data (a large number of people failing a task, for example) and then turn to
qualitative research to help them understand why the problem is occurring, and how they can
correct it.

“A lot of times, we find it’s a combination of quantitative behavioral data but also a diagnosis — why are people falling off? We try to get quantitative in with qualitative side-by-side. And we try to apply quantitative in a way that’s actionable and coupled with qualitative data. They’re complementary.”

Emily Williams, UX Researcher at Marketade


In some cases, teams decide to collect quantitative data purely because it is requested by
stakeholders or clients.

“I once did a project for a client that included both qualitative and quantitative research. The quantitative portion was done
because the client requested ‘hard data.’
In the end, what we discussed the most, and what the client
seemed to find the most value in, were the qualitative interviews.
We didn’t actually learn much from the quantitative portion.
But the value came from being able to say, ‘Oh, X amount of
users did Y,’ and the fact that the data could be put into charts
and graphs — that seemed to really resonate with the client.”

Nora Fiore, UX Writer at Marketade


Getting Started with UX Benchmarking


Benchmarking allows us to assess our impact and improvement. It’s helpful for reflecting on our
process and design choices. There’s a lot of value that benchmarking can provide to product or
service teams.

But benchmarking’s real power comes in when UX professionals show those results externally,
for example, to stakeholders or clients. Teams can demonstrate the impact of their UX work in a
concrete, unambiguous way.

Sometimes, those metrics are used to calculate return on investment (ROI) — an estimate used to
show stakeholders and clients how much they get in return for what they pay.

“To get [an anonymous large American bank] to start doing more
quantitative UX research, I basically used the design thinking
process to study the problem and build a solution.
I wanted to understand: who is suffering because of the lack
of data? Obviously, the customers are suffering. But also the
sprint teams (product managers, developers, designers) and the
business leaders were suffering. So I created a case for each of
those roles.
I interviewed these people and tried to understand why the
quantitative data didn’t exist. It came down to three things:
• Improper tooling
• A talent gap (a lack of expertise and even knowing that
you should be asking these research questions)
• Cultural incentives (teams were rewarded based on
shipping instead of outcomes)
I built a strategy to address all three of those problems. The core
value propositions I communicated were:
• We’ll generate business value (new revenue or cost-
savings prevention).
• We’ll improve customer satisfaction if we can improve
task success.
• We’ll win quickly and fail gracefully.
• We’ll measure team talent in the number of quantitative
tests we launch.

After that, I was given a team of resources outside my own sprint team to build this: a tech lead, data scientists, and developers.”

Anonymous Senior Innovation Product Manager

Anonymous American Bank case study: Page 96

“Performance is incredibly important to our merchants and their customers. Being an entrepreneur is already one of the hardest
jobs in the world, and slow loading times and poor workflows
make that even harder. So we’ve baked that need for better
performance into our studies.
We need to know: How are our users completing their tasks on
Shopify? Are they doing them faster? Are they getting stuck?
How is this changing over time? And, just as important, for all
the numbers we collect, we need to know why. It’s a beautiful
mix of quantitative and qualitative research.”

Funbi Makinde, UX Researcher at Shopify

BENCHMARKING STEPS
The first thing you’ll need to do is set up your benchmarking practice. You can do this in three steps:
1. Choose what to measure
2. Choose how to measure
3. Collect the first measurement

Benchmarking is a practice — ideally, once a team has decided what metrics they want to track,
and how, they will continue to collect those measurements over time. Some teams gather UX
metrics after each major iteration of the product or on an annual or semi-annual basis.

The remaining steps in the process (steps 4–7) can be repeated to continue to gather metrics
about the experience over time.
4. Redesign the product
5. Collect another measurement
6. Interpret findings
7. Calculate ROI (optional)

Step 1: Choose What to Measure

Consider these examples from some of the case studies described in the report.

Deep Sentinel (page 133): Home security system app. Task: Install your new security system.

Philip Morris International (page 194): Internal HR tool. Task: Find guidelines about employee referrals.

Arizona State University (page 106): Online degree description pages. Task: Request more information about a degree.

HelloFresh (page 160): Meal kit delivery app. Task: Find the recipes included in a recent delivery.

Once you know what tasks and features are most important in the product, decide what you want
to measure about them. We can use quantitative data to measure lots of aspects of the user
experience, for example:
• How much do users like our product or brand?
• How quickly or efficiently can users complete tasks with our product?
• How often do users return to and use our product?
• How easily can users find what they’re looking for within the product?
• How much or how little do users engage with the product overall or specific features?

Google’s HEART framework provides one way to structure and conceptualize different UX-related
metrics. We use an adapted form of the HEART framework, with Task effectiveness and efficiency
instead of Task success.

Happiness: Measures of user attitudes or perceptions. Example metrics: satisfaction rating, ease-of-use rating, net promoter score.

Engagement: Level of user involvement. Example metrics: average time on task, feature usage, conversion rate.

Adoption: Initial uptake of a product, service, or feature. Example metrics: new accounts/visitors, sales, conversion rate.

Retention: How existing users return and remain active in the product. Example metrics: returning users, churn, renewal rate.

Task effectiveness and efficiency: Efficiency, effectiveness, and errors. Example metrics: error count, success rate, time on task.

Determining the most important metrics to measure may require sitting down with your stakeholders to get alignment on goals. Try to understand your stakeholders’ priorities, and make sure you choose a few UX metrics that could be related to business KPIs.

“I’ve noticed that some clients are not very good at establishing
meaningful metrics themselves. So often, they’re looking at
surface-level metrics like page views, the stuff that doesn’t feel
like it’s deep enough to really show a UX impact. They’re focused
on more traditional marketing metrics.
We take time with our clients and ask, ‘What are your goals? What
does success look like?’ Often, they’ll give really vague non-metric
answers, so we’ll try to help them make it measurable.”

Kerrin McLaughlin, Experience Designer and Researcher at ExpandTheRoom


Step 2: Choose How to Measure


There are three user research methods that tend to work well for UX benchmarking:
• Surveys
• Analytics
• Quantitative usability testing

However, as mentioned in the Method and Metrics section (page 14), benchmarking metrics can
also come from other places, such as tree testing, customer service, or marketing departments.

In many cases, the metrics you choose will dictate which methods you should use. For example, if
you decide that user satisfaction ratings will be an important part of your benchmarking practice,
you’ll have to use surveys — there’s no other way to obtain that metric.

“We did a complete overhaul of our website, so we wanted some quantitative measures that we could use to compare the old site
to the new. We felt like it was important to include things like
time on task and satisfaction ratings, so that’s why we chose
quantitative usability testing instead of analytics.”

Tara Bassili, AD of UX Research at LoyaltyOne

AIR MILES case study: Page 80

Ideally, you’ll pair a survey (to get self-reported metrics) with a behavioral, observational method
(quantitative usability testing or analytics) to get a holistic view of the user experience.

Example: HelloFresh

HelloFresh’s UX team generated a group of important core tasks for their meal kit delivery app. To
start their UX benchmarking practice, they decided to use quantitative usability testing combined
with surveys to collect the following metrics for each key task:
• Time on task
• Success rate
• Subjective success rate
• SUS score
• Ease-of-use rating
• Confidence rating

Step 3: Collect the First Measurement


The first measurement will form your baseline — an assessment of where your product’s
experience currently sits, which you can compare against later.

As you gather your first set of measurements, consider external factors that may affect your
data. For example, imagine you’re benchmarking an ecommerce website using analytics to collect
sales metrics. If your main competitor starts a big sale right as you implement your new design,
your sales could plummet — but it might not be the fault of your design.

One measurement of your site is not likely to be meaningful by itself. Even if you’ve just started
your benchmarking program and you don’t have prior data to compare to, you can still make
comparisons against competitors, an industry benchmark, or a stakeholder-determined goal.

Example: HelloFresh

HelloFresh’s UX team could compare against the following examples.


• Competitor: The team could conduct a round of quantitative usability testing on one or
two of their major competitors’ apps (such as BlueApron)
• Industry benchmark: If the team knew that the industry average ease-of-use score for
finding recipes in meal kit apps is 4.5 out of 5, for example, they could compare their own
average rating against that number.
• Stakeholder-determined goal: The team might decide they want the average time on
task for finding recipes in the app to be less than 20 seconds. They could compare the
time on task from their study (14 seconds) against that number.

Step 4: Redesign the Product


This step is enormous and beyond the scope of this particular report. The work performed during
this stage — the design changes — is what will be evaluated in the metrics.

Step 5: Collect Another Measurement


After your redesign is launched, measure your design again. Users often hate change, so if your
redesign was substantial, give them a bit of time to adapt to the new version before measuring
it. The amount of time varies depending on how frequently users access your product. For
products accessed daily, perhaps 2–3 weeks is enough time. For a product that users access once
or twice a week, one month or more might be necessary.

Step 6: Interpret Findings


In general, interpreting your metrics is highly contextual to your product and the metrics you’ve
chosen to collect. For example, time on task for an expense-reporting app is different than time
on task for a mobile game. In an expense-reporting app, users want to get their tasks done
as quickly as possible, so a decreased time-on-task is desirable. However, for a mobile game,
designers want people to enjoy the game and choose to spend more time playing — in that
scenario, the design team hopes for an increase in the amount of time users spend in the app.

Confounding Variables

Ideally, benchmarking studies should be clean, controlled experiments where the only thing
that changes (the independent variable) is the design. When that happens, you can be quite
confident that any corresponding shifts in the UX metrics (the dependent variables) are due to
your design decisions.

Unfortunately, it isn’t always possible for us to keep our benchmarking studies clean and
controlled. It may actually be impossible, depending on your methodology and study setup.
Analytics, in particular, can be messy because you’re collecting your data based on what happens
in the real world. For example, are your analytics metric changes due to your design decisions,
or are they due to your competitor going out of business around the same time? Sometimes it’s
difficult or impossible to tell.

The best thing to do is to be aware of these external factors (confounding variables) that might
mess up your experiment. Try to avoid them when planning your study. However, it isn’t always
possible to avoid those external factors. For example, the 2020 COVID-19 global pandemic hit right
as we began collecting case studies for this report. A few case-study respondents who had used
analytics metrics mentioned it in their submission as a possible (and likely) confounding variable.

“We started this test on March 18th, right when the COVID
pandemic hit Europe and, soon after, the United States. One of
our challenges was collecting the data while widespread usage
of the internet was changing in big ways.”

Ana Victoria del Pino Pérez, Love to Know Media

WordFinder case study: Page 233

If you can’t avoid them, make sure you consider any potential confounding variables when
drawing conclusions from your data. You’ll also need to make sure you include those factors in
your reporting. You may worry that mentioning those confounding variables will undermine your
credibility, but the opposite is true — your audience will feel more confident that your results are
realistic, not a sales pitch.

“For one client, we made a couple of copy changes and ran a few
A/B tests. But we had to factor in the seasonality, particularly
because we ran it over the Thanksgiving break.
We had to be very transparent with our client and say, ‘We may
not be able to take full credit for this. There were a lot of factors
involved.’ And I think that transparency actually worked
better than I was anticipating.
They said, ‘This is great, based on these numbers, we do
suspect that it’s better. Thank you for being honest, we’re
happy with this lift.’”

Nora Fiore, UX Writer at Marketade

Statistical Significance

You shouldn’t take your metrics at face value since the sample used for your study is likely much
smaller than the entire population of your users. For that reason, you will need to use statistical
methods to see whether any visible differences in your data are real or due to random noise. This
usually involves calculating statistical significance for each pair of metrics you’re comparing.1 If
a difference is statistically significant, it’s reliable from a statistics standpoint — in other words, it
probably isn’t due to random chance.

If the new design is truly different than the old, you stand a better chance of detecting that
difference and having a statistically significant result if you conduct your quantitative study with
the correct number of participants. This is one reason why it’s very important to meet minimum
sample size guidelines. We had to reject several case study submissions because they did not
use large enough samples. For example, one submission involved a “quantitative” usability test
conducted with only 5 participants. We often recommend using at least 20–40 participants for
quantitative usability studies.2
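As one common approach (not necessarily the method used in these case studies), a two-proportion z-test can indicate whether a change in a success rate is likely to be real rather than noise. The Python sketch below implements it with hypothetical counts; for real studies, a dedicated statistics package or the resources in the footnotes are safer choices.

from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(successes_old, n_old, successes_new, n_new):
    """Return (z statistic, two-sided p-value) for the difference between two proportions."""
    p_old = successes_old / n_old
    p_new = successes_new / n_new
    pooled = (successes_old + successes_new) / (n_old + n_new)
    standard_error = sqrt(pooled * (1 - pooled) * (1 / n_old + 1 / n_new))
    z = (p_new - p_old) / standard_error
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))            # two-sided test
    return z, p_value

# Hypothetical example: 10 of 40 participants succeeded with the old design,
# 32 of 40 with the redesign.
z, p = two_proportion_z_test(10, 40, 32, 40)
print(f"z = {z:.2f}, p = {p:.4f}")   # a p-value below 0.05 suggests the difference is reliable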

Example: HelloFresh

Consider the following results from HelloFresh’s quantitative usability testing for the task of
finding a recent order in the app.

1. For help calculating statistical significance in your UX research projects, we recommend our full-day course, How to Interpret UX Numbers. We also recommend two books: Measuring the User Experience by Tom Tullis and Bill Albert; and Quantifying the User Experience by Jeff Sauro and James Lewis.
2. For guidance on sample size guidelines and explanations of their origins, we recommend Jeff Sauro’s website, measuringu.com.

Average time on task (seconds): initial design 28, redesign 14

Average success rate: initial design 25%, redesign 100%

Average SUS rating (0–100 scale): initial design 75, redesign 90

In summary, time on task decreased, success rate increased, and SUS rating increased. If these
differences were statistically significant, the HelloFresh team could be very confident that the new
design made users faster, happier, and more successful with this task.

Reporting Benchmarking Results

One of the biggest mistakes that people make when reporting quantitative data is that they just
throw the data at their audience and expect them to draw conclusions.

Always center the data within a story. Tell the audience or readers what you believe the main
takeaway is and use the data to support that argument. Cite your quantitative data sparingly and
only when they’re directly relevant. Tailor your tables and charts to support the story.

“The essence of this is narrative. You can’t just spit out numbers;
you have to tell a good story about it.
Be very disciplined in the story you’re telling. What is the
messaging you want to give around this? How did this start? Why
did we do it? What have we been tracking? How can you see the
message in multiple ways?”

Nora Fiore, UX Writer at Marketade


“I think the hard part is turning quantitative data into something actionable. That’s something we’re working on right now.
We’re currently creating an interactive data visualization, which
will allow us to show each athenahealth stakeholder more of what
matters to them. They’ll be able to click in and see the portion of
the findings that really applies to them so they can drill down to
the point where they’re seeing more actionable research.”

Aaron Powers, Director of Design Research at athenahealth

Be warned, it’s possible that the outcome of your benchmarking study won’t be exactly what
you expected or wanted. It’s certainly possible to find that your design is worse than your
competitor, or that the new design is the same or worse than the old one. We even have a few
case studies in this report with those outcomes.

While that may not feel like good news, information is always valuable. Try to see the upside. If
you realize you’re worse than your competitors, then you have ammunition to make an argument
to your stakeholders that you need to improve. If you realize your new design is worse than the old,
then you’ve caught that mistake before it could cause more damage. Maybe you can roll back to the
previous design. This should be a moment where you realize that, somewhere along the line, your
understanding of your users may have been flawed. It’s time for more qualitative research.

“Some design changes will have a big effect on the metrics you’re
tracking, and some won’t. A quantitative study helps us separate
those concerns and purely judge aspects like performance and
whether it increases or not.”

Funbi Makinde, UX Researcher at Shopify

Shopify case study: Page 203


“One team introduced a feature, and they actually got a lower average score than before. So, they got back together and said,
‘Wow, this didn’t work. We messed up.’ So, they read all the
comments in surveys, and they interviewed people.
In the next release, they brought that number back up to where it
had been before the design change. Then, in the third release, they
brought that number up even further. That’s a real success story.”

Aaron Powers, Director of Design Research at athenahealth

But if your interpretation is positive, that’s a great feeling. You have quantitative data that
suggests that you and your team are making the right choices. Share those results throughout
your organization and with your stakeholders. In some cases, you might want to take that positive
result a step further and calculate return on investment (ROI).

Step 7: Calculate ROI


Benchmarking allows you to track your success and demonstrate the value of your work. One way
to demonstrate the value of UX is to connect the UX metrics to the organization’s goals
and calculate return on investment (ROI). These calculations connect a UX metric to a key
performance indicator (KPI) such as profit, cost, employee productivity, or customer satisfaction.

Calculating ROI is extremely beneficial, though not widely practiced by UX professionals (perhaps
because relating your UX metric to a KPI is often convincing enough). In any case, if you struggle
to prove UX impact, calculating ROI can be persuasive.

The following section outlines how to calculate ROI using your benchmarking data.

ROI: Demonstrating the Value of UX


The immediate benefit of UX work is that the user interface becomes easier and more enjoyable to
use. So, it’s good for humanity, but what’s the benefit to the company that has to fund the work?

As UX professionals, we see inherent value in improving the experiences of products and services. But
many people don’t see it that way. And sometimes, those people make decisions about your funding.

If you need to demonstrate the value of your design efforts, one of the most effective methods is
to calculate your return on investment (ROI). Essentially, you need to show how your design
changes impact the bottom line — revenue, cost savings, or another key performance indicator (KPI).

Often, ROI calculations involve connecting UX metrics to KPIs, but that isn’t always necessary.
Consider who you’ll be presenting these results to — what do they care most about? If you work
for a nonprofit museum that prioritizes outreach to as many community members as possible, a
monetary calculation may not be necessary. Instead, you might see if you can connect your UX
improvements to increases in visits.

WHY BOTHER CALCULATING ROI?


Our UX problems are often respect problems. Better respect for UX is hugely important for getting
more resources and inclusion, and thus ultimately increasing the UX maturity of the organization.
Showing impact is a big part of that, particularly if you can quantify that impact.

There are plenty of online ROI calculators that will do your work for you, but they can help only if
your scenario is exactly the one that the calculator is designed for.

It’s worth learning how to determine your own ROI conversion ratio and performing the
calculation yourself. Once you acquire this skill, you can apply it beyond the more obvious
scenarios and use it to calculate and demonstrate improvement for any project.

HOW TO CALCULATE ROI


There are three steps to calculate ROI. Before you can follow these steps, you’ll need to have
benchmarking data to work with.

Example: Health-Insurance Website

We’re working on the registration process for online accounts for our health-insurance policyholders.
We know from qualitative research that people often struggle to register for their accounts.

Possible UX Metrics for the Registration Task


• Success rate (quantitative usability testing): If we ask participants to try to complete
the task, what proportion of them are successful?
• Completion rate (analytics): If we look at people who begin the registration process on
the site, what proportion of them complete the process?
• Ease-of-use rating (surveys or questionnaires): If we ask people how easy the process
is, how do they rate it?
• Customer-support tickets (customer support): How many people contact support in
order to complete this process?

In our benchmarking study, we might decide to collect several of these metrics, along with others
that describe the experience of the entire site. However, we’ll probably only use one of these UX
metrics in our ROI calculation.

Step 1: Choose a KPI


To start calculating ROI with your benchmarking data, you’ll need to select a KPI to translate your
UX metric into. Ask yourself: what does my organization care about? What are the metrics that
everyone — not just the design team — pays attention to?

Or, more specifically, think about who you’ll be presenting this ROI calculation to. Stakeholders,
executives, clients? What do they care about?

Emily Williams, UX Researcher at Marketade, told us about some of her previous experiences in
other industries and job roles. She started working in UX in the oil industry, and she calculated
the ROI of UX work at two different oil companies.

Despite working on similar projects in similar companies and the same industry, Emily found that
she always had to tailor her ROI calculations to her stakeholders. She recommends studying
your stakeholders in the same way you’d study your users — find out what matters to them and
what they’ll respond to.

“In the first oil company I started at, we did a project where we
were researching the engineers who essentially get the oil out of
the ground.
In that project, we found that they were spending so much time
on surveillance that it was negatively impacting production
goals. That was a statistic we could use to connect to business
priorities, and so we could tell stakeholders that engineers were
spending 70% on this task, and that was hurting productivity.
Then I went to a different oil company and worked on the same
problem, but that same statistic didn’t resonate with them. So I
had to say, ‘Oh, OK, then what do you care about?’
And we found that the idea of data quality was much more
interesting to them. We ended up finding that when a certain
product caused estimates to be 10% off, that meant the
company was under- or overproducing oil, which was costing
millions of dollars.
The metrics that people latch onto are just as different as
the people that we work with.
What is the value of what we do? Do research on your
stakeholders. It’s helpful to have a big bag of tricks that you can
pull from; some things are going to stick with some people and
not others.
That’s the fun of it: How can I tell this story in a way that will
resonate with people?”

Emily Williams, UX Researcher at Marketade

Key performance indicators depend on the specifics and the culture of the organization, but most
KPIs come down to money (even in nonprofits).

Examples of KPIs:
• Profit
• Cost
• Customer Lifetime Value (CLV)
• Employee-Turnover Rate (ETR)
• Employee productivity
• Donors & donor growth

In most cases, you’ll be trying to turn your UX metric into a monetary amount. That isn’t always
necessary, however. Remember, ROI calculation is about showing how design impacts what
the company cares about. Sometimes that might mean, for example, calculating the amount of
time that is saved with a more efficient design.

“One of the things we keep coming back to is, how do we tie UX measurements to strategic initiatives for the company? What
does our company care about, and how can we tie UX to it?
This year, we’re talking a lot about merchant needs and goals. So
how are they doing with their tasks? Are they better and faster?
How is this changing over time? How is the UX team contributing
to business KPIs?”

Funbi Makinde, UX Researcher at Shopify

Of course, in some organizations with high UX maturity, the KPI and the UX metric may be the
same — maybe everyone already cares a lot about reducing time on task. In those cases, ROI
calculations may be unnecessary because the work of proving the design’s value is already done!

Step 2: Convert the UX Metric into the KPI


Calculating ROI is basically converting units. You’re taking one unit (for example, the
average number of seconds it takes a user to perform a task) and turning it into another
(monetary cost savings).

So, what would you do if we asked you how many liters are in two gallons? You'd look up the conversion factor (1 US gallon ≈ 3.79 liters) and multiply. Converting a UX metric into a KPI works the same way: once you've determined a credible conversion ratio, the calculation itself is just multiplication.

“We use ROI calculations for directional purposes. So, we know that the dollar value is never going to be 100% precise,
but we can say directionally that one project will generate more
value and cost savings than another.
Most people loved it, but you know who didn’t? The people
working on things that produce no value! You have to
understand what you’re doing and why you’re doing it.
And creating a review feature for financial products on a bank’s
site may not be as valuable as working on a landing page that
gets users to convert.
If you’re the review feature manager, you have to understand
that. Why are you doing this, and how does it fit into a broader
ecosystem? That’s the hardest thing, because everyone wants to
feel like what they’re doing is the most important.”

Anonymous Senior Innovation Product Manager

The critical thing is to be transparent in your reporting. Make sure your audience understands
where your numbers came from. That’ll be useful in setting expectations but also in backing up
your calculation’s credibility.

“I’ve seen people go up to an executive and say, ‘Hey, UX is


important; it’s the number one thing. See this stats analysis? Put
more money into UX.’ That executive will smile and nod but go
and invest elsewhere. Because they’ll see that as being very
self-serving.
Make sure you aren’t seen that way. To get useful and accurate
results, you almost have to become a business expert in order
to make these arguments and have them be understood, useful,
and trusted.”

Aaron Powers, Director of Design Research at athenahealth

You’ll also want to factor in the cost of the project itself. And remember, design improvements
are cumulative. That means that an improved design could give us $300,000 in new revenue this
year, but it’ll also likely provide the same increase the year after that. For that reason, it’s a good
idea to look at ROI projected out over 2–5 years.

Example: Health-Insurance Website

We’ll want to make sure we include all of the important details in our reporting to our
stakeholders. For example:

“We observed a reduction of 21,900 support tickets per year for the registration task. Assuming each support ticket costs us $6, that's a projected savings of $131,400 in one year, or $394,200 over three years.

Since the design project cost about $75,000, our return over three years may be around $319,000.”
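
The arithmetic behind a statement like this is simple enough to script, which also makes it easy to rerun under different assumptions. A minimal sketch using the made-up numbers from this example:

```python
# Minimal sketch of the support-ticket ROI calculation above (made-up numbers).
tickets_avoided_per_year = 21_900
cost_per_ticket = 6        # dollars
project_cost = 75_000      # dollars
years = 3

annual_savings = tickets_avoided_per_year * cost_per_ticket   # 131,400
multi_year_savings = annual_savings * years                   # 394,200
net_return = multi_year_savings - project_cost                # 319,200

print(f"Annual savings: ${annual_savings:,}")
print(f"Savings over {years} years: ${multi_year_savings:,}")
print(f"Net return over {years} years: ${net_return:,}")
```

Changing a single assumption (say, a $9 ticket cost, or a five-year horizon) immediately shows how sensitive the projected return is.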

CALCULATING ROI IN DIFFERENT CONTEXTS


Converting UX improvements to dollars is easy for ecommerce, where doubled sales have an
immediate monetary value. For intranets, productivity gains are also fairly easy to convert into
monetary estimates: simply multiply time saved by the hourly cost of your employees.

Other types of design projects are harder to convert into an exact ROI. What is the value of
increased customer satisfaction? What is the value of more traffic or more use of those features
you want to promote on your website? Those estimates will vary between companies, and thus
the monetary value will also vary.

The return on investment from UX is almost always larger when more people are using the design
because the benefits increase for every user who finds the product easier to use. Similarly,
doubling sales produces a larger absolute gain for an ecommerce site (like Amazon) that had higher sales to begin with.

The estimated productivity gains from redesigning an intranet to improve usability are eight times
larger than the costs for a company with 1,000 employees; 20 times larger for a company with
10,000 employees; and 50 times larger for a company with 100,000 employees.

The return on investment from UX improvements is generated in different ways for various types
of design projects, as discussed below.

Products with Clear Desired Outcomes


Ecommerce sites are the simplest case. The benefits can be measured in terms of increased
sales that result when it’s easier for customers to shop. Conversely, if an ecommerce site ever
launches a redesign with lowered usability, it will likely see sales drop, typically leading to a
decision to roll back the change.

Similarly, many other types of websites have a clearly defined desired outcome, such as applying
to a college or subscribing to a newsletter. When those desired outcomes are tied to revenue,
those ROI calculations can be fairly straightforward as well.

Example: The Deal (ExpandTheRoom)

This financial news site offers subscriptions for premium content, primarily targeting organizations
like private equity firms. Potential subscribers can request a free trial to decide whether or not
they want to pay for a full subscription for their company.

When ExpandTheRoom helped them improve the design of their site, their free trial requests
increased from 76 per three-month period to 187. If we assume that 18% of free trials result in paid subscribers on average and that the average monthly revenue from a subscription is $1,000 (made-up values), we can perform the following calculation.

(187-76) x 18% = 20 new subscriptions

20 x $1,000 = $20,000 per month

This leaves us with a rough estimate of around $20,000 per month of new revenue generated by
new subscribers. Potentially, those new free-trial leads will continue to come in over the months,
leading to even more new subscriptions and an even greater new per-month amount of revenue.

The Deal case study: Page 217

Content Sites
Some forms of content sites, such as newspaper sites, get their value from the sheer number of
users they can attract. For such sites, visitor counts or page views can provide a metric to assess
whether a redesign has done its job. When combined with ad-revenue details, these numbers can
be used to put a dollar amount on the increases from UX projects.

Intranets
Turning to intranets, we again find that hard numbers are easier to come by. The value of
usability for intranet designs comes from increased employee productivity: every time a user can
perform a task faster with the intranet, the company saves the cost of that person’s salary and
overhead for the amount of time that was saved.

Example: Anonymous HR Tool

One large company managed to save its HR team a total of 32 person-days per month (by
reducing the average time required to complete a big, frequent HR task). This is another easy
calculation: simply multiply the time saved by average salary.

Let’s assume that this company’s HR employees make an average of $50,000 per year. If we divide
that by the number of working days per year, we can find an approximation of their daily rate.

$50,000 ÷ 261 = $191 per day

Then we multiply that number by the days saved in the new design.

$191 x 32 = $6,112 per month

Does this mean that the company will literally save $6,112 per month? No, because those
employees are full-time, salaried workers. But it does mean that those employees can now spend
that time doing other tasks, which may be more valuable for the company long term.
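
The same calculation is easy to script so that the assumptions stay visible and adjustable. A minimal sketch using the made-up figures above:

```python
# Minimal sketch of the productivity-savings estimate above (made-up figures).
average_salary = 50_000        # dollars per year
working_days_per_year = 261
days_saved_per_month = 32      # person-days saved by the new design

daily_rate = average_salary // working_days_per_year   # $191, truncated to whole dollars as above
monthly_savings = daily_rate * days_saved_per_month    # $6,112

# For a fuller estimate, replace salary with fully loaded cost (salary plus overhead).
print(f"Approximate daily rate: ${daily_rate:,}")
print(f"Monthly savings: ${monthly_savings:,}")
```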

CONNECTING UX METRICS TO KPIS WITH CORRELATIONS


Many teams (particularly in contexts like enterprise products) struggle to calculate ROI when their
UX metrics describe aspects of the experience which are not directly tied to revenue. In particular,
many UX teams want to know how user perceptions of the experience can influence their actions.

John Nicholson of Marketade explained that he sees a lot of value in trying to make these difficult
connections, even if you aren’t successful in the calculation itself. In pursuit of that calculation,
you’ll learn the goals of the business, and you’ll develop an understanding of how the business
perceives and assesses value.

“There are metrics that are pretty easy to collect — things you can
get straight out of the lab or from a research tool — but those
are the metrics we’re least excited about and that might be least
connected to the business.
The metrics that executives care about are things like revenue,
cost reductions, and retention. But it’s hard to draw a direct line
from design changes to the business.
Even though it isn’t easy to connect the dots, even if it’s very
hard to collect that data, I encourage my team to try anyway.
There are a lot of advantages to us, as researchers, in
pursuing that connection. You break out of silos and meet
whoever owns the business research. You’re forced to better
understand the business and its priorities. You might even help
those people better understand the value of qualitative research
in the process.
You’re just going to become a better UXer in the process, and
you’ll be better able to make the case for the work that we do,
even if you don’t end up with a perfect, tidy metric to present.”

John Nicholson, Principal at Marketade



One potentially fruitful approach to these more complex calculations is to look for correlations in
your data. In other words, look for ways that your UX metrics may be associated or related to KPIs
in some way.
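
As a minimal sketch of what that can look like (the variable names and numbers below are invented, not taken from the athenahealth study that follows), you might test whether a perception metric moves together with a business metric across products, releases, or customer segments:

```python
# Minimal sketch: is a UX perception metric associated with a business KPI?
# The paired values are invented; each pair might represent one product area,
# release, or customer segment.
from scipy import stats

ease_of_use = [3.9, 4.1, 4.4, 3.6, 4.7, 4.2, 3.8, 4.5]          # survey averages
renewal_rate = [0.81, 0.84, 0.90, 0.76, 0.93, 0.86, 0.80, 0.91]  # retention KPI

r, p_value = stats.pearsonr(ease_of_use, renewal_rate)

print(f"Correlation (r): {r:.2f}")   # strength and direction of the association
print(f"p-value: {p_value:.4f}")     # small values suggest the association is unlikely to be chance
```

A correlation alone doesn't prove that improving the UX metric will move the KPI, but a consistent, statistically significant association is often enough to start the conversation with stakeholders.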

Case Study: athenahealth


Athenahealth is an American company that creates enterprise products for the healthcare
industry. For example, they provide products like electronic health records, for doctors to review
while providing care, and applications to bill patients.

Aaron Powers, Director of Design Research at athenahealth, shared the work that his team
performed to tie UX improvements to business health.3 They conducted monthly surveys of their
users, ending up with over 50,000 completed surveys for one of their products in a two-year
period. The survey included questions about ease of use and reliability — two things that their
users mention often and which seem to be closely tied to their satisfaction.

The team hypothesized that ease-of-use and reliability perceptions would drive users’ satisfaction
with the product, which would, in turn, impact their likeliness to recommend it to others.

By combining their survey data with their business metrics, they found a statistically significant,
positive chain of correlations. This allowed them to demonstrate a relationship between user
perceptions and two key revenue-related business metrics: retention and referrals.

3 Aaron Powers published a Medium article on this case study: www.medium.com/athenahealth-design/measuring-the-financial-impact-of-ux-in-two-enterprise-organizations-221f6c9ad9a3

The Magnitude of UX Improvements

ESTIMATING THE MAGNITUDE OF GAINS FROM DESIGN


For the purpose of estimating UX gains, we collected case studies from design projects that could
provide the same metric for two versions of the product. Comparing the “before” number with the
“after” number allows us to compute the percentage by which the experience was improved in the
redesign. These percentages allow us to analyze how much UX design impacts key metrics
across many different design projects.

Several of the case studies provided in this report include more than one metric. This is common
in benchmarking studies — teams usually select a set of relevant metrics to track, rather than a
single metric.

COMPUTING IMPROVEMENT SCORES


For each pair of metrics, we’ve calculated improvement scores — estimates of the relative
magnitude of each design project’s impact on the metric, expressed as a percentage.

In general, the improvement score is the ratio between the two measurements of the metric
(before and after) minus one. Thus, if the before and after measures were identical, then the
ratio would be 1.0 and the improvement would be 0%. However, the exact method of calculating
improvement scores depends on whether your metric is “good” or “bad.”

“Good” metrics: An increase in the value means an improvement in the user experience or a positive outcome for the business; in other words, this is a value that we want to increase. Examples: sales, visitors, satisfaction ratings, success rates.

“Bad” metrics: An increase in the value means a decline in the user experience or a negative outcome for the business; in other words, this is a value that we want to decrease. Examples: errors, customer support tickets, time on task (usually).

Conversion rates are a typical example of a good metric because they are inherently something
you define as good for your business (sales, donations, registrations, etc.). For good metrics, the
ratio is calculated as the after score divided by the before score.

For example, imagine that an ecommerce site recorded a conversion rate of 2% of visitors before
the redesign and increased this to 5% after the redesign. In this case, the ratio would be 5/2 = 2.5,
for an improvement of 150% in the conversion metric.

Time on task is a typical example of a bad metric because slower performance usually indicates
poorer productivity. However, context plays a role in whether or not the metric should be
considered good or bad. Entertainment contexts are an exception to this rule for time on task —
you may want people to spend more time in a mobile game app.

For bad metrics, the ratio is calculated as the before score divided by the after score. If, for
example, a task took 3 minutes to perform with the old design and 2 minutes to perform with the
new design, then the ratio would be 3/2 = 1.5, for a productivity gain of 50%.

A two-minute task time is 50% more productive than a three-minute task time because the faster
design allows users to perform 50% more work in a given amount of time. For example, it’s
possible to perform 30 two-minute tasks in an hour, which is 50% more than the 20 three-minute
tasks that would be the workload performed in the same hour with the slower design.

Metric Type       Improvement Score Formula
“Good” metric     (after / before) – 1
“Bad” metric      (before / after) – 1
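
The two formulas are easy to wrap in a small helper. The sketch below (the function is ours, purely for illustration) reproduces the worked examples in this section: a conversion rate rising from 2% to 5%, and a task time dropping from three minutes to two.

```python
def improvement_score(before, after, higher_is_better):
    """Relative improvement as a percentage.

    "Good" metrics (higher_is_better=True), e.g. conversion rate: (after / before) - 1
    "Bad" metrics (higher_is_better=False), e.g. time on task:    (before / after) - 1
    """
    ratio = after / before if higher_is_better else before / after
    return (ratio - 1) * 100

# Conversion rate rose from 2% to 5% of visitors (a "good" metric):
print(improvement_score(before=2, after=5, higher_is_better=True))   # 150.0

# Time on task dropped from 3 minutes to 2 minutes (a "bad" metric):
print(improvement_score(before=3, after=2, higher_is_better=False))  # 50.0
```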

EXPECTED UX IMPROVEMENTS
From the 30 case studies we accepted for the 5th edition, we collected a total of 80 pairs of
metrics. We included 76 of these pairs in our quantitative analysis. (The four metric pairs excluded
were outliers, with improvement scores of over 2,000%.)

Averaged across all organizations that reported metrics for our 5th edition, the average UX
improvement score was 75%, with a 95% confidence interval from 8% to 104%. (This analysis4
excludes four outliers which had improvement scores of more than 2,000%.) In other words,
across all these organizations, the average redesign was 75% better than the original for a
variety of metrics.

Does this mean that you should expect around a 75% improvement in your metrics when redesigning your product? Possibly, but as the wide confidence interval shows, there is an immense amount of variability in the data. In our data, half of the values were between 13% and 157% (the interquartile range5).

4 Some organizations provided multiple case studies (2–3) or multiple metrics per case study. We first took the average improvement score per organization before calculating the overall average.

This doesn’t mean that we can expect 50% of all design projects will have an improvement score
within that range. It’s possible your own impact could be a 500% or 5,000% improvement, but
our data suggests that an improvement score that high is unusual.
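
A minimal sketch of the per-organization averaging described in footnote 4 (the organizations and scores below are invented):

```python
# Minimal sketch of the per-organization averaging in footnote 4 (invented data):
# average within each organization first, then across organizations.
import pandas as pd

scores = pd.DataFrame({
    "organization": ["A", "A", "A", "B", "C", "C"],
    "improvement_pct": [40, 120, 65, 15, 210, 95],
})

per_org = scores.groupby("organization")["improvement_pct"].mean()
overall = per_org.mean()

print(per_org)
print(f"Overall average improvement: {overall:.0f}%")
```

Averaging within each organization first keeps an organization that submitted many metric pairs from dominating the overall average.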

Why Does UX Impact Vary?


There’s a substantial difference between (for example) an 8% increase in a conversion rate, or
a 104% increase. We weren’t surprised to see such a wide range because the outcome of any
benchmarked design project depends heavily on the following factors.

The Existing Quality of the Experience

A product with many big UX problems has lots of opportunities for big improvements. A product
that already has an excellent experience will have fewer and smaller opportunities for big change.

The Expertise and Talent of the Team

The better the UX team, the more likely they are to make the right design choices (based on
research and experience). However, even an excellent UX professional won’t get every design
choice right every time (again, this is the challenging and iterative nature of our work).

The Quantity and Quality of the Changes

A large project with many changes may be more likely to have a big impact on metrics than a
small one. However, this is not always a hard rule — some of the case studies collected in this
report show big metric impacts as a result of small, smart design changes. Consistent with this, we did not find any statistically significant differences in improvement scores based on project size.

The Precision and Sensitivity of the Methods and Metrics

Some research methods are more sensitive than others. Part of this difference comes from the
differing scale of sample size for each method.

For example, A/B testing on a main page of the site can capture thousands of data points easily,
enabling researchers to detect even a slight change of a few percentage points.

In contrast, a quantitative usability test with 40 participants may require a more pronounced difference between two designs before it reaches statistical significance. For example, the Shopify team's quantitative study of a billing-page revision found qualitative improvements but no statistically significant differences in its metrics (see page 203).
5 The interquartile range is the range between the first quartile and the third quartile in your data set. The lower bound of this range (the first quartile) is greater than a quarter of your values, and the upper bound (the third quartile) is greater than three-quarters of your dataset. In other words, half of the data points fall in the interquartile range. Outliers are included in quartile calculations because quartiles are not influenced by extreme values in the way a mean is.

Negative Impact on UX Metrics


This report does include a few case studies where the metrics stayed the same in the new
version and some where the metrics actually got worse. We applaud these organizations for
sharing real-life case studies — not just the perfect-world case studies where everything went
according to plan and resulted in stunning metric increases.

Those case studies with negative or neutral outcomes appear on the left of the chart. In nine
metric pairs, the metrics moved in an undesirable direction, causing a negative improvement
score. (For example, one company tried and failed to increase lead-form submissions, resulting in
a 65% decrease in those leads.)

A redesign can result in a diminished user experience or no change. Not all new design ideas
are good, even if they come from user testing and other research. Even the most experienced
and talented UX professionals can implement designs that don’t end up working as expected.
This is part of what makes UX work challenging and interesting, and this is why it must be
iterative in nature.

As long as you’re performing research and collecting data, you’ll be able to catch design
mistakes. If you know your design changes are having an unintended negative effect, you can
prevent them from being implemented or learn how to fix them if they’re already in place. The
earlier those mistakes are caught, the less damage they can do.

Improvements by Metric Category


We classified each metric pair into a metric category, depending on which aspect of the user
experience the metric attempted to capture:
• Adoption/Retention
• Effectiveness
• Efficiency
• Engagement/Usage
• Revenue
• Satisfaction/Perception

Metric Category Definitions

Adoption/Retention: Acquiring new (or retaining old) users, clients, or customers; includes traffic and visitors. Example metrics: leads, application submissions, newsletter signups, subscriptions, renewed subscriptions, new visitors, returning visitors, visitors from organic search.

Effectiveness: Whether users are able to complete their tasks with the product. Example metrics: success rate, completion rate.

Efficiency: How quickly and smoothly users can perform their tasks. Example metrics: time on task, average session duration, tree-test score, completion time, error counts.

Engagement/Usage: How much, deeply, or frequently users use the product or specific features of the product. Example metrics: sessions per user, abandonment, bounce rate, page value, feature usage, active users.

Revenue: How much money is generated from the product. Example metrics: revenue per month, revenue per session, average order value.

Satisfaction/Perception: How users perceive the experience and how satisfied they are. Example metrics: ease-of-use score, CSAT, NPS, app-store rating, subjective success.

Average Improvements by Metric Category

Each metric category had slightly different average improvements, but the confidence intervals
were quite wide. We hypothesize that the large ranges are due in part to small sample sizes (for
example, we had only six metric pairs in the Effectiveness category). Larger samples might have
yielded narrower confidence intervals, but maybe not, due to the highly variable nature of this
data, discussed above (page 50).

(Due to too few metric pairs in the category of Revenue, we decided to exclude that category from
this analysis.)

HOW HAVE UX METRIC IMPROVEMENTS CHANGED OVER TIME?


Over the past 14 years, we’ve collected case studies for new editions of this report. This collection
of metric case studies allows us to look back over a variety of design projects since 2006 and
see how the impact of UX design has changed.

About the Case Studies and Metrics

Edition Year Case Studies Collected


1st 2006 41
2nd 2007 0
3rd 2008 24
4th 2012 4
5th 2020 30

The case study profiles from the 1st edition have been removed from this report, but several of the
most interesting ones from the 3rd and 4th editions are included at the end of this report (page
237). However, we still have the metrics from those 72 earlier edition case studies.

Average UX Improvements Have Been Shrinking Over Time


Average improvement scores6 have decreased substantially since 2006–2008: from 247% to
75% (a 69% decrease). This difference is statistically significant (p = 0.017) — we can be quite
confident that average improvement scores are lower now than they were 12–14 years ago.

6 For both sets of data (2006–2008 and 2020), we first took the average improvement score per organization and then averaged those scores to get the overall average for each dataset.
7 The 2006–2008 data set was not normally distributed, so this p-value was calculated using a nonparametric test.

“In these early years, design was truly abominable — think splash
screens, search that couldn’t find anything, bloated graphics
everywhere. The only good thing about these early designs was
that they were so bad that it was easy for usability people to be
heroes: just run the smallest study and you would inevitably find
several immense opportunities for improvement.
Finding and fixing UX problems during the dot com bubble was
like shooting fish in a barrel — every design was so bad!”

Jakob Nielsen, Principal of Nielsen Norman Group

As an analogy, imagine that UX problems are like gold — valuable opportunities that, if you can find
and fix them, can result in profit. In the early 2000s, our industry was like an untouched stream full
of gold where nobody had ever looked before. You could just reach your hand out and grab a gold
nugget! Over the past two decades, those easy-to-find big nuggets have mostly been harvested.

We’ve addressed many of the biggest problems. In some cases, those were individual fixes for
specific problems. But as an industry, our collective knowledge has grown as well — we now
have a rich set of best practices and design patterns. Each individual designer can build on the
existing work of the designers that have gone before.

So, are all the world’s UX problems now fixed, and are all designs perfect? Certainly not. There’s
still substantial room for improvement for the majority of experiences. (Lucky for us, that means
we have great job security.) This finding simply shows that we’ve done a good job improving
experiences overall and addressing the most glaring problems.

Does this mean that UX is less important or impactful today? Also no. We believe that even though the magnitude of these design changes has decreased, they are no less important.
Because experiences overall are getting better, user expectations have gotten much,
much higher. If you put a website considered adequate in 2006 in front of a user today, she
would refuse to use it.

We believe that those expectations will continue to rise in lockstep with the average quality of
experiences. (Again, this is good for UX professionals’ job security!) As a consequence, even small
improvements in the UX may be worth an organization’s time. This is even more true when you
consider that your competitors are likely improving their experiences as well (a UX arms race).

Case Studies
About the Case Studies 68
By Edition. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
By Metric Category. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
By Industry. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74

Case Studies — 5th Edition 77


Acumbamail (MailUp Group). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
AIR MILES Reward Program (LoyaltyOne). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
Alchemy Cloud. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
American Kennel Club (ExpandTheRoom). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
Anonymous American Bank. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
Anonymous Car Insurance Company . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
Anonymous HR Tool. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100
Anonymous Real Estate Company (Marketade) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
Arizona State University Online. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106
Asiacell. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
Baileigh Industrial (Marketade). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126
CrossCountry (McCann Manchester). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130
Deep Sentinel. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133
Healio CME . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139
HeliNY (ExpandTheRoom). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 148
HelloFresh. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 160
Jira (Atlassian). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 167
Kasasa . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174
myAir (ResMed) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 181
Oakley (Luxottica). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .185
PetSmart Charities (Marketade). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 188
Philip Morris International HR Portal. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 194

Ray-Ban (Luxottica). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 199


Shopify. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 203
Starbucks College Achievement Plan (Arizona State University). . . . . . . . . . . . . . . . . . . . . . . . . . . 210
Syneto CENTRAL. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 215
The Deal (ExpandTheRoom). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 217
Top 10 Online Casinoer. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 224
User Interviews. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 230
WordFinder (LoveToKnow Media). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 233

Case Studies — 4th Edition 237


Harrisburg Area Community College . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 237
University of Edinburgh. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 244

Case Studies — 3rd Edition 248


Adobe kuler (kuler.adobe.com). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 248
Capital One. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 251
Direct Marketing Association. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 253
Eurostar (Etre). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 255
Health Care Without Harm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 261
Media News Group Interactive. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 264
Microsoft Office Help Pages. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 266
North Carolina State University. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 271
Scandinavian Airlines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 278
Shelter.org.uk (England and Scotland). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 281
Simply Business Insurance. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 284
Sarah Hopkins (artist). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 288

About the Case Studies


The remainder of this report presents case studies of UX metrics. Each study includes:
• Metrics collected
• Methodology used
• Description of the product or service
• Description of the design problem(s) that the team attempted to address
• Description of the solution(s) implemented
• Results of the design change

Most case studies also include screenshots of the design. Some also include quotes from the
team or artifacts from the design process or research.

How Case Studies Were Collected


We called for case study submissions using:
• Twitter
• LinkedIn
• The NN/g newsletter

For the fifth edition, we received 51 case study submissions from teams. Of those, we accepted
only 30. Submissions were rejected if they were unable to provide enough detail, only had a
single set of metrics, or used questionable methodology (not collecting enough data points was
the most widespread problem).

Of those 30 accepted case studies, nine teams were invited to participate in in-depth interviews
with NN/g. Those interviews provided the quotes included throughout this report.

Anonymous Case Studies


Some of the case studies are anonymous, typically because they represent big companies that
don’t want details of their operations to become public. These companies still graciously shared
their internal information with us in return for being promised anonymity in the report. We know
the contributors, and we thank them, even though they have to remain nameless here.

There are also many cases where the exact numbers for certain metrics needed to be kept
out of the report. Many companies were not willing to have their sales data or other sensitive
information published, even though they were willing to share it with us in private.

Because we are looking at only relative improvements in this report, the underlying numbers can
be kept out of print and still allow us to publish the improvement scores.

If we want the best and most interesting case studies, like the projects profiled here, we must
respect anonymity and confidentiality requests.

BY EDITION
The case studies that follow are from 57 different redesign projects. They are organized by the
edition of this report that first included them:

Edition    Year Collected    Number of Case Studies Included Here
5th        2020              30
4th        2012              2
3rd        2008              12

Note: Over the years, we’ve collected more case studies than the number printed here.
Unfortunately, many of the case studies from the 1st, 2nd, 3rd, and 4th editions are now so old as to
be likely uninteresting to modern UX professionals. We’ve kept the most interesting 14 older case
studies here, at the end of the report. Only the 30 newer case studies are listed in the categorized
tables below.

Within each edition section, the case studies are sorted alphabetically. In the following tables,
we organize the case studies based on metric category and industry. Read the summaries in the
tables to decide which case studies interest you, or browse through the entire set alphabetically.

BY METRIC CATEGORY

Adoption/Retention

Acumbamail (MailUp Group), page 77: Qualitative research revealed a big problem in a subscription process, and a quick fix causes a 22% lift in a conversion rate.

Alchemy Cloud, page 87: By redesigning to better reflect user needs, this complex enterprise product observed reduced training time and support tickets.

Anonymous Real Estate Company (Marketade), page 103: By skillfully combining quantitative and qualitative research, this real estate company realized they could increase leads by removing an unhelpful apartment search feature.

Arizona State University Online, page 106: An online university's overhaul of its degree pages resulted in some desired metric increases but also caused an undesired increase in bounce rate.

Asiacell, page 115: A major telecommunications company increased active users and revenue by redesigning their mobile app to focus on frequent user tasks.

CrossCountry (McCann Manchester), page 130: For this UK train operator, simplifying the homepage and exposing all ticket search options (instead of hiding them) corresponded with increased ticket searches, decreased bounce rate, and increased homepage value.

Deep Sentinel, page 133: A major revision of the mobile app for a security system led to easier and faster self-installation, as well as a 50% reduction in returned products.

Healio, page 139: A major redesign of an educational site for physicians resulted in audience and engagement growth.

PetSmart Charities (Marketade), page 188: By removing unnecessary form fields, this nonprofit substantially increased its newsletter signup completion rate.

Starbucks College Achievement Plan (Arizona State University), page 210: A refresh of the visual design of this corporate scholarship site coincided with a 78% increase in traffic.

The Deal (ExpandTheRoom), page 217: The Deal increased trial requests after removing some of its content from behind a paywall, making the trial program more visible and simplifying the trial request form.

User Interviews, page 230: A small change in visual design yielded a big increase in account creation on User Interviews's marketplace site.

WordFinder (LoveToKnow Media), page 233: A small change to this entertainment utility site resulted in a slight increase in returning users.

Effectiveness

American Kennel Club (ExpandTheRoom), page 92: By reorganizing the information architecture to focus on topics instead of user roles, this nonprofit increased task success rates by 20% on average.

Anonymous American Bank, page 96: Revised navigation labels helped customers complete their tasks without needing help, leading to a 25% decrease in calls to customer support centers.

Anonymous Car Insurance Company, page 98: Mobile optimization and reduced work for users in an online insurance quote process led to reduced time-on-task and an increased completion rate.

Deep Sentinel, page 133: A major revision of the mobile app for a security system led to easier and faster self-installation, as well as a 50% reduction in returned products.

HelloFresh, page 160: Added visual hierarchy helped to communicate complexity in this meal-kit service mobile app, resulting in easier, faster, more satisfying tasks.

Philip Morris International HR Portal, page 194: By revising an internal tool to prioritize employee tasks, this large global company gained big improvements in findability metrics.

Shopify, page 203: Some minor changes to a billing page showed qualitative improvements; however, no statistically significant differences were observed in the metrics.

Efficiency

Acumbamail (MailUp Group), page 77: Qualitative research revealed a big problem in a subscription process, and a quick fix causes a 22% lift in a conversion rate.

Alchemy Cloud, page 87: By redesigning to better reflect user needs, this complex enterprise product observed reduced training time and support tickets.

Anonymous Car Insurance Company, page 98: Mobile optimization and reduced work for users in an online insurance quote process led to reduced time-on-task and an increased completion rate.

Anonymous HR Tool, page 100: Automating an inefficient task reduced the amount of time required from a busy HR team by 60%.

Baileigh Industrial (Marketade), page 126: A research-driven overhaul of a metal and woodworking machinery manufacturer site's information architecture resulted in major findability improvements.

Deep Sentinel, page 133: A major revision of the mobile app for a security system led to easier and faster self-installation, as well as a 50% reduction in returned products.

HelloFresh, page 160: Added visual hierarchy helped to communicate complexity in this meal-kit service mobile app, resulting in easier, faster, more satisfying tasks.

Philip Morris International HR Portal, page 194: By revising an internal tool to prioritize employee tasks, this large global company gained big improvements in findability metrics.

Shopify, page 203: Some minor changes to a billing page showed qualitative improvements; however, no statistically significant differences were observed in the metrics.

WordFinder (LoveToKnow Media), page 233: A small change to this entertainment utility site resulted in a slight increase in returning users.

Engagement/Usage

Arizona State University Online, page 106: An online university's overhaul of its degree pages resulted in some desired metric increases but also caused an undesired increase in bounce rate.

Asiacell, page 115: A major telecommunications company increased active users and revenue by redesigning their mobile app to focus on frequent user tasks.

CrossCountry (McCann Manchester), page 130: For this UK train operator, simplifying the homepage and exposing all ticket search options (instead of hiding them) corresponded with increased ticket searches, decreased bounce rate, and increased homepage value.

Healio, page 139: A major redesign of an educational site for physicians resulted in audience and engagement growth.

Oakley (Luxottica), page 185: A small experiment with promoting sales in a retailer site's megamenu led to significant lifts in four key ecommerce metrics.

Ray-Ban (Luxottica), page 199: A small change in an A/B test of the checkout flow of a popular ecommerce site resulted in a slight but definite decrease in conversion rate and revenue per session.

Top 10 Online Casinoer, page 224: A handful of small design changes increased clickthrough and conversion rates for this site that compares different Danish gambling websites.

WordFinder (LoveToKnow Media), page 233: A small change to this entertainment utility site resulted in a slight increase in returning users.

Revenue

Asiacell, page 115: A major telecommunications company increased active users and revenue by redesigning their mobile app to focus on frequent user tasks.

Oakley (Luxottica), page 185: A small experiment with promoting sales in a retailer site's megamenu led to significant lifts in four key ecommerce metrics.

Ray-Ban (Luxottica), page 199: A small change in an A/B test of the checkout flow of a popular ecommerce site resulted in a slight but definite decrease in conversion rate and revenue per session.

Satisfaction/Perception

AIR MILES Reward Program (LoyaltyOne), page 80: A major redesign of an airline rewards program site resulted in slight increases in ease-of-use scores and decreases in time-on-task.

Arizona State University Online, page 106: An online university's overhaul of its degree pages resulted in some desired metric increases but also caused an undesired increase in bounce rate.

Asiacell, page 115: A major telecommunications company increased active users and revenue by redesigning their mobile app to focus on frequent user tasks.

Deep Sentinel, page 133: A major revision of the mobile app for a security system led to easier and faster self-installation, as well as a 50% reduction in returned products.

HeliNY (ExpandTheRoom), page 148: A redesign of the content and visual design of HeliNY's tourism site resulted in improvements in self-reported rating scale metrics.

HelloFresh, page 160: Added visual hierarchy helped to communicate complexity in this meal-kit service mobile app, resulting in easier, faster, more satisfying tasks.

myAir (ResMed), page 181: The visual redesign of an app helped to better align it with brand values as measured by a survey.

Shopify, page 203: Some minor changes to a billing page showed qualitative improvements; however, no statistically significant differences were observed in the metrics.

BY INDUSTRY

Enterprise & B2B

Acumbamail (MailUp Group), page 77: Qualitative research revealed a big problem in a subscription process, and a quick fix causes a 22% lift in a conversion rate.

Alchemy Cloud, page 87: By redesigning to better reflect user needs, this complex enterprise product observed reduced training time and support tickets.

Baileigh Industrial (Marketade), page 126: A research-driven overhaul of a metal and woodworking machinery manufacturer site's information architecture resulted in major findability improvements.

Jira (Atlassian), page 167: A slight change in this popular agile tool's backlog views resulted in a 95% decrease in load time.

Kasasa, page 174: Substantial user-centered changes in an enterprise app resulted in a 24% increase in client utilization.

Shopify, page 203: Some minor changes to a billing page showed qualitative improvements; however, no statistically significant differences were observed in the metrics.

Syneto CENTRAL, page 215: Redesigning a critical task in Syneto CENTRAL's complex cloud services platform resulted in a substantial reduction of time-on-task.

User Interviews, page 230: A small change in visual design yielded a big increase in account creation on User Interviews's marketplace site.

Nonprofit & Education

American Kennel Club (ExpandTheRoom), page 92: By reorganizing the information architecture to focus on topics instead of user roles, this nonprofit increased task success rates by 20% on average.

Arizona State University Online, page 106: An online university's overhaul of its degree pages resulted in some desired metric increases but also caused an undesired increase in bounce rate.

PetSmart Charities (Marketade), page 188: By removing unnecessary form fields, this nonprofit substantially increased its newsletter signup completion rate.

Starbucks College Achievement Plan (Arizona State University), page 210: A refresh of the visual design of this corporate scholarship site coincided with a 78% increase in traffic.

Healthcare & Insurance

Anonymous Car Insurance Company, page 98: Mobile optimization and reduced work for users in an online insurance quote process led to reduced time-on-task and an increased completion rate.

Healio CME, page 139: A major redesign of an educational site for physicians resulted in audience and engagement growth.

myAir (ResMed), page 181: The visual redesign of an app helped to better align it with brand values as measured by a survey.

Entertainment & Tourism

HeliNY (ExpandTheRoom), page 148: A redesign of the content and visual design of HeliNY's tourism site resulted in improvements in self-reported rating scale metrics.

Top 10 Online Casinoer, page 224: A handful of small design changes increased clickthrough and conversion rates for this site that compares different Danish gambling websites.

WordFinder (LoveToKnow Media), page 233: A small change to this entertainment utility site resulted in a slight increase in returning users.

Transportation & Real Estate

AIR MILES Reward Program (LoyaltyOne), page 80: A major redesign of an airline rewards program site resulted in slight increases in ease-of-use scores and decreases in time-on-task.

Anonymous Real Estate Company (Marketade), page 103: By skillfully combining quantitative and qualitative research, this real estate company realized they could increase leads by removing an unhelpful apartment search feature.

CrossCountry (McCann Manchester), page 130: For this UK train operator, simplifying the homepage and exposing all ticket search options (instead of hiding them) corresponded with increased ticket searches, decreased bounce rate, and increased homepage value.

Ecommerce

HelloFresh, page 160: Added visual hierarchy helped to communicate complexity in this meal-kit service mobile app, resulting in easier, faster, more satisfying tasks.

Oakley (Luxottica), page 185: A small experiment with promoting sales in a retailer site's megamenu led to significant lifts in four key ecommerce metrics.

Ray-Ban (Luxottica), page 199: A small change in an A/B test of the checkout flow of a popular ecommerce site resulted in a slight but definite decrease in conversion rate and revenue per session.

Finance

Anonymous American Bank, page 96: Revised navigation labels helped customers complete their tasks without needing help, leading to a 25% decrease in calls to customer support centers.

The Deal (ExpandTheRoom), page 217: The Deal increased trial requests after removing some of its content from behind a paywall, making the trial program more visible and simplifying the trial request form.

Intranet/Internal

Anonymous HR Tool, page 100: Automating an inefficient task reduced the amount of time required from a busy HR team by 60%.

Philip Morris International HR Portal, page 194: By revising an internal tool to prioritize employee tasks, this large global company gained big improvements in findability metrics.

Miscellaneous

Asiacell, page 115: A major telecommunications company increased active users and revenue by redesigning their mobile app to focus on frequent user tasks.

Deep Sentinel, page 133: A major revision of the mobile app for a security system led to easier and faster self-installation, as well as a 50% reduction in returned products.

Case Studies — 5th Edition

ACUMBAMAIL (MAILUP GROUP)


Type: Web app   |   Subject: Enterprise   |   Project Size: Small   |   Report Edition: 5th

Summary: Qualitative research revealed a big problem in a subscription process, and a quick fix
causes a 22% lift in a conversion rate.

METRICS

Methodology:
Quantitative usability testing, Analytics

Metric: Time on task for subscribing Metric: Conversion rate for subscribing
Before: 3 min 54 sec Before: 9.5%
After: 3 min 30 sec After: 11.6%
Improvement Score: 11% Improvement Score: 22%
Percent Change: -10%

Metric: NPS survey


Before: 1
After: 51
Improvement Score: 5,000%
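These two scores are easy to reproduce. The sketch below is a minimal illustration (not part of the original case study) and assumes the conventions the tables in this report appear to follow: Percent Change is (after - before) / before, while Improvement Score uses the after value as the baseline for lower-is-better metrics such as time on task, so that improvements always read as positive numbers.

```python
def percent_change(before: float, after: float) -> float:
    """Relative change from the 'before' value: (after - before) / before."""
    return (after - before) / before * 100


def improvement_score(before: float, after: float, lower_is_better: bool = False) -> float:
    """Improvement expressed as a positive number when the metric moved in the
    desired direction (assumed convention, inferred from the tables in this report)."""
    if lower_is_better:
        return (before - after) / after * 100
    return (after - before) / before * 100


# Acumbamail figures from the table above
time_before, time_after = 234, 210   # 3 min 54 sec and 3 min 30 sec, in seconds
conv_before, conv_after = 9.5, 11.6  # subscription conversion rate, in percent

print(round(improvement_score(time_before, time_after, lower_is_better=True)))  # 11
print(round(percent_change(time_before, time_after)))                           # -10
print(round(improvement_score(conv_before, conv_after)))                        # 22
```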

Product & Team


Acumbamail is a Spain-based email marketing provider. It also offers SMS packages and transactional services, with a freemium business model suited to micro and small businesses. It allows customers to create, send, and manage their campaigns while tracking real-time performance.

Acumbamail is owned by MailUp Group, a company based in Italy and specializing in cloud
marketing technologies.

AIR MILES REWARD PROGRAM (LOYALTYONE)

Type: Website
Subject: Airlines/Reward program
Project size: Large
Report edition: 5th

Summary: A major redesign of an airline rewards program site resulted in slight increases in ease-of-use scores and decreases in time-on-task.

METRICS

Methodology: Quantitative usability testing

Metric: Ease-of-use, mobile
Before: 4.3
After: 4.6
Improvement Score: 7%

Metric: Ease-of-use, desktop
Before: 4.1
After: 4.4
Improvement Score: 7%

Product & Team


The AIR MILES Reward Program is Canada’s most recognized loyalty program, with nearly 11
million active members collecting and redeeming points. Members earn miles at more than 200
partners across the country at thousands of retail and service locations.

AIR MILES members can redeem their miles for merchandise, travel, or event tickets.

Problem & Goals


AIR MILES was founded in 1992, and while the website has evolved since then, it was due for an
overhaul. The airmiles.ca redesign and re-platform were completed to address the changes that
have taken place in the program and to better meet the expectations of users.

ALCHEMY CLOUD
Type: Web app
Subject: Enterprise/B2B
Project size: Large
Report edition: 5th

Summary: By redesigning to better reflect user needs, this complex enterprise product saw reduced training time and fewer support tickets.

METRICS

Methodology: Quantitative usability testing, analytics, customer support

Metric: New user training time
Percent Change: -56%
Improvement Score: 125%

Metric: Support tickets
Percent Change: -50%

Metric: Number of users
Improvement Score: 113%

Product & Team


Alchemy is a Silicon Valley–based cloud software company that helps chemicals companies
modernize how they work in order to accelerate the development, sale, and servicing of formulas.

AMERICAN KENNEL CLUB (EXPANDTHEROOM)


Type: Website
Subject: Nonprofit
Project size: Medium
Report edition: 5th

Summary: By reorganizing the information architecture to focus on topics instead of user roles, this nonprofit increased task success rates by 20% on average.

METRICS

Methodology: Tree testing

Metric: Average success rate across all tasks
Improvement Score: 20%

Product & Team


ExpandTheRoom (ETR)9 is a data-driven design agency working primarily with marketing and
IT product teams. Utilizing its customer-centric approach, “Purpose-Driven Design,” ETR creates
websites, custom productivity tools, applications, and interactive experiences. ETR’s team is fully
distributed across the United States.

The American Kennel Club (AKC) is an American registry of purebred dog pedigrees. AKC also holds
and supports “dog sports” including dog shows and agility competitions. Their website allows
owners to register their dogs, find breed information, and learn how to participate in dog sports.

Problems & Goals


The American Kennel Club’s website contains a huge amount of content: information on 190+ dog
breeds, 40+ categories of expert advice, 22,000+ events, training/health services, an online store,
and a breeder marketplace. When ExpandTheRoom (ETR) began working with the AKC, that content
was not well organized. Users struggled to locate the information they needed.

The ETR team conducted a baseline tree-testing study on the existing information architecture. They
recruited real users directly from the site and asked them to perform tasks such as: You’re interested
in registering your mixed breed dog with the AKC. Where would you go to register your dog?

9. www.expandtheroom.com

ANONYMOUS AMERICAN BANK


Type: Website
Subject: Finance
Project size: Small
Report edition: 5th

Summary: Revised navigation labels helped customers complete their tasks without needing help, leading to a 25% decrease in calls to customer support centers.

METRICS

Methodology: A/B testing, customer service

Metric: Task completion for customers updating their contact information
Increase: 10%
Percent Change: Not given

Metric: Customers calling in to complete the contact information change
Percent Change: -25%

Product & Team


A large and well-known American bank offering business and personal banking services,
including credit cards, auto loans, and savings accounts. This case study was shared by a senior
UX professional who chose to remain anonymous.

Problems & Goals


The bank’s design team noticed that many people were calling in to call centers for help
completing simple tasks that were available online. The team hypothesized that their navigation
labels might be preventing users from finding those self-service options.

Any unnecessary calls to customer support represent a big problem for the bank. Particularly
when those tasks should be simple and self-explanatory (like updating account contact
information), customers are annoyed when they have to call in for help. Each unnecessary call
also costs the bank money.

Solutions & Results


Using A/B testing, the team tested a revised navigation against the original version. The new
version included clearer labels as well as descriptive text beneath each label to help clarify what
customers would find there.

The new design resulted in a 10 percentage point increase in the task completion rate for
updating account contact information, making it the winning version. They implemented the
changed labels for all users and saw a corresponding 25% drop in the number of customers
calling in for help with the same task. The decrease led to thousands of dollars of cost savings for
the bank.
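The "thousands of dollars" figure follows from straightforward call-deflection arithmetic. In the sketch below, only the 25% reduction comes from the case study; the call volume and cost per call are hypothetical placeholders, since the bank did not share them.

```python
# Hypothetical inputs -- the case study gives only the 25% reduction.
monthly_calls_for_task = 4_000   # assumed calls/month about contact-info changes
cost_per_call = 6.50             # assumed fully loaded cost of one support call, USD
reduction = 0.25                 # from the case study

calls_avoided_per_year = monthly_calls_for_task * reduction * 12
annual_savings = calls_avoided_per_year * cost_per_call
print(f"{calls_avoided_per_year:,.0f} calls avoided -> ${annual_savings:,.0f}/year saved")
# 12,000 calls avoided -> $78,000/year saved (with these assumed inputs)
```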

ANONYMOUS CAR INSURANCE COMPANY


Type: Website
Subject: Insurance
Project size: Medium
Report edition: 5th

Summary: Mobile optimization and reduced work for users in an online insurance quote process led to reduced time-on-task and an increased completion rate.

METRICS

Methodology: Quantitative usability testing, analytics

Metric: Time on task for getting an insurance quote
Before: 90 seconds
After: 60 seconds
Improvement Score: 50%
Percent Change: -33%

Metric: Completion rate for the online quote process
Before: 25%
After: 60%
Improvement Score: 140%

Product & Team


A medium-sized digital agency with locations in the UK, Ireland, and Australia was hired to help
an insurance company.

Problems & Goals


An insurance company had their website designed and built by a marketing agency, with no user
research. As a consequence, the site wasn’t working well for their customers. They hired a UX-
focused agency to help them understand why their customers weren’t completing the online car
insurance quote process.

The agency found several major problems with the site, including:
• Poor mobile optimization
• A very long dropdown list of current insurers
• A tedious process of inputting all of the details about the user’s car (make, model, year, etc.)

Solutions & Results


The agency team focused on substantially improving the mobile experience and finding ways to
save users time.

They realized that most users were currently insured with one of 10 major competitors. Instead
of forcing users to scroll down a very long list to find their current insurer, they decided to place
the top 10 most common insurers at the top of the list.

The team also found that if users just entered their car’s license plate number, many of the car’s
details could be automatically imported, reducing a lot of work for users.

The agency used quantitative usability testing to benchmark their improvements to the quote
process. For the task of getting an online quote, they were able to reduce the average time on
task from 90 seconds to 60 seconds. After launching the redesign, they checked the analytics
data and found that the completion rate increased from 25% to 60%.

ANONYMOUS HR TOOL
Type: Website
Subject: Internal
Project size: Large
Report edition: 5th

Summary: Automating an inefficient task reduced the amount of time required from a busy HR team by 60%.

METRICS

Methodology: Surveys

Metric: Time spent by the HR team per month generating letters
Before: 53 working days
After: 21 working days
Improvement Score: 152%
Percent Change: -60%
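Saving 32 working days of HR effort per month converts directly into an ROI figure once a cost is attached to staff time and to the automation project itself. In the sketch below, only the 53-to-21 working-day change comes from the case study; the day rate and project cost are hypothetical placeholders.

```python
# From the case study: 53 -> 21 working days of HR effort per month.
days_saved_per_month = 53 - 21

# Hypothetical placeholders -- not from the case study.
loaded_cost_per_day = 250    # assumed fully loaded cost of one HR working day, USD
project_cost = 40_000        # assumed cost of building the automation, USD

annual_benefit = days_saved_per_month * 12 * loaded_cost_per_day
roi = (annual_benefit - project_cost) / project_cost
print(f"Annual benefit ${annual_benefit:,.0f}, first-year ROI {roi:.0%}")
# Annual benefit $96,000, first-year ROI 140% (with these assumed inputs)
```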

Product & Team


This anonymous automotive technology company employs more than 3,000 people in its
Southeast Asia branch. The in-house UX team applies lean design and design thinking to the
internal products of the company.

Problems & Goals


The HR needs of this anonymous company's 3,000 employees are handled by a relatively small
team of 10 people. The UX team conducted user research to better understand this small team’s
tasks and to find ways to improve their efficiency.

One of the opportunities they identified was related to the letters that the HR team had to
manually create. These letters were official HR documents, such as certificates of employment.
This task was extremely time-consuming. The old process of manually creating letters
usually involved the following steps:
1. An employee emails the HR help desk to request a letter.
2. The HR help desk sends an automatic response email to acknowledge the request and
inform the employee that the HR team may take up to three working days to create the letter.
3. HR staff checks the email and replies to the employee, asking for the necessary details for

ANONYMOUS REAL ESTATE COMPANY (MARKETADE)


Type: Website
Subject: Real Estate
Project size: Medium
Report edition: 5th

Summary: By skillfully combining quantitative and qualitative research, this real estate company realized they could increase leads by removing an unhelpful apartment search feature.

METRICS

Methodology: Analytics, A/B testing

Metric: Leads
Before: 1%
After: 4%
Improvement Score: 277%

Product & Team


Marketade is a small but long-established and fully remote user-research company with expertise
in turning user insights into better experiences and business results. They were hired by a
multistate American apartment-finder company to overhaul their website.

Problems & Goals


An anonymous real estate company substantially redesigned their website: new design, new content, and new property search functionality. A few months after launch, they noticed that their highest-priority KPI, renter leads, was down 60% compared to the prior year. They panicked and reached out to Marketade for help.

Marketade started with qualitative research (interviews and usability testing) to help them
understand the user and why the new design was failing. Through qualitative research, they
identified over 50 problems on the site and the apartment search application. For the biggest of
these problems, they then turned to Google Analytics to help them quantify and size the issues —
how many people were being impacted by these problems?

After quantifying the problems, the Marketade team generated a long list of findings and
potential solutions.

ARIZONA STATE UNIVERSITY ONLINE

Type: Website
Subject: Education
Project size: Large
Report edition: 5th

Summary: An online university's overhaul of its degree pages resulted in some desired metric increases but also caused an undesired increase in bounce rate.

METRICS

Methodology: Analytics, surveys

Metric: CSAT
Before: 79
After: 83
Improvement Score: 5%

Metric: NPS
Before: 53
After: 66
Improvement Score: 25%

Metric: Conversion rate: Submitting a request for more information
Improvement Score: 18%

Metric: Bounce rate
Improvement Score: 13%

Product & Team


Arizona State University (ASU) is one of the largest public universities by enrollment in the United
States. Their ASU Online program offers 200+ degrees which can be earned 100% virtually.

ASU has won the US News award for Most Innovative College for five consecutive years.

EdPlus is a central enterprise unit for ASU focused on the design and scalable delivery of digital
teaching and learning models to increase student success and reduce barriers to achievement in
higher education.

Problems & Goals


The focus of this redesign project was ASU Online’s degree pages. These pages describe each
available degree, including:
• Required courses
• Possible careers
• Faculty credentials

• Application requirements
• Awards
• Tuition calculator

The EdPlus design team conducted a large qualitative research project with prospective students,
which left them with a better understanding of which degree details were needed. The existing
design hid too many of those important details behind page tabs with dense walls of text.

The team decided to revise the structure and visual design of the page, hoping to improve
prospective students' impressions of the site (CSAT and NPS) as well as a site KPI:
submissions of the request-for-more-information form.

Solutions & Results


The EdPlus team decided to pull the important content out from behind the page tabs. The result
was a very long page, so they added sticky in-page navigation elements. The team also cut some
of the length of the content and added more visuals and photos specific to each degree. They
also made the request for information contact form substantially larger, added a big photograph,
and placed it at the top of each degree page.

Amanda Gulley, UX and Design Manager at EdPlus, reflected on this tradeoff:

“Our mission is not just to increase form submissions. We want


to create a better user experience, which may lead to students
directly applying to the program where they may not need to
speak to an enrollment advisor to learn more, but they can
gather all the information they need on these and move forward
independently.
So, it’s interesting to weigh KPI volume against the tradeoff and
see that an increase is not always a positive. You may get high
volume, but those leads may be low quality. In this case, maybe
we have the form too high up the page, and people aren’t
reading an ounce of content before contacting us. Their intent
to enroll may be much lower than those who actually spent time
researching the degree programs first.”

Amanda Gulley, UX and Design Manager at Arizona State University EdPlus

The EdPlus team plans to conduct another major redesign in the near future, focusing on deeper
content revisions and considering a different presentation of the contact form.

ASIACELL
Type: Mobile App
Subject: Utility/Subscription
Project size: Large
Report edition: 5th

Summary: A major telecommunications company increased active users and revenue by redesigning their mobile app to focus on frequent user tasks.

METRICS

Methodology: Analytics, app stores, finance

Metric: App store customer ratings, out of 5 stars
Before: 3.4
After: 4.3
Improvement Score: 26%

Metric: Active users per month
Before: 30,000
After: 125,000
Improvement Score: 317%

Metric: Users recharging their balance each month
Improvement Score: 117%

Metric: Revenues generated through the app
Improvement Score: 225%

Product & Team


Asiacell is a leading provider of quality mobile telecommunications and data services in Iraq
with a subscriber base of 14 million customers. Asiacell was the first mobile telecommunications
provider in Iraq to achieve nationwide coverage, offering its services across all of Iraq’s 19
governorates, including the national capital Baghdad and all other major Iraqi cities.

Problems & Goals


Asiacell’s design team noticed that app adoption and engagement metrics were lower than
desired, and they hypothesized that was because the app had poor usability and difficult
navigation. Even important and frequent user tasks (like checking account balance or purchasing
prepaid mobile plans) required several steps and accessing multiple menus.

Because the app was so cumbersome to use, many users were giving up and resorting to calling
Asiacell’s customer support, which was very expensive for the company.

Solutions & Results


The team adopted a simplified design language system to make sure the visual experience is
consistent throughout the application.

They also focused on streamlining frequent user tasks and reducing unnecessary steps in processes.
As a part of this effort, they added important account information (like current balance) directly on
the home screen so users could check those details simply by opening the app.

The Asiacell design team also added a few innovative features to simplify their customer
experience. In the new app, users could seamlessly authenticate with a single tap, as long as they
were on an Asiacell data network. The team also added personalization features to help surface
useful information and actions depending on each individual user’s status and context.

BAILEIGH INDUSTRIAL (MARKETADE)


Type: Website
Subject: B2B/Enterprise
Project size: Medium
Report edition: 5th

Summary: A research-driven overhaul of a metal and woodworking machinery manufacturer site's information architecture resulted in major findability improvements.

METRICS

Methodology: Tree testing

Metric: Overall findability score (task success & directness combined)
Before: 4 out of 10
After: 7 out of 10
Improvement Score: 85%

Product & Team


Marketade is a small but long-established and fully remote user-research company with expertise
in turning user insights into better experiences and business results.

Based in Wisconsin, Baileigh Industrial is a top manufacturer of industrial metal and woodworking
machinery. They sell through distributors and directly through Baileigh.com.

Problems & Goals


When Marketade began working with Baileigh, one of their first actions was to conduct qualitative
interviews with their sales reps. A major pain point among the reps was customers calling to ask
about small-ticket products, which limited the amount of time they could spend talking to big-
ticket product shoppers.

When the Marketade research team spoke to senior management, they heard the same
complaint. They realized that if Baileigh could improve self-service on the website for small-ticket
customers, it would free up sales reps to focus on people who truly needed their expertise.

Marketade then moved on to qualitative usability testing, and they soon identified a key barrier to
self-service: customers often struggled to find the product, or even the product category, that they
wanted. They repeatedly wasted time going down the wrong paths using the site’s navigation.

CROSSCOUNTRY (MCCANN MANCHESTER)


Type: Website
Subject: Tourism/Transportation
Project size: Medium
Report edition: 5th

Summary: For this UK train operator, simplifying the homepage and exposing all ticket search options (instead of hiding them) corresponded with increased ticket searches, decreased bounce rate, and increased homepage value.

METRICS

Methodology: Analytics, A/B testing

Metric: Train ticket searches
Improvement Score: 6%

Metric: Homepage value
Improvement Score: 15%

Metric: Homepage bounce rate
Percent Change: -63%

Product & Team


McCann Manchester is a marketing communications agency based in Cheshire, UK. They help
brands build positive relationships with users and play a meaningful role in people’s lives.

CrossCountry Trains is a train operator with an extensive network of train routes in Great Britain.

Problems & Goals


CrossCountry’s primary goal for its site was to sell train tickets. As a consequence, its ticket search
panel (called the “journey search”) was the most important element on the site.

The basic search criteria of the journey search (origin, destination, departure, and return) were
all exposed on the homepage. However, other important options (number of travelers, whether
or not a railcard would be used, routes to use or avoid, services, and promo codes) were hidden
beneath an arrow button in a dropdown.

DEEP SENTINEL
Type: Mobile app
Subject: Security/Subscription
Project size: Large
Report edition: 5th

Summary: A major revision of the mobile app for a security system led to easier and faster self-installation, as well as a 50% reduction in returned products.

METRICS

Methodology: Analytics, customer service, app store reviews

Metric: Time to completion (installing and activating security system)
Improvement Score: 200%
Percent Change: -67%

Metric: Return rate (returning security system)
Improvement Score: 100%
Percent Change: -50%

Metric: Average app store rating, out of 5
Improvement Score: 19%

Metric: Completion rate (installing and activating security system)
Improvement Score: 42%

Product & Team


Deep Sentinel is a security company based in California, USA. They sell home and retail security
systems that focus on using video cameras with live guards who monitor video streams in real
time. Human beings evaluate the video streams to determine whether or not intervention is
necessary and contact law enforcement when needed.

The Deep Sentinel app helps users install and set up their new security systems. Once installed,
users can check on their home security through the app, change settings, and enable “privacy
mode” to stop the live video feeds temporarily.

Problems & Goals


The Deep Sentinel design team realized that users were having trouble installing, setting up, and using
their new security systems through the mobile app. They identified a variety of problems, including:
• The navigation hierarchy and labels made it hard for users to find different features
and functions.
• Poor color contrast and small fonts were difficult for users to notice and read.

• The video recording history timeline was challenging to use because multiple events could
happen close together, and it was difficult for users to get to the precise moment they wanted.
• Poor visual hierarchy made the app difficult to quickly scan and understand.
• Valuable screen real estate was occupied by the company logo on every screen.
• Icons lacked labels.

Using analytics data, the Deep Sentinel team was able to track completion rates and average
time until completion for installing and setting up new security systems (from creating a new
account to activating the security system). With the new app design, they found a 42% increase in
completion rate and a 67% reduction in time until completion.

They found similarly encouraging changes in their customer satisfaction data as well — returned
security systems decreased by 50% after the app redesign, and the app store rating increased
from 3.6 to 4.3.

HEALIO CME
Type: Website
Subject: Healthcare
Project size: Large
Report edition: 5th

Summary: A major redesign of an educational site for physicians resulted in audience and engagement growth.

METRICS

Methodology: Analytics

Metric: Conversion rate (learners taking test)
Before: 34.1%
After: 58.9%
Improvement Score: 73%

Metric: Conversion rate (learners to completer)
Before: 28.2%
After: 57%
Improvement Score: 102%

Metric: Conversion rate (test taker to completer)
Before: 82.7%
After: 96.8%
Improvement Score: 17%

Metric: Engagement (average learning activities per user)
Before: 1.5
After: 2
Improvement Score: 33%

Metric: Visitors (average monthly learners)
Before: 13,241
After: 15,802
Improvement Score: 19%
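The three conversion rates are consistent with a simple funnel (learner to test taker to completer): multiplying the stage rates reproduces the end-to-end rate. The quick check below is not from the case study; it just verifies that consistency, assuming the rates describe the same funnel.

```python
# Stage rates from the table (before and after the redesign).
learner_to_test   = {"before": 0.341, "after": 0.589}
test_to_completer = {"before": 0.827, "after": 0.968}
learner_to_completer_reported = {"before": 0.282, "after": 0.570}

for period in ("before", "after"):
    derived = learner_to_test[period] * test_to_completer[period]
    print(period, f"derived {derived:.1%}",
          f"reported {learner_to_completer_reported[period]:.1%}")
# before: derived 28.2%, reported 28.2%
# after:  derived 57.0%, reported 57.0%
```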

Product & Team


Healio is a medical website designed to provide physicians and healthcare practitioners with
news, journals, and educational content.

Healio CME is a medical website that offers healthcare professionals continuing medical
education credits, which are required for American physicians.

CME stands for Continuing Medical Education. These programs consist of activities that serve
to maintain and develop a physician’s knowledge over time. Some American states require a
specific number of CME credits annually for a physician to maintain their medical license and be

HELINY (EXPANDTHEROOM)
Type: Website
Subject: Tourism/Transportation
Project size: Large
Report edition: 5th

Summary: A redesign of the content and visual design of HeliNY's tourism site resulted in improvements in self-reported rating scale metrics.

METRICS

Methodology: Online surveys

Metric: Percentage of respondents reporting that the booking process was "Very Easy"
Before: 31%
After: 52%
Improvement Score: 68%

Metric: Percentage of respondents reporting that they understood the differences between HeliNY's tours
Before: 55%
After: 66%
Improvement Score: 21%

Metric: Percentage of respondents selecting "modern" to describe the site's visual design
Before: 15%
After: 17%
Improvement Score: 18%

Metric: Percentage of respondents selecting "tacky" to describe the site's visual design
Before: 3%
After: 1%
Improvement Score: 99%
Percent Change: -50%

Product & Team


ExpandTheRoom (ETR)10 is a data-driven design agency working primarily with marketing and
IT product teams. Utilizing its customer-centric approach, “Purpose-Driven Design,” ETR creates
websites, custom productivity tools, applications, and interactive experiences. ETR’s team is fully
distributed across the United States.

10. www.expandtheroom.com

HeliNY offers helicopter charters and luxury aerial tours of New York City. Their website explains
the differences in their offerings and services and allows online booking.

Problems & Goals


HeliNY needed help refining its site's booking process, as well as updating its visual design.
Before the project, the site suffered from two primary problems:
• Tours page: Users struggled to understand the difference between the different types of tours.
• Tour booking process: Users had to fill out a very long Google Form, which users said was
hard to use. The form also involved a lot of back-end work for employees.

With these concerns in mind, ETR began with an online survey of HeliNY’s customers. Among the
questions they asked were:
• How easy was it to go through our booking process? (Rating scale, 1 = Very Difficult, 5 =
Very Easy.)
• Which words do you think best describe the look and feel of our site? (Multiselect word
association with options like “modern,” “tacky,” and “clean.”)

Solutions & Results


The team at ETR implemented a new visual design across the site, using a cooler color palette
and more negative space.

On the Tours page, they replaced the uninformative descriptions of each tour with a single,
scannable comparison table. Users could easily see which landmarks were included in each tour.
For example, Columbia University was included in the Ultimate Tour and the Deluxe Tour but not
the New Yorker Tour.

Finally, ETR converted the booking form to a native, multistep booking process with clear instructions.

“One UX issue was that the existing tours page seemed difficult
to understand and compare different options. We added an easy
to understand comparison chart as well as an interactive map.
We also wanted to greatly improve the experience of their tour
booking process; previously, it was simply one very long Google
Form. We converted it to a native, multistep booking process
with clear instructions.”

Kerrin McLaughlin, Experience Designer & Researcher at ExpandTheRoom



HELLOFRESH
Type: Mobile app
Subject: Ecommerce/Subscription
Project size: Medium
Report edition: 5th

Summary: Added visual hierarchy helped to communicate complexity in this meal-kit service mobile app, resulting in easier, faster, more satisfying tasks.

METRICS

Methodology: Quantitative usability testing, surveys

Metric: SUS
Before: 75
After: 90
Improvement Score: 20%

Metric: Success rate
Before: 25%
After: 100%
Improvement Score: 300%

Metric: Time on task
Before: 28
After: 14
Improvement Score: 100%
Percent Change: -50%

Metric: Subjective success rate
Before: 63%
After: 100%
Improvement Score: 59%

Metric: Ease-of-use rating
Before: 6.1 out of 7
After: 6.9 out of 7
Improvement Score: 13%

Metric: Confidence rating
Before: 5.9 out of 7
After: 6.9 out of 7
Improvement Score: 17%

Product & Team


HelloFresh provides home-cooking meal kits to customers. Subscribers to the service receive a set of
ingredients and recipes each week. HelloFresh is based in Berlin, Germany, and is the largest meal-kit
provider in the United States. It also serves Canada, Western Europe, New Zealand, and Australia.

When new HelloFresh customers sign up, they choose a plan based on dietary preferences,
serving size, and how many meals they want per week. Then the service delivers different recipes
each week. In the HelloFresh app, users can check upcoming meals, skip a delivery, or change
which recipes will be delivered.

Problems & Goals


After reviewing qualitative research and customer feedback, the HelloFresh UX team found
some issues with the home screen of the service’s mobile app, particularly when users checked
upcoming and past meals. Customers struggled to find specific upcoming and past meals, and
they didn’t always understand when it was too late to make changes to upcoming orders.

“Communicating the difference between past and upcoming


meals has always been an issue. Users aren’t sure: When does
their ‘week’ start? When is a ‘HelloFresh’ week? So, we explored
different potential ways to group meals.”

James Villacci, UX Research Lead at HelloFresh



• Next upcoming delivery: This meal week is imminently approaching, and users only
have a limited amount of time left to make changes to their recipes (Edit meals) before it’s
packed and shipped. The team used full-width images for the meals in this week to show
more detail and grab attention.
• Following upcoming delivery: Users can make changes to meals a few weeks out, but
the need is less urgent. The team used smaller thumbnail images to visually show the
difference between this delivery and the closer one.
• Current delivery (meal kit currently at home): This meal is already at the user’s home and
contains the recipes for the current week. The team moved this into the Closed deliveries
section. In this section, there are no thumbnails giving an overview of the meals included.
• Shipping soon: This is next week’s meal. When the delivery is one week away, it’s too
late for the user to make any changes to their recipe selections.

The HelloFresh UX team measured the performance of the old and new versions of the design, using quantitative usability testing and surveys. For each task, the team collected time
on task, success rate, subjective success rating (asking participants if they thought they were
successful), SUS (System Usability Scale), ease-of-use rating, and confidence rating (asking
participants how confident they felt in their answer).

One of the most important tasks tested was: “You want to see more info about the delivery you
received May 12. How would you do that?” For this task, they found the following improvements
on the metrics:
• Time on task reduced by 50%
• Success rate increased by 300%
• Subjective success rate increased by 59%
• SUS score increased by 20%
• Ease-of-use rating increased by 13%
• Confidence rating increased by 17%

JIRA (ATLASSIAN)
Type: Web app
Subject: Enterprise
Project size: Small
Report edition: 5th

Summary: A slight change in this popular agile tool's backlog views resulted in a 95% decrease in load time.

METRICS

Methodology: Analytics

Metric: Page load time for a big backlog (10,000 issues)
Before: 85 seconds
After: 4 seconds
Improvement Score: 2025%
Percent Change: -95%

Metric: Page load time for a regular backlog (400 issues)
Before: 5 seconds
After: 2 seconds
Improvement Score: 2129%
Percent Change: -58%

Product & Team


Jira is a software development tool for agile teams. It helps teams organize user stories and issues
and plan for upcoming sprints.

Atlassian is an Australian enterprise software company. Its products (such as Jira and Confluence)
are popular for software development, project management, and content management.

Spartez is a Polish development company that partners with Atlassian and works exclusively on
Atlassian products.

Problems & Goals


An agile backlog is a list of tasks that represents outstanding work on a project. Agile teams can
view all of the work to be done, prioritize the issues, and use them to plan for upcoming sprints.

The Jira team realized that many organizations had enormous backlogs of around 10,000 issues
— many more than the application was originally designed to contain. They found that these large
backlogs were taking an extremely long time to load — in some cases, multiple minutes.

“There is technically no limit on the number of issues that can


be in a backlog. From our customers, we knew that some
of the backlog sizes were almost 10,000 issues and 1,000
epics. Backlogs weren’t designed for that size, because agile
methodologies mandate to keep the backlogs up to date.”

Imran Parvez, Designer at Atlassian, Spartez

They discovered that a major part of the problem was that Jira would load all issues in a backlog,
which is more than users really need when they first open the view (they can’t scan through
10,000 issues at first glance).

The Jira team decided that if they could have Jira load a small subset of all issues (100–500
instead of tens of thousands), they could potentially improve performance without damaging
the user experience. The team set a goal of loading a large backlog of 10,000 issues in less than
three seconds.

They started by reviewing their existing knowledge about their primary users (developers and
product managers) and their understanding of how those two user groups used the backlog in
their work. They considered three primary use cases:
• Planning: Regular triage of the backlog, road mapping for the year, estimating issues,
planning across multiple sprints
• Finding issues: Linking issues or finding duplicates
• Creating issues: Directly in sprints or at the bottom of the backlog

The Jira team had to find a way to reduce load time without interfering with any of these use cases.
They had to consider which issues should be included in that limited set that would be loaded
first. The most recently updated or commented-on issues seemed like the best option, since
Atlassian knew from their data that the longer an issue sits in a backlog, the less likely it is to be
resolved or updated.
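The case study does not describe the implementation, but the selection rule it outlines (render only a few hundred of the most recently updated issues first, and fetch the rest on demand) is easy to sketch. The code below is a hypothetical illustration of that idea; the function and field names are invented, not Atlassian's.

```python
from datetime import datetime
from typing import Dict, List

INITIAL_PAGE_SIZE = 500  # the case study mentions an initial subset of 100-500 issues


def initial_backlog_view(issues: List[Dict]) -> List[Dict]:
    """Return only the most recently updated issues for the first render.
    Hypothetical sketch -- not Jira's actual loading code."""
    by_recency = sorted(issues, key=lambda i: i["updated"], reverse=True)
    return by_recency[:INITIAL_PAGE_SIZE]


def load_more(issues: List[Dict], already_loaded: int, page_size: int = 500) -> List[Dict]:
    """Fetch the next page when the user scrolls or searches past the initial subset."""
    by_recency = sorted(issues, key=lambda i: i["updated"], reverse=True)
    return by_recency[already_loaded:already_loaded + page_size]


# Example: a 10,000-issue backlog renders from just its first 500 issues.
backlog = [{"key": f"PROJ-{n}", "updated": datetime(2020, 1, 1 + n % 28)} for n in range(10_000)]
print(len(initial_backlog_view(backlog)))  # 500
```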

KASASA
Type: Web app
Subject: Enterprise
Project size: Large
Report edition: 5th

Summary: Substantial user-centered changes in an enterprise app resulted in a 24% increase in client utilization.

METRICS

Methodology: Analytics

Metric: Utilization (session volume per financial institution)
Improvement Score: 24%

Metric: Average page views per session
Improvement Score: 113%

Product & Team


Kasasa (kasasa.com) is an award-winning financial technology and marketing technology
provider. Based in Austin, Texas, with 450 employees, Kasasa helps more than 800 community
financial institutions establish long-lasting relationships with consumers residing in their local markets,
through its branded retail products, world-class marketing capabilities, and expert consulting.

Problems & Goals


The company was looking to revise its analytics platform, Insight Exchange, to better meet user
needs. Insight Exchange was a B2B application, allowing financial institutions to see how well
their products were performing in the market.

MYAIR (RESMED)
Type: Web app
Subject: Healthcare
Project size: Small
Report edition: 5th

Summary: The visual redesign of an app helped to better align it with brand values as measured by a survey.

METRICS

Methodology: Survey

Metric: Percentage agreeing or strongly agreeing that the design conveys "restful sleep"
Before: 55%
After: 60%
Improvement Score: 9%

Metric: Percentage agreeing or strongly agreeing that the design conveys "innovation"
Before: 53%
After: 57%
Improvement Score: 8%
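Both metrics are top-2-box scores: the share of respondents who chose the top two points of an agreement scale. A minimal sketch of that calculation follows, assuming a standard 5-point scale (1 = strongly disagree, 5 = strongly agree); the sample responses are invented.

```python
def top_2_box(responses, scale_max=5):
    """Share of respondents choosing the top two scale points
    (e.g., 'agree' or 'strongly agree' on a 5-point agreement scale)."""
    top = [r for r in responses if r >= scale_max - 1]
    return len(top) / len(responses)


# Invented responses to a statement like: The design conveys "restful sleep"
responses = [5, 4, 3, 4, 2, 5, 4, 3, 5, 4]
print(f"{top_2_box(responses):.0%}")  # 70% with this invented sample
```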

Product & Team


myAir is an online support program and app created for patients using the ResMed Air10™ CPAP
device. CPAP devices are used to treat sleep apnea — a disorder that disrupts breathing during sleep.

ResMed is a San Diego–based company specializing in cloud-connected medical devices to treat respiratory conditions.

Problems & Goals


The myAir app collects data from the patient’s CPAP device while they sleep and then
automatically syncs the data to the cloud after the patient wakes up. The app provides coaching
and feedback on the patient’s progress.

The ResMed team set out to ensure that the visual design of the myAir app was consistent with
the company’s branding and its other digital products.

OAKLEY (LUXOTTICA)
Type: Website
Subject: Ecommerce/Retail
Project size: Small
Report edition: 5th

Summary: A small experiment with promoting sales in a retailer site's megamenu led to significant lifts in four key ecommerce metrics.

METRICS

Methodology: A/B testing

Metric: Revenue per session
Improvement Score: 17%

Metric: Conversion rate
Improvement Score: 11%

Metric: Cart abandonment
Percent Change: -4%

Metric: Average order value
Improvement Score: 5%

Product & Team


Oakley is a California-based retailer of sports equipment, including apparel, backpacks, shoes,
sunglasses, and accessories.

Luxottica Group is an Italian eyewear conglomerate based in Milan and the world’s largest
eyewear company.

Problems & Goals


The Luxottica team wanted to experiment with the most effective ways to promote big sales
events. In the past, they had promoted sales on the Oakley site through homepage hero banners
and promotional strip bars. For this experiment, they decided to try integrating the promotion in
the site’s global navigation megamenus.

“We wanted to provide direct hints of a promotion going on


by adding discount badges directly on the navigation system
within the drop-down menu. On mobile, we will be testing soon
a sort of ‘notification badge’ that leads the user to click on the
hamburger menu and follow the trails to find the promotion.”

Marco Catani, Optimization & UX Research Director at Luxottica



They found that this approach positively impacted four key ecommerce metrics:
• Revenue per session: 17.3%
• Cart abandonment: -3.8%
• Conversion rate: 11.1%
• Average order value: 5.4%

After reaching statistical power in the results, the Luxottica team pushed the badge design out to
all of their users.
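"Reaching statistical power" here means the experiment ran long enough to judge a lift of this size reliably. A common way to check whether a conversion-rate difference is more than noise is a two-proportion z-test; the sketch below uses hypothetical session and order counts, since Luxottica's traffic figures are not in the case study.

```python
from math import erfc, sqrt


def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """z statistic and two-sided p-value for a difference in conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = erfc(abs(z) / sqrt(2))  # two-sided, via the normal distribution
    return z, p_value


# Hypothetical counts: ~11% relative lift on a 3.0% baseline conversion rate.
z, p = two_proportion_z_test(conv_a=1_500, n_a=50_000,   # control: 3.00%
                             conv_b=1_665, n_b=50_000)   # variant: 3.33%
print(f"z = {z:.2f}, p = {p:.4f}")  # with these invented counts, p < 0.01
```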

PETSMART CHARITIES (MARKETADE)


Type: Website
Subject: Nonprofit
Project size: Small
Report edition: 5th

Summary: By removing unnecessary form fields, this nonprofit substantially increased its newsletter signup completion rate.

METRICS

Methodology: A/B testing

Metric: Newsletter signup completion rate
Before: 27%
After: 71%
Improvement Score: 163%

Product & Team


Marketade is a small but long-established and fully remote user-research company with expertise
in turning user insights into better experiences and business results.

PetSmart Charities is an American nonprofit that aims to end pet homelessness.

Problems & Goals


Over a long-term client relationship, Marketade helped PetSmart Charities establish a quarterly
user-research cycle using qualitative usability testing and Google Analytics analysis, focusing on
one feature or area of the site at a time.

One of those tests focused on the newsletter signup process. While several animal-loving
participants said they would want to sign up, they were surprised by the length of the form and
the quantity of the information requested. “It’s asking for my address?” one participant said,
surprised. “Why do they need that?”

The length of the form seemed particularly annoying on mobile devices, where filling in forms is
often more tedious than on larger devices with separate keyboards.

PHILIP MORRIS INTERNATIONAL HR PORTAL


Type: Web app
Subject: Internal
Project size: Large
Report edition: 5th

Summary: By revising an internal tool to prioritize employee tasks, this large global company gained big improvements in findability metrics.

METRICS

Methodology: Tree testing

Metric: Success rate (task: searching for guidelines on employee referral)
Before: 17%
After: 80%
Improvement Score: 371%

Metric: Success rate (task: checking request status)
Before: 24%
After: 71%
Improvement Score: 196%

Metric: Time on task (task: searching for guidelines on employee referral)
Before: 63 seconds
After: 20 seconds
Improvement Score: 215%
Percent Change: -68%

Metric: Time on task (task: checking request status)
Before: 37 seconds
After: 7 seconds
Improvement Score: 429%
Percent Change: -81%

Product & Team


Philip Morris International (PMI) is the largest tobacco company in the world (excluding the
Chinese National Tobacco Corporation). Its products are sold in over 180 countries. Marlboro is
the most recognized and best-selling of the company’s products.

Their human resources portal is an internal tool, which Philip Morris’s employees use on a daily basis.

Problems & Goals


The original HR portal (branded as YourHR) was too focused on the needs of HR specialists and
not focused enough on the needs of the rest of Philip Morris’s employees. This is a common
problem that often occurs with intranets and internal tools.

“We decided that information architecture should be one of


the key challenges that the new portal should address. Our
employees were really struggling to find any information there.
As a result, they were contacting HR teams to help them.
So, we wanted to fix that and make sure that employees could
find information and do simple transactions themselves, instead
of having to call HR.”

Gosia Majka, UX Architect at Philip Morris

In addition, they realized that employees often wanted to access YourHR from their smartphones,
so the team decided to provide mobile support.

Solutions & Results


To inform the large-scale redesign project, the UX team conducted workshops and in-depth
interviews with employees and managers from different countries and departments. They also
conducted card sorting with employees to help shape the new navigation.

The UX team revised the entire portal, including a new and vastly different information
architecture. They redesigned hundreds of pages, and they tested those designs with employees
in different Philip Morris offices around the globe.

“We wanted to provide a unified self-service entry point for employee interactions and extensive
omni-channel capabilities,” said UX Specialist Monika Zielonka.

The tree-testing study included ten of the top employee tasks. For two of the most critical tasks, the tree testing showed that the new design resulted in substantial improvements in efficiency and effectiveness.

When searching for guidelines on employee referral, success rates increased by 371%, and time on task decreased by 68%. When checking the status of a request, success rates increased by 196%, and time on task decreased by 81%.

Task: Searching for guidelines on employee referral

Metric                    Before        After
Success rate              17%           80%
Average time to finish    63 seconds    20 seconds

Task: Checking request status

Metric                    Before        After
Success rate              24%           71%
Average time to finish    37 seconds    7 seconds

RAY-BAN (LUXOTTICA)
Type: Website
Subject: Ecommerce/Retail
Project size: Small
Report edition: 5th

Summary: A small change in an A/B test of the checkout flow of a popular ecommerce site resulted in a slight but definite decrease in conversion rate and revenue per session.

METRICS

Methodology: A/B testing

Metric: Conversion rate
Improvement Score: -3%

Metric: Revenue per session
Improvement Score: -3%

Metric: Cart abandonment rate
Percent Change: 0.8%

Product & Team


Ray-Ban is an American-founded Italian brand of luxury sunglasses and eyeglasses, owned by
Luxottica Group.

Luxottica Group is an Italian eyewear conglomerate based in Milan and the world’s largest
eyewear company.

Problems & Goals


The Luxottica team found a guideline from a UX advice company recommending that third-
party checkout options (for example, checkout with PayPal or Apple Pay) should not be visually
prominent in the checkout flow. The rationale behind the guideline was that some users
misunderstood what those third-party options were and clicked them accidentally.

In Ray-Ban’s checkout flow, the option for PayPal checkout was prominently featured next to the
site’s own CHECKOUT NOW button. The Luxottica team decided to try to follow the guideline and
downplay the PayPal option.

SHOPIFY
Type: Web app
Subject: B2B/Enterprise
Project size: Small
Report edition: 5th

Summary: Some minor changes to a billing page showed qualitative improvements; however, no statistically significant differences were observed in the metrics.

METRICS

Methodology: Quantitative usability testing, surveys

Metric: Time on task
Before: 59 seconds
After: 64 seconds
Improvement Score: -8%
Percent Change: 8%

Metric: Success rate
Before: 90
After: 90
Improvement Score: 0%

Metric: Ease-of-use rating
Before: 6 out of 7
After: 5.8 out of 7
Improvement Score: -3%

Metric: Confidence rating
Before: 6.1 out of 7
After: 6.1 out of 7
Improvement Score: 0%

Product & Team


Shopify is an all-in-one ecommerce platform based in Canada. In 2020, they hosted ecommerce
sites for over one million businesses in over 50 countries globally. The platform’s services include
payments, marketing, shipping, and customer engagement tools for small merchants to simplify
the process of running an online store.

Shopify’s UX team is a thought leader in the industry. Their popular Medium articles cover various
UX topics such as content strategy, research, culture, and leadership. Their branded design
system, Polaris, is often cited in UX circles as a role model for what a design system can and
should be.

At Shopify, UX Research is focused on helping product teams design and build for their users.
Depending on the research questions, Shopify’s UX researchers leverage quantitative or qualitative
research methods to uncover insights about their users. This project was used as a pilot to
demonstrate the value of UX benchmarking and improve performance from a UX perspective.

“At the time we conducted the Billing UX Benchmarking pilot study,


I was mostly focused on qualitative research. The pilot study was
an exciting opportunity for me to explore some of the questions
we had about measuring performance from a UX perspective.
And, in-line with Shopify’s value of building for the long term, it
was important to us to find an approach that could serve as a
foundation for other UX teams at Shopify interested in different
ways to measure the user experience.”

Funbi Makinde, UX Researcher at Shopify

Problems & Goals


Shopify Billing is the section of the client-facing admin tool that enables merchants to:
• Update payment information
• Find information about upcoming Shopify expenses and store credits
• Find historical bills

While reviewing customer support logs, the Billing UX team learned that some merchants were
struggling to filter through and download their historical Shopify bills.

“People care about their money. So, if something’s off, they’re


motivated enough to call in and let us know there’s something
wrong. […] In the redesign, we wanted to make it easier for
merchants to find and download their bills so that they can
focus on other business needs.”

Funbi Makinde, UX Researcher at Shopify



Using surveys and quantitative usability testing, the team collected the following metrics for finding billing information, but they found no statistically significant changes (see the sketch after this list):
• Time on task (8% increase)
• Success rate (0% change)
• Ease-of-use rating (3% decrease)
• Confidence rating (0% change)
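Judging whether a 59-second versus 64-second difference is real requires looking at the spread of the samples, not just the two means. The sketch below shows one common approach (Welch's t statistic, with a rough two-sided p-value from the normal distribution); the sample sizes and standard deviations are hypothetical, since the case study reports only the means.

```python
from math import erfc, sqrt


def welch_t(mean_a, sd_a, n_a, mean_b, sd_b, n_b):
    """Welch's t statistic for two independent samples, with an approximate
    two-sided p-value from the normal distribution (fine for rough checks)."""
    se = sqrt(sd_a**2 / n_a + sd_b**2 / n_b)
    t = (mean_b - mean_a) / se
    p_approx = erfc(abs(t) / sqrt(2))
    return t, p_approx


# Means from the case study; sample sizes and standard deviations are assumed.
t, p = welch_t(mean_a=59, sd_a=35, n_a=30,   # old billing page
               mean_b=64, sd_b=35, n_b=30)   # new billing page
print(f"t = {t:.2f}, approx. p = {p:.2f}")   # ~0.55 and ~0.58: not significant
```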

In addition to gathering quantitative data, the Shopify team also collected qualitative insights
about participants’ interactions with each version of the Billing page. By observing participants’
interactions with both designs and asking followup questions, the team was able to gather rich
qualitative data on UX problems participants encountered and their impact on participants’
ability to complete each task. Those qualitative insights were used by the team in future redesign
projects to improve Shopify’s billing services.

“Some design changes will have a big effect on the metrics you’re
tracking, and some won’t. A quantitative study helps us separate
those concerns and purely judge aspects like performance and
whether it increases or not.”

Funbi Makinde, UX Researcher at Shopify

Within Shopify, this project had an even greater impact on UX research processes: it allowed the team
to create a UX benchmarking toolkit to help other teams at Shopify get started with these techniques.

“Performance is incredibly important to our merchants and their


customers. Being an entrepreneur is already one of the hardest
jobs in the world, and slow loading times and poor workflows
make that even harder. So we’ve baked that need for better
performance into our studies.
We need to know, How are our users completing their tasks on
Shopify? Are they doing them faster? Are they getting stuck?
How is this changing over time? And, just as important, for all
the numbers we collect, we need to know why. It’s a beautiful
mix of quantitative and qualitative research.”

Funbi Makinde, UX Researcher at Shopify



STARBUCKS COLLEGE ACHIEVEMENT PLAN (ARIZONA STATE UNIVERSITY)

Type: Website
Subject: Education/Nonprofit
Project size: Medium
Report edition: 5th

Summary: A refresh of the visual design of this corporate scholarship site coincided with a 78% increase in traffic.

METRICS

Methodology: Analytics

Metric: Organic traffic
Improvement Score: 78%

Metric: Request for information
Improvement Score: -65%

Metric: All conversion rate (request for information and Apply Now clicks)
Improvement Score: 0%

Product & Team


The Starbucks College Achievement Plan (SCAP) is a first-of-its-kind partnership that creates an
opportunity for all US benefits-eligible Starbucks partners (employees) to earn their bachelor’s
degree online at Arizona State University with 100% tuition coverage. SCAP currently offers
partners access to more than 100 undergraduate degree programs through ASU Online.

EdPlus is a central enterprise unit for ASU, focused on the design and scalable delivery of digital
teaching and learning models to increase student success and reduce barriers to achievement in
higher education.

Problems & Goals


The Starbucks College Achievement Plan site was already the highest-converting product in
ASU’s EdPlus program, with an organic conversion rate of 20% and a bounce rate of just 33%.
The EdPlus team knew that the incoming traffic to this particular site was already primed to
convert. But from qualitative research, they also knew that Starbucks students wanted to get the
information more quickly and without any marketing fluff in the content. The EdPlus team set out
to improve the experience on this already high-performing site.

Solutions & Results


The EdPlus team found that their content-heavy pages sometimes overwhelmed users, so they
focused on reducing any unnecessary content on the site and adding more imagery to support
the message. They also chose to add student testimonials, as well as more details about the
degrees that Starbucks students could pursue.

They added a frequently asked questions page to help students get to the answers they wanted
quickly. Finally, they added an Apply Now button in addition to the previous Request for
Information call-to-action.

Year over year, the EdPlus team observed a 78% increase in organic traffic — though due to the
nature of this metric, that could also be impacted by variables other than design (marketing
campaigns, SEO changes, etc.)

They did observe a decrease in the conversion rate of the request-for-information call-to-action (a
65% decrease), but those conversions have simply moved to the new call-to-action, Apply Now. In
this case, a decrease in a positive metric is actually a good result.

SYNETO CENTRAL
Type: Web app
Subject: Enterprise
Project size: Small
Report edition: 5th

Summary: Redesigning a critical task in Syneto CENTRAL's complex cloud services platform resulted in a substantial reduction of time-on-task.

METRICS

Methodology: Quantitative usability testing

Metric: Time on task for creating a new location
Improvement Score: 2150%
Percent Change: -96%

Product & Team


Syneto is a European data platform that enables computing, storage, and networking. Syneto
CENTRAL is a cloud services platform.

The design team at Syneto aims to include customer feedback as much as possible, since that feedback helps them simplify a very complex product.

Problems & Goals


Syneto CENTRAL allows customers to view “locations” — an aggregate representation of all of the
devices from a specific company in a specific location. For example, a company might have three
locations: New York, London, and Paris. Within each location view, users can see the collected
data for that location.

A critical task in this application was creating a new location. Through user interviews, the Syneto
design team identified that the workflow for that task should be improved and simplified.

Solutions & Results


After revising the design for creating a location, the team was able to reduce the average time on
task from 45 seconds to 1.6 seconds — a 96% decrease.

Unfortunately, the Syneto team was not able to provide screenshots of the design changes.

THE DEAL (EXPANDTHEROOM)


Type: Website
Subject: Finance/News
Project size: Large
Report edition: 5th

Summary: The Deal increased trial requests after removing some of its content from behind a paywall, making the trial program more visible and simplifying the trial request form.

METRICS

Methodology: Analytics

Metric: Conversions (free trial request form submissions)
Improvement Score: 146%

Metric: Visitors from organic search
Improvement Score: 60%

Metric: New visitors
Improvement Score: 45%

Product & Team


ExpandTheRoom (ETR)11 is a data-driven design agency working primarily with marketing and
IT product teams. Utilizing its customer-centric approach, “Purpose-Driven Design,” ETR creates
websites, custom productivity tools, applications, and interactive experiences. ETR’s team is fully
distributed across the United States.

The Deal offers insider news on business information like mergers, acquisitions, and investment
strategies. Some content is offered for free, but premium content is behind a paywall.

Problems & Goals


ETR set out to perform a full redesign of The Deal’s marketing website, which advertised the gated
content. Their redesign included the visual design, content, and information architecture.

At the beginning of the project, ETR worked with The Deal’s team to define key metrics to
improve. Two of the high-priority metrics they identified were conversions (free trial request form
submissions) and visitors coming from organic search. After aligning, ETR identified problems
related to those metrics.

11. www.expandtheroom.com

First, one obstacle to increasing trial request submissions was that the trial request call-to-action
was buried far down on each page, showing up just above the site footer. Another problem was
that the trial request form was too long — it contained 16 required fields.

Second, ETR realized that the site did not adhere to SEO best practices, which reduced visitors
coming in from organic search.

TOP 10 ONLINE CASINOER


Type: Website
Subject: Entertainment
Project size: Small
Report edition: 5th

Summary: A handful of small design changes increased clickthrough and conversion rates for
this site that compares different Danish gambling websites.

METRICS

Methodology:
Analytics, A/B testing

Metric: Clickthrough rate to affiliated gambling sites
Improvement Score: 140%

Metric: Conversion rate, clicking the call-to-action to visit or learn more about a site
After: Increased by 18 percentage points

Metric: Conversion rate, finishing registration and playing at affiliated gambling site
After: Increased by 20 percentage points

Product & Team


Top 10 Online Casinoer (top10onlinecasinoer.com) is a Danish online gambling comparison site.
Top 10 Online Casinoer is owned by Ante Technologies, a global marketing company.

Problems & Goals


The site’s main purpose was to quickly generate clicks to the online gambling sites listed. The
team’s main goal was to increase the clickthrough rate.

The team had previously observed various small usability problems in a competitive analysis
study, and this helped them generate new design ideas.
Solutions & Results

These small design changes yielded substantial metric improvements:


• Clickthrough rate to online gambling sites increased by 140%
• Conversion rate (finishing registration and starting to play at a gambling site) increased by
20 percentage points
• Conversion rate (clicking the call-to-action button to visit a site or learn more about it)
increased by 18 percentage points

USER INTERVIEWS
Type: Website
Subject: B2B
Project size: Small
Report edition: 5th

Summary: A small change in visual design yielded a big increase in account creation on User
Interviews' marketplace site.

METRICS

Methodology:
A/B testing

Metric: Account creation


Before: 1.28%
After: 3.94%
Improvement Score: 208%

Product & Team


User Interviews is an online recruitment platform where UX researchers can find, recruit, and
schedule participants for their studies.

The design team at User Interviews knows that (as a user research recruitment platform) they
have to practice what they preach. They believe that consistent user feedback is key to success,
for their product as well as their customers’ products.

Problems & Goals


User Interviews realized they were suffering from a problem common to marketplace sites. They
had two primary user groups:
• researchers, who joined User Interviews to recruit; and
• participants, who joined User Interviews to participate in studies.

These two different groups had different types of accounts, but some users were getting confused
and signing up for the wrong account type. In particular, they often found that participants would
accidentally register as a researcher.

WORDFINDER (LOVETOKNOW MEDIA)


Type: Website
Subject: Entertainment
Project size: Small
Report edition: 5th

Summary: A small change to this entertainment utility site resulted in a slight increase in
returning users.

METRICS

Methodology:
A/B testing

Metric: Average session duration
Improvement Score: 4%
Percent Change: -4%

Metric: Returning visitors
Improvement Score: 8%

Metric: Sessions per user
Improvement Score: -5%

Metric: Pages per session
Improvement Score: 3%

Metric: Bounce rate
Improvement Score: -3%
Percent Change: 3%

Product & Team


WordFinder is a reference tool for Scrabble players. Users can enter the letters they have, and
WordFinder will return Scrabble words that could be made from those letters. WordFinder is
owned by LoveToKnow Media. LoveToKnow is an American digital media company based in
Silicon Valley.

Problems & Goals



The WordFinder site's revenue comes entirely from advertisements. To be profitable, the site must
be easy to use so that users want to return.

However, they began testing the designs right as the COVID-19 pandemic began to hit Europe
and the US. They acknowledge that this event certainly impacted their findings, as user behaviors
began to shift in response to lockdowns.

The team did see a slight decrease in average session duration by 4%. They also observed a slight
increase in returning visitors by 8%.
Case Studies — 4th Edition

HARRISBURG AREA COMMUNITY COLLEGE


Type: Website
Subject: Education
Report edition: 4th

METRICS

Methodology:
Analytics

Metric: Visits
Before: 770,921
After: 1,058,906
Improvement Score: 37%

Product & Team


Harrisburg Area Community College (HACC) serves more than 20,000 students across multiple
locations in central Pennsylvania, as well as through online courses.

Problems & Goals


The school needed to promote new features and functionality (such as a student portal, college
events calendar, and email system). The original design had been created by a graphic artist who
worked with the public relations department. Thus it had a heavy public-relations influence.

Solutions & Results


The website was redesigned to prominently display links to important new features while
preserving as much as possible of the old layout. The teams studied Google Analytics traffic (to
identify underutilized features) and collected informal user feedback to prioritize placement and
prominence of new elements.

Students in the cafeteria participated in card sorting activities to help group the large number of
links in the menu in the “Student Services” section.

By creating an easy-to-find utility area in the top right corner of the homepage with links to
essential student resources such as the “MyHACC” student portal and the “HAWKMail” email
system, the homepage became a much more useful destination for students, which contributed
to both the 37% increase in traffic to the website, and the 106% increase in traffic to the
HAWKMail pages.

In the old design, the “Course Schedules” page (which allowed students to search for classes)
was one of the top ten most-visited pages. The new design made this common task more efficient
by adding a specific “Search Class Schedules” module on the right side of the homepage, which
reduced the number of visits to the “Course Schedules” page by 46%.

Other design changes reduced the amount of clutter on the page and provided direct access
to rich content, which contributed to the increase in page views. For example, the old design
featured six small thumbnail images in the header, which weren’t large enough to see easily. The
new design eliminates these small header images and instead displays one large featured image
in the main page content. Other small changes, such as consolidating the color scheme into
fewer colors and using color blocks to divide different elements in the right sidebar, help make
the page easier to scan and understand quickly.

To allow for differences in the seasonal traffic of a school, the first six weeks of the fall quarter
before the redesign are compared with the first six weeks of the spring quarter after the redesign.

The new design greatly increased total visits, page views, and unique visitors. Enrollment
increased slightly between the two measurement periods, but that increase was not enough to
explain the change in web traffic.

Measurement Before After


Total Visits 770,921 1,058,906
Unique Visitors 212,489 272,223
Page views 2,440,035 3,180,935
HAWKMail Page views 59,276 124,227
Course Schedule Page views 92,335 49,696

UNIVERSITY OF EDINBURGH
Type: Web app
Subject: Internal/Content Management System
Report edition: 4th

METRICS

Methodology:
Quantitative usability testing

Metric: Time on task


Improvement Score: 33%

Product & Team


The University of Edinburgh offers an online content management system interface that allows
content contributors to build web pages. The team in charge of this tool had been primarily
focused on adding functionality, but following a survey of the user group and feedback from
those responsible for training content contributors, the team decided to address the usability of
certain key existing features.

Problems & Goals


The original interface for building web pages was very basic, with little visual distinction between
the content and the page background and a repetitive, cumbersome workflow. For example,
contributors could only add new content elements at the very top or very bottom of a page. To
position a new element in the center, they had to move it above or below nearby elements
one step at a time using “Up” or “Down” buttons. Since a typical webpage consists of 10 to 30
elements, this process quickly became onerous.

The goals of this redesign were to:


• Reduce the number of clicks required to:
• insert an element midway through a page
• reorganize the layout of a page
• Reduce the time required to build or re-edit a page
• Improve the scannability of content in a page
• Reduce the risk of accidental deletions when elements are being reorganized

Solutions & Results


The new interface allows contributors to more easily insert elements at a particular location and
move elements around on the page.

All the content elements on a page are identified by a number, and when creating a new
element, users can insert it before any specific numbered element. Also, an existing element
can be moved in just two steps by selecting its checkbox and clicking the “Move Here” buttons
in the desired destination.

Another benefit of the new design is that it places the “Move here” and element numbers on the
left side of the page, isolating the “Remove” button on the far right. This reduces the risk of an
element being removed accidentally.

To evaluate the effects of the design changes, a small group of experienced users attempted a
series of representative tasks using the original interface. About six weeks later, they undertook
the same tasks using the new interface.

In both studies, participants used a provided timer to time themselves completing tasks.

Task                             Average Time Before   Average Time After   Ratio   Improvement
Quote/Feature box                140.0                 91.0                 154%    54%
Paragraph/Bullets                121.5                 92.2                 132%    32%
Insert include article element   78.25                 61.0                 128%    28%
Create new article               804.75                679.40               118%    18%
Overall                                                                     133%    33%
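Because time on task is a smaller-is-better metric, the Ratio column above divides the old average by the new one, and the Improvement column is that ratio minus 100%. The minimal Python sketch below reproduces that arithmetic for the Quote/Feature box row; it is illustrative only, not the team's actual analysis.

```python
def time_ratio(before: float, after: float) -> float:
    """Ratio of old time to new time; values above 100% mean the new design is faster."""
    return before / after * 100

def improvement(before: float, after: float) -> float:
    """Improvement score for a smaller-is-better metric: the ratio minus 100%."""
    return time_ratio(before, after) - 100

# Quote/Feature box task: 140.0 seconds before, 91.0 seconds after
print(f"ratio = {time_ratio(140.0, 91.0):.0f}%")          # -> 154%
print(f"improvement = {improvement(140.0, 91.0):.0f}%")   # -> 54%
```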

Since these testers had only just begun to use the new interface, their performance will likely
continue to improve, further increasing the average time savings.

For the evaluation purposes, participants received prepared content to use for the “Create new
article” task. In reality, however, many contributors do not organize their content in advance and
instead tweak the content once it’s in the content management system. Therefore, actual time
saved on producing new web pages is likely to be even higher than reported here.

These changes are estimated to save between 3,688 and 4,610 hours annually. Put in monetary
terms, the estimated annual savings (based on a clerical grade hourly rate of £13.50) is between
£49,788 and £62,235 (or US$80,228 to US$100,285).
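The monetary estimate is straightforward arithmetic: multiply the estimated hours saved by the clerical hourly rate. The sketch below computes only the sterling amounts; the US-dollar figures in the report reflect the exchange rate at the time of the original study, so they are not recomputed here.

```python
# Estimated annual time savings from the redesign (hours)
hours_saved_low, hours_saved_high = 3_688, 4_610
hourly_rate_gbp = 13.50  # clerical-grade hourly rate

low = hours_saved_low * hourly_rate_gbp    # 49,788.0
high = hours_saved_high * hourly_rate_gbp  # 62,235.0
print(f"Estimated annual savings: £{low:,.0f} to £{high:,.0f}")
```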
Case Studies — 3rd Edition

ADOBE KULER (KULER.ADOBE.COM)


Type: Web app
Subject: Design
Report edition: 3rd

METRICS

Methodology:
Analytics

Metric: Average comments per day


Before: 6
After: 37
Improvement Score: 617%

Product & Team


Adobe kuler (now Adobe Color) is a Web application in which users create, share, rate, and
discuss individually designed color themes that can be exported for use in projects and
applications, such as web design, graphic design, interior design, or arts and crafts.

Problems & Goals


Although users engaged with most features on the site, very few were using the commenting
feature. The site was a highly stylized Flash design employing some non-intuitive conventions,
such as depicting the number of comments as a series of blocks below the comment area (when
users clicked a block, the comment was displayed). Only one comment was displayed at a time.
The button for posting a new comment was small and not clearly labeled.

Solutions & Results


A common-sense approach prevailed in redesigning the feature to more familiar parameters:

The button for posting a comment was enlarged and moved to a prominent position.

The original comments button had only a “plus” sign to indicate its function. The new version
states, “Add a comment” in large, easily readable type.

CAPITAL ONE
Type: Website
Subject: Finance
Report edition: 3rd

METRICS

Methodology:
Surveys

Metric: Satisfaction
Improvement Score: 24%

Product & Team


Among its various financial businesses, Capital One has more than 10,000 employees who use its
“My One Place” intranet portal. Its highly functional portal design won it a place on Nielsen Norman
Group’s Ten Best Intranets of 2006. This case study was included in the report in the 3rd edition.

Problems & Goals


An ongoing project to improve the usability of the “My One Place” portal took user satisfaction
from 49% to 71% from September 2004 through May 2005. The goal was to lift that figure to 80%
by the end of 2005.

A survey asked users to identify features they would like to see added to My One Place. At the
top of the list was an automatic login function. Users were required to log in every time they
entered the portal, including if they opened a new browser window or a hosted application.
The obstacle was highlighted by the fact that intranets run by Capital One subsidiaries did not
require repeated verification.

Solutions & Results


The design team implemented “Speedpass,” an automatic login function. Users are only required
to log in once every 30 days on the same computer, regardless of browser sessions. Users can
also opt to have a browser automatically open and log in to My One Place as soon as they boot
the computer. Speedpass also covers single-sign-on applications within the intranet, eliminating
another layer of authentication.

The major interface change involved in implementing Speedpass is the addition of checkboxes for
“Remember me” and “Auto launch my browser,” enabling users to control their login experience.

DIRECT MARKETING ASSOCIATION


Type: Website
Subject: Trade group
Report edition: 3rd

METRICS

Methodology:
Analytics

Metric: Registrations
Before: 1.6%
After: 4.1%
Improvement Score: 254%

Product & Team


The Direct Marketing Association (now called Data & Marketing Association) is a trade group with
more than 3,600 members among business and nonprofit organizations that use direct marketing
techniques. Its website, the-dma.org, offers resume posting and personalization features for both
members and nonmembers.

Problems & Goals


The design team set out to test the effectiveness of different methods of encouraging visitors to
sign up for a new account. The site’s default call to action was a text link underneath the sign-in
box on the home page, stating “Get A Free Web Account.”

Solutions & Results


Designers tested a call-to-action ad, placed in the center of the page, in addition to the existing
text link. The ad described some of the benefits of membership and featured a large color button
labeled “Subscribe Now.”

The combination of the text link and the call-to-action ad generated significantly more clicks than
the text link alone — more than two and a half times as many. The call-to-action also performed
better on its own than the text link by itself.

The text link, served on a page with no call-to-action ad, had a clickthrough of about 1.62%. The
call-to-action ad itself had a clickthrough of 2.11%. Additionally, the ad’s presence on the page
correlated to an increase in clickthrough for the original text ad — lifting it to 2.01%.
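The "two and a half times" claim follows directly from those clickthrough rates: with the ad present, the ad and the text link together captured roughly 4.1% of visitors, versus 1.62% for the text link alone. A quick arithmetic sketch, assuming (as the figures above imply) that the two clickthrough rates are additive:

```python
text_link_alone = 1.62     # % clickthrough when no ad is on the page
ad = 2.11                  # % clickthrough of the call-to-action ad
text_link_with_ad = 2.01   # % clickthrough of the text link when the ad is present

combined = ad + text_link_with_ad  # share of visitors who clicked through with both present
print(f"combined = {combined:.2f}%")                               # -> 4.12%
print(f"vs. text link alone: {combined / text_link_alone:.2f}x")   # -> ~2.54x
```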

EUROSTAR (ETRE)
Type: Website
Subject: Transportation
Report edition: 3rd

METRICS

Methodology:
Financial (sales) data

Metric: Online sales


Before: £110 million/year
After: £136 million/year
Improvement Score: 124%

Product & Team


Eurostar is the high-speed train service that connects the United Kingdom with mainland Europe
and has been named “World’s Leading Rail Service” at the World Travel Awards every year since
1998. Its website, Eurostar.com, allows users to book trains, accommodation, and rental cars and
to obtain information about every aspect of the travel experience, from purchasing tickets to
boarding trains to sightseeing at the destinations the company serves.

Problems & Goals


In September 2005, Eurostar commissioned user-experience specialists Etre (www.etre.com)
to help redevelop its global web presence. The main objectives were to make significant
improvements to the usability and information architecture of the company's family of websites
(spread across several different countries and languages); to introduce a host of new travel-
booking features; to incorporate a new global brand identity; and to provide a market-leading
online experience for its customers — all within a six-month time frame.

To achieve these aims, Etre delivered an iterative user-centered design program comprising
three usability tests — the first of which identified more than 100 usability issues present on
Eurostar.com. Using this information as an input, Eurostar’s design team developed wireframes,
process flows, and subsequently a barebones HTML-design prototype, which was subjected to
a second round of testing. This time, 70 usability issues were identified. The designers used this
feedback to create a new “hi-fidelity” prototype, featuring near-final visual designs and HTML.
This prototype also underwent testing. Findings and recommendations arising from this third
study were then used to create the final version of the website’s design. Also incorporated was
feedback from several other user-experience-related activities, including card sorting and user
surveys (which aimed to address IA and labeling issues) and usability inspections (which were
used to evaluate areas of the site that could not be included in the user testing due to project
time constraints).

Solutions & Results


The new design sought to address several specific design issues identified during the
aforementioned user experience activities:

Error messages. Unspecific and unhelpful error messages were to blame for the majority of
problems that users experienced during the testing of Eurostar’s old website. For instance, when
desired train fares were unavailable, the site failed to recommend alternative choices, leaving users
at a dead end. And when users’ sessions timed out, error messages began stacking on top of each
other, eventually disabling the browser’s Back button and requiring them to close the browser
window and start over. Unfortunately, a number of technical issues have prevented Eurostar from
addressing these problems as thoroughly as it would have liked. However, the team firmly believes
that the improvements made to date are the main driver of the subsequent ROI improvements.

Confusing language. Product names and acronyms that were fairly transparent in one language
were completely opaque in another. Other labels were simply confusing or inconsistent, and the
site sometimes changed language unpredictably as users were navigating it. Card sorting helped
identify structural issues, while nomenclature surveys helped identify issues with the terminology
used to describe products, services, and navigational elements.

Confirmation pages. In the old site, confirmation pages failed to inform users that they had
successfully completed processes like account registration. These pages were subsequently
redesigned to eliminate confusion.

User accounts. The old website let users create two different types of account — a standard
website account and a frequent traveler account. Both were managed and maintained in separate
areas of the site and required users to complete different registration processes. This “branching”
created much confusion and, during testing, contributed to a failure rate of nearly 70% among
users who attempted to register to use the site. The two account types were merged into a single
account (i.e., a standard website account that could be extended to encompass frequent traveler
functionality as needed), which reduced the complexity of the overall site significantly.

HTML issues. The old version of Eurostar.com was plagued by technical issues. Indeed, a serious
level of degradation was evident when using the site with any browser/operating system other
than Internet Explorer on a PC. For example, completing various booking transactions in browsers
like Firefox, Safari, and Opera was nearly always problematic and sometimes even impossible
— in the majority of cases, client-side page interactions were erratic, and the overall design
aesthetic was significantly compromised. The redesign thus focused on redeveloping the site
in accordance with W3C and related web standards. Given the number of pages and the overall

HEALTH CARE WITHOUT HARM


Type: Website
Subject: Healthcare
Report edition: 3rd

METRICS

Methodology:
Analytics

Metric: Reduce exits from issues pages


Before: 70% remained on site
After: 97.5% remained on site
Improvement Score: 139%

Product & Team


Health Care Without Harm (http://noharm.org/europe) is the website for a coalition of health care
providers and related organizations. Its goal is promoting safer products and practices. The site
features 10 issue areas, including medical waste, food and building contaminants, and green practices.

Each issue area incorporates a text overview and several subpages containing articles and other
resources. The main page is a short article about the subject and includes a sidebar with links to
resources. In some cases, links are also embedded in the overview’s text.

Problems & Goals


Visitors often exited the site from the main issue page without clicking through to the articles
and resources. This resulted in lower depth of visits and, of course, fewer people accessing the
resources the site offers. The objective, therefore, was to improve the stickiness of the issue page
and encourage people to access the section’s more targeted content.

Solutions & Results


As a trial, prior to a total site redesign, the designer replaced the text overview format with an
abbreviated text description and simple contents page that starts over the fold. The overview was
moved to an inside page, which was linked from the table of contents.

The design change was implemented on a randomly selected issue page, while the remainder of the
site kept the old design. Note that the comparisons provided are between the new design and the
old design on comparable-traffic pages during the same period, rather than period over period.

MEDIA NEWS GROUP INTERACTIVE


Type: Website
Subject: News
Report edition: 3rd

METRICS

Methodology:
Analytics

Metric: Page views


Improvement Score: 106%

Product & Team


Media News Group’s Interactive subsidiary provides the online component for about 80
newspapers within the chain, including the Denver Post, San Jose Mercury News, the LA Daily
News, El Paso Times, and others. Although each site has an individualized design, the companies
share some key features.

Problems & Goals


A major goal for news sites is always to increase page views and get readers more invested in
the site. As opposed to a straight ecommerce storefront or business promotion, newspaper sites
are in the business of building a destination and creating the same sense of continuous and
comprehensive product that a newspaper delivers. Given the wide variety of online news sources,
this is an especially important challenge for local newspapers.

Getting people to the site is a matter of marketing and news. The Media News Group design team
wanted to look at ways to keep people on the site, with an eye toward increasing pages views
and branding the newspaper site as a destination for news.

Solutions & Results


The designers decided to add a module on most pages that displayed the most emailed and most
viewed stories at any given moment. The widget features a two-tab layout, with “most viewed”
on top and “most emailed” beneath. Each tab displays links to the top stories in its respective
category. The number of stories displayed varied by site.

Stats were provided for three newspaper sites — dailynews.com, twincities.com, and
mercurynews.com. As its target metric, the team focused on “residual page views,” defined as the
number of page views by a visitor after they viewed a news story page, showing how the module

MICROSOFT OFFICE HELP PAGES


Type: Website
Subject: Productivity
Report edition: 3rd

METRICS

Methodology:
Surveys


Product: Office Online Article Ratings
Metric: Rating response

Variant A: 100% (baseline)
Variant B: 220%
Variant C: 795%
Difference, A to C: 695%

Product & Team


To assist users of Microsoft Office, the Office Online website (http://office.microsoft.com) provides
a search entry point for help queries. In addition, recent versions of Office (e.g., Office 2003 and
2007) provide a "better-when-connected" experience, where help queries from the Office client
applications (e.g., Word, Excel, PowerPoint) can be answered by the Office Online service so that
users can get up-to-date help articles and so that editors of these articles can get feedback and
improve them or add new ones. This case study was included in the report in the 3rd edition.

Problems & Goals


Users are asked to rate the articles, and the team experimented with several alternative rating
widgets, ranging from a five-star system to yes/no/I-don't-know buttons. A text box for free-form
input was also available with a submit button, but the timing of its appearance varied, as described below.

Solutions & Results


Three variants of the feedback form were tested.

Variant A showed an unlabeled five-star rating system and a text box labeled “Tell us why you
rated the content this way (optional).” The free-form text box appeared below the five-star rating
as shown in the figure below.

Variant B presented visitors with a five-star rating option labeled from “Not Helpful” (one star) to
“Very Helpful” (five stars). When a visitor clicked on a rating, a text box was then served asking,
“Why did you rate the information this way?”

Variant C showed yes/no/don’t know buttons, but added three customized text box responses
served when the user clicked on one of the ratings. Each text box was tailored to the response —
“How was this information helpful?” “How can we make this information more helpful?” and “What
are you trying to do?”

The third variant significantly outperformed the others.

Because of Microsoft’s extremely high traffic, it’s possible to make some very credible inferences
about how the layout of the feedback function influences response rates. Three variants of the
feedback form were tested, and each was viewed more than a million times. Working from an
arbitrary baseline value of one for Variant A (the actual response rates were normalized for
confidentiality), the success rates for the three approaches compare as follows:

Variant A: 1.00 (baseline)
Variant B: 2.21
Variant C: 7.95
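Normalizing to a baseline simply means dividing each variant's raw response rate by Variant A's rate, so relative performance can be compared without revealing the confidential absolute numbers. A minimal sketch with made-up raw rates (the real Microsoft figures were not disclosed):

```python
# Hypothetical raw response rates (fraction of page views that produced a rating);
# the actual Microsoft figures were kept confidential.
raw_rates = {"A": 0.010, "B": 0.0221, "C": 0.0795}

baseline = raw_rates["A"]
normalized = {variant: round(rate / baseline, 2) for variant, rate in raw_rates.items()}
print(normalized)  # -> {'A': 1.0, 'B': 2.21, 'C': 7.95}
```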

The clear message here is that for increasing response rates, simplicity makes a big difference.

Variant A allowed visitors to rate the page from one to five stars, with an optional text box for
comments. Variant B was more than twice as successful. The major difference between the two
approaches is the removal of the text box, which in B is displayed only after a rating has already
been selected. B also more clearly explains the rating system.

Even though the text box is clearly labeled “optional” in A, its very presence appears to increase
the psychological investment required for a visitor to click a rating. Furthermore, the presence of a
“submit” button confuses the layout since it’s not clear that the button only applies to the text box.
Rating data is collected as soon as a user clicks a star.

Variant C offers just three text-based responses and only serves a text-entry box after the click.
The numbers here couldn’t be clearer — yes/no/don’t know vastly outperforms both of the five-
star systems.

NORTH CAROLINA STATE UNIVERSITY


Type: Website
Subject: Education
Report edition: 3rd

METRICS

Methodology:
Quantitative usability testing

Metric: Task success (A vs. Before)
Improvement Score: 15%

Metric: Time on task (A vs. Before)
Improvement Score: 44%

Metric: Task success (B vs. Before)
Improvement Score: 68%

Metric: Time on task (B vs. Before)
Improvement Score: 55%

Product & Team


The North Carolina State University library system offers various online tools to search the
archives and find journal references. Users were given the choice between searching academic
journals and narrowing the search to specialized databases, organized by a wide variety of
categories and criteria, including source, general type of publication, citation information,
and topic. The university created a usability task force to evaluate the site, test alternative
configurations, and recommend solutions.

Problems & Goals


Under the original design, users were taking too long to complete searches. User testing showed
users completing assigned article-finding tasks on the site barely half of the time. The testing
identified several specific usability problems.

The site’s “find articles” page contained a search tool and links to additional search tools. Users
gravitated to the search tool on the search page, in part due to predictable inertia and the
magnetic attraction of any type-in field, and in part because the descriptive language for the
additional tools was not sufficiently clear.

Search forms included drop-down menus — sometimes multiple drop-downs — that further
refined which search tool was being employed. Users selected from the drop-down lists at
random, usually resulting in a critical obstacle to task completion.

Tool labeling did not correspond to the usefulness of the tool. As a result, users were inclined to
select a less useful tool when a better one was available.

The language describing the types of searches offered was not sufficiently descriptive.

Solutions & Results


User testing evaluated two different approaches to the collection of search tools. Users were
evaluated on average task time in seconds and task completion (one for a completed task and
zero for a failed task).

Design          Average task time (seconds)   Average task success
Original site   339                            0.53
Model A         236                            0.61
Model B         219                            0.89
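With task completion coded as 1 or 0, "average task success" is simply the completion rate across participants, and the improvement scores in the metrics block are the relative gains over the original site. The short sketch below reproduces those figures from the table above; it is illustrative only, not the task force's actual analysis.

```python
def relative_change(before: float, after: float, smaller_is_better: bool = False) -> float:
    """Percent improvement of 'after' over 'before', respecting the metric's direction."""
    ratio = before / after if smaller_is_better else after / before
    return (ratio - 1) * 100

# Averages from the table above
original = {"time": 339, "success": 0.53}
model_a = {"time": 236, "success": 0.61}
model_b = {"time": 219, "success": 0.89}

print(round(relative_change(original["success"], model_a["success"])))                      # -> 15
print(round(relative_change(original["time"], model_a["time"], smaller_is_better=True)))    # -> 44
print(round(relative_change(original["success"], model_b["success"])))                      # -> 68
print(round(relative_change(original["time"], model_b["time"], smaller_is_better=True)))    # -> 55
```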

In this project, the same version (Model B) was superior on both of the measured usability
attributes. Thus, B is clearly better than A. Not all studies have this simple outcome. Sometimes
you will find that one design wins on one metric whereas another design wins on another metric.
In this case, you have several options: The optimal approach is often to produce yet another
design, taking the best aspects of both contenders. If you don’t have time for an additional
iteration, you might decide that one of the metrics trumps the other (for example, sales may be
more important than anything else for an ecommerce site). Alternatively, you sometimes find that
one design was a huge win on one metric, whereas the other design was marginally better on the
other metric. In that case, you would pick the first design and suffer a small degradation on the
second metric in order to gain the big improvement on the first metric.

Model A included more direct access to the search tools offered by the site, with a front page
divided into two different approaches, including direct access to the “Citation Linker” tool on the
right hand side of the page. The latter tool was particularly problematic in the usability testing —
users had a tendency to indiscriminately enter search terms into the highly specific fields, often
searching for a nonfunctional term and searching into a database with a very limited scope
(academic journals). This tactic frequently resulted in a failed task.

Model B significantly outperformed its competitors in both time and rate of completion. Using
simple text links, the navigation steered users based on the type of information they wanted
to find. The search tools themselves were located on inside pages — users had to make
determinations based on the content they sought before getting access to a tool.

In most of the other designs this project team examined, success generally corresponded to
reducing the number of clicks to a goal. In this case, the opposite dynamic applied.

The library system offers several different search appliances, the parameters of which are often
dictated by outside vendors. Testing found that users had a strong tendency to use the first tool
they were presented with — whether or not it matched the data set they were supposed to be
searching. Because of that factor, whatever disadvantage the extra clicks created was outweighed
by the advantage of preventing errors.

However, the final design did not entirely reflect the usability results. (The final design was
actually a redesign of the usability study’s recommendation.) Although the redesign did add text
guidance to steer visitors to the correct tool, it continued to include the Citation Linker on the
front page.

In part, the decision to keep Citation Linker on the Find Articles page was motivated by testing
results that found reduced success rates for a couple of very specific tasks. However, the
placement of the tool continues to result in error responses.

Despite the retention of the Citation Linker, the new design did adopt other strategies that
reflected usability concerns.

For instance, rather than simply pointing users toward “Google Scholar,” the front page provides
a link to the tool but also describes the type of content that the search will provide — “scholarly
articles, conference papers, technical reports, books” — bringing more useful content to the surface.

Additionally, a small box in the lower right provides tips and links to more detailed instructions
on how to use the system. A dropdown at the bottom of the page offers a selection of more
specialized databases.

SCANDINAVIAN AIRLINES
Type: Website
Subject: Air travel
Report edition: 3rd

METRICS

Methodology:
Surveys

Metric: Clickthrough rate


Improvement Score: 1406%

Product & Team


Like most air carriers, Scandinavian Airlines books much of its travel online, through its website at
flysas.com. After booking, the final confirmation page includes detailed information on times and
flight numbers.

Problems & Goals


As part of the confirmation page, the airline wanted to provide users with an informational page
designed to make travels go smoother, including information on baggage, security, and similar
tips. Users are alerted to the page by a banner button, which says, “Get a good start to your trip.”
Since the airline has a vested interest in savvy, prepared travelers (who require less customer
service), the designers wanted to encourage more customers to check out the tips page after
reviewing their details.

Solutions & Results


In the original design, the banner was placed on the top right corner of the confirmation page,
well above the fold. The confirmation details were fairly extensive and important to users, so by
the time they finished reviewing the information, the banner had disappeared off the top of the
page. The designers moved the button to the bottom right corner of the page.

Clickthroughs soared by more than 1300% after the change. The new button offered two
major advantages.

First, it caught customers at the bottom of the page when they were preparing to leave. This
approach doesn’t always fit a website’s profile, but it’s especially effective here since the visitor is
highly motivated to read the entire page.

SHELTER.ORG.UK (ENGLAND AND SCOTLAND)


Type: Website
Subject: Nonprofit
Report edition: 3rd

METRICS

Methodology:
Surveys

Metric: Feedback from survey page


Before: 27/month
After: 406/month
Improvement Score: 1403%

Product & Team


Shelter is a housing advocacy organization founded in 1968. Its website, Shelter.org.uk, serves
people with housing problems, ranging from financing and repairs to neighborhood issues and
homelessness. The site offers a variety of informational resources and online tools for people with
any sort of housing issue and also provides an online venue for fundraising.

Problems & Goals


The site solicits visitors to fill out a detailed survey form, which provides Shelter with information
about housing issues as well as collecting feedback about the website design. Navigation to the
survey page consisted of a graphical banner link on the right side of each content page. Several
banners of varying depths are featured in the right column.

Solutions & Results


The site designers moved the survey solicitation banner from the right side of the page to the
bottom of the content section. Content field depths vary, generally running between 1,000 and
2,000 pixels.

By placing the banner at the bottom of the page, the site solicits visitors to respond at the optimum
moment — when they have finished reading the page, a natural break point. Additionally, the new
banner (365 × 67 pixels) was more than twice as wide as the right-column banner.

More importantly, from a navigational standpoint, the newly repositioned banner has no
competition from other graphical links or menus. In nearly every case, the banner is positioned

SIMPLY BUSINESS INSURANCE


Type: Website
Subject: Financial services
Report edition: 3rd

METRICS

Methodology:
Analytics

Metric: Conversion rate (request for quote)


Improvement Score: 118%

Product & Team


Simply Business is a division of Xbridge Limited, a UK-based online broker of financial and
insurance services. The site offers visitors a tool for requesting competitive quotes on all types of
insurance, business financing, mortgages, and credit cards.

Problems & Goals


The site’s home page offers immediate access to a tool for requesting quotes for the site’s various
business services. The goal of the redesign was to move more visitors from entry to a completed
request for a quote — the first and necessary step in converting visitors to a sale. With a six-step
Request-For-Quote (RFQ) process, designers felt it was imperative to target the landing page in
order to highlight relevance for specific kinds of visitors and manage expectations about how
long the RFQ takes to complete.

Solutions & Results


Instead of a single entry page (the home page), entry pages were designed for different types
of insurance and financing (such as small business, auto, or mortgage). Traffic was driven to the
pages using pay-per-click advertising.

Several versions of the landing page were tested. The final page featured the site’s standardized
navigation bar, a list outlining what specific types of visitors would benefit from filling out the RFQ,
a short description of the product being quoted, and a short description of how the RFQ process
works. The page also features multiple, repetitive call-to-action links.

A final design has not yet been implemented, but during testing, the design team found that a
landing page targeted to the specific type of product resulted in a 17.5% increase in conversion of
clicks to RFQs, compared to the original design which presented a general form and a dropdown

SARAH HOPKINS (ARTIST)


Type: Website
Subject: Design
Report edition: 3rd

METRICS

Methodology:
Survey

Metric: Satisfaction
Improvement Score: 77%

Product & Team


Artist Sarah Hopkins uses her website (http://www.sarah-hopkins.co.uk/) as a publicity tool to
display her work and contact information and to generate sales and leads from art galleries and
collectors. In addition, the artist is now included in some British school curricula, so a segment of
the audience includes students between the ages of 14 and 16. This case study was included in
the report in the 3rd edition.

Problems & Goals


A user-satisfaction survey yielded a positive rating of only 44%. After three years online, the site
had produced no leads. A complete overhaul was entirely appropriate.

The original design was wide (having been created with lower screen resolutions in mind). A lot of
room separated the menu selections; the page heading was deep and mostly empty space.

Solutions & Results


The deep, empty header (a scaling table) was replaced with a simple pattern, representative of
the artist’s thematic focus. The previous header featured a similar graphic element extracted from
the artist’s work, but most of it was hidden behind a solid-color table cell.

The spread-out navigation menu in the old design was problematic; it was replaced with a simple,
intuitive set of text links, flush to the left. The flush-left format puts the menu in a dominant eye-
tracking location and requires less mousing around because the selections are closer together.

When clicking to an inside page, the original design presented a horizontal submenu. The
redesign duplicated the improvement of the main menu, serving submenus as a tight unit of
vertical text links for more economical mouse tracking.

About the Authors


Kate Moran is a Senior User Experience Specialist with Nielsen Norman Group. She conducts
research and leads training seminars to help digital product teams expand and improve their
UX practice.

Kate has extensive experience conducting user research to guide UX strategy for websites and
applications. She provides UX advice to clients from a wide variety of industries, including finance,
healthcare, government agencies, ecommerce, B2B, and nonprofit organizations.

Kate’s recommendations and research findings are informed by her background in information
theory and design, as well as her development experience. Prior to joining NN/g, Kate worked
at IBM, first as a Web Content Manager and then later as a Front-End Web Developer. She also
worked in UX design for a small web design agency.

Kate holds a master's degree in Information Science with a specialization in Human-Computer
Interaction from the University of North Carolina at Chapel Hill.

Feifei Liu (刘菲菲) is a User Experience Specialist with Nielsen Norman Group. She focuses on
planning and conducting research. Her background in psychology and human-computer interaction
gives her expertise in a wide variety of quantitative and qualitative research methodologies.

Feifei’s research investigates a variety of design issues that impact user experience. Her areas
of special interest include information-seeking behaviors, the cultural differences in design
preferences, and UX for children.

Prior to joining Nielsen Norman Group, Feifei worked as a Research Associate at Peking University
for three years and at Indiana University Bloomington for one year. In the Developmental and
Comparative Psychology Lab of Peking University, she led and conducted eyetracking studies on
children with autism spectrum disorders to investigate their attention patterns. In the Cultural
Research in Technology Lab at Indiana University, she conducted research to help menopausal
women reflect and increase self-awareness.

Feifei holds a Master of Science in Human-Computer Interaction Design from Indiana University
Bloomington. She has a Bachelor of Science in Psychology from Peking University with the
distinction of Excellent Graduate. Feifei is based in Raleigh, North Carolina.

Acknowledgements
We thank the following people from Nielsen Norman Group:
• Kim Flaherty: Senior UX Specialist at Nielsen Norman Group whose thorough review,
feedback, and guidance made this report possible.
• Rachel Krause: UX Specialist at Nielsen Norman Group, who designed this report’s
visuals and layout.
• Alita Joyce: UX Specialist at Nielsen Norman Group, who assisted in the case-study
collection process.

We also thank the following people for sharing information for the 5th edition of this report:

Aaron Powers, Amanda Gulley, Andrea Gaymon, Anna Del Pino, Anthony Rezendes, Basel Fakhoury,
Chiara Scesa, Chris Callaghan, Dawn Ta, Funbi Makinde, Imran Parvez, James Villacci, John Nicholson,
Julia Barham, Kapil Bhatia, Kerrin McLaughlin, Leyla Jafarli, Maja Otic, Marco Catani, Mervin Ng,
Monika Zielonka, Simina Megyes, Tara Bassili, Vikram Ramankutty, and Yunlu Shi (石蕴璐).

In addition, we thank the people who shared case studies in the earlier report editions:

Alex Wright, Amanda French, Amy Hester, Ashton King, Barb Kempnich, Bryan Skelton, Byron Fast,
Cliff Knopik, David Sequeira, Diana Persell, Domenic Mastrangeli, Duane John, Ed Kohler,
Geoffrey V. Brown, Gregor Jamroski, Hal Shubin, Janet Salm, Jason Fried, Jesús Encinar,
John Gemmell, John Russell, Jordan Lynne Peterson, Kelvin Green, Laura Fleetwood, Laurel Rush,
Lorna Packer, Lynne Arnold, Marie Helen Høvik, Martin Hansen Andersen, Matt Johnson,
Megan Kirkwood, Mike Corso, Neil Allison, Ole Hopland, Pat Agnew, Paul Whaley, Pavol Vallo,
Petar Zivkovic, Peter Bridger, Raj Khera, Richard Scott, Rob Johnston, Robert Blakeley, Ron Pinder,
Sam Tillett, Sami Iwata, Sharon Tomer, Sherrin Rieder, Simon Griffin, Søren Engelbrecht,
Stephen Wang, Steven Garrity, Theresa Richwine, Tina Foltmer, Uffe David, Vikas Kamat,
and Yann Schwermer.


A Leading Voice In The Field of User Experience Since 1998

• Certification — Our certification program helps UX professionals quickly gain skills and
credibility. By taking 5 courses and passing related exams, practitioners earn the NN/g UX
certification and badge, a recognized credential in the industry.
• In-house Training — Many of our courses can be taught at your location and customized to
fit your unique needs. In-house training is ideal for teams that want to spread a user
experience perspective throughout a large group and for those working on large projects
that need to kick-start the creative process and head in the right direction.
• Consulting — Our experts are available for custom consulting. Our services include but
are not limited to:
• Design Reviews and Competitive Analysis (starting at $38,000 USD)
• Usability testing (starting at $20,000)
• Benchmarking (starting at $45,000)

PUBLICATIONS AND CONTENT


Over the years, our experts have developed a comprehensive body of knowledge and UX
resources. We've taken the results of our decades of research and distilled them into actionable
guidelines, best practices, and proven methodologies.

Articles and Videos


Over the years, we have created one of the largest and most comprehensive sources of free user
experience content and research insights.
• Articles — Each week we publish new articles on current UX topics available to
practitioners around the world at no cost.
• Videos — In addition to our free articles, we produce a variety of short topical UX videos
posted to our YouTube channel weekly.

Reports
Our research library contains more than 60 published reports and books addressing a variety of
topics including but not limited to the following:
• Doing UX in Agile Environments
• UX Design for specific audiences (e.g., children, college students, seniors, people with
disabilities)
• Information Architecture
• B2B Websites
• Corporate Websites
• Ecommerce UX Design
• Marketing Email and Newsletters
• Intranets
• Non-Profit Websites
• University Websites

Shop for reports at https://www.nngroup.com/reports.

Our Free Newsletter


Nielsen Norman Group’s Weekly Newsletter includes summaries of our latest research and
insights published each week. Subscribe at https://www.nngroup.com/articles/subscribe.
INDIVIDUAL LICENSE
For Nielsen Norman Group Reports and Videos

You have purchased a report or video with an individual license. This means that you have the
following rights:

This is OK:
• You are allowed to make as many backup copies as you want and store on as many
personal computers as you want, as long as all the copies are only accessed by the same,
individual, user.
• For reports, you are allowed to print out one copy, as long as this printout is not given to
others.
• For reports, if your one allowed printout is destroyed, you are allowed to print out one (1)
new copy as a replacement.

This is NOT OK:


• You may NOT place the file(s) on an intranet, a shared drive, the public internet, or any
other file sharing or file distribution mechanism where other users can access the file(s).
• You may NOT give electronic copies or printed copies of the report(s) to other users.
• You may NOT print out more than the one (1) copy you are licensed for (except as a
replacement for a destroyed copy, as mentioned above).

Please contact support@nngroup.com if you have any questions about the NN/g Individual
License.
