
HCI

Usability Engineering

[Diagram: the usability engineering lifecycle (requirements analysis, design, develop, evaluate) repeated over many iterations]
Usability Engineering

[Diagram: the same lifecycle annotated with formative evaluation (early and continuous) and summative evaluation (at the conclusion)]
Types
• Formative
– Helps form the result. Applied all through the
lifecycle: early, continuous, iterative.

• Summative
– After the system has been finished. Makes
judgments about the final result; sums things up.
How could you evaluate an interface?

• With users
– User testing
– Controlled experiments
– Natural Settings – Field studies, Living lab
• Without users
– Usability inspection
• Heuristic evaluation
• Cognitive walkthrough
Usability inspection
• A method of usability evaluation where an
interface is evaluated by usability experts
rather than end users.
• Usually several evaluators work
independently, then compare their findings
afterwards.
Heuristic Evaluation
• A type of usability inspection
• Proposed by Nielsen and Molich.
• Systematic inspection to see if the interface complies
with guidelines
• usability criteria (heuristics) are identified
• design examined by experts to see if these are violated

• Example heuristics
– system behaviour is predictable (as expected)
– system behaviour is consistent
– feedback is provided

• Heuristic evaluation ‘debugs’ the design.


Heuristic evaluation
• Method
– 3-5 inspectors
– usability engineers, end users, experts…
– inspect interface in isolation (~1–2 hours for simple
interfaces)
– compare notes afterwards
• a single evaluator only catches ~35% of usability problems
• 5 evaluators catch ~75% (see the sketch after the chart below)

• Works for paper, prototypes, and working systems


How many inspectors?

[Chart: proportion of usability problems found vs. number of evaluators; a single evaluator finds ~35%, five evaluators ~75%. Source: Jakob Nielsen, www.useit.com]
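
The relationship in the chart is commonly modelled with the Nielsen–Landauer formula, found(i) = N(1 - (1 - λ)^i), where λ is the probability that one evaluator finds a given problem. Below is a minimal Python sketch, assuming λ ≈ 0.35 taken from the single-evaluator figure above; reported percentages differ across studies, so with this λ the model predicts somewhat more than the 75% quoted for five evaluators.

# A minimal sketch (not from the slides): expected share of usability problems
# found by i independent evaluators, using the Nielsen-Landauer model
# found(i) = 1 - (1 - lam)**i. lam = 0.35 is an assumption based on the
# ~35% single-evaluator figure above; reported values differ across studies.

def share_found(i: int, lam: float = 0.35) -> float:
    """Expected fraction of all usability problems found by i evaluators."""
    return 1 - (1 - lam) ** i

for i in range(1, 11):
    print(f"{i:2d} evaluator(s): {share_found(i):.0%} of problems found")

# The curve flattens quickly, which is the usual argument for using
# only 3-5 inspectors rather than a large panel.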


What is a cognitive walkthrough (CW)?

Proposed by Polson et al.


– evaluates the design on how well it supports the
user in learning the task
– usually performed by an expert
– the expert ‘walks through’ the interface following a
specified set of tasks.
– Step through the task. At each step, ask yourself
four questions.
CW Questions
1. Will users be trying to produce whatever effect the
action has?
2. Will users see the control (button, menu, switch,
etc.) for the action?
3. Once users find the control, will they recognize that
it produces the effect they want?
4. After the action, will users understand the
feedback they get so they can go on to the next
action with confidence?
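
A lightweight way to apply these questions is to keep one record per task step and note every "no" answer as a candidate usability problem. The sketch below is only an illustration; the names (WalkthroughStep, answers, problems) are assumptions, not part of any standard CW tool.

# Illustrative sketch of recording cognitive-walkthrough answers, one record
# per task step. All names here are hypothetical, not a standard tool.
from dataclasses import dataclass, field

CW_QUESTIONS = (
    "Will users be trying to produce the effect the action has?",
    "Will users see the control for the action?",
    "Will users recognise that the control produces the effect they want?",
    "Will users understand the feedback well enough to continue with confidence?",
)

@dataclass
class WalkthroughStep:
    action: str                                   # e.g. "Select journey type"
    answers: list = field(default_factory=list)   # one (yes/no, note) per question

    def problems(self):
        """Notes for every question answered 'no', i.e. the likely usability problems."""
        return [note for ok, note in self.answers if not ok]

# Usage, anticipating the ticket-machine example that follows:
step = WalkthroughStep(action="Select journey type")
step.answers = [
    (True,  "User wants a round trip, so yes"),
    (False, "Journey-type option is not visually prominent"),
    (True,  "Label 'Round trip' matches the goal"),
    (True,  "Selected type is highlighted"),
]
print(step.problems())   # -> ['Journey-type option is not visually prominent']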
CW Example - Rapid Transit Ticket Machine

• User wishes to purchase a round-trip ticket to
Dragon Plaza.
• The ticket costs $17.50 but the user doesn’t
know this yet.
• The user has only $10, but doesn’t know this
yet either.
Rapid Transit Ticket Machine
Step 1: Enter Destination or Journey Type

• User can do steps 1 and 2 in any order
• Or user can enter desired fare using keypad
CW Example
Rapid Transit Ticket Machine
• Design Flaw no. 1: The option to select the journey type
first is not made sufficiently evident.
• Solution:
CW Example
Rapid Transit Ticket Machine
• Step 2: Enter Journey Type
CW Example
Rapid Transit Ticket Machine
• Step 3: Deposit money
CW Example
Rapid Transit Ticket Machine
• Design Flaw no. 2: No display of total
money received.
• Solution:
CW Example
Rapid Transit Ticket Machine
• Step 4: Retrieve $10 from machine since it
wasn’t enough
CW Example
Rapid Transit Ticket Machine
• Design Flaw no. 3: No means of retrieving
money deposited.
• Solution:
Usability testing
• Involves recording performance of typical users doing
typical tasks.
• Controlled settings.
• Users are observed and timed.
• Data is recorded on video & key presses are logged.
• The data is used to calculate performance times, and to
identify & explain errors.
• User satisfaction is evaluated using questionnaires &
interviews.
• Field observations may be used to provide contextual
understanding.
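
To make the logging point concrete, here is a minimal sketch of an interaction logger; the event names and CSV layout are assumptions for illustration, not a standard logging format.

# Minimal sketch of an interaction logger for a test session.
# Event names and the CSV layout are illustrative assumptions.
import csv
import time

class InteractionLogger:
    def __init__(self, participant_id: str):
        self.participant_id = participant_id
        self.events = []                       # (timestamp, event_type, detail)

    def log(self, event_type: str, detail: str = ""):
        """Record a timestamped event such as a key press, click, or error."""
        self.events.append((time.time(), event_type, detail))

    def save(self, path: str):
        with open(path, "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(["participant", "timestamp", "event", "detail"])
            for ts, event, detail in self.events:
                writer.writerow([self.participant_id, ts, event, detail])

# Usage during a session:
log = InteractionLogger("P01")
log.log("task_start", "Find the pricing page")
log.log("click", "nav: Products")
log.log("error", "opened the wrong menu")
log.log("task_end", "Find the pricing page")
log.save("p01_session.csv")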
Usability testing
• Goals & questions focus on how well users perform
tasks with the product.
• Comparison of products or prototypes common.
• Focus is on time to complete task & number & type
of errors.
• Data collected by video & interaction logging.
• Testing is central.
• User satisfaction questionnaires & interviews provide
data about users’ opinions.
Basic Elements of Usability Testing
• Use of a representative sample of end users
• Representation of the actual work environment
• Observation of end users who either use or
review a representation of the product.
• Collection of quantitative and qualitative
performance and preference measures.
• Recommendation of improvements to the
design of the product.
Life Cycle and usability testing
Typical user-oriented questions
• What do users conceive of and think about using the product?
• Does the product’s basic functionality have value to the user?
• How easily and successfully can users navigate?
• How easily do users draw conclusions about how to use this user
interface, based on their previous experience?
• What type of prerequisite information does a person need to use
the product?
• Which functions of the product are “walk up and use” and which
will probably require either help or written documentation?
• How should the table of contents be organized to accommodate
both novice and experienced users?
Example
• The purpose of our session today is to review the design
for a new web site and get your opinions about it.
• As we review this design together, I will be asking you a
series of questions about what you see and how you expect
things to work. Please feel free to ask any questions and
offer any observations during the session. There are no
wrong answers or stupid questions. This product is in a
very preliminary stage; do not be concerned if it acts in
unexpected ways.
Questions
• Let’s begin with a hypothetical situation. You would like to
understand just what it is that this company offers.
• (User indicates how the task would be attempted, or attempts
to do the task if the navigation works.)
• You would like to calculate the cost for offerings from this
company. How do you start?
• (User indicates how the task would be attempted, or attempts
to do the task if the navigation works.)
• Okay, you’ve found the pricing page. What does it tell you?
• (User discusses the information on the page, describing what is
useful, clear (or not), and where there could be more detail.)
Usability lab with observers watching a user
& assistant
Testing conditions
• Usability lab or other controlled space.
• Emphasis on:
– selecting representative users;
– developing representative tasks.
• 5-10 users typically selected.
– Depends on schedule for testing; availability of
participants; cost of running tests.
• Each task should usually take no more than 30 minutes to complete.
• The test conditions should be the same for every
participant.
Some types of data
· Time to complete a task.
· Time to complete a task after a specified time away from the product.
· Number and type of errors per task.
· Number of errors per unit of time.
· Number of navigations to online help or manuals.
· Number of users making a particular error.
· Number of users completing task successfully.
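
These measures can be computed directly from a session log such as the one sketched earlier. The sketch below is illustrative only; the TaskRecord fields are assumptions about what was logged, not a standard format.

# Illustrative sketch: aggregating usability-test measures from per-task records.
# The TaskRecord fields are assumptions about what the test logged.
from dataclasses import dataclass
from statistics import mean

@dataclass
class TaskRecord:
    participant: str
    task: str
    seconds: float       # time to complete the task
    errors: int          # errors during the task, reduced to a count here
    help_visits: int     # navigations to online help or manuals
    completed: bool      # finished the task successfully?

def summarise(records, task):
    rows = [r for r in records if r.task == task]
    total_minutes = sum(r.seconds for r in rows) / 60
    return {
        "mean time to complete (s)": mean(r.seconds for r in rows),
        "errors per minute": sum(r.errors for r in rows) / total_minutes,
        "users completing successfully": sum(r.completed for r in rows),
        "users needing help or manuals": sum(r.help_visits > 0 for r in rows),
    }

records = [
    TaskRecord("P01", "find pricing", 95.0, 2, 1, True),
    TaskRecord("P02", "find pricing", 140.0, 4, 0, False),
    TaskRecord("P03", "find pricing", 80.0, 1, 0, True),
]
print(summarise(records, "find pricing"))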
User testing ethics
1. Respect the user’s time by not wasting it.
2. Do everything you can to make the user comfortable, in order to offset
the psychological pressures of a user test.
3. Give the user as much information about the test as they need or want
to know, as long as the information doesn’t bias the test. Don’t hide
things from them unnecessarily.
4. Preserve the user’s privacy to the maximum degree. Don’t report their
performance on the user test in a way that allows the user to be
personally identified.
5. The user is always in control, not in the sense that they’re running the
user test and deciding what to do next, but in the sense that the final
decision of whether or not to participate remains theirs, throughout
the experiment.
To do before a user test
• Pilot-test your entire test
• Brief the users first
– introducing the purpose of the application and the purpose of the
test.
• To make the user comfortable, you should also say the
following things (in some form):
– “Keep in mind that we’re testing the computer system. We’re not
testing you.” (comfort)
– “The system is likely to have problems in it that make it hard to use.
We need your help to find those problems.” (comfort)
– “Your test results will be completely confidential.” (privacy)
– “You can stop the test and leave at any time.” (control)
• Inform the user if the test will be audiotaped,
videotaped, or watched by hidden observers.
• Any observers actually present in the room
should be introduced to the user.
• At the end of the briefing, you should ask “Do
you have any questions I can answer before
we begin?” Try to answer any questions the
user has.
To do during a user test
• During the test, arrange the testing environment
to make the user comfortable.
– Keep the atmosphere calm, relaxed, and free of
distractions. If the testing session is long, give the user
bathroom, water, or coffee breaks, or just a chance to
stand up and stretch.
• Don’t act disappointed when the user runs into
difficulty, because the user will feel it as
disappointment in their performance, not in the
user interface.
• Give them only one task at a time. Ideally, the
first task should be an easy warmup task, to
give the user an early success experience.
• Answer the user’s questions as long as they
don’t bias the test.
• Keep the user in control. If they get tired of a
task, let them give up on it and go on to
another. If they want to quit the test, pay them
and let them go.
To do after a user test
• After the test is over, thank the user for their help and
tell them how they’ve helped. It’s easy to be open with
information at this point, so do so.
• Later, if you disseminate data from the user test, don’t
publish it in a way that allows users to be individually
identified. Certainly, avoid using their names.
• If you collected video or audio records of the user test,
don’t show them outside your development group
without explicit written permission from the user.
Q&A
