The Design of Everyday Things
1-Page Summary
Every man-made object, environment, or program in our world is designed. From
doorknobs to smartphone apps, design pervades our lives to the point that it often
becomes completely invisible. When we struggle with one of these designs, we
assume that our difficulties are our own fault, or that we’re just not smart enough to
figure it out. But that blame is misplaced. More often than not, the true culprit in
cases of “human error” is actually bad design.
In The Design of Everyday Things (originally released in 1988 under the title The
Psychology of Everyday Things and revised in 2013), cognitive psychologist and
engineer Don Norman explores the ways people understand and interact with the
physical environment (this is sometimes referred to as “user experience”). In doing
so, he identifies the basic principles of good, human-centered design.
Affordances are the finite number of ways in which a user can possibly interact with
a given object. They answer the question, “What is this thing for?” For example,
chairs typically have a flat surface, which we intuitively recognize as an indicator of
support. In other words, the look of a chair suggests that it is for sitting on.
Signifiers are signals that draw the user’s attention to an affordance they may not
have intuitively discovered, like a “click here” button on a website or a “push” sign
on a door. For designers, signifiers are more important than affordances: The most
sophisticated technology is pretty useless if a user can’t find the “on” button.
Mapping uses the position of two objects to communicate the relationship between
them. For example, if you see a row of three lights and a panel of three switches,
natural mapping would mean the position of each switch corresponds to the position
of the light it controls.
For example, many people have an inaccurate model of their home thermostat.
They assume the thermostat controls a valve that opens a certain amount
based on the setting, and that setting it higher will warm the room faster. In
reality, most thermostats are a simple on/off switch, so setting a higher
temperature has no effect on how fast the room warms up.
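The on/off model described above can be sketched in a few lines of code (a simplified illustration, not how any particular thermostat is actually implemented):

```python
# A bang-bang (on/off) thermostat: the setpoint decides *when* the heater
# switches off, not how much heat it produces at any moment.
def heater_on(current_temp, setpoint):
    """Return True while the room is below the setpoint."""
    return current_temp < setpoint

# A cold room gets exactly the same "full on" output either way,
# so a higher setpoint does not warm the room any faster.
print(heater_on(18, 21))  # True
print(heater_on(18, 30))  # True
```

Real thermostats add hysteresis to avoid rapid switching, but the key point stands: the heating rate is independent of the setpoint.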
The System Image is the sum total of the information we have about an object,
including both its physical properties and information from user manuals, product
websites, or past experience. The system image is the only way designers can
communicate their model of how something works to the user.
Let’s use grocery shopping as an example to see the seven steps in action. In that
case, they may look something like this:
1. Goal: I want to have groceries for the week.
2. Plan: I could order delivery, but I decide to drive to the store.
3. Specify: I’ll take the car to my usual grocery store.
4. Perform: I’ll follow the usual route to the store instead of a new one.
5. Perceive: I see that I’ve arrived and found everything on my list.
6. Interpret: The trip went the way I expected.
7. Compare: I have my groceries, so the goal is met.
This cycle will play out multiple times for any given action because most behaviors
have an overall goal (like “go grocery shopping”) composed of several subgoals
(like “start the car”). Determining the overall goal is important because it gives
designers a better idea of what users really want. To do this, we use root cause
analysis: continually asking “why?” about a behavior until there is no further
answer. The root cause of a behavior might be internal and goal-driven (studying for
a test) or external and event-driven (putting in earplugs in a noisy environment).
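Root cause analysis can be sketched as a simple loop; the cause chain below is purely hypothetical:

```python
# Repeatedly ask "why?" until there is no further answer.
# The chain of answers here is made up for illustration.
causes = {
    "user pressed the wrong button": "the labels are ambiguous",
    "the labels are ambiguous": "controls don't map onto functions",
    "controls don't map onto functions": None,  # no further answer: root cause
}

def root_cause(observation):
    """Follow the 'why?' chain until it runs out."""
    while causes.get(observation) is not None:
        observation = causes[observation]
    return observation

print(root_cause("user pressed the wrong button"))
# controls don't map onto functions
```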
The visceral level is subconscious and involves our most primitive reflexes, like
startling at a loud noise or flinching when something flies towards us
unexpectedly. Visceral reactions can have a powerful influence on how users
respond to an object. An otherwise well-designed product can fail if it provokes
a negative visceral response in the user (like a sudden, blaring alarm or an
unpleasant odor).
The behavioral level is the home of the subconscious process of turning thought
into action. It fills the gap between intention (like speaking) and action (moving
your lips, tongue, and jaw in specific ways). This level is important in design
because it functions based on expectations—if you flip a light switch, you
subconsciously expect a light to turn on. If it doesn’t, your ability to process the
action is interrupted by bad design.
* Memory
Memory also impacts our interactions with objects. There are two kinds of
knowledge: “knowledge in the head” (memory) and “knowledge in the world,” which
is anything we don’t have to remember because it’s contained in the environment.
Long-term memory isn’t limited by time or number of items, but memories are stored
subjectively. Meaningful things are easy to remember; arbitrary things are not. To
remember arbitrary things, we need to impose our own meaning through
mnemonics or approximate mental models. Designers can make this easier for
users by making arbitrary information map onto existing mental models (for
example, think of the way Apple has kept the location of the power and volume
buttons relatively the same with each new version of the iPhone).
* Detecting Errors
Errors can be divided into “slips” (errors of doing) and “mistakes” (errors of thinking).
Accidentally putting salt instead of sugar in your coffee is a slip—your thinking was
correct, but the action went awry. Pressing the wrong button on a new remote
control is a mistake—you carried out the action fine, but your thought about the
button’s function was wrong.
* Causes of Error
One major cause of error is that our technology is engineered for “perfect” humans
who never lose focus, get tired, forget information, or get interrupted. Unfortunately,
these humans don’t exist. Interruptions in particular are a major source of error,
especially in high-risk environments like medicine and aviation.
Social and economic pressures also cause error. The larger the system, the more
expensive it is to shut down to investigate and fix errors. As a result, people
overlook errors and make questionable decisions to save time and money. If
conditions line up in a certain way, what starts as a small error can escalate into
disastrous consequences.
Social and economic pressures played a critical role in the Tenerife airport
disaster, when a plane taking off before receiving clearance crashed into
another plane taxiing down the runway at the wrong time. The first plane had
already been delayed, and the captain decided to take off early to get ahead of
a heavy fog rolling in, ignoring the objections of the first officer. The crew of the
second plane questioned the unusual order from air traffic control to taxi on the
runway, but obeyed anyway. Social hierarchy and economic pressure led both
crews to make critical mistakes, ultimately costing 583 lives.
* Preventing Errors
Good design can minimize errors in many ways. One approach is resilience
engineering, which focuses on building robust systems where error is expected and
prepared for in advance. There are three main tenets of resilience engineering.
* Constraints
Designers can also use constraints, which limit the ways users can interact with an
object. There are four main types of constraints: physical, cultural, semantic, and
logical.
Physical constraints are physical qualities of an object that limit the ways it can
interact with users or other objects. The shape and size of a key is a physical
constraint that determines the types of locks the key can fit into. Childproof caps on
medicine bottles are physical constraints that limit the type of users who can open
the bottle.
Cultural constraints are the “rules” of society that help us understand how to interact
with our environment. For example, when we see a traditional doorknob, we expect
that whatever surface it’s attached to is a door that can be opened. This isn’t
caused by the design of the doorknob, but by the cultural convention that says
“knobs open doors.”
When these agreements about how things are done are codified into law or official
literature, they become standards. We rely on standards when design alone isn’t
enough to make sure everyone knows the “rules” of a situation (for example, the
layout of numbers on an analog clock is standardized so that we can read any
clock, anywhere in the world).
Although they’re less common, semantic and logical constraints are still important.
Semantic constraints dictate whether information is meaningful. This is why we can
ignore streetlights while driving, but still notice brake lights—we’ve assigned
meaning to brake lights (“stop!”), so we know to pay attention and react.
Logical constraints make use of fundamental logic (like process of elimination) to
guide behavior. For example, if you take apart the plumbing beneath a sink drain to
fix a leak, then discover an extra part leftover after you’ve reassembled the pipes,
you know you’ve done something wrong because, logically, all the parts that came
out should have gone back in.
Once the prototype is refined, the testing phase begins, where members of the
target user group are asked to try out the prototype and give their feedback.
Designers then repeat the entire process based on the feedback from the first round
of testing. The iterative design thinking process emphasizes testing in small batches
with refinement in between rather than waiting until the final product and testing with
a much larger group.
* Technological Innovation
Economic pressures drive innovation. This can take the form of “featuritis,” or the
tendency to add more and more features to a product to keep up with competitors.
These features ultimately degrade the design quality of the original product. Rather
than winning over customers with new features, it’s better to do one thing better
than anyone else on the market.
Interaction design focuses on the interface between user and object, usually in
a digital context (think website design).
For example, the style of a door handle and the location of the hinges make it
easy to discern what that object is and how it works—it is easily discoverable.
But a modern or industrial door with no visible hardware can be almost
impossible to figure out.
Understanding, in this context, refers to the user’s ability to make meaning out of
the discoverable features of the object. Understanding answers the questions,
“What is this, and why do I want to use it in the first place?” On a normal door, the
visible handle and frame make both answers obvious at a glance.
1. Traditionally, the objects and technology we interact with on a daily basis are
created by engineers, who are typically logical thinkers who have been trained
to focus only on function. Their goal is to create a superior product—and
because they understand how to use that product, they often assume others will
understand, too. In other words, engineers create products under the false
assumption that people perform like machines—they always act logically, aren’t
influenced by emotion, and rarely make errors.
2. Engineers and designers typically don’t have the ultimate say in all decisions
about a product. They’re limited by the budget set by the company or client, by
the logistical capabilities of the manufacturer, and by the needs of the marketing
team. The final product must be not only well-designed, but also possible to
produce (at scale and within budget) and easy to sell.
* Human-Centered Design
One solution to this problem is human-centered design. Human-centered design is
not a subfield like industrial or interaction design. It is a design philosophy that can
be applied in any design specialization. The goal of human-centered design is to flip
the traditional design process on its head by focusing on human needs and
behaviors first, and designing products to fit those needs, rather than designing a
product and hoping that users figure out how to use it.
Let’s use a simple fork as an example. The traditional design process would most
likely begin with a designer thinking, “I’d like to design a new kind of fork.” She
would then brainstorm new versions of the fork, create sketches and prototypes of
those ideas, and tweak those prototypes until she was happy with the finished
product.
* Affordances
The term “affordance” refers to the relationship between an object and a user.
Affordances are the finite number of ways in which a user can possibly interact with
a given object. They answer the question, “What is this thing for?”
For example, think of a chair. Chairs typically have a flat surface, which we
intuitively recognize as an indicator of support, either for a person or an object. In
other words, the look of a chair suggests that it is for sitting on.
* Signifiers
The idea of hidden affordances highlights the need for signifiers. A signifier is a
signal that draws the user’s attention to an affordance they may not have intuitively
discovered. In a digital context, a “click here” button or flashing icon are possible
signifiers. On a physical door, a “push” sign is a signifier. It contrasts with the
perceived affordance of the handles (pulling), which may cause confusion.
Affordances and signifiers are easy to mix up, and even seasoned designers
sometimes use one word when they really mean the other. The key difference is
that affordances describe the possible interactions between object and user,
whereas signifiers are a way of advertising those affordances (for example, a door’s
handle is an affordance, while a “push” sign advertises how to use it).
* Mapping
For some simple objects or interfaces, signifiers alone will give the user enough
information to use the object successfully. However, more complex objects might
also require the use of mapping in order to be usable. Mapping uses the position of
two objects to communicate the relationship between them. It’s the simplest way to
show the user which controls correspond to which affordances.
For example, picture the knobs on a stovetop. How do you know which knob
operates which burner? If the stove is well-designed, the arrangement of the knobs
will map onto the arrangement of the burners (typically a square). In that case, if
you want to turn on the bottom left burner, you intuitively reach for the bottom left
knob.
This stovetop example is notorious among designers because effective mapping of
knobs to burners is so rare. When most of us picture a stovetop and its controls, we
picture the burners arranged in a square, but the knobs arranged in a line. In this
setup, the user has to invest far more time and mental energy to figure out which
knob controls which burner, and may have to resort to trial and error.
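The difference can be made concrete with a toy model (the layouts here are hypothetical): with natural mapping, a knob's position alone identifies its burner, while a linear row forces the user into an arbitrary lookup.

```python
# Four burners arranged in a square.
burners = {"back-left", "back-right", "front-left", "front-right"}

# Natural mapping: knobs sit in the same square layout as the burners,
# so the mapping is the identity; position alone tells you the burner.
natural = {pos: pos for pos in burners}

# Linear knob row: an arbitrary assignment the user must read or memorize.
linear = {"knob-1": "back-left", "knob-2": "front-left",
          "knob-3": "front-right", "knob-4": "back-right"}

print(natural["front-left"])  # front-left
print(linear["knob-2"])       # front-left
```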
* Feedback
The next clue for interacting with an object is feedback. If you’ve followed signifiers
to an affordance and used mapping to figure out which control you need to use, how
do you know whether you got it right? Feedback is a sensory signal that alerts the
user that what they’re doing to an object is having some effect. Information that
results from a user’s action is called “feedback”; information that shows a user how
to act in the first place is called “feedforward.” Feedforward guides users through
the execution phase, while feedback guides them through evaluation.
Our sensory systems automatically provide basic feedback about our environment
through all of our senses. We automatically process the look, feel, sound, and scent
of objects around us. However, for more complex objects, feedback signals may not
occur naturally, so designers need to build them in deliberately.
Clearly, feedback is important, and too much or too little can cause problems. But
the type of feedback and the way it’s presented is also crucial. A car’s turn signal
flashes on the side of the car that matches the direction the driver intends to turn. If
the opposite side flashes, or both at the same time, that gives you no useful
information about what the car in front of you is about to do.
* Models
So far, we know that affordances tell us what an object is for, signifiers tell us what
and where those affordances are, mapping helps us find the right controls to
engage with those affordances, and feedback tells us whether everything is working
as expected.
Consider Norman’s famous refrigerator example: a unit with a fresh-food
compartment and a freezer, each with its own control. Seeing this, you’d probably
assume that the refrigerator and the freezer each contain an independent
temperature sensor that controls an independent cooling mechanism. In reality,
there is only one sensor and one cooling mechanism, which is why adjusting one
control unexpectedly changes the temperature of both compartments.
What signifiers or perceivable affordances does the object have that would help
your friend figure out its purpose? (Remember: Signifiers are signals that tell us
where or how to interact with something, such as a “push” sign on a door.
Perceivable affordances are obvious ways we can interact with an object, like a flat
surface for supporting weight, or a hollow vessel for holding liquid.)
If you were to redesign this object to make it easier to use and understand, what
changes would you make? (For example, adding signifiers, or rearranging controls
to naturally map onto the object.) How would these changes help?
Now that you’re thinking like a designer, look around the room again. What other
obvious signifiers do you see on other objects or appliances?
The Gulf of Execution refers to the process of figuring out what an object does
and how to use it. This can happen either before using the object or while trying
it out. Affordances, signifiers, and mapping are tools designers use to help
users bridge this gulf.
The Gulf of Evaluation occurs after using an object and refers to the process of
evaluating what the device did and whether that action matched our goals.
Feedback and accurate mental models are the most helpful tools for bridging
this gulf.
The seven stages of action are: goal, plan, specify, perform, perceive, interpret,
compare. These steps carry the user across the gulfs of both execution and
evaluation. The first stage, “goal,” sets the standard that will be used later to
determine if the action was successful. The next three stages (plan, specify,
perform) bridge the Gulf of Execution, while the final three stages (perceive,
interpret, compare) bridge the Gulf of Evaluation. Each of these stages answers a
particular question:
1. Goal: I want to have groceries for the week.
2. Plan: I could order delivery, but I decide to drive to the store.
3. Specify: I’ll take the car to my usual grocery store.
4. Perform: I’ll follow the usual route to the store instead of a new one.
5. Perceive: I see that I’ve arrived and found everything on my list.
6. Interpret: The trip went the way I expected.
7. Compare: I have my groceries, so the goal is met.
In the example above, the action was successful in achieving the goal. However,
the goal of going grocery shopping is part of an overall system that includes both
larger and smaller goals. For example, if I’m making a particular recipe but don’t
have an ingredient I need, going grocery shopping would become a subgoal of my
overall goal of making that recipe. Grocery shopping itself would have multiple
subgoals: locating each ingredient in the store, loading the groceries back into the
car, and so on.
Generally speaking, people are only consciously aware of a small portion of their
thoughts and emotions. The rest of our opinions, decisions, emotions, and reactions
happen without any conscious input. When we learn a new skill, we need conscious
focus at first, but once we fully master the skill and make it a frequent habit,
performing requires less and less conscious effort until it is fully subconscious. The
process of mastering a skill to the point that it can be executed subconsciously is
called “overlearning.” Think of the new driver compared to the experienced driver—
the new driver is actively concentrating, while the experienced driver can safely
carry on a conversation or sing along to the radio.
(Overlearning applies to complex skills like driving, walking, and learning a
language, but can also apply to factual information. For example, if you’re filling out
a form and are asked for your phone number, you’ll most likely be able to answer
without much effort. But if you’re asked for the address of the second house you
ever lived in, it will take you much longer to come up with the answer.)
The behavioral level also primarily deals with subconscious processing. This might
seem counterintuitive, since we typically choose our behaviors and can observe
them consciously. But the behavioral level of processing is not concerned with why
we act the way we do, but how.
For example, if you want to speak, you have to control your lips, tongue, and jaw in
very specific ways to produce the right sounds. You might consciously choose what
you want to say, but most of us don’t actively will our mouths to make certain
shapes. The same applies to wiggling your fingers or opening a drawer—we’re not
conscious of the neurological processes involved in those actions. We decide what
to do, and our brains subconsciously forward the message to the correct body parts.
(Unlike the visceral level, responses at the behavioral level can be learned and
changed. This is where overlearning comes in—when we practice something over
and over until it becomes a habit, we’ve moved that skill from a conscious level to
the subconscious behavioral level. Now, when the associated trigger pops up, we
carry out that action without any conscious thought. Overlearning is an important
factor in understanding human error, which is covered more thoroughly in Chapter
5.)
Behavioral processing also has implications for design. By definition, behavioral
responses have a specific expectation attached. If you open your laptop and press
the power button, you expect it to turn on. When you turn a doorknob and push, you
expect the door to open. These expectations are crucial for designers to understand.
We often make these associations without realizing it. If your laptop reliably powers
on each time you expect it to, you learn to associate the laptop with satisfaction and
confirmed expectations. If the door frequently doesn’t open when you expect it to
(perhaps because it lacks the necessary signifiers), you associate that type of door
with frustration and annoyance.
The most important design tool for managing user expectations is feedback. If an
experience defies our expectations, we might feel helpless or confused about how
to proceed, ultimately influencing how we think and feel about the experience.
Feedback mitigates this damage by explaining what went wrong, allowing users to
regain a sense of control. Even better, if feedback gives us information about the
problem and how to fix it, we’re much less likely to experience feelings of
helplessness or confusion.
* The Reflective Level
The reflective level is the level of conscious processing. Where visceral and
behavioral processing happens instinctively and immediately, reflective processing
is deliberate and therefore much slower. The reflective level allows us to brainstorm,
consider alternatives, exercise logic and creativity, examine a new idea, and, as the
name implies, reflect back on past experiences.
Emotion plays an important role at this level as well. Where the visceral and
behavioral levels deal with subconscious, automatic emotional responses, the
reflective level provokes emotional responses based on our own interpretation of an
experience. For example, while fear is an automatic visceral response, anxiety
about possible future events is a reflective response. Anxiety arises from our ability
to predict possible futures based on current trends. But this is the same process
that underlies feelings like excitement and anticipation. Our own interpretation of our
predictions decides which of these emotions we experience.
Another example of this effect is the difference between guilt and pride. In order for
us to feel either of these emotions, we have to believe we’re directly responsible for
the outcome of a situation. If we judge the outcome of that situation positively, we
feel pride; if we judge it negatively, we feel guilt.
For example, if an alarm clock keeps perfect time and is easy to use, but the
alarm sound itself is so loud and jarring that you wake up each morning thinking
the house is on fire, the memory of that visceral response might make you view
the interaction negatively and avoid that specific clock (or brand) in the future.
This is one reason common objects are so often confusing or disappointing to use.
The ultramodern, solid glass door described in Chapter 1 might provoke a positive
visceral response, but the lack of a clear, logical way to interact with it creates
confusion at the reflective level. At the other extreme, medical equipment is often
designed purely to perform a specific function. These machines might be incredibly
technologically sophisticated, but the confusing or sterile aesthetic can trigger a
visceral fear response in patients.
First, we need to know which of the three levels is responsible for processing
the desired emotion (in this case, the fact that being in flow is determined by
expectations tells us we are dealing with the behavioral level).
Next, we need to know which of the stages of action are involved (in this case,
“specify” and “interpret”), since these stages tell us what specific activities
influence that level of processing. In other words, understanding the levels of
processing tells us where design can intervene, and understanding the seven
stages of action tells us how.
Since you’re reading this, we know you met your goal—the gulf of evaluation has
been bridged. Now let’s apply this process to a task you haven’t completed yet.
Think of a small goal you’d like to accomplish by the end of today. This could be as
simple as making lunch, or slightly more involved, like finishing a small work project.
What is your goal?
List three different possible ways you could accomplish your small goal. Which of
these options will you choose?
Once you act on your plan, you’ll need to bridge a new gap: the gulf of evaluation.
How will you know when you’ve achieved your goal? (For example, if your goal was
to eat lunch, the feeling of no longer being hungry might be a way to measure
success.)
How did it feel to break down a simple task into such tiny steps? Were any of the
steps more difficult to identify than others?
For designers, understanding the way users think about their interactions with
technology is important for creating a positive user experience. It is not enough to
know how something works on a technical level—we need to understand how the
user thinks the object works, and how they explain what happened if something
goes wrong, since these are important factors in determining how people respond to
technology. For designers and non-designers alike, understanding the biases that
shape our own stories helps us make sense of our encounters with bad design.
* Causes of Behavior
To understand the way people think about their interactions with technology, we
need to distinguish between a user’s overarching goal and the smaller subgoals
and actions that lead up to it. Norman quotes Harvard Business School professor
Theodore Levitt as an example, who said, “People don’t want to buy a quarter-inch
drill. They want a quarter-inch hole!” However, it’s unlikely that anyone actually
wants a quarter-inch hole in their wall just for fun. Instead, drilling a hole is most
likely a subgoal leading up to a larger goal of mounting something on the wall.
Determining the overall goal of a behavior is important because it gives designers a
better idea of what users really want. If you’re designing in response to someone
buying a drill, you’ll keep making new kinds of drills. If you’re designing in response
to someone wanting to hang a shelf, you might come up with a new adhesive that
allows the user to mount shelves directly on their wall, without drilling holes. You’ve
addressed their real need and simplified the process of meeting it.
The example above shows how event-driven and goal-driven behavior can be
intertwined, since the event-driven behavior (putting in earplugs) only occurs as part
of the goal-driven behavior (studying). Designers need to be aware of the
differences between event-driven and goal-driven behaviors in order to design for
the user’s actual needs. In the studying example, focusing on designing better
earplugs focuses only on the external factors. Redesigning the entire environment
to be more conducive to the internal goal of studying would address both the
internal and external causes, ultimately creating an even better user experience
overall.
Positive psychology: See user “errors” as feedback for where the design needs
improvement. Instead of traditional error messages, provide feedback on the
problem that allows the user to fix it immediately, without starting over or losing
progress on the parts they’ve done correctly.
Root cause analysis: When users struggle, continue asking “why” until you find
the underlying problem. This fixes both the initial problem and any downstream
problems that may stem from it.
Users should be able to answer each of these questions easily. Designers can use
the following tools to provide these answers:
Discoverability: Users can easily figure out how to use the device, even without
background knowledge.
Feedback: Signals from the device arrive within 0.1 seconds of an action. Any
longer and users may not automatically connect the feedback to their action.
Conceptual model: The way the device works is easy to understand, even when
important affordances are hidden.
Affordances: The device performs the functions users need. This must be true
for all potential users, not just the average user.
Signifiers: Where affordances are not obvious, the device draws the user’s
attention to them with signs, visible hardware, beeping, etc.
Mappings: Controls for the device are laid out in a logical way that gives clues
as to which controls go with which functions.
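The 0.1-second feedback guideline above can be sketched as follows (hypothetical names; a simplification of real UI event handling):

```python
# Acknowledge the user's action immediately, even when the real work is slow.
import time

def handle_click(do_work):
    """Print instant feedback, run the slow task, then confirm completion."""
    start = time.monotonic()
    print("Working...")                  # immediate acknowledgment
    ack_delay = time.monotonic() - start
    do_work()                            # the slow operation itself
    print("Done.")                       # final feedback once the work finishes
    return ack_delay

delay = handle_click(lambda: time.sleep(0.5))  # simulate a 0.5 s task
assert delay < 0.1  # the acknowledgment itself arrived well within 0.1 s
```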
Did thinking through a root cause analysis in the above questions come naturally to
you? Is this tool something you use in everyday life? In what areas of your life would
this tool be useful?
Our imprecise knowledge of details is usually all we need for day-to-day life. But if
those details change significantly, our approximate models may no longer be
enough. Coins provide another great example, this time with real-world
consequences. In 1979, the United States released the Susan B. Anthony dollar
coin, which was nearly the exact size, shape, color, and weight of the existing
quarter, and the two coins were constantly confused.
Why is it that two similar coins can cause so much confusion, but we can easily
distinguish a one dollar bill from a twenty dollar bill? Cultural and historical context
determines how we distinguish one object from another. Because American paper
money uses identically-sized bills, we are used to relying on the images on the bills
themselves to tell them apart. So introducing a new bill of the same size and color
would be no problem for Americans, but would cause uproar in countries that use
size to distinguish between different values of paper money.
* Types of Memory
This section dives deeper into the different types of memory. It’s important for
designers to have a working understanding of each type of memory, particularly in
the digital age, as increasingly complex technology requires users to combine
“knowledge in the head” with “knowledge in the world” in more and more
sophisticated ways.
Digital passwords are an especially important example of this effect. While new
technologies make it much easier to store knowledge in the world, doing so makes
the information far less secure, since knowledge in the world is accessible to
anyone in that environment. To protect private information, computer systems
require passwords. Simple passwords were sufficient at first, but the rise of hacking
and the ability to store sensitive information like bank records online quickly
required a new approach to security. Now, most programs have complex password
requirements that use a combination of numbers, letters, and symbols. Some
programs require the password to be changed on a regular basis.
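A policy like the one described, combining letters, numbers, and symbols, might be checked like this (the specific rules are hypothetical, not any real system's requirements):

```python
import re

def meets_policy(password):
    """Check one illustrative policy: 8+ chars with a letter, a digit, and a symbol."""
    return (
        len(password) >= 8
        and re.search(r"[A-Za-z]", password) is not None      # at least one letter
        and re.search(r"\d", password) is not None            # at least one digit
        and re.search(r"[^A-Za-z0-9]", password) is not None  # at least one symbol
    )

print(meets_policy("sunshine"))    # False: no digit or symbol
print(meets_policy("Sunsh1ne!"))   # True
```

Each added rule makes the password harder to guess, but also harder to hold as "knowledge in the head," which is why such policies push users toward writing passwords down.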
* Short-Term Memory
As the name implies, short-term memory is the automatic storage of recent
information. Short term memory is also called “working memory” because it is the
information we keep in our minds in order to complete any given task.
The information held in short-term memory is constantly being replaced as we
encounter new stimuli, so holding onto any piece of information for more than a
second requires rehearsal, or consistently repeating the information until it’s no
longer needed (like when you hear a phone number spoken aloud and repeat it to
yourself over and over while you search for a pen to write it down).
Don’t require people to use short-term memory at all if possible, but certainly
not for more than a few seconds or for more than five items (although the
commonly cited capacity of short-term memory is around five to seven items,
fewer is always safer).
* Long-Term Memory
If information is important enough or rehearsed often enough, it moves from short-
term memory into long-term memory. Long-term memory is more robust, and
memories encoded here do not automatically replace other memories. While we
encode and access short-term memory automatically, it takes time to encode long-
term memories, and it typically takes time to access them later.
We still don’t know exactly how short-term memories become encoded into long-
term memory, but most scientists agree that this process happens during sleep.
This is important for anyone designing a product or system that requires users to
store information in long-term memory—that process typically is not instant, and it
might require several encounters with the information with periods of sleep in
between.
Information stored in long-term memory is much more durable, but there is an
important caveat here: memories are encoded based on our interpretation of
events, not as they really happened. Much like performers reciting epic poetry, we
don’t remember every single detail of an event, but rather the main details and our
subjective interpretation of them. This also means that each time the memory is
recalled, we are recreating it based on that limited information. So each time we
access the memory, we inadvertently change small details of it and then re-encode
that version of events. This process has powerful implications for law and criminal
justice settings, since it demonstrates just how unreliable eyewitness testimony can
be.
Guidelines for Designers
Mnemonics are designed for this exact purpose. When learning to read music,
remembering which notes correspond to which lines on the staff is arbitrary.
Imposing the mnemonic “Every Good Boy Does Fine” turns an arbitrary series
of letters (E, G, B, D, F) into a meaningful phrase that is far easier to remember.
At first, Sayeki struggled to map the direction of the signal lever (forward or
back) onto the direction of the signal light (left or right), since the connection
between the two seemed completely arbitrary. To fix this, Sayeki adjusted his
mental model of the turn signal so that the lever corresponded to the direction
the handlebars moved when turning, as opposed to the direction of the
motorcycle itself. Since pulling the left handlebar back turns the motorcycle to
the left, and pushing the left handlebar forward turns the motorcycle to the right,
this connection made sense—the information became meaningful and was then
much easier to remember.
Designers can make this process much easier by creating meaningful controls. For
example, in a traditional car, the turn signal is pushed up to signal right and down to
signal left. This takes advantage of our sense of clockwise and counter-clockwise
direction: If we could extend the motion, the turn signal lever would be like the hand
of a clock, and pushing up on it would ultimately send it to the right.
* Approximate Models
Professor Sayeki’s mental model of the turn signal doesn’t account for all the
mechanics of turning a motorcycle (like the fact that executing a left turn often
means first steering slightly to the right). But the model works—it imposes meaning
on the direction of the signal lever, making it easy to remember and use. For most
everyday situations, approximate models are all we need to successfully interact
with our environment.
We use approximate models all the time, often without realizing it. Mental math is a
great example. For instance, if your job pays weekly and an official form asks for
your monthly income, you need a precise answer. You’d multiply your weekly
income by the number of weeks you work in a year and divide the resulting number
by twelve.
On the other hand, if you want to know your monthly income to determine if you
can afford that new streaming service, you could just multiply your weekly
income by four (the average number of weeks in a month). This mental math is
much easier to do, but the model ignores the fact that not all months have
exactly four weeks, so the answer is not exact—it’s an approximate model.
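The two models can be written out directly (a quick sketch; the function names are mine, not Norman’s):

```python
def monthly_income_exact(weekly_pay: float, weeks_per_year: int = 52) -> float:
    """Precise model: annualize the weekly pay, then divide by 12 months."""
    return weekly_pay * weeks_per_year / 12

def monthly_income_approx(weekly_pay: float) -> float:
    """Approximate model: treat every month as exactly four weeks."""
    return weekly_pay * 4

# At $1,000/week the exact model gives about $4,333 per month, while the
# four-week approximation gives $4,000: a small error, but a calculation
# that's far easier to do in your head.
```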
Even this chapter has relied on approximate models when describing short-term
memory. In reality, there are no “slots” in the brain where information sits, waiting to
be replaced by new information. For neuroscientists, this model is too simplistic to
be useful. But for designers, it gives a close enough explanation of the process to
inform design.
Airplane technology is evolving to make this process even easier by relaying this
information digitally, allowing instruments to be set automatically and important
information to be displayed visually. The idea is to create a design that takes as
much burden as possible off the pilots’ memory, which reduces the risk of
dangerous mistakes.
* Reminding
Transferring information from short-term memory into external knowledge is
relatively simple when the information is immediately relevant. But what about
remembering things that haven’t happened yet, like the date and time of a dentist
appointment?
The best-case scenario for natural mapping is controls that are incorporated
into the object itself. Gesture-controlled faucets are an example of this: to
operate the faucet, the only object you need to interact with is the faucet itself.
All four of these designs can be found on stovetops currently on the market. The
layout of the burners doesn’t change, but the layout and location of the knobs
change each time. The lack of standardization alone would make things confusing,
but on top of that, each of them requires mentally mapping a one-dimensional line
(the controls) onto a two-dimensional square (the burners). This requires some
complicated mental gymnastics, and mistakes can easily lead to serious accidents
and injury.
The easiest way to reduce the risk of accidents is through natural mapping.
Effective natural mapping puts the knowledge of how to use the stove completely in
the world rather than in the head. Compare these layout designs to the ones above:
A slight change in layout makes a huge difference. If the solution is so easy, why
does this problem still exist? One major reason is that the people buying appliances
are often not the people who will use them, so ease of use carries little weight in the
purchase decision.
The question of “Who is moving, the user or the object?” also impacts the way we
read text on a screen. As you read this summary, which direction do you scroll to
keep reading? On modern touchscreens, swiping up with a finger almost always
makes the text scroll down. This is the same action you’d use to move printed
material in real life (like reading a newspaper lying flat on a table and pushing the
newspaper away from you in order to bring the bottom sections into your view).
But there is another way to visualize this. Before the advent of touchscreens,
computer displays used a “moving window” paradigm, where the text was visualized
as a static display, and the screen as a small window onto that display that showed a
certain amount of the text at a time. To read more, the window would move, not the
text. In this case, the cursor controlled the window, not the text, so scrolling down
meant physically moving the cursor down, not up.
These examples make it clear that the “right way” to do something depends on our
mental model of it, and mental models can vary by culture. To design a successful
product, designers need to understand how the majority of target users will visualize
the concepts needed to use the product. If designers try to introduce a new
paradigm, users will be confused and frustrated. If this is a widely implemented
change, the product is not necessarily doomed to fail, but there will be a significant
adjustment period.
In your reminder, what was the signal? What was the message? (Remember, the
signal is your cue to remember something, and the message is what should be
remembered.)
Did your reminder rely more on knowledge in the head or knowledge in the world?
Did this work well for you?
If you were asked to remember the same type of information in the future, would
you choose this type of reminder again? Why or why not?
* Physical Constraints
Physical constraints are physical qualities of an object that limit the ways it can
interact with users or other objects. The shape and size of a jar lid act as physical
constraints that prevent it from being attached to the wrong jar; different-sized holes
* Forcing Functions
Forcing functions are physical constraints specifically designed to prevent certain
actions from occurring at the wrong times or in the wrong contexts. Three important
forcing functions are interlocks, lock-ins, and lock-outs.
The “dead man’s switch” function of dangerous machinery like chainsaws and
riding lawn mowers is another common example of an interlock. This spring-
loaded switch requires continuous pressure: If it is released, the device
immediately shuts off, ensuring that dangerous equipment will not run wild if the
user becomes incapacitated.
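The interlock logic is simple enough to sketch in a few lines. This is a toy model for illustration, not real safety-critical code:

```python
class DeadMansSwitch:
    """Toy model of a spring-loaded interlock: the motor can run only
    while the switch is actively held down."""

    def __init__(self) -> None:
        self.held = False

    def press(self) -> None:
        self.held = True

    def release(self) -> None:
        self.held = False

    @property
    def motor_running(self) -> bool:
        # The forcing function: no continuous pressure, no power.
        return self.held

saw = DeadMansSwitch()
saw.press()    # the motor runs while the operator keeps pressure on
saw.release()  # the instant pressure is released, the motor stops
```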
* Conventions
Schemas help us interpret our environment, but conventions and standards tell us
how to interact with it. In the restaurant example, a schema helped you interpret the
environment by combining the information from your senses and matching that to
an existing mental model of “restaurant." Now that you understand the environment,
you need to choose how to act appropriately within it. For that, you’re more likely to
rely on conventions. Conventions are culturally-dependent agreements about how
things are done. They are social rules that constrain our behavior.
Conventions also act as cultural constraints that affect how we interpret signifiers
and perceived affordances. Think of a typical doorknob. The knob itself is the same
shape and size as a cupped hand, so “grasping” is an intuitive perceived
affordance. But nothing about the knob itself tells us it should be twisted, or that its
purpose is to open and close doors. Those functions are cultural conventions that
are learned from other people in our environment. To illustrate this, think of a
doorknob mounted on a regular wall. You probably wouldn’t attempt to twist it, or
expect it to open or close the wall, because the knowledge that “knobs open doors,
not walls” is a universal cultural convention.
For example, in a building with thirty stories and ten elevators, each elevator
would stop at three predetermined floors. If you wanted to go to the fifth floor,
you’d find the control panel, enter “5," and be directed to the elevator that
services that floor (perhaps “Elevator B," since the elevators themselves are
typically given non-numerical labels to avoid confusion).
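The dispatch logic behind a destination-control panel can be sketched as a simple lookup. The floor assignments here are invented for illustration:

```python
def assign_elevator(destination: int, service_map: dict) -> str:
    """Route a rider to the elevator that services their destination
    floor, as a destination-control panel would."""
    for elevator, floors in service_map.items():
        if destination in floors:
            return elevator
    raise ValueError(f"No elevator services floor {destination}")

# Invented example: each elevator covers a fixed band of floors.
service_map = {
    "A": range(1, 5),    # floors 1-4
    "B": range(5, 9),    # floors 5-8
    "C": range(9, 13),   # floors 9-12
}
```

Entering “5” at the panel would route you to Elevator B, since that elevator’s band covers the fifth floor.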
For large buildings with many elevators (like hotels or office buildings), this design is
far more efficient than the traditional elevator, where the person going to the highest
floor has to wait while the elevator stops at every other passenger’s desired floor in
between. In spite of this, destination-control elevators are still rare. Cultural
convention is to blame here, since implementing this new system would require
overhauling a cultural convention about how elevators work. For designers and
developers, the increased efficiency was not worth the cultural taboo of violating
convention, and the legacy of less-efficient elevators won out.
The lesson of destination-control elevators is that consistency is important, because
there will always be cultural resistance to change. Typically, it’s not worth fighting
ingrained conventions for small changes. But if there is a revolutionary new way of
doing things that is objectively better, that change needs to be implemented
universally to avoid confusion.
* Standards
Laws are not the only official standards. Cars must be produced and operated
according to certain laws, but the transportation industry as a whole has its own
set of official standards, as does each specific automotive company.
* Discoverability
Physical, cultural, semantic, and logical constraints aid in discoverability. This
section applies these concepts to common objects like doors, switches, and
faucets.
* Switches
Switches are another notorious offender when it comes to discoverability. The
design and placement of switches need to provide two types of information: what
each switch does, and which switch controls which device.
Is there a better way? The author recommends two solutions: a natural mapping
approach (as discussed in Chapter 3) or an activity-centered approach. In the
stovetop example, natural mapping is fairly straightforward, since all four controls
and all four burners are in view simultaneously. For a lighting system where not all
the lights are in view from any one spot, an overhead diagram of the room or entire
floor makes mapping much easier.
* Faucets
You’ve probably encountered a faucet that didn’t immediately work the way you
expected it to. Maybe the hot and cold taps were switched, or the drain mechanism
had no visible controls, or the knobs looked like they should be twisted when in
reality they needed to be pushed. Nearly every residential and commercial building
has at least one faucet, so why haven’t we found a universal standard that gets it
right?
Control only on/off, keeping both temperature and flow rate constant (as in the
case of automatic faucets).
One integrated for both temperature and flow rate (for example, a knob that
controls temperature when twisted left or right, and flow rate when pushed in or
pulled out).
One control for hot water, one for cold (flow rate is controlled by the degree to
which each tap is opened).
The fact that there are fewer controls than necessary functions puts the burden on
the user to figure out which control does what. For example, for designs with one
control for hot water and one for cold, how do you know which is which, and how do
you know how to turn each control on or off?
Cultural conventions can be helpful here—in most of the world, the left control is for
hot water, the right is for cold. In the United States and the United Kingdom, though,
this rule is a loose guideline at best.
The design of the handle also presents a problem. For knobs that need to be
twisted, we have another convention to help: Any mechanism with a screw thread is
expected to tighten (or close) when turned clockwise and loosen (or open) when
turned counterclockwise.
The fact that there are so many possible configurations for faucet controls means
most of us resort to trial and error. While not ideal, this usually allows us to figure
things out pretty quickly and go about our day. But when feedback is not immediate
—when moving the control doesn’t create an immediate change in temperature or
flow rate—we have no confirmation that the system registered our input, so we
repeat it. This is a particular problem with shower controls, as the distance between
the control and the faucet causes a slight delay in feedback (which is how we end
up overcorrecting and making the water too hot or too cold).
If you found multiple physical constraints, choose just one to focus on for now. Why
do you think the designer chose to incorporate this specific constraint? What actions
do you think they were trying to prevent?
Ultimately, does this physical constraint make the device easier to use? If you were
asked to edit this constraint to make it easier to understand and use, what changes
would you make?
A “slip” is an error of execution. Slips happen when we have the right goal for
an action, but end up performing a different action without thinking (like
accidentally putting a chopstick in your drink instead of a straw). Slips happen
unconsciously—they are errors of doing.
The defining difference between slips and mistakes is that slips happen
unconsciously while mistakes involve conscious choices. Slips and mistakes can
be further broken down into subtypes. Mistakes can be broken down into
knowledge-based, rule-based, and memory-lapse mistakes. Slips can be classified
as either memory-lapse or action-based. Action-based slips can then be broken
down further into three types. Each of these subcategories will be defined in the
following sections.
Memory-lapse slips involve simple forgetting, like forgetting your phone at home
or driving away with your coffee cup still resting on top of the car.
Action-Based Slips
Action-based slips can be broken down even further into subtypes, including
capture slips, description-similarity slips, and mode errors.
A capture slip occurs when a familiar, frequently performed activity takes over
(“captures”) a similar but newer or less familiar one. The opportunity for the
familiar activity to capture the new activity arises when we forget that we’re
supposed to be focusing on the new activity. The element of forgetting means
that capture slips can also be classified as memory-lapse errors.
Mode errors happen when there is one control for multiple functions, such as a
universal remote. If you want to turn the TV volume up, but the remote is set to
“auxiliary mode," you may accidentally turn up the volume on a different device.
Mode errors can also have much more serious consequences, as in the case of
an airplane crash caused by a mode error involving the auto-pilot system. The
same instrument was used to control both vertical speed and angle of descent,
so toggling between the two variables required switching modes. In this case,
the pilot entered the correct value for angle of descent, but he did not realize
the equipment was still set to the vertical speed mode.
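The structure of a mode error is easy to show in code. This is a toy model whose mode names loosely follow the airplane example; it is not a depiction of any real avionics interface:

```python
class AutopilotPanel:
    """Toy model of a single control shared across two modes: the same
    dial input lands in whichever setting the current mode selects."""

    def __init__(self) -> None:
        self.mode = "vertical_speed"  # the other mode: "descent_angle"
        self.settings = {"vertical_speed": 0.0, "descent_angle": 0.0}

    def dial(self, value: float) -> None:
        # The dial writes to the *current* mode, so a correct value
        # entered in the wrong mode is still an error.
        self.settings[self.mode] = value

panel = AutopilotPanel()
panel.dial(-3.3)  # the pilot intends a -3.3 degree descent angle...
# ...but the panel is still in vertical-speed mode, so the value
# lands in the wrong setting.
```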
* Detecting Error
A truly error-proof design makes it easy to detect errors before they become
dangerous. Doing this requires understanding how we notice errors in the first
place, and more importantly, why we sometimes fail to notice them.
In general, slips are easier to detect than mistakes. Detecting simple errors like
action-based slips is typically easy—if you accidentally put your keys in the freezer,
you’re likely to realize it pretty quickly. Memory-lapse slips are harder to detect until
something cues retrieval of the memory (for example, not realizing you left your
wallet at home until you need to pay for gas).
Mistakes are difficult to detect precisely because they are conscious choices: we
usually don’t recognize a mistake right away because we genuinely believe we’re
making the right choice as we’re making it. Mistakes only become apparent later,
when something goes wrong and the cause is traced back to the original mistake.
One reason we don’t catch mistakes earlier is the natural human tendency to
explain away minor deviations from the norm. The author tells a story of driving with
First, create more conditions that must be met in order for a product to function
properly (this adds more slices of cheese, making it statistically less likely that
holes will line up).
Second, reduce the number or size of the holes in each slice (that is, reduce the
opportunities for error at each stage of the process).
Third, include feedback at every stage (this alerts the user if certain holes are
beginning to line up, giving them a chance to stop the process before an
accident occurs).
Understand the factors that contribute to errors and identify which ones are
possible to control.
Make the “undo” function easy to access and available at every possible step.
For actions that cannot be undone, require multiple confirmations that the user
wants to proceed with that action. This should be paired with a clear image of
the specific item being acted on (to prevent such mistakes as deleting the
wrong file).
Use error messages to present users with guidance on how to fix the problem.
Remember that most errors don’t require completely starting over. Make it
easier for users to fix a single step instead of the entire chain of actions.
Use frequent, effective feedback to prevent slips (for example, in hospitals, both
prescription labels and patient ID bracelets are scanned before administering
any medicine to ensure that the right person is getting the right drug).
Focus on more than products. Consider all the systems involved in making,
selling, and using the products (including social systems).
Test under real-life conditions. For computerized systems, this might mean
shutting down parts of the system without warning to test backup functions as
well as employee response under real-life stress.
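The guideline about pairing confirmations with a clear image of the specific item can be sketched as follows. The function name and prompt wording are my own, not from the book:

```python
def confirm_destructive_action(filename: str, ask) -> bool:
    """Require explicit confirmation that names the specific item,
    so the user confirms *this file*, not just the action 'delete'."""
    prompt = f"Permanently delete '{filename}'? This cannot be undone. [y/N] "
    return ask(prompt).strip().lower() == "y"

# In a real program, `ask` would be the built-in `input`;
# a stub stands in for user input here:
deleted = confirm_destructive_action("report-final.txt", ask=lambda prompt: "n")
# deleted is False: defaulting to "no" protects against reflexive clicks.
```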
If you experienced this error personally, did you blame yourself at the time? If you
haven’t experienced this error, do you think you would blame yourself if it happened
in the future?
Now, think about the error in light of what you learned in this chapter. Is there an
underlying cause? (Remember, this could be the design of a physical object, but
also company culture, work schedules, digital programs, and so on.)
Let’s use the idea behind “the five whys” to push this thinking even further. Keep
asking yourself “why” until you get back to a fundamental issue underlying all the
others. How many “whys” did it take to find the source?
* Observation
The first step in addressing any design problem is observation. The most useful
observation for design teams takes place in the real world, not in a controlled setting
like a lab. One way to do this is through applied ethnography, which involves
observing users in their usual environments, carrying out everyday activities, for as
long as possible. This gives the designer the most comprehensive picture of users’
needs and expectations. Applied ethnography is based on techniques of academic
anthropology but has been adapted to be much faster and have a more specific
aim.
It’s important that the people being observed are part of the intended audience for
the final product. The nature of the product determines the type of approach
designers take to choosing people to observe. Activity-based approaches are useful
for products that are used in more or less the same way, regardless of cultural
differences (like cars, computers, and phones). At first, this may seem like a
contrast to human-centered design, since the focus is no longer on individual users.
But activity-based design is ultimately a tool of human-centered design, since it
focuses on helping the user create a working conceptual model.
For example, driving a car is a complex activity that requires operating multiple
systems at the same time (like working the pedals while also steering, checking
the mirrors, and following traffic laws). This makes learning to drive difficult, but
the fact that each of those tasks serves the broader activity of “driving” makes
them easier to learn than they would be in isolation. The activity helps us make
sense of disconnected components.
* Idea Generation
Observation provides the necessary background knowledge to both discover and
define the problem. The first step to exploring solutions is idea generation. There
are a few rules for productive idea generation. First, generate as many ideas as
possible; don’t fixate on one or two early favorites.
Second, don’t censor yourself. Don’t kill ideas before they have a chance to
make the list. Even “silly” ideas can spark useful discussion.
* Prototyping
The next step in the design process is to explore the most promising ideas in more
detail. This is done through rapid prototyping, which focuses on creating very rough
models of several ideas instead of a more accurate model of one specific idea.
Rapid prototyping can happen through sketches, cardboard models, arrangements
of sticky notes, spreadsheets, or even skits. More detailed prototypes can be tested
once the list has been narrowed down to one or two ideas.
The “Wizard of Oz technique” can be helpful for testing early prototypes. Just like
the wizard in the classic story uses smoke and mirrors to make himself appear
larger and more powerful, designers can create a facade that mimics the
experience of the final design (for example, by having a research assistant play the
part of a future computer program and supply answers in an “automated” chat with
users).
* Testing
Once the team has narrowed the list of possible solutions to one idea and
developed that idea into a more sophisticated prototype, it’s time for the testing
phase. This begins with bringing in members from the target user group (usually five
is enough) and having them use the product how they normally would.
Even if the product is meant to be used by just one person, it’s useful to put
testers in pairs, with one using the prototype directly and the other offering
suggestions,
commentary, and questions. This requires users to talk through their thought
processes out loud, which is helpful for designers observing the testing session.
Not all products are meant to be used by every type of person. The author gives the
example of clothing—we don’t expect every piece of clothing to fit every person.
Security systems and exterior doors that should only be operated by authorized
people.
Making controls invisible, or positioning them so that only certain people will be
able to see and use them (for example, a doorknob placed much higher than
normal on a daycare center door to ensure only adults are able to operate it).
Using unnatural (but specific) mapping, so that only those with appropriate
training know which control operates which function.
Making controls impossible to operate with only one person (for example, by
spacing them across the room and requiring simultaneous action). This ensures
that no single person can activate dangerous operations.
If you had the chance to carry out these observations in the settings you chose,
what specific things would you observe? What might be important for helping you
identify problems with the current keyboard design? (For example, whether people
have added wrist rests or other supports to their keyboard setups.)
* “Featuritis”
Competitive pressures can create unexpected consequences. For example,
“featuritis” is the “disease” affecting product development, characterized by what
Norman calls “creeping featurism," or the temptation to add more and more features
to an already well-designed product. There are several possible sources of creeping
featurism, including:
The problem with creeping featurism is that it often degrades the overall quality of a
product. Instead, Norman recommends companies focus on their strengths, and
develop them even further. Rather than winning over customers with new features,
it’s better to do one thing better than anyone else on the market.
Many people believe that technology makes us less intelligent, since it provides
constant distraction and prevents us from learning to do things for ourselves. Has
technology had this kind of effect in your own life, either in general or with particular
devices and experiences?
Is there a middle ground here? Is technology helpful for you in some areas and
harmful in others? Why do you think that is?
Do a company’s values influence how likely you are to buy its products? What values
do you look for in an ideal company to support? (For example: fair wages,
sustainability, transparency, and so on.)
How important are those values to you when deciding whether to purchase
something? Do they outweigh any of the other considerations you listed above (like
Did the lessons in this book change the way you think about errors? Is it helpful to
think about tracing “human” errors in your own life back to design?
Going forward, what are your most important takeaways from this summary? How
will you apply them to your own life?