3.1 - Heim 2007 The Resonant Interface - ch1
In this chapter we explore the various interaction paradigms that have evolved
over the years. This will form the basis for our study of interaction design and will
define the scope of our endeavors.
Chapter 1 Interaction Paradigms
In 1945 Vannevar Bush published an article in the July issue of the Atlantic Monthly
entitled "As We May Think." In this article, Bush envisioned a device that would
help people organize information in a meaningful way. He called this device the
"Memex":
A Memex is a device in which an individual stores all his books, records, and communications, and which is mechanized so that it may be consulted with exceeding speed and flexibility. It is an enlarged intimate supplement to his memory. (Bush, 1945, 12)
Bush did not describe the interface in great detail. The bulk of the human-computer interface was embodied in the physical setup, which incorporated the desk, the screens, the levers, the platen, and the keyboard (Figure 1.1). These were
electromechanical components that were familiar to his audience and had somewhat obvious functionality. The Memex was never developed, but the concept of a
personal computerized aid served as inspiration for many of the computer pioneers
of the period.
Douglas Engelbart was fascinated by Bush's ideas, and he thought about how computers could be used to enhance and augment human intelligence. Engelbart pursued this concept during the 1960s with 17 researchers in the Augmentation
Research Center (ARC) of the Stanford Research Institute (SRI) in Menlo Park, CA.
Figure 1.1 The Memex. "Memex in the form of a desk would instantly bring files and material on any subject to the operator's fingertips. Slanting translucent viewing screens magnify supermicrofilm filed by code numbers. At left is a mechanism which automatically photographs longhand notes, pictures and letters, then files them in the desk for future reference" (LIFE 19(11), p. 123).
The concept of a personal, interactive computer was not common in that age of
large-scale mainframe computing:
It seemed clear to everyone else at the time that nobody would ever take seriously the idea of using computers in direct, immediate interaction with people. The idea of interactive computing - well, it seemed simply ludicrous to most sensible people. (Engelbart, 1968)
Engelbart remained dedicated to his vision, and his research team succeeded in
developing a human augmentation system based on their oNLine System (NLS),
which was unveiled at the Fall Joint Computer Conference in San Francisco in 1968.
The NLS was similar to contemporary desktop computing environments. It had
a raster-scan display, a keyboard, and a mouse device. The mouse, invented by
Engelbart, was first demonstrated at the 1968 conference. There was also a five-key
chorded keyboard. The NLS was a networked system that included e-mail and split-screen videoconferencing (Figure 1.2).
In the same year that Engelbart demonstrated the NLS, J. C. R. Licklider published
an article entitled "The Computer as a Communication Device" (Licklider, 1968). In
this article, Licklider describes a meeting arranged by Engelbart that involved a team
of researchers in a conference room with six television monitors. These monitors
displayed the alphanumeric output of a remote computer. Each participant could
use a keyboard and a mouse to point to and manipulate text that was visible on all
the screens. As each participant presented his or her work, the material was accessed and made visible to the other members of the group.
The essential insight of Licklider's article was the potential for advanced modes
of communication through a network of computers. The concept of networked
computers was not new - a number of institutions already had successfully networked their computers; however, Licklider pointed out that these networks were
restricted to groups of similar computers running the same software that resided in
the same location. These were basically homogeneous computer networks.
Licklider envisioned something more like the current manifestation of the Internet
with its diversity of computing technologies and platforms.
Along with the concept of heterogeneous computer networks, Licklider introduced another significant concept in the evolution of human-computer interaction. Because Licklider imagined a high degree of integration in everyday life for networked computing, he saw the need to address the potentially overwhelming nature of the system. His solution was to create an automated assistant, which he
called "OLIVER" (online interactive vicarious expediter and responder), an acronym
that honors the originator of the concept, Oliver Selfridge.
OLIVER was designed to be a complex of applications, programmed to carry
out many low-level tasks, which would serve as an intermediary between the user
and his or her online community. OLIVER would manage files and communications,
take dictation, and keep track of transactions and appointments.
illumination is stable and screen position can be fixed to avoid glare. PDAs, on the
other hand, can be used in many types of environment, from the bright outdoors to
dark automotive interiors with intermittent outside illumination. The type of screen,
available display settings, use of color, size of type, and design of the icons must be
defined by the broad range of possible viewing environments.
1.2. Computing Environments
and be able to use them comfortably. Users who need to copy text from hard
copy should have some type of holder on which to place the source material so
that they can easily see both the hard copy and the screen.
Illumination can affect the visibility of screen elements. Lighting in outside computing environments can be difficult to control.
• Noise-High levels of environmental noise can make auditory interfaces difficult, if not impossible, to use. Some environments, such as libraries and museums, are
sensitive to any type of auditory stimuli. Mobile information appliances, such as
cell phones and pagers, must have easily accessible audio controls. Many cell
phones have vibration settings to avoid annoying ring tones. Other devices, such
as wearable computers, also must be designed to address this concern.
• Pollution-Some industrial environments, such as factories or warehouses, are
difficult to keep clean. Sometimes users must work with machinery that is dirty
or greasy. Peripherals such as keyboards and mouse devices can easily become
dysfunctional in such environments. Solutions such as plastic keyboard covers,
washable touchscreen covers, and auditory interfaces may address some environmental concerns.
Studies have shown that the social environment affects the way people use computers. Computer use has also been shown to affect human social interaction. This is a
significant concern for collaborative computing systems. It would be unfortunate if
a collaborative system inhibited human-human interactions.
Different computing paradigms imply different social environments. For instance, personal computing is usually a solitary activity done in an office or an isolated corner of the house. This is very different from ubiquitous computing (see later
discussion), which implies a more public setting.
Public computers that are used for sensitive personal activities, such as ATMs,
should be designed to protect the user's privacy. Audible feedback of personally
identifiable information may be embarrassing or even detrimental. Negative auditory feedback that announces when a user makes an error can cause embarrassment
for people who work in group settings.
Computers that are used in group settings must afford all members adequate
viewing and auditory feedback. Group members may also need to access system
functionality through peripheral devices. These must be made available to the
group members who need them.
Computer system designs must address the relevant issues that arise from these different cognitive and environmental dynamics. The following list covers some of the aspects related to the cognitive computing environment:
• Age-Some systems are designed specifically for young children who are
learning basic skills such as reading or arithmetic. Other systems are designed
for scientists and technicians who need to study the human genome.
Computing environments that are not age appropriate run the risk of being frustrating or insulting. Technicians may find cute animations annoying, and children may become bored with long lists of hyperlinks.
• Conditions Related to Disabilities-Computer systems can be designed
specifically for people with conditions related to disabilities. As more public
information is being made available through information appliances, the need
to make these devices accessible to people with cognitive disabilities is becoming more important. If public information appliances are designed poorly,
they may significantly diminish the quality of life for a significant portion of the
population.
• Level of Technical Knowledge-Some systems are designed for particular
user profiles with specific skills and facilities. These systems can be targeted to
the particular needs of the user. Other systems are designed to be used by the
general population and must take into consideration a wider range of cognitive
abilities. General computing devices are also used by people with diverse technical backgrounds. It may be necessary to afford users different levels of functionality so they can use the system in the most efficient way.
• Level of Focus-Computing systems can be designed for people who are
completely focused on the task at hand, such as people playing digital games,
or they can be designed for someone who is, say, monitoring many work-related activities that involve heavy machinery in an environment that has excessive noise pollution. The priorities and requirements for the interface design will be different for these two levels of focus.
• Stress-Computers are used in diverse environments that impose
different levels of cognitive stress on the user, from leisure activities such as listening to music to mission-critical environments as in air traffic control. People
who use computers in stressful mission-critical environments need clear and
unambiguous interface designs. Although this may be an important concern for
all computing devices, there is no room for error in fields like medicine, aerospace, or the military. The priority level of certain design criteria may increase
in certain situations.
Since the publication of Vannevar Bush's article describing the Memex, computer
scientists and researchers have created numerous and diverse configurations for
computing systems. These configurations involve the construction and arrangement
1.3. Analyzing Interaction Paradigms
We will use the 5W+H heuristic to define existing interaction paradigms and spaces
and explore the elements and objects with which the user interacts; this will help to
give us an understanding of how these systems work and how to apply that knowledge to the development of future systems.
We will make a slight alteration in the sequence and add a few couplings to the
heuristic. Although all of the heuristic's elements are interrelated, some have greater
effects on the others. These couplings are, therefore, based on degree of relationship and are not exclusive.
What/How-An in-depth understanding of the physical and virtual interface components of the various computing systems (the what) is essential for the creation of usable systems. We will look at the various physical components of interfaces, such as the I/O devices, and briefly explore the various related interface elements that define how we use these systems (the how), such as windows and icons. The interface components themselves will be more thoroughly explored in Chapter 10.
Where/When-Computer systems can also be defined by their particular physical computing space. This can clearly be seen by comparing the desktop computing space with the wearable computing space. Wearable computing is a result of advances in the fields of mobile and network computing and has given rise to a new network scope: the personal area network (PAN). A PAN is defined by the user's ability to access computing functionality remotely (the where) and at any time (the when), regardless of his or her physical location, or even while he or she is in transit if desired. This is not something one would attempt with a cathode ray tube (CRT) display.
Who/Why-We will also look at the types of tasks these physical devices and interface components facilitate. These tasks define the reasons why we use computers. This does not disregard the fact that technological developments are often driven by the need to accomplish new tasks or refine the way current solutions are implemented. It simply means that current systems facilitate certain tasks, which create particular motivations for their use. For instance, mainframes were used by highly trained technicians who were well versed in the inner workings of the system. These were often the same people who programmed the computer's various procedures; they understood the logic and syntax intuitively and were involved in large-scale computing for the governmental, industrial, and research communities.
We will now explore some of the various interaction paradigms and their manifest interaction spaces. In the following sections, we investigate the various components of
these systems. We will then, in Chapter 2, look at the various styles of interaction that
these spaces involve, from text-based interaction to graphical and multimedia interfaces.
The principal paradigms we will look at include the following (Figure 1.3):
• Large-scale computing
• Personal computing
• Networked computing
• Mobile computing
We will look at some of the significant manifestations of these paradigms and
explore the synergies that arise from the confluence of particular paradigms:
• Desktop computing (personal and networked)
• Public-personal computing (personal and networked)
• Information appliances (personal, mobile, and networked)
1.4. Interaction Paradigms
Figure 1.3 Interaction paradigms. Large circles represent principal paradigms. Oblong shapes represent convergent paradigms. Words without surrounding shapes represent specific system architectures (sometimes used for a paradigm reference, as in desktop computing for personal computing).
We will then investigate some of the convergent interaction spaces that have
evolved in recent years:
• Collaborative environments (personal, networked, and mobile)
• Embodied virtuality systems:
- Ubiquitous computing (personal, networked, and mobile)
- Ambient computing (networked and mobile)
- Invisible computing (mobile and networked)
- Wearable computing (personal, networked, and mobile)
• Immersive virtual reality (personal, networked, and mobile)
the user and the computer. Large programs were usually processed in batches overnight when the system was not needed for routine daily processing.
Due to the great expense of maintaining mainframe computers, it was not possible to give excessive amounts of time to single users of the system. This led to the development of time-sharing services (TSSs). TSSs were schemes that used the downtime of one user for another user who was currently active. TSSs were generally available during the day.
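The core idea of time sharing, that one user's idle moments become another user's processor time, can be illustrated with a round-robin scheduler. This is a toy sketch of the general scheme, not a description of any historical TSS implementation; the function name and job format are invented for illustration:

```python
from collections import deque

def time_share(jobs, quantum=1):
    """Round-robin sketch of a time-sharing scheme: each active user
    gets a short slice of processor time in turn, so no single user
    monopolizes the machine. `jobs` maps a user name to units of work."""
    queue = deque(jobs.items())
    schedule = []  # order in which users receive the processor
    while queue:
        user, remaining = queue.popleft()
        schedule.append(user)           # give this user one quantum
        remaining -= quantum
        if remaining > 0:
            queue.append((user, remaining))  # still has work; rejoin the queue
    return schedule

# Three users with unequal workloads interleave instead of waiting in a batch queue.
print(time_share({"ada": 3, "bob": 1, "carol": 2}))
```

Note how the short job ("bob") finishes quickly instead of waiting behind the long ones, which is exactly the responsiveness that made interactive use of a shared mainframe practical.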
Smaller, less expensive versions of the mainframe, called minicomputers, eventually became available to companies who could not afford the larger and more powerful mainframes or who simply did not need as much computing power.
These minicomputers were structured according to the same interaction architecture
as the mainframes.
Concurrent developments in computing for scientific research resulted in the development of high-powered machines, called supercomputers, which were designed to be used for specific computing tasks and to do them as quickly as possible. These highly specialized machines crunched large amounts of data at high speed, as in computing fluid dynamics, weather patterns, seismic activity predictions, and nuclear explosion dynamics.
Supercomputers are used for the very-high-speed Backbone Network Service (vBNS) connections that constitute the core of the Internet. Mainframe computing, on the other hand, has often seemed to be on the verge of extinction. However, this is far from the truth. Mainframes exist in their current incarnation as so-called enterprise servers that are used in large-scale computing environments such as Wall Street.
When/Where-Most large-scale computers were owned by government agencies and large research institutes that were often affiliated with large universities.
The personal vision of scientists like Douglas Engelbart and the work done by the researchers at the Augmentation Research Center, along with contemporaneous advances in technology, such as the shift from vacuum tubes to integrated circuits, led to the development of smaller, less expensive microcomputers and a new paradigm: personal computing. This new paradigm dispersed computing power from a centralized institutional space to a personal space and led to related developments in hardware and software design that involved a greater focus on the human interface.
Computing power was now available to people with little or no technical background; therefore, the interaction needed to become more intuitive and less technical. Researchers began exploring the use of graphical representations of computer functionality in the form of symbols and metaphors.
The Alto, developed at the Xerox Palo Alto Research Center in 1973, was the
first computer to use a GUI that involved the desktop metaphor: pop-up menus,
windows, and icons (Figure 1.5). It is considered to be the first example of a personal computer (PC). Even though the Alto should technically be considered a small minicomputer and not a microcomputer, it served as the profile for the personal
computing paradigm: one user sitting at a desk, with uninterrupted access and full
responsibility for the functioning and content of the computer's files and programs.
The NLS had a great influence on the development of the Alto (many of the
PARC researchers came directly from ARC), which involved Ethernet connections, a
CRT display (mounted on its side in portrait orientation), a keyboard, a three-button
mouse, and the same five-key chorded keyboard. The system also included a laser
printer developed specifically for the Alto. The Alto was never marketed commercially, but its interaction architecture provided the inspiration for the development of the Apple® Lisa, which had a more sophisticated GUI environment.
Figure 1.5 The Alto computer (1973). Courtesy Palo Alto Research Center.
Along with the development of PC hardware and operating systems was a parallel evolution in application software. With the development of programs like the VisiCalc spreadsheet program and Lotus 1-2-3, as well as word processing applications and digital games, the PC became a powerful tool for both the office and the newly emerging home office/entertainment center. The PC has gradually become an integral part of our professional and personal lives.
The development of the PC meant that users did not have to submit their programs to a system administrator who would place them in a queue to be run when the system was available. Users could now access the computer's functionality continuously without constraint. The software industry boomed by creating programs that ranged from productivity suites, to games, to just about anything people might want.
Numerous storage media options are available for PCs; some are permanent, like CDs and DVD-ROMs, and some are updateable, like CD-RWs. Storage media technology is becoming cheaper and smaller in size. There is a consistent evolution in storage media that quickly makes today's technology obsolete tomorrow. For instance, the once ubiquitous floppy disk is rapidly joining the mountain of obsolete storage media. This may have consequences in the future for GUIs that traditionally use the image of a floppy disk on the icon for the "Save" function. Portable hard drive technologies range from the large-capacity arrays used in industry to the pen-sized USB sticks.
Where/When-With the advent of the PC, computing power became available for home use and could address the home/office and entertainment needs of a large segment of the population. Home access was basically unfettered; users could access computing functionality whenever they wanted. The home computer has become a staple in countless households. A generation that takes the presence of a PC for granted has already grown to adulthood; they have never lived in a house without one.
Who/Why-PCs are used for a wide variety of reasons. Applications range
from productivity tools that include word processing, spreadsheet, database, and
presentation software to communication tools like e-mail and instant messaging
software.
Shopping on the Web is becoming an integral part of consumer behavior.
People also use the Web for banking and bill paying. Legal and illegal uses of file
sharing technologies have become common. The PC has become a center of activity in the home with applications for both work and play.
Perhaps the more serious question is not who uses personal computers but rather who does not use personal computers. Researchers have begun to explore the ramifications of socioeconomic issues on the information gap - the distance between people who have access to electronic information delivery and those who do not - and education lies at the center of this conversation. PCs, especially those with an Internet connection, are seen as a significant resource for educational institutions. Students in schools that cannot afford to offer this resource are at a disadvantage in a world that increasingly requires basic computer literacy for entry-level employment opportunities.
lack full privileges and cannot access some operating system functionalities. This is done to ensure the system's proper functioning and protect against malicious or accidental alterations of its settings. There is also a need to make sure that peripheral devices such as keyboards and pointing devices are not damaged or stolen.
Public Information Appliances-Public information appliances are context- and
task-specific devices used by the public to access information or services offered by
corporate or governmental agencies. They most often take the form of information
kiosks or ticket-dispensing machines.
The most common kiosk-type device is the ATM, which basically consists of a touchscreen connected to a network of variable scope (Figure 1.6). The screens comprise buttons, virtual QWERTY keyboards, and number pads, all of which are designed for easy operation and can accommodate users of varying finger sizes.
Public information appliances are found in both indoor and outdoor installations. It is important that screen visibility remain high. This can be especially difficult in outdoor locations, where lighting conditions can vary according to weather and time of day.
Touchscreens are the preferred form for these kiosks because of the high volume of public use. Peripherals such as keyboards and pointing devices would increase maintenance costs as well as the downtime some systems would incur due to theft and mistreatment.
Although the personal computing architecture is well suited for the home, it does not entirely address the needs of the office user. Office workers need to communicate and share documents with each other. They require access to corporate documents
Networked computers can offer more things to more people and, therefore, have expanded the diversity of the user population. Young people are spending increasing amounts of time playing networked games and instant messaging each other. Older people are enjoying the photos of grandchildren who live far away.
People of all ages and backgrounds with diverse interests and levels of computer literacy are finding reasons to use networked computers (mostly via the Web), such as participating in online auctions and conducting job searches. Networks have significantly altered the face of the user.
If we value the pursuit of knowledge, we must be free to follow wherever that search
may lead us. The free mind is not a barking dog, to be tethered on a ten-foot chain.
Adlai Stevenson
Data become information when they are organized in a meaningful, actionable way.
Knowledge is obtained by the assimilation of that information. In today's world,
computers play an important role in the processing of data and the presentation of
information.
In the past, computer access to data was restricted by location. In many situations, data had to be gathered and documented on location and transported to the computer. The data would be processed at the central location, and the information gained from that processing would initiate other data-gathering activities. This required time and intermediate steps that could sometimes affect the gathering process as well as the integrity of the data, especially because some data are extremely time sensitive.
That is no longer the case. With mobile and networked computing technologies, computers are no longer "tethered on a ten-foot chain." They can be transported to the remote locations where data are collected. Data can be captured, processed, and presented to the user without his or her ever leaving the remote site. Mobile technology embodies the essence of Adlai Stevenson's comments on the search for knowledge.
What/How-Mobile computing technologies comprise a very diverse family of
devices that have rapidly become ubiquitous in most industrialized societies.
There are numerous brands and models of laptop computers, and tablet computers have recently appeared. Digital game and MP3 players represent a thriving commercial segment. Many corporations distribute handheld remote communication and tracking devices to field agents to enhance on-site delivery and maintenance services.
Portable computers ship with embedded touchpads and pointing sticks. Their
keyboards range from a bit smaller than desktop keyboards to full size; however,
they do not have the added functionality of an extended keyboard with its number
pad and additional function keys.
Laptop and tablet computers use LCD screens (Figure 1.7). Tablets also provide
touchscreen interaction and handwriting recognition. Tablets are made in either a
"slate format," without a keyboard, or in a laptop design that can be opened into
the slate format.
Handheld devices, on the other hand, vary significantly in terms of hardware
and software configurations. They can range from small computer-like cell phones
with tiny keyboards and screens to MP3 players like the Apple iPod®, which uses a
touch wheel and small LCD window (Figure 1.8).
portable computers, and new interaction designs are appearing. The tablet computer with its slate format and handwriting recognition software represents such a new approach to mobile interaction designs.
The most common format for handheld computers is the PDA. These are generally palm-sized devices that run a native operating system (OS), such as the Palm OS® or some version of Microsoft® Windows® like Windows CE®. Some of these interfaces do not differ greatly from their desktop cousins and can become cumbersome due to small screen size. Others, like the Palm OS, that used more innovative designs have become quite popular. In these devices, input is generally accomplished with a stylus using icons, menus, and text entry through miniature soft keyboards. Effective handwriting recognition is not very common, but the calligraphic systems built into the operating systems of these devices are stable, reliable, and relatively intuitive.
The small size of these devices, their limited battery life, and the relatively low power of their processors can be problematic for some operations. Therefore, it is advantageous to use a desktop system in tandem with the mobile device for offloading complex tasks or uploading the latest versions of information services. Handhelds can also use flash card technology to store and transfer files and programs to and from their desktop cousins.
Mobile systems can access network resources, such as the Internet, through
wireless access points (known as Wi-Fi hot spots) using the IEEE 802.11 family of
wireless protocols. Relatively reliable connections and high speeds can be obtained
if the signal is strong. Mobile devices can also connect to the Internet through a cell
On-board navigation. Courtesy BigStockPhoto.com.
involve the nature of the group tasks, the reasons for group interaction, and the potential tools (groupware) for facilitating the collaborative tasks.
What/How-Computer-mediated communication, as currently practiced, involves networked PCs using familiar desktop environments such as Microsoft Windows®, Mac OS X®, or some graphical interface to a Linux system. These systems are often augmented by the addition of audio and video devices such as microphones and Web cams. Engelbart's NLS incorporated many of the components of a CSCW environment, including audio and video conferencing and work flow support.
Current CMC environments may also involve large projected displays that use digital projectors and "smart screens," such as the SMART Board® from SMART Technologies Inc., which can be operated by touch or a computer interface (Figure 1.10).
The Web, with its suite of standardized protocols for network layers, has been a significant force in the development of collaborative environments by allowing greater interoperability in heterogeneous computing environments.
Where/When-CMC has many possible manifestations that cover same- or
different-place/time scenarios and involve diverse families of groupware applications
and devices. They can be synchronous, as in videoconferencing and instant
messaging; or asynchronous, as in the recommender systems built into e-commerce
Web sites such as Amazon.com or e-mail systems. They can also involve remote
access white boards, chat rooms, and bulletin board services.
Who/Why-CMC can aid business people who need to collaborate with remote
customers or employees. Product development teams involved in product design,
Some of us use the term "embodied virtuality" to refer to the process of drawing computers out of their electronic shells. The "virtuality" of computer-readable data - all the different ways in which it can be altered, processed and analyzed - is brought into the physical world. (Weiser, 1991, 95)
Groupware and CMC applications have facilitated the decoupling of information and
location. Information that resides on one machine can be shared across a network
and distributed to group members in remote locations. CSCW environments must be
designed around the way people normally work on collaborative projects. The
diverse nature of CMC applications reflects the complexity of collaborative work.
One of the traditional obstacles in CSCW is that computing power and functionality have been concentrated inside the physical confines of a
• Side 1-Portable/manual devices are carried by the user, who is free to manipulate them directly.
• Side 2-Manual/fixed devices such as ATMs and kiosks are manipulated by the user but are fixed in place.
• Side 3-Portable/automated devices are read by situated sensors, such as the car transceivers used for toll booth payments. There are no possible manual operations.
• Side 4-Automated/fixed devices such as alarm sensors can be used to detect the presence of intruders or industrial hazards.
Embodied virtuality environments: location/operation (portable/fixed; manual/automated).
There are often no clear-cut distinctions among the various approaches; they
represent a constellation of interaction paradigms, each combining different degrees
of user control and embedded architectures. Each approach, however, represents a
unique manifestation with particular characteristics that are useful in different
circumstances. We will concentrate on three approaches: UbiComp, invisible, and
wearable computing. These are the most widely used terms and they cover the
overall scope of the EV domain.
The concept of a computational grid that is seamlessly integrated into our physical
environment is the essence of ambient computing. Smart environments that sense
when people are present can be programmed to adjust lighting and heating facilities
based on the location and number of people in the room.
These environments can be programmed to identify particular individuals using
voice or face recognition as well as wearable identity tags and adjust certain
environmental parameters according to specific user preferences. With this technology,
public spaces could be made safer through the monitoring of environmental
conditions for the detection of gas leaks and other hazards. Such a system could
also indicate best possible escape routes in the case of fire.
Commercial enterprises could tailor displays that respond to gender or age and
inform customers about the merchandise they pick up and examine. Stores could
become more accommodating to tourists who face obstacles because of language or
cultural differences. Consider the classic problem of the tourist who wants to buy a
souvenir in a country that uses a different currency: he or she must know the exchange
rate and do the mental calculation. Alternatively, he or she could use an information
appliance that had been set to the exchange rate and would do the calculation. In an
invisible computing space, however, the prices would be displayed to the shopper in
his or her native currency, based on the most recent exchange rate.
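The currency calculation described above is simple enough to sketch. The function below is a hypothetical illustration: the exchange rate, the rounding convention, and the function name are assumptions for the sketch, not part of any real system described here.

```python
def display_price(price, exchange_rate):
    """Convert a local price into the shopper's native currency.

    An invisible computing space would fetch exchange_rate from a
    live feed; here it is just a parameter (an assumption of this sketch).
    """
    return round(price * exchange_rate, 2)

# A souvenir priced at 1,500 units of local currency, with an
# assumed exchange rate of 0.0095 native units per local unit:
print(display_price(1500, 0.0095))  # 14.25
```

The point of the invisible computing version is that neither the rate lookup nor the arithmetic is visible to the shopper; only the converted price is.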
The most profound technologies are those that disappear. They weave themselves into
the fabric of everyday life until they are indistinguishable from it. (Weiser, 1991, 94)
The goal of humionics is to create an interface that is unobtrusive and easily
operated under work-related conditions. Therefore, traditional I/O technologies are
generally inadequate for this computing environment. Wearable systems must take
advantage of auditory and haptic as well as visual interaction.
What/How-I/O device design for wearable computers depends on the context
of use. Because wearable computing involves diverse real-world environments,
wearable systems may require custom-designed I/O devices. This can be costly, and
their use can present steep learning curves to new users. Therefore, there is a
concerted effort in the wearable computing community to create low-cost interfaces
that use familiar concepts without resorting to standard mouse and keyboard
devices and their related styles of interaction.
One example of a new interaction style is the circular dial on the VuMan
developed by the Wearable Group at Carnegie Mellon. The VuMan is a wearable device
that allows users access to large amounts of stored information such as manuals,
charts, blueprints, or any other information from repositories that might be too heavy
or bulky to carry into the field or work site. It uses the concepts of circular input
and visualization to map the circular control device with the contents of the screen.
This type of navigation has become common to users of the popular iPod music
player from Apple Computer.
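The circular-input idea can be sketched as a mapping from dial angle to an index into the on-screen list. This is a minimal illustration of the concept, not the VuMan's actual firmware; the function name and the equal-sector modular mapping are assumptions.

```python
def dial_to_index(angle_degrees, num_items):
    """Map a dial angle (0-360 degrees) to an item in a circular menu.

    Rotating past 360 wraps around, mirroring the circular
    visualization described for the VuMan and the iPod wheel.
    """
    sector = 360.0 / num_items          # degrees allotted to each item
    return int(angle_degrees % 360 // sector)

# With 8 menu items, each item occupies a 45-degree sector:
print(dial_to_index(0, 8))    # 0
print(dial_to_index(100, 8))  # 2
print(dial_to_index(400, 8))  # 0  (wraps past a full turn)
```

The appeal of this mapping is that the physical control and the on-screen layout share the same geometry, so the user's motor action and the visual feedback stay aligned.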
Wearable computing has created a new scope for network structures-the
personal area network (PAN). There are two types of PANs. The first type uses
wireless technologies like Bluetooth® to interconnect various wearable devices that may
be stitched into the user's clothing or to connect these wearable devices with other
information appliances within the personal space near the user or a few meters
beyond. An example is a PAN in an automobile that connects a cell phone with an
embedded system that facilitates hands-free operation. Standards for wireless PANs
(WPANs) are being developed by the Institute of Electrical and Electronics
Engineers (IEEE) 802.15 Working Group for WPAN.
The other type of PAN uses the body's electrical system to transmit network
signals. Work is being done on this at the MIT MediaLab with the Personal Information
Architecture Group and the Physics and Media Group.
Researchers at IBM's Almaden Research Center are researching PANs and have
created a device the size of a deck of cards that allows people to transmit business
card information simply by shaking hands. Data can be shared by up to four people
simultaneously. Work on PANs has also been done by the Microsoft Corporation,
which has registered a patent (no. 6,754,472) for this type of PAN that uses the skin
as a power conduit as well as a data bus.
Where/When-To understand the dynamics involved in wearable computing
more fully, we define three "spaces" that are involved in the completion of a task:
the work space, the information space, and the interaction space:
• An information space is defined by the artifacts people encounter, such as documents and schedules.
• An interaction space is defined by the computing technology that is used. There are two components: the physical interaction space and the virtual interaction space. These components can also be understood as the execution space and the evaluation space, respectively. There is a dynamic relationship between the two components that creates the interaction space.
• Work space refers to any physical location that may be involved in a task.
There may or may not be a direct correlation among the different spaces. That
is, the interaction space may or may not be the same as a person's work space or
information space. In the case of a computer programmer, Web designer, or a
professional writer, the information and work spaces are both part of the interaction space
created by a PC.
Consider the context of library-based research. Many modern libraries use
digital card catalogs that contain meta-data about books in the form of abstracts and
location indicators based on the Dewey Decimal System. The digital catalog constitutes an
interaction space, although it is only part of the work space. The book or books in
which the researcher is interested reside on a physical shelf that is outside the
interaction space. Therefore, while the catalog resides within the information,
interaction, and work spaces, the book itself resides outside the interaction space. This
creates a gap between parts of the work space and the interaction space (Figure 1.13).
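The gap can be made concrete by modeling the three spaces as sets of artifacts: anything in the work space but outside the interaction space is unreachable through the computing technology. The artifact names below are invented purely for illustration.

```python
# Hypothetical artifacts in the library scenario
work_space = {"digital catalog", "book on shelf"}
information_space = {"digital catalog", "book on shelf"}
interaction_space = {"digital catalog"}  # only the catalog is computer-mediated

# The gap: parts of the work space the computing technology cannot reach
gap = work_space - interaction_space
print(gap)  # {'book on shelf'}
```

Wearable computing, as the next paragraph describes, amounts to shrinking this set difference toward empty.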
Wearable computing seeks to bridge some of these gaps and make computing
functionality more contextual. A possible solution in this scenario might be to
embed a chip in the book that communicates meta-data about the book to the
researcher's eyeglass-mounted display. The chip could respond to queries entered
into a system worn by the user. Another possibility might be to allow the user to
access a digital version of the book on a wearable tablet-type device. In this case,
the catalog would interact with the user's wearable system and fetch the contents of
the book in digital form.
Who/Why-Wearable interaction devices would be able to make information
contextually available, as well as tailor it for specific users. For instance, a student
using wearable technology would be able to visit a museum and receive auditory or
textual information about works of art based on his or her proximity to distributed
sensors; this information would be filtered through personal preference settings
such as grade level or course of study parameters.
The military has been involved in a number of research projects involving
wearable computers. The systems they are developing are designed to enhance the
information processing and communication capabilities of infantry soldiers. Figure
1.14 summarizes the various EV environments and their characteristics based on the
location/operation diagram. Table 1.1 outlines various EV environments and their
characteristics.
Table 1.1:
Invisible-Manual: User does not interact with the computer. Automated: System takes care of all computer functionality. Embedded: Some system components are embedded. Portable: Some devices are portable.
Wearable-Manual: Many of the wearable components allow manual control. Automated: Some of the wearable components interact automatically with embedded sensors. Embedded: Some systems use situated sensors that interact with wearable components. Portable: Most system components are portable (wearable).
The goals of the virtual reality (VR) community are the direct opposite of the goals
of the EV community. EV strives to integrate computer functionality with the real
world, whereas VR strives to immerse humans in a virtual world.
Virtual reality technologies can be divided into two distinct groups: immersive
and nonimmersive environments. Nonimmersive VR environments can be
characterized as screen-based, pointer-driven, three-dimensional (3D) graphical
presentations that may involve haptic feedback from a haptic mouse or joystick.
In its simplest form, nonimmersive VR uses development tools and plug-ins for
Web browsers to create 3D images that are viewable on a normal computer display.
Two examples are the Virtual Reality Modeling Language (VRML), an International
Organization for Standardization/International Electrotechnical Commission (ISO/IEC)
standard that also can be used to create models for use in fully immersive VR
environments, and Apple Computer's QuickTime VR®, which uses two-dimensional (2D)
photographs to create interactive 3D models. More sophisticated systems allow for
stereoscopic presentations using special glasses and display systems that involve
overlay filters or integrated display functionality.
The goal of immersive virtual reality is the realization of Ivan Sutherland's
ultimate display. This goal may remain elusive for the foreseeable future, but progress
in that direction continues. Contemporary systems use visual, auditory, and haptic
technologies to create as realistic an experience as possible. Even the person's sense
of equilibrium is involved in some of the large platform VR devices that offer
locomotion simulation. Taste and smell, however, have not yet been addressed.
What/How-Immersive VR environments are designed to create a sense of
"being" in a world populated by virtual objects. To create a convincing illusion, they
must use as many human perceptual channels as possible. Therefore, sophisticated
visual, auditory, and sometimes haptic technologies must be used to create a truly
engaging virtual experience.
One of the most common immersive VR I/O devices is the head-mounted display
(HMD). These are relatively large headsets that present stereoscopic images of
virtual objects. HMDs can be cumbersome and annoying due to the presence of
connecting cables, as well as the insufficient field of view and low resolution
capabilities found in the lower-end models. Newer designs that incorporate wearable
displays attached to eyeglasses and the emerging laser technologies show promise
for lighter, less cumbersome, and more versatile displays.
Another approach to visual displays of VR environments is to use room-size
spatial immersive display (SID) technology. SIDs use wall-size visual projections to
immerse the user in a panoramic virtual environment. These rooms create a visual
and auditory environment in which people can move about and interact with virtual
objects. Examples of SIDs are the ImmersaDesk®, PowerWall®, Infinity Wall®,
VisionDome®, and Cave Automated Virtual Environment® (CAVE).
There are several CAVE environments in existence around the world. It has
been suggested that collaborative work could be done by networking these CAVE
systems and creating a tele-immersion environment. Traditional videoconferencing
allows people in remote locations to work together as though they were physically
in the same location. Tele-immersion would allow people to work together in the
same virtual environment, observing and modifying the same virtual objects.
Researchers are exploring ways to exploit the potential of this technology for
business and educational applications by making tele-immersion more available
and less expensive. New, lower-cost and less space-intensive desktop stereo
displays, such as the Totally Active Workspace, Personal Penta Panel, and Personal
Augmented Reality Immersive System, are being developed. These systems would
make tele-immersion more accessible for general computing situations.
Current desktop systems generally use larger-than-desktop displays and HMDs in
combination with high-quality audio systems. However, researchers at the Electronic
Visualization Laboratory, University of Illinois at Chicago, are exploring the possibility
of creating a truly desktop-scale tele-immersion system. The prototype is called the
ImmersaDesk3.
Spatialized audio systems deliver sound signals close to the user's eardrum. These
systems are capable of creating realistic 3D sound environments by producing
auditory feedback in any location within the 3D auditory environment. However,
some issues remain involving up-down and front-back localization.
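The front-back problem follows from the physics of interaural cues. A common approximation is Woodworth's formula for interaural time difference (ITD); under this spherical-head model, a source at azimuth θ and its front-back mirror at 180 - θ degrees produce the same ITD, which is why 3D audio systems need additional cues. The head radius and speed of sound below are typical assumed values, not figures from the text.

```python
import math

HEAD_RADIUS = 0.0875    # meters, an assumed average head radius
SPEED_OF_SOUND = 343.0  # meters per second

def itd(azimuth_degrees):
    """Woodworth's interaural time difference approximation (seconds).

    Azimuth is measured from straight ahead; angles behind the
    listener are folded onto the front hemisphere, which models the
    front-back ambiguity of the ITD cue.
    """
    theta = math.radians(azimuth_degrees)
    theta = min(theta, math.pi - theta)  # fold back hemisphere onto front
    return (HEAD_RADIUS / SPEED_OF_SOUND) * (theta + math.sin(theta))

# A source 30 degrees off-center and its mirror at 150 degrees
# yield (to within rounding) the same time difference:
print(round(itd(30), 6))   # 0.000261
print(round(itd(150), 6))  # 0.000261
```

Because the timing cue alone cannot separate the two positions, practical systems add spectral filtering (head-related transfer functions) and head tracking to resolve the ambiguity.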
Immersive VR environments require some type of user-tracking system that will
enable the display to sync with the user's movements. Tracking systems must be able to
calculate the position of the user relative to the virtual objects as well as to other users
in a networked environment. They must be able to follow the user's point of view and
update the imagery to reflect where the user is looking. They must also allow the user
to manipulate virtual objects through translational and rotational movements.
Head-movement-tracking systems can be added to the basic stereoscopic image
generation capabilities of HMD systems. These tracking devices are usually mounted
on the support that holds the HMD. They track the user's location and head
movements and relay this information to the display software, which creates dynamically
updated depictions of the environment. Tracking systems may also use a dataglove
or some type of wand or thong-like device to manipulate the virtual objects.
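The core of such a tracking loop is a coordinate transform: each frame, the world is re-expressed in the user's head frame using the tracked position and orientation. Below is a minimal 2D sketch (yaw only); real systems track six degrees of freedom, and the function name and conventions here are assumptions for illustration.

```python
import math

def world_to_head(point, head_pos, head_yaw_degrees):
    """Express a world-space point in the user's head frame.

    Translate by the tracked head position, then rotate by the
    negative of the tracked yaw (positive yaw = turn to the left),
    so the display software can draw the object where the user
    would actually see it.
    """
    dx = point[0] - head_pos[0]
    dy = point[1] - head_pos[1]
    yaw = math.radians(-head_yaw_degrees)
    x = dx * math.cos(yaw) - dy * math.sin(yaw)
    y = dx * math.sin(yaw) + dy * math.cos(yaw)
    return (round(x, 6), round(y, 6))

# A virtual object 2 meters straight ahead (+y) of the origin; after
# the user turns 90 degrees to the left, it appears to the right (+x):
print(world_to_head((0.0, 2.0), (0.0, 0.0), 90.0))  # (2.0, 0.0)
```

The display software repeats this transform for every tracked object on every frame, which is why tracking latency translates directly into visible lag.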
Movement in a VR environment is often restricted due to the equipment and
cables required by the visual and auditory components of the system. There are times,
however, when motion is a significant part of the virtual experience. For instance,
flight simulations and military training focus on movement as an important part of
the environment. These motion systems require more versatile display technologies.
These technologies are categorized as passive or active:
• Passive systems, like flight simulators, use a platform device to transport the
user within the virtual space and emulate the feeling of being in a real aircraft.
The user controls the movement with a joystick or similar device.
• Active locomotion systems allow the user to walk "real" physical distances using a
treadmill or pedaling device. These systems are used in military training exercises
to simulate real-world physical conditions and test for stamina and endurance.
Where/When-Currently, immersive VR systems are found in large scientific
research laboratories. There are commercial applications for VR technologies in the
areas of architecture, product design, and engineering. However, only large
corporate entities can invest in these technologies with any degree of
cost-effectiveness. The cost of just the basic system components is prohibitive. Outside
of the nonimmersive technologies like VRML and QuickTime VR, there are currently
no VR applications for the general computing population.
Who/Why-There are still many issues to resolve in the development of VR
systems; a number of application areas for VR technologies already exist, however,
such as scientific visualization. VR offers significant benefits in fields like aerospace,
aviation, medical, military, and industrial training, and other mission-critical
domains that involve work in hazardous or expensive environments.
VR technologies have been used for the visualization of complex data sets. The
Visible Human Project (VHP), an outgrowth of the U.S. National Library of Medicine
(NLM) 1986 Long-Range Plan, is involved in the creation of a complete digital
representation of the human body. A virtual colonoscopy has already been
demonstrated, and there are plans for a virtual laparoscopic simulator.
There are engineering applications that combine computer-aided design (CAD) and VR.
The Clemson Research in Engineering Design and Optimization (CREDO)
laboratory is exploring the development of interfaces for virtual reality rapid prototyping.
They are also interested in using VR environments in the design process.
The field of education can benefit from the use of VR environments. Due to the
overwhelming amount of recent research and investigation, the acquisition of new
knowledge and the implementation of new concepts are becoming difficult. VR
environments can be used to facilitate this process.
The field of psychology has already demonstrated some advantages of VR
simulations in the treatment of phobias. Carlin, Hoffman, and Weghorst (1997) were able
to treat arachnophobia by using a VR display to simulate a room in which subjects
were able to control the number and proximity of virtual spiders. VR has also been
used in the treatment of agoraphobia, claustrophobia, and the fear of flying.
Augmented reality (AR) is often described as the opposite of VR. VR immerses the
user in a virtual environment and supplants reality, whereas the goal of AR is to
create a seamless integration between real and virtual objects in a way that augments
the user's perception and experience. In AR the user must maintain a sense of
presence in the real world.
AR environments have some implied criteria: the virtual information must be
relevant to, and in sync with, the real-world environment. This increases the importance of
maintaining an accurate connection between virtual and real-world objects. Because
the virtual information must be kept in alignment with the real world, the system must
efficiently track the user's head movements and maintain a reasonable response time.
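The response-time requirement can be quantified: during a head turn, any end-to-end latency displaces the overlay from its real-world anchor by head velocity times latency. The sketch below converts that angular error into screen pixels; the velocity, latency, field of view, and resolution are assumed example values, not specifications from the text.

```python
def registration_error_pixels(head_velocity_deg_s, latency_s,
                              fov_degrees, screen_width_px):
    """Pixels by which a virtual overlay lags a real object.

    Angular error = head velocity * system latency, then scaled
    by the display's pixels per degree.
    """
    angular_error = head_velocity_deg_s * latency_s
    pixels_per_degree = screen_width_px / fov_degrees
    return angular_error * pixels_per_degree

# A moderate 50 deg/s head turn with 50 ms of latency on a
# 40-degree, 800-pixel-wide display:
print(registration_error_pixels(50.0, 0.05, 40.0, 800))  # 50.0
```

Even this modest scenario misplaces the overlay by tens of pixels, which illustrates why AR systems must keep tracking and rendering latency far lower than conventional interactive applications.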
What/How-Like VR, AR environments use some type of eyewear; however, the
eyewear is used to mix views of real-world objects with computer-generated
imagery or information. There are also some systems that use screen-based
configurations to generate AR environments. The user can view the real world either
orthoscopically through special eyewear or by using a video feed from cameras
attached to the headgear (this is also relevant to teleoperation environments in
which the camera is at a remote location).
In both cases, the virtual information can be combined with real objects by
being optically superimposed on the screen embedded in the eyewear. It is essential
in AR environments that the virtual information be precisely registered with the
real-world objects or the user will become confused or disoriented.
As an interaction designer you will need to decide which interaction paradigm best
suits the needs of the intended user. In most cases this will be one of the more
general computing paradigms such as the personal computing or mobile paradigms.
However, the envelope is constantly being pushed, and what was once considered
"bleeding edge" technology has often become quite common. Consider the Memex;
it was once a dream, but now we carry around technology that is far more
advanced in our PDAs and cell phones.
To design a system that is consistent with the user's abilities, limitations, and
tasks, you must understand the different paradigms and how they function and the
tasks for which they are most suited. The simplest solution is often the best, but a
simplistic solution will never be adequate. Study the user's abilities, limitations,
tasks, environment, attitudes, and motivations, and then identify the paradigms that
best suit their needs. This is the basis for creating an interaction that resonates with
the people who use your design.
www.aw.com/heim
You will find links to the publications mentioned in the Introduction as well as
Web-based resources covering many of the computing paradigms discussed in this
chapter.