3.1 - Heim 2007 The Resonant Interface - ch1

The document discusses the evolution of interaction paradigms in human-computer interaction, emphasizing the importance of understanding these frameworks to improve interaction design. It highlights the contributions of innovators like Vannevar Bush, Douglas Engelbart, and Ivan Sutherland, who envisioned new computing technologies and interfaces that enhance human capabilities. The text also addresses the significance of physical, social, and cognitive environments in the design of computing systems to meet diverse user needs.


An interaction paradigm is basically a conceptual framework that serves as a model for thinking about human-computer interaction. It serves as a broad scaffold for thinking about how, why, and where people use computers. Understanding the various interaction paradigms can help us to frame questions about who uses computers and when they are used. Finally, it can help us to explore the various manifestations of computing technologies from mainframes to pocket personal computers.

In this chapter we explore the various interaction paradigms that have evolved
over the years. This will form the basis for our study of interaction design and will
define the scope of our endeavors.
1.1 Innovation

Innovation involves inventing something new or finding a new way to do something. Interaction design strives for innovation by creating new computing devices and interfaces or by improving the way people interact with existing technologies.
Past innovations in computing technologies, such as the popular MP3 players that have become ubiquitous in a very short time, have made our lives more enjoyable. Innovations in medical technologies such as laparoscopic surgery have increased the quality of our health care practices. These and other innovations have quickly become part of our collective experience. The evolution of technology is proceeding rapidly, and new developments are just around the corner. We must gain a firm understanding of this process of innovation so that we can benefit from the lessons learned.
We have benefited in many ways from the energies and creativity of the various
technology professionals who have contributed to this evolution over the years. It
will be instructive and inspirational to review the work of some of them who stand
out for their accomplishments. Their thoughts and efforts can help to motivate and
inspire the innovations of the future so that we can create more useful and more usable computing technologies.

In 1945 Vannevar Bush published an article in the July issue of the Atlantic Monthly
entitled "As We May Think." In this article, Bush envisioned a device that would
help people organize information in a meaningful way. He called this device the
"Memex":
A Memex is a device in which an individual stores all his books, records, and communications, and which is mechanized so that it may be consulted with exceeding speed and flexibility. It is an enlarged intimate supplement to his memory. (Bush, 1945, 12)

Bush did not describe the interface in great detail. The bulk of the human-computer interface was embodied in the physical setup, which incorporated the desk, the screens, the levers, the platen, and the keyboard (Figure 1.1). These were electromechanical components that were familiar to his audience and had somewhat obvious functionality. The Memex was never developed, but the concept of a personal computerized aid served as inspiration for many of the computer pioneers of the period.

Douglas Engelbart was fascinated by Bush's ideas, and he thought about how computers could be used to enhance and augment human intelligence. Engelbart pursued this concept during the 1960s with 17 researchers in the Augmentation Research Center (ARC) of the Stanford Research Institute (SRI) in Menlo Park, CA.


Figure 1.1 Memex. "Memex in the form of a desk would instantly bring files and material on any subject to the operator's fingertips. Slanting translucent viewing screens magnify supermicrofilm filed by code numbers. At left is a mechanism which automatically photographs longhand notes, pictures and letters, then files them in the desk for future reference" (LIFE 19(11), p. 123).

The concept of a personal, interactive computer was not common in that age of
large-scale mainframe computing:
It seemed clear to everyone else at the time that nobody would ever take seriously the idea of using computers in direct, immediate interaction with people. The idea of interactive computing-well, it seemed simply ludicrous to most sensible people. (Engelbart, 1968)

Engelbart remained dedicated to his vision, and his research team succeeded in
developing a human augmentation system based on their oNLine System (NLS),
which was unveiled at the Fall Joint Computer Conference in San Francisco in 1968.
The NLS was similar to contemporary desktop computing environments. It had a raster-scan display, a keyboard, and a mouse device. The mouse, invented by Engelbart, was first demonstrated at the 1968 conference. There was also a five-key chorded keyboard. The NLS was a networked system that included e-mail and split-screen videoconferencing (Figure 1.2).

In the same year that Engelbart demonstrated the NLS, J. C. R. Licklider published an article entitled "The Computer as a Communication Device" (Licklider, 1968). In this article, Licklider describes a meeting arranged by Engelbart that involved a team of researchers in a conference room with six television monitors. These monitors displayed the alphanumeric output of a remote computer. Each participant could use a keyboard and a mouse to point to and manipulate text that was visible on all the screens. As each participant presented his or her work, the material was accessed and made visible to the other members of the group.

Figure 1.2 The oNLine System (NLS). Courtesy Douglas Engelbart and Bootstrap Alliance.

The essential insight of Licklider's article was the potential for advanced modes of communication through a network of computers. The concept of networked computers was not new-a number of institutions already had successfully networked their computers; however, Licklider pointed out that these networks were restricted to groups of similar computers running the same software that resided in the same location. These were basically homogeneous computer networks. Licklider envisioned something more like the current manifestation of the Internet with its diversity of computing technologies and platforms.
Along with the concept of heterogeneous computer networks, Licklider introduced another significant concept in the evolution of human-computer interaction. Because Licklider imagined a high degree of integration in everyday life for networked computing, he saw the need to address the potentially overwhelming nature of the system. His solution was to create an automated assistant, which he called "OLIVER" (online interactive vicarious expediter and responder), an acronym that honors the originator of the concept, Oliver Selfridge.
OLIVER was designed to be a complex of applications, programmed to carry
out many low-level tasks, which would serve as an intermediary between the user
and his or her online community. OLIVER would manage files and communications,
take dictation, and keep track of transactions and appointments.

In 1965 Ivan Sutherland published an article entitled "The Ultimate Display," in which he proposed novel ways of interacting with computers, including the concept of a kinesthetic display. He argued that computers could already use haptic signals to control machinery-there already existed complex hand and arm manipulators-so why not a haptic display that could track eye movement and, therefore, use vision as an input mode? "Eye glance" interaction would involve eye-tracking algorithms that translated eye fixation into commands that would be carried out by the computer.
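To make the idea concrete, here is a minimal sketch of the dwell-time logic that "eye glance" interaction implies. The gaze-sample format, the 30-pixel radius, the half-second threshold, and the region-to-command bindings are illustrative assumptions, not details from Sutherland's paper.

```python
# Hypothetical sketch of "eye glance" interaction: a fixation (gaze held
# within a small radius for a minimum dwell time) is translated into a
# command bound to the on-screen region being looked at.
from dataclasses import dataclass

@dataclass
class GazeSample:
    x: float
    y: float
    t: float  # timestamp in seconds

def detect_fixation(samples, radius=30.0, dwell=0.5):
    """Return (x, y) of a fixation if the gaze stays within `radius`
    pixels of its starting point for at least `dwell` seconds."""
    if not samples:
        return None
    anchor = samples[0]
    for s in samples:
        if ((s.x - anchor.x) ** 2 + (s.y - anchor.y) ** 2) ** 0.5 > radius:
            return None  # gaze drifted; no fixation
    if samples[-1].t - samples[0].t >= dwell:
        return (anchor.x, anchor.y)
    return None

# Map screen regions to commands (purely illustrative bindings).
REGIONS = {"open": (0, 0, 200, 100), "close": (200, 0, 400, 100)}

def command_for(point):
    """Return the command bound to the region containing the fixation."""
    x, y = point
    for name, (x1, y1, x2, y2) in REGIONS.items():
        if x1 <= x < x2 and y1 <= y < y2:
            return name
    return None

fix = detect_fixation([GazeSample(50, 40, 0.0), GazeSample(55, 42, 0.6)])
if fix:
    print(command_for(fix))  # -> "open"
```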

Sutherland also envisioned a display that would create a realistic environment in which the user would interact with haptic, auditory, and visual stimuli and that was not subject to the physical constraints of the real world:

The ultimate display would, of course, be a room within which the computer can control the existence of matter. A chair displayed in such a room would be good enough to sit in. Handcuffs displayed in such a room would be confining, and a bullet displayed in such a room would be fatal. With appropriate programming such a display could literally be the Wonderland into which Alice walked. (Sutherland, 1965, 508)

These three men-Bush, Engelbart, and Sutherland-were innovators and visionaries. They looked at the current state of technology and envisioned what could be. Innovation was and is an important component of interaction design. To find creative solutions to existing problems or to define new potential for computing technologies, we must first understand current technology and the different ways people use computing technologies. We must then apply that knowledge to the design of systems that address human needs, and in the process develop new and novel ways to aid people in their tasks.
1.2 Computing Environments

To fully understand the complex nature of human-computer interaction we
must explore the social, physical, and cognitive environments involved as well as
the reasons people have for using computers. If we understand these factors, we
will be better able to find new and better computing paradigms for the tasks that
people need to accomplish.

The physical computing environment has evolved from dumb-terminal workstations to any number of public/personal, fixed/mobile configurations. This has increased the potential for computing technologies to enhance the quality of human activities and has also increased the significance of interaction design.
The issues involved in the use of an automated teller machine (ATM) as opposed to a personal digital assistant (PDA) are not drastically different; however, the solutions for their design may vary greatly. ATMs are fixed, public computing stations where people engage in personal computing activities related to financial information. PDAs are mobile, personal information appliances that can be used for a variety of reasons that may also include online banking.
Both systems must have adequate lighting, the input/output (I/O) facilities must
be usable and reliable, and there must be enough physical space in which to use
the system. The system should not require uncomfortable reaching or difficult
movements. Accessibility to people with handicaps must also be considered.
For each of these platforms, the applicability and priority levels of these various
requirements may fluctuate, and the solutions in each case may be very different.
For instance, lighting for an ATM is designed in a fixed environment. Ambient illumination is stable and screen position can be fixed to avoid glare. PDAs, on the other hand, can be used in many types of environment, from the bright outdoors to dark automotive interiors with intermittent outside illumination. The type of screen, available display settings, use of color, size of type, and design of the icons must be defined by the broad range of possible viewing environments.

Default settings must be carefully thought out.

The physical computing environment will affect many aspects of an interface's design, and some of the requirements may actually contradict each other. For instance, if a system is to be used by many people with different physical attributes, a certain amount of flexibility should be built into the system so that users may adjust it to their own needs. However, because the computer is in a public place, it is possible that one user may adjust certain aspects of the system, such as keyboard height or screen position and angle, which would make it difficult for the next user, who may not know how to change some of these settings. This may mean that certain default settings and adjustments should not be accessible to the general public. This will require tradeoffs and increases the importance of determining the most appropriate and widely accessible default settings through thorough user testing.
The following is a list of some of the issues involved in designing physical computing environments:
• Safety-One of the goals of ergonomics is to create a safe computing environment. This can be a significant concern for some systems, especially those used in mission-critical environments. There are, unfortunately, numerous examples of high-risk situations that proved fatal due to computer-related errors. One of the better-known examples involved a computer-controlled radiation device called the Therac-25. Due to an error in the device's design, medical technicians inadvertently subjected certain patients to lethal doses of radiation.
• Efficiency-Systems should not make the user do more work than is necessary. If the manipulation of a physical device is awkward or not consistent with human capabilities, efficiency will be diminished.
• User Space-The user must be able to use the system without discomfort. There must be enough room to stand, sit, or move about comfortably, and there must be enough room to use peripheral devices without difficulty. Some people use computers for long periods of time. If the physical design is not adequate, the user may experience fatigue, pain, or even injury. Public computers and information appliances are used by many people with different physical attributes and must, therefore, be as adjustable as possible to ensure comfort and decrease the risk of pain during long-term use. It must also be possible to easily reset adjustments to default settings. These default settings must be determined as accurately as possible.
• Work Space-Users must be able to bring work objects, such as notebooks and other information appliances such as PDAs, to the computing environment and be able to use them comfortably. Users who need to copy text from hard copy should have some type of holder on which to place the source material so that they can easily see both the hard copy and the screen.
• Lighting-Illumination can affect the visibility of screen elements. Lighting in outside computing environments can be difficult to control.
• Noise-High levels of environmental noise can make auditory interfaces difficult, if not impossible, to use. Some environments, such as libraries and museums, are sensitive to any type of auditory stimuli. Mobile information appliances, such as cell phones and pagers, must have easily accessible audio controls. Many cell phones have vibration settings to avoid annoying ring tones. Other devices, such as wearable computers, also must be designed to address this concern.
• Pollution-Some industrial environments, such as factories or warehouses, are difficult to keep clean. Sometimes users must work with machinery that is dirty or greasy. Peripherals such as keyboards and mouse devices can easily become dysfunctional in such environments. Solutions such as plastic keyboard covers, washable touchscreen covers, and auditory interfaces may address some environmental concerns.

Studies have shown that the social environment affects the way people use computers. Computer use has also been shown to affect human social interaction. This is a significant concern for collaborative computing systems. It would be unfortunate if a collaborative system inhibited human-human interactions.
Different computing paradigms imply different social environments. For instance, personal computing is usually a solitary activity done in an office or an isolated corner of the house. This is very different from ubiquitous computing (see later discussion), which implies a more public setting.
Public computers that are used for sensitive personal activities, such as ATMs, should be designed to protect the user's privacy. Audible feedback of personally identifiable information may be embarrassing or even detrimental. Negative auditory feedback that announces when a user makes an error can cause embarrassment for people who work in group settings.
Computers that are used in group settings must afford all members adequate
viewing and auditory feedback. Group members may also need to access system
functionality through peripheral devices. These must be made available to the
group members who need them.

Human cognitive aptitude spans a wide range of capabilities. Interaction design must take into consideration many cognitive factors such as age and conditions relating to disabilities. The computing environment can also affect cognition: some environments involve high cognitive loads, whereas others are less demanding. Computer system designs must address the relevant issues that arise from these different cognitive and environmental dynamics. The following list covers some of the aspects related to the cognitive computing environment:
• Age-Some systems are designed specifically for young children who are learning basic skills such as reading or arithmetic. Other systems are designed for scientists and technicians who need to study the human genome. Computing environments that are not age appropriate run the risk of being frustrating or insulting. Technicians may find cute animations annoying, and children may become bored with long lists of hyperlinks.
• Conditions Related to Disabilities-Computer systems can be designed specifically for people with conditions related to disabilities. As more public information is being made available through information appliances, the need to make these devices accessible to people with cognitive disabilities is becoming more important. If public information appliances are designed poorly, they may significantly diminish the quality of life for a significant portion of the population.
• Level of Technical Knowledge-Some systems are designed for particular user profiles with specific skills and facilities. These systems can be targeted to the particular needs of the user. Other systems are designed to be used by the general population and must take into consideration a wider range of cognitive abilities. General computing devices are also used by people with diverse technical backgrounds. It may be necessary to afford users different levels of functionality so they can use the system in the most efficient way.
• Level of Focus-Computing systems can be designed for people who are completely focused on the task at hand, such as people playing digital games, or they can be designed for someone who is, say, monitoring many work-related activities that involve heavy machinery in an environment that has excessive noise pollution. The priorities and requirements for the interface design will be different for these two levels of focus.
• Stress-Computers are used in diverse environments that impose different levels of cognitive stress on the user, from leisure activities such as listening to music to mission-critical environments as in air traffic control. People who use computers in stressful mission-critical environments need clear and unambiguous interface designs. Although this may be an important concern for all computing devices, there is no room for error in fields like medicine, aerospace, or the military. The priority level of certain design criteria may increase in certain situations.

1.3 Analyzing Interaction Paradigms

Since the publication of Vannevar Bush's article describing the Memex, computer scientists and researchers have created numerous and diverse configurations for computing systems. These configurations involve the construction and arrangement of hardware, the development of software applications to control the hardware, the topologies of networked systems, and the components of the human interface that define how people access the system's functionality.

Together these components comprise an interaction paradigm that defines the "who, what, where, when, why, and how" (5W+H) of computer system use. We can use this as a heuristic or procedure to more fully understand interaction paradigms.

We will use the 5W+H heuristic to define existing interaction paradigms and spaces and explore the elements and objects with which the user interacts; this will help to give us an understanding of how these systems work and how to apply that knowledge to the development of future systems.

We will make a slight alteration in the sequence and add a few couplings to the heuristic. Although all of the heuristic's elements are interrelated, some have greater effects on the others. These couplings are, therefore, based on degree of relationship and are not exclusive.
What/How-An in-depth understanding of the physical and virtual interface components of the various computing systems (the what) is essential for the creation of usable systems. We will look at the various physical components of interfaces, such as the I/O devices, and briefly explore the various related interface elements that define how we use these systems (the how), such as windows and icons. The interface components themselves will be more thoroughly explored in Chapter 10.
Where/When-Computer systems can also be defined by their particular physical computing space. This can clearly be seen by comparing the desktop computing space with the wearable computing space. Wearable computing is a result of advances in the fields of mobile and network computing and has given rise to a new network scope: the personal area network (PAN). PAN is defined by the user's ability to access computing functionality remotely (the where) and at any time (the when), regardless of his or her physical location or even while he or she is in transit if desired. This is not something one would attempt with a cathode ray tube (CRT) display.
Who/Why-We will also look at the types of tasks these physical devices and interface components facilitate. These tasks define the reasons why we use computers. This does not disregard the fact that technological developments are often driven by the need to accomplish new tasks or refine the way current solutions are implemented. It simply means that current systems facilitate certain tasks, which create particular motivations for their use. For instance, mainframes were used by highly trained technicians who were well versed in the inner workings of the system. These were often the same people who programmed the computer's various procedures; they understood the logic and syntax intuitively and were involved in large-scale computing for the governmental, industrial, and research communities.
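As a study aid (not something from the text), the couplings can be captured in a small data structure; the field groupings mirror the what/how, where/when, and who/why pairings above, and the mainframe entry paraphrases the example in the preceding paragraph.

```python
# Hypothetical sketch: recording a 5W+H analysis of an interaction
# paradigm as a simple data structure, grouped by the couplings the
# text describes (what/how, where/when, who/why).
from dataclasses import dataclass

@dataclass
class ParadigmAnalysis:
    name: str
    what_how: str    # physical/virtual interface components and their use
    where_when: str  # the computing space and the time of access
    who_why: str     # the users and the tasks that motivate use

mainframe = ParadigmAnalysis(
    name="Large-scale computing",
    what_how="Host accessed through dumb terminals with keyboards",
    where_when="Centralized institutional sites; batch and time-shared use",
    who_why="Trained technicians running government/industrial research",
)

print(mainframe.name, "-", mainframe.who_why)
```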

Advances in technology have brought a broad new range of computer-assisted services to a large portion of the general public. Interaction architectures must now be designed for a greater range of human circumstances and a greater range of potential uses. Before we continue the discussion, let's clarify a few of the terms that have been used to describe these developments.
Information Space-Defined by the information artifacts used and the content included, for example, a book and the topics covered in the book

Interaction Architecture-The structure of an interactive system that describes the relationship and methods of communication between the hardware and software components

Interaction Mode-Refers to perceptual modalities, for example, visual, auditory, or haptic (sometimes used in the literature to refer to interaction styles or particular tasks such as browsing or data entry)

Interaction Paradigm-A model or pattern of human-computer interaction that encompasses all aspects of interaction, including physical, virtual, perceptual, and cognitive

Interaction Space-The abstract space defined by complex computing devices such as displays, sensors, actuators, and processors

Interaction Style-The type of interface and the interaction it implies, for example, command line, graphical user interface (GUI), or speech

Work Space-The place where people carry out work-related activities, which may include virtual as well as physical locations, as in, for example, flight simulation training

1.4 Interaction Paradigms

We will now explore some of the various interaction paradigms and their manifest interaction spaces. In the following sections, we investigate the various components of these systems. We will then, in Chapter 2, look at the various styles of interaction that these spaces involve, from text-based interaction to graphical and multimedia interfaces.
The principal paradigms we will look at include the following (Figure 1.3):
• Large-scale computing
• Personal computing
• Networked computing
• Mobile computing

We will look at some of the significant manifestations of these paradigms and explore the synergies that arise from the confluence of particular paradigms:
• Desktop computing (personal and networked)
• Public-personal computing (personal and networked)
• Information appliances (personal, mobile, and networked)
Figure 1.3 Large circles represent principal paradigms. Oblong shapes represent convergent paradigms. Words without surrounding shapes represent specific system architectures (sometimes used for a paradigm reference, as in desktop computing for personal computing).

We will then investigate some of the convergent interaction spaces that have evolved in recent years:
• Collaborative environments (personal, networked, and mobile)
• Embodied virtuality systems:
  - Ubiquitous computing (personal, networked, and mobile)
  - Ambient computing (networked and mobile)
  - Invisible computing (mobile and networked)
  - Wearable computing (personal, networked, and mobile)
• Immersive virtual reality (personal, networked, and mobile)

What/How-The original mainframe computers were large-scale computing machines, referred to as hosts, which resided in a central location and were accessed by remote alphanumeric terminals equipped with keyboards. The terminals were referred to as "dumb terminals" because they simply reflected the state of the mainframe and had no computing capabilities of their own. These systems are also referred to as host/terminal systems.
Mainframes were originally programmed using punch cards (Figure 1.4). These punch cards were created on a key punch machine and read into the mainframe by a card reader. This was a slow process based on an indirect relationship between the user and the computer. Large programs were usually processed in batches overnight when the system was not needed for routine daily processing.

Figure 1.4 IBM card machines. Courtesy IBM Corporate Archives.

Due to the great expense of maintaining mainframe computers, it was not possible to give excessive amounts of time to single users of the system. This led to the development of time-sharing services (TSSs). TSSs were schemes that used the downtime of one user for another user who was currently active. TSSs were generally available during the day.
Smaller, less expensive versions of the mainframe, called minicomputers, eventually became available to companies who could not afford the larger and more powerful mainframes or who simply did not need as much computing power. These minicomputers were structured according to the same interaction architecture as the mainframes.
Concurrent developments in computing for scientific research resulted in the development of high-powered machines, called supercomputers, which were designed to be used for specific computing tasks and to do them as quickly as possible. These highly specialized machines crunched large amounts of data at high speed, as in computing fluid dynamics, weather patterns, seismic activity predictions, and nuclear explosion dynamics.
Supercomputers are used for the very-high-speed Backbone Network Service (vBNS) connections that constitute the core of the Internet. Mainframe computing, on the other hand, has often seemed to be on the verge of extinction. However, this is far from the truth. Mainframes exist in their current incarnation as so-called enterprise servers that are used in large-scale computing environments such as Wall Street.
When/Where-Most large-scale computers were owned by government agencies and large research institutes that were often affiliated with large universities. The large-scale computing paradigm fostered a bureaucratic and institutionalized conceptualization of how interaction spaces should be constructed.
Who/Why-Large-scale computing resources were rare and expensive; they were generally only used to carry out government-sponsored research projects and university-based research for large corporate institutions. This was the motivation behind the ARPAnet: to connect powerful computers for the purpose of sharing resources within the scientific and military domains. The ARPAnet provided the theoretical and practical foundation on which the current Internet was formed.
Mainframes were used by highly trained technicians and scientists in laboratory environments. These research labs, however, also attracted young "hackers" who would sometimes live at these institutes and turn the machines into gaming environments overnight. Some of the earliest computer games originated in this way. Games like Space Wars and Lunar Lander were created by the technicians at the Control Data Corporation to provide incentives to get the system up and running.

The personal vision of scientists like Douglas Engelbart and the work done by the researchers at the Augmentation Research Center, along with contemporaneous advances in technology, such as the shift from vacuum tubes to integrated circuits, led to the development of smaller, less expensive microcomputers and a new paradigm: personal computing. This new paradigm dispersed computing power from a centralized institutional space to a personal space and led to related developments in hardware and software design that involved a greater focus on the human interface.
Computing power was now available to people with little or no technical background; therefore, the interaction needed to become more intuitive and less technical. Researchers began exploring the use of graphical representations of computer functionality in the form of symbols and metaphors.
The Alto, developed at the Xerox Palo Alto Research Center in 1973, was the first computer to use a GUI that involved the desktop metaphor: pop-up menus, windows, and icons (Figure 1.5). It is considered to be the first example of a personal computer (PC). Even though the Alto should technically be considered a small minicomputer and not a microcomputer, it served as the profile for the personal computing paradigm: one user sitting at a desk, with uninterrupted access and full responsibility for the functioning and content of the computer's files and programs.
The NLS had a great influence on the development of the Alto (many of the PARC researchers came directly from ARC), which involved Ethernet connections, a CRT display (mounted on its side in portrait orientation), a keyboard, a three-button mouse, and the same five-key chorded keyboard. The system also included a laser printer developed specifically for the Alto. The Alto was never marketed commercially, but its interaction architecture provided the inspiration for the development of the Apple® Lisa, which had a more sophisticated GUI environment.
Figure 1.5 The Alto computer (1973). Courtesy Palo Alto Research Center.

Along with the development of PC hardware and operating systems was a parallel evolution in application software. With the development of programs like the VisiCalc spreadsheet program and Lotus 1-2-3, as well as word processing applications and digital games, the PC became a powerful tool for both the office and the newly emerging home office/entertainment center. The PC has gradually become an integral part of our professional and personal lives.

What/How-The main interaction architecture for personal computing has traditionally been defined by the desktop environment, which is based on the hardware/software configuration of the PC. This model can be defined as a single user with a keyboard and pointing device interacting with software applications and Internet resources.

The development of the PC meant that users did not have to submit their programs to a system administrator who would place them in a queue to be run when the system was available. Users could now access the computer's functionality continuously without constraint. The software industry boomed by creating programs that ranged from productivity suites, to games, to just about anything people might want.

Numerous storage media options are available for PCs; some are permanent, like CDs and DVD-ROMs, and some are updateable, like CD-RWs. Storage media technology is becoming cheaper and smaller in size. There is a consistent evolution in storage media that quickly makes today's technology obsolete tomorrow. For instance, the once ubiquitous floppy disk is rapidly joining the mountain of obsolete storage media. This may have consequences in the future for GUIs that traditionally use the image of a floppy disk on the icon for the "Save" function. Portable hard-drive technologies range from the large-capacity arrays used in industry to pen-sized USB sticks.
Where/When-With the advent of the PC, computing power became available for home use and could address the home/office and entertainment needs of a large segment of the population. Home access was basically unfettered; users could access computing functionality whenever they wanted. The home computer has become a staple in countless households. A generation that takes the presence of a PC for granted has already grown to adulthood; they have never lived in a house without one.
Who/Why-PCs are used for a wide variety of reasons. Applications range
from productivity tools that include word processing, spreadsheet, database, and
presentation software to communication tools like e-mail and instant messaging
software.
Shopping on the Web is becoming an integral part of consumer behavior. People also use the Web for banking and bill paying. Legal and illegal uses of file-sharing technologies have become common. The PC has become a center of activity in the home with applications for both work and play.

Perhaps the more serious question is not who uses personal computers but rather who does not use personal computers. Researchers have begun to explore the ramifications of socioeconomic issues on the information gap-the distance between people who have access to electronic information delivery and those who do not-and education lies at the center of this conversation. PCs, especially those with an Internet connection, are seen as a significant resource for educational institutions. Students in schools that cannot afford to offer this resource are at a disadvantage in a world that increasingly requires basic computer literacy for entry-level employment opportunities.

Public-personal computing has two distinct manifestations: public-access computing and public information appliances. The Bill & Melinda Gates Foundation has been an active leader in the support and development of public-access computing through the Global Libraries Program, which addresses the need to bridge the information gap, especially in geographic areas that are economically disadvantaged.
Public Access Computing-Public-access computers are generally desktop computers that use software to "lock down" the machine. This means that some users lack full privileges and cannot access some operating system functionalities. This is done to ensure the system's proper functioning and protect against malicious or accidental alterations of its settings. There is also a need to make sure that peripheral devices such as keyboards and pointing devices are not damaged or stolen.

Figure 1.6 Automated teller machine with touchscreen. Courtesy BigStockPhoto.com.
Public Information Appliances-Public information appliances are context- and task-specific devices used by the public to access information or services offered by corporate or governmental agencies. They most often take the form of information kiosks or ticket-dispensing machines.

The most common kiosk-type device is the ATM, which basically consists of a touchscreen connected to a network of variable scope (Figure 1.6). The screens comprise buttons, virtual QWERTY keyboards, and number pads, all of which are designed for easy operation and can accommodate different users of varying finger sizes.
Public information appliances are found in both indoor and outdoor installations. It is important that screen visibility remain high. This can be especially difficult in outdoor locations, where lighting conditions can vary according to weather and time of day.
Touchscreens are the preferred form for these kiosks because of the high volume of public use. Peripherals such as keyboards and pointing devices would increase maintenance costs as well as the downtime some systems would incur due to theft and mistreatment.

Although the personal computing architecture is well suited for the home, it does not entirely address the needs of the office user. Office workers need to communicate and share documents with each other. They require access to corporate documents and databases. It is simply not practical or often possible to maintain individual repositories of corporate information; office and institutional PCs need to be networked. In fact, the Alto was originally designed as a networked office system.
The idea of creating a network of computers that would enable communication
between users can be traced to a series of memos written by J. C. R. Licklider in
1962 discussing his concept of the "Galactic Network." Although he did not stay at
ARPA long enough to see the project to completion, his vision came to fruition with
the launching of the ARPAnet at 10:30 pm on October 29, 1969.
What/How-The evolution of the Internet was a direct result of the ARPAnet, and it was not long before numerous computer networking technologies revolutionized many industries, such as telecommunications and entertainment.
Networks may differ in scope (e.g., personal area network [PAN], local area network [LAN], metropolitan area network [MAN], or wide area network [WAN]) and connection modality (wired or wireless); however, users generally access the network through a PC. Therefore, the user experience is similar to that in the PC model. There may be limits on file access and restrictions on network locations, but systems often allow for "user profiles" that customize the interface for each individual so that he or she can work in a familiar environment.
Networking does not alter the "what" or the "how" of human-computer interaction to any large degree. The user interacts with more or less the same hardware and software configurations available on a nonnetworked PC and in much the same way. Emerging mobile technologies, on the other hand, are shaping the way networks will be structured in the future and changing the interaction paradigms that will become common for a large portion of the networked population.
Where/When-Networking has revolutionized telecommunications and altered our concepts of place, distance, and time. It has changed our concept of the "where" and the "when." By enabling remote access, networks have freed us from location-based computing, altering our concept of where we use computers. We no longer have to travel to the office to access the company's database; we can do it from any location that has Internet access.
We also can access networked resources at any time. Connections via community antenna television, more commonly known as community access television or cable TV (CATV), allow us to view movies "on demand" according to our personal time schedule. Large corporations with global locations can take advantage of the work cycles on different continents to create work flows that optimize office hours worldwide. Time constraints have been altered; "when" is now a more flexible issue.
Who/Why-The rise of the World Wide Web has created additional avenues for marketing goods and services. Web-based commercial resources have created an entirely new economic space: e-commerce. We now use computers to purchase airplane and movie tickets and to share photos with friends and relatives, activities that we used to do in person.

Networked computers can offer more things to more people and, therefore, have expanded the diversity of the user population. Young people are spending increasing amounts of time playing networked games and instant messaging each other. Older people are enjoying the photos of grandchildren who live far away. People of all ages and backgrounds with diverse interests and levels of computer literacy are finding reasons to use networked computers (mostly via the Web), such as participating in online auctions and conducting job searches. Networks have significantly altered the face of the user.

If we value the pursuit of knowledge, we must be free to follow wherever that search
may lead us. The free mind is not a barking dog, to be tethered on a ten-foot chain.
Adlai Stevenson
Data become information when they are organized in a meaningful, actionable way.
Knowledge is obtained by the assimilation of that information. In today's world,
computers play an important role in the processing of data and the presentation of
information.
In the past, computer access to data was restricted by location. In many situations, data had to be gathered and documented on location and transported to the computer. The data would be processed at the central location, and the information gained from that processing would initiate other data-gathering activities. This required time and intermediate steps that could sometimes affect the gathering process as well as the integrity of the data, especially because some data are extremely time sensitive.
That is no longer the case. With mobile and networked computing technologies, computers are no longer "tethered on a ten-foot chain." They can be transported to the remote locations where data are collected. Data can be captured, processed, and presented to the user without ever leaving the remote site. Mobile technology embodies the essence of Adlai Stevenson's comments on the search for knowledge.
What/How-Mobile computing technologies comprise a very diverse family of devices that have rapidly become ubiquitous in most industrialized societies. There are numerous brands and models of laptop computers, and tablet computers have recently appeared. Digital game and MP3 players represent a thriving commercial segment. Many corporations distribute handheld remote communication and tracking devices to field agents to enhance on-site delivery and maintenance services.
Portable computers ship with embedded touchpads and pointing sticks. Their
keyboards range from a bit smaller than desktop keyboards to full size; however,
they do not have the added functionality of an extended keyboard with its number
pad and additional function keys.

Figure 1.7 (a) Laptop computer. (b) Tablet computer. Courtesy BigStockPhoto.com.

Laptop and tablet computers use LCD screens (Figure 1.7). Tablets also provide
touchscreen interaction and handwriting recognition. Tablets are made in either a
"slate format," without a keyboard, or in a laptop design that can be opened into
the slate format.
Handheld devices, on the other hand, vary significantly in terms of hardware
and software configurations. They can range from small computer-like cell phones
with tiny keyboards and screens to MP3 players like the Apple iPod®, which uses a
touch wheel and small LCD window (Figure 1.8).

Desktop metaphors do not translate well to mobile devices.

Portable computers have traditionally inherited the desktop metaphor used in contemporary GUI designs. The desktop metaphor is a direct result of the fact that PCs were originally designed for use on or around a physical desk in an office environment. In the desktop paradigm, the relationship between the information space and the work space is direct and, to some degree, intuitive. The desktop metaphor, however, may not always reflect the work space of a person using a portable computer.
Many people use laptop computers as a portable substitute for the desktop model: although they may carry it from the home to the office, they use it exactly as they would a desktop computer. However, others are finding new applications for portable computers, and new interaction designs are appearing. The tablet computer with its slate format and handwriting recognition software represents such a new approach to mobile interaction designs.

Figure 1.8 (a) Cell phone. (b) MP3 player. Courtesy BigStockPhoto.com.
The most common format for handheld computers is the PDA. These are generally palm-sized devices that run a native operating system (OS), such as the Palm OS® or some version of Microsoft® Windows® like Windows CE®. Some of these interfaces do not differ greatly from their desktop cousins and can become cumbersome due to small screen size. Others, like the Palm OS, that used more innovative designs have become quite popular. In these devices, input is generally accomplished with a stylus using icons, menus, and text entry through miniature soft keyboards. Effective handwriting recognition is not very common, but the calligraphic systems built into the operating systems of these devices are stable, reliable, and relatively intuitive.

Hybrid desktop/mobile environments can afford optimal interaction efficiency.

The small size of these devices, their limited battery life, and the relatively low power of their processors can be problematic for some operations. Therefore, it is advantageous to use a desktop system in tandem with the mobile device for off-loading complex tasks or uploading the latest versions of information services. Handhelds can also use flash card technology to store and transfer files and programs to and from their desktop cousins.
Mobile systems can access network resources, such as the Internet, through
wireless access points (known as Wi-Fi hot spots) using the IEEE 802.11 family of
wireless protocols. Relatively reliable connections and high speeds can be obtained
if the signal is strong. Mobile devices can also connect to the Internet through a cell phone using Bluetooth or infrared technologies such as IrDA (Infrared Data Association) devices.

Figure 1.9 On-board navigation system. Courtesy BigStockPhoto.com.
Mobile devices can be connected to global positioning systems (GPS), giving them global positioning capabilities with an accuracy of approximately 1 meter. These have become popular in automotive navigation systems that use touchscreens and voice interaction to alleviate potential visual attention problems during driving (Figure 1.9).
One of the most widely adopted mobile computing devices is the cell phone. Cell phones achieved a significant user base in a very short time. It is not uncommon in many developed societies for each member of a family to have a cell phone. Service providers offer incentives for combined family plans and provide inexpensive, if not free, phones.
With the development of more sophisticated cellular networks, cell phones are also evolving into multifunction devices that provide voice, text, and video communications. They are capable of providing entertainment in the form of games and music and are being used to store contact and scheduling information. They are also providing a workout for the often ignored thumb.
Where/When-Mobile technology can be seen as a liberating force, allowing access to information regardless of location. Wi-Fi hot spots have been made available to the public in coffee shops, airports, and community parks. Business contact information can be shared between two PDAs using infrared technology. Someone stuck in a traffic jam can reschedule appointments with a simple cell phone call. Due to their mobility, these devices can offer situational computing that can take advantage of location-specific information through location-based mobile services (LMS). LMS can be beneficial for location-sensitive advertisements, public service announcements, social interactions, and location-specific educational information.
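A minimal sketch of the proximity filtering an LMS of this kind might perform: the haversine distance calculation is standard, but the 2-km radius, the coordinates, and the sample announcements are invented for illustration.

```python
# Hypothetical sketch of a location-based mobile service (LMS):
# filter announcements by proximity to the device's current position.
from math import radians, sin, cos, asin, sqrt

def distance_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points (haversine formula)."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * asin(sqrt(a))  # Earth radius ~6371 km

# Invented sample data for illustration only.
ANNOUNCEMENTS = [
    {"text": "Museum exhibit nearby", "lat": 54.047, "lon": -2.801},
    {"text": "Road closure downtown", "lat": 54.010, "lon": -2.790},
]

def nearby(lat, lon, radius_km=2.0):
    """Return announcements within `radius_km` of the device."""
    return [a["text"] for a in ANNOUNCEMENTS
            if distance_km(lat, lon, a["lat"], a["lon"]) <= radius_km]

print(nearby(54.048, -2.800))  # announcements within ~2 km
```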

The GUIDE system provides visitors to Lancaster, England, with context-sensitive information on a handheld device using 802.11 technology. The system is tailored to the environmental context as well as the visitor's preferences. Base units situated around the city interact with the handheld device by beaming Web pages and positioning information.
Mobile technology can also be perceived as a nuisance. Theatrical performances are often prefaced by an announcement requesting that guests turn off all cell phones, pagers, and other mobile telecommunication devices. Social context plays a role in the use of mobile devices.
Who/Why-The reasons why people might use mobile technologies are inexhaustible and range from improvements in medical care, to facilitated learning, to more fulfilling travel and leisure experiences. All of these technologies, however, share one primary benefit: they are designed to be used repetitively for brief and routine tasks. Therefore, they have a different use profile than their desktop cousins, and this should be taken into consideration and reflected in their design.
Mobile technology has become a staple for business people who must keep in touch with the office or with corporate resources while in transit. Mobile technology is also designed for niche applications that are highly technical in nature. Mobile devices with diagnostic applications can facilitate access to large databases, linking field workers to more powerful database servers. This enables more immediate assessment of potential problems at remote locations.
Mobile devices can be very useful and offer many advantages; however, they also have disadvantages, such as small screens, low bandwidth, unstable connections, and awkward interface interactions, as when inputting text.

Networks facilitate collaborative activities.

Networks allow members of a group to interact with other members on shared files and documents, whether the members are in the same room or at remote locations. This creates a virtual space where people can collaborate and work collectively.
Networks can support collaborative work by facilitating tasks such as communication, coordination, organization, and presentation. Computer-mediated communication (CMC) is a general term that describes the way computers are used to support group communication.
The term computer-supported cooperative work (CSCW) was coined by Paul Cashman and Irene Greif to describe technology that supports people who work in groups. Some of the issues that have arisen from the research on cooperative work involve the nature of the group tasks, the reasons for group interaction, and the potential tools (groupware) for facilitating the collaborative tasks.
What/How-Computer-mediated communication, as currently practiced, involves networked PCs using familiar desktop environments such as Microsoft Windows®, Mac OS X®, or some graphical interface to a Linux system. These systems are often augmented by the addition of audio and video devices such as microphones and Web cams. Engelbart's NLS incorporated many of the components of a CSCW environment, including audio and video conferencing and work flow support.

Current CMC environments may also involve large projected displays that use digital projectors and "smart screens," such as the SMART Board® from SMART Technologies Inc., which can be operated by touch or a computer interface (Figure 1.10).
The Web, with its suite of standardized protocols for network layers, has been a significant force in the development of collaborative environments by allowing greater interoperability in heterogeneous computing environments.
Where/When-CMC has many possible manifestations that cover same- or different-place/time scenarios and involve diverse families of groupware applications and devices. They can be synchronous, as in videoconferencing and instant messaging, or asynchronous, as in e-mail systems or the recommender systems built into e-commerce Web sites such as Amazon.com. They can also involve remote-access white boards, chat rooms, and bulletin board services.
Who/Why-CMC can aid business people who need to collaborate with remote customers or employees. Product development teams involved in product design, software engineering, financial reporting, and so on often require collaboration among distributed resources and personnel. CMC can address this need.

Figure 1.10 (a) Actalyst interactive display. Copyright 2001-2007 SMART Technologies Inc. All rights reserved.
Educational institutions have been exploring the use of CMC to build remote-learning environments and enrich learning experiences.

The scientific community has also been interested in CMC: the ARPAnet was built to support communication within the scientific and military communities. More recently, "collaboratories" (laboratories without walls) have been developed to allow the scientific community to perform and share research projects and results regardless of physical location. Examples of collaboratories include the following:
• The Research Collaboratory for Structural Bioinformatics (RCSB)-"A non-profit consortium dedicated to improving our understanding of the function of biological systems through the study of the 3-D structure of biological macromolecules" (RCSB, 2005).
• The Chimpanzee Collaboratory-"A collaborative project of attorneys, scientists and public policy experts working to make significant and measurable progress in protecting the lives and establishing the legal rights of chimpanzees and other great apes" (TCC, 2005).
• The National Fusion Grid-"A SciDAC (Scientific Discovery through Advanced Computing Collaboratory) Pilot project that is creating and deploying collaborative software tools throughout the magnetic fusion research community. The goal of the project is to advance scientific understanding and innovation in magnetic fusion research by enabling more efficient use of existing experimental facilities and more effective integration of experiment, theory, and modeling" (FusionGRID, 2005).
• Space Physics and Aeronomy Research Collaboratory (SPARC)-"Allows space physicists to conduct team science on a global scale. It is the realization of the 'net'-real-time access to a world of instruments, models, and colleagues" (SPARC, 2005).

Some of us use the term "embodied virtuality" to refer to the process of drawing computers out of their electronic shells. The "virtuality" of computer-readable data-all the different ways in which it can be altered, processed and analyzed-is brought into the physical world. (Weiser, 1991, 95)

Groupware and CMC applications have facilitated the decoupling of information and
location. Information that resides on one machine can be shared across a network
and distributed to group members in remote locations. CSCW environments must be
designed around the way people normally work on collaborative projects. The
diverse nature of CMC applications reflects the complexity of collaborative work.
One of the traditional obstacles in CSCW is that computing power and functionality have been concentrated inside the physical confines of a computer. Mobile technologies have enabled more flexibility by allowing people to connect to networks without being physically tethered to a specific location; however, portable computers are modeled on the PC interaction architecture, which tries to incorporate as much computing power in one place as possible-the Swiss army knife of computing.
Embodied virtuality (EV), as conceived by Weiser, is focused on decoupling
computing power and functionality from "the box." This move is reminiscent of the
shift from mainframe computing to personal computing and represents a new para­
digm. It can be seen as a sort of "big bang," dispersing computing functionality
throughout the environment. EV represents a shift in the way we think about com­
puters; it opens up many new possibilities, but it also raises many questions.
How do we disperse computing functionality throughout the environment?
What form should EV computing take? What kind of interface does it require? How
much control should we retain, and how much should be automated?
There are a number of different approaches and schools of thought as to how
computers should be incorporated into our lives. We will look at a few of the
emerging fields in EV:
• Ubiquitous/pervasive computing
• Invisible/transparent computing
• Wearable computing
There are four discernible currents in the field of EV. These are reflected in the location/operation diagram shown in Figure 1.11 and in the classification sketched after this list.
• Side 1-Portable/manual (sometimes wearable) devices such as cell phones, MP3 players, digital cameras, and PDAs offer portable functionality the user can manipulate.
• Side 2-Manual/fixed devices such as ATMs and kiosks are manipulated by the user but are fixed in place.
• Side 3-Portable/automated devices are read by situated sensors, such as the car transceivers used for toll booth payments. There are no possible manual operations.
• Side 4-Automated/fixed devices such as alarm sensors can be used to detect the presence of intruders or industrial hazards.

Figure 1.11 Embodied virtuality environments-location/operation.
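Reading the diagram as two axes, location (portable or fixed) and operation (manual or automated), gives a simple classification scheme. The following sketch is only an illustration of that scheme; the device list comes from the four sides above:

    from enum import Enum

    class Location(Enum):
        PORTABLE = "portable"
        FIXED = "fixed"

    class Operation(Enum):
        MANUAL = "manual"
        AUTOMATED = "automated"

    # Devices drawn from the four sides of the diagram.
    DEVICES = {
        "cell phone": (Location.PORTABLE, Operation.MANUAL),           # Side 1
        "ATM": (Location.FIXED, Operation.MANUAL),                     # Side 2
        "toll transceiver": (Location.PORTABLE, Operation.AUTOMATED),  # Side 3
        "alarm sensor": (Location.FIXED, Operation.AUTOMATED),         # Side 4
    }

    SIDES = {
        (Location.PORTABLE, Operation.MANUAL): 1,
        (Location.FIXED, Operation.MANUAL): 2,
        (Location.PORTABLE, Operation.AUTOMATED): 3,
        (Location.FIXED, Operation.AUTOMATED): 4,
    }

    def side_of(device: str) -> int:
        """Map a device to its side on the location/operation diagram."""
        return SIDES[DEVICES[device]]

    print(side_of("ATM"))  # 2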

Embodied virtuality concepts are being pursued by researchers at universities and corporate research laboratories under a variety of different designations: ubiquitous (UbiComp), pervasive, wearable, invisible, transparent, ambient, and embedded, as well as other, lesser-known designations such as palpable and autonomic computing.

There are often no clear-cut distinctions among the various approaches; they
represent a constellation of interaction paradigms, each combining different degrees
of user control and embedded architectures. Each approach, however, represents a
unique manifestation with particular characteristics that are useful in different cir­
cumstances. We will concentrate on three approaches: UbiComp, invisible, and
wearable computing. These are the most widely used terms and they cover the
overall scope of the EV domain.

Where/When-We are already surrounded by computing technologies: they are in our cars regulating the braking, steering, and engine functions; we carry Web-enabled cell phones and PDAs; we listen to music downloaded from the Web on our MP3 players; and we play massively multiplayer digital games on devices that connect to the Internet. Embedded computing technologies also carry out numerous mission-critical tasks, such as air traffic control and operating room procedures.
Although we live in a world of ubiquitous computing, we are only at the begin­
ning of an evolutionary process. Alan Kay, a contributor to the development of the
modern GUI, the Smalltalk computer programming language, and the laptop com­
puter, considers UbiComp (calm technology) to be "third-paradigm" computing, af­
ter mainframe and personal computing.
Personal computers are still gaining functionality and complexity. They are not
disappearing anytime soon and, in a simplified form, may be well suited to handle
certain aspects of a ubiquitous system. However, the new generations of devices are
small and portable and facilitate tasks in a way that is simple and intuitive.
Devices like cameras, video recorders, musical instruments, and picture frames
are becoming "smart" through the introduction of embedded chips. These devices are
now able to communicate with other smart devices and offer all the advantages that
computing technologies afford.
The essence of UbiComp is that, to fulfill their potential, computing technolo­
gies must be considered a part of the fabric of our lives and not something that re­
sides in a gray box. One manifestation is ambient computing.

The concept of a computational grid that is seamlessly integrated into our physical
environment is the essence of ambient computing. Smart environments that sense
when people are present can be programmed to adjust lighting and heating facilities
based on the location and number of people in the room.
These environments can be programmed to identify particular individuals using
voice or face recognition as well as wearable identity tags and adjust certain envi­
ronmental parameters according to specific user preferences. With this technology,
public spaces could be made safer through the monitoring of environmental
conditions for the detection of gas leaks and other hazards. Such a system could
also indicate best possible escape routes in the case of fire.
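As a thought experiment, the occupancy-driven lighting described above reduces to a small control loop. The sensor and lighting classes below are hypothetical stand-ins, not a real building-automation API:

    # Hypothetical ambient-lighting controller. OccupancySensor and
    # Lights are illustrative stand-ins for real embedded devices.
    class OccupancySensor:
        def people_in_room(self) -> int:
            return 3  # a real sensor would report a measured count

    class Lights:
        def set_level(self, percent: int) -> None:
            print(f"lights -> {percent}%")

    def adjust_lighting(sensor: OccupancySensor, lights: Lights) -> None:
        count = sensor.people_in_room()
        if count == 0:
            lights.set_level(0)  # empty room: lights off
        else:
            # brighten with occupancy, capped at full brightness
            lights.set_level(min(100, 40 + 10 * count))

    adjust_lighting(OccupancySensor(), Lights())  # lights -> 70%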
Commercial enterprises could tailor displays that respond to gender or age and in­
form customers about the merchandise they pick up and examine. Stores could be­
come more accommodating to tourists who face obstacles because of language or
cultural differences. Consider the classic problem of the tourist who wants to buy a
souvenir in a country that uses a different currency: he or she must know the exchange
rate and do the mental calculation. Alternatively, he or she could use an information
appliance that had been set to the exchange rate and would do the calculation. In an
invisible computing space, however, the prices would be displayed to the shopper in
his or her native currency, based on the most recent exchange rate.
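The computation such a space performs is trivial; the point is that it happens without user intervention. A minimal sketch, using placeholder exchange rates rather than live data:

    # Price tags rendered in the shopper's home currency.
    RATES_TO_USD = {"EUR": 1.10, "JPY": 0.0067, "USD": 1.0}  # placeholders

    def display_price(amount: float, local: str, home: str) -> str:
        """Convert a locally priced item into the shopper's currency."""
        usd = amount * RATES_TO_USD[local]
        return f"{usd / RATES_TO_USD[home]:.2f} {home}"

    # A 2500 JPY souvenir as seen by a euro-zone shopper:
    print(display_price(2500, "JPY", "EUR"))  # 15.23 EUR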

The most profound technologies are those that disappear. They weave themselves into
the fabric of everyday life until they are indistinguishable from it. (Weiser, 1991, 94)

What/How-The idea behind invisible computing is the facilitation of tasks without involving the manipulation of a computer interface. The power of computers has traditionally been filtered through the interface. The user must translate a problem into a format that the interface will accept, a task that is usually based on data and file types. The computer processes the data it receives and displays a result. The user must then interpret the results in the context of the problem he or she is trying to solve.
This process involves many interim steps between a problem and its solution­
the computer often presents an entirely unrelated set of interim problems to be
solved, such as file format and platform compatibility issues-and may result in a
mismatch between what is required by the situation and the information that is dis­
played by the computer.
Invisible computing removes computer interface issues from the process. This can be done in two ways. One can make the interface so simple and intuitive that the user does not have to spend any time or energy on interface-related issues. This would be like driving a car: once we learn to drive, we generally don't think about the basic controls as we drive (right pedal for gas, left pedal for brake). Computer interfaces that are that simple and intuitive would virtually disappear; we would not have to think about them, we would just use them.
The second way to make computing invisible is to remove the interface entirely. There are many functions that can be automated and do not require user intervention. We are unaware of the computers that control the steering and braking mechanisms of our cars; they simply do what they are supposed to do.
Invisible computing can be accomplished by programming devices to carry out
specific, well-defined tasks without user intervention. Some practitioners see no dif­
ference between UbiComp and invisible computing; however, there are those who
focus on the qualities of invisibility to distinguish between the two.

Who/Why-There are many new types of small digital appliances appearing in the marketplace that are geared to specific work- and entertainment-related activities, such as PDAs, BlackBerry® devices (Figure 1.12), digital cameras, MP3 players, and portable game players.

Figure 1.12 A BlackBerry-type of device. Courtesy of BigStockPhoto.com.

Although PCs are capable of carrying out many of these functions, there is a growing interest in smaller, portable devices that address specific needs in an uncomplicated and easy-to-use way. These devices are referred to as information appliances (IAs), which Donald Norman defines as follows:

An appliance specializing in information: knowledge, facts, graphics, images, video, or sound. An information appliance is designed to perform a specific activity, such as music, photography, or writing. A distinguishing feature of information appliances is the ability to share information among themselves. (Norman, 1998, 53)
Many of these devices retain their native interface elements but have embedded
communication, computational, and connectivity enhancements that are invisible to
the user. These additional features should not require additional setup or mainte­
nance. As we computerize these devices we must strive to avoid adding unnecessary
complexity to the interface. These IAs can be considered nodes in a computational
grid that can access resources wirelessly.

The underlying principle of wearable computing is the merging of information space with work space, a field called humionics. Researchers at NASA are concerned with creating a seamless integration between these two spaces. Their projects include the Body Wearable Computer (BWC) Project and the Wearable Voice Activated Computer (WEVAC) Project. Similar research is being carried out at the Wearable Group at Carnegie Mellon University and the Wearable Computing project at the MIT Media Lab.

Wearable computing systems require multimodal interfaces.

The goal of humionics is to create an interface that is unobtrusive and easily operated under work-related conditions. Therefore, traditional I/O technologies are generally inadequate for this computing environment. Wearable systems must take advantage of auditory and haptic as well as visual interaction.
What/How-I/O device design for wearable computers depends on the context of use. Because wearable computing involves diverse real-world environments, wearable systems may require custom-designed I/O devices. This can be costly, and their use can present steep learning curves to new users. Therefore, there is a concerted effort in the wearable computing community to create low-cost interfaces that use familiar concepts without resorting to standard mouse and keyboard devices and their related styles of interaction.

One example of a new interaction style is the circular dial on the VuMan devel­
oped by the Wearable Group at Carnegie Mellon. The VuMan is a wearable device
that allows users access to large amounts of stored information such as manuals,
charts, blueprints, or any other information from repositories that might be too heavy
or bulky to carry into the field or work site. It uses the concepts of circular input
and visualization to map the circular control device with the contents of the screen.
This type of navigation has become common to users of the popular iPod music
player from Apple Computer.
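The circular mapping can be pictured as dividing the dial's full rotation among the items currently on screen. The following sketch illustrates the idea only; it is not the VuMan's actual firmware:

    def item_under_dial(angle_degrees: float, num_items: int) -> int:
        """Map a dial angle (0-360 degrees) onto one of num_items
        on-screen entries, as a circular controller does."""
        sector = 360.0 / num_items
        return int((angle_degrees % 360.0) // sector)

    menu = ["Manuals", "Charts", "Blueprints", "Settings"]
    print(menu[item_under_dial(200.0, len(menu))])  # Blueprints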
Wearable computing has created a new scope for network structures-the
personal area network (PAN). There are two types of PANs. The first type uses wire­
less technologies like Bluetooth® to interconnect various wearable devices that may
be stitched into the user's clothing or to connect these wearable devices with other
information appliances within the personal space near the user or a few meters be­
yond. An example is a PAN in an automobile that connects a cell phone with an em­
bedded system that facilitates hands-free operation. Standards for wireless PANs
(WPANs) are being worked on by the Institute of Electrical and Electronics
Engineers (IEEE) 802.15 Working Group for WPAN.
The other type of PAN uses the body's electrical system to transmit network signals. Work is being done on this at the MIT Media Lab with the Personal Information Architecture Group and the Physics and Media Group.
Researchers at IBM's Almaden Research Center are researching PANs and have
created a device the size of a deck of cards that allows people to transmit business
card information simply by shaking hands. Data can be shared by up to four people
simultaneously. Work on PANs has also been done by the Microsoft Corporation,
which has registered a patent (no. 6,754,472) for this type of PAN that uses the skin
as a power conduit as well as a data bus.
Where/When-To understand the dynamics involved in wearable computing
more fully, we define three "spaces" that are involved in the completion of a task:
the work space, the information space, and the interaction space:
• An information space is defined by the artifacts people encounter, such as documents and schedules.
• An interaction space is defined by the computing technology that is used. There are two components: the physical interaction space and the virtual interaction space. These components can also be understood as the execution space and the evaluation space, respectively. There is a dynamic relationship between the two components that creates the interaction space.
• A work space refers to any physical location that may be involved in a task.
There may or may not be a direct correlation among the different spaces. That
is, the interaction space may or may not be the same as a person's work space or in­
formation space. In the case of a computer programmer, Web designer, or a profes­
sional writer, the information and work spaces are both part of the interaction space
created by a PC.

Figure 1.13 Venn diagram of a library space.

Consider the context of library-based research. Many modern libraries use digital card catalogs that contain meta-data about books in the form of abstracts and location indicators such as Dewey Decimal System call numbers. The digital catalog constitutes an interaction space, although it is only part of the work space. The book or books in which the researcher is interested reside on a physical shelf that is outside the interaction space. Therefore, while the catalog resides within the information, interaction, and work spaces, the book itself resides outside the interaction space. This creates a gap between parts of the work space and the interaction space (Figure 1.13).
Wearable computing seeks to bridge some of these gaps and make computing functionality more contextual. A possible solution in this scenario might be to embed a chip in the book that communicates meta-data about the book to the researcher's eyeglass-mounted display. The chip could respond to queries entered into a system worn by the user. Another possibility might be to allow the user to access a digital version of the book on a wearable tablet-type device. In this case, the catalog would interact with the user's wearable system and fetch the contents of the book in digital form.
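The exchange between the wearable system and the book's chip could be as simple as a keyed metadata lookup over a short-range link. Everything in the sketch below, from the field names to the stored values, is invented for illustration:

    # Hypothetical metadata exposed by a chip embedded in a book.
    BOOK_CHIP = {
        "title": "Interaction Paradigms",
        "call_number": "004.019 HEI",
        "abstract": "A survey of human-computer interaction frameworks.",
    }

    def query_chip(field: str) -> str:
        """Return one metadata field, as if read over a short-range link."""
        return BOOK_CHIP.get(field, "unknown")

    # The eyeglass-mounted display requests the abstract:
    print(query_chip("abstract"))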
Who/Why-Wearable interaction devices would be able to make information contextually available, as well as tailor it for specific users. For instance, a student using wearable technology would be able to visit a museum and receive auditory or textual information about works of art based on his or her proximity to distributed sensors; this information would be filtered through personal preference settings such as grade level or course of study parameters.
The military has been involved in a number of research projects involving wearable computers. The systems they are developing are designed to enhance the information processing and communication capabilities of infantry soldiers. Figure 1.14 summarizes the various EV environments and their characteristics based on the location/operation diagram. Table 1.1 outlines various EV environments and their characteristics.

Figure 1.14 Embodied virtuality environments-location/operation.

Table 1.1 EV environments and their characteristics
• UbiComp-Manual: some systems are manual. Automated: some systems are automated. Fixed: some components are embedded. Portable: some devices are portable.
• Invisible-Manual: the user does not interact with the computer. Automated: the system takes care of all computer functionality. Fixed: some system components are embedded. Portable: some devices are portable.
• Wearable-Manual: many of the wearable components allow manual control. Automated: some of the wearable components interact automatically with embedded sensors. Fixed: some systems use situated sensors that interact with wearable components. Portable: most system components are portable (wearable).

The goals of the virtual reality (VR) community are the direct opposite of the goals
of the EV community. EV strives to integrate computer functionality with the real
world, whereas VR strives to immerse humans in a virtual world.
Virtual reality technologies can be divided into two distinct groups: immersive
and nonimmersive environments. Nonimmersive VR environments can be charac­
terized as screen-based, pointer-driven, three-dimensional (3D) graphical presenta­
tions that may involve haptic feedback from a haptic mouse or joystick.
In its simplest form, nonimmersive VR uses development tools and plug-ins for
Web browsers to create 3D images that are viewable on a normal computer display.
Two examples are the Virtual Reality Modeling Language (VRML), an International
Organization for Standardization/International Electrotechnical Commission (ISO/IEC)
standard that also can be used to create models for use in fully immersive VR environ­
ments, and Apple Computer's QuickTime VR®, which uses two-dimensional (2D)
photographs to create interactive 3D models. More-sophisticated systems allow for
stereoscopic presentations using special glasses and display systems that involve over­
lay filters or integrated display functionality.
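To give a flavor of how lightweight nonimmersive VR content can be, the sketch below writes a minimal VRML 2.0 scene (a single red sphere) that any VRML-capable browser plug-in could render; real scenes are, of course, far richer:

    # Write a minimal VRML 2.0 (VRML97) scene: one red sphere.
    vrml_scene = """#VRML V2.0 utf8
    Shape {
      appearance Appearance {
        material Material { diffuseColor 1 0 0 }
      }
      geometry Sphere { radius 1.0 }
    }
    """

    with open("sphere.wrl", "w") as f:
        f.write(vrml_scene)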
The goal of immersive virtual reality is the realization of Ivan Sutherland's ulti­
mate display. This goal may remain elusive for the foreseeable future, but progress
in that direction continues. Contemporary systems use visual, auditory, and haptic
technologies to create as realistic an experience as possible. Even the person's sense
of equilibrium is involved in some of the large platform VR devices that offer loco­
motion simulation. Taste and smell, however, have not yet been addressed.
What/How-Immersive VR environments are designed to create a sense of
"being" in a world populated by virtual objects. To create a convincing illusion, they
must use as many human perceptual channels as possible. Therefore, sophisticated

visual, auditory, and sometimes haptic technologies must be used to create a truly
engaging virtual experience.

One of the most common immersive VR I/0 devices is the head-mounted display
(HMD). These are relatively large head sets that present stereoscopic images of vir­
tual objects. HMDs can be cumbersome and annoying due to the presence of con­
necting cables, as well as the insufficient field of view and low resolution capabili­
ties found in the lower-end models. Newer designs that incorporate wearable
displays attached to eyeglasses and the emerging laser technologies show promise
for lighter, less cumbersome, and more versatile displays.
Another approach to visual displays of VR environments is to use room-size
spatial immersive display (SID) technology. SIDs use wall-size visual projections to
immerse the user in a panoramic virtual environment. These rooms create a visual
and auditory environment in which people can move about and interact with virtual
objects. Examples of SIDs are the ImmersaDesk®, PowerWall®, Infinity Wall®,
VisionDome®, and Cave Automated Virtual Environment® (CAVE).
There are several CAVE environments in existence around the world. It has
been suggested that collaborative work could be done by networking these CAVE
systems and creating a tele-immersion environment. Traditional videoconferencing
allows people in remote locations to work together as though they were physically
in the same location. Tele-immersion would allow people to work together in the
same virtual environment, observing and modifying the same virtual objects.
Researchers are exploring ways to exploit the potential of this technology for
business and educational applications by making tele-immersion more available
and less expensive. New, lower-cost and less space-intensive desktop stereo dis­
plays, such as the Totally Active Workspace, Personal Penta Panel, and Personal
Augmented Reality Immersive System, are being developed. These systems would
make tele-immersion more accessible for general computing situations.
Current desktop systems generally use larger-than-desktop displays and HMDs in combination with high-quality audio systems. However, researchers at the Electronic Visualization Laboratory, University of Illinois at Chicago, are exploring the possibility of creating a truly desktop-scale tele-immersion system. The prototype is called the ImmersaDesk3.

VR environments should foster a sense of presence, defined as the feeling of being immersed in a virtual world. Generally, we rely most heavily on our visual channel; however, the introduction of auditory and haptic stimuli can breathe life into an otherwise detached virtual experience.
High-quality VR simulations use sophisticated audio systems built around head-related transfer functions (HRTFs), which are derived from measurements of the auditory signals close to the user's eardrum. These systems are capable of creating realistic 3D sound environments by producing auditory feedback at any location within the 3D auditory environment. However, some issues remain involving up-down and front-back localization.
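Conceptually, HRTF-based rendering convolves the source signal with a measured left-ear and right-ear impulse response for the desired direction. A bare-bones sketch, assuming the impulse-response pair has already been measured; the toy arrays here are placeholders:

    import numpy as np

    def spatialize(mono, hrir_left, hrir_right):
        """Render a mono signal at the direction the HRIR pair was
        measured for; returns a (num_samples, 2) stereo array."""
        left = np.convolve(mono, hrir_left)
        right = np.convolve(mono, hrir_right)
        return np.stack([left, right], axis=1)

    mono = np.random.randn(1000)         # toy source signal
    hrir_l = np.array([1.0, 0.5, 0.25])  # placeholder impulse responses
    hrir_r = np.array([0.8, 0.4, 0.2])
    print(spatialize(mono, hrir_l, hrir_r).shape)  # (1002, 2)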

Immersive VR environments require some type of user-tracking system that will en­
able the display to sync with the user's movements. Tracking systems must be able to
calculate the position of the user relative to the virtual objects as well as to other users
in a networked environment. They must be able to follow the user's point of view and
update the imagery to reflect where the user is looking. They must also allow the user
to manipulate virtual objects through translational and rotational movements.
Head-movement-tracking systems can be added to the basic stereoscopic image generation capabilities of HMD systems. These tracking devices are usually mounted on the support that holds the HMD. They track the user's location and head movements and relay this information to the display software, which creates dynamically updated depictions of the environment. Tracking systems may also use a dataglove or some type of wand or thong-like device to manipulate the virtual objects.
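The loop a tracking system runs can be summarized as: sample the head pose, move the viewpoint, redraw. The tracker and renderer objects below are schematic stand-ins for real device drivers and graphics engines:

    import time

    def tracking_loop(tracker, renderer, frames: int = 3) -> None:
        """Sync the display with the user's movements: read head pose,
        update the viewpoint, redraw the virtual scene."""
        for _ in range(frames):
            position, orientation = tracker.read_pose()
            renderer.set_viewpoint(position, orientation)
            renderer.draw_frame()
            time.sleep(1 / 60)  # aim for roughly 60 updates per second

    class FakeTracker:  # stand-in for a head-tracking driver
        def read_pose(self):
            return (0.0, 1.7, 0.0), (0.0, 0.0, 0.0)  # position, yaw/pitch/roll

    class FakeRenderer:  # stand-in for the display software
        def set_viewpoint(self, pos, orient):
            self.pose = (pos, orient)
        def draw_frame(self):
            print("frame drawn from", self.pose)

    tracking_loop(FakeTracker(), FakeRenderer())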
Movement in a VR environment is often restricted due to the equipment and ca­
bles required by the visual and auditory components of the system. There are times,
however, when motion is a significant part of the virtual experience. For instance,
flight simulations and military training focus on movement as an important part of
the environment. These motion systems require more versatile display technologies.
These technologies are categorized as passive or active:
• Passive systems, like flight simulators, use a platform device to transport the
user within the virtual space and emulate the feeling of being in a real aircraft.
The user controls the movement with a joystick or similar device.
• Active locomotion systems allow the user to walk "real" physical distances using a
treadmill or pedaling device. These systems are used in military training exercises
to simulate real-world physical conditions and test for stamina and endurance.
Where/When-Currently, immersive VR systems are found in large scientific
research laboratories. There are commercial applications for VR technologies in the
areas of architecture, product design, and engineering. However, only large
corporate entities can invest in these technologies with any degree of cost
effectiveness. The cost of just the basic system components is prohibitive. Outside
of the nonimmersive technologies like VRML and QuickTime VR, there are currently
no VR applications for the general computing population.
Who/Why-There are still many issues to resolve in the development of VR
systems; a number of application areas for VR technologies already exist, however,
such as scientific visualization. VR offers significant benefits in fields like aerospace,
aviation, medical, military, and industrial training, and other mission-critical
domains that involve work in hazardous or expensive environments.

VR technologies have been used for the visualization of complex data sets. The Visible Human Project (VHP), an outgrowth of the U.S. National Library of Medicine (NLM) 1986 Long-Range Plan, is involved in the creation of a complete digital representation of the human body. A virtual colonoscopy has already been demonstrated, and there are plans for a virtual laparoscopic simulator.
There are also engineering applications that combine computer-aided design (CAD) and VR. The Clemson Research in Engineering Design and Optimization (CREDO) laboratory is exploring the development of interfaces for virtual reality rapid prototyping. They are also interested in using VR environments in the design process.
The field of education can also benefit from the use of VR environments. The sheer volume of recent research and investigation makes the acquisition of new knowledge and the implementation of new concepts difficult; VR environments can be used to facilitate this process.
The field of psychology has already demonstrated some advantages of VR simu­
lations in the treatment of phobias. Carlin, Hoffman, and Weghorst (1997) were able
to treat arachnophobia by using a VR display to simulate a room in which subjects
were able to control the number and proximity of virtual spiders. VR has also been
used in the treatment of agoraphobia, claustrophobia, and the fear of flying.

Augmented reality (AR) is often described as the opposite of VR. VR immerses the
user in a virtual environment and supplants reality, whereas the goal of AR is to cre­
ate a seamless integration between real and virtual objects in a way that augments
the user's perception and experience. In AR the user must maintain a sense of pres­
ence in the real world.
AR environments have some implied criteria: the virtual information must be rele­
vant to, and in sync with, the real-world environment. This increases the importance of
maintaining an accurate connection between virtual and real-world objects. Because
the virtual information must be kept in alignment with the real world, the system must
efficiently track the user's head movements and maintain a reasonable response time.
What/How-Like VR, AR environments use some type of eyewear; however, in AR the eyewear is used to mix views of real-world objects with computer-generated imagery or information. There are also some systems that use screen-based configurations to generate AR environments. The user can view the real world either orthoscopically through special eyewear or by using a video feed from cameras that are attached to the headgear (this is also relevant to teleoperation environments in which the camera is at a remote location).
In both cases, the virtual information can be combined with real objects by be­
ing optically superimposed on the screen embedded in the eyewear. It is essential
in AR environments that the virtual information be precisely registered with the
real-world objects or the user will become confused or disoriented.
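At bottom, registration means re-projecting every virtual annotation into the user's current view each frame so that it stays aligned with its real-world object. A stripped-down pinhole-projection sketch (real systems must also contend with lens distortion, tracker noise, and latency):

    import numpy as np

    def project(point_world, R, t, focal, center):
        """Project a 3D world point into 2D display coordinates so a
        virtual annotation stays registered with the real object."""
        p_cam = R @ (np.asarray(point_world) - t)  # world -> camera frame
        x = focal * p_cam[0] / p_cam[2] + center[0]
        y = focal * p_cam[1] / p_cam[2] + center[1]
        return (x, y)

    # Identity head pose; a point 2 m in front of the viewer:
    R, t = np.eye(3), np.zeros(3)
    print(project([0.4, 0.1, 2.0], R, t, focal=800, center=(512, 384)))
    # (672.0, 424.0): where the overlay must be drawn this frame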

There is a natural correlation between AR and invisible computing in the sense that a successful AR environment would integrate virtual and real objects in such a way that it would not be possible to visually distinguish between them. Therefore, the user would not explicitly engage the computer interface; he or she would exist in a "middle ground" between the virtual and the real.
Other embodied virtuality domains such as UbiComp and wearable computing involve AR principles; however, they use computing functionality that is not specifically designed to blend real-world and virtual objects. AR is also referred to by other designations, such as mixed reality, computer-mediated reality, and computer-augmented environments.
There are many different approaches to blending virtual and real-world objects. The virtuality continuum (VC) shown in Figure 1.15 is a spectrum that ranges from real environments to virtual environments and includes views of real objects (either viewed from a video feed or viewed naturally) as well as views of virtual objects.
Environments that fall on the left side of the continuum use virtual information
to augment reality. Environments that fall on the right side use real-world stimuli to
augment immersive virtual reality environments.
Both VR and AR are generally considered visual environments, although audi­
tory and haptic stimulation can also be used. Real heat, motion, or olfactory stimu­
lation can be used to augment a virtual environment, which would be an example
of augmented virtuality (AV).
Some examples of AR systems include the following:
• Virtual Glassboat-A cart-mounted computer display that allows views of an underground gas and water piping system.
• Outdoor Wearable Collaboration System-A class of systems that use wearable technology to facilitate collaboration in outdoor environments.
• A system that uses real objects to accomplish virtual tasks: a 2D desktop environment is superimposed on a real desktop, fusing the virtual and real tools found in each. This allows the user to use, for example, a real stapler to attach two virtual documents.

Figure 1.15 The virtuality continuum (VC): Real Environment - Augmented Reality (AR) - Augmented Virtuality (AV) - Virtual Environment. The span between the two extremes (AR and AV) constitutes mixed reality (MR).



Where/When-In a general sense, AR is beneficial to people who need situated computing functionality that augments their knowledge and understanding of specific places or things.
AR technology is applicable in situations in which people need access to computing functionality and cannot afford to leave their work site: for instance, technicians who need access to large amounts of information such as diagrams and technical charts while they are in the field. This information can be stored electronically and made available through an eyewear system while the technician is working. The diagrams can be superimposed on an actual mechanism, thereby augmenting the technician's knowledge of the object.
Who/Why-AR environments can be beneficial for people whose tasks require a
high degree of mobility as well as access to large amounts of information that would
be difficult to transport. It can also be appropriate for people who need access to
situational information such as the status of an electronic grid within a building.

As an interaction designer you will need to decide which interaction paradigm best
suits the needs of the intended user. In most cases this will be one of the more gen­
eral computing paradigms such as the personal computing or mobile paradigms.
However, the envelope is constantly being pushed, and what was once considered
"bleeding edge" technology has often become quite common. Consider the Memex;
it was once a dream, but now we carry around technology that is far more ad­
vanced in our PDAs and cell phones.
To design a system that is consistent with the user's abilities, limitations, and
tasks, you must understand the different paradigms and how they function and the
tasks for which they are most suited. The simplest solution is often the best, but a
simplistic solution will never be adequate. Study the user's abilities, limitations,
tasks, environment, attitudes, and motivations, and then identify the paradigms that
best suit their needs. This is the basis for creating an interaction that resonates with
the people who use your design.

www.aw.com/heim
You will find links to the publications mentioned in the Introduction as well as
Web-based resources covering many of the computing paradigms discussed in this
chapter.
Suggested Reading

Carroll, J. (Ed.). (2001). Human-Computer Interaction in the New Millennium (1st ed.). Addison-Wesley Professional.
Earnshaw, R., Guedj, R., van Dam, A., & Vince, J. (Eds.). (2001). Frontiers of Human-Centred Computing, Online Communities and Virtual Environments (1st ed.). Springer.
Pirhonen, A., Isomaki, H., Roast, C., & Saariluoma, P. (Eds.). (2005). Future Interaction Design (1st ed.). Springer.
