
1. Introduction

User Interfaces Are Products of Interdisciplinary Work - Who is Involved?

 Psychologists
 Graphic Designers
 Technical Writers
 Human-Factors (Ergonomics) Engineers
 Anthropologists and Sociologists

System - User Interface Design Goals

 Define the target user community associated with the interface


 Communities evolve and change
 5 human factors central to community evaluation:
1. Time to learn
How long does it take for typical members of the community to learn the
relevant tasks?
2. Speed of performance
How long does it take to perform relevant benchmarks?
3. Rate of errors by users
How many and what kinds of errors are commonly made during
typical applications?
4. Retention over time
Frequency of use and ease of learning help make for better user
retention
5. Subjective satisfaction
Allow for user feedback via interviews, free-form comments and
satisfaction scales.
 Trade-offs must sometimes be made during development; tools such as
macros and shortcuts can ease some of the burden
 Test all design alternatives using a wide range of mock-ups

Motivations for Human Factors in Design


Most of today's systems are poorly designed from a human-interaction standpoint

1. Life-critical systems
o Air traffic control, nuclear reactors, power utilities, police & fire
dispatch systems
o High costs, reliability and effectiveness are expected
o Lengthy training periods are acceptable provided they yield error-free performance
o Subjective satisfaction is less of an issue because users are well motivated;
retention comes via frequent use and practice
2. Industrial and commercial uses
o Banking, insurance, order entry, inventory management, reservation,
billing, and point-of-sales systems
o Lower cost may sacrifice reliability
o Training is expensive, learning must be easy
o Speed and error rates are weighed against cost; speed is often the
supreme concern
o Subjective satisfaction is fairly important to limit operator burnout
3. Office, home, and entertainment applications
o Word processing, electronic mail, computer conferencing, and video
game systems
o Choosing functionality is difficult because the population has a wide
range of both novice and expert users
o Competition causes the need for low cost
4. Exploratory, creative, and cooperative systems
o Database, artist toolkits, statistical packages, and scientific modeling
systems
o Benchmarks are hard to describe due to the wide array of tasks
o With these applications, the computer should "vanish" so that the user
can be absorbed in their task domain

Accommodation of Human Diversity

Physical abilities and physical workplaces

 There is no average user; either compromises must be made or multiple
versions of a system must be created
 Physical measurements of human dimensions are not enough; also take into
account dynamic measures such as reach, strength, and speed
 Account for variances in the user population's sense perception
 Vision: depth, contrast, color blindness, and motion sensitivity
 Touch: keyboard and touchscreen sensitivity
 Hearing: audio cues must be distinct
 Workplace design can both help and hinder work performance

Cognitive and perceptual abilities

 cognitive processes
o short-term memory
o long-term memory and learning
o problem solving
o decision making
o attention and set (scope of concern)
o search and scanning
o time perception
 factors affecting perceptual and motor performance
o arousal and vigilance
o fatigue
o perceptual (mental) load
o knowledge of results
o monotony and boredom
o sensory deprivation
o sleep deprivation
o anxiety and fear
o isolation
o aging
o drugs and alcohol
o circadian rhythms

Personality differences

 There is no set taxonomy for identifying user personality types


 Designers must be aware that populations are subdivided and that these
subdivisions have various responses to different stimuli
 Myers-Briggs Type Indicator (MBTI)
o extroversion versus introversion
o sensing versus intuition
o perceptive versus judging
o feeling versus thinking

Cultural and international diversity

 characters, numerals, special characters, and diacriticals


 Left-to-right versus right-to-left versus vertical input and reading
 Date and time formats
 Numeric and currency formats
 Weights and measures
 Telephone numbers and addresses
 Names and titles (Mr., Ms., Mme.)
 Social-security, national identification, and passport numbers
 Capitalization and punctuation
 Sorting sequences
 Icons, buttons, colors
 Pluralization, grammar, spelling
 Etiquette, policies, tone, formality, metaphors

Users with disabilities

 Designers must plan early to accommodate users with disabilities


 Early planning is more cost efficient than adding on later
 Businesses must comply with the Americans with Disabilities Act (ADA) for
some applications

Elderly Users

 Including the elderly is fairly easy; designers should allow for variability
within their applications via settings for sound, color, brightness, font sizes,
etc.

High-Level Theories

Perceptual or Cognitive subtasks theories

 Predicting reading times for free text, lists, or formatted displays

Motor-task performance times theories:

 Predicting keystroking or pointing times


2. Interaction Devices
Keyboard Layouts

QWERTY layout

 1870
 Christopher Latham Sholes
 good mechanical design and a clever placement of the letters that slowed
down the users enough that key jamming was infrequent
 put frequently used letter pairs far apart, thereby increasing finger travel
distances

Dvorak layout

 1920
 reduces finger travel distances by at least one order of magnitude
 Acceptance has been slow despite the dedicated efforts of some devotees
 it takes about 1 week of regular typing to make the switch, but most users
have been unwilling to invest the effort

ABCDE style

 26 letters of the alphabet laid out in alphabetical order, so that nontypists will find it
easier to locate the keys

Additional keyboard issues

 IBM PC keyboard was widely criticized because of the placement of a few


keys
o backslash key where most typists expect SHIFT key
o placement of several special characters near the ENTER key
 Number pad layout
 wrist and hand placement

Keys

 1/2 inch square keys


 1/4 inch spacing between keys
 slight concave surface
 matte finish to reduce glare and finger slippage
 40- to 125-gram force to activate
 3 to 5 millimeters displacement
 tactile and audible feedback important
 certain keys should be larger (e.g. ENTER, SHIFT, CTRL)
 some keys require state indicator, such as lowered position or light indicator
(e.g. CAPS LOCK)
 key labels should be large, meaningful, permanent
 some "home" keys may have additional features, such as deeper cavity or
small raised dot, to help user locate their fingers properly (caution - no
standard for this)

Function keys

 typically simply labeled F1, F2, etc, though some may also have meaningful
labels, such as CUT, COPY, etc.
 users must either remember each key's function, identify them from the
screen's display, or use a template over the keys in order to identify them
properly
 can reduce number of keystrokes and errors
 meaning of each key can change with each application
 placement on keyboard can affect efficient use
 special-purpose displays often embed function keys in monitor bezel
 lights next to keys used to indicate availability of the function, or on/off
status
 frequent movement between keyboard home position and mouse or function
keys can be disruptive to use
 alternative is to use closer keys (e.g. ALT or CTRL) and one letter to
indicate special function

Cursor movement keys

 up, down, left, right


 some keyboards also provide diagonals
 best layout is natural positions
 inverted-T positioning allows users to place their middle three fingers in a
way that reduces hand and finger movement
 cross arrangement better for novices than linear or box
 typically include typamatic (auto-repeat) feature
 important for form-fillin and direct manipulation
 other movements may be performed with other keys, such as TAB, ENTER,
HOME, etc.
3. Pointing Devices
Pointing devices are applicable in six types of interaction tasks:

1. Select:

 user chooses from a set of items.


 used for traditional menu selection, identification of a file in a directory, or
marking of a part in an automobile design.
2. Position:

 user chooses a point in a one-, two-, three-, or higher-dimensional space


 used to create a drawing, to place a new window, or to drag a block of text
in a figure.
3. Orient:

 user chooses a direction in a two-, three-, or higher-dimensional space.


 direction may simply rotate a symbol on the screen, indicate a direction of
motion for a space ship, or control the operation of a robot arm.
4. Path:

 user rapidly performs a series of position and orient operations.


 may be realized as a curving line in a drawing program, the instructions for
a cloth cutting machine, or the route on a map.
5. Quantify:

 user specifies a numeric value.


 usually a one-dimensional selection of integer or real values to set
parameters, such as the page number in a document, the velocity of a ship,
or the amplitude of a sound.
6. Text:

 user enters, moves, and edits text in a two-dimensional space
 the pointing device indicates the location of an insertion, deletion, or change
 more elaborate tasks include centering; margin setting; font sizes;
highlighting, such as boldface or underscore; and page layout.

Direct-control pointing devices

lightpen

 enabled users to point to a spot on a screen and to perform a select, position,


or other task
 it allows direct control by pointing to a spot on the display
 incorporates a button for the user to press when the cursor is resting on the
desired spot on the screen
 lightpen has three disadvantages: users' hands obscured part of the screen,
users had to remove their hands from the keyboard, and users had to pick up
the lightpen

touchscreen

 allows direct control touches on the screen using a finger


 early designs were rightly criticized for causing fatigue, hand-obscuring-the-
screen, hand-off-keyboard, imprecise pointing, and the eventual smudging
of the display
 lift-off strategy enables users to point at a single pixel (see the sketch after this list)
 the users touch the surface
 then see a cursor that they can drag around on the display
 when the users are satisfied with the position, they lift their fingers off the
display to activate
 can produce varied displays to suit the task
 are fabricated integrally with display surfaces
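
A minimal sketch of the lift-off strategy as a small state machine (Python); the event names and the cursor offset are illustrative assumptions, not from the source:

    class LiftOffSelector:
        """Touch shows a cursor slightly above the finger; lifting the finger activates."""

        def __init__(self, cursor_offset=(0, -20)):   # pixel offset so the finger does not hide the cursor
            self.cursor_offset = cursor_offset
            self.cursor = None                        # None while the screen is not touched

        def touch_down(self, x, y):
            self.cursor = (x + self.cursor_offset[0], y + self.cursor_offset[1])

        def touch_move(self, x, y):
            if self.cursor is not None:
                self.cursor = (x + self.cursor_offset[0], y + self.cursor_offset[1])

        def touch_up(self):
            """Lifting the finger activates the selection at the final cursor position."""
            target, self.cursor = self.cursor, None
            return target                             # caller selects whatever lies at this pixel

    sel = LiftOffSelector()
    sel.touch_down(100, 200)
    sel.touch_move(103, 205)
    print(sel.touch_up())    # (103, 185): the pixel that gets selected on lift-off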

Indirect pointing devices

mouse

 the hand rests in a comfortable position, buttons on the mouse are easily
pressed, even long motions can be rapid, and positioning can be precise

trackball

 usually implemented as a rotating ball 1 to 6 inches in diameter that moves a


cursor

joystick

 are appealing for tracking purposes

graphics tablet

 a touch-sensitive surface separate from the screen

touchpad

 built-in near the keyboard offers the convenience and precision of a


touchscreen while keeping the user's hand off the display surface

Comparisons of pointing devices


Human-factors variables

 speed of motion for short and long distances


 accuracy of positioning
 error rates
 learning time
 user satisfaction

Other variables

 cost
 durability
 space requirements
 weight
 left- versus right-hand use
 likelihood to cause repetitive-strain injury
 compatibility with other systems

Some results

 direct pointing devices faster, but less accurate


 graphics tablets are appealing when user can remain with device for long
periods without switching to keyboard
 mouse is faster than isometric joystick
 for tasks that mix typing and pointing, cursor keys are faster and are preferred
by users over a mouse
 muscular strain is low for cursor keys

Fitts' Law

 Index of difficulty = log2(2D / W)
 Time to point = C1 + C2 (index of difficulty)
 C1 and C2 are constants that depend on the device
 Example: for a target of width W = 1 at distance D = 8, the index of difficulty is log2(2*8/1) = log2(16) = 4 bits

A three-component equation was thus more suited for the high-precision pointing task:

 Time for precision pointing = C1 + C2 (index of difficulty) + C3 log2(C4 / W)
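
A minimal sketch of these formulas in Python; the constants C1-C4 are hypothetical placeholders, since real values must be fitted experimentally for each device:

    import math

    def index_of_difficulty(distance, width):
        """Fitts' index of difficulty in bits: log2(2D / W)."""
        return math.log2(2 * distance / width)

    def time_to_point(distance, width, c1=0.3, c2=0.1):
        """Basic Fitts' law: T = C1 + C2 * ID (constants are illustrative, in seconds)."""
        return c1 + c2 * index_of_difficulty(distance, width)

    def time_precision_pointing(distance, width, c1=0.3, c2=0.1, c3=0.05, c4=1.0):
        """Three-component variant: T = C1 + C2 * ID + C3 * log2(C4 / W)."""
        return (c1 + c2 * index_of_difficulty(distance, width)
                + c3 * math.log2(c4 / width))

    # Example from the notes: D = 8, W = 1 gives ID = log2(16) = 4 bits
    print(index_of_difficulty(8, 1))   # 4.0
    print(time_to_point(8, 1))         # 0.3 + 0.1 * 4 = 0.7 (illustrative)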
4. Speech Recognition, Digitization, and Generation

Speech recognition still does not match the fantasy of science fiction:

 demands of user's working memory


 background noise problematic
 variations in user speech performance impact effectiveness
 most useful in specific applications, such as to benefit handicapped users

Discrete word recognition

 recognize individual words spoken by a specific person; can work with 90-
to 98-percent reliability for 20 to 200 word vocabularies
 Speaker-dependent training, in which the user repeats the full vocabulary
once or twice
 Speaker-independent systems are beginning to be reliable enough for certain
commercial applications
 been successful in enabling bedridden, paralyzed, or otherwise disabled
people to broaden the horizons of their life
 also useful in applications with at least one of the following conditions:
o speaker's hands are occupied
o mobility is required
o speaker's eyes are occupied
o harsh or cramped conditions preclude use of keyboard
 voice-controlled editor versus keyboard editor
o lower task-completion rate
o lower error rate
 use can disrupt problem solving

Continuous-speech recognition

Not generally available:

 difficulty in recognizing boundaries between spoken words


 normal speech patterns blur boundaries
 many potentially useful applications if perfected
Speech store and forward

Voice mail users can

 receive messages
 replay messages
 reply to caller
 forward messages to other users, delete messages
 archive messages
Systems are low cost and reliable.

Speech Generation
Michaelis and Wiggins (1982) suggest that speech generation is "frequently
preferable" under these circumstances:

 The message is simple.


 The message is short.
 The message will not be referred to later.
 The message deals with events in time.
 The message requires an immediate response.
 The visual channels of communication are overloaded.
 The environment is too brightly lit, too poorly lit, subject to severe
vibration, or otherwise unsuitable for transmission of visual information.
 The user must be free to move around.
 The user is subjected to high G forces or anoxia
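
As a hedged illustration, the circumstances above can be encoded as a simple checklist that scores whether speech output is likely to be preferable; the attribute names and the threshold are invented for illustration:

    def speech_output_preferable(msg):
        """Count how many of the circumstances above hold for a given message.

        msg is a dict of booleans; the keys are invented names for the
        circumstances listed in the notes (simple, short, not referred to later, ...).
        """
        circumstances = [
            "is_simple", "is_short", "not_referred_to_later",
            "deals_with_events_in_time", "requires_immediate_response",
            "visual_channels_overloaded", "environment_unsuitable_for_visual",
            "user_must_move_around", "user_under_high_g_or_anoxia",
        ]
        score = sum(1 for c in circumstances if msg.get(c, False))
        return score, score >= 3      # a threshold of 3 is an arbitrary illustration

    # Example: a short, simple warning that needs an immediate response.
    score, prefer_speech = speech_output_preferable({
        "is_simple": True, "is_short": True, "requires_immediate_response": True,
    })
    print(score, prefer_speech)       # 3 True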

Audio tones, audiolization, and music


Sound feedback can be important:

 to confirm actions
 offer warning
 for visually-impaired users
 music used to provide mood context, e.g. in games
 can provide unique opportunities for user, e.g. with simulating various
musical instruments
5. Image and Video Displays
The visual display unit (VDU) has become the primary source of feedback to the
user from the computer.

The VDU has many important features, including:

 Rapid operation
 Reasonable size
 Reasonable resolution
 Quiet operation
 No paper waste
 Relatively low cost
 Reliability
 Highlighting
 Graphics and animation
Possible health concerns:

 visual fatigue
 stress
 radiation exposure

Display devices

monochrome displays

 are adequate, and are attractive because of their lower cost


RGB shadow-mask displays

 small dots of red, green, and blue phosphors packed closely


Raster-scan cathode-ray tube (CRT)

 electron beam sweeping out lines of dots to form letters


 refresh rates 30 to 70 per second
Liquid-crystal displays (LCDs)

 voltage changes influence the polarization of tiny capsules of liquid crystals


 flicker-free
 size of the capsules limits the resolution
Plasma panel

 rows of horizontal wires are slightly separated from vertical wires by small
glass-enclosed capsules of neon-based gases
Light-emitting diodes (LEDs)
 certain diodes emit light when a voltage is applied
 arrays of these small diodes can be assembled to display characters
The technology employed affects these display attributes:

 Size
 Refresh rate
 Capacity to show animation
 Resolution
 Surface flatness
 Surface glare from reflected light
 Contrast between characters and background
 Brightness
 Flicker
 Line sharpness
 Character formation
 Tolerance for vibration
Each display technology has advantages and disadvantages with respect to
these attributes. Users should expect these features:

 User control of contrast and brightness


 Software highlighting of characters by brightness
 Underscoring, reverse video, blinking (possibly at several rates)
 Character set (alphabetic, numeric, special and foreign characters)
 Multiple type styles (for example, italic, bold) and fonts
 Shape, size, and blinking rate of the cursor
 User control of cursor shape, blinking, and brightness
 Scrolling mechanism (smooth scrolling is preferred)
 User control of number of lines or characters per line displayed
 Support of negative and positive polarity (light on dark or dark on light)

Digital photography and scanners

 Many name-brand suppliers (e.g. Sony, Kodak, Nikon, Canon)


 Downloadable to PCs
 Can obtain PhotoCD from standard 35-mm slides
 Scanners have dropped significantly in price
 High-resolution color scanners now available for reasonable price
 Optical Character Recognition (OCR) software also available

Digital video

 12-inch videodisks store up to 54,000 still images or 30 minutes of motion


video
 widely used for
o museums
o travel facilities
o education
o industrial training
 CD-ROMs can provide up to
o 600 megabytes of textual or numeric data
o 6000 graphic images
o 1 hour of music
o 6 to 72 minutes of motion video (depending on quality)
 MPEG video cameras now available

Projectors, heads-up displays, helmet-mounted displays


6. Printers

These are the important criteria for printers:

1. Speed
2. Print quality
3. Cost
4. Compactness
5. Quiet operation
6. Use of ordinary paper (fanfolded or single sheet)
7. Character set
8. Variety of typefaces, fonts, and sizes
9. Highlighting techniques (boldface, underscore, and so on)
10. Support for special forms (printed forms, different lengths, and so on)
11. Reliability

dot-matrix printers

 print more than 200 characters per second, have multiple fonts, can print
boldface, use variable width and size, and have graphics capabilities

inkjet printers

 offer quiet operation and high-quality output

thermal printers or fax machines

 offer quiet, compact, and inexpensive output on specially coated papers

laser printers

 operate at 30,000 lines per minute

color printers

 allow users to produce hardcopy output of color graphics, usually by an


inkjet approach with three colored and black inks

photographic printers

 allow the creation of 35-millimeter or larger slides (transparencies) and


photographic prints
7. Expert Reviews, Usability Testing, Surveys, and
Continuing Assessment
Introduction

 Designers can become so entranced with their creations that they may fail to
evaluate them adequately.

 Experienced designers have attained the wisdom and humility to know that
extensive testing is a necessity.

 The determinants of the evaluation plan include:


o stage of design (early, middle, late)
o novelty of project (well defined vs. exploratory)
o number of expected users
o criticality of the interface (life-critical medical system vs. museum
exhibit support)
o costs of product and finances allocated for testing
o time available
o experience of the design and evaluation team
 The range of evaluation plans might be from an ambitious two-year test to a
few days' test.

 The range of costs might be from 10% of a project down to 1%.

Expert Reviews

 While informal demos to colleagues or customers can provide some useful


feedback, more formal expert reviews have proven to be effective.

 Expert reviews entail one-half day to one week of effort, although a lengthy
training period may sometimes be required to explain the task domain or
operational procedures.

 There are a variety of expert review methods to choose from:


o Heuristic evaluation
o Guidelines review
o Consistency inspection
o Cognitive walkthrough
o Formal usability inspection
 Expert reviews can be scheduled at several points in the development
process when experts are available and when the design team is ready for
feedback.
 Different experts tend to find different problems in an interface, so 3-5
expert reviewers can be highly productive, as can complementary usability
testing.

 The dangers with expert reviews are that the experts may not have an
adequate understanding of the task domain or user communities.

 To strengthen the possibility of successful expert reviews, it helps to choose


knowledgeable experts who are familiar with the project situation and who
have a longer term relationship with the organization.

 Moreover, even experienced expert reviewers have great difficulty knowing


how typical users, especially first-time users, will really behave.

Usability Testing and Laboratories

 The emergence of usability testing and laboratories since the early 1980s is
an indicator of the profound shift in attention to user needs.
 The remarkable surprise was that usability testing not only sped up many
projects but that it produced dramatic cost savings.
 The movement towards usability testing stimulated the construction of
usability laboratories.
 A typical modest usability lab would have two 10 by 10 foot areas, one for
the participants to do their work and another, separated by a half-silvered
mirror, for the testers and observers (designers, managers, and customers).
 Participants should be chosen to represent the intended user communities,
with attention to background in computing, experience with the task,
motivation, education, and ability with the natural language used in the
interface.
 Participation should always be voluntary, and informed consent should be
obtained. Professional practice is to ask all subjects to read and sign a
statement like this one:
o I have freely volunteered to participate in this experiment.
o I have been informed in advance what my task(s) will be and what
procedures will be followed.
o I have been given the opportunity to ask questions, and have had my
questions answered to my satisfaction.
o I am aware that I have the right to withdraw consent and to
discontinue participation at any time, without prejudice to my future
treatment.
o My signature below may be taken as affirmation of all the above
statements; it was given prior to my participation in this study.
 Videotaping participants performing tasks is often valuable for later review
and for showing designers or managers the problems that users encounter.
 Field tests attempt to put new interfaces to work in realistic environments
for a fixed trial period. Field tests can be made more fruitful if logging
software is used to capture error, command, and help frequencies plus
productivity measures.
 Game designers pioneered the can-you-break-this approach to usability
testing by providing energetic teenagers with the challenge of trying to beat
new games. This destructive testing approach, in which the users try to find
fatal flaws in the system, or otherwise to destroy it, has been used in other
projects and should be considered seriously.
 For all its success, usability testing does have at least two serious
limitations: it emphasizes first-time usage and has limited coverage of the
interface features.
 These and other concerns have led design teams to supplement usability
testing with the varied forms of expert reviews.

Surveys

 Written user surveys are a familiar, inexpensive and generally acceptable


companion for usability tests and expert reviews.
 The keys to successful surveys are clear goals in advance and then
development of focused items that help attain the goals.
 Survey goals can be tied to the components of the Objects and Actions
Interface (OAI) model of interface design. Users could be asked for their
subjective impressions about specific aspects of the interface such as the
representation of:
o task domain objects and actions
o syntax of inputs and design of displays.
 Other goals would be to ascertain
o users' background (age, gender, origins, education, income)
o experience with computers (specific applications or software
packages, length of time, depth of knowledge)
o job responsibilities (decision-making influence, managerial roles,
motivation)
o personality style (introvert vs. extrovert, risk taking vs. risk aversive,
early vs. late adopter, systematic vs. opportunistic)
o reasons for not using an interface (inadequate services, too complex,
too slow)
o familiarity with features (printing, macros, shortcuts, tutorials)
o their feeling state after using an interface (confused vs. clear,
frustrated vs. in-control, bored vs. excited).
 Online surveys avoid the cost of printing and the extra effort needed for
distribution and collection of paper forms.
 Many people prefer to answer a brief survey displayed on a screen, instead
of filling in and returning a printed form, although there is a potential bias in
the sample.
Acceptance Tests

 For large implementation projects, the customer or manager usually sets


objective and measurable goals for hardware and software performance.
 If the completed product fails to meet these acceptance criteria, the system
must be reworked until success is demonstrated.
 Rather than the vague and misleading criterion of "user friendly,"
measurable criteria for the user interface can be established for the
following (a sketch of such a check appears after this list):
o Time to learn specific functions
o Speed of task performance
o Rate of errors by users
o Human retention of commands over time
o Subjective user satisfaction
 In a large system, there may be eight or 10 such tests to carry out on
different components of the interface and with different user communities.
 Once acceptance testing has been successful, there may be a period of field
testing before national or international distribution.
 The goal of early expert reviews, usability testing, surveys, acceptance
testing, and field testing is to force as much of the evolutionary development
as possible into the prerelease phase, when change is relatively easy and
inexpensive to accomplish.
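
As a hedged illustration of such measurable acceptance criteria, the sketch below records thresholds for the five measures listed earlier and checks measured results against them; all names and threshold values are invented:

    # Hypothetical acceptance criteria for one user community and one benchmark task.
    criteria = {
        "time_to_learn_minutes": 30,     # learn the specified functions within 30 minutes
        "mean_task_time_seconds": 90,    # perform the benchmark task within 90 seconds
        "error_rate_percent": 5,         # at most 5% erroneous actions
        "retention_percent": 80,         # retain at least 80% of commands after a week
        "satisfaction_min": 6,           # score at least 6 on a 1-9 satisfaction scale
    }

    def passes_acceptance(measured):
        """Compare measured results against the criteria; return (passed, failures)."""
        failures = []
        if measured["time_to_learn_minutes"] > criteria["time_to_learn_minutes"]:
            failures.append("time to learn")
        if measured["mean_task_time_seconds"] > criteria["mean_task_time_seconds"]:
            failures.append("speed of task performance")
        if measured["error_rate_percent"] > criteria["error_rate_percent"]:
            failures.append("rate of errors")
        if measured["retention_percent"] < criteria["retention_percent"]:
            failures.append("retention over time")
        if measured["satisfaction"] < criteria["satisfaction_min"]:
            failures.append("subjective satisfaction")
        return (len(failures) == 0, failures)

    passed, failures = passes_acceptance({
        "time_to_learn_minutes": 25, "mean_task_time_seconds": 110,
        "error_rate_percent": 3, "retention_percent": 85, "satisfaction": 7,
    })
    print(passed, failures)   # False ['speed of task performance']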

Evaluation During Active Use

 A carefully designed and thoroughly tested system is a wonderful asset, but


successful active use requires constant attention from dedicated managers,
user-services personnel, and maintenance staff.
 Perfection is not attainable, but percentage improvements are possible and
are worth pursuing.
 Interviews and focus group discussions
o Interviews with individual users can be productive because the
interviewer can pursue specific issues of concern.
o After a series of individual discussions, group discussions are
valuable to ascertain the universality of comments.
 Continuous user-performance data logging (see the logging sketch after this list)
o The software architecture should make it easy for system managers to
collect data about the patterns of system usage, speed of user
performance, rate of errors, or frequency of requests for online
assistance.
o A major benefit of usage-frequency data is the guidance they provide
to system maintainers in optimizing performance and reducing costs
for all participants.
 Online or telephone consultants
o Online or telephone consultants are an extremely effective and
personal way to provide assistance to users who are experiencing
difficulties.
o Many users feel reassured if they know there is a human being to
whom they can turn when problems arise.
o On some network systems, the consultants can monitor the user's
computer and see the same displays that the user sees while
maintaining telephone voice contact.
o This service can be extremely reassuring; the users know that
someone can walk them through the correct sequence of screens to
complete their tasks.
 Online suggestion box or trouble reporting
o Electronic mail can be employed to allow users to send messages to
the maintainers or designers.
o Such an online suggestion box encourages some users to make
productive comments, since writing a letter may be seen as requiring
too much effort.
 Online bulletin board or newsgroup
o Many interface designers offer users an electronic bulletin board or
newsgroups to permit posting of open messages and questions.
o Bulletin-board software systems usually offer a list of item headlines,
allowing users the opportunity to select items for display.
o New items can be added by anyone, but usually someone monitors
the bulletin board to ensure that offensive, useless, or repetitious
items are removed.
 User newsletters and conferences
o Newsletters that provide information about novel interface facilities,
suggestions for improved productivity, requests for assistance, case
studies of successful applications, or stories about individual users
can promote user satisfaction and greater knowledge.
o Printed newsletters are more traditional and have the advantage that
they can be carried away from the workstation.
o Online newsletters are less expensive and more rapidly disseminated
o Conferences allow workers to exchange experiences with colleagues,
promote novel approaches, stimulate greater dedication, encourage
higher productivity, and develop a deeper relationship of trust.
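
Relating to the continuous user-performance data logging item above, a minimal sketch of a usage log that counts commands, errors, and help requests; the event names and summary format are illustrative only, not from the source:

    from collections import Counter
    from datetime import datetime

    class UsageLog:
        """Counts commands, errors, and help requests so maintainers can spot patterns."""

        def __init__(self):
            self.commands = Counter()
            self.errors = Counter()
            self.help_requests = 0
            self.events = []          # raw event stream for later analysis

        def record(self, kind, name=""):
            self.events.append((datetime.now(), kind, name))
            if kind == "command":
                self.commands[name] += 1
            elif kind == "error":
                self.errors[name] += 1
            elif kind == "help":
                self.help_requests += 1

        def summary(self):
            return {
                "most_used_commands": self.commands.most_common(5),
                "most_common_errors": self.errors.most_common(5),
                "help_requests": self.help_requests,
            }

    log = UsageLog()
    log.record("command", "open_file")
    log.record("error", "file_not_found")
    log.record("help")
    print(log.summary())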
8. Direct Manipulation and Virtual Environments
Introduction

Positive feelings associated with good user interfaces:

 Mastery of the interface


 Competence in performing tasks
 Ease in learning the system originally and in assimilating advanced features
 Confidence in the capacity to retain mastery over time
 Enjoyment in using the system
 Eagerness to show the system off to novices
 Desire to explore more powerful aspects of the system

Examples of Direct-Manipulation systems

Command line vs. display editors and word processors

 Training times with display editors are much less than line editors
 Line editors are generally more flexible and powerful
 The advantages of WYSIWYG word processors:
o Display a full page of text
o Display of the document in the form that it will appear when the final
printing is done
o Show cursor action
o Control cursor motion through physically obvious and intuitively
natural means
o Use of labeled icon for actions
o Display of the results of an action immediately
o Provide rapid response and display
o Offer easily reversible actions

Technologies that derive from the word processor:

 Integration
 Desktop publication software
 Slide-presentation software
 Hypermedia environments
 Improved macro facilities
 Spell checker and thesaurus
 Grammar checkers

The VISICALC spreadsheet and its descendants

Spatial data management

 In some cases, spatial representations provide a better model of reality


 Successful spatial data-management systems depend on choosing
appropriate:
o Icons
o Graphical representations
o Natural and comprehensible data layouts

Video games

 Field of action is visual and compelling


 Commands are physical actions whose results are immediately shown on the
screen
 No syntax to remember

Computer-aided design

Office automation

Further examples of direct manipulation

 HyperCard
 Quicken

Explanations of Direct Manipulation

Problems with direct manipulation

 Spatial or visual representations can be too spread out


 High-level flowcharts and database-schema can become confusing
 Designs may force valuable information off of the screen
 Users must learn the graphical representations
 The visual representation may be misleading
 Typing commands with the keyboard may be faster

The OAI Model explanation of direct manipulation

Portrait of direct manipulation:

 Continuous representation of the objects and actions of interest


 Physical actions or presses of labeled buttons instead of complex syntax
 Rapid incremental reversible operations whose effect on the object of
interest is immediately visible

Beneficial attributes:

 Novices learn quickly


 Experts work rapidly
 Intermittent users can retain concepts
 Error messages are rarely needed
 Users see if their actions are furthering their goals
 Users experience less anxiety
 Users gain confidence and mastery

Visual Thinking and Icons

 The visual nature of computers can challenge the first generation of hackers
 An icon is an image, picture, or symbol representing a concept
 Icon-specific guidelines
o Represent the object or action in a familiar manner
o Limit the number of different icons
o Make icons stand out from the background
o Consider three-dimensional icons
o Ensure that a selected icon is clearly distinguishable from unselected icons
o Design the movement animation
o Add detailed information
o Explore combinations of icons to create new objects or actions

Direct-Manipulation Programming

Visual representations of information make direct-manipulation programming


possible in other domains

Demonstrational programming lets users create macros simply by performing their tasks (a sketch of the idea follows)
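
A minimal sketch of the idea, assuming a hypothetical editor-like object: user actions are recorded as they are performed and can later be replayed as a macro:

    class MacroRecorder:
        """Records (action, args) pairs while the user works, then replays them."""

        def __init__(self, target):
            self.target = target        # the object whose methods are invoked (e.g. an editor)
            self.steps = []
            self.recording = False

        def start(self):
            self.recording, self.steps = True, []

        def stop(self):
            self.recording = False

        def perform(self, action, *args):
            """Perform an action on the target; remember it if we are recording."""
            getattr(self.target, action)(*args)
            if self.recording:
                self.steps.append((action, args))

        def replay(self):
            for action, args in self.steps:
                getattr(self.target, action)(*args)

    # Hypothetical usage with an editor-like object exposing insert_text() and new_line():
    # rec = MacroRecorder(editor)
    # rec.start(); rec.perform("insert_text", "Dear "); rec.perform("new_line"); rec.stop()
    # rec.replay()   # repeats the recorded steps, i.e. the macro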

The five challenges of programming in the user interface:

 Sufficient computational generality


 Access to the appropriate data structures and operators
 Ease in programming and editing programs
 Simplicity in invocation and assignment of arguments
 Low risk

The cognitive dimensions framework may help in analyzing design issues of visual programming environments.

Home Automation

Remote control of devices is being extended to:

 Channel audio and video


 lawn watering
 video surveillance and burglar alarms
 Multiple-zone environmental controls
 Maintenance records

Providing direct-manipulation with rich feedback is vital in these applications

Many direct-manipulation actions take place on a display of the floor plan

ON and OFF can have many representations and present problems with choosing
the appropriate one

Controlling complex home equipment by direct manipulation reshapes how we


think of homes and residents

Remote Direct Manipulation

Complicating factors in the architecture of remote environments:

 Time delays
o transmission delays
o operation delays
 Incomplete feedback
 Feedback from multiple sources
 Unanticipated interferences

Virtual Environments

Virtual reality breaks the physical limitations of space and allows users to act as
though they were somewhere else

Augmented reality shows the real world with an overlay of additional information

Situational awareness shows information about the real world that surrounds you
by tracking your movements in a computer model

Successful virtual environments depend on the smooth integration of:

 Visual Display
 Head position sensing
 Hand-position sensing
 Force feedback
 Sound input and output
 Other sensations
 Cooperative and competitive virtual reality
9. Haptic Interface
Immersion, interaction, and imagination are three features of virtual reality (VR)

Existing VR systems possess fairly realistic visual and auditory feedback; however, they
are poor at haptic feedback, by means of which humans perceive the rich haptic
properties of the physical world

Haptic feedback is indispensable for enhancing immersion, interaction, and


imagination of VR systems. Interaction can be enhanced by haptic feedback as users
can directly manipulate virtual objects, and obtain immediate haptic feedback.
Immersion of the VR system can be enhanced in terms of providing more realistic
sensation to mimic the physical interaction process. Imagination of users can be
inspired when haptics can provide more cues for user to mentally construct an
imagined virtual world beyond spatial and/or temporal limitations.

The haptic sensation obtained through virtual interaction is far poorer than the
sensation obtained through physical interaction. In our physical life, the haptic
channel is used pervasively, for example to perceive the stiffness, roughness, and
temperature of objects in the external world, to manipulate those objects, and to
perform motion or force control tasks such as grasping, touching, or walking. In
contrast, in the virtual world, haptic experiences are fairly poor in both quantity and
quality. Most commercial VR games and movies provide only visual and auditory
feedback, and a few provide simple haptic feedback such as vibrations. With the
booming of VR in many areas such as medical simulation and product design, there
is an urgent need to improve the realism of haptic feedback in VR systems, and thus
to achieve sensations comparable to interaction in the physical world.

Paradigms of haptic display driven by computing platform

The paradigm of human-computer interaction (HCI) can be defined with three


components: human user, interface device, and virtual environment synthesized by
computer.

Considering the human user, we need to study both the perception and manipulation
characteristics of the haptic channel during the bilateral communication between
human and computer. For the perception aspect, human perceptual system mainly
includes diverse kinesthetic and cutaneous receptors in our body, which located in
skin, muscles or tendons. For the manipulation/action aspect, we need to consider
the motor control parameters, including degree-of-freedom (DoF) of motion or force
tasks, the magnitude and resolution for motion and force signals for diverse
manipulation tasks.

Considering the interface device, its functions include sensing and actuation. For
sensing, a device needs to sense/track human manipulation information such as
motion/force signals, and then transmit this information to control a virtual avatar of
the device in the virtual environment. For actuation, the device receives simulated
force signals from the virtual environment, and then reproduces these forces by
actuators such as electric motors to exert these forces on user’s body.

Considering the virtual environment, diverse tasks can be simulated, including


surgical simulation, mechanical assembly, computer games etc. Whenever there are
direct contacts between the user-driven avatar and the manipulated objects, haptic
feedback devices can be used to display the resultant contact forces/torques to users.

History of paradigm shift in the past 30 years

In 1948, the first mechanical force-feedback master-slave manipulator was
developed at Argonne National Laboratory by Raymond Goertz [2]. The master arm
is the pioneer of today's haptic displays. With the advancement of computing
platforms, haptic displays evolved accordingly. The influence between haptic displays
and computing platforms is two-fold. On the one hand, each computing platform
required an effective means of human-computer interaction to maintain users' work
efficiency. On the other hand, haptic displays required a computing platform powerful
enough to support their function and performance specifications.

In the past 30 years, the evolution of computing platform can be summarized in three
eras: personal computer, mobile internet, and virtual reality based on wearable
computers. Accordingly, the paradigm of haptic HCI can be classified into three
stages: desktop haptics, surface haptics, and wearable haptics.

Figure illustrates the typical features of each paradigm.


Virtual avatar (noun): something visual used to represent non-visual concepts or
ideas, or an image used to represent a person in the virtual world of the Internet and
computers; for example, an icon you use to represent yourself on an Internet forum.

In a human-machine interaction system, a human user manipulates a haptic device,
and the mechanical interface is mapped onto an avatar that performs a typical task in
the virtual environment (VE). With the evolution of the haptic HCI paradigm, the
following components also evolve: the body part of the human user, the metaphor of
the interface device, the controlled avatar, and the motion/force dimensions
supported by the paradigm.

In desktop haptics, the user's hand holds the stylus of the device to control a virtual
tool such as a surgical scalpel or mechanical screwdriver. The simulated motion/force
dimensions in desktop haptics are six: three translations and three rotations of the
virtual tool.

In surface haptics, the user's fingertip slides along the touchscreen of a mobile phone
with typical gestures such as panning, zooming, and rotating to control a finger avatar
that feels the texture and/or shape of virtual objects. The simulated motion/force
dimensions in surface haptics are two, within the planar surface of the touchscreen.

In wearable haptics, the user's hand wears a haptic glove to control a hand-shaped
virtual avatar with diverse simulated gestures such as grasping, pinching, and lifting.
The motion dimensions are 22, corresponding to the DoF of the human hand, and the
force dimensions change dynamically depending on the number and topology of
contact points between the virtual hand and the manipulated objects.

Desktop haptics

Motivation

The mainstream interaction paradigm in the era of the personal computer is the
Windows-Icons-Menus-Pointer (WIMP) interface, in which the computer mouse serves
as a user-friendly interface enabling highly efficient human-computer interaction.
While the mouse can capture the two-dimensional movement of the user's hand on a
desktop surface, it cannot provide force feedback to the user's hand when the virtual
avatar collides with a virtual obstacle.

To overcome this limitation of the classic HCI paradigm, desktop force-feedback
devices, in effect an enhanced mouse built as a multi-link robotic arm, have been
created to provide both motion control and force feedback between the user and the
virtual environment. Figure 3 compares a computer mouse and a desktop haptic
device.

Figure 3: Comparison between a computer mouse and a desktop haptic device.

In the era of the personal computer, the mainstream form of haptic interaction is the
multi-joint force-feedback device fixed on a desk or on the ground. The interaction
metaphor of the desktop force-feedback interface is that users interact with the virtual
environment through the handle, and the virtual avatar is a 6-DoF rigid tool, such as
a surgical scalpel or mechanical screwdriver. The contact force is transmitted from
the desktop haptic device to the user's hand when the virtual avatar contacts or
collides with objects in the virtual environment, so the user obtains the force-feedback
experience of manipulating the virtual objects.

Desktop haptic feedback device

Definition

A desktop haptic device is a multi-joint robotic arm with a stylus that is held in the
user's hand; the device tracks the movement of the stylus and provides force
feedback on it. The device is normally fixed on a table-top or on the ground.

To fulfil these functions, there are three major components. First, position or force
sensors in each joint or in the end-effector of the robotic arm measure the movement
of the stylus driven by the user (or the force the user exerts on the stylus). Second,
actuators provide the feedback forces or torques. Third, transmission mechanisms
transmit torque from the joints to the end-effector.

Function and specifications


Compared with traditional robotic arms, the major challenge for a haptic device is
to simulate the sensation of interacting with both free space and constrained space.
In free space, the device should follow the motion of the user and exert as little
resistance as possible on the user's hand. In constrained space, the device should
provide a sufficient range of impedance to simulate contact constraints from virtual
objects with diverse physical properties.

The three criteria/requirements for a good haptic device: (1) Free space must feel
free; (2) Solid virtual objects must feel stiff; (3) Virtual constraints must not be easily
saturated.

These criteria translate into the following design specifications: low back-drive
friction, low inertia, a highly adjustable impedance range, large force/torque output,
high position-sensing resolution, and sufficient workspace for the simulated task.

Classification

Figure illustrates the taxonomy of desktop haptic devices based on diverse


classification criteria. According to the kinematic structure, haptic devices can be
classified into serial, parallel, and hybrid structures. According to the control
principle, haptic devices can be classified into impedance and admittance control.
Different actuation principles can be used, including electric motor, pneumatic and
hydraulic actuations, and novel actuators.

Figure: Taxonomy of desktop haptic devices based on diverse classification criteria.

The characteristics of the serial-structure mechanism are as follows: simple topological
structure, a large workspace, a large rotation angle for the end link, flexible
operation, and relatively easy forward kinematics and inverse driving force/moment
solutions. However, this kind of mechanism has the following disadvantages: the
mechanical stiffness is relatively low; the error of the end-effector is large because
errors from each joint may accumulate; and most of the driving motors and
transmission links are mounted on the moving arms, which increases the inertia of
the system and leads to poor dynamic performance.

The parallel structure mechanism offers potential advantages compared with the
serial mechanism, with multi-branch form, high overall stiffness, strong payload
capacity. While the errors of each joint will accumulate to the end-effector in serial
mechanism, in contrast, there is no error accumulation or amplification in parallel
mechanism, and the error of the end-effector is relatively small. Similarly, the
driving motors in the parallel mechanism are usually mounted on the frame, which
leads to low inertia, a small dynamic load, and fast dynamic performance.
However, the available workspace of the parallel mechanism is small and the
rotation range of the end-effector (movable platform) is limited under the same
structure size. The solving process of the forward kinematics solution and the inverse
driving force/moment solution are relatively complex and difficult.

The hybrid-structure mechanism combines the advantages of the serial and parallel
mechanisms: it increases the system stiffness and reduces the error of the end-
effector while keeping the forward kinematics and inverse driving force/moment
solutions simple. However, torque feedback on the rotation axis of the end-effector
is not easy to realize for the 6-DoF mechanism.

Based on the control principle, haptic devices can be classified into impedance
displays and admittance displays. An impedance display realizes force feedback by
measuring the motion of the operator and applying a feedback force to the operator.
An admittance display realizes force feedback by measuring the active force applied
by the operator to the device and controlling the motion of the force-feedback device.
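
A minimal sketch contrasting the two control styles on a simple virtual wall; the gains, time step, and device interface are assumptions, not any particular product's API:

    # Impedance display: measure motion, output force.
    def impedance_step(x, wall_x=0.0, stiffness=500.0):
        """Read device position x (m); return force (N) pushing out of a virtual wall."""
        penetration = x - wall_x
        return -stiffness * penetration if penetration > 0 else 0.0

    # Admittance display: measure applied force, command motion.
    def admittance_step(f_user, x, wall_x=0.0, admittance=0.002):
        """Read user force f_user (N); return new commanded position (m), clamped at the wall."""
        x_new = x + admittance * f_user
        return min(x_new, wall_x)      # the device refuses to move past the rigid wall

    print(impedance_step(0.01))        # 1 cm into the wall -> -5.0 N pushing back
    print(admittance_step(3.0, -0.01)) # pushing toward the wall from 1 cm away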

Typical commercial devices and research prototypes

In recent years, a large number of desktop force feedback interfaces have emerged
with the rapid development of sensors, robotics and other technologies, such as the
Phantom series of SensAble Inc., the Omega and Delta series of Force Dimension
Inc.
Haptic rendering

Haptic rendering refers to the process of computing and generating forces in


response to user interactions with virtual objects. Figure shows the pipeline of haptic
rendering.

Figure: Pipeline of haptic rendering
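
A minimal sketch of one cycle of such a pipeline, using simple penalty-based (spring) forces against a sphere; the object model, stiffness value, and device interface are assumptions rather than the pipeline shown in the figure:

    import math

    def haptic_rendering_step(probe_pos, sphere_center=(0.0, 0.0, 0.0),
                              sphere_radius=0.05, stiffness=800.0):
        """One haptic rendering cycle: collision detection, then force generation.

        probe_pos: current position of the haptic device's proxy (meters).
        Returns the force vector (N) to send to the device actuators.
        """
        # Collision detection: how far the probe is from the sphere surface.
        d = [p - c for p, c in zip(probe_pos, sphere_center)]
        dist = math.sqrt(sum(v * v for v in d))
        penetration = sphere_radius - dist
        if penetration <= 0 or dist == 0:
            return (0.0, 0.0, 0.0)            # free space: no force
        # Force generation: spring force along the outward surface normal.
        normal = [v / dist for v in d]
        return tuple(stiffness * penetration * n for n in normal)

    # Run at ~1 kHz in a real system; here, a single step with the probe 1 cm inside.
    print(haptic_rendering_step((0.04, 0.0, 0.0)))   # approximately (8.0, 0.0, 0.0)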


Typical applications

Typical applications of desktop haptic systems include virtual surgery, mechanical
assembly, and tool-based entertainment such as virtual sculpture.

Surface haptics

Motivation

Since 2005, surface haptics has gradually become a research hotspot with the rise of
the mobile devices such as mobile phone, tablet PC, and multi-user collaborative
touch devices with a large screen.

Different from desktop haptic devices, which simulate indirect contacts between hands
and objects (i.e., tool-mediated interaction), surface haptics aims to simulate direct
contacts between bare fingers and objects; for example, users can touch and feel the
contour or the roughness of an image displayed on the screen of a mobile phone.

Tactile device for touch screen


The main challenge of surface haptic feedback is to embed all the sensing and
actuating components within the compact space of a mobile device.

According to the actuation principle, tactile feedback devices can be classified into
three categories: tactile feedback devices based on mechanical vibration, tactile
feedback devices that change the surface shape, tactile feedback devices with
variable surface friction coefficients. In this section, we survey these three
approaches with respect to their principles, key technologies and representative
systems, along with their pros and cons.

Classification of tactile feedback devices based on actuation principles.

Vibrotactile rendering devices stimulate the operator's fingers to present the tactile
sensation by controlling the tactile actuators such as eccentric rotating-mass (ERM)
actuators, linear resonant actuators (LRA), voice coils, solenoid actuators or
piezoelectric actuators. At present, almost all mobile phones and smart watches have
vibration feedback functions. For example, Immersion has been providing Samsung
with the OEM service of the TouchSense technology. TI has developed a series of
driver chips of piezoelectric vibrator, and Apple has used the linear motor to render
tactile feedback on the iPhone 6.
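
As a hedged illustration of vibrotactile rendering, the sketch below generates drive samples for a short "click" effect as a decaying sine burst; the frequency, duration, and envelope are arbitrary, and sending the samples to a specific actuator (ERM, LRA, voice coil, piezo) is hardware-dependent and not shown:

    import math

    def click_effect(freq_hz=175.0, duration_s=0.03, sample_rate=8000, peak=1.0):
        """Return a list of normalized drive samples for a brief vibrotactile 'click'.

        A decaying envelope on a sine carrier approximates the sharp, short pulse
        commonly used for button-press feedback; values lie in [-1, 1].
        """
        n = int(duration_s * sample_rate)
        samples = []
        for i in range(n):
            t = i / sample_rate
            envelope = peak * (1.0 - i / n)            # linear decay over the burst
            samples.append(envelope * math.sin(2 * math.pi * freq_hz * t))
        return samples

    burst = click_effect()
    print(len(burst), round(max(burst), 3))   # number of samples and peak drive value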

Micro roughness display

Tactile array is an intuitive way to realize tactile feedback, which utilizes a two-
dimensional array composed of pins, electrodes, air pressured chambers or voice
coils to stimulate the operator’s skin, and thus to form the spatial force distribution
that reflects the tactile properties of an object surface.

Distributed actuators are used to generate the physical image of a virtual component
by changing the unevenness of the touch surface, thereby improving the realism of the
operation. Harrison and Hudson developed a tactile feedback device actuated by
pneumatic chambers, which can fill a custom-designed airbag button with air. As
shown in the figure, positive pressure makes the button convex, negative pressure
makes the button concave, and the button remains flat when not inflated. The design
can also function as a touchscreen display by using a rear-projection projector.
Preliminary experimental results show that, compared to a smooth touchscreen
button, the airbag button improves the user's speed in discriminating different
buttons, and the tactile effect of the airbag button is very close to that of a physical
button.

Varied states of customized-designed airbag buttons

Friction modulation

Friction modulation devices reproduce tactile information through the change of the
lateral force within a horizontal plane, including the squeeze-film effect and the
electrostatic effect. Compared with devices using an actuator array, the force
output of friction-modulation devices is continuous, and thus they can present fine
tactile effects.

Tactile rendering

Tactile rendering algorithms obtain the position of the user's fingers and produce force
output through the device. They normally consist of components such as object
modelling, collision detection, and force generation. At present, tactile rendering
algorithms for surface haptic devices are still in an early phase; most systems use
image-based rendering approaches.
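
A minimal sketch of an image-based approach: the grayscale image under the finger is treated as a height map, and the local gradient modulates the commanded friction level; the mapping and gain are assumptions:

    def tactile_friction_at(image, x, y, gain=0.5, base_friction=0.2):
        """Map the local image gradient under the finger to a friction command in [0, 1].

        image: 2D list of grayscale values in [0, 1], treated as a height map.
        (x, y): finger position in pixel coordinates.
        """
        h, w = len(image), len(image[0])
        x = max(1, min(w - 2, int(x)))
        y = max(1, min(h - 2, int(y)))
        # Central differences approximate the surface slope under the finger.
        gx = (image[y][x + 1] - image[y][x - 1]) / 2.0
        gy = (image[y + 1][x] - image[y - 1][x]) / 2.0
        slope = (gx * gx + gy * gy) ** 0.5
        return min(1.0, base_friction + gain * slope)

    # Example: a synthetic image with a bright stripe produces higher friction at its edge.
    img = [[1.0 if 3 <= col <= 5 else 0.0 for col in range(10)] for _ in range(10)]
    print(tactile_friction_at(img, 2, 5))   # near the stripe edge -> higher friction (0.45)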

Typical applications

Tactile devices have been used for assisting visually impaired people in reading text
or images on web pages, for enhancing the efficiency of menu or button
manipulation on mobile touchscreens, and for developing mobile computer games with
diversified tactile sensations.

In the future, touchscreens with tactile feedback may find potential applications in
mobile phones for improving the efficiency and accuracy of fine manipulations such
as panning, pinching and spreading gestures. For automobile industry, tactile
feedback will be beneficial for eye-free interaction with the touchscreen for
enhancing the safety during driving. For effective communication and teamwork in
classrooms or meeting rooms, using large-sized screen mounted on a wall, tactile
feedback will be helpful for intuitive manipulation of graphical charts of widgets for
assisting brainstorming discussions.

Wearable haptics

Motivation

In recent years, low cost HMDs such as Oculus Rift and HTC Vive indicate the
booming era of VR. When a user experiences virtual objects in high-fidelity 3D
graphically rendered virtual environments, it is a human instinct to touch, grasp, and
manipulate the virtual objects.

The development of VR requires a novel type of haptic feedback that allows users
to move in a large workspace, provides force feedback to the whole hand, and supports
diverse gestures with the full DoF of the fingers for fine manipulation.

Wearable haptic device

The haptic glove is a typical form of wearable haptic device. Its main functions include
multi-DoF whole hand motion tracking, and providing distributed force and tactile
feedback to fingertips and the palm. Compared with desktop haptic force feedback
devices such as Phantom Desktop, haptic gloves are able to allow users to touch and
manipulate remote or virtual objects in an intuitive and direct way via the dexterous
manipulation and sensitive perception capabilities of our hands. A well designed
glove could provide force and tactile feedback that realistically simulates touching
and manipulating objects at a high update rate, while being lightweight and low
cost.
For simplicity, according to the mechanoreceptors in user’s body, current wearable
haptic devices include three types: kinesthetic/force feedback devices, tactile
feedback devices, and integrated feedback devices. Each type can be further
classified by different criteria. For example, based on the fixed link, we can classify
wearable haptic devices into dorsal-based, palm-based and digit-based. Based on
actuation type, we can classify them into electrical motors, pneumatic or hydraulic
actuators, novel actuators using functional materials etc.

Driven by strong application needs from virtual reality, many startup companies have
developed haptic gloves. For the convenience of readers aiming to quickly construct
a virtual reality system with wearable haptic feedback, we summarize existing
commercial haptic gloves and other wearable haptic devices in the table below; the
accompanying figure (not reproduced here) shows photos of several representative
commercial force-feedback gloves, including CyberGrasp, H-glove, Dexmo, Haptx,
Plexus, and VRgluv.

Table: Existing commercial haptic feedback gloves
(columns: motion tracking, force feedback, tactile feedback, actuation principle, sensing principle, typical features)

CyberGrasp (Immersion)
o Motion tracking: —
o Force feedback: one actuator per finger
o Tactile feedback: —
o Actuation principle: electric motor
o Sensing principle: 22-sensor CyberGlove device
o Typical features: feel the size and shape of virtual objects

H-glove (Haption)
o Motion tracking: each finger possesses 3 DoF
o Force feedback: force feedback on 3 fingers
o Tactile feedback: —
o Actuation principle: electric motor
o Sensing principle: —
o Typical features: possibility to attach it to a Virtuose 6D

Dexmo (DextaRobotics)
o Motion tracking: 11 DoF tracking for the hand
o Force feedback: force feedback on 5 fingers
o Tactile feedback: three linear resonant actuators (LRA)
o Actuation principle: electric motor
o Sensing principle: rotary sensors
o Typical features: feel the shape, size and stiffness of virtual objects

Haptx glove (Haptx)
o Motion tracking: 6 DoF tracking
o Force feedback: lightweight exoskeleton applies up to 4 lbs per finger
o Tactile feedback: 130 stimuli points per digit
o Actuation principle: microfluidic array
o Sensing principle: magnetic tracking
o Typical features: feel the shape, texture and motion of virtual objects; sub-millimeter precision

Plexus (Plexus)
o Motion tracking: 21 DoF per hand
o Force feedback: —
o Tactile feedback: one tactile actuator per finger
o Actuation principle: —
o Sensing principle: tracking with 0.01 degree precision
o Typical features: uses tracking adapters for the Vive, Oculus and Windows MR devices

Sense glove (Sense Glove)
o Motion tracking: 20 DoF finger tracking
o Force feedback: force feedback applied to each fingertip in the flexion or grasping direction
o Tactile feedback: one haptic motor per finger
o Actuation principle: electric motor
o Sensing principle: sensor
o Typical features: feel the shape and density of virtual objects

Avatar VR [57] (NeuroDigital Tech.)
o Motion tracking: full finger tracking
o Force feedback: —
o Tactile feedback: ten vibrotactile actuators
o Actuation principle: vibrotactile array
o Sensing principle: 6x 9-axis IMUs
o Typical features: tracks the movements of chest, arms and hands

Maestro [58] (Contact CI)
o Motion tracking: 13 DoF hand tracking
o Force feedback: exotendon restriction mechanism
o Tactile feedback: vibration feedback at each of the 5 fingertips
o Actuation principle: —
o Sensing principle: motion capture system combined with flex sensors
o Typical features: precise gesture capture, nuanced vibration feedback and smart finger restriction

Senso Glove [59] (Senso Device)
o Motion tracking: full 3D tracking for each finger
o Force feedback: —
o Tactile feedback: 5 vibration motors
o Actuation principle: —
o Sensing principle: 7 IMU sensors
o Typical features: precisely tracks finger and hand positions in space and provides haptic feedback

Dextres glove (EPFL and ETH Zurich)
o Motion tracking: —
o Force feedback: holding force on each finger
o Tactile feedback: —
o Actuation principle: electrostatic attraction
o Sensing principle: —
o Typical features: weighs less than 8 grams per finger

VRgluv (VRgluv)
o Motion tracking: 12 DoF for the fingers on each hand
o Force feedback: applies up to 5 lbs of varying force per finger
o Tactile feedback: —
o Actuation principle: DC motors
o Sensing principle: 5 sensors per finger
o Typical features: simulates stiffness, shape, and mechanical features

Haptic rendering

In addition to high-performance force-feedback gloves, hand-based haptic
rendering algorithms and software are another important engine for the
prosperity of wearable haptics. In hand-based haptic rendering, the user wears a
haptic glove to control a hand avatar that touches and/or grasps virtual objects, and gets
force feedback on the fingertips or even the whole hand surface.

Wearable haptics is an inherent demand of virtual reality: it aims to provide more
intuitive gesture control of a hand avatar and the sensation of multi-point contact
between a hand and objects, which can greatly enhance the immersion of
virtual-reality manipulation. Hand-based haptic rendering will be useful in several
application fields, including surgery, industrial manufacturing, e-business, and
entertainment.
10. Non-speech Auditory and Cross modal Output
Note: Refer to the textbook for the following subtopics.

1. Why use non speech sound in HCI?


2. Some advantages of sound
3. Some problems with sound
4. Non speech sound presentation techniques
5. Auditory icons
6. Design guidelines for auditory icons
7. Earcons
8. Comparing auditory icons and earcons
9. The applications of auditory output
