System - User Interface Design Goals: Interdisciplinary Work
Introduction
User Interfaces Are Products of Interdisciplinary Work - Who is Involved?
Psychologists
Graphic Designers
Technical Writers
Ergonomics and Human Factors Engineers
Anthropologists and Sociologists
1. Life-critical systems
o Air traffic control, nuclear reactors, power utilities, police & fire
dispatch systems
o High costs, reliability and effectiveness are expected
o Lengthy training periods are acceptable provided they yield error-free performance
o Subjective satisfaction is less of an issue because the users are well motivated
o Retention is achieved through frequent use and practice
2. Industrial and commercial uses
o Banking, insurance, order entry, inventory management, reservation,
billing, and point-of-sales systems
o Lower cost may sacrifice reliability
o Training is expensive, so learning must be easy
o Speed and error rates are relative to cost; however, speed is the supreme concern
o Subjective satisfaction is fairly important to limit operator burnout
3. Office, home, and entertainment applications
o Word processing, electronic mail, computer conferencing, and video
game systems
o Choosing functionality is difficult because the population has a wide
range of both novice and expert users
o Competition drives the need for low cost
4. Exploratory, creative, and cooperative systems
o Database, artist toolkits, statistical packages, and scientific modeling
systems
o Benchmarks are hard to describe due to the wide array of tasks
o With these applications, the computer should "vanish" so that the user
can be absorbed in their task domain
Cognitive processes
o short-term memory
o long-term memory and learning
o problem solving
o decision making
o attention and set (scope of concern)
o search and scanning
o time perception
Factors affecting perceptual and motor performance
o arousal and vigilance
o fatigue
o perceptual (mental) load
o knowledge of results
o monotony and boredom
o sensory deprivation
o sleep deprivation
o anxiety and fear
o isolation
o aging
o drugs and alcohol
o circadian rhythms
Personality differences
Elderly Users
Including the elderly is fairly easy; designers should allow for variability within
their applications via settings for sound, color, brightness, font sizes, etc.
High-Level Theories
QWERTY layout
1870
Christopher Latham Sholes
good mechanical design and a clever placement of the letters that slowed
down the users enough that key jamming was infrequent
put frequently used letter pairs far apart, thereby increasing finger travel
distances
Dvorak layout
1920
reduces finger travel distances by at least one order of magnitude
Acceptance has been slow despite the dedicated efforts of some devotees
it takes about 1 week of regular typing to make the switch, but most users
have been unwilling to invest the effort
ABCDE style
26 letters of the alphabet laid out in alphabetical order, so that nontypists will
find it easier to locate the keys
Keys
Function keys
typically simply labeled F1, F2, etc, though some may also have meaningful
labels, such as CUT, COPY, etc.
users must either remember each key's function, identify them from the
screen's display, or use a template over the keys in order to identify them
properly
can reduce number of keystrokes and errors
meaning of each key can change with each application
placement on keyboard can affect efficient use
special-purpose displays often embed function keys in monitor bezel
lights next to keys used to indicate availability of the function, or on/off
status
frequent movement between keyboard home position and mouse or function
keys can be disruptive to use
an alternative is to use closer keys (e.g. ALT or CTRL) plus one letter to
indicate the special function
1. Select:
lightpen
touchscreen
mouse
the hand rests in a comfortable position, buttons on the mouse are easily
pressed, even long motions can be rapid, and positioning can be precise
trackball
joystick
graphics tablet
touchpad
Other variables
cost
durability
space requirements
weight
left- versus right-hand use
likelihood to cause repetitive-strain injury
compatibility with other systems
Some results
Fitts' Law
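Fitts' Law predicts the time to move a pointing device to a target of width W at
distance D: MT = a + b log2(D/W + 1), where a and b are device-dependent constants.
A minimal sketch in Python; the constant values below are purely illustrative, not
measured for any particular device:

    import math

    def fitts_movement_time(distance, width, a=0.2, b=0.1):
        # Shannon formulation: MT = a + b * log2(D / W + 1)
        # a and b are device-dependent constants (illustrative values only)
        return a + b * math.log2(distance / width + 1)

    # Example: a 128 mm move to a 16 mm wide target takes about 0.52 s with
    # these constants; halving the target width raises the index of difficulty
    # and therefore the predicted movement time.
    print(fitts_movement_time(128, 16))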
Speech recognition still does not match the fantasy of science fiction: systems
recognize individual words spoken by a specific person and can work with 90 to
98 percent reliability for 20- to 200-word vocabularies
Speaker-dependent training, in which the user repeats the full vocabulary
once or twice
Speaker-independent systems are beginning to be reliable enough for certain
commercial applications
Speech recognition has been successful in enabling bedridden, paralyzed, or
otherwise disabled people to broaden the horizons of their lives
also useful in applications with at least one of the following conditions:
o speaker's hands are occupied
o mobility is required
o speaker's eyes are occupied
o harsh or cramped conditions preclude use of keyboard
voice-controlled editor versus keyboard editor
o lower task-completion rate
o lower error rate
use can disrupt problem solving
Continuous-speech recognition
receive messages
replay messages
reply to caller
forward messages to other users, delete messages
archive messages
Systems are low cost and reliable.
Speech Generation
Michaelis and Wiggins (1982) suggest that speech generation is "frequently
preferable" under these circumstances:
to confirm actions
offer warning
for visually-impaired users
music used to provide mood context, e.g. in games
can provide unique opportunities for users, e.g. by simulating various musical
instruments
5. Image and Video Displays
The visual display unit (VDU) has become the primary source of feedback to the
user from the computer.
Rapid operation
Reasonable size
Reasonable resolution
Quiet operation
No paper waste
Relatively low cost
Reliability
Highlighting
Graphics and animation
Possible health concerns:
visual fatigue
stress
radiation exposure
Display devices
monochrome displays
Plasma panels
rows of horizontal wires are slightly separated from vertical wires by small
glass-enclosed capsules of neon-based gases
Light-emitting diodes (LEDs)
certain diodes emit light when a voltage is applied
arrays of these small diodes can be assembled to display characters
The technology employed affects these display attributes:
Size
Refresh rate
Capacity to show animation
Resolution
Surface flatness
Surface glare from reflected light
Contrast between characters and background
Brightness
Flicker
Line sharpness
Character formation
Tolerance for vibration
Each display technology has advantages and disadvantages with respect to
these attributes.
Digital video
Printers
Users should expect these features:
Speed
1. Print quality
2. Cost
3. Compactness
4. Quiet operation
5. Use of ordinary paper (fanfolded or single sheet)
6. Character set
7. Variety of typefaces, fonts, and sizes
8. Highlighting techniques (boldface, underscore, and so on)
9. Support for special forms (printed forms, different lengths, and so on)
10. Reliability
dot-matrix printers
print more than 200 characters per second, have multiple fonts, can print
boldface, use variable width and size, and have graphics capabilities
inkjet printers
laser printers
color printers
photographic printers
Designers can become so entranced with their creations that they may fail to
evaluate them adequately.
Experienced designers have attained the wisdom and humility to know that
extensive testing is a necessity.
Expert Reviews
Expert reviews entail a half-day to one-week effort, although a lengthy
training period may sometimes be required to explain the task domain or
operational procedures.
The dangers with expert reviews are that the experts may not have an
adequate understanding of the task domain or user communities.
The emergence of usability testing and laboratories since the early 1980s is
an indicator of the profound shift in attention to user needs.
The remarkable surprise was that usability testing not only sped up many
projects but that it produced dramatic cost savings.
The movement towards usability testing stimulated the construction of
usability laboratories.
A typical modest usability lab would have two 10 by 10 foot areas, one for
the participants to do their work and another, separated by a half-silvered
mirror, for the testers and observers (designers, managers, and customers).
Participants should be chosen to represent the intended user communities,
with attention to background in computing, experience with the task,
motivation, education, and ability with the natural language used in the
interface.
Participation should always be voluntary, and informed consent should be
obtained. Professional practice is to ask all subjects to read and sign a
statement like this one:
o I have freely volunteered to participate in this experiment.
o I have been informed in advance what my task(s) will be and what
procedures will be followed.
o I have been given the opportunity to ask questions, and have had my
questions answered to my satisfaction.
o I am aware that I have the right to withdraw consent and to
discontinue participation at any time, without prejudice to my future
treatment.
o My signature below may be taken as affirmation of all the above
statements; it was given prior to my participation in this study.
Videotaping participants performing tasks is often valuable for later review
and for showing designers or managers the problems that users encounter.
Field tests attempt to put new interfaces to work in realistic environments
for a fixed trial period. Field tests can be made more fruitful if logging
software is used to capture error, command, and help frequencies plus
productivity measures.
Game designers pioneered the can-you-break-this approach to usability
testing by providing energetic teenagers with the challenge of trying to beat
new games. This destructive testing approach, in which the users try to find
fatal flaws in the system, or otherwise to destroy it, has been used in other
projects and should be considered seriously.
For all its success, usability testing does have at least two serious
limitations: it emphasizes first-time usage and has limited coverage of the
interface features.
These and other concerns have led design teams to supplement usability
testing with the varied forms of expert reviews.
Surveys
Training times with display editors are much less than with line editors
Line editors are generally more flexible and powerful
The advantages of WYSIWYG word processors:
o Display a full page of text
o Display of the document in the form that it will appear when the final
printing is done
o Show cursor action
o Control cursor motion through physically obvious and intuitively
natural means
o Use of labeled icons for actions
o Display of the results of an action immediately
o Provide rapid response and display
o Offer easily reversible actions
Integration
Desktop publishing software
Slide-presentation software
Hypermedia environments
Improved macro facilities
Spell checker and thesaurus
Grammar checkers
Video games
HyperCard
Quicken
Beneficial attributes:
The visual nature of computers can challenge the first generation of hackers
An icon is an image, picture, or symbol representing a concept
Icon-specific guidelines
o Represent the object or action in a familiar manner
o Limit the number of different icons
o Make icons stand out from the background
o Consider three-dimensional icons
o Ensure that a selected icon is clearly visible when surrounded by unselected icons
o Design the movement animation
o Add detailed information
o Explore combinations of icons to create new objects or actions
Direct-Manipulation Programming
Home Automation
ON and OFF can have many representations and present problems with choosing
the appropriate one
Time delays
o transmission delays
o operation delays
Incomplete feedback
Feedback from multiple sources
Unanticipated interferences
Virtual Environments
Virtual reality breaks the physical limitations of space and allows users to act as
though they were somewhere else
Augmented reality shows the real world with an overlay of additional information
Situational awareness shows information about the real world that surrounds you
by tracking your movements in a computer model
Visual Display
Head position sensing
Hand-position sensing
Force feedback
Sound input and output
Other sensations
Cooperative and competitive virtual reality
9. Haptic Interface
Immersion, interaction, and imagination are three features of virtual reality (VR)
Existing VR systems provide fairly realistic visual and auditory feedback; however,
they are poor at haptic feedback, by means of which humans perceive the rich haptic
properties of the physical world
The haptic sensation obtained through virtual interaction is far poorer than the
sensation obtained through physical interaction. In physical life, the haptic channel
is used pervasively, for example to perceive the stiffness, roughness, and temperature
of objects in the external world, to manipulate those objects, and to perform motion
or force control tasks such as grasping, touching, or walking. In contrast, in the
virtual world, haptic experiences are poor in both quantity and quality. Most
commercial VR games and movies provide only visual and auditory feedback, and only
a few provide simple haptic feedback such as vibration. With the rapid growth of VR
in areas such as medical simulation and product design, there is an urgent need to
improve the realism of haptic feedback in VR systems, and thus to achieve sensations
comparable to interaction in the physical world.
Considering the human user, we need to study both the perception and manipulation
characteristics of the haptic channel during the bilateral communication between
human and computer. For the perception aspect, the human perceptual system mainly
comprises diverse kinesthetic and cutaneous receptors located in the skin, muscles,
and tendons. For the manipulation/action aspect, we need to consider motor control
parameters, including the degrees of freedom (DoF) of motion or force tasks and the
magnitude and resolution of motion and force signals for diverse manipulation tasks.
Considering the interface device, its functions include sensing and actuation. For
sensing, the device needs to sense/track human manipulation information such as
motion/force signals and transmit this information to control a virtual avatar of
the device in the virtual environment. For actuation, the device receives simulated
force signals from the virtual environment and reproduces these forces through
actuators such as electric motors to exert them on the user's body.
Over the past 30 years, the evolution of computing platforms can be summarized as
three eras: the personal computer, the mobile internet, and virtual reality based on
wearable computers. Accordingly, the paradigm of haptic HCI can be classified into
three stages: desktop haptics, surface haptics, and wearable haptics.
In desktop haptics, the user's hand holds the stylus of the device to control a
virtual tool such as a surgical scalpel or a mechanical screwdriver. The simulated
motion/force dimensions in desktop haptics are six: three translations and three
rotations of the virtual tool.
In surface haptics, the user's fingertip slides along the touchscreen of a mobile
phone using typical gestures such as panning, zooming, and rotating, controlling a
finger avatar to feel the texture and/or shape of virtual objects. The simulated
motion/force dimensions in surface haptics are two, within the planar surface of the
touchscreen.
In wearable haptics, the user's hand wears a haptic glove to control a virtual
hand-shaped avatar with diverse simulated gestures such as grasping, pinching, and
lifting. The motion dimensions are 22, corresponding to the DoF of the human hand,
and the force dimensions change dynamically depending on the number and topology of
contact points between the virtual hand and the manipulated objects.
Desktop haptics
Motivation
To overcome the limitations of the classic HCI paradigm, desktop force feedback
devices, essentially an enhanced mouse in the form of a multi-link robotic arm, have
been created to provide both motion control and force feedback between the user and
the virtual environment. Figure 3 shows a comparison between a computer mouse and a
desktop haptic device.
In the era of the personal computer, the mainstream form of haptic interaction was
the multi-joint force feedback device fixed to a desk or the ground. The interaction
metaphor of the desktop force feedback interface is that users interact with the
virtual environment through a handle, and the virtual avatar is a rigid tool with
6-DoF motion, such as a surgical scalpel or a mechanical screwdriver. The contact
force is transmitted from the desktop haptic device to the user's hand when the
virtual avatar contacts or collides with objects in the virtual environment, so the
user experiences force feedback while manipulating the virtual objects.
Definition
A desktop haptic device is a multi-joint robotic arm with a stylus held in the user's
hand; the device tracks the movement of the stylus and provides force feedback
through it. The device is normally fixed to a table-top or the ground.
To fulfil these functions, three major components are needed. First, position or
force sensors in each joint or at the end-effector of the robotic arm measure the
movement of the stylus driven by the user (or the force the user exerts on the
stylus). Second, actuators provide the feedback forces or torques. Third,
transmission mechanisms transmit torque from the joints to the end-effector.
The three criteria/requirements for a good haptic device: (1) Free space must feel
free; (2) Solid virtual objects must feel stiff; (3) Virtual constraints must not be easily
saturated.
These criteria can be translated into the following design specifications: low
back-drive friction, low inertia, a wide adjustable impedance range, large
force/torque output, high position-sensing resolution, and sufficient workspace for
the simulated task.
Classification
The parallel mechanism offers potential advantages over the serial mechanism: its
multi-branch form provides high overall stiffness and strong payload capacity.
Whereas the errors of each joint accumulate at the end-effector in a serial
mechanism, there is no error accumulation or amplification in a parallel mechanism,
so the error at the end-effector is relatively small. In addition, the driving motors
in a parallel mechanism are usually mounted on the frame, which leads to small
inertia, a small dynamic load, and fast dynamic performance. However, the available
workspace of a parallel mechanism is small and the rotation range of the end-effector
(movable platform) is limited for the same structure size, and solving the forward
kinematics and the inverse driving force/moment is relatively complex and difficult.
A hybrid mechanism combines the advantages of the serial and parallel mechanisms: it
increases system stiffness and reduces end-effector error while keeping the forward
kinematics and inverse driving force/moment solutions simple. However, torque
feedback about the rotation axes of the end-effector is not easy to realize in a
6-DoF hybrid mechanism.
Based on the control principle, haptic devices can be classified into impedance
displays and admittance displays. An impedance display realizes force feedback by
measuring the motion of the operator and applying a feedback force to the operator.
An admittance display realizes force feedback by measuring the active force the
operator applies to the device and controlling the motion of the device.
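The distinction can be made concrete with a minimal sketch, assuming a
one-dimensional virtual wall and hypothetical device read/command functions (a real
device would run such a loop at roughly 1 kHz):

    # Impedance display: measure motion, output force.
    def impedance_step(read_position, command_force, wall_pos=0.0, stiffness=1000.0):
        x = read_position()                 # measured stylus position (m)
        penetration = wall_pos - x          # depth inside the virtual wall
        force = stiffness * penetration if penetration > 0 else 0.0
        command_force(force)                # actuators exert this force on the user

    # Admittance display: measure force, command motion.
    def admittance_step(read_force, command_position, x, dt=0.001, admittance=0.0005):
        f = read_force()                    # force the operator applies to the handle (N)
        x = x + admittance * f * dt         # device moves in proportion to the applied force
        command_position(x)                 # position-controlled actuators track this motion
        return x

    # Stub demo: 2 mm penetration into a 1000 N/m wall yields a 2 N feedback force.
    impedance_step(read_position=lambda: -0.002, command_force=print)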
In recent years, a large number of desktop force feedback interfaces have emerged
with the rapid development of sensors, robotics, and other technologies, such as the
Phantom series from SensAble Inc. and the Omega and Delta series from Force
Dimension Inc.
Haptic rendering
Surface haptics
Motivation
Since 2005, surface haptics has gradually become a research hotspot with the rise of
mobile devices such as mobile phones, tablet PCs, and large-screen multi-user
collaborative touch devices.
Unlike desktop haptic devices, which simulate indirect contact between hands and
objects (i.e., tool-mediated interaction), surface haptics aims to simulate direct
contact between bare fingers and objects; for example, users can touch and feel the
contour or the roughness of an image displayed on the screen of a mobile phone.
According to the actuation principle, tactile feedback devices can be classified
into three categories: devices based on mechanical vibration, devices that change
the surface shape, and devices with variable surface friction coefficients. In this
section, we survey these three approaches with respect to their principles, key
technologies, and representative systems, along with their pros and cons.
Vibrotactile rendering devices stimulate the operator's fingers to present tactile
sensations by controlling tactile actuators such as eccentric rotating-mass (ERM)
actuators, linear resonant actuators (LRAs), voice coils, solenoid actuators, or
piezoelectric actuators. At present, almost all mobile phones and smart watches have
vibration feedback functions. For example, Immersion has provided Samsung with OEM
licensing of its TouchSense technology, TI has developed a series of driver chips
for piezoelectric vibrators, and Apple has used a linear motor to render tactile
feedback on the iPhone 6.
A tactile array is an intuitive way to realize tactile feedback: a two-dimensional
array composed of pins, electrodes, air-pressure chambers, or voice coils stimulates
the operator's skin, forming a spatial force distribution that reflects the tactile
properties of an object's surface.
Distributed actuators are used to generate the physical image of a virtual component
by changing the unevenness of the touch surface, thereby improving the realism of
the operation. Harrison and Hudson developed a tactile feedback device actuated by
pneumatic chambers, which can pump air into custom-designed airbag buttons. As shown
in the figure, positive pressure makes a button convex, negative pressure makes it
concave, and the button remains flat when not inflated. The design can also act as a
touchscreen by using a rear-projection projector. Preliminary experimental results
show that, compared with a smooth touchscreen button, the airbag button improves the
user's speed in discriminating different buttons, and its tactile effect is very
close to that of a physical button.
Friction modulation
Friction modulation devices reproduce tactile information by changing the lateral
force within the horizontal plane, using either the squeeze-film effect or the
electrostatic effect. Compared with devices using an actuator array, the force
output of a friction modulation device is continuous, so it can present fine tactile
effects.
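As an illustration of the electrostatic approach only, here is a much-simplified
sketch: the attraction between finger and screen electrode is modeled as
proportional to the square of the drive voltage (a parallel-plate approximation),
and this extra normal load raises the sliding friction the finger feels. The
constant k, the friction coefficient, and the function name are assumptions for
illustration, not values from any particular device:

    def electrostatic_friction(normal_force, voltage, mu=0.5, k=1e-4):
        # Parallel-plate approximation: electrostatic attraction grows with voltage squared.
        f_electro = k * voltage ** 2
        # The finger feels lateral friction proportional to the total normal load.
        return mu * (normal_force + f_electro)

    # Doubling the drive voltage roughly quadruples the electrostatic term,
    # so the lateral force rises sharply with voltage.
    print(electrostatic_friction(0.5, 100))   # low drive voltage
    print(electrostatic_friction(0.5, 200))   # higher drive voltage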
Tactile rendering
Tactile rendering algorithms obtain the position of the user's fingers and produce
force output to the device. They normally consist of three components: object
modelling, collision detection, and force generation. At present, tactile rendering
algorithms for surface haptic devices are still at an early stage; most systems use
image-based rendering approaches.
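A minimal sketch of such an image-based approach, assuming a grayscale height map
and a hypothetical set_friction_level() call on the device: the local image gradient
under the fingertip is mapped to a friction (or vibration) level, so edges in the
picture feel like ridges.

    import numpy as np

    def tactile_level_at(image, x, y, gain=4.0):
        # Map the local gradient of a grayscale image (2D array, values 0..1)
        # to a friction/vibration level in [0, 1] at finger position (x, y).
        gy, gx = np.gradient(image.astype(float))
        magnitude = np.hypot(gx[y, x], gy[y, x])
        return float(min(1.0, gain * magnitude))

    # Rendering loop sketch (the device API is hypothetical):
    # while touching:
    #     x, y = device.read_finger_position()
    #     device.set_friction_level(tactile_level_at(height_map, x, y))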
Typical applications
Tactile devices have been used for assisting visually impaired people to read text
or images on webpages, for enhancing the efficiency of menu or button manipulation
on mobile touchscreens, and for developing mobile computer games with diverse
tactile sensations.
In the future, touchscreens with tactile feedback may find applications in mobile
phones for improving the efficiency and accuracy of fine manipulations such as
panning, pinching, and spreading gestures. In the automobile industry, tactile
feedback will be beneficial for eyes-free interaction with the touchscreen,
enhancing safety during driving. For effective communication and teamwork in
classrooms or meeting rooms using large wall-mounted screens, tactile feedback will
help with intuitive manipulation of graphical charts and widgets to support
brainstorming discussions.
Wearable haptics
Motivation
In recent years, low-cost HMDs such as the Oculus Rift and HTC Vive signal a booming
era of VR. When a user experiences virtual objects in high-fidelity, graphically
rendered 3D virtual environments, it is a human instinct to touch, grasp, and
manipulate those objects. The development of VR requires a novel type of haptic
feedback that allows users to move in a large workspace, provides force feedback to
the whole hand, and supports diverse gestures using the full DoF of the fingers for
fine manipulation.
The haptic glove is a typical form of wearable haptic device. Its main functions
include multi-DoF whole-hand motion tracking and distributed force and tactile
feedback to the fingertips and the palm. Compared with desktop force feedback
devices such as the Phantom Desktop, haptic gloves allow users to touch and
manipulate remote or virtual objects in an intuitive and direct way via the
dexterous manipulation and sensitive perception capabilities of the hands. A
well-designed glove should provide force and tactile feedback that realistically
simulates touching and manipulating objects at a high update rate, while remaining
lightweight and low cost.
For simplicity, according to the mechanoreceptors in the user's body, current
wearable haptic devices fall into three types: kinesthetic/force feedback devices,
tactile feedback devices, and integrated feedback devices. Each type can be further
classified by different criteria. For example, based on the fixed link, wearable
haptic devices can be classified into dorsal-based, palm-based, and digit-based
designs. Based on actuation type, they can be classified into electric motors,
pneumatic or hydraulic actuators, and novel actuators using functional materials.
Driven by strong application needs from virtual reality, many startup companies have
developed haptic gloves. For the convenience of readers aiming to quickly construct
a virtual reality system with wearable haptic feedback, existing commercial haptic
gloves and other wearable haptic devices are summarized in the table, with photos of
several representative commercial force feedback gloves, including CyberGrasp,
H-glove, Dexmo, Haptx, Plexus, and Vrgluv.
CyberGrasp
Haptic rendering
Wearable haptics is an inherent demand of virtual reality: it aims to provide more
intuitive gesture control of a hand avatar and the sensation of multi-point contact
between the hand and objects, which can greatly enhance the immersion of virtual
reality manipulation. Hand-based haptic rendering will be useful in several
application fields, including surgery, industrial manufacturing, e-business, and
entertainment.
10. Non-speech Auditory and Crossmodal Output
Note: Refer to the textbook for the following subtopics.