Robotics and Autonomous Systems 42 (2003) 143–166

A survey of socially interactive robots


Terrence Fong a,b,∗ , Illah Nourbakhsh a , Kerstin Dautenhahn c
a The Robotics Institute, Carnegie Mellon University, Pittsburgh, PA 15213, USA
b Institut de production et robotique, Ecole Polytechnique Fédérale de Lausanne, CH-1015 Lausanne, Switzerland
c Department of Computer Science, The University of Hertfordshire, College Lane, Hatfield, Hertfordshire AL10 9AB, UK

Abstract
This paper reviews “socially interactive robots”: robots for which social human–robot interaction is important. We begin
by discussing the context for socially interactive robots, emphasizing the relationship to other research fields and the different
forms of “social robots”. We then present a taxonomy of design methods and system components used to build socially
interactive robots. Finally, we describe the impact of these robots on humans and discuss open issues. An expanded version
of this paper, which contains a survey and taxonomy of current applications, is available as a technical report [T. Fong, I.
Nourbakhsh, K. Dautenhahn, A survey of socially interactive robots: concepts, design and applications, Technical Report No.
CMU-RI-TR-02-29, Robotics Institute, Carnegie Mellon University, 2002].
© 2003 Elsevier Science B.V. All rights reserved.
Keywords: Human–robot interaction; Interaction aware robot; Sociable robot; Social robot; Socially interactive robot

∗ Corresponding author. Present address: The Robotics Institute, Carnegie Mellon University, Pittsburgh, PA 15213, USA. E-mail addresses: terry@ri.cmu.edu (T. Fong), illah@ri.cmu.edu (I. Nourbakhsh), k.dautenhahn@herts.ac.uk (K. Dautenhahn).

doi:10.1016/S0921-8890(02)00372-X

1. Introduction

1.1. The history of social robots

From the beginning of biologically inspired robots, researchers have been fascinated by the possibility of interaction between a robot and its environment, and by the possibility of robots interacting with each other. Fig. 1 shows the robotic tortoises built by Walter in the late 1940s [73]. By means of headlamps attached to the robots' fronts and positive phototaxis, the two robots interacted in a seemingly "social" manner, even though there was no explicit communication or mutual recognition.

Fig. 1. Precursors of social robotics: Walter's tortoises, Elmer and Elsie, "dancing" around each other.

As the field of artificial life emerged, researchers began applying principles such as stigmergy (indirect communication between individuals via modifications made to the shared environment) to achieve "collective" or "swarm" robot behavior. Stigmergy was first described by Grassé to explain how social insect societies can collectively produce complex behavior patterns and physical structures, even if each individual appears to work alone [16].

Deneubourg and his collaborators pioneered the first experiments on stigmergy in simulated and physical "ant-like robots" [10,53] in the early 1990s. Since then, numerous researchers have developed robot collectives [88,106] and have used robots as models for studying social insect behavior [87].

Similar principles can be found in multi-robot or distributed robotic systems research [101]. Some of the interaction mechanisms employed are communication [6], interference [68], and aggressive competition [159]. Common to these group-oriented social robots is maximizing benefit (e.g., task performance) through collective action (Figs. 2–4).

Fig. 2. U-Bots sorting objects [106].

Fig. 3. Khepera robots foraging for "food" [87].

Fig. 4. Collective box-pushing [88].

The research described thus far uses principles of self-organization and behavior inspired by social insect societies. Such societies are anonymous, homogeneous groups in which individuals do not matter. This type of "social behavior" has proven to be an attractive model for robotics, particularly because it enables groups of relatively simple robots to perform difficult tasks (e.g., soccer playing).

However, many species (including humans, other mammals, and birds) often form individualized societies. Individualized societies differ from anonymous societies because the individual matters. Although individuals may live in groups, they form relationships and social networks, they create alliances, and they often adhere to societal norms and conventions [38] (Fig. 5).

Fig. 5. Early "individual" social robots: "getting to know each other" (left) [38] and learning by imitation (right) [12,13].

In [44], Dautenhahn and Billard proposed the following definition:

Social robots are embodied agents that are part of a heterogeneous group: a society of robots or humans. They are able to recognize each other and engage in social interactions, they possess histories (perceive and interpret the world in terms of their own experience), and they explicitly communicate with and learn from each other.

Developing such "individual social" robots requires the use of models and techniques different from "group social" collective robots (Fig. 6). In particular, social learning and imitation, gesture and natural language communication, emotion, and recognition of interaction partners are all important factors. Moreover, most research in this area has focused on the application of "benign" social behavior. Thus, social robots are usually designed as assistants, companions, or pets, in addition to the more traditional role of servants.

Fig. 6. Fields of major impact. Note that "collective robots" and "social robots" overlap where individuality plays a lesser role.

1.2. Social robots and social embeddedness: concepts and definitions

Robots in individualized societies exhibit a wide range of social behavior, regardless of whether the society contains other social robots, humans, or both. In [19], Breazeal defines four classes of social robots in terms of: (1) how well the robot can support the social model that is ascribed to it and (2) the complexity of the interaction scenario that can be supported, as follows.

Socially evocative. Robots that rely on the human tendency to anthropomorphize and capitalize on feelings evoked when humans nurture, care for, or are involved with their "creation".

Social interface. Robots that provide a "natural" interface by employing human-like social cues and communication modalities. Social behavior is only modeled at the interface, which usually results in shallow models of social cognition.

Socially receptive. Robots that are socially passive but that can benefit from interaction (e.g., learning skills by imitation). Deeper models of human social competencies are required than with social interface robots.

Sociable. Robots that pro-actively engage with humans in order to satisfy internal social aims (drives, emotions, etc.). These robots require deep models of social cognition.

Complementary to this list we can add the following three classes:

Socially situated. Robots that are surrounded by a social environment that they perceive and react to [48]. Socially situated robots must be able to distinguish between other social agents and various objects in the environment. (Other researchers place different emphasis on what socially situated implies; see, e.g., [97].)

Socially embedded. Robots that are: (a) situated in a social environment and interact with other agents and humans; (b) structurally coupled with their social environment; and (c) at least partially aware of human interactional structures (e.g., turn-taking) [48].

Socially intelligent. Robots that show aspects of human-style social intelligence, based on deep models of human cognition and social competence [38,40].

1.3. Socially interactive robots

For the purposes of this paper, we use the term "socially interactive robots" to describe robots for which social interaction plays a key role. We do this not to introduce another class of social robot, but rather to distinguish these robots from other robots that involve "conventional" human–robot interaction, such as those used in teleoperation scenarios.

In this paper, we focus on peer-to-peer human–robot interaction. Specifically, we describe robots that exhibit the following "human social" characteristics:

• express and/or perceive emotions;
• communicate with high-level dialogue;
• learn/recognize models of other agents;
• establish/maintain social relationships;
• use natural cues (gaze, gestures, etc.);
• exhibit distinctive personality and character;
• may learn/develop social competencies.

Socially interactive robots can be used for a variety of purposes: as research platforms, as toys, as educational tools, or as therapeutic aids.
The common, underlying assumption is that humans prefer to interact with machines in the same way that they interact with other people. A survey and taxonomy of current applications is given in [60].

Socially interactive robots operate as partners, peers or assistants, which means that they need to exhibit a certain degree of adaptability and flexibility to drive the interaction with a wide range of humans. Socially interactive robots can have different shapes and functions, ranging from robots whose sole purpose and only task is to engage people in social interactions (Kismet, Cog, etc.) to robots that are engineered to adhere to social norms in order to fulfill a range of tasks in human-inhabited environments (Pearl, Sage, etc.) [18,117,127,140].

Some socially interactive robots use deep models of human interaction and pro-actively encourage social interaction. Others show their social competence only in reaction to human behavior, relying on humans to attribute mental states and emotions to the robot [39,45,55,125]. Regardless of function, building a socially interactive robot requires considering the human in the loop: as designer, as observer, and as interaction partner.

1.4. Why socially interactive robots?

Socially interactive robots are important for domains in which robots must exhibit peer-to-peer interaction skills, either because such skills are required for solving specific tasks, or because the primary function of the robot is to interact socially with people. A discussion of application domains, design spaces, and desirable social skills for robots is given in [42,43].

One area where social interaction is desirable is that of "robot as persuasive machine" [58], i.e., the robot is used to change the behavior, feelings or attitudes of humans. This is the case when robots mediate human–human interaction, as in autism therapy [162]. Another area is "robot as avatar" [123], in which the robot functions as a representation of, or representative for, the human. For example, if a robot is used for remote communication, it may need to act socially in order to effectively convey information.

In certain scenarios, it may be desirable for a robot to develop its interaction skills over time. For example, a pet robot that accompanies a child through his childhood may need to improve its skills in order to maintain the child's interest. Learned development of social (and other) skills is a primary concern of epigenetic robotics [44,169].

Some researchers design socially interactive robots simply to study embodied models of social behavior. For this use, the challenge is to build robots that have an intrinsic notion of sociality, that develop social skills and bond with people, and that can show empathy and true understanding. At present, such robots remain a distant goal [39,44], the achievement of which will require contributions from other research areas such as artificial life, developmental psychology and sociology [133].

Although socially interactive robots have already been used with success, much work remains to increase their effectiveness. For example, in order for socially interactive robots to be accepted as "natural" interaction partners, they need more sophisticated social skills, such as the ability to recognize social context and convention.

Additionally, socially interactive robots will eventually need to support a wide range of users: different genders, different cultural and social backgrounds, different ages, etc. In many current applications, social robots engage only in short-term interaction (e.g., a museum tour) and can afford to treat all humans in the same manner. But, as soon as a robot becomes part of a person's life, that robot will need to be able to treat him as a distinct individual [40].

In the following, we closely examine the concepts raised in this introductory section. We begin by describing different design methods. Then, we present a taxonomy of system components, focusing on the design issues unique to socially interactive robots. We conclude by discussing open issues and core challenges.

2. Methodology

2.1. Design approaches

Humans are experts in social interaction. Thus, if technology adheres to human social expectations, people will find the interaction enjoyable, feeling empowered and competent [130]. Many researchers, therefore, explore the design space of anthropomorphic (or zoomorphic) robots, trying to endow their
creations with characteristics of intentional agents. For this reason, more and more robots are being equipped with faces, speech recognition, lip-reading skills, and other features and capacities that make robot–human interaction "human-like" or at least "creature-like" [41,48].

From a design perspective, we can classify how socially interactive robots are built in two primary ways. With the first approach, "biologically inspired", designers try to create robots that internally simulate, or mimic, the social intelligence found in living creatures. With the second approach, "functionally designed", the goal is to construct a robot that appears outwardly to be socially intelligent, even if the internal design does not have a basis in science.

Robots have limited perceptual, cognitive, and behavioral abilities compared to humans. Thus, for the foreseeable future, there will continue to be a significant imbalance in social sophistication between human and robot [20]. As with expert systems, however, it is possible that robots may become highly sophisticated in restricted areas of socialization, e.g., infant-caretaker relations.

Finally, differences in design methodology mean that the evaluation and success criteria are almost always different for different robots. Thus, it is hard to compare socially interactive robots outside of their target environment and use.

2.1.1. Biologically inspired

With the "biologically inspired" approach, designers try to create robots that internally simulate, or mimic, the social behavior or intelligence found in living creatures. Biologically inspired designs are based on theories drawn from natural and social sciences, including anthropology, cognitive science, developmental psychology, ethology, sociology, structure of interaction, and theory of mind. Generally speaking, these theories are used to guide the design of robot cognitive, behavioral, motivational (drives and emotions), motor and perceptual systems.

Two primary arguments are made for drawing inspiration from biological systems. First, numerous researchers contend that nature is the best model for "life-like" activity. The hypothesis is that in order for a robot to be understandable by humans, it must have a naturalistic embodiment, it must interact with its environment in the same way living creatures do, and it must perceive the same things that humans find to be salient and relevant [169].

The second rationale for biological inspiration is that it allows us to directly examine, test and refine those scientific theories upon which the design is based [1]. This is particularly true with humanoid robots. Cog, for example, is a general-purpose humanoid platform intended for exploring theories and models of intelligent behavior and learning [140].

Some of the theories commonly used in biologically inspired design are as follows.

Ethology. This refers to the observational study of animals in their natural setting [95]. Ethology can serve as a basis for design because it describes the types of activity (comfort-seeking, play, etc.) a robot needs to exhibit in order to appear life-like [4]. Ethology is also useful for addressing a range of behavioral issues such as concurrency, motivation, and instinct.

Structure of interaction. Analysis of interactional structure (such as instruction, cooperation, etc.) can help focus the design of perception and cognition systems by identifying key interaction patterns [162]. Dautenhahn, Ogden and Quick use explicit representations of interactional structure to design "interaction aware" robots [48]. Dialogue models, such as turn-taking in conversation, can also be used in design, as in [104].

Theory of mind. Theory of mind refers to those social skills that allow humans to correctly attribute beliefs, goals, perceptions, feelings, and desires to themselves and others [163]. One of the critical precursors to these skills is joint (or shared) attention: the ability to selectively attend to an object of mutual interest [7]. Joint attention can aid design by providing guidelines for recognizing and producing social behaviors such as gaze direction, pointing gestures, etc. [23,140].

Developmental psychology. Developmental psychology has been cited as an effective mechanism for creating robots that engage in natural social exchanges. As an example, the design of Kismet's "synthetic nervous system", particularly the perceptual and behavioral aspects, is heavily inspired by the social development of human infants [18]. Additionally, theories of child cognitive development, such as Vygotsky's "child in society" [92], can offer a framework for constructing robot architecture and social interaction design [44].
2.1.2. Functionally designed

With the "functionally designed" approach, the objective is to design a robot that outwardly appears to be socially intelligent, even if the internal design does not have a basis in science or nature. This approach assumes that if we want to create the impression of an artificial social agent driven by beliefs and desires, we do not necessarily need to understand how the mind really works. Instead, it is sufficient to describe the mechanisms (sensations, traits, folk-psychology, etc.) by which people in everyday life understand socially intelligent creatures [125].

In contrast to their biologically inspired counterparts, functionally designed robots generally have constrained operational and performance objectives. Consequently, these "engineered" robots may need only to generate certain effects and experiences with the user, rather than having to withstand deep scrutiny for "life-like" capabilities.

Some motivations for "functional design" are:

• The robot may only need to be superficially socially competent. This is particularly true when only short-term interaction or limited quality of interaction is required.
• The robot may have limited embodiment or capability for interaction, or may be constrained by the environment.
• Even limited social expression can help improve the affordances and usability of a robot. In some applications, recorded or scripted speech may be sufficient for human–robot dialogue.
• Artificial designs can provide compelling interaction. Many video games and electronic toys fully engage and occupy attention, even if the artifacts do not have real-world counterparts.

The three techniques most often used in functional design are as follows.

Human–computer interaction (HCI) design. Robots are increasingly being developed using HCI techniques, including cognitive modeling, contextual inquiry, heuristic evaluation, and empirical user testing [115]. Scheeff et al. [142], for example, describe robot development based on heuristic design.

Systems engineering. Systems engineering involves the development of functional requirements to facilitate development and operation [135]. A basic characteristic of systems engineering is that it emphasizes the design of critical-path elements. Pineau et al. [127], for example, describe mobile robots that assist the elderly. Because these robots operate in a highly structured domain, their design centers on specific task behaviors (e.g., navigation).

Iterative design. Iterative (or sequential) design is the process of revising a design through a series of test and redesign cycles [147]. It is typically used to address design failures or to make improvements based on information from evaluation or use. Willeke et al. [164], for example, describe a series of museum robots, each of which was designed based on lessons learned from preceding generations.

2.2. Design issues

All robot systems, whether socially interactive or not, must solve a number of common design problems. These include cognition (planning, decision making), perception (navigation, environment sensing), action (mobility, manipulation), human–robot interaction (user interface, input devices, feedback display) and architecture (control, electromechanical, system). Socially interactive robots, however, must also address those issues imposed by social interaction [18,40].

Human-oriented perception. A socially interactive robot must proficiently perceive and interpret human activity and behavior. This includes detecting and recognizing gestures, monitoring and classifying activity, discerning intent and social cues, and measuring the human's feedback.

Natural human–robot interaction. Humans and robots should communicate as peers who know each other well, such as musicians playing a duet [145]. To achieve this, the robot must manifest believable behavior: it must establish appropriate social expectations, it must regulate social interaction (using dialogue and action), and it must follow social convention and norms.

Readable social cues. A socially interactive robot must send signals to the human in order to: (1) provide feedback of its internal state; (2) allow the human to interact in a facile, transparent manner. Channels for emotional expression include facial expression, body and pointer gesturing, and vocalization.

Real-time performance. Socially interactive robots must operate at human interaction rates. Thus, a robot
needs to simultaneously exhibit competent behavior, convey attention and intentionality, and handle social interaction.

In the following sections, we review design issues that are unique to socially interactive robots. Although we do not discuss every aspect of design, we feel that addressing each of the following is critical to building an effective social robot.

2.3. Embodiment

We define embodiment as "that which establishes a basis for structural coupling by creating the potential for mutual perturbation between system and environment" [48]. Thus, embodiment is grounded in the relationship between a system and its environment. The more a robot can perturb an environment, and be perturbed by it, the more it is embodied. This also means that social robots do not necessarily need a physical body. For example, conversational agents [33] might be embodied to the same extent as robots with limited actuation.

An important benefit of this "relational definition" is that it provides an opportunity to quantify embodiment. For example, one might measure embodiment in terms of the complexity of the relationship between robot and environment over all possible interactions (i.e., all perturbatory channels).
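To make the idea concrete, one crude way such a measure could be operationalized (a hypothetical sketch, not a metric proposed by the authors) is to enumerate a robot's perturbatory channels and weight each by an assumed bandwidth. The channel lists and weights below are invented to mirror the Aibo/Khepera comparison that follows.

```python
# Illustrative sketch only: a crude embodiment "score" that counts
# perturbatory channels (ways robot and environment can perturb each
# other), each weighted by an assumed bandwidth in arbitrary units.

def embodiment_score(actuation_channels, sensing_channels):
    """Sum assumed bandwidth over all perturbatory channels."""
    return (sum(bw for _, bw in actuation_channels) +
            sum(bw for _, bw in sensing_channels))

aibo = embodiment_score(
    actuation_channels=[("joints", 20.0)],          # ~20 actuated joints
    sensing_channels=[("touch", 2.0), ("sound", 4.0),
                      ("vision", 10.0), ("proprioception", 5.0)],
)
khepera = embodiment_score(
    actuation_channels=[("wheels", 2.0)],           # two wheel motors
    sensing_channels=[("ir_proximity", 1.0)],
)
assert aibo > khepera  # Aibo is more strongly embodied by this measure
```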
Fig. 7. Sony Aibo ERS-110 (top) and K-Team Khepera (bottom).

Some robots are clearly more embodied than others [48]. Consider the difference between Aibo (Sony) and Khepera (K-Team), as shown in Fig. 7. Aibo has approximately 20 actuators (joints across the mouth, head, ears, tail, and legs) and a variety of sensors (touch, sound, vision and proprioception). In contrast, Khepera has two actuators (independent wheel control) and an array of infrared proximity sensors. Because Aibo has more perturbatory channels and bandwidth at its disposal than does Khepera, it can be considered to be more strongly embodied than Khepera.

2.3.1. Morphology

The form and structure of a robot is important because it helps establish social expectations. Physical appearance biases interaction. A robot that resembles a dog will be treated differently (at least initially) than one which is anthropomorphic. Moreover, the relative familiarity (or strangeness) of a robot's morphology can have profound effects on its accessibility, desirability, and expressiveness.

The choice of a given form may also constrain the human's ability to interact with the robot. For example, Kismet has a highly expressive face. But because it is designed as a head, Kismet is unable to interact when touch (e.g., manipulation) or displacement (self-movement) is required.

To date, most research in human–robot interaction has not explicitly focused on design, at least not in the traditional sense of industrial design. Although knowledge from other areas of design (including product, interaction and stylized design) can inform robot construction, much research remains to be performed.

2.3.2. Design considerations

A robot's morphology must match its intended function [54]. If a robot is designed to perform tasks for the human, then its form must convey an amount of "product-ness" so that the user will feel comfortable
using the robot. Similarly, if peer interaction is important, the robot must project an amount of "humanness" so that the user will feel comfortable in socially engaging the robot.

At the same time, however, a robot's design needs to reflect an amount of "robot-ness". This is needed so that the user does not develop detrimentally false expectations of the robot's capabilities [55].

Finally, if a robot needs to portray a living creature, it is critical that an appropriate degree of familiarity be maintained. Masahiro Mori contends that the progression from a non-realistic to a realistic portrayal of a living thing is non-linear. In particular, there is an "uncanny valley" (see Fig. 8) as similarity becomes almost, but not quite, perfect. At this point, the subtle imperfections of the recreation become highly disturbing, or even repulsive [131]. Consequently, caricatured representations may be more useful, or effective, than more complex, "realistic" representations.

Fig. 8. Mori's "uncanny valley" (from DiSalvo et al. [54]).

We classify social robots as being embodied in four broad categories: anthropomorphic, zoomorphic, caricatured, and functional.

2.3.3. Anthropomorphic

Anthropomorphism, from the Greek "anthropos" for man and "morphe" for form/structure, is the tendency to attribute human characteristics to objects with a view to helping rationalize their actions [55]. Anthropomorphic paradigms have widely been used to augment the functional and behavioral characteristics of social robots.

Having a naturalistic embodiment is often cited as necessary for meaningful social interaction [18,82,140]. In part, the argument is that if a robot is to interact with humans as humans do (through gaze, gesture, etc.), then it must be structurally and functionally similar to a human. Moreover, if a robot is to learn from humans (e.g., through imitation), then it should be capable of behaving similarly to humans [11].

The role of anthropomorphism is to function as a mechanism through which social interaction can be facilitated. Thus, the ideal use of anthropomorphism is to present an appropriate balance of illusion (to lead the user to believe that the robot is sophisticated in areas where the user will not encounter its failings) and functionality (to provide capabilities necessary for supporting human-like interaction) [54,79].

2.3.4. Zoomorphic

An increasing number of entertainment, personal, and toy robots have been designed to imitate living creatures. For these robots, a zoomorphic embodiment is important for establishing human–creature relationships (e.g., owner-pet). The most common designs are inspired by household animals, such as dogs (Sony Aibo and RoboScience RoboDog) and cats (Omron), with the objective of creating robotic "companions".

Avoiding the "uncanny valley" may be easier with zoomorphic design because human–creature relationships are simpler than human–human relationships and because our expectations of what constitutes "realistic" animal morphology tend to be lower.

2.3.5. Caricatured

Animators have long shown that a character does not have to appear realistic in order to be believable [156]. Moreover, caricature can be used to create desired interaction biases (e.g., implied abilities) and to focus attention on, or distract attention from, specific robot features.

Scheeff et al. [142] discuss how techniques from traditional animation can be used in social robot design. Schulte et al. [143] describe how a caricatured human face can provide a "focal point" for attention. Similarly, Severinson-Eklund et al. [144] describe the use of a small mechanical character, CERO, as a robot "representative" (see Fig. 9).

2.3.6. Functional

Some researchers argue that a robot's embodiment should first, and foremost, reflect the tasks it must perform. The choice and design of physical features is thus guided purely by operational objectives. This type
Fig. 9. CERO (KTH).

of embodiment appears most often with functionally designed robots, especially service robots.

Health care robots, for example, may be required to assist elderly or disabled patients in moving about. Thus, features such as handle bars and cargo space may need to be part of the design [127].

The design of toy robots also tends to reflect functional requirements. Toys must minimize production cost, be appealing to children, and be capable of facing the wide variety of situations that can be experienced during play [107].

2.4. Emotion

Emotions play a significant role in human behavior, communication and interaction. Emotions are complex phenomena and are often tightly coupled to social context [5]. Moreover, much of emotion is physiological and depends on embodiment [122,126].

Three primary theories are used to describe emotions. The first approach describes emotions in terms of discrete categories (e.g., happiness). A good review of "basic emotions" is [57]. The second approach characterizes emotions using continuous scales or basis dimensions, such as arousal and valence [137]. The third approach, componential theory, acknowledges the importance of both categories and dimensions [128,147].

In recent years, emotion has increasingly been used in interface and robot design, primarily because of the recognition that people tend to treat computers as they treat other people [31,33,121,130]. Moreover, many studies have been performed to integrate emotions into products including electronic games, toys, and software agents [8].

2.4.1. Artificial emotions

Artificial emotions are used in social robots for several reasons. The primary purpose, of course, is that emotion helps facilitate believable human–robot interaction [30,119]. Artificial emotion can also provide feedback to the user, such as indicating the robot's internal state, goals and (to an extent) intentions [8,17,83]. Lastly, artificial emotions can act as a control mechanism, driving behavior and reflecting how the robot is affected by, and adapts to, different factors over time [29,108,160].

Numerous architectures have been proposed for artificial emotions [18,29,74,132,160]. Some closely follow emotional theory, particularly in terms of how emotions are defined and generated. Arkin et al. [4] discuss how ethological and componential emotion models are incorporated into Sony's entertainment robots. Cañamero and Fredslund [30] describe an affective activation model that regulates emotions through stimulation levels.

Other architectures are only loosely inspired by emotional theory and tend to be designed in an ad hoc manner. Nourbakhsh et al. [117] detail a fuzzy state machine based system, which was developed through a series of formative evaluation and design cycles. Schulte et al. [143] summarize the design of a simple state machine that produces four basic "moods".

In terms of expression, some robots are only capable of displaying emotion in a limited way, such as with individually actuated lips or flashing lights (usually LEDs). Other robots have many active degrees of freedom and can thus provide richer movement and gestures. Kismet, for example, has controllable eyebrows, ears, eyeballs, eyelids, a mouth with two lips and a pan/tilt neck [18].

2.4.2. Emotions as control mechanism
Emotion can be used to determine control precedence between different behavior modes, to coordinate planning, and to trigger learning and adaptation, particularly when the environment is unknown or difficult to predict. One approach is to use computational models of emotions that mimic animal survival instincts, such as escape from danger, look for food, etc. [18,29,108,160].

Several researchers have investigated the use of emotion in human–robot interaction. Suzuki et al. [153] describe an architecture in which interaction leads to changes in the robot's emotional state and modifications in its actions. Breazeal [18] discusses how emotions influence the operation of Kismet's motivational system and how this affects its interaction with humans. Nourbakhsh et al. [117] discuss how mood changes can trigger different behavior in Sage, a museum tour robot.
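The state-machine architectures mentioned above lend themselves to a compact illustration. The sketch below is generic — it is neither Nourbakhsh et al.'s fuzzy state machine nor Schulte et al.'s four-mood design, and all event names and parameter values are invented: interaction events drive mood transitions, and the current mood modulates behavior, much as Sage's mood changes trigger different behavior.

```python
# Generic sketch of emotion-as-control: events drive mood transitions,
# and the current mood selects behavior parameters. All names/values
# here are hypothetical.

TRANSITIONS = {
    ("happy",      "path_blocked"):   "frustrated",
    ("frustrated", "path_cleared"):   "neutral",
    ("neutral",    "person_engages"): "happy",
}

BEHAVIOR = {  # mood -> (speech pitch multiplier, driving speed in m/s)
    "happy":      (1.0, 0.8),
    "neutral":    (1.0, 0.6),
    "frustrated": (1.3, 0.3),   # e.g., raise voice pitch, slow down
}

class MoodMachine:
    def __init__(self, mood="neutral"):
        self.mood = mood

    def on_event(self, event):
        # Stay in the current mood if no transition is defined.
        self.mood = TRANSITIONS.get((self.mood, event), self.mood)
        return BEHAVIOR[self.mood]

robot = MoodMachine()
print(robot.on_event("person_engages"))  # -> (1.0, 0.8): now "happy"
print(robot.on_event("path_blocked"))    # -> (1.3, 0.3): now "frustrated"
```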
2.4.3. Speech

Speech is a highly effective method for communicating emotion. The primary parameters that govern the emotional content of speech are loudness, pitch (level, variation, range), and prosody. Murray and Arnott [111] contend that the vocal effects caused by particular emotions are consistent between speakers, with only minor differences.

The quality of synthesized speech is significantly poorer than that of synthesized facial expression and body language [9]. In spite of this shortcoming, it has proved possible to generate emotional speech. Cahn [28] describes a system for mapping emotional quality (e.g., sorrow) onto speech synthesizer settings, including articulation, pitch, and voice quality.

To date, emotional speech has been used in few robot systems. Breazeal describes the design of Kismet's vocalization system. Expressive utterances (used to convey the affective state of the robot without grammatical structure) are generated by assembling strings of phonemes with pitch accents [18]. Nourbakhsh et al. [117] describe how emotions influence synthesized speech in a tour guide robot. When the robot is frustrated, for example, voice level and pitch are increased.
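The shape of such a mapping, in the spirit of Cahn's approach, can be sketched as a table from emotion to synthesizer settings. The parameter names and values below are illustrative assumptions, not Cahn's actual settings:

```python
# Hypothetical emotion-to-synthesizer mapping; parameter names and
# values are illustrative only (multipliers relative to a neutral voice).

EMOTION_TO_VOICE = {
    "sorrow":      dict(gain=0.7, pitch=0.85, pitch_range=0.6, rate=0.8),
    "joy":         dict(gain=1.1, pitch=1.15, pitch_range=1.4, rate=1.1),
    "frustration": dict(gain=1.2, pitch=1.10, pitch_range=1.2, rate=1.0),
    "neutral":     dict(gain=1.0, pitch=1.00, pitch_range=1.0, rate=1.0),
}

def synthesizer_settings(text: str, emotion: str) -> dict:
    """Bundle text with voice parameters for a hypothetical TTS engine."""
    voice = EMOTION_TO_VOICE.get(emotion, EMOTION_TO_VOICE["neutral"])
    return {"text": text, **voice}

print(synthesizer_settings("The exit is to your left.", "joy"))
```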
2.4.4. Facial expression

The expressive behavior of robotic faces is generally not life-like. This reflects limitations of mechatronic design and control. For example, transitions between expressions tend to be abrupt, occurring suddenly and rapidly, which rarely occurs in nature. The primary facial components used are the mouth (lips), cheeks, eyes, eyebrows and forehead. Most robot faces express emotion in accordance with Ekman and Friesen's FACS (Facial Action Coding System) [56,146].

Fig. 10. Actuated faces: Sparky (left) and Feelix (right).

Two of the simplest faces (Fig. 10) appear on Sparky [142] and Feelix [30]. Sparky's face has 4-DOF (eyebrows, eyelids, and lips) which portray a set of discrete, basic emotions. Feelix is a robot built using the LEGO Mindstorms™ robotic construction kit. Feelix's face also has 4-DOF (two eyebrows, two lips), designed to display six facial expressions (anger, sadness, fear, happiness, surprise, neutral) plus a number of blends.

In contrast to Sparky and Feelix, Kismet's face has fifteen actuators, many of which often work together to display specific emotions (see Fig. 11). Kismet's facial expressions are generated using an interpolation-based technique over a three-dimensional, componential "affect space" (arousal, valence, and stance) [18].

Perhaps the most realistic robot faces are those designed at the Science University of Tokyo [81]. These faces (Fig. 12) are explicitly designed to be human-like and incorporate hair, teeth, and a covering silicone skin layer. Numerous control points actuated beneath the "skin" produce a wide range of facial movements and human expression.

Instead of using mechanical actuation, another approach to facial expression is to rely on computer graphics and animation techniques [99].
Fig. 11. Various emotions displayed by Kismet.

Fig. 12. Saya face robots (Science University of Tokyo).

Vikia, for example, has a 3D rendered face of a woman based on Delsarte's code of facial expressions [26]. Because Vikia's face (see Fig. 13) is graphically rendered, many degrees of freedom are available for generating expressions.

Fig. 13. Vikia has a computer generated face.
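The interpolation over an (arousal, valence, stance) affect space used by Kismet, mentioned above, can be illustrated schematically. In this simplified sketch (not Kismet's actual implementation), each basis expression is anchored at a point in affect space, and the commanded facial pose is a distance-weighted blend; all anchor points and pose vectors are invented:

```python
import numpy as np

# Basis facial postures anchored in a 3D affect space
# (arousal, valence, stance). Poses are hypothetical actuator vectors
# (e.g., brow, lip, ear positions normalized to [0, 1]).
BASIS = {
    "happy":    (np.array([ 0.5,  0.8,  0.5]), np.array([0.9, 0.8, 0.7])),
    "sad":      (np.array([-0.5, -0.8,  0.0]), np.array([0.1, 0.2, 0.3])),
    "surprise": (np.array([ 0.9,  0.1,  0.2]), np.array([0.8, 0.9, 0.9])),
}

def facial_pose(affect, eps=1e-6):
    """Inverse-distance-weighted blend of basis poses at point `affect`."""
    affect = np.asarray(affect, dtype=float)
    weights, poses = [], []
    for anchor, pose in BASIS.values():
        weights.append(1.0 / (np.linalg.norm(affect - anchor) + eps))
        poses.append(pose)
    w = np.array(weights) / sum(weights)
    return w @ np.array(poses)   # one blended actuator vector

print(facial_pose((0.4, 0.6, 0.4)))  # mostly "happy", softened by others
```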
2.4.5. Body language

In addition to facial expressions, non-verbal communication is often conveyed through gestures and body movement [9]. Over 90% of gestures occur during speech and provide redundant information [86,105]. To date, most studies on emotional body movement have been qualitative in nature. Frijda [62],
for example, described body movements for a number of basic emotions (Table 1). Recently, however, some work has begun to focus on implementation issues, such as in [35].

Table 1
Emotional body movements (adapted from Frijda [62])

Emotion      Body movement
Anger        Fierce glance; clenched fists; brisk, short motions
Fear         Bent head, trunk and knees; hunched shoulders; forced eye closure or staring
Happiness    Quick, random movements; smiling
Sadness      Depressed mouth corners; weeping
Surprise     Wide eyes; held breath; open mouth

Nakata et al. [113] state that humans have a strong tendency to be cued by motion. In particular, they refer to analyses of dance showing that humans are emotionally affected by body movement. Breazeal and Fitzpatrick [21] contend humans perceive all motor actions to be semantically rich, whether or not they were intended to be. For example, gaze and body direction are generally interpreted as indicating locus of attention.

Mizoguchi et al. [110] discuss the use of gestures and movements, similar to ballet poses, to show emotion through movement. Scheeff et al. [142] describe the design of smooth, natural motions for Sparky. Lim et al. [93] describe how walking motions (foot dragging, body bending, etc.) can be used to convey emotions.

2.5. Dialogue

2.5.1. What is dialogue?

Dialogue is a joint process of communication. It involves sharing of information (data, symbols, context) between two or more parties [90]. Humans employ a variety of para-linguistic social cues (facial displays, gestures, etc.) to regulate the flow of dialogue [32]. Such cues have also proven to be useful for controlling human–robot dialogue [19].

Dialogue, regardless of form, is meaningful only if it is grounded, i.e., when the symbols used by each party describe common concepts. If the symbols differ, information exchange or learning must take place before communication can proceed. Although human–robot communication can occur in many forms, we consider there to be three primary types of dialogue: low-level (pre-linguistic), non-verbal, and natural language.

Low-level. Billard and Dautenhahn [12–14] describe a number of experiments in which an autonomous mobile robot was taught a synthetic proto-language. Language learning results from multiple spatio-temporal associations across the robot's sensor–actuator state space.

Steels has examined the hypothesis that communication is bootstrapped in a social learning process and that meaning is initially context-dependent [150,151]. In his experiments, a robot dog learns simple words describing the presence of objects (ball, red, etc.), its behavior (walk, sit) and its body parts (leg, head).

Non-verbal. There are many non-verbal forms of language, including body positioning, gesturing, and physical action. Since most robots have fairly rudimentary capability to recognize and produce speech, non-verbal dialogue is a useful alternative. Nicolescu and Mataric [116], for example, describe a robot that asks humans for help, communicating its needs and intentions through its actions.

Social conventions, or norms, can also be expressed through non-verbal dialogue. Proxemics, the social use of space, is one such convention [70]. Proxemic norms include knowing how to stand in line, how to pass in hallways, etc. Respecting these spatial conventions may involve consideration of numerous factors (administrative, cultural, etc.) [114].
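As a concrete example of encoding a proxemic norm, a robot could classify its distance to a person into the interaction zones commonly attributed to Hall. The distances below are frequently cited values for North American adults and are culturally dependent; the behavior rule is hypothetical:

```python
# Illustrative sketch: classify human-robot distance into proxemic
# zones. Boundaries are commonly cited values and should be treated
# as tunable, culture-dependent parameters.

PROXEMIC_ZONES = [            # (upper bound in meters, zone name)
    (0.45,         "intimate"),
    (1.20,         "personal"),
    (3.60,         "social"),
    (float("inf"), "public"),
]

def proxemic_zone(distance_m: float) -> str:
    for upper_bound, zone in PROXEMIC_ZONES:
        if distance_m <= upper_bound:
            return zone

# A robot might use the zone to moderate its approach behavior, e.g.,
# slow down and announce itself before entering "personal" space.
print(proxemic_zone(0.8))     # -> "personal"
```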
Natural language. Natural language dialogue is determined by factors ranging from the physical and perceptual capabilities of the participants, to the social and cultural features of the situation. To what extent human–robot interfaces should be based on natural language clearly remains an open issue [144].

Severinson-Eklund et al. [144] discuss how explicit feedback is needed for users to interact with service robots. Their approach is to provide designed natural language. Fong et al. [59,61] describe how high-level dialogue can enable a human to provide assistance to a robot. In their system, dialogue is limited to mobility issues (navigation, obstacle avoidance, etc.) with an emphasis on query-response speech acts.

2.6. Personality

2.6.1. What is personality?

In psychological terms, personality is the set of distinctive qualities that distinguish individuals. Since the late 1980s, the most widely accepted taxonomy of
personality traits has been the "Big Five Inventory" [76]. The "Big Five", which was developed through lexical analysis, describes personality in terms of five traits:

• extroversion (sociable, outgoing, confident);
• agreeableness (friendly, nice, pleasant);
• conscientiousness (helpful, hard-working);
• neuroticism (emotional stability, adjustment);
• openness (intelligent, imaginative, flexible).

Common alternatives to the "Big Five" are questionnaire-based scales such as the Myers–Briggs Type Indicator (MBTI) [112].
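As a simple illustration (not drawn from any of the surveyed systems), a designed robot personality could be represented as a vector of Big Five trait scores that parameterizes interaction behavior. The trait names follow the list above; the behavior mappings and values are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class BigFivePersonality:
    """Trait scores in [0, 1]; the default values are hypothetical."""
    extroversion: float = 0.8
    agreeableness: float = 0.7
    conscientiousness: float = 0.9
    neuroticism: float = 0.2
    openness: float = 0.6

    def speech_rate(self, base_wpm: float = 150.0) -> float:
        # Assumption: a more extroverted robot speaks somewhat faster.
        return base_wpm * (0.9 + 0.2 * self.extroversion)

    def initiates_conversation(self) -> bool:
        # Crude threshold: highly extroverted personalities are proactive.
        return self.extroversion > 0.5

guide = BigFivePersonality()
print(guide.speech_rate(), guide.initiates_conversation())
```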
2.6.2. Personality in social robots

There is reason to believe that if a robot had a compelling personality, people would be more willing to interact with it and to establish a relationship with it [18,79]. In particular, personality may provide a useful affordance, giving users a way to model and understand robot behavior [144].

In designing robot personality, there are numerous questions that need to be addressed. Should the robot have a designed or learned personality? Should it mimic a specific human personality, exhibiting specific traits? Is it beneficial to encourage a specific type of interaction?

There are five common personality types used in social robots.

Tool-like. Used for robots that operate as "smart appliances". Because these robots perform service tasks on command, they exhibit traits usually associated with tools (dependability, reliability, etc.).

Pet or creature. These toy and entertainment robots exhibit characteristics that are associated with domesticated animals (cats, dogs, etc.).

Cartoon. These robots exhibit caricatured personalities, such as seen in animation. Exaggerated traits (e.g., shyness) are easy to portray and can be useful for facilitating interaction with non-specialists.

Artificial being. Inspired by literature and film, primarily science fiction, these robots tend to display artificial (e.g., mechanistic) characteristics.

Human-like. Robots are often designed to exhibit human personality traits. The extent to which a robot must have (or appear to have) human personality depends on its use.

Robot personality is conveyed in numerous ways. Emotions are often used to portray stereotype personalities: timid, friendly, etc. [168]. A robot's embodiment (size, shape, color), its motion, and the manner in which it communicates (e.g., natural language) also contribute strongly [144]. Finally, the tasks a robot performs may also influence the way its personality is perceived.

2.7. Human-oriented perception

To interact meaningfully with humans, social robots must be able to perceive the world as humans do, i.e., sensing and interpreting the same phenomena that humans observe. This means that, in addition to the perception required for conventional functions (localization, navigation, obstacle avoidance), social robots must possess perceptual abilities similar to humans.

In particular, social robots need perception that is human-oriented: optimized for interacting with humans and on a human level. They need to be able to track human features (faces, bodies, hands). They also need to be capable of interpreting human speech, including affective speech, discrete commands, and natural language. Finally, they often must have the capacity to recognize facial expressions, gestures, and human activity.

Similarity of perception requires more than similarity of sensors. It is also important that humans and robots find the same types of stimuli salient [23]. Moreover, robot perception may need to mimic the way human perception works. For example, the human ocular-motor system is based on foveate vision, uses saccadic eye movements, and exhibits specific visual behaviors (e.g., glancing). Thus, to be readily understood, a robot may need to have similar visual motor control [18,21,25].

2.7.1. Types of perception

Most human-oriented perception is based on passive sensing, typically computer vision and spoken language recognition. Passive sensors, such as CCD cameras, are cheap, require minimal infrastructure, and can be used for a wide range of perception tasks [2,36,66,118].

Active sensors (ladar, ultrasonic sonar, etc.), though perhaps less flexible than their passive counterparts, have also received attention. In particular, active
sensors are often used for detecting and localizing humans in dynamic settings.

2.7.2. People tracking

For human–robot interaction, the challenge is to find efficient methods for people tracking in the presence of occlusions, variable illumination, moving cameras, and varying backgrounds. A broad survey of human tracking is presented in [66]. Specific robotics applications can be reviewed in [26,114,127,154].

2.7.3. Speech recognition

Speech recognition is generally a two-step process: signal processing (to transform an audio signal into feature vectors) followed by graph search (to match utterances to a vocabulary). Most current systems use Hidden Markov models to stochastically determine the most probable match. An excellent introduction to speech recognition is [129].

Human speech contains three types of information: who the speaker is, what the speaker said, and how the speaker said it [18]. Depending on what information the robot requires, it may need to perform speaker tracking, dialogue management, or emotion analysis. Recent applications of speech in robotics include [18,91,103,120,148].
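The two-step pipeline can be made concrete with a minimal sketch. Here the acoustic features are assumed to be already quantized into discrete symbols, and a Viterbi search scores each vocabulary word's HMM against the observation sequence (illustrative only, not any particular recognizer):

```python
import numpy as np

def viterbi_log_likelihood(obs, start_p, trans_p, emit_p):
    """Score an observation sequence against one word's HMM.

    obs: sequence of discrete feature symbols (ints)
    start_p: (n_states,) initial log-probabilities
    trans_p: (n_states, n_states) transition log-probabilities
    emit_p: (n_states, n_symbols) emission log-probabilities
    Returns the log-probability of the best state path.
    """
    v = start_p + emit_p[:, obs[0]]
    for symbol in obs[1:]:
        # For each next state, take the best predecessor, then emit.
        v = np.max(v[:, None] + trans_p, axis=0) + emit_p[:, symbol]
    return float(np.max(v))

def recognize(obs, word_models):
    """Return the vocabulary word whose HMM best explains obs.

    word_models: dict mapping word -> (start_p, trans_p, emit_p)
    """
    return max(word_models,
               key=lambda w: viterbi_log_likelihood(obs, *word_models[w]))
```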
2.7.4. Gesture recognition

When humans converse, we use gestures to clarify speech and to compactly convey geometric information (location, direction, etc.). Very often, a speaker will use hand movement (speed and range of motion) to indicate urgency and will point to disambiguate spoken directions (e.g., "I parked the car over there").

Although there are many ways to recognize gestures, vision-based recognition has several advantages over other methods. Vision does not require the user to master or wear special hardware. Additionally, vision is passive and can have a large workspace. Two excellent overviews of vision-based gesture recognition are [124,166]. Details of specific systems appear in [85,161,167].

2.7.5. Facial perception

Face detection and recognition. A widely used approach for identifying people is face detection. Two comprehensive surveys are [34,63]. A large number of real-time face detection and tracking systems have been developed in recent years, such as [139,140,158].

Facial expression. Since Darwin [37], facial expressions have been considered to convey emotion. More recently, facial expressions have also been thought to function as social signals of intent. A comprehensive review of facial expression recognition (including a review of ethical and psychological concerns) is [94]. A survey of older techniques is [136].

There are three basic approaches to facial expression recognition [94]. Image motion techniques identify facial muscle actions in image sequences. Anatomical models track facial features, such as the distance between the eyes and nose. Principal component analysis (PCA) reduces image-based representations of faces into principal components such as eigenfaces or holons.

Gaze tracking. Gaze is a good indicator of what a person is looking at and paying attention to. A person's gaze direction is determined by two factors: head orientation and eye orientation. Although numerous vision systems track head orientation, few researchers have attempted to track eye gaze using only passive vision. Furthermore, such trackers have not proven to be highly accurate [158]. Gaze tracking research includes [139,152].
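The two-factor decomposition of gaze can be made explicit: the line of sight is the composition of the head's orientation in the world with the eyes' orientation in the head. A minimal sketch, under assumed frame conventions (not any published tracker):

```python
import numpy as np

def rot_yaw_pitch(yaw, pitch):
    """Rotation for yaw (about z) then pitch (about y), in radians."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    yaw_m = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    pitch_m = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    return yaw_m @ pitch_m

def gaze_direction(head_yaw, head_pitch, eye_yaw, eye_pitch):
    """Compose head-in-world and eye-in-head orientations.

    Returns a unit vector (world frame) along the line of sight,
    assuming the face looks along +x when all angles are zero.
    """
    head_R = rot_yaw_pitch(head_yaw, head_pitch)
    eye_R = rot_yaw_pitch(eye_yaw, eye_pitch)
    return (head_R @ eye_R) @ np.array([1.0, 0.0, 0.0])
```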
2.8. User modeling

In order to interact with people in a human-like manner, socially interactive robots must perceive human social behavior [18]. Detecting and recognizing human action and communication provides a good starting point. More important, however, is being able to interpret and react to behavior. A key mechanism for doing this is user modeling.

User modeling can be quantitative, based on the evaluation of parameters or metrics. The stereotype approach, for example, classifies users into different subgroups (stereotypes), based on the measurement of pre-defined features for each subgroup [155]. User modeling may also be qualitative in nature. Interactional structure analysis, story and script based matching, and BDI (belief-desire-intention) models all identify subjective aspects of behavior.

There are many types of user models: cognitive, attentional, etc. A user model generally contains attributes that describe a user, or group, of users. Models may be static (defined a priori) or dynamic (adapted or learned). Information about users may be acquired
explicitly (through questioning) or implicitly (inferred through observation). The former can be time consuming, and the latter difficult, especially if the user population is diverse [69].

User models are employed for a variety of purposes. First, user models help the robot understand human behavior and dialogue. Second, user models shape and control feedback (e.g., interaction pacing) given to the user. Finally, user models are useful for adapting the robot's behavior to accommodate users with varying skills, experience, and knowledge.

Fong et al. [59] employ a stereotype user model to adapt human–robot dialogue and robot behavior to different users. Pineau et al. [127] discuss the use of a quantitative temporal Bayes net to manage individual-specific interaction between a nurse robot and elderly individuals. Schulte et al. [143] describe a memory-based learner used by a tour robot to improve its ability to interact with different people.
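As a toy illustration of the quantitative, stereotype-based approach (not Fong et al.'s actual implementation), a robot might classify a user from a few measured interaction features and adjust its dialogue pacing accordingly. Feature names, thresholds, and the pacing rule are invented:

```python
# Hypothetical stereotype user model for adapting dialogue pacing.

STEREOTYPES = {
    "novice": {"max_commands_per_min": 4,  "dialogue_pace": "slow"},
    "expert": {"max_commands_per_min": 30, "dialogue_pace": "fast"},
}

def classify_user(commands_per_min: float, error_rate: float) -> str:
    """Assign a user to a stereotype from measured interaction features."""
    if commands_per_min > 10 and error_rate < 0.1:
        return "expert"
    return "novice"

def dialogue_pace(commands_per_min: float, error_rate: float) -> str:
    stereotype = classify_user(commands_per_min, error_rate)
    return STEREOTYPES[stereotype]["dialogue_pace"]

print(dialogue_pace(commands_per_min=15, error_rate=0.05))  # -> "fast"
```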
2.9. Socially situated learning

In socially situated learning, an individual interacts with his social environment to acquire new competencies. Humans and some animals (e.g., primates) learn through a variety of techniques including direct tutelage, observational conditioning, goal emulation, and imitation [64]. One prevalent form of influence is local, or stimulus, enhancement, in which a teacher actively manipulates the perceived environment to direct the learner's attention to relevant stimuli [96].

2.9.1. Robot social learning

For social robots, learning is used for transferring skills, tasks, and information. Learning is important because the knowledge of the teacher, or model, and robot may be very different. Additionally, because of differences in sensing and perception, the model and robot may have very different views of the world. Thus, learning is often essential for improving communication, facilitating interaction, and sharing knowledge [80].

A number of studies in robot social learning have focused on robot–robot interaction. Some of the earliest work focused on cooperative, or group, behavior [6,100]. A large research community continues to investigate group social learning, often referred to as "swarm intelligence" and "collective robotics". Other robot–robot work has addressed the use of "leader following" [38,72], inter-personal communication [13,15,149], imitation [14,65], and multi-robot formations [109].

In recent years, there has been significant effort to understand how social learning can occur through human–robot interaction. One approach is to create sequences of known behaviors to match a human model [102]. Another approach is to match observations (e.g., motion sequences) to known behaviors, such as motor primitives [51,52]. Recently, Kaplan et al. [77] have explored the use of animal training techniques for teaching an autonomous pet robot to perform complex tasks. The most common social learning method, however, is imitation.

2.9.2. Imitation

Imitation is an important mechanism for learning behaviors socially in primates and other animal species [46]. At present, there is no commonly accepted definition of "imitation" in the animal and human psychology literature. An extensive discussion is given in [71]. Researchers often refer to Thorpe's definition [157], which defines imitation as the "copying of a novel or otherwise improbable act or utterance, or some act for which there is clearly no instinctive tendency".

With robots, imitation relies upon the robot having many perceptual, cognitive, and motor capabilities [24]. Researchers often simplify the environment or situation to make the problem tractable. For example, active markers or constrained perception (e.g., white objects on a black background) may be employed to make tracking of the model amenable.

Breazeal and Scassellati [24] argue that even if a robot has the skills necessary for imitation, there are still several questions that must be addressed (see the sketch after this list):

• How does the robot know when to imitate? In order for imitation to be useful, the robot must decide not only when to start/stop imitating, but also when it is appropriate (based on the social context, the availability of a good model, etc.).

• How does the robot know what to imitate? Faced with a stream of sensory data, the robot must decide which of the model's actions are relevant to the task, which are part of the instruction process, and which are circumstantial.
158 T. Fong et al. / Robotics and Autonomous Systems 42 (2003) 143–166

• How does the robot map observed action into behavior? Once the robot has identified and observed salient features of the model's actions, it must ascertain how to reproduce these actions through its behavior.
• How does the robot evaluate its behavior, correct errors, and recognize when it has achieved its goal? In order for the robot to improve its performance, it must be able to measure to what degree its imitation is accurate and to recognize when there are errors.
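To make the four questions concrete, the toy Python sketch below casts them as decision points in a single imitation routine. It is purely illustrative: the demonstration records, the body map, and the success test are invented here and are not taken from any system surveyed above.

    # A toy, runnable sketch of Breazeal and Scassellati's four imitation
    # questions [24] as decision points in a control loop. Everything here
    # (the action records, the body map, the error measure) is invented
    # purely for illustration.

    # Observed demonstration: a stream of the model's actions, only some
    # of which are relevant to the task being taught.
    demonstration = [
        {"action": "look_at_learner", "relevant": False},  # instruction cue
        {"action": "grasp_block",     "relevant": True},
        {"action": "scratch_head",    "relevant": False},  # circumstantial
        {"action": "stack_block",     "relevant": True},
    ]

    # Mapping from observed human actions to the robot's own motor
    # repertoire (the "correspondence problem").
    body_map = {"grasp_block": "close_gripper", "stack_block": "move_arm_up"}

    def imitate(demo, social_context_ok, model_present):
        # 1. When to imitate? Require an appropriate context and a model.
        if not (social_context_ok and model_present):
            return []

        # 2. What to imitate? Keep only the task-relevant actions.
        relevant = [d["action"] for d in demo if d["relevant"]]

        # 3. How to map observation onto own behavior?
        plan = [body_map[a] for a in relevant]

        # 4. How to evaluate? Here, success is simply reproducing every
        #    relevant step; a real robot would compare world states.
        succeeded = len(plan) == len(relevant)
        return plan if succeeded else []

    print(imitate(demonstration, social_context_ok=True, model_present=True))
    # -> ['close_gripper', 'move_arm_up']

Even in this caricature, the hard research problems surface as the stubbed-out tests: deciding relevance, solving the correspondence problem, and measuring imitation quality.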
Imitation has been used as a mechanism for learning simple motor skills from observation, such as block stacking [89] or pendulum balancing [141]. Imitation has also been applied to the learning of sensor–motor associations [3] and for constructing task representations [116].

2.10. Intentionality

Dennett [50] contends that humans use three strategies to understand and predict behavior. The physical stance (predictions based on physical characteristics) and the design stance (predictions based on the design and functionality of artifacts) are sufficient to explain simple devices. With complex systems (e.g., humans), however, we often do not have sufficient information to perform physical or design analysis. Instead, we tend to adopt an intentional stance and assume that the system's actions result from its beliefs and desires.

In order for a robot to interact socially, therefore, it needs to provide evidence that it is intentional (even if this intentionality is not intrinsic [138]). For example, a robot could demonstrate goal-directed behaviors, or it could exhibit attentional capacity. If it does so, then the human will consider the robot to act in a rational manner.

2.10.1. Attention

Scassellati [139] discusses the recognition and production of joint attention behaviors in Cog. Just as humans use a variety of physical social cues to indicate which object is currently under consideration, Cog performs gaze following, imperative pointing, and declarative pointing.

Kopp and Gardenfors [84] also claim that attentional capacity is a fundamental requirement for intentionality. In their model, a robot must be able to identify relevant objects in the scene, direct its sensors towards an object, and maintain its focus on the selected object.

Marom and Hayes [96–98] consider attention to be a collection of mechanisms that determine the significance of stimuli. Their research focuses on the development of pre-learning attentional mechanisms, which help reduce the amount of information that an individual has to deal with.
human will consider the robot to act in a rational
manner. A key difference between conventional and socially
interactive robots is that the way in which a human
2.10.1. Attention perceives a robot establishes expectations that guide
Scassellati [139] discusses the recognition and pro- his interaction with it. This perception, especially of
duction of joint attention behaviors in Cog. Just as the robot’s intelligence, autonomy, and capabilities is
humans use a variety of physical social cues to in- influenced by numerous factors, both intrinsic and ex-
dicate which object is currently under consideration, trinsic.
Cog performs gaze following, imperative pointing, and Clearly, the human’s preconceptions, knowledge,
declarative pointing. and prior exposure to the robot (or similar robots)
Kopp and Gardenfors [84] also claim that atten- have a strong influence. Additionally, aspects of the
tional capacity is a fundamental requirement for in- robot’s design (embodiment, dialogue, etc.) may play
tentionality. In their model, a robot must be able to a significant role. Finally, the human’s experience over
identify relevant objects in the scene, direct its sen- time will undoubtedly shape his judgment, i.e., initial
T. Fong et al. / Robotics and Autonomous Systems 42 (2003) 143–166 159

In the following, we briefly present studies that have examined how these factors affect human–robot interaction, particularly the way in which humans relate to, and work with, social robots.

3.1.1. Attitudes towards robots

Bumby and Dautenhahn conducted a study to identify how people, specifically children, perceive robots and what type of behavior they may exhibit when interacting with robots [27]. They found that children tend to conceive of robots as geometric forms with human features (i.e., a strong anthropomorphic predisposition). Moreover, children tend to attribute free will, preferences, emotion, and male gender to the robots, even without external cueing.

In [78], Khan describes a survey to investigate people's attitudes towards intelligent service robots. A review of robots in literature and film, followed by an interview study, was used to design the survey questionnaire. The survey revealed that people's attitudes are strongly influenced by science fiction. Two significant findings were: (1) a robot with machine-like appearance, serious personality, and round shape is preferred; (2) verbal communication using a human-like voice is highly desired.

3.1.2. Field studies

Thus far, few studies have investigated people's willingness to closely interact with social robots. Given that we expect social robots to play increasingly larger roles in daily life, there is a strong need for field studies to examine how people behave when robots are introduced into their activities.

Scheeff et al. [142] conducted two studies to observe how a range of people interact with a creature-like social robot, in both laboratory and public conditions. In these studies, children were observed to be more engaged than adults and had responses that varied with gender and age. Also, a friendly robot personality was reported to have prompted qualitatively better interaction than an angry personality.

In [75], Hüttenrauch and Severinson-Eklund describe a long-term usage study of CERO, a service robot that assists motion-impaired people in an office environment (Fig. 9). The study was designed to observe interaction over time, especially after the user had fully integrated the robot into his work routine. A key finding was that robots need to be capable of social interaction, or at least aware of the social context, whenever they operate around people.

In [47], Dautenhahn and Werry describe a quantitative method for evaluating robot–human interactions, which is similar to the way ethologists use observation to evaluate animal behavior. This method has been used to study differences in interaction style when children play with a socially interactive robotic toy versus a non-robotic toy. Complementing this approach, Dautenhahn et al. [49] have also proposed qualitative techniques (based on conversation analysis) that focus on social context.
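The following is an invented example of what such ethology-inspired quantitative evaluation can look like in practice: behavior intervals hand-coded from video are tallied into frequencies and durations. The behavior categories and log format are illustrative only, not Dautenhahn and Werry's actual coding scheme.

    # Invented example of ethology-style interaction coding: an observer
    # logs (start, end, behavior) intervals for a child-robot play
    # session, and we compute per-behavior frequencies and durations.

    from collections import defaultdict

    # Intervals (in seconds) coded from video of one session.
    session_log = [
        (0,  12, "gaze_at_robot"),
        (12, 15, "touch_robot"),
        (15, 40, "gaze_at_robot"),
        (40, 46, "gaze_elsewhere"),
        (46, 58, "vocalization"),
    ]

    def summarize(log):
        counts = defaultdict(int)
        durations = defaultdict(float)
        for start, end, behavior in log:
            counts[behavior] += 1
            durations[behavior] += end - start
        total = sum(durations.values())
        return {
            b: {"count": counts[b],
                "seconds": durations[b],
                "share": durations[b] / total}  # fraction of session time
            for b in counts
        }

    for behavior, stats in summarize(session_log).items():
        print(behavior, stats)

Comparing such summaries across conditions (e.g., robotic versus non-robotic toy) is what turns raw observation into a quantitative interaction measure.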
3.1.3. Effects of emotion

Cañamero and Fredslund [30] performed a study to evaluate how well humans can recognize facial expressions displayed by Feelix (Fig. 10). In this study, they asked test subjects to make subjective judgments of the emotions displayed on Feelix's face and in pictures of humans. The results were very similar to those reported in other studies of facial expression recognition.

Bruce et al. [26] conducted a 2 × 2 full factorial experiment to explore how emotion expression and indication of attention affect a robot's ability to engage humans. In the study, the robot exhibited different emotions based on its success at engaging and leading a person through a poll-taking task. The results suggest that having an expressive face and indicating attention with movement can help make a robot more compelling to interact with.
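For readers unfamiliar with the terminology, a 2 × 2 full factorial design crosses both levels of two factors, yielding four conditions. The snippet below enumerates the conditions and derives main effects from fabricated engagement scores; the numbers are invented purely to show how such a design is read, and are not Bruce et al.'s data.

    # A 2 x 2 full factorial crosses two binary factors, giving four
    # conditions. Scores are fabricated solely for illustration.

    from itertools import product

    factors = {"expressive_face": [False, True],
               "attention_movement": [False, True]}

    # Fabricated mean engagement scores per condition (0..1).
    engagement = {
        (False, False): 0.30, (False, True): 0.45,
        (True,  False): 0.50, (True,  True): 0.70,
    }

    for face, attention in product(*factors.values()):
        print(f"face={face!s:5} attention={attention!s:5} "
              f"score={engagement[(face, attention)]}")

    # Main effect of a factor: mean score with it on, minus mean with
    # it off, averaged over the other factor's levels.
    def main_effect(index):
        on  = [v for k, v in engagement.items() if k[index]]
        off = [v for k, v in engagement.items() if not k[index]]
        return sum(on) / len(on) - sum(off) / len(off)

    print("face main effect:", main_effect(0))       # approx. 0.225
    print("attention main effect:", main_effect(1))  # approx. 0.175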
3.1.4. Effects of appearance and dialogue

One problem with dialogue is that it can lead to biased perceptions. For example, associations with stereotyped behavior can be created, which may lead users to attribute qualities to the robot that are inaccurate. Users may also form incorrect models, or make poor assumptions, about how the robot actually works. This can lead to serious consequences, the least of which is user error [61].

Kiesler and Goetz conducted a series of studies to understand the influence of a robot's appearance and dialogue [79]. A primary contribution of this work is a set of measures for characterizing the mental models that people use when they interact with robots. A significant finding was that neither ratings nor behavioral observations alone are sufficient to fully describe human responses to robots. In addition, Kiesler and Goetz concluded that dialogue influences the development and change of mental models more strongly than differences in appearance do.

DiSalvo et al. [54] investigated how the features and size of humanoid robot faces contribute to the perception of humanness. In this study, they analyzed 48 robots and conducted surveys to measure people's perception. Statistical analysis showed that the presence of certain features, the dimensions of the head, and the number of facial features greatly influence the perception of humanness.

3.1.5. Effects of personality

When a robot exhibits personality (whether intended by the designer or not), a number of effects occur. First, personality can serve as an affordance for interaction. A growing number of commercial products targeting the toy and entertainment markets, such as Tiger Electronics' Furby (a creature-like robot), Hasbro's My Real Baby (a robot doll), and Sony's Aibo (a robot dog), focus on personality as a way to entice and foster effective interaction [18,60].

Personality can also impact task performance, in either a negative or a positive sense. For example, Goetz and Kiesler examined the influence of two different robot personalities on user compliance with an exercise routine [67]. In their study, they found some evidence that simply creating a charming personality will not necessarily engender the best cooperation with a robotic assistant.

3.2. Open issues and questions

When we engage in social interaction, there is no guarantee that it will be meaningful or worthwhile. Sometimes, in spite of our best intentions, the interaction fails. Relationships, especially long-term ones, involve a myriad of factors, and making them succeed requires concerted effort.

In [165], Woods writes:

    It seems paradoxical, but studies of the impact of automation reveal that design of automated systems is really the design of a new human–machine cooperative system. The design of automated systems is really the design of a team and requires provisions for the coordination between machine agents and practitioners.

In other words, humans and robots must be able to coordinate their actions so that they interact productively with each other. It is not appropriate (or even necessary) to make the robot as socially competent as possible. Rather, it is more important that the robot be compatible with the human's needs, that it match application requirements, that it be understandable and believable, and that it provide the interactional support the human expects.

As we have seen, building a social robot involves numerous design issues. Although much progress has already been made toward solving these problems, much work remains. This is due, in part, to the broad range of applications for which social robots are being developed. Additionally, there are many research questions that remain to be answered, including the following.
What are the minimal criteria for a robot to be social? Social behavior includes such a wide range of phenomena that it is not evident which features a robot must have in order to show social awareness or intelligence. Clearly, a robot's design depends on its intended use, the complexity of the social environment, and the sophistication of the interaction. But to what extent does social robot design need to reflect theories of human social intelligence?

How do we evaluate social robots? Many researchers contend that adding social interaction capabilities will improve robot performance, e.g., by increasing usability. Thus far, however, little experimental evidence exists to support this claim. What is needed is a systematic study of how "social features" impact human–robot interaction in the context of different application domains [43]. The problem is that it is difficult to determine which metrics are most appropriate for evaluating social "effectiveness". Should we use human performance metrics? Should we apply psychological, sociological, or HCI measures? How do we account for cross-cultural differences and individual needs?

What differentiates social robots from robots that exhibit good human–robot interaction? Although conventional HRI design does not directly address the issues presented in this paper, it does involve techniques that indirectly support social interaction. For example, HCI methods (e.g., contextual inquiry) are often used to ensure that the interaction will match user needs. The question is: are social robots so different from traditional robots that we need different interactional design techniques?

What underlying social issues may influence future technical development? An observation made by Restivo is that "robotics engineers seem to be driven to program out aspects of being human that for one reason or another they do not like or that make them personally uncomfortable" [134]. If this is true, does that mean that social robots will always be "benign" by design? If our goal is for social robots to eventually have a place in human society, should we not investigate what the negative consequences of social robots could be?

Are there ethical issues that we need to be concerned with? For social robots to become more and more sophisticated, they will need increasingly better computational models of individuals, or at least of humans in general. Detailed user modeling, however, may not be acceptable, especially if it involves privacy concerns. A related question is that of user monitoring. If a social robot has a model of an individual, should it be capable of recognizing when a person is acting erratically and taking action?

How do we design for long-term interaction? To date, research on social robots has focused exclusively on short-duration interaction, ranging from periods of several minutes (e.g., tour-guiding) to several weeks, such as in [75]. Little is known about interaction over longer periods. To remain engaging and empowering for months, or years, will social robots need to be capable of long-term adaptiveness, associations, and memory? Also, how can we determine whether long-term human–robot relationships may cause ill effects?

3.3. Summary

As we look ahead, it seems clear that social robots will play an ever larger role in our world, working for and in cooperation with humans. Social robots will assist in health care, rehabilitation, and therapy. Social robots will work in close proximity to humans, serving as tour guides, office assistants, and household staff. Social robots will engage us, entertain us, and enlighten us.

Central to the success of social robots will be close and effective interaction between humans and robots. Thus, although it is important to continue enhancing autonomous capabilities, we must not neglect improving the human–robot relationship. The challenge is not merely to develop techniques that allow social robots to succeed in limited tasks, but also to find ways that social robots can participate in the full richness of human society.

Acknowledgements

We would like to thank the participants of the Robot as Partner: An Exploration of Social Robots workshop (2002 IEEE International Conference on Intelligent Robots and Systems) for inspiring this paper. We would also like to thank Cynthia Breazeal, Lola Cañamero, and Sara Kiesler for their insightful comments. This work was partially supported by EPSRC grant (GR/M62648).

References

[1] B. Adams, C. Breazeal, R.A. Brooks, B. Scassellati, Humanoid robots: a new kind of tool, IEEE Intelligent Systems 15 (4) (2000) 25–31.
[2] J. Aggarwal, Q. Cai, Human motion analysis: A review, Computer Vision and Image Understanding 73 (3) (1999) 428–440.
[3] P. Andry, P. Gaussier, S. Moga, J.P. Banquet, Learning and communication via imitation: An autonomous robot perspective, IEEE Transactions on Systems, Man and Cybernetics 31 (5) (2001).
[4] R. Arkin, M. Fujita, T. Takagi, R. Hasegawa, An ethological and emotional basis for human–robot interaction, Robotics and Autonomous Systems 42 (2003) 191–201 (this issue).
[5] C. Armon-Jones, The social functions of emotions, in: R. Harré (Ed.), The Social Construction of Emotions, Basil Blackwell, Oxford, 1985.
[6] T. Balch, R. Arkin, Communication in reactive multiagent robotic systems, Autonomous Robots 1 (1994).
[7] S. Baron-Cohen, Mindblindness: An Essay on Autism and Theory of Mind, MIT Press, Cambridge, MA, 1995.
[8] C. Bartneck, M. Okada, Robotic user interfaces, in: Proceedings of the Human and Computer Conference, 2001.
[9] C. Bartneck, eMuu—an emotional embodied character for the ambient intelligent home, Ph.D. Thesis, Technical University Eindhoven, The Netherlands, 2002.
[10] R. Beckers, et al., From local actions to global tasks: Stigmergy and collective robotics, in: Proceedings of Artificial Life IV, 1996.
[11] A. Billard, Robota: clever toy and educational tool, Robotics and Autonomous Systems 42 (2003) 259–269 (this issue).
[12] A. Billard, K. Dautenhahn, Grounding communication in situated, social robots, in: Proceedings of the Towards Intelligent Mobile Robots Conference, Report No. UMCS-97-9-1, Department of Computer Science, Manchester University, 1997.
[13] A. Billard, K. Dautenhahn, Grounding communication in autonomous robots: An experimental study, Robotics and Autonomous Systems 24 (1–2) (1998) 71–81.
[14] A. Billard, K. Dautenhahn, Experiments in learning by imitation: Grounding and use of communication in robotic agents, Adaptive Behavior 7 (3–4) (1999).
[15] A. Billard, G. Hayes, Learning to communicate through imitation in autonomous robots, in: Proceedings of the International Conference on Artificial Neural Networks, 1997.
[16] E. Bonabeau, M. Dorigo, G. Theraulaz, Swarm Intelligence: From Natural to Artificial Systems, Oxford University Press, Oxford, 1999.
[17] C. Breazeal, A motivational system for regulating human–robot interaction, in: Proceedings of the National Conference on Artificial Intelligence, Madison, WI, 1998, pp. 54–61.
[18] C. Breazeal, Designing Sociable Robots, MIT Press, Cambridge, MA, 2002.
[19] C. Breazeal, Toward sociable robots, Robotics and Autonomous Systems 42 (2003) 167–175 (this issue).
[20] C. Breazeal, Designing sociable robots: Lessons learned, in: K. Dautenhahn, et al. (Eds.), Socially Intelligent Agents: Creating Relationships with Computers and Robots, Kluwer Academic Publishers, Dordrecht, 2002.
[21] C. Breazeal, P. Fitzpatrick, That certain look: Social amplification of animate vision, in: Proceedings of the AAAI Fall Symposium on Socially Intelligent Agents—The Human in the Loop, 2000.
[22] C. Breazeal, B. Scassellati, How to build robots that make friends and influence people, in: Proceedings of the International Conference on Intelligent Robots and Systems, 1999.
[23] C. Breazeal, B. Scassellati, A context-dependent attention system for a social robot, in: Proceedings of the International Joint Conference on Artificial Intelligence, Stockholm, Sweden, 1999, pp. 1146–1153.
[24] C. Breazeal, B. Scassellati, Challenges in building robots that imitate people, in: K. Dautenhahn, C. Nehaniv (Eds.), Imitation in Animals and Artifacts, MIT Press, Cambridge, MA, 2001.
[25] C. Breazeal, A. Edsinger, P. Fitzpatrick, B. Scassellati, Active vision systems for sociable robots, IEEE Transactions on Systems, Man and Cybernetics 31 (5) (2001).
[26] A. Bruce, I. Nourbakhsh, R. Simmons, The role of expressiveness and attention in human–robot interaction, in: Proceedings of the AAAI Fall Symposium Emotional and Intelligent II: The Tangled Knot of Social Cognition, 2001.
[27] K. Bumby, K. Dautenhahn, Investigating children's attitudes towards robots: A case study, in: Proceedings of the Cognitive Technology Conference, 1999.
[28] J. Cahn, The generation of affect in synthesized speech, Journal of the American Voice I/O Society 8 (1990) 1–19.
[29] L. Cañamero, Modeling motivations and emotions as a basis for intelligent behavior, in: W. Johnson (Ed.), Proceedings of the International Conference on Autonomous Agents.
[30] L. Cañamero, J. Fredslund, I show you how I like you—can you read it in my face? IEEE Transactions on Systems, Man and Cybernetics 31 (5) (2001).
[31] L. Cañamero (Ed.), Emotional and Intelligent II: The Tangled Knot of Social Cognition, Technical Report No. FS-01-02, AAAI Press, 2001.
[32] J. Cassell, Nudge, nudge, wink, wink: Elements of face-to-face conversation for embodied conversational agents, in: J. Cassell, et al. (Eds.), Embodied Conversational Agents, MIT Press, Cambridge, MA, 1999.
[33] J. Cassell, et al. (Eds.), Embodied Conversational Agents, MIT Press, Cambridge, MA, 1999.
[34] R. Chellappa, et al., Human and machine recognition of faces: A survey, Proceedings of the IEEE 83 (5) (1995).
[35] M. Coulson, Expressing emotion through body movement: A component process approach, in: R. Aylett, L. Cañamero (Eds.), Animating Expressive Characters for Social Interactions, SSAISB Press, 2002.
[36] J. Crowley, Vision for man–machine interaction, Robotics and Autonomous Systems 19 (1997) 347–358.
[37] C. Darwin, The Expression of Emotions in Man and Animals, Oxford University Press, Oxford, 1998.
[38] K. Dautenhahn, Getting to know each other—artificial social intelligence for autonomous robots, Robotics and Autonomous Systems 16 (1995) 333–356.
[39] K. Dautenhahn, I could be you—the phenomenological dimension of social understanding, Cybernetics and Systems Journal 28 (5) (1997).
[40] K. Dautenhahn, The art of designing socially intelligent agents—science, fiction, and the human in the loop, Applied Artificial Intelligence Journal 12 (7–8) (1998) 573–617.
[41] K. Dautenhahn, Socially intelligent agents and the primate social brain—towards a science of social minds, in: Proceedings of the AAAI Fall Symposium on Socially Intelligent Agents, 2000.
[42] K. Dautenhahn, Roles and functions of robots in human society—implications from research in autism therapy, Robotica, to appear.
[43] K. Dautenhahn, Design spaces and niche spaces of believable social robots, in: Proceedings of the International Workshop on Robots and Human Interactive Communication, 2002.
[44] K. Dautenhahn, A. Billard, Bringing up robots or—the psychology of socially intelligent robots: From theory to implementation, in: Proceedings of Autonomous Agents, 1999.
[45] K. Dautenhahn, C. Nehaniv, Living with socially intelligent agents: A cognitive technology view, in: K. Dautenhahn (Ed.), Human Cognition and Social Agent Technology, Benjamin, New York, 2000.
[46] K. Dautenhahn, C. Nehaniv (Eds.), Imitation in Animals and Artifacts, MIT Press, Cambridge, MA, 2001.
[47] K. Dautenhahn, I. Werry, A quantitative technique for analysing robot–human interactions, in: Proceedings of the International Conference on Intelligent Robots and Systems, 2002.
[48] K. Dautenhahn, B. Ogden, T. Quick, From embodied to socially embedded agents—implications for interaction-aware robots, Cognitive Systems Research 3 (3) (2002) (Special Issue on Situated and Embodied Cognition).
[49] K. Dautenhahn, I. Werry, J. Rae, P. Dickerson, Robotic playmates: Analysing interactive competencies of children with autism playing with a mobile robot, in: K. Dautenhahn, et al. (Eds.), Socially Intelligent Agents: Creating Relationships with Computers and Robots, Kluwer Academic Publishers, Dordrecht, 2002.
[50] D. Dennett, The Intentional Stance, MIT Press, Cambridge, MA, 1987.
[51] J. Demiris, G. Hayes, Imitative learning mechanisms in robots and humans, in: Proceedings of the European Workshop on Learning Robotics, 1996.
[52] J. Demiris, G. Hayes, Active and passive routes to imitation, in: Proceedings of the AISB Symposium on Imitation in Animals and Artifacts, 1999.
[53] J.-L. Deneubourg, et al., The dynamics of collective sorting: Robot-like ants and ant-like robots, in: Proceedings of the International Conference on Simulation of Adaptive Behavior, 2000.
[54] C. DiSalvo, et al., All robots are not equal: The design and perception of humanoid robot heads, in: Proceedings of the Conference on Designing Interactive Systems, 2002.
[55] B. Duffy, Anthropomorphism and the social robot, Robotics and Autonomous Systems 42 (2003) 177–190 (this issue).
[56] P. Ekman, W. Friesen, Measuring facial movement with the facial action coding system, in: Emotion in the Human Face, Cambridge University Press, Cambridge, 1982.
[57] P. Ekman, Basic emotions, in: T. Dalgleish, M. Power (Eds.), Handbook of Cognition and Emotion, Wiley, New York, 1999.
[58] B. Fogg, Introduction: Persuasive technologies, Communications of the ACM 42 (5) (1999).
[59] T. Fong, C. Thorpe, C. Baur, Collaboration, dialogue, and human–robot interaction, in: Proceedings of the International Symposium on Robotics Research, 2001.
[60] T. Fong, I. Nourbakhsh, K. Dautenhahn, A survey of socially interactive robots: concepts, design, and applications, Technical Report No. CMU-RI-TR-02-29, Robotics Institute, Carnegie Mellon University, 2002.
[61] T. Fong, C. Thorpe, C. Baur, Robot, asker of questions, Robotics and Autonomous Systems 42 (2003) 235–243 (this issue).
[62] N. Frijda, Recognition of emotion, Advances in Experimental Social Psychology 4 (1969).
[63] T. Fromherz, P. Stucki, M. Bichsel, A survey of face recognition, MML Technical Report No. 97.01, Department of Computer Science, University of Zurich, 1997.
[64] B. Galef, Imitation in animals: History, definition, and interpretation of data from the psychological laboratory, in: Social Learning: Psychological and Biological Perspectives, Erlbaum, London, 1988.
[65] P. Gaussier, et al., From perception–action loops to imitation processes: A bottom-up approach of learning by imitation, Applied Artificial Intelligence Journal 12 (7–8) (1998).
[66] D. Gavrilla, The visual analysis of human movement: A survey, Computer Vision and Image Understanding 73 (1) (1999).
[67] J. Goetz, S. Kiesler, Cooperation with a robotic assistant, in: Proceedings of CHI, 2002.
[68] D. Goldberg, M. Mataric, Interference as a tool for designing and evaluating multi-robot controllers, in: Proceedings AAAI-97, Providence, RI, 1997, pp. 637–642.
[69] D. Goren-Bar, Designing model-based intelligent dialogue systems, in: M. Rossi, K. Siau (Eds.), Information Modeling in the New Millennium, Idea Group, 2001.
[70] E. Hall, The hidden dimension: Man's use of space in public and private, The Bodley Head Ltd., 1966.
[71] C. Heyes, B. Galef, Social Learning in Animals: The Roots of Culture, Academic Press, 1996.
[72] G. Hayes, J. Demiris, A robot controller using learning by imitation, in: Proceedings of the International Symposium on Intelligent Robotic Systems, 1994.
[73] O. Holland, Grey Walter: The pioneer of real artificial life, in: C. Langton, K. Shimohara (Eds.), Proceedings of the International Workshop on Artificial Life, MIT Press, Cambridge, MA.
[74] E. Hudlicka, Increasing SIA architecture realism by modeling and adapting to affect and personality, in: K. Dautenhahn, et al. (Eds.), Socially Intelligent Agents: Creating Relationships with Computers and Robots, Kluwer Academic Publishers, Dordrecht, 2002.
[75] H. Hüttenrauch, K. Severinson-Eklund, Fetch-and-carry with CERO: Observations from a long-term user study, in: Proceedings of the International Workshop on Robots and Human Communication, 2002.
[76] O. John, The 'Big Five' factor taxonomy: Dimensions of personality in the natural language and in questionnaires, in: L. Pervin (Ed.), Handbook of Personality: Theory and Research, Guilford, 1990.
[77] F. Kaplan, et al., Taming robots with clicker training: A solution for teaching complex behaviors, in: Proceedings of the European Workshop on Learning Robots, 2001.
[78] Z. Khan, Attitudes towards intelligent service robots, Technical Report No. TRITA-NA-P9821, NADA, KTH, Stockholm, Sweden, 1998.
[79] S. Kiesler, J. Goetz, Mental models and cooperation with robotic assistants, in: Proceedings of CHI, 2002.
[80] V. Klingspor, J. Demiris, M. Kaiser, Human–robot-communication and machine learning, Applied Artificial Intelligence Journal 11 (1997).
[81] H. Kobayashi, F. Hara, A. Tange, A basic study on dynamic control of facial expressions for face robot, in: Proceedings of the International Workshop on Robots and Human Communication, 1994.
[82] H. Kozima, H. Yano, In search of ontogenetic prerequisites for embodied social intelligence, in: Proceedings of the Workshop on Emergence and Development on Embodied Cognition; International Conference on Cognitive Science, 2001.
[83] H. Kozima, H. Yano, A robot that learns to communicate with human caregivers, in: Proceedings of the International Workshop on Epigenetic Robotics, 2001.
[84] L. Kopp, P. Gardenfors, Attention as a Minimal Criterion of Intentionality in Robotics, Lund University Cognitive Studies, vol. 89, 2001.
[85] D. Kortenkamp, E. Huber, P. Bonasso, Recognizing and interpreting gestures on a mobile robot, in: Proceedings of the AAAI-96, Portland, OR, 1996, pp. 915–921.
[86] R. Krauss, P. Morrel-Samuels, C. Colasante, Do conversational hand gestures communicate? Journal of Personality and Social Psychology 61 (1991).
[87] M. Krieger, J.-B. Billeter, L. Keller, Ant-like task allocation and recruitment in cooperative robots, Nature 406 (6799) (2000).
[88] C. Kube, E. Bonabeau, Cooperative transport by ants and robots, Robotics and Autonomous Systems 30 (2000) 85–101.
[89] Y. Kuniyoshi, et al., Learning by watching: Extracting reusable task knowledge from visual observation of human performance, IEEE Transactions on Robotics and Automation 10 (6) (1994).
[90] M. Lansdale, T. Ormerod, Understanding Interfaces, Academic Press, New York, 1994.
[91] S. Lauria, G. Bugmann, T. Kyriacou, E. Klein, Mobile robot programming using natural language, Robotics and Autonomous Systems 38 (2002) 171–181.
[92] V. Lee, P. Gupta, Children's Cognitive and Language Development, Blackwell Scientific Publications, Oxford, 1995.
[93] H. Lim, A. Ishii, A. Takanishi, Basic emotional walking using a biped humanoid robot, in: Proceedings of the IEEE SMC, 1999.
[94] C. Lisetti, D. Schiano, Automatic facial expression interpretation: Where human–computer interaction, artificial intelligence, and cognitive science intersect, Pragmatics and Cognition 8 (1) (2000).
[95] K. Lorenz, The Foundations of Ethology, Springer, Berlin, 1981.
[96] Y. Marom, G. Hayes, Preliminary approaches to attention for social learning, Informatics Research Report No. EDI-INF-RR-0084, University of Edinburgh, 1999.
[97] Y. Marom, G. Hayes, Attention and social situatedness for skill acquisition, Informatics Research Report No. EDI-INF-RR-0069, University of Edinburgh, 2001.
[98] Y. Marom, G. Hayes, Interacting with a robot to enhance its perceptual attention, Informatics Research Report No. EDI-INF-RR-0085, University of Edinburgh, 2001.
[99] D. Massaro, Perceiving Talking Faces: From Speech Perception to Behavioural Principles, MIT Press, Cambridge, MA, 1998.
[100] M. Mataric, Learning to behave socially, in: Proceedings of the International Conference on Simulation of Adaptive Behavior, 1994.
[101] M. Mataric, Issues and approaches in design of collective autonomous agents, Robotics and Autonomous Systems 16 (1995) 321–331.
[102] M. Mataric, et al., Behavior-based primitives for articulated control, in: Proceedings of the International Conference on Simulation of Adaptive Behavior, 1998.
[103] T. Matsui, H. Asoh, J. Fry, et al., Integrated natural spoken dialogue system of Jijo-2 mobile robot for office services, in: Proceedings of the AAAI-99, Orlando, FL, 1999, pp. 621–627.
[104] Y. Matsusaka, T. Kobayashi, Human interface of humanoid robot realizing group communication in real space, in: Proceedings of the International Symposium on Humanoid Robotics, 1999.
[105] D. McNeill, Hand and Mind: What Gestures Reveal About Thought, University of Chicago Press, Chicago, IL, 1992.
[106] C. Melhuish, O. Holland, S. Hoddell, Collective sorting and segregation in robots with minimal sensing, in: Proceedings of the International Conference on Simulation of Adaptive Behavior, 1998.
[107] F. Michaud, S. Caron, Roball—An autonomous toy-rolling robot, in: Proceedings of the Workshop on Interactive Robot Entertainment, 2000.
[108] F. Michaud, et al., Artificial emotion and social robotics, in: Proceedings of the International Symposium on Distributed Autonomous Robotic Systems, 2000.
[109] F. Michaud, et al., Dynamic robot formations using directional visual perception, in: Proceedings of the International Conference on Intelligent Robots and Systems, 2002.
[110] H. Mizoguchi, et al., Realization of expressive mobile robot, in: Proceedings of the International Conference on Robotics and Automation, 1997.
[111] I. Murray, J. Arnott, Towards the simulation of emotion in synthetic speech: A review of the literature on human vocal emotion, Journal of the Acoustical Society of America 93 (2) (1993).
[112] I. Myers, Introduction to Type, Consulting Psychologists Press, Palo Alto, CA, 1998.
[113] T. Nakata, et al., Expression of emotion and intention by robot body movement, in: Proceedings of the 5th International Conference on Autonomous Systems, 1998.
[114] Y. Nakauchi, R. Simmons, A social robot that stands in line, in: Proceedings of the International Conference on Intelligent Robots and Systems, 2000.
[115] W. Newman, M. Lamming, Interactive System Design, Addison-Wesley, Reading, MA, 1995.
[116] M. Nicolescu, M. Mataric, Learning and interacting in human–robot domains, IEEE Transactions on Systems, Man and Cybernetics 31 (5) (2001).
[117] I. Nourbakhsh, An affective mobile robot educator with a full-time job, Artificial Intelligence 114 (1–2) (1999) 95–124.
[118] A. Rowe, C. Rosenberg, I. Nourbakhsh, CMUcam: a low-overhead vision system, in: Proceedings of the International Conference on Intelligent Robots and Systems (IROS 2002), Lausanne, Switzerland, 2002.
[119] T. Ogata, S. Sugano, Emotional communication robot: WAMOEBA-2R emotion model and evaluation experiments, in: Proceedings of the International Conference on Humanoid Robots, 2000.
[120] H. Okuno, et al., Human–robot interaction through real-time auditory and visual multiple-talker tracking, in: Proceedings of the International Conference on Intelligent Robots and Systems, 2001.
[121] A. Paiva (Ed.), Affective interactions: Towards a new generation of computer interfaces, in: Lecture Notes in Computer Science/Lecture Notes in Artificial Intelligence, vol. 1914, Springer, Berlin, 2000.
[122] J. Panksepp, Affective Neuroscience, Oxford University Press, Oxford, 1998.
[123] E. Paulos, J. Canny, Designing personal tele-embodiment, Autonomous Robots 11 (1) (2001).
[124] V. Pavlovic, R. Sharma, T. Huang, Visual interpretation of hand gestures for human–computer interaction: A review, IEEE Transactions on Pattern Analysis and Machine Intelligence 19 (7) (1997).
[125] P. Persson, et al., Understanding socially intelligent agents—A multilayered phenomenon, IEEE Transactions on SMC 31 (5) (2001).
[126] R. Pfeifer, On the role of embodiment in the emergence of cognition and emotion, in: Proceedings of the Toyota Conference on Affective Minds, 1999.
[127] J. Pineau, M. Montemerlo, M. Pollack, N. Roy, S. Thrun, Towards robotic assistants in nursing homes: Challenges and results, Robotics and Autonomous Systems 42 (2003) 271–281 (this issue).
[128] R. Plutchik, Emotions: A general psychoevolutionary theory, in: K. Scherer, P. Ekman (Eds.), Approaches to Emotion, Erlbaum, London, 1984.
[129] L. Rabiner, B. Juang, Fundamentals of Speech Recognition, Prentice-Hall, Englewood Cliffs, NJ, 1993.
[130] B. Reeves, C. Nass, The Media Equation, CSLI Publications, Stanford, 1996.
[131] J. Reichard, Robotics: Fact, Fiction, and Prediction, Viking Press, 1978.
[132] W. Reilly, Believable social and emotional agents, Ph.D. Thesis, Computer Science, Carnegie Mellon University, 1996.
[133] S. Restivo, Bringing up and booting up: Social theory and the emergence of socially intelligent robots, in: Proceedings of the IEEE Conference on SMC, 2001.
[134] S. Restivo, Romancing the robots: Social robots and society, in: Proceedings of the Robots as Partners: An Exploration of Social Robots Workshop, International Conference on Intelligent Robots and Systems (IROS 2002), Lausanne, Switzerland, 2002.
[135] A. Sage, Systems Engineering, Wiley, New York, 1992.
[136] A. Samal, P. Iyengar, Automatic recognition and analysis of human faces and facial expressions: A survey, Pattern Recognition 25 (1992).
[137] H. Schlosberg, Three dimensions of emotion, Psychological Review 61 (1954).
[138] J. Searle, Minds, Brains and Science, Harvard University Press, Cambridge, MA, 1984.
[139] B. Scassellati, Investigating models of social development using a humanoid robot, in: B. Webb, T. Consi (Eds.), Biorobotics, MIT Press, Cambridge, MA, 2000.
[140] B. Scassellati, Foundations for a theory of mind for a humanoid robot, Ph.D. Thesis, Department of Electrical Engineering and Computer Science, MIT, Cambridge, MA, 2001.
[141] S. Schaal, Robot learning from demonstration, in: Proceedings of the International Conference on Machine Learning, 1997.
[142] M. Scheeff, et al., Experiences with Sparky: A social robot, in: Proceedings of the Workshop on Interactive Robot Entertainment, 2000.
[143] J. Schulte, et al., Spontaneous, short-term interaction with mobile robots in public places, in: Proceedings of the International Conference on Robotics and Automation, 1999.
[144] K. Severinson-Eklund, A. Green, H. Hüttenrauch, Social and collaborative aspects of interaction with a service robot, Robotics and Autonomous Systems 42 (2003) 223–234 (this issue).
[145] T. Sheridan, Eight ultimate challenges of human–robot communication, in: Proceedings of the International Workshop on Robots and Human Communication, 1997.
[146] C. Smith, H. Scott, A componential approach to the meaning of facial expressions, in: J. Russell, J. Fernandez-Dols (Eds.), The Psychology of Facial Expression, Cambridge University Press, Cambridge, 1997.
[147] R. Smith, S. Eppinger, A predictive model of sequential iteration in engineering design, Management Science 43 (8) (1997).
[148] D. Spiliotopoulos, et al., Human–robot interaction based on spoken natural language dialogue, in: Proceedings of the European Workshop on Service and Humanoid Robots, 2001.
[149] L. Steels, Emergent adaptive lexicons, in: Proceedings of the International Conference on SAB, 1996.
[150] L. Steels, F. Kaplan, AIBO's first words: The social learning of language and meaning, in: H. Gouzoules (Ed.), Evolution of Communications, vol. 4, No. 1, Benjamin, New York, 2001.
[151] L. Steels, Language games for autonomous robots, IEEE Intelligent Systems 16 (5) (2001).
[152] R. Stiefelhagen, J. Yang, A. Waibel, Tracking focus of attention for human–robot communication, in: Proceedings of the International Conference on Humanoid Robots, 2001.
[153] K. Suzuki, et al., Intelligent agent system for human–robot interaction through artificial emotion, in: Proceedings of the IEEE SMC, 1998.
[154] R. Tanawongsuwan, et al., Robust tracking of people by a mobile robotic agent, Technical Report No. GIT-GVU-99-19, Georgia Institute of Technology, 1999.
[155] L. Terveen, An overview of human–computer collaboration, Knowledge-Based Systems 8 (2–3) (1994).
[156] F. Thomas, O. Johnston, Disney Animation: The Illusion of Life, Abbeville Press, 1981.
[157] W. Thorpe, Learning and Instinct in Animals, Methuen, London, 1963.
[158] K. Toyama, Look, Ma—No hands! Hands-free cursor control with real-time 3D face tracking, in: Proceedings of the Workshop on Perceptual User Interfaces, 1998.
[159] R. Vaughan, K. Stoey, G. Sukhatme, M. Mataric, Go ahead, make my day: Robot conflict resolution by aggressive competition, in: Proceedings of the International Conference on SAB, 2000.
[160] J. Velasquez, A computational framework for emotion-based control, in: Proceedings of the Workshop on Grounding Emotions in Adaptive Systems; International Conference on SAB, 1998.
[161] S. Waldherr, R. Romero, S. Thrun, A gesture-based interface for human–robot interaction, Autonomous Robots 9 (2000).
[162] I. Werry, et al., Can social interaction skills be taught by a social agent? The role of a robotic mediator in autism therapy, in: Proceedings of the International Conference on Cognitive Technology, 2001.
[163] A. Whiten, Natural Theories of Mind, Basil Blackwell, Oxford, 1991.
[164] T. Willeke, et al., The history of the mobot museum robot series: An evolutionary study, in: Proceedings of FLAIRS, 2001.
[165] D. Woods, Decomposing automation: Apparent simplicity, real complexity, in: R. Parasuraman, M. Mouloua (Eds.), Automation and Human Performance: Theory and Applications, Erlbaum, London, 1996.
[166] Y. Wu, T. Huang, Vision-based gesture recognition: A review, in: Gesture-Based Communications in HCI, Lecture Notes in Computer Science, vol. 1739, Springer, Berlin, 1999.
[167] G. Xu, et al., Toward robot guidance by hand gestures using monocular vision, in: Proceedings of the Hong Kong Symposium on Robotics Control, 1999.
[168] S. Yoon, et al., Motivation driven learning for interactive synthetic characters, in: Proceedings of the International Conference on Autonomous Agents, 2000.
[169] J. Zlatev, The Epigenesis of Meaning in Human Beings and Possibly in Robots, Lund University Cognitive Studies, vol. 79, Lund University, 1999.

Terrence Fong is a joint postdoctoral fellow at Carnegie Mellon University (CMU) and the Swiss Federal Institute of Technology/Lausanne (EPFL). He received his Ph.D. (2001) in Robotics from CMU. From 1990 to 1994, he worked at the NASA Ames Research Center, where he was co-investigator for virtual environment telerobotic field experiments. His research interests include human–robot interaction, PDA and web-based interfaces, and field mobile robots.

Illah Nourbakhsh is an Assistant Professor of Robotics at Carnegie Mellon University (CMU) and is co-founder of the Toy Robots Initiative at The Robotics Institute. He received his Ph.D. (1996) in computer science from Stanford. He is a founder and chief scientist of Blue Pumpkin Software, Inc. and Mobot, Inc. His current research projects include robot learning, believable robot personality, visual navigation and robot locomotion.

Kerstin Dautenhahn is a Reader in artificial intelligence in the Computer Science Department at University of Hertfordshire, where she also serves as coordinator of the Adaptive Systems Research Group. She received her doctoral degree in natural sciences from the University of Bielefeld. Her research lies in the areas of socially intelligent agents and HCI, including virtual and robotic agents. She has served as guest editor for numerous special journal issues in AI, cybernetics, artificial life, and recently co-edited the book "Socially Intelligent Agents—Creating Relationships with Computers and Robots".