


HAI 2014: Tsukuba, Japan
- Hideaki Kuzuoka, Tetsuo Ono, Michita Imai, James E. Young:
Proceedings of the second international conference on Human-agent interaction, HAI '14, Tsukuba, Japan, October 29-31, 2014. ACM 2014, ISBN 978-1-4503-3035-0
Workshops
- Michita Imai, Tetsuo Ono, Kazushi Nishimoto:
AS 2014: workshop on augmented sociality and interactive technology. 1
- Michita Imai:
CID 2014: workshop on cognitive interaction design. 3
- Masahide Yuasa, Kazuki Kobayashi, Takahiro Tanaka, Daisuke Katagami:
IWME 2014: first international workshop on mood engineering. 5
Keynote talk I
- Ellen Yi-Luen Do:
Creative design computing for happy healthy living. 7-8
Agents for support and learning
- Irini Giannopulu, Valérie Montreynaud, Tomio Watanabe:
PEKOPPA: a minimalistic toy robot to analyse a listener-speaker situation in neurotypical and autistic children aged 6 years. 9-16
- Masahiro Shiomi, Takamasa Iio, Koji Kamei, Chandraprakash Sharma, Norihiro Hagita:
User-friendly autonomous wheelchair for elderly care using ubiquitous network robot platform. 17-22
- Hideaki Kuzuoka, Naomi Yamashita, Hiroshi Kato, Hideyuki Suzuki, Yoshihiko Kubota:
Tangible earth: tangible learning environment for astronomy education. 23-27
- Tamami Saga, Nagisa Munekata, Tetsuo Ono:
Daily support robots that move on the body. 29-34
- Hirotake Yamazoe, Tomoko Yonezawa:
Simplification of wearable message robot with physical contact for elderly's outing support. 35-38
Novel interaction techniques
- Nico Li, Stephen Cartwright, Ehud Sharlin, Mario Costa Sousa:
Ningyo of the CAVE: robots as social puppets of static infrastructure. 39-44
- Naoto Yoshida, Miyuki Yano, Tomoko Yonezawa:
Personal and interactive newscaster agent based on estimation of user's understanding. 45-50
- Hirotaka Osawa:
Emotional cyborg: complementing emotional labor with human-agent interaction technology. 51-57
- John Harris, Stephanie Law, Kazuki Takashima, Ehud Sharlin, Yoshifumi Kitamura:
Calamaro: perceiving robotic motion in the wild. 59-66
Telepresence and teleoperation
- Akira Hayamizu, Michita Imai, Keisuke Nakamura, Kazuhiro Nakadai:
Volume adaptation and visualization by modeling the volume level in noisy environments for telepresence system. 67-74
- Elham Saadatian, Thoriq Salafi, Hooman Samani, Yu De Lim, Ryohei Nakatsu:
An affective telepresence system using smartphone high level sensing and intelligent behavior generation. 75-82
- Tsuyoshi Komatsubara, Masahiro Shiomi, Takayuki Kanda, Hiroshi Ishiguro, Norihiro Hagita:
Can a social robot help children's understanding of science in classrooms? 83-90
- Christian Becker-Asano, Kai Oliver Arras, Bernhard Nebel:
Robotic tele-presence with DARYL in the wild. 91-95
- Tsunehiro Arimoto, Yuichiro Yoshikawa, Hiroshi Ishiguro:
Nodding responses by collective proxy robots for enhancing social telepresence. 97-102
Keynote talk II
- Jane Yung-jen Hsu:
Crowdsourcing agents for smart IoT. 103
Interactive session
- Kazunori Terada, Yuto Imamura, Hideyuki Takahashi, Akira Ito:
A fixed pattern deviation robot that triggers intention attribution. 105-107
- Takakazu Mizuki, Akira Ito, Kazunori Terada:
The sharing of meta-signals and protocols is the first step for the emergence of cooperative communication. 109-112
- Celestino Alvarez, Lucía Fernández Cossío:
FIONA: a platform for embodied cognitive agents. 113-116
- Mutsuo Sano, Yuka Kanemoto, Syogo Noda, Kenzaburo Miyawaki, Nami Fukutome:
A cooking assistant robot using intuitive onomatopoetic expressions and joint attention. 117-120
- Syed Khursheed Hasnain, Ghilès Mostafaoui, Caroline Grand, Philippe Gaussier:
Synchrony based side by side walking: an application in human-robot interactions. 121-124
- Hirotsugu Minowa:
Image recognition method which measures angular velocity from a back of hand for developing a valve UI. 125-128
- Ryota Nishimura, Daisuke Yamamoto, Takahiro Uchiya, Ichi Takumi:
Development of a dialogue scenario editor on a web browser for a spoken dialogue system. 129-132
- Marie Uemura, Keiko Yamamoto, Itaru Kuramoto, Yoshihiro Tsujino:
Notification design using mother-like expressions. 133-136
- Hyewon Lee, Jung-Ju Choi, Sonya S. Kwak:
Will you follow the robot's advice?: the impact of robot types and task types on people's perception of a robot. 137-140
- Shinya Kiriyama, Kenichi Shibata, Shogo Ishikawa, Kei Ogawa, Harunobu Nukushina, Yoichi Takebayashi:
Multimodal bodily feeling analysis to design air conditioning services for elderly people. 141-144
- Yuri Kumahara, Yoshikazu Mori:
Portable robot inspiring walking in elderly people. 145-148
- Chaehyun Baek, Jung-Ju Choi, Sonya S. Kwak:
Can you touch me?: the impact of physical contact on emotional engagement with a robot. 149-152
- Takamasa Iio, Masahiro Shiomi, Koji Kamei, Chandraprakash Sharma, Norihiro Hagita:
Social acceptance by elderly people of a fall-detection system with range sensors in a nursing home. 153-156
- Masahiro Shiomi, Norihiro Hagita:
Preliminary investigation of supporting child-care at an intelligent playroom. 157-160
- Yongyao Yan, Greg S. Ruthenbeck, Karen J. Reynolds:
Recovery of virtual object contact surface features for replaying haptic feeling. 161-164
- Kasumi Abe, Chie Hieida, Muhammad Attamimi, Takayuki Nagai, Takayuki Shimotomai, Takashi Omori, Natsuki Oka:
Toward playmate robots that can play with children considering personality. 165-168
- Takahiro Matsumoto, Shunichi Seko, Ryousuke Aoki, Akihiro Miyata, Tomoki Watanabe, Tomohiro Yamada:
Affective agents for enhancing emotional experience. 169-172
- Christian Becker-Asano, Eduardo Meneses, Nicolas Riesterer, Julien Hué, Christian Dornhege, Bernhard Nebel:
The hybrid agent MARCO: a multimodal autonomous robotic chess opponent. 173-176
- Wu Jhong Ren, Hooman Samani:
Artificial endocrine system for language translation robot. 177-180
- Ren Ohmura, Yuki Kusano, Yuta Suzuki:
Pointing gesture prediction using minimum-jerk model in human-robot interaction. 181-184
- Yukako Watanabe, Yoshiko Okada, Hirotaka Osawa, Midori Sugaya:
Digital play therapy for children with learning disabilities. 185-188
- Koushi Mitarai, Hiroyuki Umemuro:
Amae and agency appraisal as japanese emotional behavior: influences on agent's believability. 189-192
- Oskar Palinko, Alessandra Sciutti, Francesco Rea, Giulio Sandini:
Weight-aware robot motion planning for lift-to-pass action. 193-196
- Jie Sun:
Emotion recognition and expression in therapeutic social robot design. 197-200
- Akira Matsuda, Midori Sugaya, Hiroyuki Nakamura:
Luminous device for the deaf and hard of hearing people. 201-204
- Yu Kobayashi, Hirotaka Osawa, Michimasa Inaba, Kousuke Shinoda, Fujio Toriumi, Daisuke Katagami:
Development of werewolf match system for human players mediated with lifelike agents. 205-207
- Elham Saadatian, Reihaneh Hosseinzade Hariri, Adrian David Cheok, Ryohei Nakatsu:
Development of smart infant-parents affective telepresence system. 209-212
- Yasutaka Takeda, Kohei Yoshida, Shotaro Baba, P. Ravindra De Silva, Michio Okada:
COLUMN: persuasion as a social mediator to establish the interpersonal coordination. 213-216
- Oskar Palinko, Alessandra Sciutti, Francesco Rea, Giulio Sandini:
Towards better eye tracking in human robot interaction using an affordable active vision system. 217-220
- Yutaka Ishii, Tomio Watanabe:
Evaluation of a video communication system with speech-driven embodied entrainment audience characters with partner's face. 221-224
- Andreas Kipp, Franz Kummert:
Dynamic dialog system for human robot collaboration: playing a game of pairs. 225-228
- Takashi Ichijo, Nagisa Munekata, Tetsuo Ono:
Unification of demonstrative pronouns in a small group guided by a robot. 229-232
- Ritta Baddoura, Gentiane Venture, Guillaume Gibert:
Evaluating an intuitive teleoperation platform explored in a long-distance interview. 233-236
- Shochi Otogi, Hung-Hsuan Huang, Ryo Hotta, Kyoji Kawagoe:
Analysis of personality traits for intervention scene detection in multi-user conversation. 237-240
- Masahide Yuasa:
A design method using cooperative principle for conversational agent. 241-244
- Yuichiro Tsuji, Ami Tsukamoto, Takashi Uchida, Yusuke Hattori, Ryosuke Nishida, Chie Fukada, Motoyuki Ozeki, Takashi Omori, Takayuki Nagai, Natsuki Oka:
Experimental study of empathy and its behavioral indices in human-robot interaction. 245-248
- Junya Nakanishi, Hidenobu Sumioka, Masahiro Shiomi, Daisuke Nakamichi, Kurima Sakai, Hiroshi Ishiguro:
Huggable communication medium encourages listening to others. 249-252
- Takahisa Tani, Seiji Yamada:
Tap model to improve input accuracy of touch panels. 253-256
- Kensuke Miyamoto, Hiroaki Yoshioka, Norifumi Watanabe, Yoshiyasu Takefuji:
Modeling of cooperative behavior agent based on collision avoidance decision process. 257-260
- Ken Yonezawa, Hirotada Ueda:
Representation of gaze, mood, and emotion: movie-watching with telepresence robots. 261-264
- Hyunsoek Choi, Hyeyoung Park:
A hierarchical structure for gesture recognition using RGB-D sensor. 265-268
- Oliver Damm, Britta Wrede:
Communicating emotions: a model for natural emotions in HRI. 269-272
- Hideyuki Takahashi, Nobutsuna Endo, Hiroki Yokoyama, Takato Horii, Tomoyo Morita, Minoru Asada:
How does emphatic emotion emerge via human-robot rhythmic interaction? 273-276
- Takashi Yoshino, Yuki Hayashi, Yukiko I. Nakano:
Determining robot gaze according to participation roles in multiparty conversations. 277-280
- Takayuki Todo, Takanari Miisho:
Interactions on eyeballs of humanoid-robots. 281-283
- Gil-Jin Jang, Ahra Jo, Jeong-Sik Park:
Video-based emotion identification using face alignment and support vector machines. 285-286
- Angie Lorena Marin Mejia:
Social networking sites photos and robots: a pilot research on facebook photo albums and robotics interfaces for older adults. 287-291
- Komei Hasegawa, Yasushi Nakauchi:
Telepresence robot that exaggerates non-verbal cues for taking turns in multi-party teleconferences. 293-296
- Taewoong Kim, Minho Lee:
Emotional scene understanding based on acoustic signals using adaptive neuro-fuzzy inference system. 297-300
Techniques and strategies for developing agents
- Taichi Sono, Toshihiro Osumi, Michita Imai:
SB simulator: a method to estimate how relation develops. 301-307
- Alaeddine Mihoub, Gérard Bailly, Christian Wolf:
Modeling perception-action loops: comparing sequential models with frame-based classifiers. 309-314
- Daniel J. Rea, Takeo Igarashi, James Everett Young:
PaintBoard: prototyping interactive character behaviors by digitally painting storyboards. 315-322
- Daisuke Yamamoto, Keiichiro Oura, Ryota Nishimura, Takahiro Uchiya, Akinobu Lee, Ichi Takumi, Keiichi Tokuda:
Voice interaction system with 3D-CG virtual agent for stand-alone smartphones. 323-330
Social interaction strategies for agents
- Yoshito Ogawa, Kouki Miyazawa, Hideaki Kikuchi:
Assigning a personality to a spoken dialogue agent through self-disclosure of behavior. 331-337
- Leigh Michael Harry Clark, Khaled Bachour, Abdulmalik Ofemile, Svenja Adolphs, Tom Rodden:
Potential of imprecision: exploring vague language in agent instructors. 339-344
- Jun Kato, Daisuke Sakamoto, Takeo Igarashi, Masataka Goto:
Sharedo: to-do list interface for human-agent task sharing. 345-351
- Jekaterina Novikova, Leon Adam Watts:
A design model of emotional body expressions in non-humanoid robots. 353-360
- Raphaela Gehle, Karola Pitsch, Sebastian Wrede:
Signaling trouble in robot-to-group interaction. Emerging visitor dynamics with a museum guide robot. 361-368
Keynote talk III
- Takeo Igarashi:
Design everything by yourself. 369
Understanding users
- David J. Atkinson, Micah Henry Clark:
Methodology for study of human-robot social interaction in dangerous situations. 371-376
- Masayuki Nakane, James Everett Young, Neil D. B. Bruce:
More human than human?: a visual processing approach to exploring believability of android faces. 377-381
- Tatsuya Nomura, Takayuki Kanda:
Differences of expectation of rapport with robots dependent on situations. 383-389
- Takafumi Sakamoto, Yugo Takeuchi:
Stage of subconscious interaction in embodied interaction. 391-396
