


IUI 2019: Marina del Rey, CA, USA - Companion
- Proceedings of the 24th International Conference on Intelligent User Interfaces: Companion, Marina del Rey, CA, USA, March 16-20, 2019. ACM 2019, ISBN 978-1-4503-6673-1
Demos and Posters
- Homanga Bharadhwaj: Explainable recommender system that maximizes exploration. 1-2
- Benjamin Wagstaff, Chiao Lu, Xiang 'Anthony' Chen: Automatic exam grading by a mobile camera: snap a picture to grade your tests. 3-4
- Yugo Hayashi: A preliminary study on the use of emotional recurrence analysis to identify coordination in collaborative learning. 5-6
- Yi-Chieh Lee, Wai-Tat Fu: Supporting peer assessment in education with conversational agents. 7-8
- Shizuka Shirai, Tetsuo Fukui: Evaluation of intelligent input interface for entering equations on smartphone. 9-10
- Atsutoshi Ikeda, Shinichi Kosugi, Yasuhiro Tanaka: Propagating vibration analysis of leg towards ankle joint angle estimation. 11-12
- Honoka Kakimoto, Yuanyuan Wang, Yukiko Kawai, Kazutoshi Sumiya: Movie trailer analysis based on editing features of movies. 13-14
- Fabio Gasparetti, Luca Maria Aiello, Daniele Quercia: Evaluating the efficacy of traditional fitness tracker recommendations. 15-16
- Zhiqiang Gao, Dawei Liu, Kaizhu Huang, Yi Huang: Mining human activity and smartphone position from motion sensors. 17-18
- Kushal Chawla, Niyati Chhaya, Aman Deep Singh, Soumya Vadlamannati, Aarushi Agrawal: Sequence learning using content and consumption patterns for user path prediction. 19-20
- Saeed Amal, Mustafa Adam, Peter Brusilovsky, Einat Minkov, Tsvi Kuflik: Enhancing explainability of social recommendation using 2D graphs and word cloud visualizations. 21-22
- Mousa Ahmadi, Cristian Borcea, Quentin Jones: Collaborative lifelogging through the integration of machine and human computation. 23-24
- Ryosuke Kawamura, Yuushi Toyoda, Koichiro Niinuma: Engagement estimation based on synchrony of head movements: application to actual e-learning scenarios. 25-26
- Giulia Cosentino, Giulia Leonardi, Mirko Gelsomini, Micol Spitale, Mattia Gianotti, Franca Garzotto, Venanzio Arquilla: GENIEL: an auto-generative intelligent interface to empower learning in a multi-sensory environment. 27-28
- Mirko Gelsomini, Micol Spitale, Eleonora Beccaluva, Leonardo Viola, Franca Garzotto: Reflex: adaptive learning beyond the screen. 29-30
- Amirreza Rouhi, Micol Spitale, Fabio Catania, Giulia Cosentino, Mirko Gelsomini, Franca Garzotto: Emotify: emotional game for children with autism spectrum disorder based on machine learning. 31-32
- Micol Spitale, Fabio Catania, Giulia Cosentino, Mirko Gelsomini, Franca Garzotto: WIYE: building a corpus of children's audio and video recordings with a story-based app. 33-34
- Tanja Schneeberger, Patrick Gebhard, Tobias Baur, Elisabeth André: PARLEY: a transparent virtual social agent training interface. 35-36
- Cheng-Yu Chung, I-Han Hsiao: An exploratory study of augmented embodiment for computational thinking. 37-38
- Xingxin Liu, Xingyu Huang, Yue Wang, Lin Zhang: Point-of-interest category recommendation based on group mobility modeling. 39-40
- Federico Maria Cau, Mattia Samuel Mancosu, Fabrizio Mulas, Paolo Pilloni, Lucio Davide Spano: An interface for explaining the automatic classification of runners' trainings. 41-42
- Jitender Singh Virk, Abhinav Dhall: Garuda: a deep learning based solution for capturing selfies safely. 43-44
- Filippo Andrea Fanni, Angelo Mereu, Martina Senis, Alessandro Tola, Lucio Davide Spano, Fabio Murru, Marco Romoli, Ivan Blecic, Giuseppe Andrea Trunfio: PAC-PAC: intelligent storytelling for point-and-click games on the web. 45-46
- Hyocheol Ro, Yoonjung Park, Jung-Hyun Byun, Tack-Don Han: Mobile device interaction using projector metaphor. 47-48
- Wataru Uno, Ryota Ozaki, Noriji Kato: Identifying useful documents in the workplace from web browsing and working logs. 49-50
- Yoonjung Park, Hyocheol Ro, Junghyun Byun, Tack-Don Han: Adaptive projection augmented reality with object recognition based on deep learning. 51-52
- Bin Chen, Kenji Nakajima, Koki Hatada, Jun'ichi Yura, Masashi Uyama, Keiju Okabayashi: Real-time collaborative system based on distributed data sharing method. 53-54
- Isuru Jayarathne, Michael Cohen, Michael Frishkopf, Gregory Mulyk: Relaxation "sweet spot" exploration in pantophonic musical soundscape using reinforcement learning. 55-56
- Pei Hao Chen, Ryo Shirai, Masanori Hashimoto: Coverage-scalable instant tabletop positioning system with self-localizable anchor nodes. 57-58
- Tooba Ahsen, Fahad R. Dogar: A case for a richer, bi-directional interface between augmented reality applications and the network. 59-60
- Kwan Hui Lim, Shanika Karunasekera, Aaron Harwood, Yasmeen M. George: Geotagging tweets to landmarks using convolutional neural networks with text and posting time. 61-62
- Takumi Nakamura, Kenichi Iida, Etsuko Ueda: Quantification of gracefulness from hand trajectory in classical dance motion. 63-64
- H. Paul Zellweger: A demonstration of automated database application development. 65-66
- Yuan-Yi Fan, Soyoung Shin, Vids Samanta: Evaluating expressiveness of a voice-guided speech re-synthesis system using vocal prosodic parameters. 67-68
- Tianyi Li, Gregorio Convertino, Ranjeet Kumar Tayi, Shima Kazerooni, Gary Patterson: Adding intelligence to a data security analysis system: recommendation and planning support. 69-70
- Tomoyasu Nakano, Yuki Koyama, Masahiro Hamasaki, Masataka Goto: Autocomplete vocal-f0 annotation of songs using musical repetitions. 71-72
- Yoo-Mi Park, Eun-Ji Lim, Shin-Young Ahn, Wan Choi, Taewoo Kim: DL-dashboard: user-friendly deep learning model development environment. 73-74
- Camille Gobert, Kashyap Todi, Gilles Bailly, Antti Oulasvirta: SAM: self-adapting menus on the web. 75-76
- Nina Hagemann, Michael P. O'Mahony, Barry Smyth: Visualising module dependencies in academic recommendations. 77-78
- Yuanyuan Wang, Panote Siriaraya, Yukiko Kawai, Keishi Tajima: A proposal of spatial operators for a collaborative map search system. 79-80
- Jaejun Lee, Raphael Tang, Jimmy Lin: Universal voice-enabled user interfaces using JavaScript. 81-82
- Tanya Bafna, John Paulin Hansen: Cognitive load during gaze typing. 83-84
- Katri Leino, Kashyap Todi, Antti Oulasvirta, Mikko Kurimo: Computer-supported form design using keystroke-level modeling with reinforcement learning. 85-86
- Alessandro Carcangiu, Lucio Davide Spano: Integrating declarative models and HMMs for online gesture recognition. 87-88
- John E. Wenskovitch, Michelle Dowling, Chris North: Simultaneous interaction with dimension reduction and clustering projections. 89-90
- Richard Vogl, Hamid Eghbal-Zadeh, Peter Knees: An automatic drum machine with touch UI based on a generative neural network. 91-92
- Xiumin Shang, Marcelo Kallmann, Ahmed Sabbir Arif: Effects of correctness and suggestive feedback on learning with an autonomous virtual trainer. 93-94
- Jamie Sanders, Aqueasha Martin-Hammond: Exploring autonomy in the design of an intelligent health assistant for older adults. 95-96
- Phillip Odom, Raymond Hebard, Stephen Lee-Urban: HuManIC: human machine interpretive control. 97-98
- Behnam Rahdari, Peter Brusilovsky: User-controlled hybrid recommendation for academic papers. 99-100
- Qiaoyu Zheng, Katia Vega: Landscape-freestyle: restyling site plans for landscape architecture with machine learning. 101-102
- Jordan Barria-Pineda, Peter Brusilovsky: Explaining educational recommendations through a concept-level knowledge visualization. 103-104
- Steven R. Rick, Shubha Bhaskaran, Yajie Sun, Sarah McEwen, Nadir Weibel: NeuroPose: geriatric rehabilitation in the home using a webcam and pose estimation. 105-106
- Steven R. Rick, Aaron Paul Goldberg, Nadir Weibel: SleepBot: encouraging sleep hygiene using an intelligent chatbot. 107-108
- Tetsuya Matsui, Seiji Yamada: The effect of subjective speech on product recommendation virtual agent. 109-110
- Daniel Garijo, Deborah Khider, Varun Ratnakar, Yolanda Gil, Ewa Deelman, Rafael Ferreira da Silva, Craig A. Knoblock, Yao-Yi Chiang, Minh Pham, Jay Pujara, Binh Vu, Dan Feldman, Rajiv Mayani, Kelly M. Cobourn, Christopher J. Duffy, Armen R. Kemanian, Lele Shu, Vipin Kumar, Ankush Khandelwal, Kshitij Tayal, Scott D. Peckham, Maria Stoica, Anna Dabrowski, Daniel Hardesty-Lewis, Suzanne A. Pierce: An intelligent interface for integrating climate, hydrology, agriculture, and socioeconomic models. 111-112
- Minori Narita, Takeo Igarashi: Programming-by-example for data transformation to improve machine learning performance. 113-114
- Konrad Zielinski, Ryszard Szamburski, Adrianna Biernacka, Joanna Raczaszek-Leonardi: Field study as a method to assess effectiveness of post-laryngectomy communication assistive interfaces. 115-116
- Miriam L. Boon, Larry Birnbaum: Identifying "horse race" stories in election news. 117-118
- Ivan Giangreco, Loris Sauter, Mahnaz Amiri Parian, Ralph Gasser, Silvan Heller, Luca Rossetto, Heiko Schuldt: VIRTUE: a virtual reality museum experience. 119-120
- Balaji Vasan Srinivasan, Vishwa Vinay, Niyati Chhaya: Metrics based content layouting. 121-122
- Michelle X. Zhou, Wenxi Chen, Ziang Xiao, Huahai Yang, Tracy Chi, Ransom Williams: Getting virtually personal: chatbots who actively listen to you and infer your personality. 123-124
Workshops and Tutorials
- Brian Y. Lim, Advait Sarkar, Alison Smith-Renner, Simone Stumpf: ExSS: explainable smart systems 2019. 125-126
- Mark P. Graus, Bruce Ferwerda, Marko Tkalcic, Panagiotis Germanakos: Third workshop on theory-informed user modeling for tailoring and personalizing interfaces (HUMANIZE): preface. 127-128
- Styliani Kleanthous, Tsvi Kuflik, Jahna Otterbacher, Alan Hartman, Casey Dugan, Veronika Bogina: Intelligent user interfaces for algorithmic transparency in emerging technologies. 129-130
- Johanne Christensen, Juhee Bae, Benjamin Watson, Kartik Talamadupula, Josef B. Spjut, Stacy Joines: UIBK: user interactions for building knowledge. 131-132
- Q. Vera Liao, Michal Shmueli-Scheuer, Tsung-Hsien (Shawn) Wen, Zhou Yu: User-aware conversational agents. 133-134
- Peter Knees, Markus Schedl, Rebecca Fiebrink: Intelligent music interfaces for listening and creation. 135-136
- Shoko Wakamiya, Adam Jatowt, Yukiko Kawai, Toyokazu Akiyama, Ricardo Campos, Zhenglu Yang: Second workshop on user interfaces for spatial and temporal data analysis (UISTDA2019). 137-138
- Bart P. Knijnenburg, Paritosh Bahirat, Yangyang He, Martijn C. Willemsen, Qizhang Sun, Alfred Kobsa: IUIoT: intelligent user interfaces for IoT. 139-140
- Dorota Glowacka, Evangelos E. Milios, Axel J. Soto, Fernando Vieira Paulovich, Denis Parra, Osnat Mokryn: Third workshop on exploratory search and interactive data analytics (ESIDA). 141-142
Student Consortium
- Toby Jia-Jun Li: End user programming of intelligent agents using demonstrations and natural language instructions. 143-144
- James Simpson: How is Siri different than GUIs? 145-146
- Martijn Millecamp, Katrien Verbert: Personal user interfaces for recommender systems. 147-148
- Kai Holländer: A pedestrian perspective on autonomous vehicles. 149-150
- Micol Spitale: Training self-sufficiency and social skills with embodied conversational agent for children with autism. 151-152
- Fabio Catania: Conversational technology and affective computing for cognitive disability. 153-154
- Mohammad Rafayet Ali: Online virtual standardized patient for communication skills training. 155-156
- Yangyang He: Recommending privacy settings for IoT. 157-158
- Chelsea M. Myers: Adaptive suggestions to increase learnability for voice user interfaces. 159-160
- Matthew Bonham: Augmented reality simulation toward improving therapeutic healthcare communication techniques. 161-162
- Konrad Zielinski, Ryszard Szamburski, Ewa Machnacz: Post-laryngectomy interaction restoration system. 163-164
- Chris Kim: A modular framework for collaborative multimodal annotation and visualization. 165-166
