Abstract
Music has become an integral part of human life. It serves as a form of entertainment at social gatherings, where different playlists are played to energize, uplift, and comfort listeners. If most of the people in a group dislike the songs being played, the energy among them can be lost. In this study, a musical companion robot was developed to interact with the audience through its rhythmic behaviors and through song selections made by the robot based on the mood of the audience, using neural networks and emotion recognition from facial expressions. The study demonstrated high levels of user acceptance based on the feedback received from a sample of 15 volunteers in the age range of 21–25 years. Although the robot's interaction through its rhythmic behaviors was a success, the song selection algorithm fell below the level of acceptance at the startup of the robot. However, as the data demonstrate, the robot was able to accomplish the task of engaging the audience through its appearance and behaviors.
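The abstract describes selecting songs from the audience's mood as inferred by facial emotion recognition. The paper does not give the mapping it uses, but the idea can be sketched as a majority vote over per-face emotion labels mapped to coarse moods, each mood tied to a playlist. All labels, mood categories, and playlist names below are illustrative assumptions, not the authors' implementation:

```python
from collections import Counter

# Hypothetical mapping from detected facial emotions to coarse moods
# (labels are illustrative; the paper's actual categories may differ).
EMOTION_TO_MOOD = {
    "happy": "energetic",
    "surprise": "energetic",
    "neutral": "calm",
    "sad": "comforting",
    "angry": "comforting",
}

# Hypothetical mood-to-playlist assignment.
MOOD_TO_PLAYLIST = {
    "energetic": "upbeat_tracks",
    "calm": "ambient_tracks",
    "comforting": "soft_tracks",
}

def select_playlist(detected_emotions):
    """Pick a playlist from the majority mood across all detected faces."""
    moods = [EMOTION_TO_MOOD.get(e, "calm") for e in detected_emotions]
    majority_mood, _count = Counter(moods).most_common(1)[0]
    return MOOD_TO_PLAYLIST[majority_mood]

# Example: three detected faces, two reading as happy.
print(select_playlist(["happy", "sad", "happy"]))  # → upbeat_tracks
```

In practice the per-face emotion labels would come from a facial-expression classifier (the paper cites the AFFDEX SDK); this sketch only shows the aggregation step from labels to a track choice.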
© 2020 Springer Nature Switzerland AG
Cite this paper
Hansika, W.K.N., Nanayakkara, L.Y., Gammanpila, A., de Silva, R. (2020). AuDimo: A Musical Companion Robot to Switching Audio Tracks by Recognizing the Users Engagement. In: Stephanidis, C., Kurosu, M., Degen, H., Reinerman-Jones, L. (eds.) HCI International 2020 - Late Breaking Papers: Multimodality and Intelligence. HCII 2020. Lecture Notes in Computer Science, vol. 12424. Springer, Cham. https://doi.org/10.1007/978-3-030-60117-1_7
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-60116-4
Online ISBN: 978-3-030-60117-1