Face Emotion Based Music Detection System
Prof. Subhash Rathod1, Akshay Talekar2, Swapnil Patil3, Prajakta Gaikwad4, Ruchita Gunjal5
Department of Computer Engineering, Marathwada Mitramandal’s Institute of Technology, Lohgaon, Pune - 411047, India1,2,3,4,5
subhash.rathod@mmit.edu.in1, talekar.akshay163@gmail.com2, pswapnil924@gmail.com3, gaikwadpraju1998@gmail.com4, ruchita.gunjal@mmit.edu.in5
------------------------------------------------------ *** --------------------------------------------------
Abstract: - A novel approach that provides the user with an automatically generated playlist of songs based on the user's mood. Music plays a very important role in daily life and in modern technology, and the difficulty of creating large playlists can be overcome here: the music player itself selects songs according to the user's current mood. An automatic facial expression recognition system needs to solve the following problems: detection and location of faces in a cluttered scene, facial feature extraction, and facial expression classification. The application then returns songs whose mood matches the user's emotion. The experimental results show that the proposed approach is able to classify the happy emotion precisely.

Keywords: - Audio Emotion Recognition, Music Information Retrieval, Extraction Module, Audio Feature Extraction Module, CNN, Confusion Matrix, Cluster Music.

--------------------------------***------------------------------------

I. INTRODUCTION

Unlike other approaches, this novel approach removes the burden the user faces of manually browsing a music collection: an efficient and accurate algorithm generates a playlist based on the current emotional state and behaviour of the user. Introducing emotion recognition and music information retrieval into traditional music players allows the playlist to be parsed automatically into various classes of emotions and moods. Facial expression is the most ancient and natural way of expressing feelings, emotions and mood, and its algorithms require less computation, time, and cost.

1.1 Motivation

Our motivation in this work is to use emotion recognition techniques with wearable computing devices to generate additional inputs for a music recommendation system's algorithm, and to enhance the accuracy of the resulting music recommendations. In our previous work, we studied emotion recognition from GSR signals only. In this study we enrich these signals with PPG and propose a data-fusion-based emotion recognition method for music recommendation engines. The proposed wearable-attached music recommendation framework utilizes not only the user's demographics but also his/her emotional state at the time of recommendation. Using GSR and PPG signals we have obtained promising results for emotion prediction.

1.2 Objectives and goals

1. To implement Convolutional Neural Networks for classification of facial expressions.
2. To build a music player that recommends songs based on the user's facial expression.
3. To generate a playlist automatically from extracted facial expressions, thereby reducing effort and time.

II RELATED WORK AND LITERATURE SURVEY

“Emotion Based Music Player”

Nowadays, people tend to have more and more stress because of the bad economy, high living expenses, etc. Listening to music is a key activity that helps reduce stress. However, it may be unhelpful if the music does not suit the current emotion of the listener. Moreover, there is no music player which is
WWW.IJASRET.COM 28
Volume 4 || Special Issue 4 || ICCEME 2019-2020 || ISSN (Online) 2456-0774
INTERNATIONAL JOURNAL OF ADVANCE SCIENTIFIC RESEARCH
AND ENGINEERING TRENDS
able to select songs based on the user's emotion. To solve this problem, this paper proposes an emotion-based music player, which is able to suggest songs based on the user's emotions: sad, happy, neutral and angry.

“An Intelligent Music Player based on Emotion Recognition”

This paper proposes an intelligent agent that sorts a music collection based on the emotions conveyed by each song, and then suggests an appropriate playlist to the user based on his/her current mood. The user's local music collection is initially clustered based on the emotion each song conveys, i.e. the mood of the song. This is calculated taking into consideration the lyrics of the song as well as the melody, and the clusters are consulted every time the user wishes to generate a mood-based playlist.

“Emotion Based Music System”

The human face is an important part of an individual's body, and it plays an especially important role in extracting an individual's behaviour and emotional state. Manually segregating a list of songs and generating an appropriate playlist based on an individual's emotional features is a very tedious, time-consuming and labour-intensive task. Various algorithms have been proposed and developed to automate this process. However, the existing algorithms in use are computationally slow, less accurate, and sometimes even require additional hardware such as sensors. The proposed system, based on extracted facial expressions, generates a playlist automatically, thereby reducing the effort and time involved in rendering the process manually.

“Emotion Based Music Player – Xbeats”

This paper showcases the development of an Android application named X Beats, which acts as a music player using image processing fundamentals to capture, analyse and present music according to the emotion or mood of the user. The application was developed using the Android SDK, and OpenCV was used to implement the facial recognition algorithms and cascades.

“Emotion Based Music Recommendation System Using Wearable Physiological Sensors”

Most existing music recommendation systems use collaborative or content-based recommendation engines. However, the music choice of a user depends not only on historical preferences or music content but also on the mood of that user. This paper proposes an emotion-based music recommendation framework that learns the emotion of a user from signals obtained via wearable physiological sensors. In particular, the emotion of a user is classified by a wearable computing device integrated with galvanic skin response (GSR) and photoplethysmography (PPG) sensors. This emotion information is fed to any collaborative or content-based recommendation engine as supplementary data.

III MATHEMATICAL MODULE

Where,
Q = User entered input
CB = Check face
C = Face emotion detect
PR = Recommend
UB = Recommend music

Set Theory

1) Let S be a system whose input is an image:

S = {In, P, Op, Φ}

2) Identify Input In as:

In = {Q}

Where,
Q = User entered input image (dataset)

3) Identify Process P as:

P = {CB, C, PR}

Where,
CB = System checks face
C = Face emotion detect
PR = Pre-process request

4) Identify Output Op as:

Op = {UB}
Where,
UB = Update Result

for generation of an automated playlist, thereby increasing the total cost incurred.
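The set-theoretic model above (S = {In, P, Op}, with In = {Q}, P = {CB, C, PR} and Op = {UB}) can be sketched as a minimal pipeline. The stage functions and the emotion-to-playlist catalogue below are illustrative stubs for exposition, not the paper's implementation; the song names and the dictionary-based input Q are assumptions.

```python
# A minimal sketch of S = {In, P, Op}: each stage of P is a stub function.

def cb_check_face(q):
    """CB: verify that the input image contains a face (stubbed as a flag)."""
    return q.get("has_face", False)

def c_detect_emotion(q):
    """C: classify the facial emotion (stubbed as a precomputed label)."""
    return q.get("emotion", "neutral")

def pr_recommend(emotion):
    """PR: map the detected emotion to a playlist (assumed catalogue)."""
    catalogue = {
        "happy": ["upbeat_song_1", "upbeat_song_2"],
        "sad": ["soft_song_1"],
        "neutral": ["ambient_song_1"],
        "angry": ["calming_song_1"],
    }
    return catalogue.get(emotion, [])

def system_s(q):
    """S: In = {Q} -> P = {CB, C, PR} -> Op = {UB}."""
    if not cb_check_face(q):          # CB: no face found,
        return {"UB": []}             # so nothing to recommend
    emotion = c_detect_emotion(q)     # C: emotion label
    playlist = pr_recommend(emotion)  # PR: playlist for that emotion
    return {"UB": playlist}           # UB: updated result for the user

result = system_s({"has_face": True, "emotion": "happy"})
print(result)  # {'UB': ['upbeat_song_1', 'upbeat_song_2']}
```

In a full system, CB and C would be replaced by a face detector and the trained expression classifier, while PR could query any recommendation engine.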
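As a closing illustration of Objective 1 in Section 1.2 (Convolutional Neural Networks for facial expression classification), the forward pass of a tiny CNN classifier can be sketched in plain NumPy. The 48×48 grayscale input, the single 3×3 filter, and the four emotion classes are illustrative assumptions; this untrained sketch only shows the conv → ReLU → pool → dense → softmax structure, not the network trained in this work.

```python
import numpy as np

# Emotion classes assumed for illustration (the four the paper detects).
EMOTIONS = ["happy", "sad", "neutral", "angry"]

def conv2d(img, kernel):
    """Valid 2-D convolution of a grayscale image with a square kernel."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(x, size=2):
    """Non-overlapping max pooling that halves each spatial dimension."""
    h2, w2 = x.shape[0] // size, x.shape[1] // size
    return x[:h2 * size, :w2 * size].reshape(h2, size, w2, size).max(axis=(1, 3))

def softmax(z):
    """Numerically stable softmax over the class logits."""
    e = np.exp(z - z.max())
    return e / e.sum()

def classify_expression(face, kernel, weights, bias):
    """Forward pass: conv -> ReLU -> pool -> flatten -> dense -> softmax."""
    feat = np.maximum(conv2d(face, kernel), 0.0)  # conv + ReLU
    feat = max_pool(feat)                         # downsample 46x46 -> 23x23
    logits = weights @ feat.ravel() + bias        # dense layer
    return softmax(logits)                        # class probabilities

rng = np.random.default_rng(0)
face = rng.random((48, 48))                 # stand-in for a 48x48 face crop
kernel = rng.standard_normal((3, 3)) * 0.1  # one untrained 3x3 filter
pooled_dim = ((48 - 3 + 1) // 2) ** 2       # 23 * 23 = 529 pooled features
weights = rng.standard_normal((len(EMOTIONS), pooled_dim)) * 0.01
bias = np.zeros(len(EMOTIONS))

probs = classify_expression(face, kernel, weights, bias)
print(EMOTIONS[int(np.argmax(probs))])
```

A real implementation would stack several learned filter banks and train the weights on a labelled facial-expression dataset; the structure of the forward pass stays the same.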