
International Journal of Scientific Research in Engineering and Management (IJSREM)

Volume: 08 Issue: 04 | April - 2024 SJIF Rating: 8.448 ISSN: 2582-3930

Emotion based Music Recommendation System

Mihir Brahmane, Suraj Jawalkar, Amit Narute, Mayuri Jagtap

Guide: Prof. Vishal Nayakwadi
Department of AI&DS Engineering,
Zeal College of Engineering & Research,
Narhe, Pune

Abstract: The human face plays a significant part in revealing a person's mood. The required information is extracted from the human face directly using a camera. One use of this information is to infer a person's mood, which can then be used to produce a list of songs that match it. This eliminates the tedious and repetitive task of manually sorting songs into different lists and helps generate a suitable playlist based on an individual's emotional state. The goal of the Facial Expression-Based Music Player is to fetch this data and interpret it before building a playlist with the given properties. Accordingly, our proposed system centers on identifying human emotions to create an emotion-based music player. We cover the techniques available music players use to detect emotions, the approach our player follows to recognize them, and why our system is better suited for emotion detection. A brief account of the system's operation, playlist generation, and emotion classification is given.

Keywords: TensorFlow & Keras, MediaPipe, Convolutional Neural Networks, Deep Learning, StreamLit RTC, Holistic.

1. INTRODUCTION

A new possibility in the field of music information retrieval is for computers to automatically analyze and comprehend music. Because music content is so diverse and rich, numerous research topics in this field are pursued by researchers from computer science, digital signal processing, mathematics, and statistics applied to musicology. Recent developments in music information retrieval include automatic genre/mood classification, music similarity computation, artist identification, audio-to-score alignment, query-by-singing/humming, and so on. Content-based music recommendation is one of the most feasible applications, and from contextual data we can achieve smarter context-based recommendations. Multidisciplinary efforts such as emotion representation, emotion identification/recognition, feature-based classification, and content-based recommendation are all required for a successful content-based music recommendation system. Music taxonomy has been described effectively using emotion descriptors. One assumption of emotion representation is that an emotion can be viewed as a set of continuous quantities mapped to a set of real numbers. A circumplex model, in which each affect is placed over two bipolar dimensions, was proposed by researchers as a pioneering effort to describe human emotions: the two dimensions are pleasantness (pleasant-unpleasant) and arousal, so each affect word can be defined as a particular combination of pleasure and arousal. Later, another researcher adapted Russell's model to music. "Arousal" and "valence" are the two primary dimensions in Thayer's model: emotion terms range from calm to energetic along the arousal axis, and from negative to positive along the valence axis. With Thayer's model, the two-dimensional emotion plane can be divided into four quadrants with eleven emotion adjectives placed over them. Separately, Xiang et al. proposed a "mental state transition network" for describing changes in human emotion, in which test data is used to estimate the probability of each transition between two states; however, emotions such as nervous and excited are not considered. Automatic emotion detection and recognition in music is expanding rapidly thanks to advances in digital signal processing and a variety of efficient feature-extraction techniques. Many other applications, such as music entertainment and human-computer interaction systems, may benefit greatly from emotion detection and recognition. Feng introduced the earliest exploration of emotion recognition in music. Several deep learning models are used to build this mood-based song recommendation system.
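The division of Thayer's valence-arousal plane into four quadrants can be sketched as a small illustration. The thresholds at the axis origin and the quadrant labels are illustrative assumptions, not Thayer's eleven adjectives:

```python
def thayer_quadrant(valence: float, arousal: float) -> str:
    """Map a (valence, arousal) pair in [-1, 1]^2 to one of the four
    quadrants of Thayer's two-dimensional emotion plane."""
    if valence >= 0:
        return "happy/energetic" if arousal >= 0 else "calm/content"
    return "angry/anxious" if arousal >= 0 else "sad/depressed"

# A pleasant, arousing affect lands in the positive-valence,
# high-arousal quadrant:
print(thayer_quadrant(0.7, 0.8))    # happy/energetic
print(thayer_quadrant(-0.5, -0.3))  # sad/depressed
```

This is the same mapping the recommender relies on when it turns a detected emotion into a mood category for songs.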

© 2024, IJSREM | www.ijsrem.com



2. LITERATURE SURVEY

Renuka R. Londhe et al. proposed a paper focused on studying changes in the curvatures of the face and the intensities of the corresponding pixels. The authors classified the emotions using Artificial Neural Networks (ANN) and also proposed different approaches for playlist generation. Zheng et al. proposed two key categories for facial feature extraction: appearance-based feature extraction and geometry-based feature extraction, the latter involving extraction of a few basic landmarks of the face such as the mouth, eyes, and eyebrows. The system determines the user's mood from facial expressions. People often express their feelings through expressions, hand gestures, and raised tone of voice, but most commonly through the face. An emotion-based music player reduces the time demanded of the user: people usually have a large number of songs in their playlists, and playing songs at random does not satisfy the user's mood. This system helps users play songs automatically according to their state of mind. The user's image is captured by a webcam and saved; the images are first converted from RGB to binary form, and this way of representing the data is known as a feature-point detection method. The process can also be done using the Haar Cascade technology provided by OpenCV. The music player is written in Java; it manages the database and plays songs according to the mood of the user. Zeng et al. surveyed advances in human affect recognition, concentrating on various strategies for handling recordings of affective states in either audio or visual form. The paper gives a detailed survey of audio-visual computing techniques. Affect is described as a model of emotion classes that includes joy, sadness, fear, anger, disgust, and surprise. The issues surrounding the development of an automatic, spontaneous affect recognizer were the primary focus of the paper, which also identified several issues that have been missed or avoided in uni-modal posed emotion recognition. Parul Tambe et al. proposed an idea that automated the interactions between users and the music player, learned all of a user's preferences, emotions, and activities, and offered song selection accordingly. The various expressions of users were recorded by the device to determine their emotions and predict the class of music. Jayshree Jha et al. proposed an emotion-based music player using image processing, showing how algorithms and techniques proposed by various authors could be used to connect a music player with human emotions. This has helped reduce users' effort in creating and managing playlists and gives listeners an excellent experience by presenting the most suitable song according to their current expression. Anukriti et al. devised an algorithm that provides a list of songs from the user's playlist according to the user's mood. The algorithm was designed for low computational time and for reducing the cost involved in using extra hardware. The basic idea was to separate emotions into five categories, i.e., joy, sadness, anger, surprise, and fear; it also provided a highly accurate audio information retrieval approach that extracted relevant information from an audio signal faster than expected. Aditya et al. developed an Android application that acts as a customized music player, using image processing to analyze and present songs according to the user's mood. The application was developed using Eclipse together with OpenCV to implement facial recognition algorithms; the paper also compared different algorithms used in face detection. Images of the user were captured using the phone's front camera, aiming to satisfy music enthusiasts by capturing their emotions. A. Habibzad et al. proposed a new algorithm to recognize facial emotion comprising three stages: pre-processing, feature extraction, and classification. The first part describes the image-processing stages, including preprocessing and filtering, used to extract facial features; the second part refined the eye and lip oval characteristics; and in the third part, the optimal eye and lip parameters were used to classify the emotions. The results showed that the speed of facial recognition was markedly better than conventional approaches. Prof. Nutan Deshmukh et al. focused on building a system that obtains the user's emotion via a camera and then automates the result using an emotion-detection algorithm. The algorithm captures the user's mood after each fixed time interval, since the user's mood may not stay the same over time. An emotion-based music system can be created using the proposed algorithm in an average of 0.95 to 1.05 seconds, which is faster than previous algorithms and lowers design costs. A system that makes use of Brain-Computer Interfaces (BCIs) was described by Chang Liu et al. A BCI uses devices to convey signals to its processing systems, and EEG hardware is used to monitor the person's mental state. The drawback of this design is that it requires continuous input from the user's brain to perform the classification. An algorithm based on MID is used to continuously monitor and process the signals received from the user's brain and use them to infer the emotions the user is currently experiencing.

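The RGB-to-binary conversion step mentioned in the survey can be sketched with plain NumPy as a minimal stand-in for the OpenCV pipeline. The 0.5 threshold and the BT.601 luminance weights are illustrative assumptions, not values from the cited papers:

```python
import numpy as np

def rgb_to_binary(img: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Convert an RGB image (H, W, 3, values in [0, 1]) to a binary mask.

    Grayscale is computed with the common ITU-R BT.601 luminance
    weights; pixels brighter than `threshold` become 1, the rest 0.
    """
    gray = img @ np.array([0.299, 0.587, 0.114])
    return (gray > threshold).astype(np.uint8)

# A 2x2 image: white, black, light-gray, dark-gray pixels.
img = np.array([[[1.0, 1.0, 1.0], [0.0, 0.0, 0.0]],
                [[0.6, 0.6, 0.6], [0.4, 0.4, 0.4]]])
print(rgb_to_binary(img))  # [[1 0]
                           #  [1 0]]
```

In practice OpenCV's `cv2.cvtColor` and `cv2.threshold` perform these two steps before a Haar Cascade scans the binary/grayscale frame.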

Swati Vaid et al. examined EEG. Electroencephalography (EEG) is a branch of clinical science that records the electrical activity of neurons in the brain. The electrical activity of the neurons is recorded from within the brain's cells; based on that recorded activity, an estimate is made and the person's emotion is assessed. Although this method effectively captures neuronal activity, it fails to meet the needs of portability and economy.

3. METHODOLOGY

A) Dataset: The model was built from emotion datasets of 48x48 grayscale face portraits. Each image is assigned one of seven emotions: Anger, Joy, Fear, Happiness, Sadness, Surprise, and Neutral. The public test set consists of 1,568 examples, while the training set consists of 7,321 samples. The music mood collection is a labelled dataset of size 264 with 14 fields, including Name, Album, Artist, User Id, Release Date, Popularity, Danceability, Energy, Liveness, Valence, Tempo, Key, Song Language, and Mood. The dataset is linked directly from music platforms such as Spotify or YouTube through StreamLit RTC libraries, so there is no need to collect and store separate music datasets on local storage for training and application purposes, which saves time, processing, and memory. Data is processed directly on the music platform through the user's facial input and personal parameters, which drives the entire fetching and sorting of the recommendation process through facial and emotional data processing and integration.

B) Emotion Identification: Face detection is one application of computer vision technology. Algorithms are designed and trained to accurately locate faces (or objects, in object-detection systems) in photographs, and detection can run in real time on an image or a video frame. Face detection primarily aims to isolate the face contained within the frame by minimizing external noise and other elements. The approach relies on machine learning, and a collection of data files is used to train the cascade function, extracting training information with a high degree of accuracy. We use the pre-trained network, a sequential model, as an arbitrary feature extractor when performing feature extraction: the input image propagates forward to a chosen layer, stops there, and that layer's outputs are taken as our features. Only a few filters are used at first, because the early convolutional layers recover the most significant features from the captured image. As we add deeper layers, we multiply the number of filters by a factor that depends on the size of the filter in the first layer. The image's class may be binary or multi-class, e.g., to recognize kinds of clothing or to identify digits. The learned properties in a neural network cannot be directly interpreted, since neural networks behave like a "black box"; nonetheless, the CNN model is well suited to image classification and face detection: the CNN simply returns its findings after receiving an input image. The model whose weights were trained with the CNN is loaded to detect emotions. When a user captures a real-time picture, the picture is submitted to the already-trained CNN model, which predicts the emotion and attaches a label to the image. The CNN models are integrated with deep neural networks and TensorFlow through the Python programming language and its libraries for efficient facial recognition and emotion-detection tasks. Additionally, MediaPipe libraries are used for hand-gesture identification and classification, recognizing hand-gesture patterns as emotional body language. MediaPipe is used with its Holistic function, which captures hand gestures and recognizes the patterns and emotions attached to a particular gesture (e.g., a closed fist for anger, an open palm for happiness or joy).

C) Music Recommendation: Every feature in our dataset has a magnitude that indicates its intensity; these features are also regarded as acoustic features of the particular song. Magnitude can be measured on various scales. Among the 10-14 features, 4-5 principal features contribute the most, and songs are accordingly classified into categories such as happy, sad, neutral, and energetic. For this we used the CNN's powerful discriminative features; recognition of a particular emotion is done using 264 neurons. To determine which features were most essential for classifying the image, load the input image whose feature maps you wish to view. Using the ReLU activation function in the CNN architecture, filters (feature detectors) are applied to the input image to produce feature maps (activation maps). Edges, vertical and horizontal lines, curves, and other characteristics already present in the image can be identified by these filters. Min, max, or average pooling can all be used, but max pooling gives better performance than min or average pooling. Categorical cross-entropy is the loss function used to reduce the error rate, and the RMSProp optimizer is used to optimize the working of the model.

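The filter → ReLU → max-pooling chain described in sections B and C can be illustrated from scratch in NumPy. The single hand-written vertical-edge kernel below is an illustrative assumption, not one of the trained weights of the paper's four-layer CNN:

```python
import numpy as np

def conv2d_valid(img, kernel):
    """Naive 'valid' 2-D convolution (cross-correlation, as in CNN
    frameworks) of a single-channel image with one filter."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i+kh, j:j+kw] * kernel)
    return out

def relu(x):
    # ReLU keeps positive activations and zeroes out the rest.
    return np.maximum(x, 0.0)

def max_pool2x2(x):
    # Non-overlapping 2x2 max pooling (trims odd edges).
    h, w = x.shape
    x = x[:h - h % 2, :w - w % 2]
    return x.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

# A tiny image with a vertical edge: dark left half, bright right half.
img = np.array([[0, 0, 1, 1]] * 4, dtype=float)
# A vertical-edge detector kernel.
kernel = np.array([[-1.0, 1.0],
                   [-1.0, 1.0]])
fmap = max_pool2x2(relu(conv2d_valid(img, kernel)))
print(fmap)  # → [[2.]] (the edge response survives pooling)
```

A real model stacks many such filters per layer and learns the kernel values during training rather than hand-writing them.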

D) User Interface: Using deep neural networks, the method learns the best feature abstraction. Deep neural networks are an effective approach for facial emotion recognition, personalized music recommendation, and more, and convolutional neural networks have proven highly effective in areas such as image recognition and classification. The proposed system can identify the user's expressions using a CNN model. In this project, a main web page is designed using the StreamLit framework, where an image of the user is captured. The captured image is then sent to the model to predict the user's emotion. Once the emotion is identified, the Spotify API is called via the Python module Spotipy to request music tracks, which are then shown in the UI. The detect-emotion function handles emotion identification. There are four CNN layers in the model, and it is trained for 50 epochs.

UI implementation: The UI is built with the StreamLit framework. When the page is loaded, the connected music-platform page is opened through the switching interface to capture the user's picture. The Keras backend library with a 264-neuron classifier is used to determine whether a face is present in the captured image. The CV2 module provides an easy-to-use interface for real-time image and video processing. Text-box and checklist selection options allow personalization of the song language and playlist artist, supporting privacy and individual requirements. The Spotipy module uses the detected emotion to display it on screen; it then searches the Spotify library for songs that correspond to the user's mood and displays those songs. The tracks are embedded so that the user can listen to a song within the web application itself, or open the Spotify application by tapping on the specific track.

4. PROPOSED SYSTEM

The proposed system benefits us by introducing interaction between the user and the music player. The purpose of the system is to capture the face properly with the camera. Captured images are fed to the convolutional neural network, which predicts the emotion. The emotion obtained from the captured image is then used to fetch a playlist of songs. The primary aim of our proposed system is to provide a music playlist that adjusts to the user's mood, which can be happy, sad, neutral, or surprised. The proposed system detects the emotion and, if it is a negative one, presents a curated playlist containing the most suitable kinds of music to lift the person's mood positively. There are four modules in music recommendation based on facial emotion recognition:

• Real-Time Capture: The system is responsible for capturing the user's face accurately in real time.

• Face Recognition: The user's face is used as input. The convolutional neural network is adapted to evaluate the features of the user image.

• Emotion Detection: The system extracts features from the user image to determine the user's emotions, and captions are generated based on those emotions.

• Music Recommendation: The recommendation module proposes songs to the user by mapping their emotions to the mood type of each song.

CNN is used for image processing and face detection. TensorFlow is used to simplify complex tasks, and Keras to handle them. The RMSProp optimizer is used to optimize model performance, and categorical cross-entropy is used as the loss function together with the ReLU activation function, since ReLU never lets values become negative during training. MediaPipe and its Holistic solution are used to capture and identify hand gestures. StreamLit RTC is used to deploy the app on various platforms and control the user interface. CV2 provides an easy-to-use interface for image processing. Various functional options for personal requirements are provided through text-box and checklist options.

System requirements:
RAM: 4 GB or higher
Storage (ROM): 100 GB or higher
Operating System: Windows 10 or higher
Processor: Intel i3 or higher
Coding Language: Python 3.5 or higher
Programming Platform: Jupyter Notebook
Core Libraries: TensorFlow, StreamLit

A quality result is one that meets the requirements of the end user and presents the data clearly. In any system, the results of processing are conveyed to the users and to other systems through its outputs.

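The emotion-to-track lookup described above can be sketched as follows. The mood-to-keyword table and query format are illustrative assumptions, not taken from the paper; only the commented-out calls reflect the real Spotipy API (`spotipy.Spotify.search`):

```python
# Illustrative mapping from a detected emotion to search keywords.
MOOD_KEYWORDS = {
    "happy": "upbeat happy",
    "sad": "soothing sad",
    "angry": "calming",
    "neutral": "chill",
    "surprised": "energetic",
}

def build_query(emotion: str, language: str = "", artist: str = "") -> str:
    """Combine the detected emotion with the user's language/artist
    selections (from the UI text box and checklist) into one query."""
    parts = [MOOD_KEYWORDS.get(emotion.lower(), "popular"), language, artist]
    return " ".join(p for p in parts if p).strip()

print(build_query("Sad", language="Hindi"))            # soothing sad Hindi
print(build_query("surprised", artist="Some Artist"))  # energetic Some Artist

# With API credentials, the query would feed Spotipy's search endpoint:
# import spotipy
# from spotipy.oauth2 import SpotifyClientCredentials
# sp = spotipy.Spotify(auth_manager=SpotifyClientCredentials())
# tracks = sp.search(q=build_query("sad", "Hindi"), type="track", limit=10)
```

Keeping the mapping in a plain dictionary makes it easy to swap in mood categories derived from the dataset's Valence/Energy fields later.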

5. RESULT & ANALYSIS

We evaluated several experiments in which convolutional neural networks are used to detect emotions accurately and recommend suitable songs; a comparison of the algorithms and accuracy values is given for each study. The effectiveness of emotion detection is enhanced by using a convolutional neural network, and the hyperparameters of the trained CNN yield high precision. The weight update at the end of each batch is controlled by the learning rate. Several epochs, i.e., full passes over the training dataset, are given to the network during training. Batch size is the number of samples shown to the network before the weights are updated. Activation functions let the model learn nonlinear prediction boundaries. Adam may be used as an alternative to stochastic gradient descent as an optimization method for training deep learning models. The categorical cross-entropy loss function is used to measure deep learning model errors, typically in single-label, multi-class classification problems. The small margin between the training and validation results shows the model is not overfitting, owing to the availability of additional data for training the model. Recognition of five emotions was tested to measure the model's accuracy and performance metrics. The test accuracies for Happy, Sad, Anger, Neutral, and Joy were 94%, 92%, 95%, 90%, and 90% respectively, and the loss computed for each emotion was 1.766, 1.344, 1.456, 1.099, and 1.267. The overall test accuracy was 92.72%. The training process ran for 50 epochs in total, and observations show that accuracy rose sharply with each epoch while the loss steadily decreased.

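The categorical cross-entropy loss and the RMSProp update used during training can be written out in NumPy. This is a from-scratch sketch of the standard formulas; the learning rate and decay constants are common defaults, not the paper's reported hyperparameters:

```python
import numpy as np

def categorical_cross_entropy(y_true, y_pred, eps=1e-12):
    """Mean cross-entropy between one-hot labels and predicted
    probabilities: -mean(sum(y_true * log(y_pred)))."""
    y_pred = np.clip(y_pred, eps, 1.0)
    return float(-np.mean(np.sum(y_true * np.log(y_pred), axis=-1)))

def rmsprop_step(w, grad, cache, lr=0.001, decay=0.9, eps=1e-8):
    """One RMSProp update: keep a running average of squared gradients
    and scale each weight's step by the inverse root of that average."""
    cache = decay * cache + (1 - decay) * grad**2
    w = w - lr * grad / (np.sqrt(cache) + eps)
    return w, cache

# A confident correct prediction yields a smaller loss than a wrong one.
y_true = np.array([[0.0, 1.0, 0.0]])
good = categorical_cross_entropy(y_true, np.array([[0.05, 0.9, 0.05]]))
bad = categorical_cross_entropy(y_true, np.array([[0.7, 0.2, 0.1]]))
print(good < bad)  # True

# One optimizer step moves the weight opposite the gradient's sign.
w, cache = np.array([1.0]), np.zeros(1)
w, cache = rmsprop_step(w, np.array([0.5]), cache)
print(w[0] < 1.0)  # True
```

These are the same quantities Keras computes when a model is compiled with `loss="categorical_crossentropy"` and `optimizer="rmsprop"`.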

Even when users changed characteristics such as gender, or wore accessories like glasses or earrings, the system's performance was not much affected; likewise, facial changes such as beards or different hair styles had little effect on the performance metrics and the loss/error of the model. This demonstrates that the model's capacity to generalize depends on the kind of noise that is added. The model nevertheless returns good results for all of the experiments carried out, with F1 scores greater than 70% for every test and an accuracy of approximately 95% computed during training.

6. CONCLUSION & FUTURE WORK

In conclusion, our proposed emotion-based music recommendation system, using facial images and real-time video capture with cascade algorithms, achieved a precision of around 70%. This shows that facial expressions can serve as a dependable input for predicting a user's emotions and recommending fitting music accordingly. The system gives users a personalized music experience, which matters in today's world where people are continuously looking for customized experiences. The recommendation system suggests songs based on the detected emotions, which lifts the user's mood and provides a better experience. However, there is still room to improve the system's precision. One option is to investigate different machine learning models that might produce better results; additionally, extending the dataset used to train the model may help improve its precision. Overall, our system provides a promising approach to personalized music recommendation and can be extended to other areas where emotion recognition plays a significant part, such as healthcare and customer support. Future work could include exploring and incorporating more advanced facial recognition and emotion-detection algorithms, such as deeper neural architectures, to further improve the accuracy of the emotion-based music recommendation system. The system could also be expanded to cover more music genres and to personalize recommendations based on the user's listening history and preferences. Incorporating user feedback to enhance the recommendation algorithm and the overall user experience is another potential area of future research. Moreover, the framework could be applied to domains other than music, such as movie or television-show recommendations, to give users a more customized and engaging experience. It will improve the user interface of music applications and provide technological advancement not just in computer-vision industries but across many fields of science and technology. It can also help recommend songs for disabled people, who can simply express their emotions to activate a customized playlist of what they want to hear, since emotions are part of every individual even if they are disabled. The system can be released as a standalone application or integrated with well-known music platforms like Spotify.

7. REFERENCES

[1] Londhe RR and Pawar DV 2012 Analysis of facial expression and recognition based on statistical approach International Journal of Soft Computing and Engineering 2
[2] Kabani H, Khan S, Khan O and Tadvi S 2015 Emotion based music player International Journal of Engineering Research and General Science 3 750-6
[3] Gupte A, Naganarayanan A and Krishnan M Emotion Based Music Player-XBeats International Journal of Advanced Engineering Research and Science 3 236854
[4] Hadid A, Pietikäinen M and Li SZ 2007 Learning personal specific facial dynamics for face recognition from videos International Workshop on Analysis and Modeling of Faces and Gestures pp 1-15 Springer Berlin Heidelberg
[5] Zeng Z, Pantic M, Roisman GI and Huang TS 2008 A survey of affect recognition methods: audio, visual, and spontaneous expressions IEEE Transactions on Pattern Analysis and Machine Intelligence 31 39-58
[6] Patel AR, Vollal A, Kadam PB, Yadav S and Samant RM 2016 MoodyPlayer: a mood based music player Int. J. Comput. Appl. 141 0975-8887
[7] Parul Tambe, Yash Bagadia, Taher Khalil and Noor Ul Ain Shaikh 2015 Advanced Music Player 5
[8] Lucey P, Cohn JF, Kanade T, Saragih J, Ambadar Z and Matthews I 2010 The extended Cohn-Kanade dataset (CK+): a complete dataset for action unit and emotion-specified expression In 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops 94-101 IEEE
[9] Kanade T, Cohn JF and Tian Y 2000 Comprehensive database for facial expression analysis In Proceedings Fourth IEEE International Conference on Automatic Face and Gesture Recognition 46-53 IEEE
[10] Luoh L, Huang CC and Liu HY 2010 Image processing based emotion recognition In 2010 International Conference on System Science and Engineering 491-494 IEEE
[11] Vivek JD, Gokilavani A, Kavitha S, Lakshmanan S and Karthik S 2017 A novel emotion recognition based mind and soul-relaxing system In 2017 International Conference on Innovations in Information, Embedded and Communication Systems 1-5 IEEE
[12] Jyoti Rani and Kanwal Garg 2014 Emotion Detection Using Facial Expressions: A Review International Journal of Advanced Research in Computer Science and Software Engineering 4


[13] Joshi A and Kaur R 2013 A study of speech emotion recognition methods Int. J. Comput. Sci. Mob. Comput. 2 28-31
[14] Shoaib M, Hussain I, Mirza HT and Tayyab M 2017 The role of information and innovative technology for rehabilitation of children with autism: a systematic literature review In 2017 17th International Conference on Computational Science and Its Applications 1-10 IEEE
[15] Dubey M and Singh L 2016 Automatic emotion recognition using facial expression: a review International Research Journal of Engineering and Technology (IRJET) 3 488-92
[16] Anwar S, Milanova M, Bigazzi A, Bocchi L and Guazzini A 2016 Real time intention recognition In IECON 2016 42nd Annual Conference of the IEEE Industrial Electronics Society 1021-1024 IEEE
[17] Rázuri JG, Sundgren D, Rahmani R, Moran A, Bonet I and Larsson A 2015 Speech emotion recognition in emotional feedback for human-robot interaction International Journal of Advanced Research in Artificial Intelligence 4 20-7
[18] Dureha A 2014 An accurate algorithm for generating a music playlist based on facial expressions International Journal of Computer Applications 100
