Smart Music Player Based on Expression
Prepared by: Saugat Niroula (073-BCT-570), Saurav Neupane (073-BCT-571)

This document proposes a smart music player that generates playlists based on the user's detected emotional state. It aims to use a convolutional neural network to recognize seven facial expressions (angry, happy, sad, neutral, surprised, disgusted, scared) and play corresponding music. The methodology involves detecting the face with a HAAR cascade, feeding the 255 pixels of input through the neural network with backpropagation, and playing music based on the identified expression. Diagrams illustrate the system's workflow, classes, and user interface. The conclusion discusses efficient expression estimation and future plans to expand functionality.


Smart Music Player Based on Expression
PREPARED BY: SAUGAT NIROULA (073-BCT-570)
             SAURAV NEUPANE (073-BCT-571)
TABLE OF CONTENTS
1. Introduction
2. Objectives
3. Previous Work
4. Research Question and Hypothesis
5. Study Design
6. Feasibility Study
7. Methodology Used

Table of Contents (cont.)
8. Use Case Diagram
9. Class Diagram
10. Work Flow
11. Convolutional Neural Network
12. Back Propagation Algorithm
13. User Interface
14. Conclusion
15. Future Scope

Introduction
Expression detection is an important research area in biomedical engineering.
It focuses on predicting human emotion and on diagnosing psychological disorders.
The system aims to detect facial expressions automatically and identify the user's emotional state with high accuracy.
Music is then used to help improve the user's psychological state.
Seven types of expression will be captured, based on the depth and weight of the image.

Objectives
To construct an efficient and accurate model that generates a playlist based on the current emotional state and behavior of the user.

Previous Work
Sang, Cuong and Ha proposed a discriminative deep feature learning approach with a dense CNN for facial emotion recognition.
Kumar, Kant and Sang proposed an improved approach to predicting human emotion with a deep CNN, showing how the emotion intensity expressed by the face changes from low to high levels.
Wang, Dong and Hu proposed a deep CNN that uses a fast region-based CNN for face detection.

Research question and Hypothesis
Research question:
• How can an expression be detected and music played based on the expression discovered using a CNN?
Null hypothesis:
• The detected expressions are not as expected.
Alternate hypothesis:
• The detected expressions are as expected and music is played accordingly.

STUDY DESIGN
1 participant or user
1 webcam
Detecting 7 expressions:
- Angry
- Happy
- Sad
- Neutral
- Surprised
- Disgusted
- Scared
Play music in the music player for the Happy, Sad and Angry expressions (see the playback sketch below).
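
A minimal playback sketch, assuming pygame for audio output and illustrative playlist paths (the slides do not name a playback library): the detected expression simply indexes a small dictionary of playlists.

# Hedged sketch: pygame and the file paths below are assumptions, not taken from the slides.
import random
import pygame

PLAYLISTS = {
    "happy": ["music/happy_1.mp3", "music/happy_2.mp3"],
    "sad":   ["music/sad_1.mp3", "music/sad_2.mp3"],
    "angry": ["music/angry_1.mp3"],
}

def play_for_expression(expression: str) -> None:
    tracks = PLAYLISTS.get(expression)
    if not tracks:                         # the other four expressions: no playback
        return
    if not pygame.mixer.get_init():        # start the audio mixer once
        pygame.mixer.init()
    pygame.mixer.music.load(random.choice(tracks))
    pygame.mixer.music.play()

play_for_expression("happy")               # e.g. after "happy" is detected

Any other audio backend would work the same way; the design point is only the expression-to-playlist mapping.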

Feasibility Study
Technical feasibility
Legal feasibility
Economic feasibility

Methodology Used

Fig: Waterfall model


Source: 1200px-Waterfall_model.svg.png (1200×900) (wikimedia.org)

Use case Diagram

Fig: Use case


Class Diagram

Fig: Class diagram

WORK-FLOW

Fig: Work flow diagram


Convolutional Neural Network

Fig: Convolutional neural network


Source: threecnnmode.jpg (1310×859) (b-cdn.net)
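
The slides show the network only as a figure, so the sketch below is an assumed Keras architecture for 7-class expression recognition, not the authors' exact model; the 48x48 grayscale input shape and layer sizes are illustrative.

# Hedged sketch: the layer sizes and the 48x48 grayscale input are assumptions.
from tensorflow.keras import layers, models

def build_expression_cnn(input_shape=(48, 48, 1), num_classes=7):
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, (3, 3), activation="relu"),     # low-level edges and texture
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(64, (3, 3), activation="relu"),     # higher-level facial features
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),  # one probability per expression
    ])
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

The softmax output gives one probability per expression, and the class with the highest probability is taken as the detected emotion.
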
Back Propagation Algorithm

Fig: Back Propagation


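As a reminder of what the back propagation figure shows, the sketch below performs one gradient update for a single sigmoid neuron of the form z = x1*w1 + x2*w2 + b used on the next slide; the squared-error loss and the learning rate are assumptions for illustration.

# Hedged sketch of one back propagation step for a single sigmoid neuron.
import numpy as np

x = np.array([0.6, 0.2])            # two input pixel values, scaled to 0-1
w = np.random.rand(2)               # initial weights chosen at random (step d, next slide)
b = 0.0
target = 1.0                        # desired output for this training sample
lr = 0.1                            # assumed learning rate

z = x[0] * w[0] + x[1] * w[1] + b   # weighted sum: z = x1*w1 + x2*w2 + b (step e)
y = 1.0 / (1.0 + np.exp(-z))        # sigmoid activation

# Chain rule for the squared error 0.5 * (y - target)**2.
dL_dy = y - target
dy_dz = y * (1.0 - y)
w -= lr * dL_dy * dy_dz * x         # readjust the weights (step f)
b -= lr * dL_dy * dy_dz             # readjust the bias

In the full network the same chain rule is applied layer by layer, from the output back to the first convolution.
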
Steps
a. The user takes a picture of himself or herself.
b. The face rectangle is computed using a HAAR cascade filter to obtain its internal coordinates.
c. The 255 pixels of input are fed into the neural network (see the pipeline sketch after this list).
d. At the initial stage the weights are taken as random.
e. Z = x1*w1 + x2*w2 + b
f. The weights are readjusted using the back propagation algorithm.
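
A minimal sketch of steps a-c and the final prediction, assuming OpenCV for capture and detection and a saved Keras model; the cascade file name, the model path, the label order and the 48x48 input size are assumptions (the slides only state that a HAAR cascade finds the face and the pixel values are fed to the trained network).

# Hedged sketch: file names, model path, label order and the 48x48 size are assumed.
import cv2
import numpy as np
from tensorflow.keras.models import load_model

EXPRESSIONS = ["angry", "happy", "sad", "neutral",
               "surprised", "disgusted", "scared"]    # order assumed

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
model = load_model("expression_cnn.h5")               # trained CNN, assumed path

ok, frame = cv2.VideoCapture(0).read()                # step a: take a picture
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
faces = cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)  # step b

for (x, y, w, h) in faces:
    roi = cv2.resize(gray[y:y + h, x:x + w], (48, 48))
    roi = roi.astype("float32") / 255.0               # step c: scale 0-255 values to 0-1
    probs = model.predict(roi.reshape(1, 48, 48, 1))
    print("Detected expression:", EXPRESSIONS[int(np.argmax(probs))])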

USER INTERFACE

Fig: User Interface

Expression

Fig: Happy Expression


Fig: Angry Expression

MUSIC PLAYER

Fig: Music Player

CONCLUSION
Offers high-speed expression estimation and feature extraction.
Efficient in operation.
Results are easy to obtain with feature scaling.

FUTURE SCOPE
Facial recognition can be used for authentication purposes.
The application will be developed for the Android platform.
More expressions will be included.
It will be used to determine the mood of physically and mentally challenged people.
Larger datasets of Asian people will be used.

Demo Video

