Minor Project

The Human Emotion Detection System employs a Convolutional Neural Network (CNN) to classify emotions from facial expressions using the FER2013 dataset. It offers real-time processing, high accuracy, and easy integration into various applications while addressing challenges like data limitations and single-face detection. Future improvements aim to enhance functionality by supporting multiple faces and dynamic environments.

Human Emotion Detection

Nishant Yadav
Shubham Chakraborty
Ansh Pratap Singh
Satish Mourya

Project Overview

The Human Emotion Detection System uses a Convolutional Neural Network (CNN) to classify emotions from facial expressions in images. It is trained on the FER2013 dataset of 48x48 grayscale facial images and can be integrated into applications such as virtual assistants, chatbots, and human-computer interaction systems.

- Uses a CNN for deep-learning-based emotion recognition.
- Trained on the FER2013 dataset (48x48 grayscale facial images).
- Predicts one of seven emotion labels: Happy, Sad, Angry, Fear, Surprise, Disgust, or Neutral.
- Can be integrated into multiple platforms (e.g., web and desktop apps).

How the Project Was Created

- Dataset: The FER2013 dataset, containing 48x48 grayscale images of human faces with labeled emotions, was used for training.
- Preprocessing: Images were normalized, and the emotion labels were one-hot encoded to ensure consistent model input.
- Model Architecture: A CNN was built with Keras, using convolutional, pooling, and dense layers for feature extraction and classification.
- Training: The model was trained on a GPU machine to speed up the process, using categorical cross-entropy loss and accuracy as the evaluation metric.
- Deployment: The Flask framework was used for deployment and integration into external applications.
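The preprocessing step above (normalization and one-hot encoding) can be sketched in NumPy. This is a minimal illustration, not the project's actual code; the function name and the 0..255 pixel range are assumptions based on FER2013's 8-bit grayscale images and seven labels:

```python
import numpy as np

def preprocess(images, labels, num_classes=7):
    """Normalize 8-bit grayscale images to [0, 1] and one-hot encode labels.

    images: array of shape (n, 48, 48) with values in 0..255
    labels: array of shape (n,) with integer class ids in 0..num_classes-1
    """
    x = images.astype("float32") / 255.0              # scale pixels to [0, 1]
    x = x.reshape(-1, 48, 48, 1)                      # add channel axis for the CNN
    y = np.eye(num_classes, dtype="float32")[labels]  # one-hot encode labels
    return x, y

# Example: two fake images with integer labels 0 and 3
imgs = np.random.randint(0, 256, size=(2, 48, 48))
x, y = preprocess(imgs, np.array([0, 3]))
print(x.shape, y.shape)  # (2, 48, 48, 1) (2, 7)
```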

Key Features and Benefits

- High Accuracy: Achieves high accuracy in classifying emotions from facial expressions.
- Real-Time Processing: Inference takes under 1 second per emotion prediction.
- Easy Integration: Provides a well-documented API for seamless integration with external applications.
- Cross-Platform Support: Deployable on Linux, macOS, and Windows with all necessary dependencies.
- Scalability: Easily scaled to handle more users or larger datasets via cloud platforms.
- Security: Encryption and data-privacy mechanisms protect sensitive information.
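The integration API above, served through the Flask deployment described earlier, might look like the following minimal sketch. The route name, request format, and `predict_emotion` helper are hypothetical stand-ins; a real deployment would load the trained Keras model and run inference on the uploaded image:

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

def predict_emotion(image_bytes):
    # Hypothetical stand-in for CNN inference on the uploaded image bytes.
    return {"emotion": "Happy", "confidence": 0.92}

@app.route("/predict", methods=["POST"])
def predict():
    # Reject requests that do not include an image file.
    if "image" not in request.files:
        return jsonify({"error": "missing 'image' file"}), 400
    result = predict_emotion(request.files["image"].read())
    return jsonify(result)
```

An external application would then POST an image to `/predict` and receive the label and confidence as JSON.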

Functional Requirements

- Emotion Classification: Classify emotions from 48x48 grayscale images using a Convolutional Neural Network (CNN).
- Input: A 48x48 grayscale image (preprocessed and normalized).
- Processing: Convolutional layers extract relevant features from the image; a fully connected dense layer outputs the predicted emotion label.
- Output: The predicted emotion label (Happy, Sad, Angry, etc.) along with a confidence score (the prediction probability).
- Error Handling: Returns an error message for invalid input images, such as non-grayscale images or incorrect dimensions.
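The input validation and confidence output described above can be sketched as follows. The logits and the FER2013 label order are illustrative stand-ins for the real model's output:

```python
import numpy as np

LABELS = ["Angry", "Disgust", "Fear", "Happy", "Sad", "Surprise", "Neutral"]

def classify(image, logits):
    """Validate a 48x48 grayscale image and turn model logits into a
    (label, confidence) pair, mirroring the functional requirements."""
    if image.ndim != 2 or image.shape != (48, 48):
        raise ValueError("expected a 48x48 grayscale image")
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()            # softmax -> class probabilities
    idx = int(probs.argmax())
    return LABELS[idx], float(probs[idx])

face = np.zeros((48, 48))           # placeholder preprocessed input
label, conf = classify(face, np.array([0.1, 0.0, 0.2, 2.5, 0.3, 0.1, 0.4]))
print(label, round(conf, 2))        # "Happy" has the highest probability
```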

Non-Functional Requirements

- Performance: After training, the system should classify an image in under 1 second for real-time applications.
- Reliability: The system should operate reliably with minimal downtime.
- Availability: Should be available 24/7 when deployed on cloud services, ensuring consistent performance.
- Security: Data privacy is ensured: images and prediction results are never stored permanently, and all communication is encrypted.
- Maintainability: The system is modular, allowing easy updates and fine-tuning; software logs provide visibility into potential issues.
- Portability: The system can be deployed across platforms, including web applications, desktop applications, and cloud-based environments.

Challenges Encountered

- Data Limitations: The FER2013 dataset contains only seven emotions, limiting the model's ability to generalize to new or unseen emotions.
- Single-Face Limitation: The system classifies emotions only in single-face images; it does not support multi-face detection or video input.
- Training Time: Training the model on machines without a GPU is time-consuming and can cause delays.
- Overfitting: Keeping the model from overfitting the training data required techniques such as dropout and early stopping.
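The early-stopping idea mentioned above can be sketched independently of any framework: stop once validation loss has not improved for `patience` consecutive epochs. The loss values below are illustrative, not measured results:

```python
def early_stop_index(val_losses, patience=3):
    """Return the epoch index at which training would stop: the point
    where validation loss has failed to improve for `patience` epochs.
    Returns the last index if training runs to completion."""
    best, best_epoch = float("inf"), 0
    for epoch, loss in enumerate(val_losses):
        if loss < best:
            best, best_epoch = loss, epoch   # new best: reset the counter
        elif epoch - best_epoch >= patience:
            return epoch                     # patience exhausted: stop here
    return len(val_losses) - 1

# Validation loss improves until epoch 3, then rises as overfitting begins
losses = [0.90, 0.70, 0.60, 0.55, 0.58, 0.61, 0.65, 0.70]
print(early_stop_index(losses, patience=3))  # stops at epoch 6
```

Keras provides the same behavior out of the box via its `EarlyStopping` callback, which is the usual choice when training with `model.fit`.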

Design Constraints

- Dataset Dependency: The model relies on the FER2013 dataset, which may not cover a wide variety of emotions or diverse facial expressions.
- Performance Dependency: Training time depends on the available hardware; machines without a GPU can take much longer to train the model.
- Single-Face Limitation: The system is designed for single-face emotion classification only.
- Open-Source Libraries: The system uses TensorFlow, Keras, and NumPy, so updates to these libraries can affect system behavior.

Conclusion

The Human Emotion Detection System successfully applies deep learning (a CNN) to classify human emotions from facial expressions. The model, trained on the FER2013 dataset, classifies emotions in real time and is ready for deployment in applications such as virtual assistants and human-computer interaction systems. The system is designed for easy integration, scalability, and secure communication.

Future Improvements:
- Expand the model to support multiple faces and video input.
- Improve accuracy with additional datasets or by fine-tuning the model.
- Explore real-time emotion classification in dynamic environments (e.g., live video).

