Face Emotion Detection using OpenCV
By
Deepesh Rajpoot (2100430310023)
Ishant Vaidh (2100430310031)
Satyendra Singh Chauhan (2100430310048)
Under the supervision of
Dr. Atul Kumar Dwivedi
This is to certify that the work contained in the thesis entitled “Face Emotion
Detection using image signal” submitted by Deepesh Rajpoot, Ishant Vaidh,
Satyendra Singh Chauhan for the award of the degree of Bachelor of Technology in
Electronics and Communication Engineering to the Bundelkhand Institute of
Engineering and Technology, Jhansi is a record of bona fide research work carried
out by them under my direct supervision and guidance.
I consider that the thesis has reached the required standard and fulfils the
requirements of the rules and regulations relating to the nature of the degree. The
contents embodied in the thesis have not been submitted for the award of any other
degree or diploma in this or any other university.
We hereby declare that the project work entitled “Face Emotion Analysis using
Image Signal” is an authentic work carried out by us under the guidance of Dr. Atul
Kumar Dwivedi of the Electronics and Communication Engineering Department at
Bundelkhand Institute of Engineering and Technology, Jhansi. Information derived
from other sources has been quoted in the text and a list of references has been given.
We would like to express our gratitude to Dr. Atul Kumar Dwivedi for his
guidance and constant supervision, as well as for providing us with the necessary
information regarding the project and this report. We are also thankful to our
Director, our Head of Department, and all faculty members. Finally, we express our
special gratitude to our parents, whose constant support improved our performance
significantly.
ABSTRACT
The project involves several key stages. First, a comprehensive dataset comprising
diverse facial expressions is collected and annotated. This dataset is then used to train a
deep convolutional neural network (CNN), which learns to recognize and extract
discriminative features from facial images. The CNN is fine-tuned through transfer
learning to enhance its performance on the emotion detection task.
Face Emotion Detection using Image Signal
Introduction:
Emotion plays a fundamental role in human communication and interaction. The ability to accurately
recognize and interpret facial expressions is crucial for understanding emotional states, intentions,
and reactions. Consequently, the field of computer vision and artificial intelligence has witnessed
significant advancements in developing systems that can automatically detect and analyze emotions
from facial expressions. This project aims to contribute to this field by proposing an AI-based
approach for face emotion detection, leveraging deep learning techniques and real-time processing.
Facial emotion detection has diverse applications across multiple domains. In psychology, it can
assist in studying emotional responses, personality traits, and mental health conditions. In marketing,
it can provide valuable insights into consumer preferences and reactions to products and
advertisements. In human-computer interaction, it can enable more natural and intuitive interfaces,
enhancing user experiences. Furthermore, in fields like entertainment and gaming, emotion detection
can create immersive and personalized experiences.
Traditional approaches to facial emotion detection relied on manually designed features and rule-
based algorithms, which often struggled to capture the complexity and variability of facial
expressions. However, recent advancements in deep learning, particularly convolutional neural
networks (CNNs), have revolutionized this field. CNNs excel at automatically learning
discriminative features from raw data, making them well-suited for analyzing facial images and
extracting meaningful information.
The proposed project involves training a CNN model using a comprehensive dataset of annotated
facial expressions.
Real-time processing is another critical aspect of the project. To make the emotion detection system
practical and applicable in real-world scenarios, it needs to operate with minimal delay. The
proposed system will leverage the computational efficiency of deep learning models and optimize
the processing pipeline to achieve near real-time performance, allowing for seamless integration into
various applications and devices.
The evaluation of the proposed system will involve benchmarking it against existing approaches
using standard datasets. Performance metrics such as accuracy, precision, recall, and F1-score will
be utilized to assess the system's effectiveness in detecting and classifying emotions accurately.
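The metrics named above can be computed from a per-class confusion matrix. The following sketch (illustrative only, not part of the app code; the counts in the usage example are hypothetical) shows how accuracy, precision, recall, and F1-score follow from the true/false positive and negative counts for one emotion class:

```java
public class Metrics {
    // Fraction of all predictions that were correct.
    public static double accuracy(int tp, int fp, int fn, int tn) {
        return (double) (tp + tn) / (tp + fp + fn + tn);
    }

    // Of the frames predicted as this emotion, how many truly showed it.
    public static double precision(int tp, int fp) {
        return (double) tp / (tp + fp);
    }

    // Of the frames truly showing this emotion, how many were detected.
    public static double recall(int tp, int fn) {
        return (double) tp / (tp + fn);
    }

    // Harmonic mean of precision and recall.
    public static double f1(int tp, int fp, int fn) {
        double p = precision(tp, fp);
        double r = recall(tp, fn);
        return 2 * p * r / (p + r);
    }
}
```

For example, with 40 true positives, 10 false positives, 10 false negatives, and 40 true negatives, all four metrics evaluate to 0.8.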
Solution:
We developed an Android app that continuously captures camera frames, detects the
human face in each frame, and instantly reports the emotion it expresses.
Code:
The core of the app is a single Java class, reconstructed below from our working fragments. The cascade resource name (R.raw.haarcascade_frontalface_alt), the detector parameters, and the score-to-label thresholds in get_emotion_text are illustrative and must match the trained model and bundled assets.

package com.example.imagepro;

import android.content.Context;
import android.content.res.AssetFileDescriptor;
import android.content.res.AssetManager;
import android.graphics.Bitmap;
import android.util.Log;
import org.opencv.android.Utils;
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.MatOfRect;
import org.opencv.core.Point;
import org.opencv.core.Rect;
import org.opencv.core.Scalar;
import org.opencv.core.Size;
import org.opencv.imgproc.Imgproc;
import org.opencv.objdetect.CascadeClassifier;
import org.tensorflow.lite.Interpreter;
import org.tensorflow.lite.gpu.GpuDelegate;
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

public class FacialExpressionRecognition {

    private final Interpreter interpreter;
    private final int INPUT_SIZE;   // model input resolution (48)
    private int height = 0;
    private int width = 0;
    private CascadeClassifier cascadeClassifier;

    FacialExpressionRecognition(AssetManager assetManager, Context context,
                                String modelPath, int inputSize) throws IOException {
        INPUT_SIZE = inputSize;

        // Load the TFLite model, using the GPU delegate for faster inference.
        Interpreter.Options options = new Interpreter.Options();
        options.addDelegate(new GpuDelegate());
        options.setNumThreads(4);
        interpreter = new Interpreter(loadModelFile(assetManager, modelPath), options);
        Log.d("facial_Expression", "Model is loaded");

        // Copy the Haar cascade from res/raw to app storage so OpenCV can read it.
        try {
            InputStream is = context.getResources()
                    .openRawResource(R.raw.haarcascade_frontalface_alt);
            File cascadeDir = context.getDir("cascade", Context.MODE_PRIVATE);
            File mCascadeFile = new File(cascadeDir, "haarcascade_frontalface_alt.xml");
            FileOutputStream os = new FileOutputStream(mCascadeFile);
            byte[] buffer = new byte[4096];
            int bytesRead;
            while ((bytesRead = is.read(buffer)) != -1) {
                os.write(buffer, 0, bytesRead);
            }
            // close input and output streams
            is.close();
            os.close();
            cascadeClassifier = new CascadeClassifier(mCascadeFile.getAbsolutePath());
            Log.d("facial_Expression", "Classifier is loaded");
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    // Called from onCameraFrame() of CameraActivity; input and output are Mat.
    public Mat recognizeImage(Mat mat_image) {
        // The camera frame arrives sideways; rotate it by 90 degrees so faces are upright.
        Core.flip(mat_image.t(), mat_image, 1);

        // Convert the frame to grayscale for the Haar cascade.
        Mat grayscaleImage = new Mat();
        Imgproc.cvtColor(mat_image, grayscaleImage, Imgproc.COLOR_RGBA2GRAY);
        height = grayscaleImage.height();
        width = grayscaleImage.width();

        // Detect faces; ignore anything smaller than ~10% of the frame height.
        int absoluteFaceSize = (int) (height * 0.1);
        MatOfRect faces = new MatOfRect();
        if (cascadeClassifier != null) {
            cascadeClassifier.detectMultiScale(grayscaleImage, faces, 1.1, 2, 2,
                    new Size(absoluteFaceSize, absoluteFaceSize), new Size());
        }

        Rect[] faceArray = faces.toArray();
        for (int i = 0; i < faceArray.length; i++) {
            // Draw a rectangle around the detected face.
            Imgproc.rectangle(mat_image, faceArray[i].tl(), faceArray[i].br(),
                    new Scalar(0, 255, 0, 255), 2);

            // Crop the face region from the original RGBA frame; the cropped face
            // is then passed through the interpreter, which returns the emotion.
            Mat cropped_rgba = new Mat(mat_image, faceArray[i]);

            // Convert the cropped Mat to a Bitmap and resize it to the model input (48x48).
            Bitmap bitmap = Bitmap.createBitmap(cropped_rgba.cols(), cropped_rgba.rows(),
                    Bitmap.Config.ARGB_8888);
            Utils.matToBitmap(cropped_rgba, bitmap);
            Bitmap scaledBitmap = Bitmap.createScaledBitmap(bitmap, 48, 48, false);

            // Convert the scaled bitmap to a ByteBuffer for the interpreter.
            ByteBuffer byteBuffer = convertBitmapToByteBuffer(scaledBitmap);

            // Run inference; the model outputs a single float score.
            float[][] emotion = new float[1][1];
            interpreter.run(byteBuffer, emotion);
            float emotion_v = emotion[0][0];
            Log.d("facial_expression", "Output: " + emotion_v);

            // Map the score to a label and draw it on the frame, e.g. "Angry (2.93)".
            String emotion_s = get_emotion_text(emotion_v);
            Imgproc.putText(mat_image, emotion_s + " (" + emotion_v + ")",
                    new Point((int) faceArray[i].tl().x + 10, (int) faceArray[i].tl().y + 20),
                    1, 1.5, new Scalar(0, 0, 255, 150), 2);
                    // font scale, color (R, G, B, alpha), thickness
        }

        // Rotate the frame back by -90 degrees before returning it for display.
        Core.flip(mat_image.t(), mat_image, 0);
        return mat_image;
    }

    // Map the model's scalar output to an emotion label by thresholding.
    // The label order here is illustrative; it must match the trained model's classes.
    private String get_emotion_text(float emotion_v) {
        if (emotion_v < 0.5) return "Surprise";
        else if (emotion_v < 1.5) return "Fear";
        else if (emotion_v < 2.5) return "Angry";
        else if (emotion_v < 3.5) return "Neutral";
        else if (emotion_v < 4.5) return "Sad";
        else if (emotion_v < 5.5) return "Disgust";
        else return "Happy";
    }

    private ByteBuffer convertBitmapToByteBuffer(Bitmap scaledBitmap) {
        int size_image = INPUT_SIZE; // 48
        // 4 bytes per float value, 3 channels (RGB).
        ByteBuffer byteBuffer = ByteBuffer.allocateDirect(4 * 1 * size_image * size_image * 3);
        byteBuffer.order(ByteOrder.nativeOrder());
        int[] intValues = new int[size_image * size_image];
        scaledBitmap.getPixels(intValues, 0, scaledBitmap.getWidth(), 0, 0,
                scaledBitmap.getWidth(), scaledBitmap.getHeight());
        int pixel = 0;
        for (int i = 0; i < size_image; ++i) {
            for (int j = 0; j < size_image; ++j) {
                final int val = intValues[pixel++];
                // Unpack R, G, B and scale each channel from 0-255 to 0-1.
                byteBuffer.putFloat(((val >> 16) & 0xFF) / 255.0f);
                byteBuffer.putFloat(((val >> 8) & 0xFF) / 255.0f);
                byteBuffer.putFloat((val & 0xFF) / 255.0f);
            }
        }
        return byteBuffer;
    }

    // Memory-map the model file from the app's assets.
    private MappedByteBuffer loadModelFile(AssetManager assetManager, String modelPath)
            throws IOException {
        AssetFileDescriptor assetFileDescriptor = assetManager.openFd(modelPath);
        FileInputStream inputStream =
                new FileInputStream(assetFileDescriptor.getFileDescriptor());
        FileChannel fileChannel = inputStream.getChannel();
        long startOffset = assetFileDescriptor.getStartOffset();
        long declaredLength = assetFileDescriptor.getDeclaredLength();
        return fileChannel.map(FileChannel.MapMode.READ_ONLY, startOffset, declaredLength);
    }
}
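The channel unpacking in convertBitmapToByteBuffer relies on Android's ARGB_8888 layout, where one int packs alpha, red, green, and blue as four bytes. A standalone sketch of that step (the class name and method are ours, for illustration only):

```java
public class PixelUnpack {
    // Shift each channel into the low byte, mask it, and scale from 0-255 to 0-1.
    public static float[] toNormalizedRgb(int argb) {
        float r = ((argb >> 16) & 0xFF) / 255.0f;
        float g = ((argb >> 8) & 0xFF) / 255.0f;
        float b = (argb & 0xFF) / 255.0f;
        return new float[]{r, g, b};
    }
}
```

For the pixel 0xFF804020 (fully opaque, R=128, G=64, B=32), the method returns approximately {0.502, 0.251, 0.125}, which is exactly what the app writes into the interpreter's input ByteBuffer for each pixel.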
Advantages:
1. Improved User Experience: Face detection apps can enhance user experience by offering intuitive
and personalized interactions. For example, apps can use face detection to automatically adjust
settings, customize content, or provide tailored recommendations based on the user's facial features.
2. Enhanced Security: Face detection can be used as a biometric authentication method, providing a
more secure and convenient way to unlock devices, access sensitive information, or perform secure
transactions. It offers an additional layer of security compared to traditional password-based
authentication.
3. Photography and Image Editing: Face detection can be utilized in photography and image editing
apps to identify and track faces in photos, enabling features such as automatic focus and exposure
adjustment, facial recognition, and applying filters or effects specifically on detected faces.
4. Social Media and Entertainment: Face detection is widely used in social media platforms for
various purposes, including automatic tagging of people in photos, applying filters or augmented
reality effects to faces in real-time, and creating personalized video or image content based on facial
expressions.
5. Marketing and Advertising: Face detection apps can be utilized in marketing and advertising
campaigns to gather demographic data and analyze customer reactions. This information can help
businesses tailor their products, advertisements, and user experiences to specific target audiences.
Disadvantages:
1. Privacy Concerns: Similar to face detection apps, face emotion detection apps raise privacy
concerns. The analysis of facial expressions and emotions involves capturing sensitive data, requiring
careful consideration of privacy regulations and ensuring proper consent from users.
2. Accuracy and Bias: Face emotion detection algorithms may not always accurately interpret facial
expressions, leading to misinterpretation of emotions. Moreover, these algorithms can exhibit biases,
especially when dealing with diverse demographics, cultural differences, or individuals with atypical
expressions.
3. Emotional Complexity: Facial expressions and emotions are complex and context-dependent. Face
emotion detection apps may not fully capture the subtleties or nuances of emotions, potentially
leading to oversimplification or misinterpretation of the emotional state.
4. Ethical Considerations: Face emotion detection apps need to address ethical considerations,
including responsible use of emotional data, ensuring transparency in data handling, and guarding
against potential misuse or unauthorized access.
Applications:
1. Psychology and Mental Health: studying emotional responses, personality traits, and
mental health conditions through automated analysis of facial expressions.
2. Marketing and Advertising: gauging consumer reactions to products and
advertisements, providing insight into customer preferences.
3. Human-Computer Interaction: enabling more natural and intuitive interfaces that
adapt to the user's emotional state, enhancing user experience.
4. Entertainment and Gaming: creating immersive and personalized experiences that
respond to the player's facial expressions in real time.
References:
1. Ekman, P., & Friesen, W. V. (1971). Constants across cultures in the face and emotion. Journal of
Personality and Social Psychology, 17(2), 124-129.
2. Bartlett, M. S., Littlewort, G. C., Frank, M. G., Lainscsek, C., Fasel, I., & Movellan, J. R. (2005).
Automatic recognition of facial actions in spontaneous expressions. Journal of Multimedia, 1(6), 22-
35.
3. Goodfellow, I., Bengio, Y., & Courville, A. (2016). Deep learning. MIT Press.
4. Liu, J., Luo, S., Yu, Z., Chen, M., Zhou, M., & Cui, X. (2020). A deep learning model for facial
emotion recognition based on extended data samples. Sensors, 20(2), 483.
5. Mollahosseini, A., Hasani, B., & Mahoor, M. H. (2017). AffectNet: A database for facial expression,
valence, and arousal computing in the wild. IEEE Transactions on Affective Computing, 10(1), 18-
31.